The benchmark prototypes are available at the following website: http://web.student.chalmers.se/~seyedma/indexeddbspeedtest.
Our benchmarks were based on the CRUD (create, retrieve, update, delete) operations of databases. In addition, we considered indexing, which can affect benchmark results, in order to obtain more reliable measurements. All test queries were applied to simple tables and objectstores of both databases; no joins or sub-queries were involved. The data used in the benchmarks consisted of JSON and XML text files whose size did not exceed 1 megabyte. Although all benchmarks were performed on a small amount of data and cannot measure how well these databases handle large-scale data, they can provide guidelines and set basic operational expectations (Oracle, 2006). To generate data for our benchmarks, we used the Databasetestdata.com website.
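To make the shape of these queries concrete, the following is a minimal TypeScript sketch of the four CRUD operations against an IndexedDB objectstore, with the equivalent SQLite statements shown as comments. The database name, store name, and record fields here are illustrative assumptions, not the actual prototype code.

// Minimal CRUD sketch against an IndexedDB objectstore. The names
// ("benchdb", "records") and the record shape are assumptions.
function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("benchdb", 1);
    req.onupgradeneeded = () =>
      req.result.createObjectStore("records", { keyPath: "id" });
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Wrap a single IDBRequest in a promise.
function done<T>(req: IDBRequest<T>): Promise<T> {
  return new Promise((resolve, reject) => {
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function crud(db: IDBDatabase): Promise<void> {
  const store = () =>
    db.transaction("records", "readwrite").objectStore("records");
  await done(store().add({ id: 1, a: "x", b: "y", c: "z" })); // create
  const row = await done(store().get(1));                     // retrieve
  await done(store().put({ ...row, b: "y2" }));               // update
  await done(store().delete(1));                              // delete
}

// Equivalent SQLite statements (no joins or sub-queries):
//   INSERT INTO records (id, a, b, c) VALUES (?, ?, ?, ?);
//   SELECT * FROM records WHERE id = ?;
//   UPDATE records SET b = ? WHERE id = ?;
//   DELETE FROM records WHERE id = ?;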
The two host platforms for our benchmarks were:
1. LG Nexus 5 phone with a quad-core 2.3 GHz Krait 400 CPU, 2 GB RAM, and 16 GB internal memory, running Android 4.4.2 (KitKat).
2. ASUS Nexus 7 (first generation) tablet with a quad-core 1.2 GHz Cortex-A9 CPU, 1 GB RAM, and 32 GB internal memory, running Android 4.4.2 (KitKat).
Both devices used the default configuration without any optimizations. All background processes and applications were terminated to prevent I/O speed interference and to obtain more precise results during all tests. The IndexedDB benchmarks were executed on Google Chrome (version 31.0.1650.59), the latest version at the time of benchmarking. SQLite version 3.4.0 was used for the SQLite benchmarks.
3) Literature Review on Security
Data collection for security was conducted using the literature review technique to identify previous work done in this field. According to Creswell (2009), a literature review becomes a basis for comparing and contrasting the findings of a qualitative study. To collect sources for the literature review, we searched digital libraries for relevant articles, books, and journals. Since IndexedDB was introduced only recently and adequate sources in this area are scarce, we also used blogs and bug reports. Table I presents the searched libraries and the number of articles found in each.
TABLE I. QUANTITY OF PAPERS AND BOOKS FOUND AND SELECTED.

Sources       Related   Selected
IEEEXplore    16        9
ACM           11        5
Other         18        12
The following are the search terms that we used for
collecting related resources:
“HTML5”, “Android”, “IndexedDB”, “SQLite”, “Security”,
“Relational database”, “Object-oriented database”,
“IndexedDB” AND “Security”, “SQLite” AND “Security”,
“IndexedDB” AND “Vulnerabilities”, “SQLite” AND
“Vulnerabilities”.
B. Data Analysis
1) Quantitative Data Analysis
We used descriptive statistics to analyse the data captured through benchmarking. Babbie (2009) states, "bivariate analysis is not only simple descriptive statistical analysis, but also it describes the relationship between two different variables". Since our tests involved two variables, "Time" and "Number of queries", bivariate statistical analysis was well suited to our purpose of summarizing and representing our collected data on graphs.
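As an illustration only, the following sketch shows how such bivariate data points (number of queries versus mean execution time in milliseconds) could be collected; the runBatch parameter stands in for the actual benchmark body and is an assumption, not the prototype code.

// Collect one (queries, time) pair per workload size, averaging
// ten timed runs; runBatch is a placeholder for the benchmark body.
async function measure(
  runBatch: (n: number) => Promise<void>,
  sizes: number[] = [1000, 3000, 10000],
  repeats: number = 10
): Promise<Array<{ queries: number; meanMs: number }>> {
  const points: Array<{ queries: number; meanMs: number }> = [];
  for (const n of sizes) {
    let total = 0;
    for (let i = 0; i < repeats; i++) {
      const start = performance.now();
      await runBatch(n); // execute n queries against the database
      total += performance.now() - start;
    }
    points.push({ queries: n, meanMs: total / repeats });
  }
  return points;
}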
2) Qualitative Data Analysis
The data collected in the qualitative part was analyzed using thematic analysis. Braun and Clarke (2006) describe thematic analysis as "a method for identifying, analyzing and reporting patterns (themes) within data". After collecting raw data through the literature review, all data was reviewed several times in order to extract patterns, which are known as codes. Following that, common concepts among the extracted codes and their linked data were identified and reviewed to determine suitable themes for them. If at any time during the analysis a new code emerged, we did not start the analysis over; instead, we treated the new code separately or, where possible, included it in one of the existing themes.
IV. RESULTS
A. Performance Measurement
In this subsection, we present result of benchmarks on
both SQLite and IndexedDB databases. For this mean, we
divided our results according to each basic operation of a
database, which are “insert”, “select”, “update”, and
“delete”. Benchmark of each operation is represented on two
distinct graphs, where one is dedicated to an unindexed
table/objectstore and the other one is dedicated to an indexed
table/objectstore. In fact, both prototype used two
tables/objectstores, which have the same structure and
consist of three string attributes, but in one of them two
attributes were used as “index” in the table/objectstore.
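As a rough sketch of the two layouts just described, the IndexedDB calls below would run inside an onupgradeneeded handler; the store, attribute, and index names are illustrative assumptions, not the names used in the prototypes.

// Two stores with the same three string attributes (a, b, c);
// in the second, two of the attributes also carry an index.
function createStores(db: IDBDatabase): void {
  // Unindexed variant: plain objectstore with an auto-increment key.
  db.createObjectStore("plain", { autoIncrement: true });

  // Indexed variant: same structure, two attributes indexed.
  const store = db.createObjectStore("indexed", { autoIncrement: true });
  store.createIndex("byA", "a");
  store.createIndex("byB", "b");
}

// A rough SQLite counterpart of the indexed variant:
//   CREATE TABLE indexed_t (a TEXT, b TEXT, c TEXT);
//   CREATE INDEX idx_a ON indexed_t (a);
//   CREATE INDEX idx_b ON indexed_t (b);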
Each graph relates the number of queries to the amount of time needed to execute them, where time is measured in milliseconds. The execution time in all graphs is the average of performing each benchmark ten times. We performed each benchmark with three different numbers of queries: 1,000, 3,000, and 10,000. All benchmarks were performed on two hosts for both databases; therefore, every graph consists