Since you're reading this article, I assume that:
1. You're doing front-end work and need something similar to localStorage, but more powerful.
2. You found IndexedDB may be the answer (which is correct), and did some research. Now you know what it is, what it does, etc. (If not, I recommend this video from Microsoft)
3. Unfortunately, you also found that IndexedDB has a bad reputation for being hard to use - the native APIs are not friendly at all.
4. Then you found idb, the most popular IndexedDB package on npm.
(For a comparison of their download numbers over the past year, see the live chart at npmcharts.com.)
If you read this article slowly and carefully, line by line, I promise that:
1. You won't need other tutorials; this one is all you need.
2. You can learn IndexedDB by using idb, no need to touch the native APIs during the process.
3. You'll understand all the important concepts of IndexedDB, and become mentally comfortable using it. The concepts are a bigger barrier than the syntax.
First, open the CodeSandbox for this article (everything we'll do today is included there), then click demo1. Below is the equivalent of demo1. If you prefer playing locally, just copy this code and find a way to run the function (I recommend attaching it to a button, because more demos are coming).
import { openDB } from 'idb';

// demo1: Getting started
export function demo1() {
  openDB('db1', 1, {
    upgrade(db) {
      db.createObjectStore('store1');
      db.createObjectStore('store2');
    },
  });
  openDB('db2', 1, {
    upgrade(db) {
      db.createObjectStore('store3', { keyPath: 'id' });
      db.createObjectStore('store4', { autoIncrement: true });
    },
  });
}
Don't worry about reading the code now, just run it.
Then, open Chrome DevTools, go to the Application tab, and find Local Storage. Right below it you'll see IndexedDB, with the 2 DBs and 4 stores we've just created:
(on my CodeSandbox, you'll see a few other DBs already created by the website. Ignore those, we only care about db1 and db2)
This is a typical structure you may have in production: the project is big and complicated, so you organize things into different stores under different DBs, like in this screenshot.
Each store is like a localStorage on steroids, where you store key-value pairs. If all you need is one localStorage on steroids inside one db, check out the package idb-keyval by the creator of idb - you may not need to continue reading this article.
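For reference, here's roughly what idb-keyval usage looks like (a minimal sketch based on its documented get/set/del API; the key name and function name are made up):

// idb-keyval: roughly an async localStorage backed by IndexedDB
import { get, set, del } from 'idb-keyval';

export async function keyvalDemo() {
  await set('greeting', 'hello world');
  console.log(await get('greeting')); // 'hello world'
  await del('greeting');
}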
Now that we have some stores, let's put in some data. (Creating DBs and stores is actually more complicated than demo1 makes it look, so we'll come back to that later.)
Let's run demo2 (again, either click demo2 on the CodeSandbox, or copy the code below and run it locally):

import { openDB } from 'idb';
// demo2: add some data into db1/store1/
export async function demo2() {
  const db1 = await openDB('db1', 1);
  db1.add('store1', 'hello world', 'message');
  db1.add('store1', true, 'delivered');
  db1.close();
}
Then in DevTools, hit the refresh button and see what changed:
Sure enough, our data has been put in! (Note that delivered: true appears before message: 'hello world' - that's because a store is always automatically sorted by key, no matter what order you insert entries in.)

Let's look at the syntax of demo2: when you need to do something to a store, you first "connect to the db" by calling openDB(), which returns the db object, and then you call its methods. In VSCode, intellisense will help you with the methods: after you type "db1.", "add" will pop up; after you type "db1.add(", the parameters will pop up.

There are two questions you must be asking right now: what the heck is the argument 1? And why is the key the last argument? The answers will show up in later sections; for now, let's continue our demos:
demo3: error handling
// demo3: error handling
export async function demo3() {
  const db1 = await openDB('db1', 1);
  db1
    .add('store1', 'hello again!!', 'new message')
    .then(result => {
      console.log('success!', result);
    })
    .catch(err => {
      console.error('error: ', err);
    });
  db1.close();
}
db1.add() returns a promise, so you can implement your own error handling. When you run demo3, "success!" will show in the console, but if you run it again, an error will show, because keys must be unique within a store. In DevTools, there are 2 buttons to delete one entry or clear all entries in a store; use these 2 buttons while repeating demo3 to test your error handling:
In code, the equivalents of these two buttons are db.clear(storeName) and db.delete(storeName, key). Again, intellisense will help you.
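For example, a quick sketch (the function name is made up; it reuses db1/store1 from the demos):

// programmatic equivalents of the two DevTools buttons:
export async function cleanupDemo() {
  const db1 = await openDB('db1', 1);
  // delete one entry by its key:
  await db1.delete('store1', 'new message');
  // wipe every entry in the store:
  await db1.clear('store1');
  db1.close();
}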
A few things regarding db.close():
Q: Do I have to open and close the db every time I do something?
A: For demo purposes, the snippets in this article all start with openDB() to establish a connection and end with db.close(). In reality, though, the typical pattern is to establish a single connection and reuse it over and over without ever closing it, for example:
import { openDB } from "idb";
export const idb = {
db1: openDB("db1", 1),
db2: openDB("db2", 1)
};
Then, to use it:
import { idb } from "../idb";

export async function addToStore1(key, value) {
  (await idb.db1).add("store1", value, key);
}
This way you don't need to open and close the db every time.
Q: Can I open multiple connections to the same db?
A: Yes. If you call openDB() in multiple places in your codebase, you'll have multiple open connections at the same time, and that's fine. You don't even have to remember to close them, although leaving them open may not feel tidy.
Q: In demo3, db.add() is asynchronous. Why did you call db.close() before things are finished?
A: Calling db.close() won't close the db immediately. It'll wait until all queued operations are completed before closing.
demo4: auto generate keys:
Now let's answer a question we asked previously: why is key the last argument? The answer is "because it can be omitted".
If you go back to demo1, you'll see that when we created store3 and store4, we gave the option { keyPath: 'id' } to store3 and { autoIncrement: true } to store4. Now let's try adding some cats into store3 and store4:

// demo4: auto generate keys:
export async function demo4() {
  const db2 = await openDB('db2', 1);
  db2.add('store3', { id: 'cat001', strength: 10, speed: 10 });
  db2.add('store3', { id: 'cat002', strength: 11, speed: 9 });
  db2.add('store4', { id: 'cat003', strength: 8, speed: 12 });
  db2.add('store4', { id: 'cat004', strength: 12, speed: 13 });
  db2.close();
}
We omitted the last argument (the key) in this demo. Run it and you'll see that the ids become the keys in store3, and auto-incremented integers become the keys in store4.

As you just found, numbers can be keys. Actually, in IndexedDB, dates, binary data, and arrays can also be keys.
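For illustration, a small sketch (not one of the numbered demos; it reuses db1/store1 and the keys are made up - run it only once, since keys must be unique):

// keys can be strings, numbers, Dates, binary data, or arrays of those:
export async function keyTypesDemo() {
  const db1 = await openDB('db1', 1);
  db1.add('store1', 'logged in at this time', new Date()); // a Date as key
  db1.add('store1', 'page 2 cache', 2);                    // a number as key
  db1.add('store1', 'composite entry', ['user1', 42]);     // an array as key
  db1.close();
}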
With auto-generated keys, store3 and store4 look less like localStorage on steroids now, and more like traditional databases.
demo5: retrieve values:
The syntax for retrieving values is self-explanatory; run demo5 and watch the results in the console:
// demo5: retrieve values:
export async function demo5() {
  const db2 = await openDB('db2', 1);
  // retrieve by key:
  db2.get('store3', 'cat001').then(console.log);
  // retrieve all:
  db2.getAll('store3').then(console.log);
  // count the total number of items in a store:
  db2.count('store3').then(console.log);
  // get all keys:
  db2.getAllKeys('store3').then(console.log);
  db2.close();
}
demo6: set a value:
Use db.put() instead of db.add() if you want to update / overwrite an existing value. If the value didn't exist before, put() behaves the same as add().
// demo6: overwrite values with the same key
export async function demo6() {
  // set db1/store1/delivered to be false:
  const db1 = await openDB('db1', 1);
  db1.put('store1', false, 'delivered');
  db1.close();
  // replace cat001 with a supercat:
  const db2 = await openDB('db2', 1);
  db2.put('store3', { id: 'cat001', strength: 99, speed: 99 });
  db2.close();
}
In RESTful APIs, PUT is "idempotent" (POST is not), meaning you can PUT something multiple times and it'll always replace itself, whereas POST creates a new item every time.

put() has the same meaning in IndexedDB, so you can run demo6 as many times as you want. If you used add() instead of put(), an error would occur, because you'd be trying to add a new item with an existing key, and keys must be unique.
In database terms, a "transaction" means several operations are executed as a group, changes to the database only get committed if all steps are successful. If one fails, the whole group is aborted. The classic example is a transaction of 1000 dollars between two bank accounts, where A+=1000 and B-=1000 must both succeed or both fail.
Every operation in IndexedDB must belong to a transaction.
In all the demos above, we have been making transactions all along, but all of them were single-action transactions. For instance, when we added 4 cats in demo4, we actually created 4 transactions.
To create a transaction containing multiple steps that either all succeed or all fail, we need to write it out manually:
demo7: multiple operations within one transaction:
Now let's move our super cat from store3 to store4 by adding it to store4 and deleting it in store3. These two steps must either both succeed or both fail:
// demo7: move supercat: 2 operations in 1 transaction:
export async function demo7() {
  const db2 = await openDB('db2', 1);
  // open a new transaction, declare which stores are involved:
  let transaction = db2.transaction(['store3', 'store4'], 'readwrite');
  // do multiple things inside the transaction, if one fails all fail:
  let superCat = await transaction.objectStore('store3').get('cat001');
  transaction.objectStore('store3').delete('cat001');
  transaction.objectStore('store4').add(superCat);
  db2.close();
}
A few things about the syntax:
You first open a transaction with db.transaction(), declaring which stores are involved in it. Notice the second argument 'readwrite', which means this transaction has permission to both read and write. If all you need is to read, use 'readonly' instead (it's also the default).

After the transaction is opened, you can't use any of the methods we showed before, because those are shortcuts that each wrap a single-action transaction. Instead, you perform your actions with transaction.objectStore(storeName).methodName(..). The arguments are the same, except that the first argument (the storeName) moves forward into .objectStore(storeName). ("objectStore" is the official term for a "store".)

Readonly is faster than readwrite, because each store only performs one readwrite transaction at a time, during which the store is locked, whereas multiple readonly transactions can execute at the same time.
demo8: transaction on a single store, and error handling:
If your transaction only involves a single store, it can be less verbose:
// demo8: transaction on a single store, and error handling:
export async function demo8() {
  // we'll only operate on one store this time:
  const db1 = await openDB('db1', 1);
  // ↓ this is equal to db1.transaction(['store2'], 'readwrite'):
  let transaction = db1.transaction('store2', 'readwrite');
  // ↓ this is equal to transaction.objectStore('store2').add(..)
  transaction.store.add('foo', 'foo');
  transaction.store.add('bar', 'bar');
  // monitor if the transaction was successful:
  transaction.done
    .then(() => {
      console.log('All steps succeeded, changes committed!');
    })
    .catch(() => {
      console.error('Something went wrong, transaction aborted');
    });
  db1.close();
}
Notice that at the end we monitor the promise transaction.done, which tells us whether the transaction succeeded or failed. Demo8 adds some data into store2; you can run it twice to see one success and one failure in the console (the failure happens because keys must be unique).

A transaction auto-commits itself when it runs out of things to do; transaction.done is a nice thing to monitor, but it's not required.
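If you ever want to cancel a transaction yourself, the idb wrapper also passes through the native IDBTransaction.abort(); here's a minimal sketch (the function name and key are made up):

// manually aborting a transaction so none of its changes are kept:
export async function abortDemo() {
  const db1 = await openDB('db1', 1);
  const tx = db1.transaction('store2', 'readwrite');
  tx.store.add('this will never be saved', 'temp').catch(() => {}); // its promise rejects once we abort
  tx.abort();
  await tx.done.catch(() => console.log('transaction aborted, nothing was written'));
  db1.close();
}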
It's finally time to answer the burning question: what the heck is 1?
Imagine this scenario: you launched a web app, a user visited it, and DBs and stores were created in their browser. Later, you deploy a new version of the app and change the structure of the DBs and stores. Now you have a problem: when an old user with the old db schema visits your app, and their db already contains data, you want to convert their db to the new schema while preserving the data.
To solve this problem, IndexedDB enforces a version system: every db exists as a db name paired with a version number - in DevTools you can see db1 and db2 are both at version 1. Whenever you call openDB(), you must supply a positive integer as the version number. If this integer is greater than the version that currently exists in the browser, the upgrade callback you provide will fire. If the db doesn't exist in the browser at all, the existing version counts as 0, so the callback will also fire.
Let's run demo9:
// demo9: very explicitly create a new db and new store
export async function demo9() {
  const db3 = await openDB('db3', 1, {
    upgrade: (db, oldVersion, newVersion, transaction) => {
      if (oldVersion === 0) upgradeDB3fromV0toV1();

      function upgradeDB3fromV0toV1() {
        db.createObjectStore('moreCats', { keyPath: 'id' });
        generate100cats().forEach(cat => {
          transaction.objectStore('moreCats').add(cat);
        });
      }
    },
  });
  db3.close();
}

function generate100cats() {
  return new Array(100).fill().map((item, index) => {
    let id = 'cat' + index.toString().padStart(3, '0');
    let strength = Math.round(Math.random() * 100);
    let speed = Math.round(Math.random() * 100);
    return { id, strength, speed };
  });
}
Demo9 creates a new db3, then creates a store moreCats containing 100 cats. Check the results in DevTools, then come back to look at the syntax.
The upgrade callback is the only place where you can create and delete stores.
The upgrade callback is a transaction itself. It's not 'readonly' or 'readwrite', but a more powerful transaction type called 'versionchange', in which you have permission to do anything - including reading and writing any store, as well as creating and deleting stores. Since it's one big transaction itself, don't use single-action transaction wrappers like db.add() inside it; use the transaction object provided to you as an argument.
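As a sketch of what that looks like (all the names and the version bump here are hypothetical - don't run this against the demo databases):

// create and delete stores only inside upgrade, and seed data through
// the provided transaction object, not through db.add():
export async function hypotheticalUpgrade() {
  const db = await openDB('someApp', 2, {
    upgrade(db, oldVersion, newVersion, transaction) {
      if (oldVersion < 2) {
        if (db.objectStoreNames.contains('obsoleteStore')) {
          db.deleteObjectStore('obsoleteStore'); // deleting a store is also upgrade-only
        }
        db.createObjectStore('settings');
        transaction.objectStore('settings').add('initial', 'state');
      }
    },
  });
  db.close();
}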
Now let's do demo10, where we bump the version to 2 to solve the old user issue we imagined above:
// demo10: handle both upgrade: 0->2 and 1->2
export async function demo10() {
  const db3 = await openDB('db3', 2, {
    upgrade: (db, oldVersion, newVersion, transaction) => {
      switch (oldVersion) {
        case 0:
          upgradeDB3fromV0toV1();
        // falls through
        case 1:
          upgradeDB3fromV1toV2();
          break;
        default:
          console.error('unknown db version');
      }

      function upgradeDB3fromV0toV1() {
        db.createObjectStore('moreCats', { keyPath: 'id' });
        generate100cats().forEach(cat => {
          transaction.objectStore('moreCats').add(cat);
        });
      }

      function upgradeDB3fromV1toV2() {
        db.createObjectStore('userPreference');
        transaction.objectStore('userPreference').add(false, 'useDarkMode');
        transaction.objectStore('userPreference').add(25, 'resultsPerPage');
      }
    },
  });
  db3.close();
}

function generate100cats() {
  return new Array(100).fill().map((item, index) => {
    let id = 'cat' + index.toString().padStart(3, '0');
    let strength = Math.round(Math.random() * 100);
    let speed = Math.round(Math.random() * 100);
    return { id, strength, speed };
  });
}
In demo10, we add a new store called userPreference to db3. This is what will happen for old users who already have db3 at version 1. However, if a brand new user (with db3 at version 0) runs demo10, both moreCats and userPreference will be created for them.

`// falls through` means "don't break here". Adding that comment prevents eslint from nagging you to add a break.
You can delete db3 in DevTools, then simulate an old user by clicking demo9 then demo10, and simulate a new user by directly clicking demo10.
Version upgrade without schema change:
Many people think of upgrade as a "schema change" event. True, a version change is the only place where you can create or delete stores, but there are other scenarios where a version change is a good choice even if you don't need to add or delete stores.
In demo10, we added a store called userPreference and set the initial values 'useDarkMode': false and 'resultsPerPage': 25, which simulate some settings the user can change. Now let's imagine you launch a new version, where you add a new preference called language that defaults to 'English'; you also implement infinite scroll, so 'resultsPerPage' is no longer needed; finally, you change 'useDarkMode' from a boolean to a string that can be 'light' | 'dark' | 'automatic'. How do you change the initial settings for new users, while preserving the saved preferences of old users?

It's a common problem that web developers face. When you store user preferences in localStorage, you might use a package like left-merge. Here with IndexedDB, let's solve it in demo11 with a version change:
demo11: upgrade db version even when no schema change is needed:
// demo11: upgrade db version even when no schema change is needed:
export async function demo11() {
  const db3 = await openDB('db3', 3, {
    upgrade: async (db, oldVersion, newVersion, transaction) => {
      switch (oldVersion) {
        case 0:
          upgradeDB3fromV0toV1();
        // falls through
        case 1:
          upgradeDB3fromV1toV2();
        // falls through
        case 2:
          await upgradeDB3fromV2toV3();
          break;
        default:
          console.error('unknown db version');
      }

      function upgradeDB3fromV0toV1() {
        db.createObjectStore('moreCats', { keyPath: 'id' });
        generate100cats().forEach(cat => {
          transaction.objectStore('moreCats').add(cat);
        });
      }

      function upgradeDB3fromV1toV2() {
        db.createObjectStore('userPreference');
        transaction.objectStore('userPreference').add(false, 'useDarkMode');
        transaction.objectStore('userPreference').add(25, 'resultsPerPage');
      }

      async function upgradeDB3fromV2toV3() {
        const store = transaction.objectStore('userPreference');
        store.put('English', 'language');
        store.delete('resultsPerPage');
        let colorTheme = 'automatic';
        let useDarkMode = await store.get('useDarkMode');
        if (oldVersion === 2 && useDarkMode === false) colorTheme = 'light';
        if (oldVersion === 2 && useDarkMode === true) colorTheme = 'dark';
        store.put(colorTheme, 'colorTheme');
        store.delete('useDarkMode');
      }
    },
  });
  db3.close();
}

function generate100cats() {
  return new Array(100).fill().map((item, index) => {
    let id = 'cat' + index.toString().padStart(3, '0');
    let strength = Math.round(Math.random() * 10);
    let speed = Math.round(Math.random() * 10);
    return { id, strength, speed };
  });
}
We didn't add or delete any stores here, so it could've been done without a version change, but a version change makes it much more organized and less error-prone. You can simulate all the possible scenarios by deleting db3 in DevTools, then clicking demo 9 → 10 → 11, or 9 → 11, or 10 → 11, or just 11.

Where to write your "upgrade" callback?
If you establish multiple connections to the same db in your code, you'd want to fire the version change on app start, before any other db connections are established. Then, when you call openDB() later, you can omit the upgrade callback from the third argument.
If you reuse a single connection with the pattern mentioned between demo3 and demo4, you can just provide the upgrade callback there. Remember that it only fires when the db version in the user's browser is lower than the version passed to openDB().
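One way to arrange the app-start approach (a sketch only; the function names are made up and the migrations are placeholders):

// e.g. an init module that runs once at app start:
import { openDB } from 'idb';

export async function initIndexedDB() {
  // run any pending upgrade once, up front:
  const db = await openDB('db1', 1, {
    upgrade(db, oldVersion, newVersion, transaction) {
      // ...migrations, as in demo10 / demo11
    },
  });
  db.close();
}

// later, anywhere in the app, connections can omit the upgrade callback:
export async function readSomething() {
  const db = await openDB('db1', 1);
  const value = await db.get('store1', 'message');
  db.close();
  return value;
}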
The blocked() and blocking() callbacks:
Similar to localStorage, IndexedDB follows the same-origin policy, so if the user opens your app twice in two tabs, both tabs access the same db. That's usually not an issue, but imagine this: the user opens your app, then you push out a version upgrade, then the user opens a second tab. Now you have a problem: the same db can't be at 2 versions at the same time in 2 tabs.
To solve this issue, there are two more callbacks you can provide to a db connection besides upgrade; they're called blocked and blocking:
const db = await openDB(dbName, version, {
  blocked: () => {
    // seems an older version of this app is running in another tab
    console.log(`Please close this app opened in other browser tabs.`);
  },
  upgrade: (db, oldVersion, newVersion, transaction) => {
    // …
  },
  blocking: () => {
    // seems the user just opened this app again in a new tab
    // which happens to have gotten a version change
    console.log(`App is outdated, please close this tab`);
  }
});
When the two-tab problem happens, the blocking callback fires in the old connection (the one preventing the upgrade from running), and blocked fires in the new connection. upgrade won't fire in the new connection until db.close() is called on the old connection or the old tab is closed.
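A common way to resolve it from the old tab's side is to close the connection inside blocking (a sketch; whether you then reload automatically or just ask the user to is up to you):

// let the old tab release its connection so the new tab's upgrade can run:
let db;

export async function connect() {
  db = await openDB('db1', 1, {
    blocking() {
      // a newer tab is waiting to upgrade this db:
      db.close(); // release our connection so the upgrade can proceed
      console.log('This tab is outdated, please reload.');
      // (any further use of db in this tab will need to reconnect)
    },
  });
}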
If you think it's a pain in the ass to have to worry about this kind of scenario, I 100% agree! Luckily there's a better way: use a service worker and precache your js files, so that no matter how many tabs the user opens, they'll all use the same js files (and therefore the same db version) until all tabs are closed. However, that's a totally different topic for another day.
You can create indexes (indices?) on a store. I don't care how you understand indexes in other databases; in IndexedDB, an index is a duplicated copy of your store, just sorted in a different order. You can think of it as a "shadow store" based on the real store, and the two are always kept in sync. Let's see what it looks like:
demo12: create an index on the 100 cats' strength:
// demo12: create an index on the 100 cats' strength:
export async function demo12() {
  const db3 = await openDB('db3', 4, {
    upgrade: (db, oldVersion, newVersion, transaction) => {
      // upgrade to v4 in a less careful manner:
      const store = transaction.objectStore('moreCats');
      store.createIndex('strengthIndex', 'strength');
    },
  });
  db3.close();
}
Run demo12, then check DevTools: you'll see the "shadow store" named strengthIndex has appeared under moreCats. Note a few things:
1. The upgrade event is the only place where you can add an index, so we had to upgrade db3 to version 4.
2. This upgrade didn't follow the 0->1, 1->2, 2->3 pattern. The point here is that there's no fixed rule for how to do a version upgrade; you can do whatever you see fit. However, this one will crash if you delete db3 and then directly click demo12, which simulates a bug that would hit brand new users landing directly on v4 (they never had the moreCats store, so the index can't be created on it).
3. In DevTools, you can see the strengthIndex store has the same 100 cats as the main store, only the keys are different - that's exactly what an index is: a store with the same values but different keys. You can retrieve values from it using the new keys, but you can't make changes to it, because it's just a shadow. The shadow updates automatically whenever the main store changes.
Adding an index is like creating the same store with a different 'keyPath'. Just as the main store is constantly sorted by the main key, the index store is automatically sorted by its own key.
Now let's retrieve values from the index:
demo13: get values from index by index key:
// demo13: get values from index by key
export async function demo13() {
  const db3 = await openDB('db3', 4);
  const transaction = db3.transaction('moreCats');
  const strengthIndex = transaction.store.index('strengthIndex');
  // get all entries where the key is 10:
  let strongestCats = await strengthIndex.getAll(10);
  console.log('strongest cats: ', strongestCats);
  // get the first entry where the key is 10:
  let oneStrongCat = await strengthIndex.get(10);
  console.log('a strong cat: ', oneStrongCat);
  db3.close();
}
Run demo13 and check the results in the console. As you'll notice, since strength became the key, the key is no longer unique, so .get() only returns the first match. To get all matches, we use .getAll().

Demo13 performed two queries in one transaction. You can also use the single-action transaction shortcuts so you don't need to write the transaction yourself; they're called db.getFromIndex() and db.getAllFromIndex(). Again, intellisense will help you.

demo14: get values from index by key using shortcuts:
// demo14: get values from index by key using shortcuts:
export async function demo14() {
  const db3 = await openDB('db3', 4);
  // do similar things as demo13, but use single-action transaction shortcuts:
  let weakestCats = await db3.getAllFromIndex('moreCats', 'strengthIndex', 0);
  console.log('weakest cats: ', weakestCats);
  let oneWeakCat = await db3.getFromIndex('moreCats', 'strengthIndex', 0);
  console.log('a weak cat: ', oneWeakCat);
  db3.close();
}
Demo14 retrieved values from strengthIndex in two transactions.
It's a very common task in any database to search for items that satisfy certain criteria - for instance, you may want to find "all cats with a strength greater than 7". With idb, we can still do that with getAll(), but we pass a range in place of a key.

The range object is constructed by calling a native browser API called IDBKeyRange:

demo15: find items matching a condition by using a range:
// demo15: find items matching a condition by using range
export async function demo15() {
  const db3 = await openDB('db3', 4);
  // create some ranges. note that IDBKeyRange is a native browser API,
  // it's not imported from idb, just use it:
  const strongRange = IDBKeyRange.lowerBound(8);
  const midRange = IDBKeyRange.bound(3, 7);
  const weakRange = IDBKeyRange.upperBound(2);
  let [strongCats, ordinaryCats, weakCats] = [
    await db3.getAllFromIndex('moreCats', 'strengthIndex', strongRange),
    await db3.getAllFromIndex('moreCats', 'strengthIndex', midRange),
    await db3.getAllFromIndex('moreCats', 'strengthIndex', weakRange),
  ];
  console.log('strong cats (strength >= 8): ', strongCats);
  console.log('ordinary cats (strength from 3 to 7): ', ordinaryCats);
  console.log('weak cats (strength <= 2): ', weakCats);
  db3.close();
}
Run demo15 and you'll see how we separated the 100 cats into 3 tiers.
Whenever you call .get() or .getAll() with idb, you can always substitute the key with a range, whether it's a primary key or an index key. Ranges work on strings too, since strings can be keys and keys are auto-sorted; you could do something like IDBKeyRange.bound('cat042', 'cat077').
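For example, a quick sketch that uses a range directly on the primary key of moreCats, with no index involved (the function name is made up):

// ranges also work on the primary key, not just on indexes:
export async function primaryKeyRangeDemo() {
  const db3 = await openDB('db3', 4);
  const someCats = await db3.getAll('moreCats', IDBKeyRange.bound('cat042', 'cat077'));
  console.log('cat042 through cat077: ', someCats);
  db3.close();
}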
For all the ways to create different ranges, check MDN.

IndexedDB doesn't provide a declarative language like SQL for finding things ("declarative" means "find xxx for me, and I don't care what algorithm you use, just give me the results"), so a lot of the time you have to do it yourself with JavaScript, writing loops and such.
You might've been thinking all along: "Yes! Why do I have to learn how to query the database, why can't I just getAll(), then find what I want with JavaScript?"
Indeed you can, but there's one problem: IndexedDB is designed to be a database, which means some people will store a million records in it. If you getAll(), you first have to read a million records into memory, then loop over them.
To avoid using too much memory, IndexedDB provides a tool called a cursor, which loops over a store directly. A cursor is like a pointer to a position in a store: you can read the record at that position, advance the position by 1, read the next record, and so on. Let's take a look:
demo16: loop over the store with a cursor:
// demo16: loop over the store with a cursor
export async function demo16() {
  const db3 = await openDB('db3', 4);
  // open a 'readonly' transaction:
  let store = db3.transaction('moreCats').store;
  // create a cursor, inspect where it's pointing at:
  let cursor = await store.openCursor();
  console.log('cursor.key: ', cursor.key);
  console.log('cursor.value: ', cursor.value);
  // move to next position:
  cursor = await cursor.continue();
  // inspect the new position:
  console.log('cursor.key: ', cursor.key);
  console.log('cursor.value: ', cursor.value);
  // keep moving until the end of the store
  // look for cats with strength and speed both greater than 8
  while (true) {
    const { strength, speed } = cursor.value;
    if (strength >= 8 && speed >= 8) {
      console.log('found a good cat! ', cursor.value);
    }
    cursor = await cursor.continue();
    if (!cursor) break;
  }
  db3.close();
}
Check the console and you'll see it's pretty straightforward: you create a cursor, which starts at position 0, then move it one position at a time by calling .continue(), reading data from cursor.key and cursor.value along the way.

You can also use a cursor on an index, and/or with a range:
demo17: use cursor on a range and/or on an index:
// demo17: use cursor on a range and/or on an index
export async function demo17() {
  const db3 = await openDB('db3', 4);
  let store = db3.transaction('moreCats').store;
  // create a cursor on a very small range:
  const range = IDBKeyRange.bound('cat042', 'cat045');
  let cursor1 = await store.openCursor(range);
  // loop over the range:
  while (true) {
    console.log('cursor1.key: ', cursor1.key);
    cursor1 = await cursor1.continue();
    if (!cursor1) break;
  }
  console.log('------------');
  // create a cursor on an index:
  let index = db3.transaction('moreCats').store.index('strengthIndex');
  let cursor2 = await index.openCursor();
  // cursor.key will be the key of the index:
  console.log('cursor2.key:', cursor2.key);
  // the primary key will be located in cursor.primaryKey:
  console.log('cursor2.primaryKey:', cursor2.primaryKey);
  // it's the first item in the index, so it's a cat with strength 0
  console.log('cursor2.value:', cursor2.value);
  db3.close();
}
As you can see, when a cursor is on an index, cursor.key becomes the index key, and the primary key can be found in cursor.primaryKey.

If you use TypeScript, don't forget to type your stores - it makes your life so much better.
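A minimal sketch of what that looks like with idb's DBSchema helper type (the interface name is made up, and the shape mirrors db2/store3 from the earlier demos):

import { openDB, DBSchema } from 'idb';

// describe each store's key type and value type:
interface CatDB extends DBSchema {
  store3: {
    key: string; // e.g. 'cat001'
    value: { id: string; strength: number; speed: number };
  };
}

export async function typedDemo() {
  const db2 = await openDB<CatDB>('db2', 1);
  // get() is now typed: the result is the cat shape declared above (or undefined)
  const cat = await db2.get('store3', 'cat001');
  console.log(cat?.strength);
  db2.close();
}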
Unlike localStorage, IndexedDB can be used in service workers. It's a good way to store state for your worker (a worker should be stateless because it can be killed at any time), it's a good way to pass data between the worker and your app, and it's very well suited to PWAs, since IndexedDB is designed to store lots of data for offline apps.
Most people write their service worker with workbox, in an environment where they can import npm packages. However, if module imports aren't possible in your setup, you can still import idb into your service worker this way.

That's all for IndexedDB with idb. I used it when creating a desktop app for Google Tasks, and it helped me so much. Hope it'll help you too.