How do Your Favorite Books Compare in a VR World?

by John David Martin, January 3rd, 2019

Visualizing semantic relationships using spatial embeddings

🔍 Exploring Content (the traditional way)

If you could search your own library of books digitally, how would you do it?

A simple text search could work, but that approach typically only answers the question “where are these terms mentioned in my books?”

There are some questions that would be hard for a text search to answer:

  1. How would I find content where certain keywords may not be present?
  2. How do all my books relate to each other?
  3. Which of the books that I have not read yet would be a good place to start?

These questions are trying to access the “semantics” or the “meaning” within content regardless of the actual words used.

Beyond the Simple Text Search

Unicon (my most excellent employer) authorized me to spend some time creating a proof of concept to help clients navigate the field of Natural Language Processing (NLP) and see how hidden semantic information in a corpus of text could be leveraged.

Unicon, Inc. is a leading provider of technology consulting, services, and support for the education industry that works with clients to find solutions to meet business challenges.

Unicon also green-lit a Virtual Reality (VR) experience to further illuminate the power of exploring semantic information.

For this initial proof of concept, I used 14 books that I had already read (or listened to 😅). Being familiar with the content would help me validate that the machine learning and layout algorithms were on the right track.

The books were non-fiction and covered science/math topics with the exception of one biography (which you will see stands out like a sore thumb).

📈 Extracting Semantic Information

Extracting semantic information from text is now a common practice in NLP. Machine Learning algorithms run through collections of documents and assign each document a semantic value (vector). Each portion of text can then be semantically compared to any other by comparing their semantic vectors.
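For the curious, that comparison usually comes down to cosine similarity between vectors. Here is a minimal sketch (not the project's actual code) using NumPy, with random stand-ins for the 300-dimensional vectors described below:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two semantic vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 300-dimensional semantic vectors for two pieces of text
vec_section_a = np.random.rand(300)
vec_section_b = np.random.rand(300)
print(cosine_similarity(vec_section_a, vec_section_b))
```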

We considered several machine learning algorithms for generating the semantic vectors for each document/node, including LDA, LDA2Vec, and Doc2Vec.

Ultimately, Doc2Vec was chosen because it provides better opportunities for discovery in a visualization. Doc2Vec generates semantic vectors where the closeness of two vectors depends only on semantic similarity, without the additional pull toward particular topics that LDA and LDA2Vec introduce.

Rather than feed Doc2Vec each book as a document, we decided to assign each meaningful entry in the table of contents of each book its own semantic vector. This would allow semantic comparisons across every level in each book’s hierarchy.
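To give a feel for this step, here is a rough sketch using gensim's Doc2Vec. The `sections` list, the tag naming scheme, and the hyperparameters are hypothetical placeholders, not the project's actual preprocessing:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess

# Hypothetical corpus: one entry per meaningful table-of-contents item
sections = [
    ("networks-of-the-brain/ch1/small-world-networks", "Text of the section..."),
    ("chaos/ch3/strange-attractors", "Text of the section..."),
    # ... one tuple per TOC entry, across all 14 books
]

corpus = [
    TaggedDocument(words=simple_preprocess(text), tags=[tag])
    for tag, text in sections
]

# 300-dimensional vectors, matching the dimensionality discussed below
model = Doc2Vec(vector_size=300, min_count=2, epochs=40, workers=4)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

# Semantic vector for a single TOC entry (gensim 4.x API)
vector = model.dv["chaos/ch3/strange-attractors"]
```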

Doc2Vec provided the semantic vectors. The next step was to make them usable for a visualization.

🏝️ Spatial Embeddings of Semantic Information

The semantic vectors that Doc2Vec generated had 300 dimensions. Humans can effectively visualize only 2 or 3 dimensions, so the 300-dimensional vectors needed to be squashed down to 3 or fewer dimensions to make a visualization user friendly.

T-SNE (t-Distributed Stochastic Neighbor Embedding) was the first dimensionality reduction algorithm considered. It provided pretty incredible results:

T-SNE Projection of Semantic Vectors
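If you want to try this step yourself, scikit-learn ships a t-SNE implementation. The sketch below continues the earlier gensim example; the perplexity and other settings are assumptions, not the values behind the image above:

```python
import numpy as np
from sklearn.manifold import TSNE

# One 300-dimensional semantic vector per TOC node, in a fixed order
doc_vectors = np.vstack([model.dv[tag] for tag, _ in sections])

# Reduce to 2D for a map-like layout; cosine distance matches how the
# semantic vectors are compared. Perplexity here is an assumed value.
tsne = TSNE(n_components=2, metric="cosine", perplexity=30, random_state=42)
positions_2d = tsne.fit_transform(doc_vectors)
```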

Topic vs. Book Centric Layout

T-SNE provided a topic-centric layout, which is common among most current semantic exploration applications. We realized we weren't leveraging all the information each book gave us, so we switched to a custom force-layout algorithm in which the edge and node forces depend both on semantic similarity across all nodes and on the relationships between nodes within each book.

Going with a custom “book-centric” layout provided the following benefits:

  1. Authors spend considerable time organizing their chapters and sections, and we wanted to preserve and leverage that information.
  2. By preserving book structure in the layout, users can use their familiarity with certain books as starting points in the visualization. This could allow for quicker orientation and a better sense of what is already familiar to users.

This layout lost some of the emphasis on the topic groupings that T-SNE provided, but we found alternative ways (see the video below) to expose this information.

Custom Force Layout Approach for 2D Embedding of Semantic Vectors
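The sketch below is a toy illustration of the general idea, not the project's actual algorithm: spring forces along each book's table-of-contents edges, plus pairwise attraction weighted by semantic similarity and a short-range repulsion to keep nodes apart. All constants are made up and would need tuning:

```python
import numpy as np

def force_layout(positions, vectors, edges, iterations=500,
                 k_edge=0.05, k_sim=0.01, k_repel=0.5, step=0.1):
    """Toy force-directed layout.

    positions : (n, 2) float array of initial 2D coordinates (e.g. random or t-SNE output)
    vectors   : (n, d) semantic vectors, used to weight attraction
    edges     : list of (i, j) parent/child pairs from each book's TOC hierarchy
    """
    # Pairwise cosine similarity between all semantic vectors
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sim = unit @ unit.T

    pos = positions.astype(float).copy()
    for _ in range(iterations):
        delta = pos[:, None, :] - pos[None, :, :]            # (n, n, 2)
        dist = np.linalg.norm(delta, axis=-1) + 1e-9          # (n, n)
        direction = delta / dist[..., None]

        # Attraction toward semantically similar nodes, repulsion at close range
        attract = -k_sim * sim[..., None] * delta
        repel = k_repel * direction / (dist[..., None] ** 2)
        forces = (attract + repel).sum(axis=1)

        # Spring forces along within-book TOC edges
        for i, j in edges:
            spring = k_edge * (pos[j] - pos[i])
            forces[i] += spring
            forces[j] -= spring

        pos += step * forces
    return pos
```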

🔥 Visualizing Semantic Information in VR

It was now time to come up with a visualization for the spatial embeddings we had generated.

We decided to create a cozy island where each node would be a tree growing on the island. This provided a landscape that allows users to tap into the “Memory Palace” effect for enhanced information recall.

To make the island more interesting, a “height map” and a “splat map” were generated from the embedding data to add detail to the terrain.

Splat Map & Height Map

The different colors in the splat map are used to render different textures on the terrain.

The height map determines the elevation profile of the island: the lighter the value, the higher the elevation. The terrain is higher around nodes that are higher in their book's hierarchy. For example, the root node that represents an entire book sits at the top of a hill, while its chapters and sections are placed lower as the book spreads out over the terrain.
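As an illustration of the general idea (not the project's actual terrain pipeline), a height map like this can be built by summing one Gaussian bump per node, with taller bumps for nodes nearer the root of their book's hierarchy:

```python
import numpy as np

def height_map(node_xy, node_depths, size=512, sigma=12.0):
    """Grayscale height map: brighter (higher) near shallow, root-level nodes.

    node_xy     : (n, 2) node positions scaled to [0, size)
    node_depths : (n,) depth in the TOC hierarchy (0 = whole book, 1 = chapter, ...)
    """
    ys, xs = np.mgrid[0:size, 0:size]
    height = np.zeros((size, size))
    for (x, y), depth in zip(node_xy, node_depths):
        amplitude = 1.0 / (1.0 + depth)   # root nodes make the tallest hills
        height += amplitude * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return height / height.max()          # normalize to [0, 1] for export as an image
```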

From there it was a matter of iterating on design and interactions in the VR application. Unity was used to develop the VR app, and the Oculus Go was the target headset for the VR experience.

Here is a preview of the app:

🥇 Outcomes

It quickly became evident that the proof of concept helped answer our previous semantically oriented questions:

1 — How would I find content where certain keywords may not be present?

The semantic grouping of nodes makes it fairly easy to find content that is semantically similar to a topic in question. The app allows you to find a familiar node that represents a topic you are interested in; from there you can explore nearby nodes on the island or highlight nodes in other books that have semantic similarities.
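Under the hood, highlighting semantically similar nodes can be as simple as a nearest-neighbor query on the Doc2Vec vectors. Continuing the earlier gensim sketch (the tag below is hypothetical):

```python
# Nodes most semantically similar to a familiar starting point (gensim 4.x)
start = "networks-of-the-brain/ch1/small-world-networks"
for tag, score in model.dv.most_similar(start, topn=5):
    print(f"{score:.2f}  {tag}")
```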

2 — How do all my books relate to each other?

Take a look at the image below. The topics Neuroscience, Graph Theory, Chaos, and Information Theory are drawn to encompass their associated books:

  • Notice that on the left side the book “Steve Jobs: A Biography” by Walter Isaacson stands alone. It does not have any substantial overlap semantically with the other books.
  • The intersection of the Network Science and Neuroscience topics consists of a book called “Networks of the Brain” by Olaf Sporns. This book explores how Network Science can be used to study the brain. Wow!
  • The Probability and Statistics region rests nicely in the middle of all the science-oriented topics. I tend to think of probability as the basis of everything, so it was nice to see the semantics and layout algorithms agree with me :-)

3 — Which of the books that I have not read would be a good place to start?

Readers who have not yet read one or more books on the map can quickly pick out a book that overlaps with books they have already read and that looks interesting based on how it is positioned semantically!

Bonus — The Memory Palace Effect

The VR experience provided the benefit of tapping into the spatial encoding abilities of our brains. After spending minimal time in the VR experience, I found it trivial to recall the layout of the books on the island. It's as easy as drawing a map the way anyone would draw a map of their own home. This also gave me hooks to recall almost all of the semantic similarities across books!

🔭 Where to Go From Here?

The results of this proof of concept are exciting. It provides a whole new perspective on an existing body of knowledge and has spurred more ideas about how to expand its impact:

  1. Allow users to drop whiteboards onto the island and write their thoughts and draw pictures on them.
  2. Provide users with reading plans across a new corpus that leverage semantics to optimize knowledge gain. Users could mark nodes complete so they can see their progress and anticipate new topics and connections that build on what they have already covered.
  3. Take advantage of voice capabilities of VR headsets to do a voice search on text to find a related semantic area.
  4. Highlight paths across multiple books that are linked via semantically similar nodes (this perhaps could lead to fun exercises for students writing papers based on semantic similarities to tie multiple resources together).

Those are just a few new ideas that popped into our heads. What are some other ways you see this approach being used to help people explore their content?

👏 Like this article? Give it a few claps and help others discover it!

Have EdTech challenges? Let Unicon know!
