Social Virtual Reality

CS 4331-002 - Virtual Reality Project 3

Supported platforms

Due: Tuesday, May 8, 2018

Video Demonstration

Try it here:


  • For Project 3 of this course, we proposed using A-Frame for WebGL-based Virtual Reality and AR.js for WebGL-based Augmented Reality to provide users with a multiplayer social interaction environment. This multiuser environment would allow users to access and experience data (i.e., videos, photos, and webpages) with other users at the same time. The AR aspect of our application would be a supplement to the online web application: Augmented Reality markers generated with AR.js would contain supplementary data (bookmarks/links) for users of our web application. We believed this could be achieved by making API calls with JavaScript. We would mainly like our program to function on mobile devices (i.e., Android, iPhone, Windows Phone). Our team consists of two members: Simon Woldemichael and Xujia Wu. Links to potential libraries and free APIs to be used are linked below.

Note: In the proposal, we stated that we would be using AR.js as a supplement to the assignment, but as advised in previous projects, we should keep the application contained within itself and should not add external features that only complement the application and are not core aspects of it. We, therefore, did not use AR.js.

Quick Start!

  • To run our application locally, do the following:

    1. Clone this repository: git clone
    2. Change directories into the project: cd VR_Project_3
    3. Assuming you have Node.js and npm installed, run: npm install
    4. Then start the WebRTC server: node server/easyrtc-server.js
    5. Navigate to localhost:8081
  • If you would like to change what port the program runs on, set the PORT environment variable or change 8081 in this line to your preferred port.
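The port override described above can be sketched as follows (the variable name `port` and the exact fallback logic are assumptions about how `server/easyrtc-server.js` is written; check the actual file):

```javascript
// Sketch: read the listening port from the PORT environment variable,
// falling back to the default 8081 when it is not set.
const port = parseInt(process.env.PORT, 10) || 8081;
console.log(`Server would listen on port ${port}`);
```

Usage: `PORT=3000 node server/easyrtc-server.js` would then serve on port 3000.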

Tools Used


Work Distribution

  • Xujia: Front-end Integration
  • Simon: Back-end Integration

We learned...

  • The intricacies of real time communication within a virtual reality environment
  • Just how extreme library compatibility issues can be
  • API calling with JavaScript
  • Material design concepts
  • Multi-component event listener registration
    • If you have registered 1 component to the DOM and want to make multiple instances of that component, you cannot add duplicate event listeners to separate instances of the same component.
  • Don't plan on doing something complicated with an early-stages library using external dependencies and plugins that are basically untested and are not very beginner friendly 😬
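The duplicate-listener lesson above comes down to how `addEventListener` deduplicates. A minimal sketch, using Node's built-in `EventTarget` as a stand-in for a DOM element (the A-Frame case is the same mechanism):

```javascript
// Demonstration: addEventListener ignores an identical (type, listener)
// pair added twice, but two distinct function objects both register.
const target = new EventTarget();

let count = 0;
const handler = () => { count += 1; };

// Same function object added twice: only one registration survives.
target.addEventListener('click', handler);
target.addEventListener('click', handler);
target.dispatchEvent(new Event('click'));
console.log(count); // 1

// Two distinct anonymous functions: both fire.
target.addEventListener('click', () => { count += 10; });
target.addEventListener('click', () => { count += 10; });
target.dispatchEvent(new Event('click'));
console.log(count); // 22
```

So sharing one named handler across instances is safe, while re-registering anonymous listeners on each instance silently stacks them.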

Biggest issues

  • HUGE: The following error
    Cross-Origin Request Blocked:
    The Same Origin Policy disallows reading the remote resource at %RESOURCE%.
    This can be fixed by moving the resource to the same domain or enabling CORS.
    means we can't update the content of our components with data that doesn't exist on our domain. So, adding images and videos from anywhere outside the local scope of the application is never possible WITHOUT this Chrome plugin, which disables Same Origin Policy enforcement.
  • We expect the same device compatibility issues that occurred during the first project completed for this course.
  • The 'beta-ness' of A-Frame caused problems when we tried to use custom components that were only tested against earlier versions of A-Frame (i.e., v0.6.1 & v0.7.1)
  • Setting up Node to support a WebRTC server
  • YouTube's API prevents dynamic sourcing of videos to inner canvas elements and A-Frame is unable to draw an embed or live video stream into the scene
  • Rendering text to the scene is already computationally expensive; rendering videos is nearly impossible
  • Embedding a YouTube player in an <iframe></iframe> instead of sourcing the video directly is impossible due to limitations of the library
  • The boilerplate practically set us up to run in circles, solving and causing problems until we decided to split all of the scenes up
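The CORS error in the first bullet can only be worked around by serving (or proxying) the asset from our own origin with the right response headers. A minimal sketch of those headers (the helper name `withCorsHeaders` is ours; the header names are the standard CORS ones):

```javascript
// Sketch: attach the CORS headers a same-origin proxy (or the remote
// server itself) would need so the browser allows cross-origin reads.
function withCorsHeaders(headers = {}) {
  return {
    ...headers,
    'Access-Control-Allow-Origin': '*', // or a specific allowed origin
    'Access-Control-Allow-Methods': 'GET, OPTIONS',
  };
}

console.log(withCorsHeaders({ 'Content-Type': 'video/mp4' }));
```

Without control over the remote server, though, this is exactly what we could not do, hence the Chrome plugin workaround.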


  1. See the first entry under Biggest issues
  2. IMPORTANT: YouTube does not allow direct file access to their videos via their API. Because of this, we are unable to dynamically set videos. The function logic is simple:

```javascript
function updateVideo(ix) {
	var selected_source = document.getElementById('urlstore' + ix).getAttribute('value');
	var play_video = document.querySelector('#videoPlayer');
	// A-Frame's setAttribute takes (component, property, value)
	play_video.setAttribute('material', 'src', selected_source);
}
```
  • Which source URL is loaded into the video player depends on which video the user selected in the interface. But because the URL must be a direct link, and YouTube doesn't give its API users access to direct links, we cannot update the video player to play YouTube videos. Please see this, this, and this (by a lead A-Frame developer) for more details.

Planned timeline

External asset sources


  • Please visit our Trello Board to see references and sources


Starting Scene

  • After the user has selected a room and username, they will be loaded into the hub scene, where they will be asked to select a portal to be transferred to another, much more interactive, scene.

Multiuser Experience (currently limited)

  • Since we made use of the Networked-Aframe plugin, scenes can be used as rooms. Since we also use the dynamic-rooms component, separate joinable rooms can be created on the server. Currently, this is only supported on the first level, and a majority of the components still need to be synced to the RTC environment. Syncing components, custom or built-in, is very tedious and didn't seem very efficient to do manually. (Notice in the screenshot below that the rotation of the 2 players is not synced quickly enough, so it looks like the other player is facing the other way)
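Syncing a component with Networked-Aframe means listing it in the template's sync schema. A sketch of the shape such a schema takes (based on Networked-Aframe's `NAF.schemas.add`; the template id and component list here are illustrative, and exact fields may differ across versions):

```javascript
// Sketch of a Networked-Aframe sync schema: each listed component's
// state is broadcast to peers and interpolated on the remote side.
const avatarSchema = {
  template: '#avatar-template', // assumed template id
  components: [
    'position',
    'rotation', // syncing this would fix the lagging-rotation issue
    { selector: '.head', component: 'material', property: 'color' },
  ],
};

// In the real app this would be registered with:
// NAF.schemas.add(avatarSchema);
console.log(avatarSchema.components.length); // 3
```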

Material Design Aesthetic 🌌

  • After clicking 'Let's Go!' on the login page, the user will be taken to the Hub scene, where they will be able to view the buttons for the particular APIs.

YouTube API Interface

  • Notice: Since it is impossible to directly stream videos from YouTube into A-Frame without proxying the videos through external frameworks, we are currently only playing 3 test videos that have been pre-downloaded from YouTube. In the YouTube scene of our project, users are able to search for videos through YouTube's API. The interface shows the 3 most relevant videos returned by the API. We provide an API key in the application, but to get your own API key go here.
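The search itself goes through the YouTube Data API v3 `search.list` endpoint. A sketch of how such a request URL could be built (`youtubeSearchUrl` is our illustrative helper; `YOUR_API_KEY` is a placeholder, not a working key):

```javascript
// Sketch: build a YouTube Data API v3 search request for the 3 most
// relevant videos matching a query.
function youtubeSearchUrl(query, apiKey) {
  const params = new URLSearchParams({
    part: 'snippet',
    type: 'video',
    maxResults: '3', // the interface shows the top 3 results
    q: query,
    key: apiKey,
  });
  return `https://www.googleapis.com/youtube/v3/search?${params}`;
}

console.log(youtubeSearchUrl('a-frame vr', 'YOUR_API_KEY'));
```

The response contains video ids and metadata, not direct media URLs, which is exactly the limitation described under Biggest issues.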

Chat board (early alpha) 📱

  • In this scene, the user can type and send messages to a board within the scene. This scene was mostly just to see if we could send asynchronous messages through the server, but once again, setting up synced components turned out to be much more complicated than we expected.

Flickr API Interface 🐤

  • In the Flickr scene, users are able to search for images through Flickr's API. The interface returns 12 images related to the tag that the user types using the keyboard in the scene. We provide an API key in the application, but to get your own API key (Yahoo account required) go here.
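The image search uses Flickr's REST API (`flickr.photos.search`). A sketch of how such a request URL could be built (`flickrSearchUrl` is our illustrative helper; the parameter values mirror the 12-image interface described above):

```javascript
// Sketch: build a flickr.photos.search request that returns raw JSON;
// per_page=12 matches the 12 images shown in the scene.
function flickrSearchUrl(tags, apiKey) {
  const params = new URLSearchParams({
    method: 'flickr.photos.search',
    api_key: apiKey,
    tags,
    per_page: '12',
    format: 'json',
    nojsoncallback: '1', // raw JSON instead of a JSONP wrapper
  });
  return `https://api.flickr.com/services/rest/?${params}`;
}

console.log(flickrSearchUrl('sunset', 'YOUR_API_KEY'));
```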

Concepts Worth Noting 📖

Linear Interpolation 📈

  • For objects and components to be synced across Real-Time Communication (WebRTC), they need to be interpolated. Linear interpolation is a mathematical concept in which data points, in this case the position, rotation, and other attributes of an A-Frame component, are curve-fitted using linear polynomials to construct new, accurate intermediate values. This means that, even in a 3D scene, the change between network updates appears smooth and continuous. This is of course preferable to a choppy or laggy VR scene, which could cause motion sickness. (Source)
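The concept above fits in a few lines of code (a generic sketch, not the networked-aframe internals; `lerp` and `lerpVec3` are our names):

```javascript
// Linear interpolation: blend between two samples a and b by t in [0, 1].
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// Applied per-axis, this moves a remote avatar smoothly between two
// received network updates p and q instead of snapping it to q.
function lerpVec3(p, q, t) {
  return { x: lerp(p.x, q.x, t), y: lerp(p.y, q.y, t), z: lerp(p.z, q.z, t) };
}

console.log(lerpVec3({ x: 0, y: 0, z: 0 }, { x: 2, y: 4, z: -6 }, 0.5));
// → { x: 1, y: 2, z: -3 }
```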

Credit for external components and JavaScript API wrappers go to their respective owners and creators 👍

