
web app deployment #192

Open
lucasroth opened this issue Dec 20, 2016 · 8 comments
Comments

@lucasroth

Hi, I have some good experience using AR SDKs, but not much with web development.
I am stuck on how to deploy an existing example to a web server. I tested uploading the whole tracking.js-master folder to an HTTPS server, and the examples work fine over the internet. But I can't do the same for a new .html file created just by copying and pasting an existing .html example.
Could you please explain, or give a hint on, how to wrap the web app correctly?

@AnthyG

AnthyG commented Jan 6, 2017

You just need to include the tracking.js file (in the directory you downloaded, it should be located in /build).
So first of all, you need to add a video element and a canvas element to your HTML:

<div id="C">
    <video id="video" width="800" height="600" preload autoplay loop muted></video>
    <canvas id="canvas" width="800" height="600"></canvas>
</div>

Here's some CSS to get a mirrored video:

#C {
    width: 800px;
    height: 600px;
    position: relative;
    margin: 0 auto;
}

/* Hide your video-element with this */
#video{
    /*visibility: hidden;*/
}

#canvas {
    position: absolute;
    top: 0;
}

/* 'Mirror' video and canvas-elements */
#video,
#canvas {
    -moz-transform: scale(-1, 1);
    -o-transform: scale(-1, 1);
    -webkit-transform: scale(-1, 1);
    transform: scale(-1, 1);
    filter: FlipH;
}

Then you have to create a 2D context for that canvas in JavaScript, as follows:

var canvas = document.getElementById("canvas");
var ctx = canvas.getContext('2d');

This should be called once the DOM is loaded, so just put it into a script tag right before </body>.

Next you have to create a tracker. For this example, I'll just use the ColorTracker already provided with tracking.js:

// Only track the color magenta
var colors = new tracking.ColorTracker(['magenta']);
colors.on('track', function(event) {
    // Here is some sample code to draw dots onto the canvas
    event.data.forEach(function(rect) {
        var x = rect.x + rect.width / 2, // This will get the horizontal center of the rect
            y = rect.y + rect.height / 2, // .. and the vertical
            w = rect.width,
            h = rect.height,
            c = rect.color;

        ctx.beginPath();
        ctx.arc(x, y, 50 / w + 50 / h, 0, 2 * Math.PI, false);
        ctx.fillStyle = c;
        ctx.fill();
    });
});
tracking.track('#video', colors, {
    camera: true
});
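The geometry inside the 'track' callback can be factored into a small pure helper, which also makes it easy to sanity-check outside the browser. This is just a sketch; the rectToDot name is my own, not part of tracking.js:

```javascript
// Compute the centre point and a draw radius for a tracked rect,
// mirroring the arithmetic used in the 'track' callback above.
function rectToDot(rect) {
    return {
        x: rect.x + rect.width / 2,                  // horizontal centre
        y: rect.y + rect.height / 2,                 // vertical centre
        radius: 50 / rect.width + 50 / rect.height,  // same radius formula as above
        color: rect.color
    };
}

// Example: a 100x50 rect at (10, 20)
var dot = rectToDot({ x: 10, y: 20, width: 100, height: 50, color: 'magenta' });
// dot.x === 60, dot.y === 45, dot.radius === 1.5
```

Keeping the math separate from the drawing calls means you can swap the circle for any other shape without touching the coordinate logic.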

Instead of drawing a circle, you could draw anything you want; here are two more examples:

  • Rectangle
ctx.strokeStyle = c;
ctx.strokeRect(rect.x, rect.y, w, h);
  • Line (this requires saving the last coordinates...)
ctx.beginPath();
ctx.lineCap = "round";
ctx.lineJoin = "round";
ctx.moveTo(oldx, oldy);
ctx.lineTo(x, y);
ctx.strokeStyle = c;
ctx.stroke();

oldx = x;
oldy = y;
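The "save the last coordinates" step can be wrapped in a small factory that remembers the previous point between calls, instead of using loose oldx/oldy variables. A minimal sketch (the makeLineDrawer name is my own):

```javascript
// Factory that remembers the previous point, so each call draws a
// line segment from the last position to the new one.
function makeLineDrawer() {
    var oldx = null, oldy = null;
    return function drawSegment(ctx, x, y, color) {
        if (oldx !== null) {         // skip the very first point: nothing to draw from yet
            ctx.beginPath();
            ctx.lineCap = "round";
            ctx.lineJoin = "round";
            ctx.moveTo(oldx, oldy);
            ctx.lineTo(x, y);
            ctx.strokeStyle = color;
            ctx.stroke();
        }
        oldx = x;                    // save for the next call
        oldy = y;
    };
}
```

You would create one drawer outside the 'track' callback and call it with the computed centre on every event, so the stored point survives between frames.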

Also, you may just want to try this via localhost if it's hosted on your own development machine (there's an npm module called 'http-server' which does this just fine!)
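Putting the snippets above together, a minimal page might look like this sketch (the build/tracking-min.js path assumes you copied the file from the /build directory mentioned above; adjust it to wherever you host the file):

```html
<!DOCTYPE html>
<html>
<head>
    <style>
        /* CSS from above goes here */
    </style>
</head>
<body>
    <div id="C">
        <video id="video" width="800" height="600" preload autoplay loop muted></video>
        <canvas id="canvas" width="800" height="600"></canvas>
    </div>
    <!-- tracking.js must load before your own script -->
    <script src="build/tracking-min.js"></script>
    <script>
        // canvas context + tracker setup from above go here
    </script>
</body>
</html>
```

Uploading this file together with the build directory to your server should behave the same as the bundled examples.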

Here's a link to the MDN site regarding the canvas element.
Hope I could help 😅

@lucasroth
Author

lucasroth commented Jan 6, 2017 via email

@AnthyG

AnthyG commented Jan 7, 2017

@lucasroth it's a little hard to read, but if I understand correctly, you want to use three.js to render a 3D object you got from clara.io? I don't really get what you mean by 'in a z roder forward', sry 😞

But the part about tracking server-side and then rendering client-side is not a great idea unless you have a really stable connection to the server, as you have to send all the data the client gets from your camera TO the server, do whatever you want there, and then send the result(s) BACK to the client...

I haven't really checked 3d-rendering with the canvas-element and js yet, and I've had limited success with three.js.

So you may just want to clarify the part about 'z roder' and the 'video background' (do you want to render the 3D object bigger than the video, or what? 🤔)

I'll try to help as much as I can, but I won't really test 3D rendering myself just yet, though you may want to check out issue #193

@lucasroth
Author

lucasroth commented Jan 9, 2017 via email

@AnthyG

AnthyG commented Jan 12, 2017

But doesn't three.js also use a canvas to render the objects?
Maybe just try putting the canvas element 'underneath' the video element, like this:

<div id="C">
    <video id="video" width="800" height="600" preload autoplay loop muted></video>
    <canvas id="canvas" width="800" height="600"></canvas>
</div>

I did actually try setting the z-index too, but somehow it didn't work 🤔

@devotionsolutions

devotionsolutions commented Jan 12, 2017

Hi @lucasroth, we are both actually doing the same thing: a web-based augmented reality experience :D
What I'm using is tracking.js + Babylon.js, and I'm adding the video from the webcam as a VideoTexture in Babylon's scene. This way, we are able to render the 3D object in front of the camera stream.
HTH

@lucasroth
Author

lucasroth commented Jan 12, 2017 via email

@devotionsolutions

Hi @lucasroth, I'm not sure I understand what your problem is now. But PM me and I'll try to help.

Thanks,
