
screenshot not working correctly #358

Closed
MirkoAlo opened this issue May 31, 2018 · 43 comments

Comments

@MirkoAlo

MirkoAlo commented May 31, 2018

Hi guys!

I'm trying to add a function to take a screenshot of my 3D model rendered in AR. The model is captured, but the background of the object is always black or white.

Can anybody help me?

Thanks

@liyasthomas

@MirkoAlo Can you please share how you got it working, i.e. how to take a snapshot?

@MirkoAlo
Author

MirkoAlo commented Jun 1, 2018

@liyasthomas

I have a button in HTML that launches this function on click:

document.querySelector('a-scene').components.screenshot.getCanvas('equirectangular');

I also tried with 'perspective', but the camera background is always black or white; I can only see the model.

Thanks for the help
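
For reference, a minimal sketch of turning that screenshot canvas into a downloadable PNG ('perspective' is used here, but 'equirectangular' works the same way). Note that this still contains only the 3D layer, which is why the background comes out black or white; the #download-link anchor is an assumed element, not something from the original page:

// Minimal sketch: pull a PNG out of the screenshot component's canvas and download it.
function downloadSceneScreenshot() {
  var canvas = document.querySelector('a-scene')
    .components.screenshot.getCanvas('perspective');
  var link = document.getElementById('download-link'); // assumed <a> element
  link.setAttribute('download', 'scene.png');
  link.setAttribute('href', canvas.toDataURL('image/png'));
  link.click();
}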

@liyasthomas

liyasthomas commented Jun 1, 2018

@MirkoAlo Will you share your code snippet so I can evaluate it? I got the same results when I tried to capture a screenshot like this: Generate and Download Screenshot of webpage without losing the styles (https://stackoverflow.com/a/44495166)

(function (exports) {
	// Note: this helper is copied from the Stack Overflow answer but is never
	// actually called below; both branches of the original URL check returned
	// the element unchanged, so it is simplified here.
	function urlsToAbsolute(nodeList) {
		if (!nodeList.length) {
			return [];
		}
		var attrName = "href";
		if (nodeList[0].__proto__ === HTMLImageElement.prototype || nodeList[0].__proto__ === HTMLScriptElement.prototype) {
			attrName = "src";
		}
		nodeList = [].map.call(nodeList, function (el, i) {
			var attr = el.getAttribute(attrName);
			if (!attr) {
				return;
			}
			return el;
		});
		return nodeList;
	}

	// Render the #capture element into a canvas with html2canvas,
	// then save the result as a PNG with FileSaver.js.
	function screenshotPage() {
		var wrapper = document.getElementById("capture");
		html2canvas(wrapper, {
			onrendered: function (canvas) {
				canvas.toBlob(function (blob) {
					saveAs(blob, "Screenshot.png");
				});
			}
		});
	}

	// Restore the scroll position stored in data attributes on page load.
	function addOnPageLoad_() {
		window.addEventListener("DOMContentLoaded", function (e) {
			var scrollX = document.documentElement.dataset.scrollX || 0;
			var scrollY = document.documentElement.dataset.scrollY || 0;
			window.scrollTo(scrollX, scrollY);
		});
	}

	function generate() {
		screenshotPage();
	}
	exports.screenshotPage = screenshotPage;
	exports.generate = generate;
})(window);

HTML:

<div onclick="generate()">Take screenshot</div>

The above snippet uses html2canvas.js to capture the contents of an element by id and write them into a canvas, then saves the image file using FileSaver.js. So you basically need to import those two files to get this code working.

Since html2canvas doesn't support the video tag yet, it won't capture the contents of a-scene (in AR.js on A-Frame). Whenever I try to capture a screenshot like this, I'm also getting screenshots with a black background 😢 .

@Jamolinesca

@MirkoAlo @liyasthomas
Hi! I am also trying to take a screenshot of the contents of an a-scene using A-Frame. Any luck with different methods?

@liyasthomas

liyasthomas commented Jun 5, 2018

@Esmolan @MirkoAlo Hi guys, I recently found a working example that captures a video frame and downloads it: https://tutorialzine.com/2016/07/take-a-selfie-with-js It simply uses a <video> tag along with getUserMedia and a hidden <canvas> tag to write the captured frame, then downloads the image using toDataURL(). See the demo here: https://jsfiddle.net/dannymarkov/cuumwch5
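
A minimal sketch of the approach from that tutorial, assuming a #camera-stream video element and a #hidden-canvas canvas exist in the page (those ids are placeholders, not from the tutorial):

// Stream the camera into a <video>, then copy one frame to a hidden <canvas>.
navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
  var video = document.getElementById('camera-stream');
  video.srcObject = stream;
  video.play();
});

function snapFrame() {
  var video = document.getElementById('camera-stream');
  var canvas = document.getElementById('hidden-canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  return canvas.toDataURL('image/png'); // usable as an <img> src or download href
}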

@Jamolinesca

Jamolinesca commented Jun 5, 2018

@liyasthomas Thanks Liyas! I am struggling to get the code to work; for some reason it tells me the video is null.

Also, I asked in another forum how to make this work, and they recommended rendering the camera feed as a texture; then the screenshot component will capture it automatically with document.querySelector('a-scene').components.screenshot.capture('perspective')

However, I've had no luck converting the camera feed to a texture in A-Frame so far.

@Jamolinesca

Jamolinesca commented Jun 6, 2018

@liyasthomas Hey! I've been playing around with merging the video tag with A-Frame. So far I've been able to grab A-Frame's video tag and take a snapshot, but for some reason it only captures the webcam now and not the 3D model.

test 1

<body>
  <a-scene embedded arjs="sourceType: webcam; trackingMethod: best; debugUIEnabled: false;" antialias="true">
    <canvas></canvas>
    <a-assets>
      <a-asset-item id="gltf1" src="model_captain/scene.gltf"></a-asset-item>
      <a-asset-item id="gltf2" src="model_stormtrooper/scene.gltf"></a-asset-item>
    </a-assets>
    <a-marker type="pattern" url="https://raw.githubusercontent.com/Esmolan/Esmolan.github.io/master/juguetron-marker.patt" size="1">
      <a-gltf-model src="#gltf1"
        position="0 2.5 -1"
        rotation="0 0 0"
        scale="0.05 0.05 0.05"
        animation-mixer="loop:repeat">
      </a-gltf-model>
    </a-marker>
    <a-marker preset="hiro">
      <a-gltf-model src="#gltf2"
        position="0 0 0"
        rotation="0 0 0"
        scale="1 1 1"
        animation-mixer="loop:repeat">
      </a-gltf-model>
    </a-marker>
    <a-entity camera></a-entity>
  </a-scene>

  <div class="container">
    <div class="app">
      <a href="#" id="start-camera" class="visible">Touch here to start the app.</a>
      <img id="snap">
      <p id="error-message"></p>
      <a href="#" id="delete-photo" title="Delete Photo" class="disabled"><i class="material-icons">delete</i></a>
      <a href="#" id="take-photo" title="Take Photo"><i class="material-icons">camera_alt</i></a>
      <a href="#" id="download-photo" download="selfie.png" title="Save Photo" class="disabled"><i class="material-icons">file_download</i></a>
    </div>
  </div>
</body>

JS:

// References to all the elements we will need.
var image = document.querySelector('#snap'),
    //start_camera = document.querySelector('#start-camera'),
    controls = document.querySelector('.controls'),
    take_photo_btn = document.querySelector('#take-photo'),
    delete_photo_btn = document.querySelector('#delete-photo'),
    download_photo_btn = document.querySelector('#download-photo');
    //error_message = document.querySelector('#error-message');

take_photo_btn.addEventListener("click", function(e){

    e.preventDefault();
    var video = document.querySelector('video')
    var snap = takeSnapshot(video);

    // Show image. 
    image.setAttribute('src', snap);
    image.classList.add("visible");

    // Enable delete and save buttons
    delete_photo_btn.classList.remove("disabled");
    download_photo_btn.classList.remove("disabled");

    // Set the href attribute of the download button to the snap url.
    download_photo_btn.href = snap;

    // Pause video playback of stream.
    //video.pause();

});


delete_photo_btn.addEventListener("click", function(e){

    e.preventDefault();

    // Hide image.
    image.setAttribute('src', "");
    image.classList.remove("visible");

    // Disable delete and save buttons
    delete_photo_btn.classList.add("disabled");
    download_photo_btn.classList.add("disabled");

    // Resume playback of stream.
    //video.play();

});

    function takeSnapshot(video){
    // Here we're using a trick that involves a hidden canvas element.  

    var hidden_canvas = document.querySelector('canvas'),
        context = hidden_canvas.getContext('2d');

    var width = video.videoWidth,
        height = video.videoHeight;

    if (width && height) {

        // Setup a canvas with the same dimensions as the video.
        hidden_canvas.width = width;
        hidden_canvas.height = height;

        // Make a copy of the current frame in the video on the canvas.
        context.drawImage(video, 0, 0, width, height);

        // Turn the canvas image into a dataURL that can be used as a src for our photo.
        return hidden_canvas.toDataURL('image/png');
    }
}


Any ideas? I think we are almost there.
My guess is that it only takes a picture of the webcam. Maybe merging that with the image the screenshot component generates of the 3D model is the way to go.

@liyasthomas

liyasthomas commented Jun 6, 2018

@Esmolan Have you tried with a simple <a-text> or <a-box> or any other primitive rather than a 3D model? If not, try it. Technically, A-Frame and AR.js render AR content on top of the video element, aka the camera feed. Think of it as layers: the camera feed is the first layer, the AR content is the second layer (positioned on top of the camera feed so it looks like a single layer), and our screen is the third. When capturing a video frame we will certainly get the video feed, but the trick is to capture that feed along with the AR content, which sits between our eyes and the video feed. Lemme see how I could help.

And by the way, you are right, we are almost there!

"Once drawn to, the XR device will continue displaying the contents of the XRWebGLLayer framebuffer, potentially reprojected to match head motion, regardless of whether or not the page continues processing new frames. Potentially future spec iterations could enable additional types of layers, such as video layers, that could automatically be synchronized to the device's refresh rate."
-WebXR Device API Explained (https://github.com/immersive-web/webxr/blob/master/explainer.md)
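
To make the layers idea concrete, here is a rough sketch of compositing the two layers onto one output canvas. This is essentially what the working solutions further down in this thread end up doing; it assumes the screenshot canvas keeps a transparent background and that both layers share the same aspect ratio:

// Sketch: paint the camera feed (bottom layer) and the A-Frame screenshot canvas
// (top layer) onto a single output canvas, then export it as a data URL.
function compositeLayers() {
  var video = document.querySelector('video'); // camera feed injected by AR.js
  var sceneCanvas = document.querySelector('a-scene')
    .components.screenshot.getCanvas('perspective');
  var out = document.createElement('canvas');
  out.width = video.videoWidth;
  out.height = video.videoHeight;
  var ctx = out.getContext('2d');
  ctx.drawImage(video, 0, 0, out.width, out.height);       // layer 1: camera feed
  ctx.drawImage(sceneCanvas, 0, 0, out.width, out.height); // layer 2: AR content
  return out.toDataURL('image/png');
}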

@Jamolinesca

Jamolinesca commented Jun 7, 2018

@liyasthomas Thanks! Sounds good. Yeah, I just tried with a simple <a-box> and I am still getting a still of just my webcam stream, not in mixed-reality form. If the AR content really is rendered on top as you say, I'm curious why grabbing A-Frame's video element doesn't capture all the layers.

Let me know if you find anything! I will keep playing with the code and debugging.

Thanks 👍

@tien

tien commented Jun 8, 2018

I'm having the same problem as well: it's either just a blank canvas with the 3D objects, or just the video stream without the 3D elements.

@liyasthomas

@crazycat9x Hi. If you don't mind, share your current code snippets and screenshots of the output so others can debug it more easily.
@Esmolan I'm also playing around with it, but still no expected output. I'm still getting either a black canvas or the video feed without the 3D content. I'll look into it and let you know if I have any updates.

@tien

tien commented Jun 8, 2018

Guys, I got it to work!!!!!

// first get the a-scene layer canvas
let aScene = document.querySelector("a-scene").components.screenshot.getCanvas("perspective");
// then grab a frame from the video layer
let frame = captureVideoFrame("video", "png");
// resize the a-scene canvas to the video frame size (returns a base64 data URL)
aScene = resizeCanvas(aScene, frame.width, frame.height);
// then merge the two base64 images using https://github.com/lukechilds/merge-images
mergeImages([frame.dataUri, aScene]).then(b64 => {
		...
});

@Jamolinesca

Jamolinesca commented Jun 8, 2018

@crazycat9x Cool!! Would you mind sharing your code and a screenshot?
Also, a question for you: have you implemented a way to use image-based markers, or markerless tracking with AR.js? In other words, not using those white-and-black border marker generators. Thanks!

@liyasthomas Take a look :D

@tien

tien commented Jun 8, 2018

function resizeCanvas(origCanvas, width, height) {
	let resizedCanvas = document.createElement("canvas");
	let resizedContext = resizedCanvas.getContext("2d");

	resizedCanvas.height = width;
	resizedCanvas.width = height;

	resizedContext.drawImage(origCanvas, 0, 0, width, height);
	return resizedCanvas.toDataURL();
}

document.getElementById("snap-button").addEventListener("click", function() {
	let aScene = document
		.querySelector("a-scene")
		.components.screenshot.getCanvas("perspective");
	let frame = captureVideoFrame("video", "png");
	aScene = resizeCanvas(aScene, frame.width, frame.height);
	frame = frame.dataUri;
	mergeImages([frame, aScene]).then(b64 => {
		let link = document.getElementById("download-link", "png");
		link.setAttribute("download", "AR.png");
		link.setAttribute("href", b64);
		link.click();
	});
});

The mergeImages function is from https://github.com/lukechilds/merge-images
The captureVideoFrame function is from https://github.com/ilkkao/capture-video-frame, but I modified it for this use case; here is my modification:

function captureVideoFrame(video, format, width, height) {
        if (typeof video === 'string') {
            video = document.querySelector(video);
        }

        format = format || 'jpeg';

        if (!video || (format !== 'png' && format !== 'jpeg')) {
            return false;
        }

        var canvas = document.createElement("CANVAS");

        canvas.width = width || video.videoWidth;
        canvas.height = height || video.videoHeight;
        canvas.getContext('2d').drawImage(video, 0, 0);
        var dataUri = canvas.toDataURL('image/' + format);
        var data = dataUri.split(',')[1];
        var mimeType = dataUri.split(';')[0].slice(5)

        var bytes = window.atob(data);
        var buf = new ArrayBuffer(bytes.length);
        var arr = new Uint8Array(buf);

        for (var i = 0; i < bytes.length; i++) {
            arr[i] = bytes.charCodeAt(i);
        }

        var blob = new Blob([ arr ], { type: mimeType });
        return { blob: blob, dataUri: dataUri, format: format, width: canvas.width, height: canvas.height };
    };

@tien

tien commented Jun 8, 2018

For the image-based marker, I think it would be impossible, as AR.js uses a ~20x20 ASCII image as its means of detection; it would simply be too pixelated to use anything besides a simple marker with a fat border.

@tien

tien commented Jun 8, 2018

There is a bug in my code

resizedCanvas.height = width;
resizedCanvas.width = height;

It is supposed to be

resizedCanvas.height = height;
resizedCanvas.width = width;

@Jamolinesca

Jamolinesca commented Jun 9, 2018

@crazycat9x Thank you so much, and for the info! I fixed your bug as well. One thing that happens with mine is that when merged, the perspective canvas from the screenshot component shows the 3D element kind of squeezed. I'm doing this on a mobile device. Any ideas why?

@tien

tien commented Jun 9, 2018

Yeah, I got that as well, though only on mobile; my laptop seems to work fine. The a-scene actually has a width larger than the mobile display, so when you merge, it gets squeezed. I tried to force the a-scene width to 100% of the display, but then every element inside the a-scene got squished instead.
I would be happy to hear if you can come up with a fix for this.
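
One possible direction for the squeeze, as an untested sketch: instead of stretching the whole a-scene canvas to the video size, centre-crop it to the video's aspect ratio first and then scale (the helper name and exact crop math are assumptions, not code from this thread):

// Sketch: crop the (wider) a-scene canvas to the video frame's aspect ratio,
// centred horizontally, then scale it to the target size.
function cropToAspect(origCanvas, targetWidth, targetHeight) {
  var targetRatio = targetWidth / targetHeight;
  var srcHeight = origCanvas.height;
  var srcWidth = Math.min(origCanvas.width, srcHeight * targetRatio);
  var srcX = (origCanvas.width - srcWidth) / 2;

  var out = document.createElement('canvas');
  out.width = targetWidth;
  out.height = targetHeight;
  out.getContext('2d').drawImage(
    origCanvas,
    srcX, 0, srcWidth, srcHeight,   // source crop
    0, 0, targetWidth, targetHeight // destination
  );
  return out.toDataURL();
}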

@Jamolinesca

@crazycat9x Likewise, the laptop seems to work fine. I will let you know; maybe we can try resizing the a-scene to the video size.

@liyasthomas

liyasthomas commented Jun 16, 2018

How about we try to implement screen recording and download the video? That would be super cool!
This might help: https://webrtc.github.io/samples/
This too: https://stackoverflow.com/questions/18509385/html-5-video-recording-and-storing-a-stream
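
A rough sketch of what that could look like with canvas.captureStream() and MediaRecorder (browser support varies, and which canvas to record, e.g. a composited output canvas that is redrawn every frame, is an assumption):

// Sketch: record a canvas to a WebM file for a fixed number of seconds.
function recordCanvas(outputCanvas, seconds) {
  var stream = outputCanvas.captureStream(30); // 30 fps
  var recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  var chunks = [];
  recorder.ondataavailable = function (e) { chunks.push(e.data); };
  recorder.onstop = function () {
    var blob = new Blob(chunks, { type: 'video/webm' });
    var link = document.createElement('a');
    link.href = URL.createObjectURL(blob);
    link.download = 'ar-recording.webm';
    link.click();
  };
  recorder.start();
  setTimeout(function () { recorder.stop(); }, seconds * 1000);
}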

@Jamolinesca

@liyasthomas Hey! That sounds cool. But I have another question: is there a way to have image-based markers in web VR? Perhaps not using A-Frame, but three.js, etc.?

@liyasthomas

I'm afraid it's still not possible. Anyway, it's a core feature; I hope @jeromeetienne and the team will consider it in the near future.

@Jamolinesca

@crazycat9x Hi Crazycat, were you able to solve this?

@tien

tien commented Jul 4, 2018

@Esmolan You mean the image being squeezed? Then no, I haven't solved it yet.

@Jamolinesca

@crazycat9x That's right! I am checking that with other programmers because I can't seem to work it out. Will keep you posted.

@Jamolinesca

Jamolinesca commented Jul 11, 2018

@crazycat9x Hey! When you capture the video frame, does it take the width and height of where it's being captured? We should resize the a-scene to the width and height of the video capture first, and then merge them.

@Yufan-Lin

@crazycat9x @Esmolan Hi both! I modified the resizeCanvas function, and now it looks normal! The a-scene's frame is 4096 x 2048 by default, so you need to deal with this by calibrating its frame size.
Have fun! :)

function resizeCanvas(origCanvas, width, height) {
        let resizedCanvas = document.createElement("canvas");
        let resizedContext = resizedCanvas.getContext("2d");

        resizedCanvas.height = height;
        resizedCanvas.width = width;

        if (width > height) {
            // Landscape
            resizedContext.drawImage(origCanvas, 0, 0, width, height);
        } else {
            // Portrait
            var scale = height / width;
            var scaledHeight = origCanvas.width * scale;
            var scaledWidth = origCanvas.height * scale;
            var marginLeft = ( origCanvas.width - scaledWidth) / 2;
            resizedContext.drawImage(origCanvas, marginLeft, 0, scaledWidth, scaledHeight);
        }

        return resizedCanvas.toDataURL();
    }
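
Related to that 4096 x 2048 default: the screenshot component exposes width and height options, so another option is to shrink the screenshot canvas itself before grabbing it. An untested sketch; verify the property names against the screenshot component schema of your A-Frame version:

// Sketch: size the screenshot canvas to the video feed before calling getCanvas,
// so no resizing/cropping is needed afterwards.
var sceneEl = document.querySelector('a-scene');
var video = document.querySelector('video');
sceneEl.setAttribute('screenshot', { width: video.videoWidth, height: video.videoHeight });
var sceneCanvas = sceneEl.components.screenshot.getCanvas('perspective');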

@henvy

henvy commented Jul 13, 2018

Hahaha

@kthornbloom

@Yufan-Lin - Using your code, the landscape works great, but portrait only shows the video capture, not the 3D model. Any thoughts on how to debug this?

@nicolocarpignoli
Collaborator

The 'old' resizeCanvas worked better for me. Anyway, thanks for this, guys!

@axelraymundo

axelraymundo commented Jun 17, 2019

@kthornbloom

@Yufan-Lin - Using your code, the landscape works great, but portrait only shows the video capture, not the 3D model. Any thoughts on how to debug this?

I just swapped width and height for portrait mode like below

if (width > height) {
    // Landscape
    resizedContext.drawImage(origCanvas, 0, 0, width, height);
} else {
    // Portrait
    resizedContext.drawImage(origCanvas, 0, 0, height, width);
}

@artemistint

Hi, can someone please post a complete example of a working screenshot-taking + downloading mechanism on Glitch?
I tried, but had no success :/

@taime

taime commented Dec 24, 2019

Hi, can someone please post a complete example of a working screenshot-taking + downloading mechanism on Glitch?
I tried, but had no success :/

Actually, this is the complete code: #358 (comment)

Just

  1. link this script to any basic AR.js + A-Frame example
  2. and add this to the HTML page:
<button id="snap-img">SNAPSHOT</button>
<a href="#" id="download-link">DOWNLOAD</a>

That's all!

@tchesa

tchesa commented Aug 29, 2020

I'd like to make a contribution to this thread. I've improved the captureVideoFrame method to automatically resize and crop the video image to match how it is presented on screen to the user.

function captureVideoFrame(video, format) {
  if (typeof video === 'string') {
    video = document.querySelector(video);
  }

  format = format || 'jpeg';

  if (!video || (format !== 'png' && format !== 'jpeg')) {
    return false;
  }

  const canvas = document.createElement("CANVAS");

  canvas.width = screen.width;
  canvas.height = screen.height;
  const imageWidth = video.videoHeight * (screen.width / screen.height);
  canvas.getContext('2d').drawImage(
    video,
    (video.videoWidth - imageWidth) / 2,
    0,
    imageWidth,
    video.videoHeight,
    0,
    0,
    screen.width,
    screen.height,
  );
  var dataUri = canvas.toDataURL('image/' + format);
  var data = dataUri.split(',')[1];
  var mimeType = dataUri.split(';')[0].slice(5)

  var bytes = window.atob(data);
  var buf = new ArrayBuffer(bytes.length);
  var arr = new Uint8Array(buf);

  for (var i = 0; i < bytes.length; i++) {
    arr[i] = bytes.charCodeAt(i);
  }

  var blob = new Blob([ arr ], { type: mimeType });
  canvas.remove();
  return { blob: blob, dataUri: dataUri, format: format, width: screen.width, height: screen.height };
};

@JTorrentQuasar

JTorrentQuasar commented Oct 25, 2020

I think I've fixed the code. Maybe it can be useful:

$("#screenshot_btn").click(function () {
	document.querySelector("video").pause();
	let aScene = document
		.querySelector("a-scene")
		.components.screenshot.getCanvas("perspective");
	let frame = captureVideoFrame("video", "png");
	aScene = resizeCanvas(aScene, frame.width, frame.height);
	frame = frame.dataUri;
	mergeImages([frame, aScene]).then(b64 => {
		let link = document.getElementById("download-link");
		link.setAttribute("download", "AR.png");
		link.setAttribute("href", b64);
		link.click();
	});
	document.querySelector("video").play();
});

function resizeCanvas(origCanvas, width, height) {
	let resizedCanvas = document.createElement("canvas");
	let resizedContext = resizedCanvas.getContext("2d");

	if (screen.width < screen.height) {
		// Portrait: scale and offset the wider a-scene canvas
		var w = height * (height / width);
		var h = width * (height / width);
		var offsetX = -(height - width);
	} else {
		// Landscape: draw at the requested size
		var w = width;
		var h = height;
		var offsetX = 0;
	}
	resizedCanvas.height = height;
	resizedCanvas.width = width;

	resizedContext.drawImage(origCanvas, offsetX, 0, w, h);
	return resizedCanvas.toDataURL();
}

function captureVideoFrame(video, format, width, height) {
	if (typeof video === 'string') {
		video = document.querySelector(video);
	}

	format = format || 'jpeg';

	if (!video || (format !== 'png' && format !== 'jpeg')) {
		return false;
	}

	var canvas = document.createElement("canvas");

	canvas.width = width || video.videoWidth;
	canvas.height = height || video.videoHeight;
	canvas.getContext('2d').drawImage(video, 0, 0);
	var dataUri = canvas.toDataURL('image/' + format);
	var data = dataUri.split(',')[1];
	var mimeType = dataUri.split(';')[0].slice(5);

	var bytes = window.atob(data);
	var buf = new ArrayBuffer(bytes.length);
	var arr = new Uint8Array(buf);

	for (var i = 0; i < bytes.length; i++) {
		arr[i] = bytes.charCodeAt(i);
	}

	var blob = new Blob([arr], { type: mimeType });
	return { blob: blob, dataUri: dataUri, format: format, width: canvas.width, height: canvas.height };
}

@Alsania14

Hello, can someone help me? I have seen the code above and there is a querySelector('video') call; where do we get the video tag from? I don't use video tags at all when using AR.js.

@0xdhu

0xdhu commented Mar 22, 2021

@Alsania14
Yes, I have the same issue as you. I cannot get the video element.
When I try to get the video tag, it returns null. Did you find any solution?
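
For what it's worth, AR.js injects its own <video> element for the webcam feed once the camera starts, so a null result usually means the query ran before the feed was ready. A small untested sketch that waits for it (exact timing and element details may vary between AR.js versions):

// Sketch: poll for the <video> element AR.js injects for the webcam feed.
// Resolves once the element exists and has dimensions; rejects after a timeout.
function waitForArVideo(timeoutMs) {
  return new Promise(function (resolve, reject) {
    var start = Date.now();
    (function check() {
      var video = document.querySelector('video');
      if (video && video.videoWidth > 0) {
        resolve(video);
      } else if (Date.now() - start > timeoutMs) {
        reject(new Error('AR.js video element not found'));
      } else {
        setTimeout(check, 200);
      }
    })();
  });
}

// usage: waitForArVideo(10000).then(function (video) { /* capture frames here */ });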

@marcusx2

Guys, I'm having this issue. It takes the screenshot, but half of the screen is white. How can I fix this?

@marcusx2

marcusx2 commented Oct 28, 2021

Hi guys, I solved the problem. Here's a complete sample code:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
	<script src="https://cdn.jsdelivr.net/gh/aframevr/aframe@1c2407b26c61958baa93967b5412487cd94b290b/dist/aframe-master.min.js"></script>
	<script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar-nft.js"></script>
	<script src="https://cdnjs.cloudflare.com/ajax/libs/html2canvas/1.3.2/html2canvas.min.js"></script>
	<style>
	  .arjs-loader {
		height: 100%;
		width: 100%;
		position: absolute;
		top: 0;
		left: 0;
		background-color: rgba(0, 0, 0, 0.8);
		z-index: 9999;
		display: flex;
		justify-content: center;
		align-items: center;
	  }
	  .arjs-loader div {
		text-align: center;
		font-size: 1.25em;
		color: white;
	  }
	</style>
	
</head>

<body style="margin : 0px; overflow: hidden;">
  <!-- minimal loader shown until image descriptors are loaded -->
  <div class="arjs-loader">
    <div>Loading, please wait...</div>
  </div>
  <a-scene
    vr-mode-ui="enabled: false;"
    renderer="logarithmicDepthBuffer: true;"
    embedded
    arjs="trackingMethod: best; sourceType: webcam;debugUIEnabled: false;"
	device-orientation-permission-ui="enabled: false"
  >
    <!-- we use cors proxy to avoid cross-origin problems -->
    <a-nft
      type="nft"
      url="https://arjs-cors-proxy.herokuapp.com/https://raw.githack.com/AR-js-org/AR.js/master/aframe/examples/image-tracking/nft/trex/trex-image/trex"
      smooth="true"
      smoothCount="10"
      smoothTolerance=".01"
      smoothThreshold="5"
    >
      <a-entity
        gltf-model="https://arjs-cors-proxy.herokuapp.com/https://raw.githack.com/AR-js-org/AR.js/master/aframe/examples/image-tracking/nft/trex/scene.gltf"
        scale="5 5 5"
        position="50 150 0"
      >
      </a-entity>
    </a-nft>
    <a-entity camera></a-entity>
  </a-scene>
  <canvas id="canvas" hidden></canvas>
  
  <script>
		var sceneEl = document.querySelector('a-scene');
		var counter = 0;

		let shareData;
		document.addEventListener('pointerup', (event) => {
			if (counter%2 !== 0) {
				navigator.share(shareData).then(function()    {}).catch(function(err) {alert(err);});
				++counter;
				return;
			}
			function captureVideos() {
			  var canvas = document.getElementById("canvas"); // declare a canvas element in your html
			  var ctx = canvas.getContext("2d");
			  var videos = document.querySelectorAll("video");
			  var i, len, w, h;

			  for (i = 0, len = videos.length; i < len; i++) {
				const v = videos[i];
				
				try {
					w = v.videoWidth;
					h = v.videoHeight;
					canvas.width = w;
					canvas.height = h;
					ctx.fillRect(0, 0, w, h);
					ctx.drawImage(v, 0, 0, w, h);
					const a = canvas.toDataURL();
					v.style.backgroundImage = "url(" + a + ")";
					ctx.clearRect(0, 0, w, h); // clean the canvas
					canvas.width = canvas.height = 0;
				} catch(e) {
					console.log(e);
				}
			  }
			}
			
			captureVideos();
			sceneEl.renderer.render(sceneEl.object3D, sceneEl.camera);
			html2canvas(document.body, {width: document.documentElement.offsetWidth, height: document.documentElement.offsetHeight}).then(function(canvas) {
				function dataURLtoFile(dataurl) {
					var bstr = atob(dataurl.split(',')[1]), n = bstr.length, u8arr = new Uint8Array(n);
					while(n--){
						u8arr[n] = bstr.charCodeAt(n);
					}
					return u8arr;
				}
						
				const dataURL = canvas.toDataURL();
				const byteArray = dataURLtoFile(dataURL);
				shareData = {
					files: [new File([byteArray], "bla.png", {type: "image/png"})]
				};
				alert("done");
			}).catch(e => {
				alert(e);
				console.error(e);
				
			});
			++counter;
		});
	</script>
</body>
</html>

Point at the T-rex and tap once to take the screenshot. Wait for the "done" alert. Tap again to open the share window. You don't need to use merge-images.

If you get the maximum call stack size error, see this issue.
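
One caveat on the share step above: sharing files through navigator.share isn't available in every browser, so it's worth a feature check first. A small sketch (shareData and the data URL are the ones built in the snippet above; the download fallback is an assumption):

// Sketch: guard the share call and fall back to a plain download when file
// sharing isn't supported.
function shareOrDownload(shareData, dataURL) {
  if (navigator.canShare && navigator.canShare({ files: shareData.files })) {
    navigator.share(shareData).catch(function (err) { console.error(err); });
  } else {
    var link = document.createElement('a');
    link.href = dataURL;
    link.download = 'ar-screenshot.png';
    link.click();
  }
}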

@jrDev1

jrDev1 commented Oct 28, 2021

Hi guys, I solved the problem. Here's a complete sample code:

[full sample code and instructions quoted from marcusx2's comment above]

Wait, html2canvas works? I've spent hours trying to get that working! Good job if you did!

Thanks,
crafTDev

@marcusx2

marcusx2 commented Oct 28, 2021

Yes, html2canvas works. I could not make it work with html-to-image, though. Enjoy. I also spent hours trying to make it work. You can see the issue I created here. Also, if you can, please comment on this issue; I need to know how to monkey-patch the variable at runtime. The author of html2canvas doesn't reply on GitHub or by email.

@saken14

saken14 commented Feb 28, 2022

I think I've fixed the code. Maybe it can be useful:

[full snippet quoted from JTorrentQuasar's comment above]

Thank you a lot!!! You solved my problem. All the code above was only working 'partially'. Thanks! Now my app works correctly on both mobile phones and PC.

@mr339

mr339 commented Jul 19, 2022

Hi guys, I solved the problem. Here's a complete sample code:

[full sample code and instructions quoted from marcusx2's comment above]

Thanks a lot mate!!! It worked like a charm. The only thing I had to change was the HTML part, where I had my own A-Frame code (i.e. using MindAR), and then adding allowTaint and foreignObjectRendering to the html2canvas call to fix the tainted-canvas issue:

html2canvas(document.body, {
  width: document.documentElement.offsetWidth,
  height: document.documentElement.offsetHeight,
  allowTaint: true,
  foreignObjectRendering: true,
}).then(function (canvas) {

I had one question though: it works fine on an Android device but has issues on iOS (it results in a full blank page). Is there any fix for that? @marcusx2
