Suggestion: Exporting video via MediaStream #156
This looks interesting, definitely worth a shot. We're concentrating on getting the currently open PRs merged at the moment, so I'm unlikely to be able to dedicate time to this for a while yet. However, if anyone is up for trying it out in the interim, I'm very interested in the results! |
I'm doing the same thing, and it works well. The next step is recording audio. One problem: exporting the full video requires playing it through to the end. Is there a way to speed that up? <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<meta http-equiv="X-UA-Compatible" content="ie=edge" />
<title>Document</title>
<style>
* {
padding: 0;
margin: 0;
}
html {
width: 100%;
height: 100%;
}
body {
width: 100%;
height: 100%;
}
canvas {
display: block;
width: 1280px; /* match the canvas attribute width so the output isn't stretched */
height: 720px;
margin: 0 auto;
}
.btns {
width: 100%;
height: 50px;
display: flex;
justify-content: space-around;
}
.btn {
background-color: #000;
color: #fff;
cursor: pointer;
width: 100px;
height: 100%;
display: flex;
justify-content: center;
align-items: center;
}
</style>
</head>
<body>
<canvas width="1280" height="720"></canvas>
<div class="btns">
<div class="btn play">play</div>
<div class="btn stop">stop</div>
<div class="btn download">download</div>
</div>
<script src="http://bbc.github.io/VideoContext/dist/videocontext.js"></script>
<script>
class Record {
constructor(canvas, { videoType = "webm" } = {}) {
this.canvas = canvas
this.videoType = videoType
this.init()
}
init() {
const stream = this.canvas.captureStream()
this.mediaRecorder = new MediaRecorder(stream, {
mimeType: "video/webm"
})
this.recordedBlobs = []
this.mediaRecorder.ondataavailable = this.handleDataAvailable
}
start() {
this.mediaRecorder.start()
}
stop() {
this.mediaRecorder.stop()
}
handleDataAvailable = event => {
if (event.data && event.data.size > 0) {
this.recordedBlobs.push(event.data)
}
}
download(name) {
const blob = new Blob(this.recordedBlobs, { type: "video/webm" })
const url = window.URL.createObjectURL(blob)
const a = document.createElement("a")
a.href = url
a.download = `${name}.${this.videoType}`
a.click()
window.URL.revokeObjectURL(url)
}
}
const bindPlay = videoContext => {
const playBtn = document.querySelector(".play")
const stopBtn = document.querySelector(".stop")
const downloadBtn = document.querySelector(".download")
const record = new Record(videoContext._canvas)
console.log(record)
playBtn.addEventListener("click", () => {
console.log("start")
videoContext.play()
record.start()
})
stopBtn.addEventListener("click", () => {
console.log("stop")
videoContext.pause()
record.stop()
})
downloadBtn.addEventListener("click", () => {
console.log("download")
record.download("test")
})
}
const createEffectNodes = (videoContext, n) => {
const {
MONOCHROME,
HORIZONTAL_BLUR,
COLORTHRESHOLD,
AAF_VIDEO_FLIP
} = VideoContext.DEFINITIONS
const effects = [
MONOCHROME,
HORIZONTAL_BLUR,
COLORTHRESHOLD,
AAF_VIDEO_FLIP
]
return [...new Array(n)].map(() =>
videoContext.effect(
effects[Math.floor(Math.random() * effects.length)] // floor, not round: round can index one past the end
)
)
}
const start = () => {
const canvas = document.querySelector("canvas")
const videoContext = new VideoContext(canvas)
const videoNode = videoContext.video(
"http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4"
)
videoNode.startAt(0)
const effectNodes = createEffectNodes(videoContext, 2)
effectNodes
.concat([videoContext.destination])
.reduce((preNode, currentNode) => {
preNode.connect(currentNode)
return currentNode
}, videoNode)
bindPlay(videoContext)
}
window.onload = start
</script>
</body>
</html> |
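Regarding recording audio: one possible approach is to route the soundtrack through the Web Audio API and add the resulting audio track to the captured canvas stream before handing it to MediaRecorder. This is only a sketch; the function name and arguments are illustrative, and it assumes you can get a reference to the underlying `<video>`/`<audio>` element supplying the sound (VideoContext does not expose this as a documented API):

```javascript
// Sketch: combine the canvas video track with an audio track so that
// MediaRecorder captures both. `canvas` and `mediaElement` are assumed
// to be a <canvas> and the media element providing the soundtrack.
function createRecorderWithAudio(canvas, mediaElement) {
  const stream = canvas.captureStream()

  // Route the media element's audio into a MediaStream audio track.
  const audioCtx = new AudioContext()
  const source = audioCtx.createMediaElementSource(mediaElement)
  const dest = audioCtx.createMediaStreamDestination()
  source.connect(dest)
  source.connect(audioCtx.destination) // keep playback audible

  // Add the audio track to the canvas stream before recording starts.
  dest.stream.getAudioTracks().forEach(track => stream.addTrack(track))

  return new MediaRecorder(stream, { mimeType: "video/webm" })
}
```

Usage would mirror the `Record` class above: call `createRecorderWithAudio(...)`, attach an `ondataavailable` handler, then `start()`/`stop()` it alongside `videoContext.play()`/`pause()`.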
I noticed sdobz and some others used a similar method in #124 as well. I would like to test out requestFrame, as I feel this could improve the timing. I may experiment with it later |
@MysteryPancake did you get a chance to test out requestFrame? |
Not yet - I'm not sure about determining the framerate. It would be good if seekToNextFrame was widely supported |
Yeah, VideoContext has no knowledge of the frame rate; it just updates on each animationFrame (or tick in the web worker). Depending on your inputs, you could have elements running at different frame rates anyway. So maybe it's a decision the user has to make? |
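If the frame rate is left to the user, the natural place to surface that choice is the `frameRate` argument of `captureStream()`. A minimal sketch of the distinction (the helper name is illustrative):

```javascript
// Sketch: the capture rate is a caller decision, since VideoContext has
// no single frame rate of its own.
//   captureAt(canvas, 30) -> browser emits up to 30 frames per second
//   captureAt(canvas, 0)  -> frames are emitted only when the track's
//                            requestFrame() is called explicitly
function captureAt(canvas, fps = 30) {
  return canvas.captureStream(fps)
}
```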
Any update on this one? I've tried it myself, but adding the audio stream seems quite tricky |
What is the requirement? |
I believe the ultimate goal here would be to have an API like |
I'm not sure how well this would work, but the MediaStream interface allows the capturing of canvas content:
https://developers.google.com/web/updates/2016/10/capture-stream
Small example:
https://webrtc.github.io/samples/src/content/capture/canvas-record/
The audio and video tracks could be recorded using this method:
https://stackoverflow.com/a/39302994
For timing the render so every frame is captured, requestFrame could be used:
https://developer.mozilla.org/en-US/docs/Web/API/CanvasCaptureMediaStreamTrack/requestFrame
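A sketch of that approach: capture with a frame rate of 0 so no frames are emitted automatically, then call `requestFrame()` on the track once per VideoContext update. This assumes VideoContext's `registerCallback` accepts an `"update"` event; treat that event name, and the helper's name, as assumptions rather than a confirmed API:

```javascript
// Sketch: frame-accurate capture. captureStream(0) emits no frames on
// its own; we push exactly one captured frame per rendered frame.
function recordFrameAccurate(videoContext, canvas) {
  const stream = canvas.captureStream(0) // no automatic frame emission
  const [videoTrack] = stream.getVideoTracks()
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" })

  // Assumed event name: fire once per VideoContext render update.
  videoContext.registerCallback("update", () => videoTrack.requestFrame())

  recorder.start()
  return recorder
}
```

If this works, timing would no longer depend on MediaRecorder's own sampling, which is what makes it a candidate fix for the frame-dropping issues referenced below.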
This could potentially fix #76 and #124