PORTAL: HOW TO CREATE YOUR PORTAL WITH WEB TECHNOLOGIES – PART TWO


INDEX.HTML

Presentation page served by the server:

Projet PORTAL 2015

An interactive experience

A creation using WebRTC technology. Credits to @Binomed

./JS/LIB/ADAPTER.JS

Copy this file as-is from the codelab: it is the polyfill that smooths over the differences in the WebRTC API between Chrome and Firefox.
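To give an idea of what this polyfill does (a simplified sketch only, not the actual adapter.js), it exposes browser-neutral names such as RTCPeerConnection, getUserMedia and attachMediaStream on top of the prefixed implementations:

// Simplified sketch of the idea behind adapter.js (not the real file)
var RTCPeerConnection = null;
var getUserMedia = null;
var attachMediaStream = null;

if (navigator.mozGetUserMedia) {
  // Firefox
  RTCPeerConnection = mozRTCPeerConnection;
  getUserMedia = navigator.mozGetUserMedia.bind(navigator);
  attachMediaStream = function(element, stream) {
    element.mozSrcObject = stream;
    element.play();
  };
} else if (navigator.webkitGetUserMedia) {
  // Chrome
  RTCPeerConnection = webkitRTCPeerConnection;
  getUserMedia = navigator.webkitGetUserMedia.bind(navigator);
  attachMediaStream = function(element, stream) {
    element.src = window.URL.createObjectURL(stream);
  };
}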

./JS/CANVASFIRE.JS

Create this file empty for now, so that the import in the HTML file still works.
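The import in question is simply a script tag in index.html, for example (path assumed to match the file layout above):

<script src="js/canvasFire.js"></script>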

./JS/APP.JS

We will start from the step 7 file, ./js/main.js.

Copy the entire file; we will then strip out everything we do not need.

LOCALVIDEO

In our project we do not want to show our own webcam feed on screen, so we will remove every reference to this element:

var localVideo = document.querySelector('#localVideo');

and

attachMediaStream(localVideo, stream);

in the handleUserMedia(stream) function.
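For reference, and depending on the exact codelab version you started from, handleUserMedia should end up looking roughly like this once that line is removed:

function handleUserMedia(stream) {
  console.log('Adding local stream.');
  localStream = stream;
  sendMessage('got user media');
  if (isInitiator) {
    maybeStart();
  }
}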

DATACHANNEL

In the same way, everything related to the DataChannel is of no use to us. At the top of the file, remove:

var sendChannel;
var sendButton = document.getElementById("sendButton");
var sendTextarea = document.getElementById("dataChannelSend");
var receiveTextarea = document.getElementById("dataChannelReceive");

sendButton.onclick = sendData;

Next, the following block:

var pc_constraints = {
  'optional': [
    {'DtlsSrtpKeyAgreement': true},
    {'RtpDataChannels': true}
  ]};
is to be replaced by:

var pc_constraints = {
  'optional': [
    {'DtlsSrtpKeyAgreement': true}
  ]};
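As a reminder (the exact line may differ slightly in your version of the codelab), these constraints are the ones passed in when the peer connection is created:

// in createPeerConnection
pc = new RTCPeerConnection(pc_config, pc_constraints);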

You also need to delete everything that follows, located in and after the createPeerConnection function:

if (isInitiator) {
  try {
    // Reliable Data Channels not yet supported in Chrome
    sendChannel = pc.createDataChannel("sendDataChannel",
      {reliable: false});
    sendChannel.onmessage = handleMessage;
    trace('Created send data channel');
  } catch (e) {
    alert('Failed to create data channel. ' +
          'You need Chrome M25 or later with RtpDataChannel enabled');
    trace('createDataChannel() failed with exception: ' + e.message);
  }
  sendChannel.onopen = handleSendChannelStateChange;
  sendChannel.onclose = handleSendChannelStateChange;
} else {
  pc.ondatachannel = gotReceiveChannel;
}

function sendData() {
  var data = sendTextarea.value;
  sendChannel.send(data);
  trace('Sent data: ' + data);
}

function gotReceiveChannel(event) {
  trace('Receive Channel Callback');
  sendChannel = event.channel;
  sendChannel.onmessage = handleMessage;
  sendChannel.onopen = handleReceiveChannelStateChange;
  sendChannel.onclose = handleReceiveChannelStateChange;
}

function handleMessage(event) {
  trace('Received message: ' + event.data);
  receiveTextarea.value = event.data;
}

function handleSendChannelStateChange() {
  var readyState = sendChannel.readyState;
  trace('Send channel state is: ' + readyState);
  enableMessageInterface(readyState == "open");
}

function handleReceiveChannelStateChange() {
  var readyState = sendChannel.readyState;
  trace('Receive channel state is: ' + readyState);
  enableMessageInterface(readyState == "open");
}

function enableMessageInterface(shouldEnable) {
  if (shouldEnable) {
    dataChannelSend.disabled = false;
    dataChannelSend.focus();
    dataChannelSend.placeholder = "";
    sendButton.disabled = false;
  } else {
    dataChannelSend.disabled = true;
    sendButton.disabled = true;
  }
}

And finally, the line:

var constraints = {'optional': [], 'mandatory': {'MozDontOfferDataChannel': true}};

in the doCall() function must be replaced by:

var constraints = {'optional': [], 'mandatory': {}};

TESTING

We can now test the application and verify that the video really travels over the WebRTC API. To do this, simply start the server with the command:

node server.js

Our server runs on port 2013, so open the URL http://localhost:2013 in your browser. It is very important to accept the camera sharing request, otherwise nothing will work.

At this point you should see a black screen… rather than the beers you were hoping for! Since we do not display our own camera feed, you need to open a second tab on the same URL to check that everything works. There is also a second reason why nothing shows up: the remoteVideo video tag has a "display: none" style. This "display: none" must be removed for the duration of the test.

If all goes well, both tabs should display your webcam feed. For each subsequent test I advise you to close both tabs, because the Node server keeps track of the number of connected clients… and the limit is set to 2 clients maximum!
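That two-client limit comes from the signaling server provided by the codelab. As a rough, simplified sketch (socket.io 0.9-era API assumed; your server.js from part one is the reference), it looks something like this:

// Simplified sketch of a codelab-style signaling server:
// serve the static files on port 2013 and refuse a third client.
var nodeStatic = require('node-static');
var http = require('http');

var fileServer = new nodeStatic.Server('./');
var app = http.createServer(function (req, res) {
  fileServer.serve(req, res);
}).listen(2013);

var io = require('socket.io').listen(app);
io.sockets.on('connection', function (socket) {
  socket.on('message', function (message) {
    // relay signaling messages to the other peer
    socket.broadcast.emit('message', message);
  });
  socket.on('create or join', function (room) {
    var numClients = io.sockets.clients(room).length;
    if (numClients === 0) {
      socket.join(room);
      socket.emit('created', room);
    } else if (numClients === 1) {
      io.sockets.in(room).emit('join', room);
      socket.join(room);
      socket.emit('joined', room);
    } else {
      // already 2 clients connected: reject
      socket.emit('full', room);
    }
  });
});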

STEP 5: ADDING THE FLAME WALL

Now that the WebRTC part is taken care of, we are going to add some graphics on top of it. At the moment our WebRTC feed lands directly in a video tag, but it turns out that canvases and videos work very well together: we are going to take snapshots of our video tag, inject them into a canvas, and from there start playing seriously with graphic effects.

We will therefore have:

  • one video tag in "display: none"
  • one canvas rendering the video, with a mask applied
  • one canvas displaying the flame wall.

DISPLAY:NONE

To do this, we just have to make sure that our HTML contains the right elements.
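Something along these lines should do (a minimal sketch based only on the element IDs used later in app.js; the wrapper id is hypothetical, and your index.html from part one remains the reference):

<div id="portal">
  <!-- remote WebRTC stream, kept hidden: we only read frames from it -->
  <video id="remoteVideo" autoplay style="display: none"></video>
  <!-- canvas receiving the snapshots of the remote video -->
  <canvas id="canvasRemoteVideo"></canvas>
  <!-- canvas on which the flame wall is drawn -->
  <canvas id="canvasFireLocalVideo"></canvas>
</div>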

CANVAS WITH VIDEO

We are now going to add to our application (app.js) the display of the canvas that will receive the video:

var canvasRemoteElement = document.querySelector('#canvasRemoteVideo');
var ctxRemote = canvasRemoteElement.getContext('2d');

function snapshot() {
  var canvasToUse = canvasRemoteElement;
  var contextToUse = ctxRemote;
  var videoToUse = remoteVideo;

  canvasRemoteElement.width = remoteVideo.videoWidth;
  canvasRemoteElement.height = remoteVideo.videoHeight;

  if (remoteStream) {
    ctxRemote.drawImage(remoteVideo, 0, 0);
  }

  window.requestAnimationFrame(snapshot);
}

snapshot();

We simply rely on the canvas API's ability to draw the current frame of a video element into a canvas.

FLAME WALL

You need to copy the contents of the canvas.js file from the html5-canvas-demo project to our ./js/canvasFire.js file.

We will now display the wall in a canvas. So we need to edit our app.js file.

var canvasFireElement = document.querySelector('#canvasFireLocalVideo');

var ctxFire = canvasFireElement.getContext('2d');

We are also going to modify the snapshot method to include the flame-handling part.

var init = false;

function snapshot() {
  var canvasToUse = canvasRemoteElement;
  var contextToUse = ctxRemote;
  var videoToUse = remoteVideo;

  canvasRemoteElement.width = remoteVideo.videoWidth;
  canvasRemoteElement.height = remoteVideo.videoHeight;

  if (remoteStream) {
    ctxRemote.drawImage(remoteVideo, 0, 0);

    var idealWidth = Math.min(canvasToUse.parentElement.clientWidth, videoToUse.videoWidth + 100);
    var minVideoWidth = Math.min(canvasToUse.parentElement.clientWidth - 50, videoToUse.videoWidth);
    var ratio = videoToUse.videoWidth / videoToUse.videoHeight;
    var idealHeight = Math.min(idealWidth / ratio, videoToUse.videoHeight);
    var useVideoWidth = idealWidth === videoToUse.videoWidth + 100;

    canvasToUse.width = idealWidth; // landscapeMode ? idealHeight : idealWidth;
    canvasToUse.height = canvasToUse.width;
    canvasToUse.style.top = ((canvasToUse.parentElement.clientHeight - canvasToUse.height) / 2) + "px";
    canvasToUse.style.left = ((canvasToUse.parentElement.clientWidth - canvasToUse.width) / 2) + "px";

    canvasFireElement.width = idealWidth; // landscapeMode ? idealHeight : idealWidth;
    canvasFireElement.height = canvasFireElement.width;
    canvasFireElement.style.top = ((canvasToUse.parentElement.clientHeight - canvasFireElement.height) / 2) + "px";
    canvasFireElement.style.left = ((canvasToUse.parentElement.clientWidth - canvasFireElement.width) / 2) + "px";

    var refValue = idealWidth;

    if (localStream) {
      if (!init
          && canvasToUse.width == Math.round(refValue)
          && canvasToUse.height == Math.round(refValue)
          && canvasFireElement.width == Math.round(refValue)
          && canvasFireElement.height == Math.round(refValue)) {
        if (canvasFireElement.width != 100) {
          init = true;
          canvasDemo.canvas = document.getElementById('canvasFireLocalVideo');
          canvasDemo.init();
        }
      }
      if (init) {
        canvasDemo.refresh();
      }
    }
  }

  window.requestAnimationFrame(snapshot);
}

The added code initializes the flame canvas and lets our application drive the flame wall refreshes itself. We therefore have to modify the canvasFire.js file: add a refresh method and remove the call to requestAnimationFrame:

this.refresh = function() {
  update();
};

// main render loop
var update = function() {
  smooth();
  draw();
  frames++;
  //requestAnimFrame(function() { update(); });
};

STEP 6: ADD A CIRCLE OF LIGHTS

THE PRINCIPLE: CREATE A CIRCLE FROM A SQUARE

The principle is simple: the html5-canvas-demo project gives us a single straight wall of flames, whereas we want a circle. We will get there in several stages:

  • create a wall of flames on each of the four cardinal axes, so that there are flames all around our image;
  • set up an oval mask to restrict the area in which the flames are displayed;
  • rotate these flames to give them more movement and get closer to the rendering of the Portal game;
  • finally, allow a second color, because each portal has its own color (blue and orange).

As a reminder, the initial project works as follows: each time the browser lets us draw (window.requestAnimationFrame), we draw an image of flame particles using the drawImage method of the canvas context (see the sketch below).
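As an illustration only (a hypothetical helper, not the project's final code), here is the kind of drawing routine these stages lead to: the flame image drawn once per cardinal axis, rotated around the center, inside a circular clipping mask:

// Hypothetical illustration: draw the flame strip four times, rotated
// around the center of the canvas, and clip the result to a circle.
function drawFlameCircle(ctx, flameImage, size) {
  ctx.save();

  // circular mask restricting where the flames are visible
  ctx.beginPath();
  ctx.arc(size / 2, size / 2, size / 2, 0, 2 * Math.PI);
  ctx.clip();

  // one flame wall per cardinal axis (top, right, bottom, left)
  for (var i = 0; i < 4; i++) {
    ctx.save();
    ctx.translate(size / 2, size / 2);
    ctx.rotate(i * Math.PI / 2);
    // draw the strip along the top edge of the rotated canvas
    ctx.drawImage(flameImage, -size / 2, -size / 2, size, flameImage.height);
    ctx.restore();
  }

  ctx.restore();
}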