Vue and Three.js - Part One


Let's set up a Vue application with a three dimensional user experience powered by Three.js.

The completed code for this project is available here. You can also check out a demo. Our goal will be to convert this Three.js Example into a Vue application, and perhaps add a bit more functionality while we are at it.

To start, spin up a new Vue application with the Vue CLI tool. (If you haven't done that before, check out this great tutorial.) For the sake of this tutorial you can choose any of the settings you are comfortable with; just make sure you choose to have Vuex added to the project.

Next, we will add Three.js to the project with NPM. You can install Three.js itself directly via NPM; however, it does not yet play nicely with ES6 imports. To get around this, we can use the three-full mirror, which delivers Three.js in a format that is much easier to use with ES6-style imports. Let's install that now:

$ npm install three-full

We can now get our development server up and running by using:

$ npm run serve

The first thing we will do is remove the HelloWorld.vue file that came with your new application, along with all references to that file in App.vue. We can also remove the default styles from the bottom of the App.vue file and replace them with this:

<style>
html,
body {
  width: 100%;
  height: 100%;
  overflow: hidden;
}
body {
  margin: 0px;
}

#app {
  height: 100%;
}
</style>

Here we are removing the default margin from the html and body tags and making sure that our #app div fills the entire visible browser window.

We will use a ViewPort component to manage the canvas element generated by Three.js; let's create that now. Add a new file called ViewPort.vue in your /src/components directory. We can now import that into the main App.vue component. Update the script section in that file to look like this:

import ViewPort from "@/components/ViewPort.vue";

export default {
  components: {
    viewport: ViewPort
  }
};

And we can now use that component in our App.vue template:

<template>
  <div id="app">
    <viewport/>
  </div>
</template>

Let's now turn our attention to getting Three.js up and running. Three.js renders three-dimensional scenes onto canvas elements; before you can render a scene you need to create Camera, Control and Scene objects, which you then hand to a WebGLRenderer to have the canvas element generated for you. Three.js provides tools for creating all of these objects and populating the scene with three-dimensional objects.

We are going to let Vuex manage the various components of our Three.js scene; this will make it much easier for us to modify the scene contents once they have been created, as we will explore in part two. Open your store.js file, and update the top of the file to look like this:

import Vue from "vue";
import Vuex from "vuex";
import {
  Scene,
  TrackballControls,
  PerspectiveCamera,
  WebGLRenderer,
  Color,
  FogExp2,
  CylinderBufferGeometry,
  MeshPhongMaterial,
  Mesh,
  DirectionalLight,
  AmbientLight,
  LineBasicMaterial,
  Geometry,
  Vector3,
  Line
} from "three-full";

Vue.use(Vuex);

Here we are importing Vue, Vuex and a litany of Three.js objects that we will use to create our scene. We are going to use Vuex mutations to manage the creation of the scene's components, and a Vuex action to trigger those mutations. The general rule of thumb is that actions handle asynchronous updates, while mutations must always be synchronous. In a typical Vuex workflow, you might have an action that makes an HTTP request and then hands the response data off to a mutation, which stores it in the Vuex state.
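To make that division of labor concrete, here is a minimal, framework-free sketch of the convention. The store object, the SET_PYRAMID_COUNT mutation and the LOAD_PYRAMID_COUNT action are all hypothetical stand-ins for illustration, not part of Vuex or of our project:

```javascript
// Hypothetical stand-in for a Vuex store; not the real Vuex API.
const store = {
  state: { pyramidCount: 0 },
  // Mutations: synchronous, and the only place state is written.
  mutations: {
    SET_PYRAMID_COUNT(state, count) {
      state.pyramidCount = count;
    }
  },
  commit(type, payload) {
    this.mutations[type](this.state, payload);
  },
  // Actions: may be asynchronous; they commit mutations when data arrives.
  actions: {
    LOAD_PYRAMID_COUNT({ commit }) {
      // Pretend this is an HTTP request.
      return Promise.resolve(500).then(count => commit("SET_PYRAMID_COUNT", count));
    }
  },
  dispatch(type) {
    return this.actions[type]({ commit: this.commit.bind(this) });
  }
};

store.dispatch("LOAD_PYRAMID_COUNT").then(() => {
  console.log(store.state.pyramidCount); // 500
});
```

The real Vuex store we build below follows exactly this shape: actions orchestrate, mutations write.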

Let's start by adding some new items to our state object:

state: {
  width: 0,
  height: 0,
  camera: null,
  controls: null,
  scene: null,
  renderer: null,
  axisLines: [],
  pyramids: []
},

We will use width and height to keep track of the canvas size. camera, controls, scene and renderer will be used to store the tools generated by Three.js. axisLines and pyramids will be used to keep track of the visual elements in our scene; the scenery, if you will.

First up, let's create a mutation that sets the height and width of the canvas. By convention, Vuex mutation names are written in upper case:

mutations: {
  SET_VIEWPORT_SIZE(state, { width, height }) {
    state.width = width;
    state.height = height;
  },
}

Here we are receiving a width and a height and updating the state accordingly. Next, let's create our renderer:

mutations: {
  // ...
  INITIALIZE_RENDERER(state, el) {
    state.renderer = new WebGLRenderer({ antialias: true });
    state.renderer.setPixelRatio(window.devicePixelRatio);
    state.renderer.setSize(state.width, state.height);
    el.appendChild(state.renderer.domElement);
  },
}

Here we are instantiating a new WebGLRenderer (provided by Three.js) and setting the width and height of the canvas that we want it to create. Note that this mutation receives a reference to a DOM element (often referred to as el). The WebGLRenderer will create a canvas element for us, but it won't be visible unless we actually add it to the DOM tree; el.appendChild adds the canvas element as a child node of the el DOM element.

Now let's create our camera:

mutations: {
  // ...
  INITIALIZE_CAMERA(state) {
    state.camera = new PerspectiveCamera(
      // 1. Field of View (degrees)
      60,
      // 2. Aspect ratio
      state.width / state.height,
      // 3. Near clipping plane
      1,
      // 4. Far clipping plane
      1000
    );
    state.camera.position.z = 500;
  },
}

Here we are creating a new PerspectiveCamera object with four parameters: the field of view of the camera's "lens" in degrees, the aspect ratio of the camera's output, and the near and far "clipping plane" boundaries. Anything closer than 1 unit or further than 1000 units from the camera will not be rendered. Finally, we set the starting position of the camera 500 units away from the origin on the z-axis.
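To build some intuition for what those numbers mean, the visible height of the world at a given distance follows directly from the vertical field of view with a bit of trigonometry. The visibleHeightAt helper below is purely illustrative and is not part of the project or of Three.js:

```javascript
// For a perspective camera with vertical field of view `fovDeg`,
// the slice of the world that fits in view at `distance` units away
// spans 2 * tan(fov / 2) * distance world units top to bottom.
function visibleHeightAt(fovDeg, distance) {
  const fovRad = (fovDeg * Math.PI) / 180; // degrees -> radians
  return 2 * Math.tan(fovRad / 2) * distance;
}

// Our 60-degree camera sits 500 units from the origin, so objects
// near the origin have roughly this much vertical room in view:
console.log(Math.round(visibleHeightAt(60, 500))); // 577
```

This is also why widening the field of view makes a scene feel "zoomed out": at 90 degrees the same camera would see a full 1000 units.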

Now let's create our controls:

mutations: {
  // ...
  INITIALIZE_CONTROLS(state) {
    state.controls = new TrackballControls(
      state.camera,
      state.renderer.domElement
    );
    state.controls.rotateSpeed = 1.0;
    state.controls.zoomSpeed = 1.2;
    state.controls.panSpeed = 0.8;
    state.controls.noZoom = false;
    state.controls.noPan = false;
    state.controls.staticMoving = true;
    state.controls.dynamicDampingFactor = 0.3;
  },
}

Here we instantiate a new TrackballControls object and set up a default configuration for it. The exact nature of this configuration is a bit beyond the scope of this tutorial, but you should feel free to play around with these values and see what happens.

The most important thing to note here is that we are passing in our canvas element as the second argument to the TrackballControls constructor. This limits the controls to listening only for input events that occur on that DOM element. If you don't provide it, the controls default to listening for input events on the entire document, which effectively steals focus away from any other content on the page and translates all input into camera movements in the rendered scene. By limiting this to just the canvas element, we can still interact with other content on the page normally.

Next up, the scene content itself. This one is a doozy:

mutations: {
  // ...
  INITIALIZE_SCENE(state) {
    state.scene = new Scene();
    state.scene.background = new Color(0xcccccc);
    state.scene.fog = new FogExp2(0xcccccc, 0.002);
    var geometry = new CylinderBufferGeometry(0, 10, 30, 4, 1);
    var material = new MeshPhongMaterial({
      color: 0xffffff,
      flatShading: true
    });
    for (var i = 0; i < 500; i++) {
      var mesh = new Mesh(geometry, material);
      mesh.position.x = (Math.random() - 0.5) * 1000;
      mesh.position.y = (Math.random() - 0.5) * 1000;
      mesh.position.z = (Math.random() - 0.5) * 1000;
      mesh.updateMatrix();
      mesh.matrixAutoUpdate = false;
      state.pyramids.push(mesh);
    }
    state.scene.add(...state.pyramids);

    // Lights
    var lightA = new DirectionalLight(0xffffff);
    lightA.position.set(1, 1, 1);
    state.scene.add(lightA);
    var lightB = new DirectionalLight(0x002288);
    lightB.position.set(-1, -1, -1);
    state.scene.add(lightB);
    var lightC = new AmbientLight(0x222222);
    state.scene.add(lightC);

    // Axis Line 1
    var materialB = new LineBasicMaterial({ color: 0x0000ff });
    var geometryB = new Geometry();
    geometryB.vertices.push(new Vector3(0, 0, 0));
    geometryB.vertices.push(new Vector3(0, 1000, 0));
    var lineA = new Line(geometryB, materialB);
    state.axisLines.push(lineA);

    // Axis Line 2
    var materialC = new LineBasicMaterial({ color: 0x00ff00 });
    var geometryC = new Geometry();
    geometryC.vertices.push(new Vector3(0, 0, 0));
    geometryC.vertices.push(new Vector3(1000, 0, 0));
    var lineB = new Line(geometryC, materialC);
    state.axisLines.push(lineB);

    // Axis Line 3
    var materialD = new LineBasicMaterial({ color: 0xff0000 });
    var geometryD = new Geometry();
    geometryD.vertices.push(new Vector3(0, 0, 0));
    geometryD.vertices.push(new Vector3(0, 0, 1000));
    var lineC = new Line(geometryD, materialD);
    state.axisLines.push(lineC);

    state.scene.add(...state.axisLines);
  },
}

Most of this comes directly from the Three.js example that we are emulating; however, there are a couple of important differences to note:

  • When we create the pyramid geometries we are storing them in state as an array. This will allow us to make changes to them later on if we want to and then re-render the scene with those changes in place.
  • We are also adding some straight lines that will follow along each axis of our three dimensional space. This will provide us with a grid of sorts that will help us conceptualize how our three dimensional scene is being rendered. We are also storing these grid lines as an array in state so we can make changes to them later.
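As a taste of what keeping the meshes in state makes possible, a future mutation could reposition every pyramid and then re-render. Here is a sketch of that idea using plain objects that mimic the .position shape of a Three.js mesh; the scatter helper and the stand-in meshes are hypothetical, not part of the tutorial code:

```javascript
// Scatter an array of mesh-like objects within a cube centred on the
// origin, the same math used when the pyramids were first created.
function scatter(meshes, spread) {
  meshes.forEach(mesh => {
    mesh.position.x = (Math.random() - 0.5) * spread;
    mesh.position.y = (Math.random() - 0.5) * spread;
    mesh.position.z = (Math.random() - 0.5) * spread;
  });
  return meshes;
}

// Stand-ins with the same .position shape as real Three.js meshes:
const fakeMeshes = Array.from({ length: 3 }, () => ({
  position: { x: 0, y: 0, z: 0 }
}));
scatter(fakeMeshes, 1000);
console.log(fakeMeshes.every(m => Math.abs(m.position.x) <= 500)); // true
```

In the real store, a mutation would call something like this against state.pyramids and then the renderer would redraw the scene; that is exactly the kind of change part two will explore.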

Next we will create a mutation that will handle resizing the canvas element for us:

mutations: {
  // ...
  RESIZE(state, { width, height }) {
    state.width = width;
    state.height = height;
    state.camera.aspect = width / height;
    state.camera.updateProjectionMatrix();
    state.renderer.setSize(width, height);
    state.controls.handleResize();
    state.renderer.render(state.scene, state.camera);
  },
}

This should be pretty straightforward. When we want to resize the canvas, we call this mutation with the new width and height. It then updates the camera and renderer accordingly and re-renders the scene using the new dimensions.

Finally, we will set up two Vuex actions that orchestrate these various mutations for us. First, we will use an action to initialize our scene on page load:

actions: {
  INIT({ state, commit }, { width, height, el }) {
    return new Promise(resolve => {
      commit("SET_VIEWPORT_SIZE", { width, height });
      commit("INITIALIZE_RENDERER", el);
      commit("INITIALIZE_CAMERA");
      commit("INITIALIZE_CONTROLS");
      commit("INITIALIZE_SCENE");

      // Initial scene rendering
      state.renderer.render(state.scene, state.camera);

      // Add an event listener that will re-render
      // the scene when the controls are changed
      state.controls.addEventListener("change", () => {
        state.renderer.render(state.scene, state.camera);
      });

      resolve();
    });
  },
}

The INIT action returns a promise that resolves once all of our various Three.js components have been created and registered. It also takes care of the initial scene rendering and sets an event listener that re-renders the scene whenever the controls receive input.

Finally, we will use an action to set up our animation loop. This is a recursive function that queues itself to run on every frame and updates the controls, which in turn re-render the scene when needed. We could use setTimeout here, but requestAnimationFrame does much the same thing while also syncing with the browser's repaint cycle and pausing the animation loop when the browser loses focus. See more about animation frames here.

actions: {
  // ...
  ANIMATE({ state, dispatch }) {
    window.requestAnimationFrame(() => {
      dispatch("ANIMATE");
      state.controls.update();
    });
  }
}
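If the recursion in ANIMATE feels opaque, the scheduling pattern can be exercised in isolation with a stand-in for window.requestAnimationFrame. Everything below (makeLoop, the fake scheduler, the frame cap) is an illustration only, not part of the project:

```javascript
// Each "frame" queues the next one before doing its work,
// mirroring how ANIMATE dispatches itself inside the callback.
function makeLoop(scheduleFrame, onFrame, maxFrames) {
  let frames = 0;
  function tick() {
    if (frames >= maxFrames) return; // a cap so this demo terminates
    frames++;
    scheduleFrame(tick); // queue the next frame first, as ANIMATE does
    onFrame(frames);     // then do this frame's work (controls.update())
  }
  tick();
  return () => frames;
}

// Stand-in scheduler in place of window.requestAnimationFrame:
const pending = [];
const rendered = [];
const getFrames = makeLoop(cb => pending.push(cb), n => rendered.push(n), 3);
while (pending.length) pending.shift()(); // drain the fake frame queue
console.log(rendered); // [ 1, 2, 3 ]
```

In the browser, the queue never drains on its own; the browser invokes each queued callback on its next repaint, so the loop runs for as long as the page is visible.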

That should cover everything we need to get our 3D scene up and running. Now let's put it all together in our ViewPort component. The template and the styling of this component are very straightforward:

<template>
  <div class="viewport"/>
</template>

We create a single dom node (the root node for this component) which will become a wrapper around the canvas element generated by our WebGLRenderer.

<style>
.viewport {
  height: 100%;
  width: 100%;
}
</style>

Here we are setting the height and width of the component to be 100% of its parent node, which in this case happens to be the main #app div (the root node of the App.vue component). This will ensure that our canvas element fills the entire browser window.

The real fun is in the script section of the component:

import { mapMutations, mapActions } from "vuex";

export default {
  data() {
    return {
      height: 0
    };
  },
  methods: {
    ...mapMutations(["RESIZE"]),
    ...mapActions(["INIT", "ANIMATE"])
  },
  mounted() {
    this.INIT({
      width: this.$el.offsetWidth,
      height: this.$el.offsetHeight,
      el: this.$el
    }).then(() => {
      this.ANIMATE();
      window.addEventListener("resize", () => {
        this.RESIZE({
          width: this.$el.offsetWidth,
          height: this.$el.offsetHeight
        });
      });
    });
  }
};

To start, we import some helper methods from Vuex which allow us to reference our Vuex actions and mutations directly from this component. Next, when the component is mounted (on page load), we trigger our INIT action, creating our three-dimensional scene. When it is ready, we trigger our animation loop and set an event listener that calls our RESIZE mutation whenever the browser window is resized.
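If you are curious what those helpers are doing for us, mapMutations roughly expands to methods that forward to this.$store.commit (and mapActions likewise forwards to this.$store.dispatch). Here is a simplified, hypothetical re-implementation; the real Vuex helpers also handle namespacing and other details:

```javascript
// Rough sketch of what mapMutations builds; not the real Vuex helper.
function mapMutationsSketch(types) {
  const methods = {};
  types.forEach(type => {
    // Plain functions so `this` resolves to the component at call time.
    methods[type] = function (payload) {
      return this.$store.commit(type, payload);
    };
  });
  return methods;
}

// Usage with a fake component and a fake store:
const component = {
  $store: { commit: (type, payload) => `${type}:${JSON.stringify(payload)}` },
  ...mapMutationsSketch(["RESIZE"])
};
console.log(component.RESIZE({ width: 800, height: 600 }));
// RESIZE:{"width":800,"height":600}
```

This is why the component above can call this.RESIZE(...) and this.INIT(...) as if they were its own methods.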

That should be everything we need! Take a look at the URL being used by your dev server (usually http://localhost:8080/) to see your beautifully rendered scene.

The next installment of this tutorial will investigate creating a control panel that will allow users to manually manipulate the rendered scene.

About the Author

Ryan Durham is a software developer who lives in Portland, Oregon, with his wife and daughter. His numerous areas of interest include PHP, Laravel, Elixir and PostgreSQL, as well as organizational efficiency and communications strategies.

You can find him on GitHub and LinkedIn.