
Hello World Open – A Story in the Visuals: Art and Assets

October 29, 2014

Read time 6 min

Drawing from our experiences with the Code Camp visualisation, we knew that assets and 3D models were going to be problematic. First of all, we didn’t have a 3D artist on our team to fine-tune the models, and importing models was not quite as easy as we had initially anticipated. The concept of Hello World Open’s visualisations put the cars in the spotlight, so we knew they had to look good.

We initially experimented with a custom CAD model of a real electric race car, but unfortunately, the model conversion proved to be highly problematic. Every conversion ended up with weird holes in the topology or misaligned edges, which made further steps such as UV-map extraction impossible. At this point our time was running out fast, and we were forced to scrap the electric race car model and buy a stock Ford GT40 model instead. Unfortunately for us, the store-bought model had 2.5 million polygons, which is a no-go for a real-time application. Since the same model was also used for the video inserts, we decided to stick with it, but we needed help cutting the polygon count way down to get a more practical version.

At this point we asked Timo and Mikko from Shader Oy to help us for a couple of days with the models and UV-maps. With their considerable skill, the duo quickly culled the polygon count down to a more reasonable 70 000 and split the model into separate parts such as the windscreen, tires and lights. This gave us some much needed breathing room by allowing us to drop parts of the geometry if needed – we were still quite worried about the framerate impact of the high polygon count. We had to settle on the polygon count before any UV-maps could be made, since the UV-coordinates would become unusable if the topology of the model changed afterwards.
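That split is also what made the fallback practical in code: individual sub-meshes can simply be hidden if the frame rate starts to suffer. Here’s a minimal sketch of the idea using today’s Three.js API (the part names are illustrative, not the actual model’s):

```typescript
import * as THREE from 'three';

// Hide named sub-meshes of the car if the scene gets too heavy.
// Hiding keeps the object graph intact but skips rendering the part.
function dropExpensiveParts(car: THREE.Object3D, partsToHide: string[]): void {
  car.traverse((child) => {
    if (child instanceof THREE.Mesh && partsToHide.includes(child.name)) {
      child.visible = false;
    }
  });
}

// e.g. drop the interior details first (names are made up for this example)
// dropExpensiveParts(carModel, ['interior', 'exhaust']);
```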

The model consisted of many separate parts with different materials.

Another major headache we had at this time was the actual importing of the model. Three.js has support for various object formats, such as Wavefront OBJ, Collada and JSON, among others. The tools Timo and Mikko used had the best support for formats like LightWave OBJ and FBX. We needed to quickly settle on one format that would allow us to establish a decent workflow without wrestling with multiple conversions.

Wavefront OBJ was the first one we tried, but we quickly noticed that none of the importers supported the needed multipart objects. We then decided to switch to Collada, which did check the two important boxes: multipart objects and UV-coordinates for textures. We didn’t care too much about the materials, since they were going to be defined programmatically anyway.
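For the curious, loading the multipart Collada model looks roughly like this with a recent Three.js release (the loader API has changed a bit since 2014, and the file path and correction values below are made up):

```typescript
import * as THREE from 'three';
import { ColladaLoader } from 'three/examples/jsm/loaders/ColladaLoader.js';

const scene = new THREE.Scene();
const loader = new ColladaLoader();

loader.load('models/car.dae', (collada) => {
  const car = collada.scene;

  // The exported model came in with the wrong orientation and scale
  // (see the screenshot below), so both had to be corrected after import.
  car.rotation.x = -Math.PI / 2;
  car.scale.setScalar(0.01);

  scene.add(car);
});
```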

Initial model loading success. As you can see, the orientation and scale are off and only the chassis is visible.

Timo and Mikko then manually cut out the UV-maps from the model. UV-unwrapping can be compared to removing the skin of an apple and laying it out flat on a table without any part of the skin overlapping itself. There are many ways to do it, and depending on the need, some are better than others. We had experimented with automatic UV unwrapping tools earlier, but the results were quite poor, because a car is quite a complex object to unwrap. After some iterations the UV-maps were unwrapped and ready to be textured by our heroic visual artist, Antero.


We were planning to have distinctive textures for each team, so we made eight different textures for the single car model. This meant that we could use the same model and UV-coordinates and simply load a separate texture for each team. Unfortunately, we still had problems importing the UV-coordinates along with the model, but after 12 model iterations and various export options we finally had everything sorted out.
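The per-team liveries boiled down to something like the sketch below: share the geometry and UV-coordinates, clone the chassis material and swap in a team-specific texture. The file names, the ‘chassis’ part name and the material type are our assumptions here, not the production setup:

```typescript
import * as THREE from 'three';

const textureLoader = new THREE.TextureLoader();

// One model, eight liveries: geometry and UV-coordinates are shared,
// only the chassis texture differs per team.
function createTeamCar(baseCar: THREE.Object3D, teamId: string): THREE.Object3D {
  const car = baseCar.clone();
  const livery = textureLoader.load(`textures/car_${teamId}.png`); // illustrative path

  car.traverse((child) => {
    if (child instanceof THREE.Mesh && child.name === 'chassis') {
      // clone the material so every car can carry its own texture
      const material = (child.material as THREE.MeshPhongMaterial).clone();
      material.map = livery;
      child.material = material;
    }
  });
  return car;
}
```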

It was amazingly rewarding to see the fully textured cars racing the track for the first time, but there was still a lot of tweaking and polishing to do.

Antero also created beautiful skybox textures that really tied the visuals together. We experimented with buildings, but a simple background image worked best.
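With a current Three.js build the skybox itself is only a few lines: a cube texture assigned as the scene background. Back in 2014 the same effect needed a cube mesh rendered from the inside, and the file names here are placeholders:

```typescript
import * as THREE from 'three';

const scene = new THREE.Scene();

// Six skybox faces loaded into a cube texture and used as the background.
scene.background = new THREE.CubeTextureLoader()
  .setPath('textures/skybox/')
  .load(['px.png', 'nx.png', 'py.png', 'ny.png', 'pz.png', 'nz.png']);
```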


In the code, we added a pinch of post-processing shader magic (bloom, bleach, saturation and contrast) to tweak the visual style and adjusted the scene lights to make the cars more visible. After that, we were mainly tweaking the values to find the most attractive output. Now all we had to do was to find the right angle to present our work.
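The post-processing chain can be built along these lines with today’s Three.js passes; the original used the shader passes that shipped with Three.js in 2014, and every parameter value below is a placeholder rather than what we ended up shipping:

```typescript
import * as THREE from 'three';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { UnrealBloomPass } from 'three/examples/jsm/postprocessing/UnrealBloomPass.js';
import { ShaderPass } from 'three/examples/jsm/postprocessing/ShaderPass.js';
import { BleachBypassShader } from 'three/examples/jsm/shaders/BleachBypassShader.js';
import { HueSaturationShader } from 'three/examples/jsm/shaders/HueSaturationShader.js';
import { BrightnessContrastShader } from 'three/examples/jsm/shaders/BrightnessContrastShader.js';

// Bloom, bleach bypass, saturation and contrast stacked on top of the
// normal render pass. Values are illustrative, not the shipped ones.
function createComposer(
  renderer: THREE.WebGLRenderer,
  scene: THREE.Scene,
  camera: THREE.Camera
): EffectComposer {
  const composer = new EffectComposer(renderer);
  composer.addPass(new RenderPass(scene, camera));

  composer.addPass(new UnrealBloomPass(new THREE.Vector2(1920, 1080), 0.4, 0.6, 0.85));

  const bleach = new ShaderPass(BleachBypassShader);
  bleach.uniforms['opacity'].value = 0.3;
  composer.addPass(bleach);

  const saturation = new ShaderPass(HueSaturationShader);
  saturation.uniforms['saturation'].value = 0.15;
  composer.addPass(saturation);

  const contrast = new ShaderPass(BrightnessContrastShader);
  contrast.uniforms['contrast'].value = 0.1;
  composer.addPass(contrast);

  return composer;
}

// In the render loop, call composer.render() instead of renderer.render(scene, camera).
```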

The original Code Camp visualisation used fully manual camera selection with floating and panning transitions between the camera positions. There were a fair number of different cameras (roughly 10) and directing the action was quite cumbersome. When we collected feedback from the participants, most of them noted that the moving and jumping camera was very confusing. After all, the cars were extremely fast and lap times on a single track could be as low as four seconds. As a result, we decided to use traditional fixed cameras in the Hello World Open visualisation.

Each track had a set of three cameras: top, start line, and “juicy”. The top view had the whole track visible and was the preferred default view. The start line camera pointed at the start line to capture the start and finish, and the “juicy” camera gave the best, most exciting view of passing cars. Each of these cameras had to be manually added to all ten tracks present in the final.
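In code, the per-track setup was little more than a handful of hand-placed cameras. A sketch of the shape of that data (the positions and field of view are placeholders, not the hand-tuned values we used):

```typescript
import * as THREE from 'three';

interface TrackCameras {
  top: THREE.PerspectiveCamera;    // whole track visible
  start: THREE.PerspectiveCamera;  // aimed at the start/finish line
  juicy: THREE.PerspectiveCamera;  // the most exciting close-up angle
}

function makeCamera(x: number, y: number, z: number): THREE.PerspectiveCamera {
  const camera = new THREE.PerspectiveCamera(45, 16 / 9, 0.1, 5000);
  camera.position.set(x, y, z);
  camera.lookAt(0, 0, 0);
  return camera;
}

// Example values only; each of the ten tracks had its own hand-placed set.
const trackCameras: TrackCameras = {
  top: makeCamera(0, 800, 1),
  start: makeCamera(50, 20, 120),
  juicy: makeCamera(-60, 15, 40),
};
```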

We created a camera director that would automatically find a suitable camera angle from a set of predefined cameras on a set interval. It analyzed the positions of the cars and searched for the best camera angle based on how many cars would be visible. We had to experiment quite a bit to find a suitable time between camera switches. We tried values from 3 seconds to 16 seconds, but the best compromise seemed to be a six-second interval.

The automatic camera switching logic scanned through all cameras and counted how many cars were visible in each camera’s view frustum. If no close-up camera had enough cars in view, the next scheduled camera was the top camera. Likewise, if the logic had chosen two close-up cameras in a row, the next one was forced to be the top camera. These simple rules worked remarkably well in practice. In fact, most of the live action was directed by the automatic director.
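A condensed sketch of that logic, reusing the TrackCameras shape from the earlier snippet (the threshold, names and bookkeeping are ours, not the production code):

```typescript
import * as THREE from 'three';

const SWITCH_INTERVAL_MS = 6000; // the six-second compromise
const MIN_VISIBLE_CARS = 2;      // illustrative threshold

// Count how many cars fall inside a camera's view frustum.
function countVisibleCars(camera: THREE.PerspectiveCamera, cars: THREE.Object3D[]): number {
  camera.updateMatrixWorld();
  const frustum = new THREE.Frustum().setFromProjectionMatrix(
    new THREE.Matrix4().multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse)
  );
  return cars.filter((car) =>
    frustum.containsPoint(car.getWorldPosition(new THREE.Vector3()))
  ).length;
}

let closeUpsInARow = 0;

function chooseCamera(cameras: TrackCameras, cars: THREE.Object3D[]): THREE.PerspectiveCamera {
  // Never allow more than two close-ups in a row.
  if (closeUpsInARow >= 2) {
    closeUpsInARow = 0;
    return cameras.top;
  }

  const closeUps = [cameras.start, cameras.juicy];
  const best = closeUps
    .map((camera) => ({ camera, visible: countVisibleCars(camera, cars) }))
    .sort((a, b) => b.visible - a.visible)[0];

  if (best.visible >= MIN_VISIBLE_CARS) {
    closeUpsInARow += 1;
    return best.camera;
  }

  // No close-up sees enough cars: fall back to the top camera.
  closeUpsInARow = 0;
  return cameras.top;
}

// setInterval(() => { activeCamera = chooseCamera(trackCameras, carObjects); }, SWITCH_INTERVAL_MS);
```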

Additionally, the camera could be manually controlled from the keyboard via a dead-man switch. The camera keys were bound to ‘S’, ‘T’ and ‘J’, which happen to be nicely spread across the keyboard – this way the camera controller wouldn’t accidentally choose the wrong camera in low-light conditions.
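A sketch of that override, assuming the natural S/T/J mapping to the start, top and “juicy” cameras and the trackCameras object from the earlier snippets:

```typescript
import * as THREE from 'three';

let manualCamera: THREE.PerspectiveCamera | null = null;

const keyMap: Record<string, keyof TrackCameras> = {
  KeyS: 'start',
  KeyT: 'top',
  KeyJ: 'juicy',
};

// Dead-man switch: the manual camera is active only while the key is held.
window.addEventListener('keydown', (event) => {
  const name = keyMap[event.code];
  if (name) manualCamera = trackCameras[name];
});

window.addEventListener('keyup', (event) => {
  if (keyMap[event.code]) manualCamera = null; // releasing hands control back to the director
});

// In the render loop, render with (manualCamera ?? automaticCamera).
```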

In the next part, we’ll discuss the overlays, audio and production setup.

Other parts of Hello World Open – A Story in the Visuals series

  1. The History
  2. New Dimensions
  3. Art and Assets
  4. Lights, Camera, Action!
  5. In Retrospective


“Hello World Open – A Story in the Visuals” is the story of the game that was built for the Coding World Championships. The series consists of five parts and it was written by members of the HWO technical team: Harri Salokorpi, Niklas von Hertzen, Teijo Laine and Tuomas Hakkarainen. The text was proofread by Eero Säynätkari. 
