
(rock music)
♪ She give me rock now ♪
♪ Now ♪
(Audience claps and cheers)
- Alright.
- So Mark has been working in C++ at Side Effects Software
for over 25 years.
He works primarily on low level libraries
for the visual effects application Houdini.
He is also the chief author
of the Mantra Production rendering engine.
So, let's welcome Mark on stage.
(audience claps)
- Er, thank-you very much Bryce
for the wonderful introduction.
And erm, what I'd like to point out was that
that was the Scientific and Technical Academy Award.
There weren't very many actors in the audience.
It was a bunch of film scientists and engineers.
There were two pieces of hardware that were recognized.
The Shotover K1 camera and the Chapman Hydrascope
telescoping camera crane.
And that's really cool 'cause you can take
that crane under water.
I thought that was neat.
But, with those two pieces of hardware,
there were also a whole bunch of people,
software developers who were recognized.
So these six groups of people are all C++ programmers.
And that's sort of why the audience reacted the way they did
at the Academy Awards speech.
So they all recognize how C++11
actually changed our industry.
And so to talk a little bit about I'm--
Houdini has been around for a very long time.
It's always been C++.
It's been used in over 700 films.
It's actually received five
Scientific and Technical Academy Awards.
Only one Award of Merit.
And it's been used in all the visual effects films
that have won Oscars except for one year
in the past, how many years does it say there,
20 years.
Erm, and so to preface my talk,
this is a sort of overview of the talk.
I'm gonna talk a little bit
about Side Effects and our product.
I'm gonna talk a little bit about the industry.
And then I'm gonna get into the nitty gritty
of the C++ and how we abuse C++.
And so to, to begin my talk I have to go back
to the beginning of the computer graphics industry,
and that goes back before Side Effects existed
which is my company, it goes back before I started
at Side Effects, back before Side Effects started,
back to the 1980s, the early 1980s in fact,
and some of you may be too young to remember the 1980s
or, or weren't even born yet.
And so just to let you know,
this is what people looked like in the 1980s.
(audience laughs)
Erm, and in 1982, Disney released a movie called Tron.
And it was sort of groundbreaking because it had a lot
of computer graphics in it.
These are all the computer graphics companies
in the world at the time.
And every single one of them contributed
to the computer graphics that were shown in Tron.
So, at the time all those
graphics companies built their own software.
So they all wrote their own software.
There was no off-the-shelf software
that they could use.
In fact, some of them actually built
their own hardware as well.
Triple I there built their own machines.
Triple I was sort of the outlier in that group of companies
'cause they used LISP as their programming language.
All the others used C.
The important thing for, in terms of Side Effects' history,
is the fact that Omnibus was founded in 1982.
Omnibus was founded in Toronto where I'm from
and where Side Effects is from.
And Omnibus was very successful in the early days.
They were so successful
that they actually bought out Digital Productions
and they bought out Robert Abel and Associates.
In the early days,
they had fantastic equipment like a VAX.
There's a Cray over there.
And there's a hard disk platter
that stores almost five megabytes of data.
(audience giggles)
Very very impressive hardware at the time.
- [Audience Member] Wow.
- Erm, they did a lot
of groundbreaking early computer graphics:
Flight of the Navigator, flying logos,
early experiments into character animation
with Marilyn Monrobot.
So this was all Omnibus, and Omnibus grew and grew
and grew and they were really successful,
until they declared bankruptcy and went out of business.
And that happened in the spring of 1987.
And two of the developers at Omnibus,
Greg Hermanovic and Kim Davidson, purchased the rights
to use the source code that Omnibus had built.
And so they started off
with a company called Side Effects,
and they founded it in the fall of 1987.
And it was all C code, and er really gross.
And so in 1989 I joined the company.
That's actually a picture of the entire company
at the time, and the guy in the striped shirt,
is actually a client.
This is at our first trade show,
and our client actually helped us out.
He's actually still a customer
which is pretty neat.
In the early days, all the artists
who used computer graphics, a lot of the tools
that they had were command-line tools.
So, for example, they would take a program called GFONT
which would generate some geometry using Helvetica
and they would be able to display the geometry
and see what it would look like.
There were other tools that would take the output
of one tool and generate a new piece of geometry.
So for example the Extrude program would take
some parameters and the input geometry
and generate new geometry.
And then you could color the geometry
and then display the geometry
and you'd have a flying logo right there.
The thing was that a lot of the artists
of the time were also programmers.
So, to be an artist at that time you had
to be very technical.
So they would build makefiles.
You could also pipe geometry because it was all Unix based,
so you could pipe from one program to another program.
And people would build makefiles
and write sed scripts and awk scripts
to actually create animation for their geometry.
So it was like, it was very hard
to be an animator at that time.
But we looked at what artists were doing,
and Greg Hermanovic, one of the founders,
was really inspired by analog synthesizers.
So this is a picture of Robert Moog,
and his analog synthesizer.
And the idea with the Moog synthesizer is
that you would take patch cords
and hook them between different generators,
and be able to generate new sounds
that you'd never heard before.
So, when we took a look at the makefiles
and the idea of patch cords, we came up
with a procedural way of generating geometry.
So, this is, this is what Houdini looks like.
Down in the bottom right you'll see a node,
the font node.
Above that font node are the parameters to the font node
so this is very much like the, the font command
that the artist would write except it's
in a graphical user interface.
And you'd see the result of the output there.
And then you could build a node network
which, just as like the pipe commands
that the artists would have to do,
and you've got all the parameters there.
The big difference with this is that,
that because it's a live application,
as you change the parameters you get immediate feedback.
So you don't have to wait for the makefile to run
and regenerate everything.
So you can change things on the fly.
So, we started looking at the nodes in Houdini
as a programming language.
Woops, so if you look at this node network,
this is a node network that takes a torus
and fractures it into little pieces.
Okay, so the artist puts this together
and they build a, what we call, a procedure.
So this is a procedure.
It's very much like a C++ procedure
where you have little nodes that do little operations
and then you have the result.
And just like with C++ if you have a procedure,
if you change the input, you get different outputs.
So if you feed it a floaty toy, it breaks up the floaty toy.
But, we took a lot of the ideas from languages
and applied them to our nodes as well.
So we have encapsulation.
So the, on the upper right you'll see a small node network
where a grid is fed into a mountain,
and the mountain deforms the grid into looking like a,
well, not really like a mountain, like a bumpy grid.
But underneath there's a big complicated node network
which is actually the implementation
of the mountain node.
So, we've encapsulated the mountain procedure
in its own procedure.
Erm, the other thing that we realized
is that you can take complicated
expressions and factor them out into little pieces.
So this is the incompressible Navier-Stokes equation.
And we've broken it out into little parts,
so each node does a little bit
of the overall equation.
What this allows for is that you can isolate certain parts
of the equation and optimize that.
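For reference, the incompressible Navier-Stokes equations he's factoring, in their standard form (my notation, not necessarily the slide's), are:

    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
        = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u},
    \qquad \nabla\cdot\mathbf{u} = 0

Each node in the network computes one piece: the advection term, the pressure gradient, the viscous diffusion, the divergence-free constraint.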
Rather than working on scalars though,
this is working on huge amounts of data
being fed through that node graph,
so you can do parallelization and threading;
it buys you a lot by breaking things out like that.
Also, with programming, you wanna leave comments
so we have sticky notes that you can put in your network.
When I write code, and people come over
and look over my shoulder,
I actually don't use syntax highlighting,
and people are just like, their jaws drop.
They can't understand how
I can work without syntax highlighting.
But, and so with nodes, node graphs, of course,
you also wanna be able to color your node graphs
so that you can impart information.
We also have tools to do layout.
You can take a very complicated network
and simplify it.
I know I've spent a lot of time manually formatting code,
and now we have things like clang-format
which is great.
So we provide tools like that.
Another thing we provide is kernels.
We have some kernel languages in Houdini.
So we have a custom language called VEX,
but we also have OpenCL where the artist can actually write
OpenCL and feed that into a network
and have OpenCL run on their geometry.
So this, in a sense, is like writing a functor
that you pass into a procedure.
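In C++ terms the analogy is roughly this, a minimal sketch with made-up names, not Houdini's actual API: the artist's OpenCL snippet plays the role of a callable passed into a generic procedure.

    #include <functional>
    #include <vector>

    // A "node" that applies a user-supplied kernel to every point value.
    void forEachPoint(std::vector<float> &points,
                      const std::function<float(float)> &kernel)
    {
        for (float &p : points)
            p = kernel(p);
    }

    // Usage: the lambda stands in for the artist's OpenCL snippet.
    //   std::vector<float> pts{1, 2, 3};
    //   forEachPoint(pts, [](float p) { return p * 2; });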
So a lot of the ideas that we have
in the node networks are taken from computer languages.
One sort of advantage that we didn't realize,
and I'm not sure it's an advantage but, is dead branches.
So when you build a network that, if you look over
on the bottom right, three nodes in,
you'll see a node that has a blue circle around it,
that's the output of this network.
But there are a lot of dead branches in here
that aren't actually evaluated in that node network.
In C++ you'd surround that code with an #if 0,
or comment it out.
But with a node network you don't have to.
So you can experiment, and you can have alternative paths
that don't get evaluated
and are basically dead branches, without having
to worry about cleaning up your mess.
So it allows you to go back and see
what the artist was trying to think of
when they were doing that.
Now, our node graphs are used in visual effects.
And visual effects artists are often given tasks where
the producer will come and say, I want to create
this effect that's never ever been seen before.
So a procedural paradigm like this is really flexible.
It doesn't lock them into a corner.
They don't have to do canned effects.
So, it turns out that we had to tweak a little bit,
but we've made our node graphs Turing complete.
So you should be able to do anything.
And in fact, when you've got a lot of really creative artists
out there using our software,
and you've got a very flexible package,
you can get a lot of interesting things.
So, for example, someone wrote a ray tracer using
our geometry operators.
So if you look really closely at the back wall
of the Cornell Box there,
that's actually a wire-framed grid.
So this, this geometry, this is not an image.
This is geometry that's been colored.
Each vertex has been colored
with the appropriate lighting and illumination.
The node network is up on, does it come through?
Yeah, the node network is up on the right-hand side.
And you can see the little parts
that do global illumination, recursion,
and all sorts of fun stuff, photon maps.
We've also had users write games.
This is an example of someone playing Reversi
against a Houdini sim.
But, we, of course, don't stop at customized node networks.
We also make sure that our user interface is customizable.
And so you can go to a sort of meta-state
where users have created customized node network tiles
where you can do collision detection on your node tiles.
You can build IK simulations but underneath,
everything is still Houdini nodes.
So you can select your node,
and you can pull up your syntax highlighting
and color your node however you want.
In this one, he actually somehow manages
to select that node and change the name of the node.
And of course when you get something crazy like this,
the next step of course is to write a game using,
(audience laughs)
a node editor.
So if you (laughs) zoom in, each one
of these shapes is actually a node in the Houdini graph.
(Audience and Mark laughing)
Erm, I know I don't have time to actually do stuff
like this, but our users do apparently.
It's amazing.
To understand how Houdini is used it's important to realize
that, you know, it's not only geometry.
So this, most of the examples that we've looked
at have been geometry, and, so in the graphics pipeline,
the first stage that usually happens is modeling geometry.
And this is, this is a sort of an example
of how someone would model in Houdini.
So the, they've got the live viewport there
and they can select some polygons and do some operations.
But if you look over on the right hand side,
as they do these operations, Houdini's actually building
a node graph underneath.
So that means that the user can go back
and change parameters or edit their construction history.
What's also kind of interesting to notice is,
as the user manipulates the geometry in the viewport,
sometimes they go over to the parameters,
and actually manually enter values.
So it's a sort of hybrid way of working,
where you can work interactively, or work parametrically.
And it's sort of up to the user to decide how.
Once you've built a model,
the next stage of the pipeline is usually animation.
So, the, this is an example where the animator's trying
to change the weight or the motion
of how a character interacts.
So they're changing the power of the,
this is an animated Minotaur,
they're changing how the arm comes down
so that it's got a little more oomph.
Other parts of the pipeline include shading
where we take real world physics
and do Monte Carlo evaluation of BRDFs
and all sorts of fun stuff.
Applying textures.
But the, the sort of important thing to note is that
in Houdini every one of these pipelines is still building
and using procedural networks underneath.
So this is the node graph for the simple shader
that does gold.
And you can see it's actually a very complicated node graph.
It gets very complicated because even some
of these operators are actually encapsulated
and they're even more complicated procedures underneath.
So, once you've got shading done, you go to layout,
and then you go to effects.
And effects is sort of the strength of Houdini;
it's the real forte.
Because there, you never know
what kind of effects people will need, artists will need.
So it might be something like building a muscle rig
on the Minotaur, where we run an FEM simulation
on the muscles, and as the muscles deform,
they adjust how the skin moves
so you get nice secondary animation on the skin.
I don't know if it comes across in the slide.
But you might also take a model like the squab,
a giant squab, and fracture it into a million pieces,
or thousands of pieces in this case.
And in this case, you're running a physics simulation
where it's rigid body dynamics underneath.
And each one of those little pieces has to interact
with all the other little pieces.
So there's some heavy, well,
this is a little bit of heavy data.
If you run fluid simulations, you get a lot more heavy data.
So this is just a simple particle simulation.
In this simulation here,
the peak number of particles that comes through
is over 1.1 billion particles
that we're moving around.
But there's also a volume, a voxel grid
that is used to store the state of the simulation.
And that voxel grid is 1200 by 1200
by 300 at the peak, so that's like
500 million voxels.
And so we're dealing with some big data.
And here's a billion voxels.
It's an animation, it just moves really really slowly.
So, we're pushing a lot of data through
in these simulations but the other side of this is that,
when the artist is setting up a simulation like this,
we can't actually just optimize for big data,
we also have to be able to optimize
for a thousand particles going through
or 10 thousand particles going through.
So our data structures need to be both able
to handle simple data, and also be able to scale up.
And, you probably all know Atwood's Law,
which is: anything that can be written
in JavaScript will eventually be written in JavaScript.
(audience laughs)
This is not the kind of stuff that you can
just cobble together with a JavaScript framework.
We also get fun problems, because sometimes
even the beefiest machines we have can't solve
a simulation on their own.
So we'll distribute the simulation across multiple nodes.
So, in this case each machine is transferring data
back and forth between other machines
so that the boundary conditions work out
and you get smooth simulation.
You, we also have to deal with load balancing
and all sorts of fun problems.
So, it's really fun working in our industry.
I can't recommend it enough.
The pipeline gets inordinately complex.
There's actually a lot of people doing research
into pipelines.
The cost of a pipeline can be, can be real
in terms of dollars so this is actually pertinent
to film, for studios, and so there's actually
quite a bit of research going into pipelines,
and optimizing pipelines and making sure they don't break.
But pipelines can also be cyclic,
so you can actually take data from one part of the pipeline
and feed it back into another part.
So for example, here, we're taking an optical flow
of an image, like an animation,
and using it to drive a particle simulation.
So, my purpose for going through the sort of pipeline
of a typical studio is to point out that
every part of the pipeline faces unique problems.
So when you're modeling geometry you need fast interaction;
you need a quick, good UI.
In animation, you also want fast feedback.
One of the studios out there did something really clever,
which is, as the animator moves a character in one pose,
what they'll do is, they'll start a lot
of background threads and figure out what that change does
for all the other frames.
So when the animator wants to go and see what the change is,
it's already baked and pre-computed.
So there's a lot of effort we spend on efficiency in compositing.
One of the people who was on stage
at the Academy Awards presented a paper a few years back
on the optimal way to convert floating point to integer data.
You'd think, well, that's really easy.
But when you're doing image compositing
and you're working in floating point data,
you want to get every little ounce
of power out of the machine, right.
So, we spend an inordinate amount of time trying
to optimize our code to make it fast
for the user and a better user experience.
So, I'm now actually gonna get back
to the C++ side of this talk.
So, we started off with software called Prisms
of proceduralism in it.
And we wanted to get to, we knew we wanted
to get to Houdini which was gonna be all C++ based,
which involved the new UI library,
new geometry libraries, new, new everything.
But, we were only a team of six or eight developers
so the way we did this was we created a bunch
of transitional applications as we transitioned
from the old C code to C++.
We did this because we didn't want our clients,
the people using Prisms, to suffer because we were
just stopping development on the C code.
So we had dual development going on at the same time.
Now if you look at the timeline,
we started writing C++ in 1992,
and Houdini was first released in 1996,
and so if you weren't born in the 90s
or have forgotten what the 90s were like,
I just want to let you know that people
in the 90s looked like this.
(audience and Mark laughing)
But to put it more into perspective
that people might understand here is
that the draft STL came out in 1994,
two years after we started working on our C++ code.
The first release of the STL for C++ on SGIs came out
in 1997, 1998, and Boost wasn't even out yet.
So, we'd been working on C++ a long time
before a lot of the nice containers
and functions came around, which of course meant
that we had to write all our own classes.
So, we spent a lot of time building our libraries.
We wrote List, we wrote String, Unordered Map,
Atomic Int, all of those things
that you can take for granted these days,
we had to write from scratch.
And there were a lot of benefits to this.
We had cross platform consistency
because we were controlling how the classes behaved.
We had a lot of control over behavior
and that was really important in the early days.
There was one platform where ostrstream,
when you wrote to it one character at a time
would reallocate the buffer, copy over all the data
and then append the new character.
So it was a real fast way
to write an O(n squared) algorithm
by writing out data in a linear fashion.
It was awful.
So, we had to write our own strstream
to work around that strstreambuf.
And I don't know, people, people here in this audience,
probably remember cfront?
Some of them.
Do you remember template repositories?
Templates worked really well, provided you were
in a single library.
But when you had a large application,
the template repositories were a nightmare.
And so we got burned by trying to adopt templates
too early. We had a real bad taste
in our mouths about templates,
so we avoided them,
and we suffered for many years because of that.
There were of course a lot of disadvantages.
We're not as smart as the standard library developers.
And there's a learning curve for new developers coming in
who are very familiar with the STL;
it's a cost for them to learn our classes.
And, we have to maintain them.
There turned out to be
a really hidden cost, a hidden benefit pardon me,
to having our own classes,
something we didn't realize
'til later, which is that Houdini is part of an ecosystem.
So, all the studios, like all the people up
on stage write their own code.
And they have to interface with our application
and other vendors applications.
And so, having our own classes meant
that there were no name collisions
and no library version differences.
Erm, when we started using Boost we tried
to keep it header-only Boost.
But you know, once you've tasted
that forbidden fruit you've just gotta go a little deeper.
(audience chuckles)
And so as soon as we started doing that,
we noticed all our clients started complaining,
oh, we're using Boost 1.51
and you're using Boost 1.52,
and things crashing. So one
of our engineers spent six months namespacing Boost
for Houdini, and it got rid of the problem,
but you know, there are all these costs
to having to exist in an ecosystem.
So, all these studios have their own code.
Even more studios on top of that,
and we realized that what was needed was some kind
of organization to help manage the versionitis that happens.
So Side Effects and Autodesk and we pulled
in the Visual Effects Society, got together
and came up with the Visual Effects Reference Platform.
And the idea with this is to specify a common set
of libraries and a common set of language features
that can be used in the visual effects industry.
And so in 2015 we were using GCC 4.8,
Boost 1.5x, Qt 4.8.x, Python 2.7.x,
and a bunch of other libraries.
The next year, well we stayed on GCC 4.8.2
'cause people don't want to advance very fast.
If something's working, they don't wanna break it.
Erm, the Visual Effects Reference Platform changed
that x into an eight and the x
at the bottom into a five.
The big change here was jumping Qt versions,
but the huge change was that in 2016,
we were allowed to use C++11.
(audience rumbles)
and believe it or not there was huge pushback
from the studios.
There was one really really big studio out there
that said, you can't do that.
You can't use C++11.
We've got register all through our headers.
(audience laughs)
So, there was a lot of pushback.
What I also found out recently is that there's one really,
one application that's used everywhere in the world.
It's huge.
And until very recently it was actually C99.
So they still compiled C99 and eventually,
to use some of the libraries that are
in the VFX Reference Platform, they had
to jump their compiler to C++.
For 2018, which is this year, we finally get
to use some features from C++14,
and we bumped the compiler all the way to GCC 6.
A lot of the studios around the world,
the guys who are building this VFX Reference Platform are
on Linux which is why GCC is specified,
but this is maybe changing.
So, the VFX Reference Platform was all sort of run
by volunteers and participation was voluntary.
And, so there's some studios out there
that don't adhere to it, some studios just go their own way.
But I wanna point out that the Academy,
those guys who give out the statues,
they do more than just giving out the statues.
So they have a real interest in the film industry.
So, and there are a lot of people on the Academy
who are actually scientists and mathematicians
and computer scientists.
In fact, the Scientific and Technical Award committee,
is broken into two parts and there's the engineer side
and then there's the digital imaging technology section,
and that DIT section is actually much bigger
than the engineering side.
So the Academy came out with something called ACES,
the Academy Color Encoding System.
And it provided a standard for representing
and converting color.
So this idea of a standard, they send it
to all the camera manufacturers,
to all the software vendors, to all the projection makers,
so that the idea is that when a director has a vision
of how that color should look, when it comes
onto your screen or your TV or your theater,
it should be represented the right way.
And so ACES sort of transformed the industry
and was a great step forward.
And just this year, in August, the Academy announced
after two years of a lot of research
and a lot of input from industry,
they announced the Academy Software Foundation.
And what this is, the purpose of this is to look
at all the packages of software that are relevant
to the film industry and make sure
that there're standards that are adhered to.
So, we're really looking forward to seeing where that goes.
And so now, to take you back: we wrote a lot
of C++ classes and we don't like that.
We don't like having our own C++ classes,
so we needed a way to be able to transition
away from them.
So, for example, we wrote our own UT_AtomicInt.
To go back to 1992, namespaces (laughs),
this is history, namespaces had just been announced
in 1990, so there wasn't really a lot of compiler support.
So that UT underscore is a poor man's namespace.
So that's our utility class library.
So we still use that and suffer.
But if you have problems with that,
just replace the underscore with a colon colon
and you should be okay.
Erm, so we wrote our own UT_AtomicInt
and we'd like to transition to std::atomic of course.
But you'll notice that our method is add,
while for std::atomic it's fetch_add.
So our process for transitioning is
to create a deprecated method for add
and switch it to fetch_add.
So we do this because, when we have our deprecated method,
we can easily find it in all our code.
But the thing is that we also send out our headers
to all the studios who use our software,
and they may have code that depends on this class.
I hope they don't, but they may have this code.
So this deprecated method actually has to live
for a couple of years or a couple of versions of Houdini
before we can actually clean it up.
And once we clean it up, then we can just do a using
and create a typedef for our UT_AtomicInt.
And this way we don't actually have to change
any of our code, we still use AtomicInt,
but underneath it's using a std::atomic.
And we like this because that means
that we don't have to change our code.
We don't have to do all these passes through our code.
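A minimal sketch of that transition pattern (illustrative, not the actual HDK source): first a deprecated forwarding method so the old call sites are easy to find, then, once they're gone, a plain alias.

    #include <atomic>

    // Stage 1: wrap std::atomic, keep the old method as a deprecated shim.
    template <typename T>
    class UT_AtomicInt : public std::atomic<T>
    {
    public:
        using std::atomic<T>::atomic;

        [[deprecated("use fetch_add() instead")]]
        T add(T v) { return this->fetch_add(v) + v; }
    };

    // Stage 2, a couple of versions later, once clients have migrated:
    // template <typename T>
    // using UT_AtomicInt = std::atomic<T>;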
So we also do this when we pull in our foreign classes.
So when we pull in boost::unordered_map,
actually if you look at it, it's hboost::unordered_map.
That's 'cause we went and spent
six months doing a name space change.
So we have hboost::unordered_map,
and we create a little wrapper around
the hboost unordered_map, and we use that as UT_Map
in all our code.
So we don't actually ever expose our higher-level libraries
to naked Boost.
And that allows us in the future, if we wanna change
that to std::unordered_map we can just do that
and hopefully everything will just work fine.
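The wrapper can be as thin as an alias; a sketch of the idea (with hboost being their renamed Boost, and the alias name illustrative):

    #include <boost/unordered_map.hpp>  // in their tree, the renamed hboost

    // Higher-level code only ever sees UT_Map, never naked (h)boost.
    template <typename K, typename V>
    using UT_Map = boost::unordered_map<K, V>;

    // If the day comes, this is the only line that has to change:
    // template <typename K, typename V>
    // using UT_Map = std::unordered_map<K, V>;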
So some of our classes are still around.
We wrote our UT_Array class which is very similar
to a std::vector class.
And, er, this is still around.
One of the things that happens with the vector
or an array is that as you push back items
onto the array, eventually you have to regrow
and resize the array.
And the way this is sort of done,
very simplified and maybe not correct,
in vector is that you create a temporary new array,
move the old array to the new array, delete the old array,
reset your array and reset your size.
Of course this doesn't actually work
if you're shrinking the array but that,
that's another thing.
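As a sketch, that grow-by-move approach looks something like this (simplified, as he says, and ignoring shrinking and exception safety):

    #include <cstddef>
    #include <utility>

    template <typename T>
    void growCapacity(T *&data, std::size_t size,
                      std::size_t &capacity, std::size_t newCapacity)
    {
        T *newData = new T[newCapacity];       // create a temporary new array
        for (std::size_t i = 0; i < size; ++i)
            newData[i] = std::move(data[i]);   // move the old array over
        delete[] data;                         // delete the old array
        data = newData;                        // reset your array
        capacity = newCapacity;                // reset your size
    }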
And this is fine, this works great.
But when we started off, we were a bunch
of C programmers who were learning C++ in the process.
And I know that there's a term
for writing code in a Pythonic Way.
I don't know if there's a term
for writing code in a C++ way.
But when you get a bunch of C programmers transitioning
over, we wrote our growCapacity like this.
So, we use realloc instead of creating a new buffer,
and then just reset the size.
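And a sketch of that C-programmer version, only valid when the element type can be relocated by a raw byte copy:

    #include <cstdlib>

    template <typename T>
    void growCapacity(T *&data, std::size_t &capacity, std::size_t newCapacity)
    {
        // realloc() may simply extend the allocation in place; if it can't,
        // it does a blind byte copy of the old contents to the new block.
        data = static_cast<T *>(std::realloc(data, newCapacity * sizeof(T)));
        capacity = newCapacity;
    }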
So there are two artifacts of this.
One is really subtle and the other is really blatant.
So the subtle effect is that realloc doesn't actually have
to allocate new memory and copy over the data.
If your memory allocation library underneath says,
oh I can just grow that buffer,
you don't have to do the reallocation,
you don't have to allocate a new memory
and copy over the data.
So there's a, there's a performance benefit to this.
And just to make sure
that there was a performance benefit,
we did a little testing before I came down,
and we were unable to get the vector
to ever match the performance that we get with our UT_Array.
In fact, when we're compiling debug libraries,
the std::vector approach is actually twice
as slow as realloc. And you say, well, debug doesn't matter,
except that it's all developer time,
because if developers are running debug libraries
then it's their time that you're wasting.
So we get a 2x performance gain by using realloc.
Now, the blatant problem with this, of course,
is that realloc does a blind copy of the memory.
So it works just fine if you've got a UT_Array
of double or any POD type.
It actually works on a UT_Array of unique_ptr as well,
which has complicated move constructors
and, er, destructors.
So it magically works for that.
But of course it doesn't work in a lot of cases.
So, for example, LLVM's SmallVector.
The way this is laid out is
that there's a small buffer of storage inside,
and a begin and an end, and when you initialize the vector,
you initialize your begin and end to point
at the storage that's inside.
So, if this got reallocated and blind-copied,
while the storage would be copied fine,
the begin and end pointers would actually be
pointing to garbage memory, the old memory
that it was using.
So, that doesn't work.
And so you can't use a UT_Array of std::string,
and you can't use the UT_Array of SmallVectors.
It just doesn't work.
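A sketch of why such types break under a blind copy, using a simplified SmallVector-like layout (not LLVM's actual class):

    // A type with interior pointers: begin/end point into its own storage.
    struct SmallVectorish
    {
        char  storage[64];                       // inline small buffer
        char *begin;
        char *end;
        SmallVectorish() : begin(storage), end(storage) {}
    };
    // If an instance is relocated with realloc()/memcpy(), the storage
    // bytes are copied fine, but the copied begin/end still hold the OLD
    // address, so they now dangle.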
Erm, boost::function does work,
and so, in our quest to minimize our dependence
on Boost, we had this code
that was using a UT_Array of boost::functions,
and we said let's change it over
to a UT_Array of std::function.
And so now, here's a pop quiz.
So how many people think that realloc will work
with std::function?
Hands up.
Two people.
Two people I see.
How many people think that it's not gonna work?
Ah, much larger percentage, maybe 25%.
And how many people don't know, or don't care?
(audience and Mark laugh)
Okay, so it turns out that everyone is right.
So, on some platforms std::function does work
under reallocation; on other platforms it doesn't.
On some platforms it has an internal pointer into itself.
And so, we were fortunate: this code actually was only used
in one place, and we were able
to precompute how big the array was.
So we actually didn't even need to grow the array.
when we switched over because you never have
to grow the buffers.
But we're very very very excited to see,
I can't pronounce it but we're excited
to see trivially_relocatable coming down the standard.
Erm, this is not just so that we can fix our code
but it's also so that we can make our code better.
We can figure out why and where we want
this kind of behavior.
So for the next part of the talk, the next C++ code,
you need to understand a little bit
about how our geometry is stored in Houdini.
So, we have some specialized classes in Houdini
that are specific to geometry but can be generalized.
So the simplest piece of geometry
in Houdini is a point.
A point in Houdini can have a whole bunch
of attributes or properties on it, so it could have a position,
a velocity, a temperature, color,
a velocity, a temperature, color,
whatever the user needs to have on the point can be
on that point.
If you've got a cloud of points,
every point in Houdini will have the same set of attributes.
When you connect the points you create a primitive.
And so that face can also have attributes on it.
So one of the attributes can be a vector
of the points that are being referenced.
Another could be a string referencing, say,
the shader that's gonna be applied to that face.
And so if you have two primitives, you end up
with the same properties or attributes on each primitive.
So you can represent this as an array of structs
which would kinda look like this.
Or you could represent it the way we do,
which is a struct of arrays, which
allows us to have a lot more efficiency.
So we have a struct of arrays, where you end up
with a different layout in memory but the same data.
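A sketch of the two layouts (illustrative names; Houdini's actual attribute classes are more elaborate):

    #include <vector>

    // Array of structs: attributes interleaved per point.
    struct PointAoS { float P[3]; float Cd[3]; };
    using GeometryAoS = std::vector<PointAoS>;

    // Struct of arrays: one flat array per attribute, the layout they use.
    struct GeometrySoA
    {
        std::vector<float> P;    // 3 floats per point
        std::vector<float> Cd;   // 3 floats per point
    };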
So, if we look at that original node network
of the font, and we look at how the data is stored:
for each node in that graph, we wanna be able
to keep caches, so that as parameters change,
you don't have to reevaluate the whole graph
every time a parameter changes.
So we have caches.
And so each cache will store a copy of the geometry.
In the top geometry you've only got 740 points
and 10 faces.
But after you extrude it, you get a lot more faces
and a lot more vertices.
So you're up to almost three thousand positions
and 760 faces, pardon me.
And then when you add color to it,
we add a new color array.
So, in Houdini when you're passing geometry down
from node to node, there's a lot
of data that's being copied.
So, when you look at that colored geometry,
you've got the position points to a blue block
of memory and the color points
to the orange block of memory.
Each is the size of a float three times the three thousand,
so it's about 36K.
It's not bad.
Color's about the same, so 36k.
And then the naive approach would be
to just copy that memory, copy those vectors over
and then work on the new vectors.
But, you know, we're computer scientists.
We know better than that.
We can use COW.
And if you think that I'm gonna avoid the opportunity
to, er, use a fancy pun, like this, you're mistaken.
It's not that cow, it's copy-on-write.
So, the way that Houdini works is we use copy-on-write,
we have pointers to the originally geometry,
and if you need to modify something,
well, you make it duplicate when you, er,
when you go to write it.
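A minimal copy-on-write sketch using shared ownership (not the HDK's actual mechanism, and glossing over thread-safety):

    #include <memory>
    #include <vector>

    using Attrib = std::vector<float>;   // e.g. the position array

    struct Geometry
    {
        std::shared_ptr<Attrib> P;       // shared between nodes until written

        Attrib &writeable()
        {
            // If another node still references this array, duplicate it
            // before writing; otherwise we own it and can write in place.
            if (P.use_count() > 1)
                P = std::make_shared<Attrib>(*P);
            return *P;
        }
    };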
And so, to understand sort of
how this affects Houdini geometry,
well, let's take a look at some standard geometry.
So, the Stanford bunny is 70 thousand triangles,
36 thousand points.
So, this can save a ton of memory
when you're passing that around.
The Stanford dragon is 5.5 million triangles.
I've got a custom movie character
which I can't show you a picture of.
It comes in at about 20 million polygons,
20 million points.
The movie modelers tend to make quadrilateral meshes
rather than triangle meshes.
So when we look at a piece of geometry like that,
for the P array you're now looking at 240 meg.
So when you copy that 240 meg, or when you
push back a new point onto that, that can be a big cost.
when we're doing that.
But if you only wanna change one point in that array,
COW doesn't help because you still have to copy
that entire array just to move one point.
So, I know this is being covered
in other CppCon talks, but we have a paged array.
So rather than representing our entire geometry arrays
as a flat array, we actually have pages of data.
What's nice about this is that if you push back a bunch
of items on this array, you don't have to copy
over those other pages.
They're immutable.
So, you can actually just allocate a new page
and write data to that.
So, there's a lot of benefits to this.
It adds a little complexity.
But if you wanna move that one point there,
you only have to copy that one page.
So, there's a huge huge memory benefit
if you're only doing a little manipulation.
So if someone, if a modelers just playing
with the shape of the mouth, then you don't have
to copy over a ton of data all the time.
So our paged array looks kinda like this,
where the square bracket operator
just splits the offset into a page
and an offset within the page, and does that lookup.
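A sketch of that lookup, assuming a power-of-two page size (illustrative, simplified from what the HDK actually does):

    #include <cstddef>
    #include <memory>
    #include <vector>

    template <typename T, std::size_t PAGE_BITS = 10>   // 1024 items per page
    class PagedArray
    {
        static constexpr std::size_t PAGE_MASK =
            (std::size_t(1) << PAGE_BITS) - 1;

        // Pages are shared (COW): copying the array only copies pointers.
        std::vector<std::shared_ptr<const std::vector<T>>> myPages;

    public:
        const T &operator[](std::size_t i) const
        {
            // Split the offset into a page and an offset within the page.
            return (*myPages[i >> PAGE_BITS])[i & PAGE_MASK];
        }
    };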
But after a lot of analysis of the pages of data
that we get, we actually found
that there were other things that we could do.
So, a lot of times, let's say you apply color
to a piece of geometry,
a lot of times the color is constant over a page.
So what we'll do is, once we've got pages,
we can analyze each page, and if the value's constant
over the page, then we just represent
that as a single value.
So that saves even more memory.
But it also allows us to do operations
across the geometry where we only have to change one value.
So if we wanted to change the color from red to blue,
well we'd only have to change it in one place
rather than change it in millions of places.
So we can work a little more efficiently that way.
The constant page compression, or any page compression,
adds cost to the square bracket operator
on the PageData class.
But it's a small, a small price to pay when you're dealing
with like a billion particles, trying to move them around
and only some of them are moving.
You get a lot of benefit.
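A sketch of a constant-compressed page (illustrative): the lookup gains a branch, which is exactly the square-bracket cost he mentions.

    #include <cstddef>
    #include <vector>

    template <typename T>
    struct Page
    {
        std::vector<T> data;   // empty => the whole page is one constant
        T constant{};

        const T &get(std::size_t i) const
        {
            return data.empty() ? constant : data[i];
        }
    };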
And that brings us to strings.
So, as you saw with the two triangles,
the triangles can store string attributes as their data.
And we don't control what goes into the strings.
and we don't control what goes into the strings.
The user can add string data to every polygon
in the model if they want.
And often those strings are repeated or the same thing.
So, one of the types of data that we have is a stylesheet.
Just like cascading style sheets,
we have a little snippet of JSON which determines
how materials get applied to primitives.
So there's a nice little simple stylesheet.
Erm, and we never expected users
to create a 15 megabyte stylesheet.
So, there was a user out there
who created a 15 megabyte stylesheet,
and then copied that stylesheet to a thousand primitives,
and so very quickly you get up to 15 gigabytes
of string data.
So we wanted something that would be able
to deal with that like COW,
like the way we deal with COW on geometry.
So we said, well, std::string used to be like that
on GCC, until they put the SSO in.
And the reason that it's not like that now
is that the square bracket operator allows you
to modify the string,
so it needs to do COW, and that gets
very complicated with threads.
But we said, we don't actually want a string
that manipulates the string, we just want a string
to store the string.
So we came up with a StringHolder class.
So there's a nested class in there called the holder,
which actually stores the string,
stores its length, stores a hash and a ref count.
We could have done this with, like, an intrusive pointer.
But the intrusive pointer doesn't have c_str
or length, so being able to have a little wrapper class
around that is, we found, a nice thing.
So the idea is that you have a StringHolder;
it's reference counted, that same string can be used
thousands or millions of times in your data,
and there's almost no cost, just the atomic cost.
And then we started looking at our code.
And there were a lot of places where we had static data.
So, static data adds to your startup time,
and we constantly try to minimize our startup time.
It costs developer time, it costs user time
if your application takes a long time to start up.
So there's this case
where you've got a static StringHolder,
which at startup time is gonna do an allocation
and a memory copy of that literal into the object.
And that's no good.
So, we wanted something that was like a UT_StringRef,
so it referred to the string
instead of actually holding it.
So we modified our StringHolder to be a StringRef,
and if you look, there's a union of a pointer
to the literal or a pointer to the holder.
And we use the length of the string
that's stored in the StringRef as a flag to switch
on whether it's actually a holder
or a reference to a literal.
So if it's a holder,
we actually store the length
with the holder object.
If it's a literal, we actually store the length
with the StringRef.
And so the constructor for the StringRef looks
kinda like this.
If the string fits as a string literal,
then we just keep a pointer to the string literal
and set the length to the length.
Otherwise, if it's a null string that's passed in,
or an empty string, we'll point the holder
at a singleton EmptyStringHolder.
And if the literal is actually bigger
than the maximum length we can store,
we'll allocate a holder for it.
So our holders can store very very large strings,
and if it's a small string and it's a literal,
we can store it quite nicely.
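Putting that together, a sketch of the layout and the constructor logic (illustrative; the real UT_StringRef has more cases than this):

    #include <cstddef>
    #include <cstring>

    class StringRefSketch
    {
        static constexpr std::size_t theMaxLength = ~std::size_t(0) >> 1;

        union {
            const char *myLiteral;   // when myLength != 0
            void       *myHolder;    // when myLength == 0 (holder, or empty)
        };
        std::size_t myLength;        // doubles as the "is it a literal?" flag

    public:
        StringRefSketch(const char *s)
        {
            std::size_t len = s ? std::strlen(s) : 0;
            if (len > 0 && len <= theMaxLength)
            {
                myLiteral = s;       // just point at the literal
                myLength  = len;     // store the length with the ref
            }
            else
            {
                // null/empty -> the singleton empty holder; oversized ->
                // allocate a real holder (elided in this sketch).
                myHolder = nullptr;
                myLength = 0;
            }
        }
    };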
So the copy constructor just copies
over the member data.
But then it says, if it is a holder,
we've got to make sure to reference the holder.
And on the destructor,
if it is a holder, we dereference the holder.
So this is, I think, a fairly unique pattern;
I haven't seen it anywhere else.
What we do is we subclass StringHolder
from StringRef, so there's no data on a StringHolder.
They are identical in memory, the same object,
but they have different behavior
on the constructor and the same behavior on the destructor.
So the constructor of a UT_StringHolder will always make
a copy of the data.
So if you pass in a const char*, it's gonna duplicate
that const char*.
This is sort of like the string behavior
except that it's reference counted.
The copy constructor, when it takes a UT_StringRef:
if there's no holder,
it will duplicate whatever the data coming in is,
because the StringRefs are const char*s
and they're short-lived.
You don't know about their lifetime.
With a StringHolder, you're guaranteed
that the string will always be held around.
Otherwise, if it is a holder, we actually
just do the same thing as what was done
in the StringRef copy constructor.
And for the holder, we always set the length to zero
to indicate that there's a holder.
So what's kind of nice about this is
that we have the same object but we can look
at it in different ways.
So say we have a StringSet class.
The find method takes a StringRef,
because we know that we're never gonna hold
that string inside the StringSet.
The insert method, on the other hand,
takes a StringHolder, and because it will make a copy
of that StringHolder, it will reference
that StringHolder inside itself.
So if you write code like this,
where you're passing a literal hello
and a literal world to insert, we actually duplicate
the hello and the world when they're inserted,
but when we call find, we don't create a duplicate.
It just creates a temporary short-lived StringRef object.
And then if you pull something out,
you get a StringHolder which you can
just reference with an atomic,
an intrusive reference count.
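A sketch of that find/insert split (illustrative interface):

    class UT_StringRef;      // as sketched above: cheap, non-owning
    class UT_StringHolder;   // as sketched above: owns a refcounted copy

    class StringSet
    {
    public:
        // find() never stores the string, so a short-lived,
        // allocation-free reference is enough.
        bool find(const UT_StringRef &s) const;

        // insert() will keep the string, so it takes a holder, which
        // guarantees an owned (reference-counted) copy.
        void insert(const UT_StringHolder &s);
    };

    // set.insert("hello");   // the literal gets duplicated into a holder
    // set.find("hello");     // only a temporary StringRef, no allocation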
So, we had a developer come in
who worked on a sort of side project,
and they wrote it all using std::string,
and we got them to change over to UT_StringHolder,
and there was a huge drop
in the number of calls to malloc.
And malloc is one of those things
that's like a death of a thousand cuts;
you can never get rid of those calls to malloc
because everybody's doing something with it.
But by switching over to a reference-counted string,
we were able to cut the calls to malloc 20x
which is huge.
And it's those kind of small little optimizations
that are really hard to find sometimes.
So, because UT_StringRef and UT_StringHolder are the
exact same object, we can use this pattern
which is to UTmakeUnsafeRef.
So given a reference we can reinterpret_cast that
to a holder.
And you think, well that's really dangerous
and it sort of is except that the StringRef
actually handles the StringHolder case just fine.
So, we can now make a const UT_StringHolder
that doesn't do any allocation at startup time
which is good.
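The trick relies on the two classes being layout-identical; a sketch (illustrative, hiding the real class bodies):

    class UT_StringRef { /* union + length, as sketched above */ };
    class UT_StringHolder : public UT_StringRef { /* no data members */ };

    // Because UT_StringHolder adds no data, a ref can be *viewed* as a
    // holder; it's safe here because the destructor path already handles
    // the no-holder (literal) case.
    inline const UT_StringHolder &
    makeUnsafeRefSketch(const UT_StringRef &ref)
    {
        return reinterpret_cast<const UT_StringHolder &>(ref);
    }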
So, of course, what we'd like to do with this is
to make a user-defined literal
and just do this instead.
and with C++11 we had a little fun trying
to get the hash function constexpr.
We managed to do it, and this was all working,
and we were so happy: the first time we did this,
it worked right out of the box, the hash was computed
at compile time, the length was computed at compile time,
the data was stored, it was all just beautiful.
And then we tried it on another compiler and it didn't work.
So, the first compiler we had lied to us,
saying that this was the right thing to do.
It was, well, we were very happy that it did that,
but in general we can't really rely on it.
And we never found a solution.
We tried seven ways 'til Sunday to try to figure
out a way to have this user-defined literal do
what we wanted, and we actually ended up having
to create a new class in the hierarchy, UT_StringLit,
which just allows us
to have the constexpr compile-time construction.
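A constexpr hash along these lines (FNV-1a as an example, not necessarily the HDK's hash; written C++14-style with a loop, where a C++11 version would need recursion):

    #include <cstddef>

    constexpr std::size_t fnv1a(const char *s, std::size_t n)
    {
        std::size_t h = 14695981039346656037ull;
        for (std::size_t i = 0; i < n; ++i)
            h = (h ^ static_cast<unsigned char>(s[i])) * 1099511628211ull;
        return h;
    }

    // static_assert(fnv1a("hello", 5) == fnv1a("hello", 5));  // compile time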
Eventually we'd like to go to the user-defined literal
and we're very happy that in C++20 we should be able to,
if we read it right,
we should be able to do something like this.
And there was a great talk on regex earlier
that also is looking forward to this.
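What C++20 adds, roughly, is class types as non-type template parameters, which lets a string UDL see the characters at compile time; a hedged sketch with illustrative names, not the HDK's:

    #include <cstddef>

    template <std::size_t N>
    struct FixedString
    {
        char value[N]{};
        constexpr FixedString(const char (&s)[N])
        {
            for (std::size_t i = 0; i < N; ++i)
                value[i] = s[i];
        }
    };

    // C++20: the literal's characters become a template argument.
    template <FixedString S>
    constexpr auto operator""_sh()
    {
        return S;   // stand-in for building the compile-time string object
    }

    // constexpr auto greeting = "hello"_sh;   // all done at compile time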
And with the VFX Reference Platform,
we should be able to do this in like 2027 or 2028.
(Mark and audience laugh) I'm looking forward to that.
And so now, we've been writing C++ code
for over 25 years and so to close off I'd like
to have a few reflections.
The first thing that we've sort of learned is:
don't jump on the bandwagon
before the bandwagon is ready.
When we started using templates
in 1992, we tried to do the right thing,
but the compiler hurt us,
it chewed us up and spit us out.
And it burned us.
So if you try to jump on the bandwagon
before it's ready, you can sometimes get burned.
Number two, don't jump on the wrong bandwagon.
Back in 1992 we thought inheritance was the way to go.
So, we are struggling now with,
you wouldn't believe how deep the inheritance goes
in some of our class hierarchies, it's awful.
Nobody can comprehend it, it's terrible.
Inheritance of course has a place but don't overuse it.
The next thing, we will of course continue
to transition to more STL classes,
getting rid of all our obsolete classes.
'Cause standards are a good thing.
But, I think there's always gonna be places
for custom classes and custom patterns.
That UT_StringRef pattern that we have,
there are other classes that we can apply that to.
Some of our UI classes could really use the idea
of something that doesn't have
the expensive construction cost
at startup time, but has a way
to be a safe version where ownership is needed.
Performance is hard to retrofit.
So, everyone, and I will also tell you,
stay away from early optimization, premature optimization,
but when you're designing code,
make sure to have performance in mind
because if you, say, oh I can worry about this later
and come back to it later, well,
when you come back to it later it's gonna be a big rewrite,
and you don't wanna have to do that.
So, always have performance in the back of your mind.
Beware of template abuse.
So templates we stayed away from
and now we embrace them as we should,
generic programming is great.
But beware of template abuse.
I've debated with myself whether
to call out OpenVDB by name.
That's a library that all
of the visual effects industry uses.
When we were using GCC 5 and other compilers,
well, GCC was using 3.5 gig per source file
to compile the code in OpenVDB,
and when we moved to GCC 6.3,
it's using 6.5 gig
to compile those source files.
So, compiler writers out there, please, I encourage you
to download OpenVDB and see what's going on.
Please (laughs)
(audience laughs)
We have some beefy machines at Side Effects, right.
My machine's got 64 gig and 32 cores.
It's great, it's fun to compile on.
But we had to change our build process
for OpenVDB, because when you get 32 instances
of a compiler running and each of them is taking
over six gig, that pushes any machine, right.
And, the last one, it's gonna sound a little bit jaded,
but, you know, we've been burned by standard libraries,
we've been burned by operating systems,
we've been burned by compilers,
so, trust no one.
Don't trust the operating system, don't trust compilers,
and most importantly, don't trust your own code, okay.
Thank-you, if there are any questions.
(audience claps)
- One of the points you made in the reflections was
to not jump on the wrong bandwagon.
And I think that's very easy to say in hindsight.
So do you have some tips
for identifying the wrong bandwagon
or conversely the right bandwagon?
- That's a really hard question.
I think that, so we still jump on the wrong bandwagon
every once in a while.
I personally have overused auto.
I think that there are certain rules of thumb
that you sort of pick up as you go along,
the good patterns and the bad patterns.
And so, it's constantly a learning experience.
Sometimes, though, the bandwagons can be really really big,
and in those cases, like generic programming,
you probably want to give it a little time to set in.
I'm sorry, Bjarne, but concepts, I'm gonna wait
just a little bit to see how it works.
(people laughing)
- Hey Mark, great talk.
What do you think about adding geometry types
to the standard library?
Do you think that's something
that you guys would use if we add
some standard geometry types and primitives in the STL?
- I think we would probably shy away from that.
But that's just me.
I think that when you look at geometry,
the different ways of representing geometry,
a realtime engine is gonna have to represent
it very differently than a DCC, a digital content creation.
So, I think that finding a standard way
of representing geometry might be
a very very difficult problem.
- Yeah, yeah, I agree.
- (laughs) Okay.
- Hi, great talk by the way.
- Thank-you. - Quick question,
why is the AtomicInt a template class?
In the examples, you had AtomicInt, the custom class,
and it had a template parameter.
- Because you want to be able to have a 32-bit AtomicInt,
and you want a 64-bit AtomicInt.
So, when you're counting memory,
you need an AtomicInt to be able to count memory.
But of course all applications are 64-bit now,
so you need to be able to have a 64-bit atomic as well.
I think the standard also has atomic bool.
We don't have atomic bool so switching
to the standard atomic would buy us that.
- Thank-you.
- Hi, ignoring shaders for a moment,
do you run much of your C++ core engine
on the GPU, and if not, make a wish?
- So, all of the GPU stuff we use is all through OpenCL,
and we do that through our node graphs.
So we write the OpenCL functors that get passed
through the node graph,
so all the OpenCL we write is
through our procedural networks.
So we don't, we have underlying libraries for OpenCL.
We use OpenCL to make sure
that we're graphics card agnostic.
But there's nothing preventing us
from adding CUDA operators or something like that as well.
As for making a wish?
- Yeah, I was, what would you like from CUDA?
What would help you from CUDA?
- (sighs) I'm not sure, I'd have to think about that one.
- Alright, let us know.
- Sure.
- Hi. So, having a C++11 version should be
as simple as adding a compiler option,
but I'm pretty sure Houdini was more complicated than that.
So I was wondering, how long did it take
to have a working C++11 build,
and what kind of issues did you face?
- Sorry, sorry the question was?
- The question was, having a C++11 build should be
as simple as adding a flag right?
- Yes, it was.
- But, so you took your build and you added a flag,
and you had no issues whatsoever with the C++11 build?
- That's right.
- Awesome.
- The problem was when clients tried
to use our code and the headers had C++11 features in them.
We actually used C++11 a lot
before the Visual Effects Reference Platform allowed us to.
But what we did was we made sure
that it was all in our source files,
so it was all compiled as C++11.
Header files were still prior C++,
C++03 or whatever.
- Have you guys looked at string_view?
- Yes, so that actually came up.
What's the difference between a string reference
and a string_view, right.
The big difference is: a StringHolder
and a StringRef are the same object.
So we can pass them around interchangeably,
and they actually have memory of what they were.
So if we pass a StringHolder into a StringRef
that gets passed into something
that goes to a StringHolder, it actually knows
that it was a holder.
With a string_view, that doesn't work.
So, we also used that pattern
for our small array for example.
So our small array is exactly a UT_Array
but with other stuff on it.
So that when we pass a small array into something
that only takes a UT_Array we don't have to worry about it.
It actually converts just fine.
So there's no conversion cost when you do that.
- It seems like you guys profile at scale.
Can you share how you do the profiling?
- So, every developer at Side Effects has their
own preferred way of profiling.
I like callgrind.
Though I will sometimes use gperftools,
the Google Perf Tools, I think they're called now.
It used to be Oprofile.
Some people run VTune.
It depends on the developer.
- Do you build something into the product
to know how it does, like malloc counting and so on?
- So, we do have some libraries that do the malloc counting.
We also have performance tools for Houdini itself.
So you can profile the cook graph of your network.
And that allows us to zero in on code
that we actually wanna tune and profile.
- So, obviously the performance is the key issue
in your work.
Are you satisfied with the current state
of the compiler optimizations?
Do you often have to fight the compiler
or does it do well enough for you?
- We're usually pretty happy with the compiler.
A lot of our code that's very very performance
critical will sometimes use compiler intrinsics
to generate code for AVX or SSE.
- Okay, thank-you. - But the
compilers are actually doing
a really really good job these days.
- Hi, great talk. - Thank-you.
- So, I'm working on a code base
that's like five plus years older than I am,
and we have the same thing where, from
before the standard library, we have our own custom stuff.
We haven't done this. - I thought I was
the only one. - Yeah, who knows.
But we're about to do our jump,
we haven't yet, to like C++11.
We were doing like '98, jumping to 11 or 14.
You said moving to 11 was a huge boon for you,
could you elaborate more on kind of why?
- So, it was a boon as a developer.
A lot of the things that are in C++11
that we used simplified our code
and made it much more comprehensible.
The fact that you can use lambdas now,
huge huge thing.
The auto keyword is great.
A lot of the benefits of C++11 were really
for us as developers, to simplify our code
and make it much more comprehensible.
- Okay, thank-you. - I don't think the users
ever saw any benefit, but for us it was great.
- Yeah, development time.
- Yeah.
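A generic before-and-after sketch of the kind of simplification being described (not Houdini code):

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// C++98: a named functor defined away from its single call site.
struct CompareMagnitude
{
    bool operator()(int a, int b) const
        { return std::abs(a) < std::abs(b); }
};

void sort_old(std::vector<int> &v)
{
    std::sort(v.begin(), v.end(), CompareMagnitude());
}

// C++11: the comparison lives right where it's used, and auto
// removes iterator-type boilerplate in surrounding code.
void sort_new(std::vector<int> &v)
{
    std::sort(v.begin(), v.end(),
              [](int a, int b) { return std::abs(a) < std::abs(b); });
}
```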
- Hi, you mentioned some memory optimizations you've made
in custom classes like the string.
I'm wondering if you've done,
if you're doing anything like
more systematic memory optimization
with things like custom allocators
or custom memory managers?
- We do have some custom memory allocators,
and we've actually gotten rid of a lot of them recently.
We use jemalloc underneath
as our memory allocator, except on Windows.
On Windows we use tbbmalloc,
and we're constantly investigating the system mallocs.
It's something that we always, you know, battle against.
But we do have a small object allocator,
and we actually just use tbb.
We switched over to tbbmalloc for that
because it had better performance
under threading.
- Alright, thank-you. - Okay.
Er, sorry, one other object that we have is a stack buffer.
So we have a templated stack buffer
that allows us to create a buffer
on the stack of whatever we need.
And that's really handy sometimes.
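A minimal sketch of such a templated stack buffer, assuming small-buffer-or-heap semantics (simplified; not the actual Houdini class):

```cpp
#include <cstddef>

// A templated stack buffer: storage lives on the stack up to a fixed
// capacity and spills to the heap beyond it. Sketch only: assumes
// default-constructible T and never resizes.
template <typename T, std::size_t INLINE_CAPACITY = 64>
class StackBuffer
{
public:
    explicit StackBuffer(std::size_t n)
        : mySize(n)
        , myData(n <= INLINE_CAPACITY ? myInline : new T[n])
    {}
    ~StackBuffer()
    {
        if (myData != myInline)
            delete[] myData;
    }
    T          *data()                    { return myData; }
    T          &operator[](std::size_t i) { return myData[i]; }
    std::size_t size() const              { return mySize; }

    StackBuffer(const StackBuffer &) = delete;            // non-copyable
    StackBuffer &operator=(const StackBuffer &) = delete;

private:
    std::size_t mySize;
    T          *myData;
    T           myInline[INLINE_CAPACITY];  // the on-stack storage
};

// Usage: runtime-sized scratch space that is usually allocation-free.
// StackBuffer<float> scratch(count);
```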
- You talked about your node networks
being Turing complete.
- Yes. - So, people are
writing games and stuff in those.
So do you do any optimization passes
on the node network itself?
- We've actually considered running LLVM
over our node networks to do an analysis of the graph.
We don't at the current time.
Internally, our cook engine does a lot
of dependency tracking. Very early
on, the expression language that we used
in Houdini allowed you to reference any node
and any data anywhere in the graph.
And so, there are all these sort of implicit paths
that are very hard to track explicitly.
And so running additional graph analysis tools is tricky.
It's definitely something we want to investigate though.
There are certainly certain subgraphs
where we could take advantage
of that, alright.
- Thank-you.
- Er hey. - Hi.
- Regarding the interaction
and the interfacing with others that comply
with this VFX standard. - Yes.
- If I understood from the presentation,
Qt is part of it.
Wouldn't it make sense to at least use
Qt exclusively in the public headers instead of your own?
- We would love to switch over to Qt exclusively.
- Just a note, we have 10 minutes left.
- We'd love to switch over.
However, we're still a small team.
We have I think 20, 25 developers,
and for us to transition,
we're actually in the process of transitioning
over to Qt entirely.
So, you can now have Qt panels inside Houdini,
and eventually we'd like to switch all of our panes
over to use Qt, but it's a staged process,
similar to the process where we moved from C to C++.
- Yeah, I mean it makes sense then.
- Thank-you.
- Hi, you mentioned the finite element method
for simulating the muscle mass
and the Navier-Stokes equations.
- Yes. - So,
my question is, what were the main challenges
that you had over the years
for physics simulation in general?
- So, I don't actually work a lot on the physics engines.
I know that dealing with the amount of data
that comes in, and processing
that data, is always challenging.
Interfacing to OpenCL and getting the data over
and doing work on the GPU that's challenging.
But, again, I'm sorry, I don't do a lot
of work on the physics side.
- Thank-you.
- Hello. - Hi.
- You use graph-based processing, right?
So you have a processing graph,
and that's the way you have organized your computations.
And you gave us an idea of the low level data structures,
like COW and paged arrays.
But what are the high level data structures that you're using?
- So, the high level data structure
would be something like a node graph.
So, when the user puts together a bunch of nodes,
that, in fact, is part of Houdini.
So, when we ship Houdini,
a lot of our nodes are really high level,
in that they're made of other nodes inside Houdini.
So, we do have, I don't know if that's--
- I mean like your arrays of points and triangles,
your colors, your objects.
- So, we have a geometry container
that contains all the arrays of points
and all the paged arrays, indirection arrays,
but you can look at it as a piece of geometry.
We've got classes like a texture map
that underneath have a bunch of other classes
that are involved.
- Then do you organize your geometrical objects
into a tree as well?
- So, our geometry objects are actually very flat.
So we don't.
We have custom primitives
where you can represent complicated geometry.
We call them packed primitives:
you can pack geometry away and store
that as a COW reference, or duplicate it that way.
So there are ways to represent complicated hierarchies
inside our geometry but that's fairly new
and gets more complicated.
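A rough sketch of the packed-primitive idea, using shared ownership for the COW reference (hypothetical types; the real implementation differs):

```cpp
#include <memory>
#include <vector>

// Stand-in for a full geometry container.
struct Geometry
{
    std::vector<float> points;
};

// A packed primitive holds a shared reference to geometry. Copies of
// the primitive are cheap; the data is only duplicated on write.
class PackedPrim
{
public:
    explicit PackedPrim(std::shared_ptr<Geometry> g)
        : myGeo(std::move(g)) {}

    const Geometry &read() const { return *myGeo; }

    // Copy-on-write: clone only if the geometry is shared.
    // (Not thread-safe as written.)
    Geometry &write()
    {
        if (myGeo.use_count() > 1)
            myGeo = std::make_shared<Geometry>(*myGeo);
        return *myGeo;
    }

private:
    std::shared_ptr<Geometry> myGeo;
};
```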
- Do you use some open data format
to serialize your scene?
- So there are a couple of open data formats.
There's Alembic, which was released by Sony and ILM,
and that's a way of storing animation and geometry.
And coming down the pipe is something called USD,
which is being released by Pixar.
Universal Scene Description, not U.S. Dollars.
The acronyms are very similar.
- Thank-you. - Okay.
- Hi, I wanted to ask about your coding guidelines.
Since you have quite specific requirements on your code,
do you use some custom coding guidelines?
- Our coding guidelines go back a long way,
and some developers adhere to them
and some developers don't; we're a bit loose
with our coding guidelines.
So, not everybody adheres to the same coding guidelines.
So, we have a lot of very very talented,
very passionate developers,
and sometimes it's a little constraining
to make them work within a certain guideline
if they're more comfortable working a different way.
As long as the code works that's mostly what counts.
- That's perfect, thank-you.
- Hi, you mentioned initialization time.
- Yes. - And I was wondering if
constexpr helps, or did you make any investigation on that?
- So, it would definitely help.
But the trick is to make sure
that we can use constexpr
in all the places we want to, right?
So yes, constexpr does help, but a lot
of the startup time is actually malloc
and things that can't be done constexpr.
So we're battling those more
than the static initializations.
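A small illustration of the distinction, with made-up data: a constexpr table costs nothing at startup, while anything that allocates still runs before main():

```cpp
#include <array>
#include <vector>

// Built at compile time: contributes nothing to startup.
constexpr std::array<int, 5> kSquares = {{0, 1, 4, 9, 16}};

// Still a dynamic initializer: the vector's allocation runs before
// main(), and constexpr can't help because it calls malloc.
static const std::vector<int> gSquares = {0, 1, 4, 9, 16};
```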
- What's your approach on error handling?
- Oh, that was one bandwagon we didn't jump on.
So we didn't jump on the exception handling.
So we, our error handling has got
to be the worst in the industry.
We have a global error manager
and everybody can add to that,
oh it's awful, you don't want to know what we use.
(Mark and audience laugh)
It's one of those things that's definitely due
for a rewrite or a transition.
- Are you looking forward to std::expected?
- Er, I saw that talk yesterday.
It certainly holds a lot of promise yes.
It's gotta be a lot better than what we have.
One of the problems that we have, though,
is that sometimes the error actually occurs
in an event handler, so we don't actually have the stack
from the guy who's doing the try;
the error gets thrown in some separate thread,
in some totally separate call stack.
So it gets tricky for us in our UI library.
I'd have to think about how that would work
with some of our awful code.
- Thank-you.
- Did you-- - Three minutes.
- Did you come up with any other idioms for dealing
with mostly immutable data that occasionally changes,
let's say, like the page-based data modification
that you talked about, but
in the context of multiple threads?
So, something like a concurrent queue,
but largely immutable data
where occasionally one thread might write?
- So, er, we do use tbb::concurrent_vector sometimes,
which has that property.
But we haven't spent a lot of time investigating
those kinds of algorithms ourselves.
The pages also have a small side benefit
when it comes to threading, which is that you can easily
assign a thread to each page and have them work independently.
If a page needs to be compressed,
it's one thread's job, not the whole program's.
So there are a lot of advantages
that you get with paged arrays.
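A sketch of that per-page threading, using tbb::parallel_for over page indices (simplified paged layout, hypothetical names):

```cpp
#include <tbb/parallel_for.h>
#include <cstddef>
#include <vector>

const std::size_t PAGE_SIZE = 1024;

struct Page
{
    float values[PAGE_SIZE];
};

// Each page is an independent unit of work: one thread owns one page
// at a time, so no locking is needed on the data itself.
void scale_all(std::vector<Page> &pages, float factor)
{
    tbb::parallel_for(std::size_t(0), pages.size(),
                      [&pages, factor](std::size_t p) {
        for (std::size_t i = 0; i < PAGE_SIZE; ++i)
            pages[p].values[i] *= factor;
    });
}
```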
- Hi, nothing to do with C++,
it's more to do with your node networks.
As you said, it is Turing complete, it is code.
How the hell do you manage that?
And do you do diffs on a big node network?
- So, diffs on big node networks are a real problem
that we actually only finally got around
to addressing in the last release of our software,
like 20, however many years into it.
Because the node networks can get inordinately complex,
we are now able to expand them
and serialize them, and then you can do the diff
on the serialized code.
So, you can represent all the node networks as text as well.
There are underlying commands, just like the old days
where you had standalone commands to do that.
So, you can run the diff on that and see what's changed.
- Okay, thank-you.
- One more. - How do you test
your software?
- We test our software with our software.
So we have a bunch of regression tests.
Aside from unit tests, we also have regression tests
that will take animation files and render images
or create geometry.
And so we also validate that the process
of creating geometry gives the same result.
So we use our software to test our software.
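A hedged sketch of that style of golden-file regression test (a generic harness with hypothetical file names, not Side Effects' actual tooling):

```cpp
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Compare freshly generated output against a checked-in "golden"
// file. Real geometry or image comparisons would use a tolerance;
// exact text comparison keeps the sketch short.
bool matches_golden(const std::string &outputPath,
                    const std::string &goldenPath)
{
    std::ifstream out(outputPath.c_str());
    std::ifstream golden(goldenPath.c_str());
    std::ostringstream a, b;
    a << out.rdbuf();
    b << golden.rdbuf();
    return a.str() == b.str();
}

int main()
{
    // Hypothetical file names for illustration.
    if (!matches_golden("result.geo", "golden.geo"))
    {
        std::cerr << "regression: output differs from golden file\n";
        return 1;
    }
    return 0;
}
```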
- Do you mock and automate testing a lot,
or do you still have a decent amount of manual testing?
- Every developer does a certain amount
of manual testing, obviously, before they commit.
But we also, on every build,
have machines that run our regression tests.
So after a build, they run the regression tests,
and that allows us to bisect and easily sort of
zero in on what might be problematic.
- So regression is okay.
But what about new features
and functional tests?
- Er, when you add a new feature you add a test
to make sure that it keeps working that way.
- A kind of unit test, or something higher level?
- So, we'll build unit tests
for low level structures. But for some
of the high level structures, how do you test,
when you've got a network of nodes
that is hooked together,
how do you test whether that works?
Right, the best way to do that is to actually run
that network of nodes on some input data
and make sure that the output data is correct.
We don't have full coverage testing.
There's almost no way to do that.
But at least we have unit tests
that test whether the output geometry is sort of
what we expect.
And sometimes we have actually found bugs
where the original test data was incorrect,
and so the developer hadn't done their due diligence.
- So, about the input data,
you can use historical data
or manually made data. - Yes.
- But do you also mock the data like--
- Yes, of course, of course.
Yes, we mock data too.
Okay, I think that's it.
So thank-you very much.
(audience claps)
If you want, just come up to me
and talk to me.