- [Chandler] Good morning, everyone.
- [Man] Good morning.
- [Chandler] Really, oh, come on, you have
to do a little better.
Good morning, everyone.
- [Everyone] Good morning!
- Okay.
So, I am super excited to be up here.
Number one reason why I'm super excited to be up here,
I do not have to present up here.
Actually, the real reason I'm super excited
to be up here is because I'm really excited
to introduce your next speaker, Titus Winters.
And I have a small story to tell to give you an idea
of how I got to know Titus professionally.
You see, my team and I, we built a bunch of C++ tools,
amazing C++ tools, miraculous.
We were sure they were gonna change
the landscape of C++.
They couldn't be beat, it was wonderful.
But we made this critical mistake.
We forgot something.
Tools do not have impact.
I know, everyone's like, wait, what?
Tools don't have impact, they don't, they really don't.
Tools can't change the world.
People use tools to have impact and to change the world.
And Titus and his team used our tools
to change the world of C++ at Google.
And he turned our C++ codebase
from something that was falling down around us,
and that we didn't know what to do with
into something sustainable and maintainable
for years and years to come, for decades to come.
He turned our developer ecosystem and our developers
from some kind of den of confusion, and frustration,
and angst into actually a productive, and happy,
and empowered, and informed community.
And that's why I'm really excited
to be up here and introducing him.
However, I'm supposed to be up here telling you
why you should be excited to listen to Titus.
And I can't do that.
So, instead I'm gonna ask you all a question.
Why are you here?
No, I'm dead serious, why are you here?
And you should be asking yourself every day,
everything you do, why am I here and why am I doing this?
I suspect a bunch of people here, how many folks are here
to learn something in some way or another?
To grow, to improve incrementally year over year, I hope.
In conversations in the hallways with new ideas
and revisiting old ideas, listening to great talks.
But to learn, we need a teacher.
And Bjarne reminded us yesterday
of the importance of teachers.
And so, it's my pleasure to introduce the person
who taught Google to write better C++
and to write C++ better, the finest teacher
that I've had the pleasure of working with at Google,
Dr. Titus Winters to teach us about living at head.
- Thank you, my friend.
All right, so, C++ as a Live at Head language.
We certainly know some of those words.
This is a deep and esoteric, in some moments, sort of topic.
But in order to get into that,
we need to sort of set the stage,
we need to motivate it, we need to start with a story.
And getting here required moving through Google
and working with the Google codebase.
And much of this story is sort of a history
of Google's C++ code and portability.
Historically, Google has had
pretty limited portability requirements.
For many years, we controlled our tool chain
and production environment.
Chandler controlled it very well.
There were some projects that had
unusual portability requirements.
Things like building Google Earth for Windows.
But it was an extreme minority.
And our focus on it was sort of largely an afterthought.
Such an overwhelming majority of our code
had completely controlled constraints
that it just wasn't worth focusing on other things.
And you can sort of see this in our interactions
with some of our open source projects.
Early on, we built cool things and we wanted to share them,
put out things like gflags, and logging,
and Google Test, and all of these things.
But then as our control over our build
and production environment got better and better,
it sort of stopped making organizational sense for us
to spend a ton of effort providing those libraries
for platforms that we never touched.
It was very outside of our needs.
And so, our interactions over time,
over a decade, sort of dwindled.
And that started changing on the upswing
a little while back.
Mobile, shockingly, has different needs than production.
Our open source projects that support cloud
have different needs still.
And we sort of managed to put blinders on
and muddle through regardless for a while.
But eventually, it did become clear throughout
the organization that we needed a different approach.
The era of, Google's codebase doesn't need
to be like everyone else, had come to a close.
And it took us a while but we did realize that.
On the other hand, that era gave us amazing experiences
and powers, things that we've spoken about previously.
Talks that I've given, colleagues
from former teams, Hyrum Wright.
While we only had a single platform that we cared about,
it was easier to do things like write tests,
and impose rules like our Beyoncé rule,
if you liked it, you should have put a test on it.
This is the type of rule that allows janitors,
and toolmakers, and compiler team
to make changes to the infrastructure
that literally everything in our codebase uses
with confidence.
If your team actually wanted your code to not break,
you needed a test for us to validate against.
This monoculture made things possible.
Now, we make changes at scale.
Millions of edits are not just possible,
but actually pretty straightforward.
My team next quarter will almost certainly
make a million edits to our codebase.
I would be surprised if we didn't get to that number.
We've evolved to the place where anything
that needs to be changed can be changed.
We are sustainable.
That is very powerful.
So, a couple of years back,
when our leaders started taking other platforms
and code portability seriously,
I really wanted to ensure that we didn't lose
that property, that sustainability.
When they started talking about funding efforts
to improve code sharing, I made sure
to get involved in those directions early.
I shared what I knew and what I wanted to ensure,
and as a result, I got tapped sort of broadly
to make code sharing between our codebase
and the rest of the world more feasible.
This talk is really about two things.
One, Google is open sourcing a bunch
of C++ common libraries.
These are the bits and pieces
that underpin literally everything we do.
And I want to do this in a way
that doesn't impact our ability to make changes,
but that is also good for users.
And that is a difficult balance.
There's not a lot of great examples
for how to pull that off yet.
So, this is what I'm gonna try to motivate.
The discussions that have happened comparing
our internal development strategy and how the rest
of the world works have been illuminating.
To really explain things and to make this worthy
of a plenary talk, this is a big stage
and a big audience, I wanna step way back and look
at some sort of core software engineering practices.
And first, software engineering.
These are the things that we do
when we work on a project, why we do it.
I wanna be a little bit like the kid that asked why over
and over to get to some more abstract stuff.
Get us to take a look for a while
at sort of basic software engineering practices,
or when I'm being snarky, software engineering orthodoxy,
and get you to evaluate why you do
the things that you are doing.
But first, we can't really look at those practices
of software engineering without some working definition
of what is software engineering.
We all know what programming is.
What is the difference between these two things?
Why do we have two words for these concepts?
Software engineering, to me,
is programming integrated over time.
I'll put this another way.
A programming task is, oh hey, I got my thing to work.
And congratulations, programming is hard.
And a lot of the time, the right answer
to a problem is, I'm going to go write a program,
solve that problem and be done.
Right, that's great, no judgment, totally fine.
Engineering on the other hand,
is what happens when things need to live for longer
and the influence of time starts creeping in.
The library that you used has a new version.
The operating system you developed for has been deprecated.
The language you wrote in has been revised
and you need to update.
The requirements have changed
and you need to add new features.
Other developers are working on it.
You're moving to a new team.
You're moving to a new computer.
Your hard drive died.
Time happens, things change.
How you handle that, and how you
keep your programming working is engineering.
It may bleed into the programming itself
as you plan for change within the structure of the code,
or it may be entirely layered on top
when your quick one-off hack becomes enshrined
as a critical feature of your team's toolset.
Build tools, Dan.
The first time I can remember having this thought,
engineering is programming integrated over time,
is still relevant.
This is optional versus required in protobufs.
People at the company for longer than I, and smarter than I,
started saying, "Required fields are harmful."
Let's ask why.
Protobuf, if you remember, is the message format, sort
of like JSON, that we at Google build all
of our RPC services out of.
In the early days, you'd define a message maybe like this.
The name of this type is request,
that has a single field, query id.
That field is required, it's a 64-bit integer.
The name of the field isn't present in its wire encoding,
just the id number of the field, 17.
If a process is sent a request
that doesn't have a 64-bit integer
in id 17, it will fail to parse.
The request just dies before it's
even made it up the stack to the RPC level.
And logically, that makes sense.
You can't process without the request.
You can't process the request without this id,
why would you think you could?
It seems perfectly reasonable to make this required.
But what happens when time creeps in,
we change the service to have a different option?
What if you provide the raw string
instead of a pre-computed query id?
In that case, we aren't sure which one
of these will be present, and so maybe
we make both the id and the string optional,
and we do the validation at a higher level.
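The message definitions being described might look roughly like this in proto2 syntax, shown as two snapshots of the same message over time rather than a single valid .proto file (this is my reconstruction of the slides; the field name and the number 18 are illustrative):

```proto
// Version 1 (reconstruction): a message without field 17 fails to parse.
message Request {
  required int64 query_id = 17;
}

// Version 2, after time creeps in: both fields are optional, and the
// validation moves up a level. (Field number 18 is my assumption.)
message Request {
  optional int64 query_id = 17;
  optional string query_string = 18;
}
```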
What's that look like when we deploy?
We have some FrontEnd that is sending requests
to the BackEnd server.
Unshockingly, if we deploy the FrontEnd first,
we'll maybe start sending a request with an id,
or maybe a request with a string,
and the server, remember, is somewhere back in time.
It is still running with the old definition.
In the server's view of things,
the id is still required.
And so, if the server is configured so that
that garbage request is an assertion failure,
or just silently ignored, things go sideways.
None of this is surprising.
We update the server first.
This is very in line with everything
that everyone knows about writing protocols.
You need to be strict in what you emit
and liberal in what you accept,
so clearly, we have to update the BackEnd first.
No shock.
Make sure we're capable of accepting any definition.
But what about rollouts?
What if there are multiple servers?
What if we don't upgrade all the servers at once?
That does seem safest, after all.
We know the previous version wasn't working.
Updating all the servers to the new version
at once might be catastrophic.
So, we do some of them and then the FrontEnd.
And if the FrontEnd doesn't know the difference
between which ones support the new things
and which ones don't, then we're back to the same problem.
This only gets worse when you start
having complex networks of servers.
I could not possibly draw the graph
of server interconnects in Google production.
You do not want to get into a situation
where every service in the fleet has
to be restarted all at once in order
to handle a protocol update.
And so, we could, of course, fix this
by adding version fields to every message definition.
But that in and of itself is still adding time
to the protocol.
All of this complexity is fundamentally
a different flavor than, I got it working.
It is fundamentally a different flavor than programming.
And it is completely unnecessary in some contexts.
If you know that your program or service
only needs to work for a limited time, right?
I need it to work today, I need it to last through the demo.
I need the next round of funding.
We don't need to plan for the future,
then you don't plan for the future.
That's fine, context matters.
But when you aren't certain, you have to be prepared,
and that preparedness is what I look
at as software engineering.
It is the impact of time on a programming system.
And the point of this whole anecdote
is that something that makes sense on the face of it,
required fields, turns out to be a horrible idea
when you start thinking about change over time.
And I would argue that there are many things that we do
that make perfect sense in a limited domain,
in a programming domain, but do not necessarily
make sense when we are thinking about it
as engineering and change over time.
At the core, I think everything that we think of
when we talk about software engineering tools
and technology, is about resilience to time.
Version control is about being able to go back in time,
coordinate with your colleagues over time.
CI, Continuous Integration is about making sure
that recent changes haven't broken the system.
Unittests are very similar.
This is, make sure the feature that you wrote
hasn't been broken by the idiot down the hall,
or you in the future.
Refactoring tools are how you update
from an old way to a new way.
Design patterns give you some ability
to plan for future changes without knowing exactly
what those changes will be.
Dependency management is how we deal with the fact
that the libraries we depend upon may change.
And that last is really crucial
in what I'm gonna talk about for a while here.
Let's talk about dependency management
and the fundamental problem in dependency management.
Diamond dependencies.
Diamond dependencies are the bane of package management,
content management, dependency management,
any sort of management.
And they arise commonly in cases
where multiple libraries depend
on a lower level library.
Here we have liba and libb, they both depend on libutil.
As soon as versions change, there's
now two green boxes for libutil.
If everyone in your infrastructure doesn't upgrade
at the same time, then we have version skew.
Libb depends on v2, liba depends on v1.
This is even now still fine in theory
until some additional package, maybe yours,
chooses to depend on both of the mid-level libraries.
Now, we have a problem.
You are the pink box.
In some situations, for some languages,
we can make this work by compiling
in both versions of libutil,
so long as there are not types from libutil
that pass through both of them into user code,
and so long as there's not global initialization in libutil,
and a long host of other things.
But in general, for many languages and for many situations,
this just results in failed builds,
or maybe in some cases worse, failed runtime problems.
There is a standard mechanism that we use
for identifying this problem.
And it is so ubiquitous, it's even worked
its way into this diagram.
When discussing two incompatible versions of libutil,
my instinct is to describe these
as version one and version two, not one and 1.01,
not apple and orange, not A and B.
Clearly, I describe these as one and two.
This is what was intuitive to you, as well, I'll guess.
We've come to take it for granted
that version numbers are what identify a release,
and a change in major version
indicates some level of incompatibility.
You changed a thing.
This is what we call semantic versioning,
and it is the standard answer
for versioning in dependency management.
Semantic versioning, just so that we're all
on the same page, generally gives us version numbers
of the form x.y.z, 1.3.17, for instance.
The right-most number is the patch number.
These are things that shouldn't affect anything,
just a bug fix.
The minor number, the three, is a minor additional release.
We've added some new APIs, things that weren't present
in the old thing, but nothing has changed,
no one will be broken.
And the major number bump is reserved for,
I made a change to an existing API, your code won't build.
So, when you express dependence on a particular version
of one of your underlying libraries, libutil,
you say, "I depend on 1.3 or better."
And 1.3 or better means it could be 1.4, it could be 1.5.
It can't be 2.0 because in that one to two switch,
you made a breaking change, scare quotes.
Implicit in the system is that you
can tell the difference ahead of time
between a change that breaks someone
and a change that doesn't.
And it's actually sort of worse than this.
Using an incompatible major version doesn't mean
that things won't work, it just means things might not work.
It is entirely possible that the major version got bumped
because of API changes that you don't actually care about.
That is, imagine that liba doesn't depend
on everything in libutil, only one part of it.
Changes unrelated to APIs in libutil
may not affect liba in any meaningful way.
And yet, in a SemVer sense we've revved libutil to v2,
a new major version, because there
was an incompatible API change.
So, in some instances SemVer is over-constraining.
And if you've ever had trouble installing something
because of version incompatibilities,
that is not a pleasant thought.
I think it has been a week since I had that annoyance.
And I would give you better than even odds
that if we just made it shut up
and turned off all of the version checking,
it would probably work, very annoying.
But assuming that you can identify
what breaks your users, and assuming that you're okay
with that slight over-conservatism,
this does give us a standard way
of identifying things that will work together.
Note what SemVer doesn't say.
This is not providing any form of proof
that your dependencies can work out.
It is just saying that if there is a set of versions
for your dependencies that satisfies all stated constraints,
you can, in theory, find it.
And of course, each version of all
of your dependencies comes with its own requirements.
And it is especially important
to note that any dependency tree you see
that fits on a slide is wildly simplified.
And the popular dependencies are very popular.
And that diamonds form at many levels and scales.
And as your dependency graph grows,
it grows quadratically.
That is how graphs grow.
Only a linear fraction of your dependency graph
is under your control.
You are the pink box.
You get to pick the things that
are directly depended upon by you.
You do not necessarily have control over the dependencies
of everything else in your graph.
Do the math.
The chances that SemVer and its potential
slight over-constraining leaves you
with an unsatisfiable dependency tree, grows quadratically.
We are doing better and better
in the open source world at making things sharable,
and we are heading to disaster.
Why do we use SemVer?
It is a compromise position between
nothing ever changes and there is no stability promise.
There is nothing about this that is necessary,
nor is it in any way sufficient.
As your dependency graph expands,
dependence on SemVer is equivalent to,
I sure hope that major version bumps in my graph
happen sufficiently infrequently that this all works out.
It's a hope and a prayer.
What's the difference between a patch release
and a major version?
It gets worse.
That is, what guarantee are you giving
when you bump from 1.3.17 to 1.3.18
that you aren't giving when you go to 2.0.0?
That is put another way, what constitutes a breaking change?
And because I've done quite a bit to change
and update Google's codebase over the years,
I can answer this quite confidently, almost everything.
Breaking changeness is not binary, it is shades of gray.
Almost certainly fine, or boy, we sure hope so.
Adding whitespace or changing line numbers.
Clearly, this should be fine, right?
In any language that has even the
most primitive reflection properties, C++ say,
you can extract: get me the current file name
and line number in this file.
It is theoretically possible for production code
to depend on those properties.
This is the stupidest factorial implementation of all time.
(audience laughs)
Slide code is by its nature, ridiculous.
But you can certainly imagine that spread
over 10,000 lines of actual production code,
something that reduces to this could, in theory, exist.
And practically speaking, if you saw my talk
on tests a couple of years ago,
because log statements tend to include file name
and line number, brittle and poorly designed tests
regularly do depend on file name and line number.
We shouldn't care, but we also should not pretend
that this does not have the potential to break something.
On the more extreme end, removing a public API.
This is clearly a breaking change.
Everyone would agree with this.
This is the type of thing that everyone knows,
recognizes a break, this is not contentious.
But at the same time, if you've had an API
that's better in every way for five years,
for 10 years, for 20 years,
do we really have to continue supporting the old thing?
What if you go talk to Chandler,
and you're like, hey, Chandler,
can we just have a tool that updates everyone
from the old thing to the new thing, just provably?
Do we delete the old busted API then?
Even deleting a thing, which is clearly a breaking change,
isn't entirely impossible.
What about things in the middle?
How about adding an overload?
Is this a breaking change?
Oh, in C++, this is a breaking change.
If someone is taking the address of your function,
any function pointers, this is almost certainly
gonna be a break, right?
You have changed the function,
or you've made it ambiguous which function you wanted
out of that overload set, build break.
We'll hit more on this later.
So, is that a breaking change, adding an overload?
What about changing the number
or alignment of members in an object?
Someone could be depending on that, obviously.
Or runtime efficiency.
What about cases where runtime efficiency gets better
for almost all callers, but some of them regress.
Coming up with a definition for breaking change is hard.
No matter what we do in a Hyrum's Law sense,
someone could be relying on every observable.
And as the number of users of your API goes up,
the odds of someone relying on every observable behavior
of your API goes to certainty.
I'm going to include a slide on this whenever possible
until everyone else is quoting it.
As far as I'm concerned, this is one
of the great practical constraints in software engineering:
it basically is just that quote.
We miss you, Hyrum.
And there is, of course, an xkcd for this.
Here, a clever user is relying on a bug
in their keyboard drivers to configure their Emacs workflow,
and then complaining about the bug fix.
So, Semantic Versioning.
This is the thing that we all use
to avoid having dependency problems.
This is based on the idea that we can tell the difference
between a change that breaks our users and one that doesn't.
Even though Hyrum's Law tells us better.
Or if we're being charitable,
it tells us about changes in API,
and assumes that non-API changes don't matter.
SemVer identifies correctly when the versions
that we have installed theoretically don't work together.
It is possible that in practice they do,
if the API that caused the major version bump
is unused or any host of other scenarios.
I could draw a comparison here
to supporting C++ language versions.
As it turns out, C++11 support was not a binary thing.
We might still disagree a little bit
on whether any given compiler really implements it.
Expressing your requirement as, requires C++11,
turns out to not be entirely useful.
Collapsing things that are complicated down
into a single dimension for support
is theoretically satisfying.
It makes great graphs on whiteboards,
but it is not very useful practically.
The SemVer constraints do not promise solutions.
They merely promise if you are able
to obey these constraints, things will work.
The likelihood of over-constraint
that leads to unsolvable constraint problems
grows quadratically.
As the frequency of major number bumps
goes up in your dependency graph,
you are increasingly likely to be stuck.
But when it works, we don't have
to worry about diamond dependencies.
So, that's great.
Those are important because those are hard.
And let's look at really why these are hard,
not what we currently do to work around it.
Diamond dependency, think about you as the pink graph here.
You are not an expert in the code that has to change, liba,
the code that did change, libutil.
Like, why did they change this thing?
I don't know.
And whether or not any change that you happen
to make to liba to fix this works.
It's a whole lot of no idea, dog.
There are three things I've seen that make this work,
in theory, that avoid the diamond dependency issue.
Nothing ever changes over time, good luck with that.
Add a level of abstraction,
we're just gonna draw a bigger box.
That doesn't really scale.
Or the group making the change needs
to update everyone, the monorepo approach.
Or make it trivially easy to do the update even by a novice.
When you phrase it this way,
we have so completely solved this problem
in the last five or 10 years.
We just haven't stepped back, asked why,
looked at why we do the things we do,
and considered the full impact.
Let's look at some of the technical
and conceptual changes in the last few years
and come back to the idea of what would it take
to make upgrades easy for a non-expert.
What happened recently?
Unittests are on the rise.
This is the search result from the Google Ngram Viewer.
This is the frequency of the term unittest
in the Google Books data.
I really wish it went past 2007,
because I'm 90% sure that that graph just keeps going up.
Unittests, right?
That would actually just cover how to verify the change.
If liba just has a nice reasonable, easy to run,
runs everywhere, unittesting suite,
you could make whatever change you want to libutil
and see if, did it still work?
'Cause keep in mind, the changes that we're talking
about are not intended to be behavioral or semantic changes.
Largely, we're talking about fix up the APIs
so that the existing invariance are still holding.
Existing unittests shouldn't have to be updated in general,
assuming they're good unittests.
Go see that other talk I mentioned.
That covers verify the change,
how about where and how to change?
As it turns out, we've talked about this at this conference.
We have published papers on this.
We have a tool that is specifically designed
to be an extensible platform for identifying
and converting an old, bad pattern into something better.
So, that kinda covers the rest of those bullets.
Upgrades are easy if we, your library vendors,
promise not to break you without providing
a tool for the upgrade.
The tool will identify the places that are affected
by the break and perform the upgrade automatically.
Assuming that we're all writing well-behaved code
and being good engineers and providing unittests.
In that world, a complete novice can do the upgrade
for a project they don't work in.
Keep it locally, submit the patch upstream, whatever.
It will not be a silver bullet.
Going back to Fred Brooks and The Mythical Man-Month essays,
tooling is not going to get us double performance,
double productivity.
We aren't going to magically make these gains.
We can't just wave the magic tooling wand
and make all of the problems go away.
These tools solve one particular class
of things extraordinarily well,
but they are often constrained by practicalities like,
the code you wrote is insane.
The deal is going to be this.
If your code is well-behaved,
the tools should be able to enable the types
of refactoring and changes that we have collectively
been most afraid of, the major version bump changes.
Tools are great for doing non-atomic API migrations.
You introduce the new API.
You provide the tool.
Wait for people to apply the tools.
Then if necessary and practical, you remove the old API.
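The introduce-then-deprecate step of that migration can be sketched like this (my illustration; `mylib` and `FormatName` are hypothetical names, not an API from the talk):

```cpp
#include <cctype>
#include <string>

namespace mylib {  // hypothetical library

// Step 1: the new API lands first, alongside the old one.
inline std::string FormatName(const std::string& name, bool uppercase) {
  std::string out = name;
  if (uppercase) {
    for (char& c : out)
      c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
  }
  return out;
}

// Step 2: the old API forwards to the new one and is marked deprecated,
// so warnings (and a rewrite tool) can drive callers over. Step 3,
// deleting it, happens only after callers have migrated.
[[deprecated("use FormatName(name, uppercase)")]]
inline std::string FormatName(const std::string& name) {
  return FormatName(name, /*uppercase=*/false);
}

}  // namespace mylib
```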
With these tools and policies,
I claim that this is enough to solve diamond dependencies.
No API breaks without tools.
Users are well-behaved.
Unittests everywhere.
For source distribution.
If you are dealing in binary compatibility
and things like that, you are off
in a whole other world, and good luck.
But the sad reality is, almost anything
you can do will break someone's build.
So, projects that intend to actually work over time,
should be clear about what they promise
and what they require from you in exchange.
I am trying to get the standard to be better about that.
Stay tuned, no one can actually speak
for the whole committee, there's 100 people there.
But I do believe there's rough consensus that we,
the committee, will put out something more robust soon.
Right now, very few people in this room
know what the standard actually says
about reserving behavior for future changes.
And the standard itself does not say nearly
as much as the committee relies upon.
So, the details in the next couple
of slides may change before the committee
actually commits to something.
But roughly, I imagine that it'll be something like this.
Rule number one, we reserve the right
to add things to namespace std.
You don't get to add things to std.
This shouldn't be surprising.
The standard library is going to expand over time
and we need to not be worrying about
whether it's safe to add things to std.
Of course, step back.
This is a prime example of an engineering rule,
not a programming rule.
If you know for sure that you will never have
to upgrade compilers or standard libraries,
and no one else is ever gonna use your code,
you're never gonna share it, it's never gonna get off
that hard drive, you can add whatever you want to std.
Your build's gonna work, it's fine.
It's not a good idea.
But this is an engineering rule, not a programming rule.
And it's gonna break if you do that,
and you ever try to share that, or upgrade that,
and we're all gonna laugh at you
if you complain about having done that.
There are, of course, some things
in std you are allowed to do.
For your own types, you're allowed
to specialize std::hash<T>.
It does have to be your type.
Specializing a hash for some unhashable std type,
still forbidden.
Specializing the hash, or really anything else,
for a type that you got from someone else's library
is in general just bad form.
You are preventing them from doing the same operation.
It makes library maintenance just nightmarish.
Don't define things for APIs you don't own.
Forward declarations in the standards case
are a particularly subtle and annoying subset
of don't add things to std.
You shouldn't forward declare things.
You shouldn't add things.
Your forward declaration assumes that the details
of that name in std never change.
And it is not your right to enforce
exactly the details of those names
if it's not promised by the standard.
In an engineering system, you can't add that constraint.
That's reserved for us.
So, let me say that again.
You're not allowed to forward declare things you do not own,
at least not if you want your build
to not break under API maintenance.
I pulled this out of boost code from a couple of months ago.
Here's an example.
Don't do this.
If the committee decides to add an additional template
parameter to unordered_set, with a default argument
so that all code that currently builds continues to build,
this four-parameter forward declaration
is now a build break.
Congratulations, you have saved your user
a #include in exchange for a build break.
Not a good choice.
Also, I believe that the standard is going
to come to something along the lines
of assume the call only interface.
This is going to be bits like,
don't take the address of functions,
don't depend on metaprogramming or introspection.
Taking the address of a free function
in namespace std will be a build break if
and when we add an overload for that.
So, you know, if we decide to put an allocator
into some free function in std,
that'll break your build someday.
Amusingly, according to Howard,
the standard doesn't currently forbid this.
You are not allowed to take the address
of member functions in std.
So, you can't take the address of std::string::size,
but there is nothing in the standard,
as far as we can tell, that technically forbids you
from doing this for free functions.
That said, I'm pretty sure I speak for the committee
on this one, we super don't care
if your build breaks because you did this
and we added an overload.
(audience laughs)
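The usual call-only workaround is to wrap the call in a lambda, so overload resolution happens at the call site rather than by naming the function; a sketch:

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Fragile: passing the address of a library function, e.g.
//   std::transform(s.begin(), s.end(), s.begin(), std::tolower);
// stops compiling the moment an overload is added.
//
// Call-only usage: wrap the call in a lambda instead.
inline std::string Lowered(std::string s) {
  std::transform(s.begin(), s.end(), s.begin(), [](unsigned char c) {
    return static_cast<char>(std::tolower(c));
  });
  return s;
}
```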
There's another important class
of usage that the standard doesn't promise
is stable over time, metaprogramming.
This is a stunningly stupid is_even.
Take a look.
It does the obvious thing in 11 and 14,
and becomes super wrong in 17.
std::vector::emplace_back changed
to return a reference to that new element.
Obviously, no one is actually going to write this.
This is merely slide code.
But you can imagine over 10,000 lines of code,
building bad dependencies on metaproperties
of things out of the standard.
A huge number of the properties of the standard library
are detectable at compile time,
and the vast majority of those are subject
to change from version to version.
This code correctly implements is_even in C++11 and 14,
and becomes completely wrong in 17,
because we've changed the return type of emplace_back.
Don't do that.
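A reconstruction in the spirit of the slide (details hypothetical): key "is_even" off a metaproperty of the standard library. In C++11/14, vector::emplace_back returns void; in C++17 it returns a reference to the new element.

```cpp
#include <type_traits>
#include <utility>
#include <vector>

// What does emplace_back return on this standard library?
using EmplaceResult =
    decltype(std::declval<std::vector<int>&>().emplace_back(0));

constexpr bool kEmplaceReturnsVoid = std::is_void<EmplaceResult>::value;

// "Correct" under C++11/14 (where kEmplaceReturnsVoid is true),
// silently wrong under C++17: the parity of n hasn't changed,
// but the standard library has.
constexpr bool IsEven(int n) {
  return (n % 2 == 0) == kEmplaceReturnsVoid;
}
```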
So, that's kind of an example of this compatibility promise
that I think needs to become more clear
for all of the libraries that you're depending on.
If it is being maintained like at a professional level,
they need to tell you what they promise,
what they ask from you in exchange.
If we're talking about engineering projects
and compatibility over time, everything here needs
to be understood as a two-way street.
The underlying library, be it the standard
or boost or anything else, needs to be clear about
what compatibility it promises and what it doesn't.
It is not theoretically satisfying,
but those promises are far more complex
than the simplistic SemVer compatibility story.
SemVer has lured us in with the idea
that we can collapse compatibility down
into one concept, brushing aside the fact
that it both over-simplifies and over-constrains.
It also importantly brushes aside the fact
that without real compatibility promises
and discussions, and without tooling to ensure
that users of the library are well-behaved,
even a patch release in a SemVer world
can break things arbitrarily.
Over the years, we've gone from nothing ever changes,
like early POSIX, to nothing changes
in a fashion that might break things.
Both of which can work but are sort
of annoying and stagnating.
Software engineering is coming up on maybe 50 years,
depending on where you draw the starting point,
and it's gonna go for a while more.
The idea of something staying completely static indefinitely
seems like a bad gamble.
In an attempt to rebel against no breaking changes,
we've glossed over the fact that we haven't really defined
what is and is not a breaking change.
We focused intensely on API breaks
and closed our eyes to everything else,
pretending that SemVer solves things.
But as our dependency graph grows quadratically,
the odds of this becoming unsatisfiable grow, as well.
So, what if there was a world that worked differently?
There is that third option for
avoiding diamond dependencies after all.
Make it always easy to update, even for a non-expert.
Anyone that's lived in a monorepo
has already experienced this.
Anyone that's worked at Google
in the last five years has seen this in action.
The burden is on the people making API changes
to update their users.
We've spoken at length about how we make this work.
It also puts the burden on the people making
the change to deploy the change,
which is a much better scaling property.
So, go back to the start of the talk.
I need to make code inside
and outside Google API compatible.
That is my mission.
We have years of experience with
the "upgrades are easy" variety of things,
as a result of our policies and our monorepo.
And now, I would like to offer that to you.
So today, we're introducing Abseil.
Don't go to the website, I'm still talking.
(audience laughs)
We're releasing common libraries,
with an engineering focus.
We want this to work over time.
Just like the required fields example,
some of what we ask may not entirely make sense
at first, or it may seem arbitrary and weird,
but I promise you over time, this will be the right answer,
or we'll figure out how to change it to be the right answer.
These are the libraries that are used internally at Google.
We soft-launched yesterday.
I have a quarter of a billion lines
of C++ code that depend on this.
I have 12,000 active developers depending
on this day to day.
These are the libraries that are used,
they're about to be used by other
Google open source projects, Protobuf, gRPC, TensorFlow.
This is a big part of why we're doing
this right now, actually.
Things in this codebase tend to get written
against the same set of APIs and
when we open source them,
we don't have a nice canonical supported set of those APIs
to work with for the open source releases.
So, every one of those projects, every one of those teams,
spends a bunch of effort like hacking
their way back out of the codebase.
We're just gonna fix that problem at the root.
We've spoken for years about the internal processes
and our ability to do large scale refactoring.
I don't want to lose that.
Ability to change is important.
I've spoken about what it takes to be sustainable
and I'm very proud of having
gotten Google's codebase to that point.
Ability to react to change is a big part of that.
We also don't want to break you, the open source users.
We have to be responsible and provide easy upgrade paths.
All of that, plus what I've already talked
about for the last 40 minutes,
if you are well-behaved, we won't break ya.
If we have to make an API change,
hopefully it's rare, but we'll ship a tool
to just do the update.
I don't believe in SemVer, and I further
don't believe in precompiling this.
I want you to build from source and build from head.
I want you to Live at Head.
If the libraries you depend on are being updated smoothly
with the policies like what I've described,
and your code is well-behaved,
there is no benefit to pinning to a particular version.
Syncing to the current version of your upstream deps
should be no different than syncing
with the rest of your codebase.
In practice, until we work out better mechanisms
to ensure that all of your code is well-behaved,
it's gonna be a little tricky at first.
But it's a concept, it's not a technology.
The more that you can keep your deps up to date,
the less exposure you have to unsatisfiable dep problems,
the happier you'll be.
Plus, you get all the new stuff.
We're gonna ship some example scripts for Bazel and GitHub
that make it easy to sync from upstream when you want to.
If you're a larger organization with security policies
and the like, that clearly won't apply,
but this is a concept, not a technology.
Get in the habit of syncing regularly.
If something goes wrong when a daily
or weekly upgrade is going, check your code
against our compatibility guidelines.
Maybe we did something wrong,
maybe there are things we haven't called out,
or maybe your code is just being silly.
The more aggressively that we shake those out,
the easier all upgrades become.
It was a lot, lot harder to do these sort
of upgrades that we do internally five years ago.
We've shaken the tree and now it's nice and smooth.
In the end, remember, what I'm doing here
is bringing the APIs and the user experience
that Googlers have lived with for years.
I am not saying it's perfect or the ultimate answer,
but this is what this project needs
in order to still have a chance of maintaining velocity
for those quarter of a billion lines of code.
Let me go into some detail on what you can't do
with Abseil code and still qualify as well-behaved.
Although remember, this is, of course,
an engineering restriction, not a programming one.
If all you need is a moment in time, go ahead and hack away.
Just don't come to us if it doesn't work
when you try to do an upgrade.
We reserve the right to add things to namespace absl.
Everything on this list has the potential
to break existing code just because we added a new symbol.
We will eventually get better static analysis,
linters, clang-tidy checks, et cetera, released publicly
to catch as much of this as is practical.
But I guarantee you everything on this list
is a thing that you can trigger a build break with
when I make an internal change,
or when I add a new name.
We also reserve the right to change implementation details.
The ABI will not be stable.
Do not rely on, I built this six months ago,
and now I'm trying to link against
the version of it from now.
No, we change things all the time.
Don't do that.
Don't depend on internals.
Any namespace with internal in it is not for you.
Any filename with internal in it is also not for you.
Please stay away from those.
I know that everyone's like, oh, underscore underscore
in the standard means that's reserved for internals,
but I can do it just this one time.
No, you can't.
Don't #define private public, not even once.
It's not funny, don't do that.
(audience laughs)
Our include graph will change, please include things.
Includes are not bad.
On the topic specifically of ABI and internals
not being stable, I really recommend you
go see Matt Kulukundis's talk Wednesday afternoon
for why we find those things so important.
If we were constrained by any of this,
then our recent work on changing hash tables
would have been impossible.
Improvement requires change.
So, what is Abseil?
This is a library.
It is zero config, it is a bunch of utility code.
It is the low level nuts and bolts of C++ as we write it.
It is string routines.
It is debugging and analysis.
One of the things I'm really, really excited about
is it will include guidance.
Internally for the C++ library teams,
it's probably been five years now,
we started publishing a Tip of the Week series,
which is sort of a longer form essay
about something that's been coming up.
These are largely compatible with the core guidelines,
just in a sort of longer form discussion.
I've been trying for years to have a smooth venue
for shipping all of that, and now I finally
have a mandate to do it, so that's nice.
We have C++11-compatible versions of standard types
that we are pre-adopting.
So, if you really want stuff out of 17,
but you're stuck on 11, maybe you wanna give it a shot.
And in a couple of cases we have alternative designs
and priorities to the standard.
But the general goal for everything
is support five years back when possible.
So, let's look at what that means.
Five year compatibility and zero config means
we are going to assume the standard,
but work around it if needed.
Here is an example.
The thread_local keyword for C++11
has been implemented by all of our compilers,
everything that we care about, it works fine.
But for some reason in Apple Xcode,
up until Xcode 8, it was disabled.
So, we have a tag in here for Apple builds before 8;
we detect that scenario, and everywhere
that we are using thread_local, we have to work around it.
The other half of this is, five years after Xcode eight
and the fix has been released, we're gonna delete this
and all of those hacky work-arounds.
You get five years, we are planning for the future.
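The detect-and-work-around pattern looks roughly like this (macro name and version threshold are illustrative; the real detection lives in absl/base/config.h):

```cpp
// Detect toolchains where thread_local is unavailable.
#if defined(__APPLE__) && defined(__apple_build_version__) && \
    __apple_build_version__ < 8000000
// Xcode before 8 shipped without thread_local; callers fall back.
#define MYLIB_HAVE_THREAD_LOCAL 0
#else
#define MYLIB_HAVE_THREAD_LOCAL 1
#endif

// Five years after the fixed toolchain ships, this block and every
// workaround keyed on it get deleted.
```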
Abseil is a bunch of utility code, string routines.
I'm really amused in the aftermath of Bjarne's talk,
we were talking about we'd really like
something like a python split.
I'm like, give me 45 minutes and I'll ship this for ya.
So, we have StrSplit.
It takes string-ish things and a variety
of delimiters in the simple case,
and it returns you a vector of string, or does it?
It actually also returns you a vector of string view,
or a list of string view, or your container of whatever.
It does a fair amount of nice magic
to give you a very powerful, expressive, basic concept,
I just wanna split a string, and do it in a configurable,
customizable, very C++ sort of way.
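For contrast, here is a deliberately minimal, non-magic split; absl::StrSplit layers the flexibility (string_view results, delimiter objects, filling any container) on top of this basic concept:

```cpp
#include <string>
#include <vector>

// Split s on a single character delimiter, returning the pieces.
inline std::vector<std::string> Split(const std::string& s, char delim) {
  std::vector<std::string> out;
  std::string::size_type start = 0, pos;
  while ((pos = s.find(delim, start)) != std::string::npos) {
    out.push_back(s.substr(start, pos - start));
    start = pos + 1;
  }
  out.push_back(s.substr(start));  // trailing piece
  return out;
}
```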
You can also go the other direction,
joining strings, a little less magic.
One of my absolute favorite things, StrCat.
This is a variadic unformatted, just convert it
to string and concatenate it.
And it is blazingly, ridiculously fast.
I've had a number of Googlers say,
"You know, if you don't release anything else
"through Abseil but you release StrCat,
"project is still a success.
"It's a really simple way of life improvement for us.
"I just need a string, just get me a string."
It is debugging facilities.
We have leakchecking APIs.
The API is always there, and if you happen
to build your code with LeakSanitizer enabled,
then the API is implemented by LeakSanitizer.
If not, maybe you feel like implementing leakchecking
with some other thing, that's fine.
Same API will be there.
And your code will always build regardless
of what platform you're on.
We have stack traces on supported platforms, of course,
to get back the function pointers for your call stack.
On even more limited platforms, we're working
on symbolizing those so that you get
the names of those functions.
We have nice built-in ties
to AddressSanitizer, ThreadSanitizer.
You can tell what tools we use internally.
And the static thread annotations, I really like those.
We are pre-adopting C++17 types in C++11,
so you have string_view, optional, any, and soon, variant.
Turns out, it is hard to write variant
so that it works on all compilers in C++11.
We'll get there, we're almost there.
"But wait," you may say, "types are expensive."
You have heard Titus speak in other forums
and he says, "Types are Expensive."
So consider, in a C++14 codebase,
you wrote something, absl::optional<Foo>,
returned by MaybeFoo.
Now, you want to update to a C++17 world,
and you don't wanna spell absl::optional,
you wanna spell std::optional, right?
We're in 17, we should use the standard.
Clearly, you should use the standard.
So, you have to go track down every place
that this is called, and update those things
in the same change.
That's kind of annoying.
And then you have to track down everywhere that f goes.
So, you have to update every function that it passes into.
And you can see like in theory,
this might get actually kind of messy and difficult.
It turns out, nope, we planned for this.
We check at build time.
Are you building in C++17 mode?
Do you have std::optional?
If so, absl::optional disappears.
It is just an alias for std::optional.
They are never separate types.
They are at most separate spellings.
In any build for Abseil, you should have one
of std::optional and absl::optional,
and string_view, and any, et cetera.
This means when you use the new standard,
the pre-adopted types just melt away,
and you don't have any restriction
on how you update those spellings.
Both spellings are the same type.
So, you can just update one file at a time
as you feel like doing it, or you can leave it for later.
Per the five year policy, five years
after the relevant standard is available,
five years after C++17, we will ship a clang-tidy check
that updates the spelling everywhere
from absl::optional to std::optional,
because we assume that you have 17,
and then we will delete our version.
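The pre-adoption mechanism can be sketched like this (heavily simplified; names hypothetical, and Abseil's real check also consults compiler and library feature macros):

```cpp
#if __cplusplus >= 201703L
#include <optional>
namespace mylib {
// Same type as std::optional, just a second spelling.
template <typename T>
using optional = std::optional<T>;
}  // namespace mylib
#else
namespace mylib {
// Our own C++11 implementation, drastically reduced for illustration.
template <typename T>
class optional {
 public:
  optional() : has_(false), val_() {}
  optional(const T& v) : has_(true), val_(v) {}
  bool has_value() const { return has_; }
  const T& value() const { return val_; }

 private:
  bool has_;
  T val_;
};
}  // namespace mylib
#endif

inline bool Demo() {
  mylib::optional<int> o(42);
  return o.has_value() && o.value() == 42;
}
```

Because the two spellings are the same type in C++17 mode, callers can migrate one file at a time.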
However, this is important, this is part
of why we can't have you relying
on argument-dependent lookup, ADL.
ADL is one of those surprising bits of the language.
When you call an unqualified function,
the overload set is formed by looking
in the associated namespaces
for the arguments of that function.
And this means, this code works fine in 11 and 14.
I have an absl::string_view,
and I can call unqualified StrCat, and it will say,
"Oh, one of my arguments is from namespace absl.
"I will look in namespace absl.
"Oh, there's an absl::StrCat.
"Done, my build succeeds."
This code obviously breaks when you update to 17,
because absl stops being the associated namespace
of that type; it's std::string_view in 17.
So, if you write your code to depend
on things like ADL, you will break over time.
Don't do that.
We will try to ship static analysis
to prevent you from doing that.
So, don't rely on ADL.
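The trap can be sketched like this (all names hypothetical; a stand-in library plays the role of absl):

```cpp
#include <string>

// A library ships a pre-adopted type plus a helper in its namespace.
namespace mylib {
struct StringView {};  // stand-in for a pre-adopted string_view
inline std::string StrCatLike(StringView, StringView) { return "cat"; }
}  // namespace mylib

inline std::string ViaAdl() {
  mylib::StringView a, b;
  // Unqualified call, found via ADL because the arguments live in
  // namespace mylib. The day StringView becomes an alias for a std
  // type, mylib stops being an associated namespace and this line
  // stops compiling.
  return StrCatLike(a, b);
}

inline std::string Qualified() {
  mylib::StringView a, b;
  return mylib::StrCatLike(a, b);  // survives the alias switch
}
```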
Like I said, we're shipping the guidance,
Tip of the Week series, there's about 130 of these.
I checked very recently, these are cited
about 25,000 times a month internally,
so it's a little important.
We cite them by number traditionally in code review.
So, I know the numbers of probably 15 or 20
of them just off the top of my head.
If I see you doing something that indicates
maybe you don't understand how return values
and copies work in C++, I'm going
to cite Tip of the Week 77.
And this is just kind of a shorthand
for you need to go read that.
Go read through that, report back,
figure out what you've done wrong, et cetera.
So, we're gonna ship the same ones with the same number.
Not all of them still matter in the public.
Some of them don't even matter internally.
So, there will be holes in the numbering system.
I hope you can all get over that.
I don't want to learn another set of numbers.
This is largely compatible with the core guidelines.
And in a couple of places where we find
that it's not quite in line, I really look forward
to working out under what circumstances is
that guidance the correct guidance, or just fixing ours?
And then there's standards alternatives,
how to alienate my friends on the committee.
I promise it's not actually that bad.
So, the standard, design priorities for the standard.
You do not pay for what you do not use.
I think this has been mentioned five times
in talks that I've been to so far this conference.
And further, honestly, I'm not sure the committee agrees
on a whole lot more than that.
(audience laughs)
But as far as it goes, this is a really important
guiding principle for the standard.
And it is a big, big part of why the standard works.
This is a very good thing.
This is the right thing.
A side effect of that is for any problem space
that the standard is solving,
if there is runtime overhead for some design or feature
on a reasonable platform or workload,
we, the committee, will find an option to avoid it.
We will design a different solution.
So, let's look at the example, std::chrono.
Chrono needs to work just as well
if you are in high frequency trading
where the CPU costs of time operations
on nanoseconds actually start to dominate the discussion.
Chrono should also work on an embedded microcontroller
that has 16-bit one-second ticks.
Clearly, the microcontroller cannot afford the precision
and requirements of the high frequency traders,
nor would the microcontroller system
even remotely suffice for the traders.
And so, the C++ solution is, of course, we add a template.
Our compromise is class templates.
By default, the representation for duration
and time_point is a signed integer.
This leads directly to things like,
by default you can't express an infinite duration,
and there is undefined behavior on overflows.
That is what signed integers do.
And this makes perfect, complete sense
in a world where you cannot afford any additional expense,
or the safety checks don't matter.
But this is clearly not the only sensible set
of design priorities, or design constraints.
What if we wanted something other than,
don't pay for what you don't use?
What if we prioritized safety, clarity, ease of use,
and still didn't want to write in Java?
So, we're shipping time and duration.
These are classes, not templates.
They happen to be, I think, 96-bits right now,
but their representation could change.
They have saturating arithmetic.
They have InfiniteFuture, InfiniteDuration.
If you do math on an infinite, it stays infinite.
It's never undefined.
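That saturating, never-undefined flavor of arithmetic can be sketched on a plain integer (the real absl::Duration is a richer, roughly 96-bit class):

```cpp
#include <cstdint>
#include <limits>

// Add two counts; instead of overflowing (UB for signed integers),
// pin at the representable extremes, the way an infinity would stay
// infinite.
inline std::int64_t SaturatingAdd(std::int64_t a, std::int64_t b) {
  if (b > 0 && a > std::numeric_limits<std::int64_t>::max() - b)
    return std::numeric_limits<std::int64_t>::max();  // pins at +inf
  if (b < 0 && a < std::numeric_limits<std::int64_t>::min() - b)
    return std::numeric_limits<std::int64_t>::min();  // pins at -inf
  return a + b;  // in range, so never undefined behavior
}
```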
And asking for what time it is now,
the last time I ran a benchmark on this,
was twice as fast.
We have optimized because we run a lot of stuff.
We also have slight design differences for mutex.
Our mutex has a little bit more stuff built into it.
It has Reader/Writer locks.
So, the standard supports std::mutex and std::shared_mutex.
Once you have built your whole system,
and you discover that you have contention on that mutex,
and that you will be well served by changing it
to a read lock instead of an exclusive lock,
then you have to go thread shared mutex
through all of those APIs to make sure
that you can take a shared lock.
For us, it's just one type.
There is a little bit of overhead on that,
but it does make the ability to use
that feature much more readily available.
You don't have to do API impact and refactorings.
You just have to go find the places where you're
only grabbing it for read, change it to a read lock.
In debug mode, we have deadlock detection built-in.
This is theoretical deadlock detection,
not practical deadlock detection.
It's not, oh hey, you seem to be hung.
It's, we are doing the graph theory
to identify what order your mutexes are taken in
by any given thread, and if one thread grabs A
and then B, and the other thread grabs B and then A,
in theory over time, sometime you're gonna deadlock.
We just notify you of that in debug mode.
And it also has a slightly harder to misuse API.
Notice, if you're using the standard mutex,
you have to have a separate condition variable
alongside your mutex; cv here is a condition variable.
And you specifically have to trigger something
on the condition variable to say, "Hey, done."
absl::Mutex, note the major difference in how this finishes.
With absl::Mutex, when you unlock, the mutex says,
"Is anyone waiting?
"Can I check any of these conditions?"
And then it goes and evaluates the condition.
It's harder to misuse because you can't forget to signal.
Which is kinda nice.
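The std shape being contrasted looks like this: a mutex plus a separate condition variable, and a notify you must remember. (With absl::Mutex, roughly mu.LockWhen(absl::Condition(&ready)), the unlock path re-checks registered conditions for you, so there is no signal to forget; shown here only in the comment.)

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

struct Flag {
  std::mutex mu;
  std::condition_variable cv;
  bool ready = false;
};

inline bool SetThenWait() {
  Flag f;
  std::thread setter([&f] {
    std::lock_guard<std::mutex> lock(f.mu);
    f.ready = true;
    f.cv.notify_one();  // the step that's easy to forget
  });
  {
    std::unique_lock<std::mutex> lock(f.mu);
    // Predicate form handles the setter running first.
    f.cv.wait(lock, [&f] { return f.ready; });
  }
  setter.join();
  return true;
}
```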
On build inputs: there are very, very few flags
for the Abseil build.
We try to intuit everything from compiler inputs,
things that are provided by your tool chain,
not things that are user specified.
One that we do have is ABSL_ALLOCATOR_NOTHROW.
The standard wants to be applicable everywhere,
but is also very hesitant to allow the possibility
of build modes at the standards level.
I'm gonna be pragmatic.
In some cases, the result isn't entirely satisfying.
So, many platforms don't have throwing allocation failure.
If you allocate and there's no memory available,
your process crashes.
Or the operating system OOM killer
starts killing off something else.
However, because the standard allows
for the possibility that all allocation is an exception,
any type that may allocate during
move construction can't be noexcept.
Vectors of all of those types are slower to resize.
Using a vector of a type on a platform
where you aren't actually gonna throw,
every time that you resize that vector, you are paying.
Abseil recognizes that this is annoying,
and provides build system hooks
so that you can define centrally,
does my default allocator throw?
And guidance for how you can tag
your own move constructors accordingly.
Then if you build on a non-throwing platform,
vectors of those types just work better.
And on throwing platforms, it's all still good
and compatible and correct.
We aren't saying that you can't work
on a platform that throws, we're just saying
that many of you aren't on one,
and maybe you should have a more effective,
efficient vector resize.
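The cost comes from the fact that vector regrowth moves elements only when the move constructor is noexcept, and copies otherwise; a minimal sketch of the trait at play:

```cpp
#include <type_traits>

// Tagged noexcept: vector will move this during a resize.
struct NothrowMove {
  NothrowMove() {}
  NothrowMove(NothrowMove&&) noexcept {}
};

// Not noexcept (e.g. may allocate on a throwing platform): vector
// falls back to copying during a resize, paying on every regrowth.
struct ThrowingMove {
  ThrowingMove() {}
  ThrowingMove(ThrowingMove&&) {}
};

inline bool TaggingMatters() {
  return std::is_nothrow_move_constructible<NothrowMove>::value &&
         !std::is_nothrow_move_constructible<ThrowingMove>::value;
}
```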
We are not competing with the standard.
The standard is still much bigger.
These aren't better designs,
these are designs resulting from different priorities
and different engineering legacies.
You should decide which set of priorities works for you.
And the standard is still unquestionably
the right thing for interoperability.
That's a brief introduction for Abseil.
Before we wrap up, let's circle back around
and get kind of the big picture.
With Abseil as an example, let's consider
what makes C++ particularly well-suited, or not
for Live at Head, the title of the talk, after all.
We have some challenges.
We don't have standard build flags or build modes.
The One Definition Rule makes pre-building anything
with potentially different build configurations
very dangerous, very bad.
Taken together, these mean that source distribution
is pretty common.
And while these are technical challenges, in general
every challenge is an opportunity,
and the result of source distribution being
as technically necessary as it is in C++
means that Live at Head is maybe more useful,
necessary, and impactful.
I also think it's the case that C++
is a particularly challenging language.
Maybe it is because I am so steeped in it,
but to me, it feels like other languages provide
fewer surprising mechanisms for a change
to break a remote user.
Like, if we were doing Abseil in Java,
we wouldn't have rules like, don't depend on ADL,
or don't use the global namespace.
I'm not sure there are Java equivalents
that have the same language-level subtlety.
Maybe they do, maybe I'm just not up enough in Java
to know where all of the pitfalls lie.
But by reputation, I do suspect that we're special here.
This all means that we have to be a little bit more clear
about what we require from users.
Also, on the downside, the lack
of a standard build system does make all
of this more challenging than it would be otherwise.
The upside, we don't have a standard package manager.
Something like a Live at Head world is not gonna fly nearly
as well in Python, where the community has completely formed
around this is the way that we
do dependencies and package management.
We do have good and consistent unit testing,
according to a recent Stack Overflow survey.
For C++, if you take Google Test and Boost.Test,
that covers about 75% of unit test framework usage.
That's nice because it means you, the non-expert,
can run the tests for a thing that broke
without having to know a whole lot about,
how do I even run these tests?
It's only a couple of options.
And of course, in order to make tooling work,
we need good tooling.
The things that we do these days
on a pretty regular basis would have been waved off
as clearly impossible 10 years ago.
It is still not perfect.
We're missing off the shelf for some common patterns,
but it's all incremental things that are missing,
not the core revolutionary tech.
It's like we've invented cold fusion,
we just haven't quite figured out how
to rig it up to the power grid.
Or maybe we've got cold fusion,
but it needs a slightly different distribution network.
And so, we need to build that out
and demonstrate demand at the same time.
And so, the call to action.
Concretely, what does it take to push us as an industry
and a community from where we are to a Live at Head world?
I need you to consider engineering versus programming.
When you're doing a thing,
what is the life span of that thing?
Does that need to live indefinitely,
or does that need to live until Thursday?
You need to be aware that there's a difference
between those, and you need to behave differently.
I said in a talk over the summer,
"Computer programming right now is one
"of the only domains I know of
"where there's something like a four order
"of magnitude difference between how short
"something might live and how long something might live."
It is entirely possible that you write a thing
and you are done with it the first time it runs.
And it is entirely possible that you write a thing
and it lives for 50 years.
It is insanity for us to believe that the same rules,
and guidance, and principles apply
on both ends of that spectrum.
Please be aware of what you are doing.
Understand your dependencies.
What does the standard provide?
What does the standard ask of you?
Behave accordingly.
Write well-behaved code.
I need a better term for this,
I really sort of wanted it to be,
up to code, but that was confusing.
Use up-to-date versions of things.
Don't pin to some six-month old dependency
if you don't need to.
Apply tools when provided.
Write tests.
If you can do all of this,
if we can change the way we conceptualize change,
especially this ridiculous idea of breaking changes,
if we can understand the engineering quality
of libraries that we depend on rather
than just the programming sufficiency,
we can find a far better world.
As users, if you help us by living at head when you can,
cooperating with us as we figure this out,
we'll help lead you to a better world.
As library authors, if you follow these same ideals,
tell your users what to expect,
make change carefully and make it easy to work with,
we'll help lead you to a better world
where you can make the changes you desire
without harming your users.
It is going to be a bumpy road to get there.
Change is not easy, but I have been saying for years,
it is important that it is possible.
That is true for code and that is true for communities.
And I hope it can be true for this one.
And of course, Abseil is no different, we'll change, too.
There will be a whole bunch of more stuff that comes.
This is just the initial drop.
My plans go out probably at least three years.
While you are still here, I really recommend
sort of spiritual followup talks.
Matt Kulukundis, I mentioned earlier is talking
about Google and hash tables on Wednesday.
From my team, Jon Cohen, is talking about
what it takes to move to rename a type.
We've learned a little bit about that in the last year.
Gennadiy Rozenthal will talk about ABI issues
and part of why it is so important that you
not depend on a pre-compiled representation of this code.
And also today, I will give a Hands on With Abseil,
assuming I'm still conscious,
which should answer some of your additional questions here.
I cannot even remotely claim the lion's share
of the responsibility for this.
This is the work of many fine people.
Thank you all so very much.
And volunteers not quite on my team, some former team,
and for those of you from my team
that are watching this remotely, I recognize
that you have more important things going on.
Thank you for bringing up the next generation of Abseilers.
And with that, I will turn it over
to audience for questions.
- [Man] It's well known that the Google codebase
does not support the use of exceptions.
- True.
- [Man] How well does Abseil play with the exceptions?
- It is aspirational.
(audience laughs)
So, by that, I specifically mean I'm not aware
of any glaring places that it's a problem.
If you can identify, oh, this obviously needs
to be exception safe and currently isn't,
we'll definitely accept that patch.
A low-level library needs to be supportive of all comers.
That said, there are places not all exceptions are smart.
If you decide that your hash functor is gonna throw,
I'm just gonna tag it noexcept and laugh at you.
There are some things that just
are completely preposterous and not a good design.
Like exceptions should not work everywhere.
They should work in sensible places,
and we'll try to make that work.
But it's gonna be sort of a balancing act,
'cause yeah, it's not our area of expertise,
so we'll have to learn from you.
- Hi, thank you for a great talk.
My question is about compatibility and new features.
So, unified call syntax, also known
in different languages as extension methods.
What kind of concerns do you think there are
on implementing them, or how could the standard
move forward to actually
have something like extension methods?
- So, my concern with unified syntax initially is
specifically, I don't want you extending my types.
Because if I then need to add that same API,
I'm constrained by what you did,
or I'm breaking your code, or it's a semantic change.
I would be perfectly accepting of unified syntax
not as arbitrary user gets to arbitrarily extend my library.
I would be perfectly happy with unified syntax
as you can have extension points
within the same modular boundary, things like that.
But I need control over how you use my library.
And the initial proposals for unified syntax,
I was concerned.
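The concern about users extending a library's types can be made concrete. `lib::Buffer` and the free `size()` function below are invented for illustration; the sketch shows the status quo that the answer wants to preserve:

```cpp
#include <cstddef>

// lib::Buffer and the free size() below are invented for illustration.
namespace lib {
struct Buffer {
  std::size_t n = 0;
};
}  // namespace lib

// A user "extends" lib::Buffer with a free function. Today this must be
// spelled size(b). Under a unified call syntax, b.size() could also find
// it, so if lib later adds a member size() with different semantics, the
// meaning of existing b.size() call sites silently changes; that is the
// constraint on the library author described above.
std::size_t size(const lib::Buffer& b) { return b.n; }

std::size_t buffer_size_demo() {
  lib::Buffer b{3};
  return size(b);  // free-function call; b.size() does not compile today
}
```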
- [Man] Okay, thank you.
- Yeah.
- Hi, you talked about the kinds of guarantees
that Abseil makes and what you think the standard will make.
And you mentioned the notion of a call-only interface.
- [Titus] Yes.
- It's quite common in C++, if you write generic code,
to have to check these sorts of things.
So basically, things like if you wanted
to check is_constructible as opposed
to actually constructing something,
that wouldn't be covered.
So basically, if you're an application developer,
but you have to write some piece of generic code
for your company, for your use case,
you have no guarantees at all from the standard or from Abseil.
- [Titus] True.
- At that point, it sort of makes it tempting to ask,
why depend on Abseil if your generic code
doesn't have any guarantees? Just keep writing
your own generic code all the way down.
- True, and if you want control,
it's on you to control it.
But that said, there's a difference
between no guarantee and it's not gonna work.
If you are checking if it's constructible
in order to construct it, and then you construct it,
that's one thing.
If you are checking if it's constructible,
and then you go do a computation or some random other thing,
that's a completely different,
that's a horse of a different color.
And the issue becomes, it is easy for us,
far easier for us to specify there's no guarantees
if you depend on these sort of metaproperties of things.
Because listing off the things
that you could potentially use the existing behavior for
in a way that's likely to be fine in the future,
I don't know how to come up with that list.
And when in doubt, like a big part
of why today is a big deal is I don't get to live
in my little hobbit hole of internal Google anymore.
Like, I have to participate with the rest of the world.
So, if you have like, hey, I would like to depend on this,
you could ask, right?
Like send us an email, like we'll have a discussion.
It'll be like, no, you're completely insane,
or yeah, I can't actually imagine how
in practice that's gonna be a problem, you're fine.
But for things as complicated as programming,
it's awfully reductive to try to just narrow it all down
to something that can be pithily written in a list.
In practice, asking is a really good answer.
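The distinction drawn above, checking a metaproperty only to guard the operation you then perform, can be sketched like this. `make_or_default` is a hypothetical helper, not an Abseil or standard API:

```cpp
#include <type_traits>
#include <utility>

// Hypothetical helper, not an Abseil API. The is_constructible check is
// used only to guard the construction that immediately follows, which is
// the "likely to be fine" usage described above. Branching on the trait
// to drive some unrelated computation would be the kind of metaproperty
// dependency that carries no guarantee.
template <typename T, typename... Args>
T make_or_default(Args&&... args) {
  if constexpr (std::is_constructible_v<T, Args&&...>) {
    return T(std::forward<Args>(args)...);
  } else {
    return T{};  // fall back to a default-constructed value
  }
}
```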
- [Man] Thank you.
- Yeah.
- Hi, as far as I know, Google has a very large codebase.
On some of your slides, I saw a tool name like ClangMR.
Do you really store your code in a MapReduce,
and run a MapReduce around Clang to do refactoring?
- So, the codebase is, I mentioned, about 250 million lines
of Google-authored C++ code.
It is big, yes.
The original refactoring tools
that Chandler initially cobbled together
to give my team, these were MapReduces.
The codebase itself is not stored in the MapReduce,
I don't think that's quite what you meant.
But you do basically parallel build
and run the tool over each translation unit separately,
and then reduce it down into like,
to get rid of the dupes and headers and things like that,
so you can generate a set of edits across code base.
ClangMR is not gonna be what we rely on,
because it's MapReduce infrastructure.
Two things: one, the MapReduce infrastructure
isn't quite the right thing for the rest of the world.
And two, largely, we wind up doing that
because our codebase is freakin' bananas huge.
For most people, write a clang-tidy check
and run it overnight on your codebase.
It should be fine.
- [Man] Thanks.
- First of all, thank you for a great talk.
I'm an author of a third-party library and I'm interested
in how to make upgrades easy, as you said.
Specifically, what do you suggest should be in the tools?
Should I provide tools for my users
to upgrade quickly, and how should they work?
- So, is the question, how do they upgrade like
what version of the library is checked in,
or how do they upgrade their usage
of the library to an incompatible API?
- [Man] The first one, what do you suggest should be
in the tools to make it--
- I don't think it's your responsibility
to provide tools for your user changing their version.
The user should know that you are providing something
that will be stable and accurate over time,
and should just upgrade aggressively.
Live at Head.
- [Man] So, what is the purpose of the tools actually?
- The tools are for when changes
to the user's code are necessary in order
to react to a changing implementation
or API when they do that update.
Their build will break, or their code
will not execute correctly, as a result
of some change in their underlying dependency.
They run the tool, it fixes their codebase, and they move on.
- I see. - Yep.
And the point is we have a quarter
of a billion lines of code already depending on this.
When we make a change, we have to automate it.
- [Man] Okay, thank you very much.
- Yep.
- Do I see both snake_case and PascalCase in your public API?
- Yes, so we use snake_case when we are specifically trying
to match something out of the standard.
And we use PascalCase, or CamelCase, because that's
how the vast majority of our code is written.
- All right, well, it's already ruined.
No, no, it's okay, it was just a joke, thank you.
- Yeah, we have an initiative at my company
that is quite similar to this library,
where we implement the future standards
and some proposals, as well as
more general tools that could be used
in all parts of our organization.
Actually, the three people who made most
of that library are here today.
But I had one more technical question
regarding your ADL requirement.
Do you rely on ADL internally in your library?
- Generally not.
There may be a couple of places that snuck through,
but I believe that we have explicitly qualified everything,
not fully qualified, but everything is tagged with absl::.
Partly because we want the implementation,
and the usage and testing of our code
to look like how the user's gonna use it.
So, we know more fully like, does that look right?
So, yeah. - Thank you.
- You gave an example of, for example,
a pre-Xcode workaround
which you have in the library.
And you said that you have a five-year rule,
after which you remove it.
I wonder how you are going to maintain that,
because there must be an enormous number of places
in the library where you have that kind of stuff.
Is it just human-readable,
so that when the time comes,
you feel free to delete it, or do you have
some automated tooling around, hey, this date has passed
and this code is automatically eliminated?
- There's not enough of those that automation
is gonna ever pay off.
I think there's probably, I don't know,
20 or 30 sort of work arounds that we've got
in there right now for those sorts
of random, odd, little technical glitches.
And spending any significant time
on automation to work around 20 or 30 things,
it's probably not worth it.
It's much easier to just have a reminder set
for five years from now and have it pop up
in your inbox and be like, oh, hey,
I get to delete 100 lines of funky code today.
- [Man] Cool, thank you.
- So, I was curious about the ADL thing.
A lot of operator overloading uses ADL,
and I think you kind of covered it in the talk,
but I want to be able to use cout on a string view.
And that uses ADL, so...
how do you expect to solve that?
I just wanna do cout-- - You don't need
to use ADL for function invocations.
Everything's gonna work fine
in the normal, expected fashion.
Like cout's gonna work just fine.
It is really just unqualified calls,
especially into Abseil, where things become a problem.
- [Man] So, it's not like don't use ADL,
it's like don't use ADL for function,
or don't rely on ADL for function calls.
- Don't rely on ADL when you're just being lazy.
If it's an API that specifically necessitates ADL,
like iostreams, then yeah, that's sensible.
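The qualified-call guidance above looks like this in practice. `mylib`, `Widget`, and `Id()` stand in for absl and its APIs; they are invented for illustration:

```cpp
// mylib, Widget, and Id() stand in for absl and its APIs;
// they are invented for illustration.
namespace mylib {
struct Widget {
  int id = 0;
};
int Id(const Widget& w) { return w.id; }
}  // namespace mylib

int use_widget() {
  mylib::Widget w{42};
  // Qualified call: stays correct even if other Id() overloads appear
  // in scope later. An unqualified Id(w) would also compile today via
  // ADL, but that is the "lazy" reliance the answer warns against;
  // reserve ADL for APIs designed around it, like operator<< for
  // iostreams.
  return mylib::Id(w);
}
```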
- [Man] Okay, cool.
- Hi, I got two questions.
Number one, is Windows a supported platform for you?
- Yep.
- All the way, including stack trace and stuff?
- [Titus] Nope. - Okay.
(audience laughs)
- We'll try, it is not our area of expertise.
One of the things that I really like about all of this is
that this is the stuff that's running in production.
This is battle tested.
But much like any given army in a battle,
it is battle-hardened against the things
that it has experienced, right?
Like the crusaders: battle-hardened,
but that won't matter when the Martians show up.
So, I'm sure that there are gonna be things
that surprise us when we branch out into deeper
and more lasting ties onto other platforms.
And we'll try to work around it.
- Okay, and the second question is,
now you advocate providing tools for library vendors
so that they can upgrade their users.
Does that mean that every one of us has
to provide like a clang plug-in or something,
or how do you see this working?
- I think the tooling story is evolving.
For us, like our bread and butter is clang-tidy plug-ins.
I really hope that this spurs more innovation
in having other platforms that do that.
I recognize that there are going
to be some Windows codebases that still
just can't build with clang for whatever reason.
And we're gonna have to negotiate all of that.
But a big part of all of this is,
a lot of what we're currently doing is
in theory, one thing, and in practice,
the world is a very different place.
And so, I'm just gonna try to push it all down into,
let's just be practical, let's just do these things
in practice, and see how it works.
And if it's insufficient, then we'll know where
to invest more resources going forward.
- [Man] Okay, thank you.
- Yeah.
- Okay, so you said it would really hurt if we pin ourselves
to a particular version of a library,
and that building from source is great,
but what about build times?
- Buy another CPU.
Use AWS.
I don't know, like this is also a place
where our priorities are going to change,
are going to show through.
Our experience is we have a really massive build farm.
And I don't care about you adding an extra include.
And I know that that's not everyone's experience.
It may turn out that that makes it unpalatable.
But practically speaking, build times are an annoyance
that is also a solved technical problem;
it is a problem of resources.
Maintenance of a library and an ecosystem
is not a solved technical problem,
and we need to claw back some of that
in order to solve it, or have a prayer of solving it.
So, we'll see.
That said, I can build everything in Abseil
on my laptop in 30 seconds.
- So effectively, you are saying that build times
are not at the top of your list of priorities.
- No.
Build times are a programming problem.
I'm trying to plan for how does this work
for the next 10 years?
That's an engineering problem.
- [Man] Okay, thanks.
- Hi, I've got a quite similar question.
Even if we can resolve the build time problem,
because, I don't know, we get some infrastructure
that can handle it,
there's still the issue of software
that we release not daily but every three months
or six months, and that we have to support
for five years or even 10 years.
Can we still Live at Head with that?
- Maybe, it's gonna be very dependent on exactly
what promises you're making in that code.
It's gonna be dependent on, are you exporting Abseil types
through your APIs, because that's a whole different world?
There's not a single, simple answer,
because fundamentally, all of these things
are really, really complicated.
I say often, "Software engineering is solving
"the hardest current solvable problems."
Because everything else is easy, it's already been solved.
And everything that isn't quite solvable
is off the other end of the cliff.
So, everything that we're doing here is solving the stuff
on the ragged edge, and we're trying to build better
and better infrastructure for leaning further out the cliff.
It's not gonna be easy.
There's not an off-the-shelf answer.
- All right, but what about, I don't know,
I have a bug that I have to fix and rebuild
for a five-year-old version.
If you do not provide versioning,
how do I get your stuff and rebuild everything
from five years ago?
- You should, in that instance,
probably pin to a five-year-old version
for your five-year-old maintenance thing.
And practically speaking,
I recognize that not everyone is going to be able
to pull off the whole Live at Head thing.
One of the things is, as protobuf, and gRPC,
and et cetera start depending on us,
rather than have willy-nilly releases
of that growing dependency chain,
what I'm gonna do as a practical nod is every six months
or so, we'll just tag whatever's currently there and say,
"We'll support this for a couple of years
"if anything important comes up."
I don't recommend that, it doesn't scale.
But practically speaking, this is a big ship.
It's gonna take a while to steer it.
- [Man] Yeah, I'm on board with that, thank you.
- Yep.
And I think we've got time for one more.
Oh, Eric.
- [Eric] Yeah, hi.
- For anyone else that was in line,
I will do a bunch of questions in this afternoon's session.
So, please take a look at the library,
and come with questions and we'll talk then.
But, Eric.
- So first, congratulations on releasing Abseil.
- [Titus] Thank you.
- So, package managers, I think I heard you say
that you think the lack of package managers
is actually a boon to the Live at Head philosophy.
I tend to be of the opposite opinion
that the lack of package management
is one of the things that holds C++ back.
So, did I hear you correctly,
and do you see that the lack of package management
as a good thing, or would it be possible
to have package managers that work
in the Live at Head world?
- Yes, I believe there would be.
There's definitely, in theory, package management systems
that work better for Live at Head.
If we just do a thing that is, hey,
here's SemVer again for C++ code,
and if you use this tool chain, here is precompiled code.
And otherwise, I guess maybe we have source distribution,
and maybe we have compatible build systems,
and I don't know what else.
That's not gonna solve anything.
That's just gonna lead us more down this path
that I don't think works.
And I think there is a solution
where we make it clear, where does the code live?
We make it clear how to pull it.
We make it clear like what is current?
And that would be great.
I think that would be a much better world.
So, it's not that all package management
is inherently bad, it's just that
the most likely scenarios seem like bad ones.
- [Eric] Thanks. - Yeah.
Thank you all.