Good morning, everyone.
Good morning.
Well, I didn't expect an answer, but let's try that again.
Good morning, everyone.
Good morning.
Not used to that, but thank you.
I'm Rafael Reif, MIT's president.
And I'm delighted to welcome you to this discussion
of the ongoing work of the MIT task
force on the Work of the Future, a topic of great importance
to our whole society.
And for that, I thank you for joining us
and I'm delighted to see so many of you here this morning.
But while I'm opening this conversation,
the credit for this interim report
goes to the members of the task force.
In particular, I want to thank task force co-chairs David
Autor and David Mindell, as well as the executive director, Dr.
Elizabeth Reynolds.
Today's report reflects their research and their leadership.
And we are fortunate to hear from each of them today.
This morning's program will also feature important voices
from public higher education, labor, and industry.
And our moderator is one of the nation's leading reporters
on economics.
In a few moments, Liz Reynolds will introduce them properly.
But I want to convey my deep thanks to each of them
now for enriching this event with their presence, insight,
and expertise.
For an MIT delegation to hold a briefing in DC
about policy on business practices
is not an everyday occurrence.
So let me offer some context.
In 2017, we saw that the American people
were increasingly worried about a future in which robots
and computers could perform many human jobs.
And the worst part seemed to be the sense of powerlessness,
the worry that automation is coming to get us automatically.
As president of an institute with technology in its name
and national service in its mission,
I had to take those concerns very seriously.
And there were large open questions.
Will this technological revolution
be like those in the past, with many jobs lost
but many better jobs created?
Or will this time be different?
Will the changes be so rapid and far reaching
and their impact so uneven and disruptive
that they threaten the stability of our society
itself?
And above all, what, if anything, could
we do to shape the outcome?
No matter who I asked, I heard no convincing answers.
There were many strong differing opinions, but relatively little
research.
So from my point of view, the next step was obvious.
I asked some pretty smart people from across MIT
to work with smart people from many other places
to do their best to figure it out.
So that's how the MIT task force on the Work of the Future
got started.
And we will hear today what they have learned in the past year
and what they hope to clarify in the year ahead.
Before we dive into their findings,
let me offer this last thought.
We will hear a great deal today about how society
can shape the work of the future through,
both public policy and private business practices.
Those levers are important.
But I believe that those of us who are technologists
and who educate tomorrow's technologists
have a special role to play.
This has become especially clear as we're
launching the Stephen A. Schwarzman
College of Computing, which we expect to accelerate progress
in fields like machine learning and artificial intelligence.
In the past, many technologies that MIT has championed
involved machines acting in society.
By contrast, technologies like AI
represent machines acting upon society.
Technologies embody the values of those who make them.
And this creates a space of responsibility for us,
to embed the study of ethics, culture, and society
in every aspect of the new college.
It means that while we are teaching students
in every field to be fluent in the use of AI strategies
and tools, we must be sure that we
equip tomorrow's technologists with equal fluency
in the cultural values and ethical principles that should
ground and govern how these tools are designed
and how they are used.
And it also means that we are strongly committed to helping
the United States maintain its leadership in these advanced
technologies.
Because those nations that act now
to help shape the future of AI will shape the future for us
all.
One thing is abundantly clear from today's report,
automation will transform our work, our lives, our society.
Fortunately, the harsh societal consequences
that concern us all are not inevitable.
How we design tomorrow's technologies and the policies
and practices we build around them will profoundly
shape their impact.
Government, industry, labor, and educational institutions
at every level all have a vital role to play,
whether the outcome is inclusive or exclusive, fair
or less fair, it's up to us, it's up to all of us.
In this work, those of us leading, benefiting from
and educating the new leaders of this technology revolution,
must help lead the way.
This is not someone else's problem.
It's up to those of us advancing new technologies
to help make certain that they do not
wind up damaging the society we intend them to serve.
Getting this right is one of the most
important and inspiring challenges of our time,
and it should be a priority for everyone
who hopes to enjoy the benefits of a society that's
healthy and stable because it offers opportunity for all.
I'm deeply grateful to the task force members for their latest
findings, to advisory board members for their guidance,
and to all of them together for the ongoing efforts
to pave an upward path.
Thank you.
[APPLAUSE]
It is now my pleasure to turn the morning over
to the executive director of the task force Dr. Elizabeth
Reynolds.
Liz.
[APPLAUSE]
Thank you, President Reif.
And also, thank you to the MIT DC and News
offices for their collaboration as we
release this interim report.
It's so great to see many friends and colleagues here
from the DC area, people we've worked with over the years.
Thank you, again, for joining us.
This report represents the work of many over the last year
plus.
It is a synthesis of our research
and our knowledge to date on the relationship between technology
and work, and what we see on the horizon for both.
As President Reif said, there is anxiety and uncertainty
in society manifested in multiple ways,
despite a relatively strong labor market.
These feelings are not unfounded,
but grounded in reality.
Technological progress, essential to growth
and improving living standards, has delivered neither
the productivity growth we would hope for
nor shared prosperity for all--
far from it.
For those with less than a college degree, whether two
or four year, which represents approximately 40%
of our current labor force, there
has been little rise in wages over the past several decades.
The economic progress of minority workers has stalled.
And less economically vibrant places
are being left behind as their populations decline and grow
older.
Technology, of course, is not the only factor
that has played a role; so, too, have China's rise,
the weakening of institutions that support workers,
and of public policies that would buffer market forces.
But technology has clearly played a role
in exacerbating inequality.
Employment polarization has increased
and the introduction of what our colleagues call
so-so technologies has potentially
led to the displacement of workers but only
modest productivity gains.
Yet, our challenge going forward is not the quantity of jobs,
but the quality of jobs.
Demographic forces, one of the few things
that we can confidently predict going forward,
will create labor scarcity, not labor abundance.
Indeed, employers already cite automation
as a response to today's shortage of workers.
While the challenges we face are longstanding and urgent,
the adoption of these new technologies, as we see it,
is not occurring overnight as popular discussions
would suggest.
We see robots, indeed, moving out
of factories and into retail, warehousing, farming,
medical services, and a number of different areas.
They will have many benefits and they will certainly slowly
displace a lot of relatively low paid work.
AI's broader impact on work is more uncertain.
Its most successful application has
been in machine learning, which differs
from previous waves of automation in that it
applies to high- as well as low-education jobs
and can learn as it goes along.
But today most applications apply at the task level,
automating part of an occupation, not
a whole occupation.
The example we often use is radiologists.
These effects are unfolding slowly
and they vary across industry and firm size.
Autonomous vehicles are a great lead use case.
This is an area where we've seen tremendous excitement,
investment, as well as anxiety.
But the industry has been ratcheting back
its expectations in the last few years.
AVs, in fact, from our perspective,
represent technology that will largely complement rather than
entirely replace human drivers for many years to come,
except in special settings.
We argue for tempered optimism.
We can create better work and broadly shared prosperity
as other peer countries have done.
These outcomes are not assured, but they are achievable.
And technological advances make them more so.
The crucial link is the institutions
that mediate between technological progress
and our desired labor market outcomes.
That brings me to our recommendations
which fall into three broad categories.
First and foremost, developing skills of the future.
Of course, developing the right skills for the workforce
is critical, particularly for those we deem most at risk:
those who lack strong technical training or two- to four-year
degrees.
We speak first to building on what works.
We have great examples of successful programs
at the community college level, work based sectoral training
as well.
The dissemination of these best practices and building programs
to scale should be a high priority
and we will have some of our panelists speaking
to this later.
We also need to encourage the innovation and experimentation
that we see emerging in a number of areas,
online learning, new non-degree credentials,
and adult learning.
But we really need to evaluate rigorously
across all of these areas.
Finally, we speak to training for middle skill jobs.
Even as middle skill jobs are declining in the aggregate,
we will have significant demand in new health care jobs
as well as traditional production and trade jobs.
We underscore that the supply side approach to this problem
is not enough.
If we skill them, the jobs will come.
But this is an inadequate response.
We need to build on other policies and institutions
as well.
To that end, our second area of recommendation
speaks to the rebalancing of incentives toward human capital
investment.
This first applies to tax policy.
While we are very supportive of investments and incentives that
speak to capital investments, we really
think there's a lot more that can
be done to support human capital investments
and ways in which we can equate, for example, an R&D tax
credit to something we might do for investments in workers.
This also applies more broadly to the US practice
of shareholder capitalism where maximizing shareholder value
has been the sole purpose of the corporation.
We need to return to more of a stakeholder perspective
where workers and communities are important constituencies
along with shareholders.
We speak also to strengthening workers' voice
and representation.
Finally, and perhaps not surprisingly from MIT,
we speak about reinvigorating investments
in new technologies that both complement workers
and foster innovation.
While new technologies are often readily available to firms,
their successful adoption and implementation
requires organizational innovation.
This often involves engaging workers
in the adoption of the new technology
and redesigning work itself, a complicated process.
We need to encourage and incent firms
to learn from best practices in these areas.
We also need to reinvigorate US leadership
positions in AI related technologies
through R&D investment.
Other countries are surpassing, or are
close to surpassing the US in this dimension, particularly
China.
This is one area where we cannot afford to fall behind given
the range of applications of these technologies.
As President Reif said, by leading
in the development of them, we also
help shape their trajectory toward broader goals
for the country.
These recommendations don't speak
to all of the important areas that need to be addressed.
Many of you in this room are already
working on those topics.
No one policy or action alone is going to set us
on the right trajectory.
We look forward to working with you here in DC
as well as across the nation at the regional level
to refine these ideas toward more actionable policies.
In summary, we have an opportunity
to bend the arc on both technological development
and institutional reform to lead to better outcomes for society
at large and build a better foundation for shared
prosperity in the 21st century.
With that brief overview, let me now turn to our panel
and ask them to take their seats.
Everyone in the room should have biographies of our panelists
and we will be collecting questions
for the panelists on cards that you might
have received as you came in.
I am going to briefly introduce the panelists
as they take their seats.
On my far right, David Mindell, co-chair
of this effort, and professor of the history
of engineering and manufacturing, and of
aeronautics and astronautics.
David Autor, also co-chair, professor of economics.
We got a switch.
Juan Salgado, chancellor of City Colleges of Chicago.
John Kelly, chief technology officer of IBM.
Liz Shuler, secretary-treasurer of the AFL-CIO.
So let me now hand over to our moderator, Eduardo Porter.
Thank you.
Hi, good morning.
[APPLAUSE]
Good morning.
In a way, I find it, like, kind of remarkable that this meeting
is even happening.
When I started writing about these things, about economics,
a few years ago, I couldn't have imagined that an economist looking
at the kind of horizon of possibilities brought
about by the technologies that we see today, the machine
learning, and the artificial intelligence, and so on, could
have concluded anything but productivity
is going to rise at a fast clip and prosperity
is going to increase.
You know?
But that's not how the public saw it.
The public for some time has been
looking at that same technological landscape,
looking into the future, and concluding,
I might lose my job.
What they see when they look at technology
is they see skills rendered useless
by the rise of the machine.
They see very little hope that their wages are ever
going to rise.
And what's really super interesting,
and I think that's what I find remarkable about this meeting,
is that this polarized reading of the landscape
is kind of converging.
And the way it's converging is that it's
the experts that seem to be moving more
in the direction of ordinary people's view of what's
going on.
And maybe the best way to put it is that the economists seem
to have come to acknowledge that prosperity on average
is not really the most relevant measure of a society's
well-being.
How that prosperity is distributed
is perhaps even more important than how fast
it grows at the mean.
And I think that this panel is a product
of this kind of awareness.
How can we say we've solved the economic problem,
to steal a term from John Maynard Keynes, when
it remains so really unsolved for so many of us?
So as I understand it, the purpose of this task force
is kind of to help us identify some of the tools that
might help spread this prosperity more
broadly and more equitably.
And, of course, it's not a coincidence
that they focus on work.
The labor market is the most powerful technology
that we know to convert our endowments, our brain,
our brawn, into the stuff that we need to live,
like food, and housing, and health care, and so forth.
And so today we're going to hear about some early ideas
because this is just the beginning of a process
about how the market for work might be improved
to spread prosperity more broadly in a way that benefits
us all.
So you've heard the panel.
You've got two Davids: David Mindell, David Autor,
Juan Salgado, John Kelly, Liz Shuler.
It's a great panel.
And I'd like to start with David Autor by raising a point that we
just heard a moment ago from Liz, which is that,
in a way, it's kind of a strange moment to be worrying
about the future of work.
The labor market, from a certain perspective, looks fine.
We're at full employment by standard definitions.
Wages have been rising at the fastest
clip since the second half of the '90s,
since the first dotcom boom.
So why worry now?
That's a great question.
And that's what we ask ourselves, because [INAUDIBLE].
Economists-- the zeitgeist has not caught up with the data.
But we think that the public has. To be honest,
I think that's often the case, in fact-- economists,
we're humbled to realize that we often discover things
that people have known for years and then
we get to publish them.
But if you look at the last 80 years of economic history,
if we look at the first three post-war decades,
we saw rapid productivity growth and rapid even wage growth,
and living standards were rising across the board.
If we look at the period from kind of 1975 forward,
productivity growth has been slower for sure,
but the main difference between these two periods
is the distribution of that productivity
growth across individuals.
The aggregate GDP per worker has risen about 70%.
The median earnings of a US worker
have risen about 12% in that time.
And so there's this disconnect between productivity growth
and the experience of the typical worker, not
the average worker.
The average has largely kept pace with productivity growth,
at least until 2000.
And so people are right to be concerned,
not because the technology won't yield innovations,
benefits, things that will raise incomes,
but whether people will be beneficial stakeholders
in that process, or whether they will be displaced without being
on the winning end of that.
And we think there is an opportunity as well
as a challenge, because looking back at the last 40
years of history, we see we could have done better,
some countries did better, all of them
faced the same headwinds, none of them
did as well as they did in the immediate post-war period,
but there's many things working in our favor.
One of them is, in fact, demographics.
We are entering a period of labor scarcity.
Our workforce is aging, the growth rate
has slowed, educational attainment is rising rapidly,
which augurs well for productivity.
It also means there are going to be fewer people available
who want to do trades, construction, and service work.
And so employers are going to have to work harder
to attract those people, but we need
to invest in the skills and the institutions that
translate those opportunities into a well functioning labor
market.
And we strongly believe that a well functioning labor market
is the foundation of a healthy middle class and a well
functioning economy and political economy.
And so we are focused on improving work,
not on redistributing income, but on making work work
for as many people as possible.
Thanks, David.
So I'll hand it over to the other David, David Mindell.
Hi.
Again, keeping to this theme of why worry now,
if you could talk to us a little bit about what's
different about this technological revolution,
because we've had them in the past,
and we've always been worried, and the worry
has mostly proven unfounded.
The jobs have been generated at fairly high wages.
And so the people who lost jobs find new ones.
But what's different now?
So obviously the notion of artificial intelligence,
it has a kind of human ring.
It helps us redefine what's human.
Previous technologies very often were mechanical--
we often talk about mechanization
in the 19th century-- and affected the human body,
replacing things that people were doing with their body.
And the sense, certainly in the public
discourse, is that artificial intelligence is coming for our brains, which
also means, by the way, white collar work as well
as blue collar work.
It's technology that can learn, that can draw experiences
from the world and bring it in, and it
draws on age old fears going back
to The Golem or Frankenstein about our creations
getting out of control and becoming better
than we are at certain things.
And those elements are all present.
I mean, one of the paradigms we started with in the task force
was a billboard that Liz drives by on the Mass Pike every day
going to work that says, the robots
can't take your job if you're retired.
And it's an advertisement for a retirement--
a pension fund company.
But that's a pretty good encapsulation
of the kind of public feeling.
And there is this sense that the robots are coming.
And as MIT, we really ask ourselves,
what do we have to say that's different
and what do we need to say, given Rafael's charge to us,
that MIT has a responsibility to talk about here.
And one of those is technology is not
something that happens to us.
Technology is a human product.
It's something that people create.
Many of those people are MIT graduates, but many of them
are also shop floor workers who are innovating in processes
and innovating in things.
And it's not always how people see that technology.
So the future is not automatic.
It will not take shape as it comes to us.
Now, AI is a little bit special, or at least
it has claims to be special, because it does learn,
it does adapt to the world.
And, of course, today's AI technologies
that so far are most successful are a very particular subset
of what scientists consider AI, having to do with machine
learning and neural network type models,
which have been around for a long time,
but in the last 10 years or so the availability of data
and compute and algorithms to use those things efficiently
has really kind of skyrocketed.
A very interesting kind of development,
because machine learning technology
is based on this enormous volume of data that we use.
So Rafael talked about AI impacting society.
Today's AI is society, right?
It is literally the embodiment of lots of human activity
that computer scientists and programmers
have learned to draw on.
And so AI is us literally, and that
has a special kind of promise, but also a special kind of fear
that feels different from the industrial mechanical age.
Yeah.
Well, John, perhaps you're in the best place here to tell us
what's the state of our progress along this dimension.
What can machines do?
What is the state of automation today?
And how would you, if I asked you to look 10 years ahead,
how would you see it moving across society,
across the economy?
Yeah.
Great question, Eduardo.
And first, let me begin by acknowledging Rafael
for kicking off this study, because I
think when we started this, there
was no stake in the ground, people were just
sort of grasping at the issue.
And I really think that this first phase report, David,
David, and Liz, is a great stake in the ground for us
moving forward.
Eduardo, the way I like to think about this
is this is moving very fast.
It's a very long-term trend in artificial intelligence.
And we're at the very beginning.
Think of it as Moore's law for AI.
This is going to be a 50, 60 year run at least
and we're less than a decade into this.
Secondly, the technology is advancing not at a linear rate,
but at an exponential rate, like Moore's law.
Think about when we had the first transistors back
in the '60s, nobody could have predicted
that you would have the smarts of that cell
phone in your pocket.
So it's hard for us to grasp what it's
going to be like in 50 years.
That said, the technology today can
do a lot of important things.
And relative to workforce, it can go all the way
from assisting and augmenting people in a call center
to respond better, faster, deeper by advising
the person to a high end oncologist that
can give a better diagnosis and treatment to a cancer patient
and everything in between.
So I think the challenge for us is not necessarily
for the oncologist, or how do we improve the lower
end of the wage scale, but it's that
in between, the midsection, because we believe,
and I think we know now that AI and machine learning,
yes, it may eliminate a few jobs,
but by and large it's going to impact every single job
in the workforce.
Every job is going to be impacted.
So the challenge is, how do we, and where do we
insert that AI technology into the job market,
and how do we do it.
We have found working across all industries, from health care,
to financial services, to retail,
that the technology needs to be fit into the process.
There is a section in there on so-so technology.
Well, sometimes when we put technology in,
it may improve productivity, but it doesn't improve
the overall solution and value.
And what we're finding with AI is that kind of thing.
You can't just, like a computer, you just
can't throw it in, plug it in, and turn it on, and it works.
You've got to adapt the technology for the process.
And if you don't change the process of humans,
or the business, and all you do is automate something that
is lousy, you're going to get a lousy result.
So we have learned it will impact every job,
it's advancing very, very fast.
It's already everywhere.
It is society because it's no more
than a reflection of the data that we're
creating as a society.
So the challenge for us, as the study shows,
is how do we advance the technology,
but how do we intelligently insert it into the workforce,
and how do we bring the workforce
along such that this thing doesn't continue to bifurcate?
Yeah, yeah, yeah.
The way I, sort of, see it is technology is going to happen.
It's going to continue no matter what we do.
It's going to progress.
And so a way to think about what you guys are doing
is, what are we going to do about it?
Given technology, how do we do this?
What are the institutional guardrails
that we put in place to ensure that it
is steered in a pro-social way, as it were--
to make sure that happens.
And so I'd like to turn it to Juan at this point,
because education is always the first thing that comes up
in conversations about how to mitigate inequality, adapt
to technological change, adapt to economic shocks.
Education is always, perhaps, the first word
out of policymakers' mouths.
But people mean lots of different things
when they say that.
Some people will talk about, well, we
need universal college education, universal bachelor's
degrees, which in my view is kind of like science fiction.
But we hear, well, we need to start
with early child education, zero to three, perpetual education
training.
So I'd love to hear from you, what's the role for education
here?
What are the levers that you see are
most promising to be pulled?
The box to check is the all of the above box,
because it really does work as a system.
But I'll just speak from my vantage point
of working with 77,000 students in the City Colleges of Chicago
that come from the broad diversity that
represents our city.
And we are in many respects ground
zero for this opportunity and challenge.
I mean, our students are workers.
They are low wage workers.
54% of them have had food insecurity in the last 30 days.
15% are homeless.
They deal with a set of life circumstances
and yet they persist, and yet they're engaged,
and yet they have a career interest in mind that is broad.
Everything from transfer to a four year university,
to getting a middle skill job, to getting
a certificate so they can get on the marketplace right away.
And so really our overarching challenge
is to make sure that we're working in partnership
with the industry.
One of the things that we've done
at City Colleges of Chicago is to transform our system.
If you looked at us seven, eight years ago, what you would see
is an institution that did nothing but transfer to four
year institutions and trades.
And now we're doing a little mix of both.
And we created something called centers of excellence.
We've asked each of our colleges to focus in on a growing
area of the economy, to work with the top employers
in that area, to build new innovative facilities,
a new advanced manufacturing and engineering
center, a new transportation, distribution, logistics center,
a whole new medical facility with simulators,
with the top end technologies, so that our students are
prepared for the transformations that are actually occurring
in the economy, because we're hearing
about those transformations at the early stages
of those changes in the economy.
And so what we are really positioning ourselves to do
is to make sure that we as an institution
can adapt and evolve.
And I will say one of the things that we
need to examine as a society is the degree to which we
are equitably putting resources into institutions like ours.
Community colleges are the least supported institutions
of higher education in society, and yet they
work with the very students who need the most assistance.
And so looking at issues of equity
are going to be critical to ensuring that we actually
achieve the dual objective of productivity, economic growth,
but also a society that we can all be proud of.
Yeah, thanks.
Yeah, sure.
Please, John.
A data point-- I totally agree with Juan--
we have a program called P-TECH, Pathways in Technology,
in 200 some-odd schools, and we have found, in a sense,
we've overspec'd some of these jobs.
So if I take a cybersecurity analyst in an operations
center, we've just sort of traditionally said,
well, you have to have a four year degree.
Not true.
If we take a two year degree person
and we give them AI tools, they can
do the job as well as or better than a four year person.
So we can on ramp from the two year schools.
And then if we give them AI enabled tools,
they can hit the ground running and often
surpass people with four or higher level degrees.
And we are partnering with you in Chicago.
We have a P-Tech school.
OK.
It works.
It works at scale.
Liz, you and I were talking a moment ago before the session
started about how interesting it is that unions are now kind
of like in everybody's brief.
Again, when I started writing about this,
people have been thinking about, well,
what's the role of unions in the economy?
How can they help workers?
Then you get this, meh, yeah, they're there.
They represent 7% of private sector workers,
a little less than that.
So they're not extremely relevant.
But now they're clearly back.
At every meeting like this that I'm at, or every group of--
every study group about work and what to do about inequality,
unions are back in the conversation.
And clearly, in this report,
they were an important part of the thinking
about what institutional changes
are needed-- well, one of them is finding some way of increasing
worker voice.
And so there is that fact that unions represent only 7%
of the private sector workforce.
So how do we increase worker voice
given the difficulty of organizing that workers still
face?
Right.
I'm glad to hear you say that it's in every brief now.
Which it's been a while that we've
been knocking on these doors and saying,
hey, over here, workers, because when
you talk about the future of work,
you would think workers' voices would be
included in that conversation.
So we're thrilled to be here.
I want to say thank you to the authors of the report,
because what came through very clearly was
this nagging problem of inequality
we have in this country.
And we share that concern in the labor movement,
as well as the potential for innovation
and how workers can actually contribute
to those conversations and be in the workplace,
and say, hey, wait a second, if you're
going to implement this technology,
we can tell you how to do it.
But those two things that are streaming
through the report, around innovation and inequality,
we believe go right through the labor movement,
because worker voice is absolutely
essential in this debate.
And you cannot have a strong worker voice without strong
bargaining power associated with it,
because you can raise your voice all day long,
but if you don't have the ability to flex your muscle,
come together collectively, and leverage something,
nothing's going to change.
And so we believe very strongly that worker voice
means worker bargaining power.
And we've seen this over time. Of course, since our inception,
we have been molding and changing
and adapting to technological change.
If you think back to the turn of the century, and what technology
looked like up till today, even with the so-so technology
that's mentioned, I have many examples
in my head of workers who have been dealing with that even
today.
So I think that we have so many examples.
We represent 12.5 million working men and women.
Even though it's 7% of the economy,
it's still a ton of people.
Right?
And I will say 6.5 million of those
are women-- and women and people of color
are going to be the growing demographic, as we have all
talked about, in this workforce of the future.
So I like to point to examples because we're seeing it,
as I said, in every workplace, in every sector.
And we recently saw workers at Marriott hotels go on strike.
What did they go on strike about?
They, of course, were thinking about wages.
They were thinking about health care.
But technology was one of the top three reasons
why they went on strike, because they said, if there's
going to be mobile devices, and people
are checking in and checking out of their hotels,
if they're going to have robots when they arrive,
we think we need to have a voice in that
and have a seat at the table as that technology is introduced
in the workplace.
And so they were able to negotiate provisions
in their contract for notice periods,
for transition assistance, so that they
could have a fund to actually retrain
people who might be displaced.
So that's what we think of when we think of worker voice.
And there are a lot of examples of worker voice out there.
We're seeing a moment of collective action
in this country unlike we've seen in a very long time.
So we support all forms of worker organization and worker
voice, meaning a seat at the table for working people.
And I'm thinking also that the Teamsters
tried to have some tech provisions in their contract
with UPS, I remember.
But they were much less successful than UNITE
HERE in doing that.
They wanted some say over what kind of autonomous vehicles
and stuff that UPS would introduce.
It's evolving.
But the company was very resistant to that.
Yeah, I bet.
Yeah.
I know.
Thanks.
So listen, David Autor, I'd like to get back to you.
Perhaps we should step back for one little second.
I mean, this is a preliminary report.
You're going to spend the next year,
like, working through these ideas
and coming out with a more complete set of findings.
But maybe you could just give us a little bit of a panorama,
or what are the margins, the most promising margins
you see for policy action?
I mean, we're talking about worker voice,
talking about education, but could you just
tell us what else is in the bag?
Sure.
So certainly the discussion of education,
I think we've highlighted that here,
and any discussion of future workforce
involves appropriate skills.
And we focused in particular on the group
between high school and four year,
because we think that's where there's the most innovation,
and also the most complexity.
K through 12 is incredibly important.
A lot of people are focused on that.
Four-year college is incredibly important.
We think we've got that.
But in terms of helping people transition into--
so there's enormous growth in health care work, some of it
is great high paid work that doesn't require four year
degrees.
Some of it is terrible work.
So the home health care industry is growing incredibly rapidly.
It's incredibly insecure and low paid.
That could be improved.
So that's one avenue, but you expected that.
A second avenue we talk about a lot
is incentives around human capital investment and physical
capital investment.
Our tax code heavily subsidizes capital investment.
At the margin, we have R&D tax credits,
we have immediate depreciation, you
can defer your capital gains forever.
Labor is taxed at relatively high marginal rates,
it's not paid by the employer, it's paid by the worker,
but that still creates a wedge between what the firm pays
and what the worker receives.
And so sometimes, at the margin, the government goes in with you
when you want to buy a machine to replace a worker.
And we're all in favor of capital investment.
We're not opposed to that.
But we think the playing field should
be leveled, that we should be recognizing both types
of investments as valuable.
They should be acknowledged in corporate income statements
and we should be working harder to give firms reasons
to upskill workers and to move them into favorable positions.
And that that's a tricky business, obviously.
If you don't do that carefully, companies will say, oh, yeah,
sending a person on a lunch break, well, they're
learning something.
And we don't want that.
We want it to be recognized credentials,
things that we can verify
as being genuine and useful.
The US has had good success with the R&D tax credit.
The evidence is it has worked.
We hope we can do something similar with human capital
investment.
Similarly, this is what Liz said,
we do believe that there has to be some recognition
that workers are stakeholders.
It's not accurate to think that the only stakeholder of a firm
is the shareholder, because their firm's policies affect
workers, they affect communities.
And if those costs aren't recognized,
that's actually inefficient, what
we would call an externality, as an economist.
And so this is not a proto-Marxist document.
But we think the last 40 years
have seen the rise of pure shareholder capitalism, Milton Friedman's
dream, which Jensen and Meckling
really brought into the economics view.
And by the way, Michael Jensen,
who was really one of the people who introduced
that notion, has renounced it.
Has said, we went too far with that.
So we think there's greater opportunity for innovation
in this area, not just through adversarial labor management
relation, but through central bargaining,
through different forms of worker voice.
Both using the current labor union-- the labor movement--
and allowing it to expand.
And then finally, we think that innovation is something that
is not an autonomous activity.
It's something that we shape.
The government sets priorities and tells researchers,
here's the important problem-- whether that's
putting a person in space, whether that's
creating a telecommunications system that
became the internet.
And in fact, the National Science Foundation
is already looking at ways to foster innovation that benefit
workers as well as firms.
So we need to invest in technological leadership.
We should not step back, oh, we're afraid of that.
Let's tax it.
Let's not do it.
That's not the right message.
The message should be, we should shape it.
And by leading it, we have an opportunity
both to create economic prosperity
and to share that prosperity.
The broad national conversation about what
do we do to distribute, say, prosperity more broadly,
includes several other things.
There are several other things in kind
of like the national bag.
There is from tax policies, earned income tax credits,
to regulatory policies, to address
the issue of what David Weil calls
the fissuring of the workplace.
There is talk about whether something
should be done about the supposed monopsonistic power
of big employers.
Silicon Valley loves the UBI.
These are not in your toolkit, how come?
Whoever wants to grab it.
I mean, there's many things that are not
in the report, of course.
And David is going to do a better
job of explaining our position on UBI than I will.
And again, on the one hand, it's a preliminary report,
on the other hand, it's a framing document.
We are trying to shape the way to think about this problem.
And then we'll be filled in with the research
and the empirical part of it in the coming months and the year.
And we really tried to focus on what are the ways
that we can think about it differently.
To build on what David said about intelligence,
one thing we've learned about intelligence
is that there are many artificial intelligences--
there are many different ways to be intelligent.
Machine intelligence today is not human intelligence,
but most people in the field sort of feel
like it is intelligent in its own way.
And there are many different ways that that technology
can and will evolve.
We talked a lot about China, which is also not
very much in the report, but very much on our mind.
And we have researchers there as we speak.
And it's a good example of, yes, China leads in AI in some ways,
but it's also a kind of state surveillance AI,
and it's oriented toward a particular set of purposes.
The US has really developed and led
AI from a military point of view during the Cold
War and the years after.
And you can point to how autonomous cars are
very much created by DARPA and seeded by DARPA.
It's a great success for DARPA.
It also contains some of the limitations of that model.
And so the US has an opportunity to lead in AI
in a way that is worker centric, that
embodies all these different values and ethics that we're
talking about here.
That will be a different kind of AI, one
that we feel is extremely important economically,
nationally, and otherwise.
And that's really the focus we've been on.
And then how that pervades into policies in different ways
is partly our work in the coming year,
but it's also partly work that we
want to leave to others because we lay out
a series of principles and framing
and how that develops into specific policies
is a boundary that we draw around our purview.
Sure, sure, sure.
So picking off of that, and Liz mentioned it earlier,
and it's come up a little bit, this idea of so-so technology.
And well, your colleague Daron Acemoglu
has written a lot about that, about automation
that doesn't really boost productivity that much,
it just knocks out a worker.
And well, one question I have is,
how do we identify the good automation
from the bad automation, the good tech from the bad tech?
And what is the right policy response?
I mean, how can one modulate that?
Is it a question of rejiggering the tax incentives, as David
Autor, you were mentioning before,
or is there some other way of addressing
that to encourage the good type and push against the bad type?
And I don't know, maybe Liz, you also
have thoughts about that and John,
but whoever wants to grab this, I'd love to hear you.
Well, let me start.
I think we have to accept the fact that this
is going to happen.
There is no question that this technology is going to happen.
And as I mentioned earlier, it is on an exponential curve.
So if we decide not to do it, someone else
is going to do it, and our workforce that does not have it,
that is not augmented with AI,
is going to be at a heck of a disadvantage relative to,
say, a foreign government or workforce.
So it will happen.
I don't think that it's something
that can be regulated at the core,
because the technology is advancing so fast.
There's no way the regulation could possibly
keep up with the advances in artificial intelligence.
I think the introduction of the technology into the workforce
and the creation of it, as Rafael pointed out,
is the responsibility of us that create it,
and the partners that we work for, or work with,
in the various industries.
It can do great things in health care, or financial services,
or it could do things that could be harmful.
So we have to introduce it in the right way.
But I think there are areas around the margin where there's
implications of the technology.
One example is privacy.
And I know the Business Roundtable is taking--
the members are taking a pretty strong position
on consumer privacy as an example,
because artificial intelligence can
be used to mine and look at your privacy and all of your data
and understand it in ways we can't see.
So I don't think, Eduardo, the core technology
is something that needs to be stopped, or regulated.
We need to drive and innovate as fast as we can,
because our competitors are doing the same.
And in the field of AI, there will be no second.
There will be no second, because when you're first,
you have such an advantage over the second.
So we need to watch the implications of the technology
in areas like privacy and make sure
that we work collaboratively with the government
to not violate things that are important to humans
beyond the core technology.
And I will echo that.
But the labor movement is not anti-technology.
We are very much coming to grips.
Obviously the technology is coming.
It's already here.
But we definitely need the guardrails, as you said.
And who are we doing this for, right?
We want this to benefit everyone.
And so if we don't make the right policy choices now,
then we're going to be in a whole heap of trouble.
I was just in Welch, West Virginia yesterday
and I saw what happened to that community
when, obviously, the coal industry
picked up and moved away.
People are devastated.
The community is crumbling.
They're just now trying to get teachers
to come back and educate the kids that are left there.
And so we don't want that to happen again.
So we have the ability to shape that right now.
And I will say, I wanted to respond
to something David said about buttressing worker voice.
We believe that the inequality we're seeing today
is as a result of the decline in unions.
And there's actually a Princeton study
that came out a year or two ago that
links the rise in inequality with the decline of unions.
And until we're able to actually change the laws in this country
to make it easier to form a union,
we're going to continue to see inequality widen.
And it takes an act of heroism to form a union these days
with the way the law suppresses it.
People get fired.
I was just talking to you about digital journalists who
have organized their newsrooms.
And what do the employers do when people step forward
and exercise their voice?
Fire them.
Shut it down.
So until we pass things like the PRO Act in Congress
and unleash the ability for people to form unions,
no matter what sector it is, it could
look different and more modern, or in a different approach,
but we need to have that collective power
to balance the scales of what we're seeing
in terms of this consolidation.
Sure.
David.
Just adding, sort of, synthesizing
the points you've said, well, what do we
do about so-so technologies?
And the answer is, obviously we can't regulate them.
And then Liz said, well, worker voice is relevant.
And let me say that this all comes back to incentives.
So one of our board members runs a large German manufacturing
company and we spoke with that board member, and said, well,
what are you doing about robotics and automation?
And that person said, well, look, just
reducing headcount, that's really not an option for us.
We don't get to do that.
That's not the arrangement.
If we want to reduce the headcount,
we've got to retrain the person.
So all of our investments-- we believe robotics
are key to our productivity.
We're an expensive country.
We're a manufacturing country.
We need to be really good.
But our investments are things that
make workers more productive.
We're not just trying to make them redundant,
we're trying to complement them.
And when we make an investment that causes displacement,
we're going to be doing reskilling.
So the incentives matter, right?
It doesn't make sense to introduce a technology that's
1% more productive and displaces 1,000 workers,
that's too costly.
So we can't regulate the type of technology.
And it's not truly a matter of guardrails.
It's incentives to think about what are the costs
and benefits of these actions.
And you could argue that the German system in which there
is more worker voice, where workers are on boards,
and where there is sectoral bargaining,
gives employers an incentive to think
about the external costs of displacement,
and that affects their innovation decisions.
It affects the way they want to use the technology.
Thank you.
Juan, I wanted to turn to you now.
And I was just thinking-- well, we're
talking about workers being displaced.
And I wondered if we could just look
at that particular moment in the life of a worker, of a person.
Because when one hears of education
it's a question of something that goes down the generations.
If you start young, you might be fixing the labor market
25 years from now.
But there's a very specific question
about what happens to this large cohort of workers who
are now in their 40s and 50s.
And when you think about education, you think of them--
I expect there are some specific hurdles.
These are folks who have had one job for most of their lives.
They came of age again in the 20th century
technological landscape.
And is there a particular challenge
in increasing their skill set, in changing their skill set,
so that they can take advantage of this new economy?
Well, I like to think about it as what
happens with that same worker years earlier.
I mean, we've really got to be in the mindset
that we're in continuous learning,
that our job security in many respects
is our ability to keep up with the skills
and demands of the marketplace on a regular basis, right?
And so shame on us if we are not doing the things as a society
to ensure that those workers in their early 20s
don't get into their 40s and are stuck with only one
set of skills, but have a multiplicity of skills
from which to draw, should the environment around
them change.
And so I look at those situations
because I see them as, in many respects, catastrophic.
That person, that stability that came to that family,
it's not just--
these are folks with children.
These are homes that they've invested in.
You are tearing apart the very fabric of a community person
by person.
And the education system has to be more than education
in those moments.
And so the supports that are required
for people that are going through this transformation
that they have to go through very
rapidly in a short period of time, gotta do a lot of catch
up, right?
And we seem to think with a, sort of, low touch
type of retraining, we can get them there right away.
It just doesn't really work--
the frustration-- they end up in much lesser occupations
that don't really provide.
And the burden gets left on the next generation
to make up for what their parents can no longer do.
And so I think we've got to get smarter about this
and be thinking about every worker
that we have today as an asset and invest in every worker
that we have today, so that they are building
upon a base of knowledge and have the ability to adapt
to changing circumstance.
I think that goes back to incentives, right?
And what is going on in companies is that, in fact,
budgets for worker
development and employee development often
are the first to get looked at when they're making reductions
in their overall allocation of resources.
So we've got to figure out ways to bump that
up and use our resources, like our colleges, more readily.
I mean, you've run into the problem
that firms are not going to want to provide
their workers with a lot of general skills
that might be used by the firm next door, right?
Because that's a problem of investing in something and then
losing the asset, which is mentioned often.
By the way, I forgot to mention earlier on--
I was remiss-- we're going to move
into a question and answer, a Q&A moment, in a few minutes.
But I think you all have cards where
you were going to write your questions
and somebody is going to come and collect them
and then I'll read them up here.
But it's a few minutes away still, just to remind you.
And I wanted to move to you, John.
I mean, I think one thing that's sort of come up
is, well, what's the responsibility
of the corporation here?
And we heard from the Business Roundtable,
not just a few days ago, this thing about they're
no longer going to focus just on shareholder value,
but they're going to think of the interest of the broader
set of stakeholders.
I mean, I've got to say that I'm a little Milton
Friedman-esque on this.
But today, I could be wrong.
What do you think is the responsibility of corporations
to try to manage this environment
for their workforce?
Well, first of all, we, in IBM, we
believe fundamentally it's just the right thing
to do, to help our workforce, to train them, to advance them.
But also the economics are plain and simple.
It costs a lot more to replace somebody
than to re-educate, retrain, or provide the tools.
So we talked a lot about the education,
and that's where people normally go when they think about,
well, let's enable or improve a worker's ability
to do a new job.
I keep coming back to provide them the AI tools.
A great example of this that we've used
and other companies are starting to
is in the area of cybersecurity.
So there's no place where a field
is changing more quickly than in the cyber threat, cyber defense
field.
So envision a cyber security operation center
where you're getting thousands of
feeds of malware, malicious intent, networks going down,
server problems.
It used to be that it took very highly skilled people
to sit there and look through that,
and say, OK, I can trace that network,
I know where that threat's coming from, I've seen that before.
We have found that we can take either apprentices that we've
trained, or two year degree people,
enable them with an AI tool that has already pre-analyzed
that threat data coming in, and says to that person,
don't look at those thousand threats,
look at these three things.
And oh, by the way, I have found those three things
someplace else.
That's something that a person with what
we would traditionally, say, lower
level skills, or education, can do really well.
And the great thing is that the AI tools
are advancing at such a rate that they're
advancing consistent with the threat or the opportunities,
so that the human doesn't necessarily
have to be re-educated constantly.
The AI tool can do it and bring the human along
in this man-machine, sort of, augmentation.
So from our vantage point, it's the right thing
to do, economic sense.
We're very much supportive of the education.
But we think that for the first time
these tools are not just put a machine in to do a task,
put a machine in that learns and can pull the human along
with it.
That's a whole new thing we've not seen before.
Yeah, David.
And you can imagine that not just in computer security,
but, for example, in health care, right?
There's so much work to do in health care.
And the most expert people are so expensive and so scarce,
and a lot more work can be delegated
with machine augmentation to allow people
to triage, to diagnose.
And even one of our colleagues at MIT, Julie Shah, who's
a roboticist, she works very hard on the floors
of hospitals, and she does--
the charge nurse in a hospital is like an air traffic
controller with, like, a much harder traffic
pattern and much worse tools.
And augmenting that person to be able to allocate tasks
effectively, to triage what's important, to delegate.
And so there's an enormous amount of work
to do where there is effectively scarcity,
congestion, expense, and coordination, and more--
a judgmental task can be augmented, delegated,
and there can be virtuous interaction
between people and machines.
So computer security, obviously, is a cool example, not as
big as health care in terms of employment going forward,
I hope.
[INAUDIBLE] jobs.
Cyber security, by the way.
One of the key things in the report,
also, is that if you look at the immediate future, many AI
tools, you've seen them.
Image recognition, face recognition, voice recognition,
but they've mostly been deployed by big organizations,
big companies.
We're in a period of rapid democratization
of that deployment.
These things are now easier to develop.
Well, you don't have to develop them,
you can draw them from the cloud in different ways.
They're easier to deploy.
And so the far out future is harder to imagine,
but the immediate future partly is about tools
that you've seen becoming deployed.
I sometimes say Alexa at the gas pump.
The things that you've already seen
deployed by large companies deployed in a more
ubiquitous way, and that's a pretty good way
to think about the immediate future.
That said, when a cybersecurity AI system is making a decision
about who's friendly and who's a threat, that's fundamentally
a social decision.
It involves a lot of other factors.
Ditto, that's true in a health care setting,
and how do we make sure that those social decisions that
are either embedded in the code are transparent
and well understood for what they are, or left
to the person, in the health care case,
to a doctor to make the ultimate decision on a diagnosis
and a treatment, and that the ways that the decisions are
aided or supported are understood
for the nature and the way that they're
based on the data, which itself is either biased
or potentially insecure?
So, OK.
I'm going to open it to questions from the floor.
Here are some quite funny ones.
Here's a question that says that Bill Gates has
suggested taxing robots.
I'm sure you all read his piece where robots replace humans.
What if we tax robots at a higher rate
than we tax human work?
Do you think that would change the deployment of robots
in a positive way?
Or is that an idea worth considering or not?
Yeah.
It's worth considering and rejecting out of hand.
[LAUGHTER]
It's a terrible idea.
I mean, we need to innovate.
Even if we said, let's tax robots 100%
so that we have none of them, we will still have them.
We'll just be importing them implicitly
in all the products we buy from overseas,
because this is a competitive world.
So we should not put the brakes on innovation by penalizing it.
We should tax, we should treat capital and labor
in a balanced way.
I don't think singling out robots
because they seem especially scary is the right way
to go about this.
And in fact, most of this stuff is software.
It's not robots anyway.
Robots are tiny relative to AI and all the software that
does most of the tasks.
It's also that the definition of a robot is so fluid
and so hard to put your finger on.
I mean, there is a kind of industrial robot
that looks like an arm that welds a car body
and those have been around for a while,
but most of the technologies we're talking about
are not easily confined to that really early 20th century
idea of robot, per se.
We use the example in the report of the Amazon warehouses
that have the robots running around.
And the individual robots are almost a trivial part
of that system.
They're just little devices that scoot around.
The entire fulfillment center really is the robot.
It's composed of people.
It's composed of what we used to call robots.
It is composed of software, which is critical.
And so what part of that is a robot?
Is that one robot, or is it 600 robots?
And then when you put a constraint like a tax
around it, you'll see all kinds of innovation
further blurring that boundary.
Yeah, yeah, yeah.
Put eyes on it so it's no longer a robot.
I always say, a car is just a robot that you sit in,
and an airliner is an autonomous robot
that pilots happen to turn on and turn off.
And just pinging off of that--
I'm sorry I'm being a little undisciplined here.
Another crazy idea that has come out
in this conversation about, what do
we do about the future of work and prosperity,
is, should we ask the companies that
are using the data to develop AIs to pay people for the data
that they're providing into the system to train these AIs?
And Eric Posner at the University of Chicago
has suggested that there could be some real money there
that could actually put a dent in inequality.
I wonder, again, this is kind of left field,
but if you've thought about it, do you think it's insane?
I don't think it's insane.
I mean, there's value in data.
And how that value gets allocated, who extracts it,
at what cost, and to whose benefit is a question
we always ask.
You go back to the question of so-so technology:
it's really so-so for whom, and beneficial for whom?
And we haven't gone into the details of data privacy
and control, but it's a crucial question that's
going to continue to evolve.
And that's true in workplaces.
I've done a lot of work on cockpit automation
in airlines.
I work in Europe because I'm not able to do it in the US,
because as a researcher, you cannot observe a US airline
cockpit for security reasons.
Interestingly, in Europe, every airliner
collects oodles of data about--
and in the countries I've worked in,
that data is the personal private property of the pilots.
They don't physically possess it; the airlines
are allowed to aggregate it and then
they have to destroy it after six months,
but the pilots own the data and they can access it, or not.
And those models are out there.
And we think ownership of worker data is a huge issue.
And we think, too, that that could be one of the modern ways
that unions become more relevant to working people,
by helping them navigate through these emerging
concerns.
I mean, who would have thought you
had to worry about your data, right?
So to have some place to go, like,
a union where you can actually figure out how to monetize
it, maybe draft a contract.
And I use that as one example, but training and education
is another area, where helping workers ladder up and advance
their skills to move into the emerging jobs
that they might not yet be qualified for is
another center of gravity.
We need a place, an independent place, that's
scalable and sustainable, like a union, which we think
can be very valuable and relevant for the future.
Can I just say--
Go, go.
Because I'm burning on this thing.
I think what we need is a North Star that
defines the kind of society we want
to be and look like with these technologies incorporated.
And I'm not sure that we have that North Star in order
to guide the policy well enough.
I would just-- it may not be a perfect North Star,
but we've attempted to place one,
because we deal with this all the time.
First, let me just remind everybody
that data is the fuel for AI.
There would be no artificial intelligence without the data
for it to train on.
But we have, at least at IBM, taken a very firm North Star,
which says, your data is your property,
whether you're an individual or a corporation.
We will not use your data, even if we're processing it,
without your approval.
And then we go further and say, if we
use AI in any of our solutions, we
will tell you we're using AI, and we will tell you
exactly how it was trained.
So in health care, you will know if the doctor is
using artificial intelligence and exactly where
that was trained.
That oncology solution was trained at Memorial Sloan
Kettering, period.
So we believe in transparency and clear ownership
of the data.
And I think if you sort of start with that--
it may not be a perfect North Star, but it
points in the right direction-- a lot of your decisions become
pretty easy.
So here's another question: what policies
should local governments adopt in light of these changes?
Is there a role for local policymaking
that could really affect outcomes here
that you've thought of?
I think one thing that we are all sensitized to,
and that really is not as fully represented in this report
as it will be in the subsequent one, is how much
the impacts differ across locations and how much
prosperity in the United States has become very concentrated
in a bunch of superstar cities.
And that's where wages are rising
and opportunities are abundant.
And then there are many places that
are sort of not participating in the same way.
In fact, we can see, of course, the areas
that were in heavy manufacturing,
in labor intensive manufacturing, and energy
sector activities.
So yes, I think the solutions--
it's easy to talk in the abstract,
but many of the solutions have to be local.
We think the community colleges, of course,
are the institutions that are most reactive, most responsive,
in terms of trying to deal with these skill mismatches
and identifying new opportunities.
And so one thing we want to be able to highlight
in our future report is, what are some models
that we should be looking at?
There's so much heterogeneity and experimentation
in our community college sector, which is admirable,
one of the strengths of the US, but then
we're trying to filter that and say, what should people
be emulating?
Similarly, and this is something that the White House has worked
a lot on, there are apprenticeships and sector-based
training, and trying to foster opportunities that
produce not just jobs but good careers for people
without four-year degrees.
And again, that has to be very localized.
It has to be directed at what the opportunities are there.
And then I think some of it is trying to bring--
I think there is evidence that broadband access,
which, in fact, the Department of Agriculture works on a lot
[INAUDIBLE], makes a difference to a community's ability
to take advantage.
And hopefully over time--
we've talked a lot about the death of distance,
but in fact, agglomeration, being close in proximity,
has become more important.
We hope over time that will attenuate,
and that there will be more spread-out opportunities
through high-speed communications as well.
Yeah.
So the last question I find really interesting,
because this is a theme that has been percolating
through this whole conversation, is, what can we
learn from other experiences, from the experiences
of other countries?
And this is something that I could level at you,
Liz, what can we learn about unionization models elsewhere,
other means of acquiring worker voice,
but I think all of you-- and Juan,
you mentioned this idea of a North Star here.
Is there anything in the experience
that we can see out there that would
help us find these new models and these new ways of doing
things?
Well, I think we can if we work very closely with our union
counterparts globally, so that we can
learn from their best practices.
And we keep coming back to, of course,
the fact that most developed nations have a social safety
net, so that people aren't struggling to figure out
how to get health care and retire with dignity,
and then now throw technology on top of it, right?
So we need to figure out as a country
how we're going to invest and have
that North Star of sustainability and people
thriving and prospering.
And so, yes, we have a lot to learn from others.
Of course, around privacy we know overseas they've
done a good job of at least opening
the door to that conversation.
But once again, I hate to be a one-note Janet here,
but basically, until we remedy the inability of workers
to really come together in a formal way
and exercise their rights and their voice
and their power in a model that is sustainable
and at scale, we're going to continue to find ourselves,
we think, in a world of hurt.
But I want folks to know that the labor
movement is looking forward.
We are looking in a modern direction.
We want to reinvent, really, what it means to be in a union.
I'm looking at game developers, for example, video game
developers, who are global.
Their work is fluid.
And they're trying to figure out how
to find their voice and their power
and not be exploited when they're developing these games.
Perhaps a union, a modern union that
can give them the leverage that they
need to find their security, is exactly the remedy.
David.
One example we should look to as a kind of a crystal
ball for our future is contemporary Japan,
which has very low fertility, a rapidly aging society,
and very low levels of immigration.
And you can see the enormous pressure that
creates, both in terms of labor scarcity and pressure
for automation, but also real challenges in adapting to
and meeting the care needs and the service needs.
And we are not, and will not be, in quite the demographic crunch
that Japan is in now.
But we can foresee what pressures that's
going to create and how we could respond better.
One of the forces that's causing slow labor force
growth in the United States and contributing to the rapid aging
is our declining immigration rates.
And I think we're going to feel that very acutely.
Japan has had trouble mustering the capacity
to deal successfully with immigration,
but the United States has been the most successful
of any country in doing this.
And I think we should be cognizant of how much that
has benefited us over time.
Well, thank you, guys.
Thank you very much for the panel.
Let's give them a hand, perhaps.
[APPLAUSE]
You can find copies of the report
online at workofthefuture.mit.edu.
I hope you enjoyed the morning.
Thanks a lot for coming.