Podcast
- 12 Mar 2025
- Managing the Future of Work
Cal Newport on slow productivity and next-wave AI
Joe Fuller: Knowledge work has always lacked well-defined measures of productivity. The rush to adopt generative AI and the battle over remote work have only added to that challenge. Underneath it all, as anyone engaged in or managing it knows: Knowledge work is hampered by distractions, inefficiencies, and competing demands on time.
Welcome to the Managing the Future of Work podcast from Harvard Business School. I’m your host, Harvard Business School professor and nonresident senior fellow at the American Enterprise Institute, Joe Fuller. It’s my pleasure to welcome back to the podcast Cal Newport, Georgetown professor of computer science, author, and contributor to The New Yorker magazine. Cal is a longtime observer of the world of work and the impact of digital technology. In his latest book, Slow Productivity: The Lost Art of Accomplishment Without Burnout, Cal renews his critique of the cult of busyness. He advocates a sustainable approach that prioritizes deep focus, long-term impact, and higher-quality output. We’ll talk about how email and social media conspire to derail us and how to structure work to limit digital interruptions and promote manageable workflows. We’ll also consider why AI, like the World Wide Web in its early stages, has yet to serve up killer apps and boost productivity despite rapid technological advances. Also, amid concerns that AI might exacerbate inequality, Cal argues that the technology has the potential to increase access to good jobs. Finally, we’ll explore the tension between regulation and innovation, as well as how technology, used wisely, can help restore work-life balance. Welcome back to the Managing the Future of Work podcast, Cal.
Cal Newport: Well, thanks for having me.
Fuller: Maybe you could just give our listeners a brief background about your research before we turn to the question of your recent book, Slow Productivity.
Newport: Well, if we go all the way back, I’m trained as a computer scientist. I’m a tenured full professor of computer science at Georgetown, and my specialty is algorithms. I mean, this is what I actually trained to do: study the theory of distributed algorithms, which is just as arcane as it sounds. I also, though, have been a writer the whole way through. I started writing books when I was an undergraduate. There’s a key inflection point, though, where these two worlds start to come together, and that’s right around the time I’m a young professor at Georgetown. That’s when I write my 2016 book, Deep Work. Deep Work is about the cost of digital distractions in the workplace and what we should do about it. This is where my writing began to turn toward understanding the impact of the type of technologies that I was, in theory, helping to support as a computer scientist. So the books I’ve written starting with Deep Work all involve some sort of interrogation of the world of work, our personal lives, their intersections with technology, the hidden dangers or troubles those intersections cause, and the various strategies we might deploy to try to get around those dangers. I now actually spend a lot less time doing straight-up computer science, and on my academic front, I am very heavily involved in the formal study of technology and its impact. I’m now a founding faculty member of Georgetown’s Center for Digital Ethics. I’m also the founding director of an academic program, a major at Georgetown called Computer Science, Ethics, and Society. So now everything has basically converged in my academic and writing life: I wrangle with technologies, their impacts, and what to do about them.
Fuller: How did that all come together for you in your most recent book, Slow Productivity? What did that add to the canon of Cal Newport thought?
Newport: Well, Slow Productivity, the impetus for that book was me really understanding, having written about this topic for years and years, that what we were doing in knowledge work wasn’t working. And the culprit, at least in my analysis, was technology. So I really understood, because of the books I wrote like Deep Work and A World Without Email, as well as my reporting for The New Yorker, that when we began putting personal computers on the desks of cubicles—so when the IT revolution happened in the front office—it caught the knowledge work sector off guard, and it caused all sorts of unforeseen consequences. And coming at the issues of knowledge work from this technological lens, I understood in maybe a unique way the challenges that knowledge workers were beginning to have around the notion of productivity. There was this real fatigue. There was a real frustration with this term “productivity.” And I said, “The problem is, our definition of productivity in knowledge work was never really that sensical.” We basically just said, as a fallback solution or a heuristic as knowledge work became a big field, “Let’s just use activity, visible activity, as a proxy for useful effort.” The more stuff you’re doing, the better. If you need to do more work, show up earlier, right? I call this “pseudo productivity.” It was basically the heuristic we fell back on, because in many sectors of knowledge work, it was just too difficult to manage knowledge workers otherwise, right? What are they doing? I don’t know. It’s all so haphazard, ambiguous. So once we had computers on our desks, the collection of possible activities that each knowledge worker could be doing exploded. Now things that before would be handed off to different specialists—this person types, this person works on the finances, this person does the travel booking—each individual knowledge worker could do basically everything on their computer. Then we networked those computers, and when the digital communication revolution arrived, kicked off mainly by email, the velocity at which people passed work to each other began to increase. And the granularity at which you could demonstrate that you were doing useful activity, that you could show that you were busy, got really, really small. And then we had the mobile computing revolution. So now there were no time boundaries on this demonstration of activity. I could do it at home, I could do it on the weekend, I could do it while I was traveling. And suddenly, this pseudo productivity idea that we had informally embraced since the 1950s was, in the age of computers and networks, running off the rails. It was exhausting us, and people were feeling an almost nihilistic dread: “I am busy all the time, and I don’t even know what I’m doing.” So I came at the issue of productivity with a foot firmly in the world of understanding tech and tech impact. I think that’s what gives me my unique approach on it. A lot of people will see the issues with productivity through a labor economics point of view or a political philosophy point of view: What does this have to do with the way labor is exploited or the relationship between managers and workers? I saw it from a different angle. It’s computers and it’s email and it’s laptops. This broke knowledge work. Now we have to figure out ways to solve it.
Fuller: I’m curious how you view that through the lens of the very active discussion that we’re having in the United States about a potentially parallel phenomenon, which is the role of social media in warping people’s behavior and getting people to be preoccupied with what the technology provides. How do you account for the human factor, the psychology of this pseudo productivity, where I’m constantly checking my email? To refer to the famous Eisenhower matrix of whether something’s important or not and urgent or not, I’m constantly being drawn into that urgent-but-not-important cell of the four-box matrix. How do the way we promote and reward talent and the way we describe jobs contribute to this? Is it a self-inflicted addiction, or is it just the universality of technology crowding in on us, and we are victims as opposed to perpetrators of the crime?
Newport: Well, I think it’s a critical question. And having written books about both of these, I think of them as “distraction magisteria”: the email in the office on one hand and social media on our phone on the other. I’ve really come to understand that the impacts in both these cases seem very similar: It’s distraction. But the underlying dynamics leading to those results are very different in those two worlds and, therefore, the responses in both cases are different. So when we come to the office, we’re very distracted in the sense that we are checking an inbox or chat channel, depending on what research you want to reference, something like once every six minutes. And I think most people say, yeah, that’s about right. I’m constantly involved in these ongoing conversations, whatever the tool is that we’re using to have them. The question is, why? Well, there I say the problem is not that there’s some company that wants us to check our inbox more often. I mean, Microsoft wants Outlook to be useful, but they don’t get more licensing fees from the companies if you check your email 100 times versus 50. They just need email to remain vital to your work. So why do we check our communication channels so much in knowledge work? Well, there I see the problem as autonomy, right? The touchstone of knowledge worker productivity as laid down by Peter Drucker in the 1950s is that you can’t tell a knowledge worker how to do their work. You have to leave it up to the knowledge worker to figure out on their own how they’re going to do their work, which is right. I mean, we can’t run an office like an assembly line. Okay, that makes sense. But the problem is, with that autonomy, we had to figure out how we were going to collaborate on work. And when email came along, this was the lowest energy state. It was the path of least resistance. We can just let work unfold with this sort of back-and-forth, ad hoc messaging. It’s a collaboration mode I call the “hyperactive hive mind.” So the reason we check our email all the time is because we have unchecked workloads, and each of the mini-tasks and projects that we have some sort of involvement with is unfolding collaboratively with unscheduled back-and-forth messaging. So I have to check these inboxes all the time, not because I’m addicted, but because there are 15 different ongoing conversations happening. And if I spend too long outside of my inbox, they could all stall. Each one of these might require a quick response from me to keep moving forward, and some of these conversations might be time sensitive. Like, if I wait three hours before I check my inbox, we’re not going to get enough back and forth on this issue to solve it by end of business, which we really need to do. So this is an issue of how we structure work plus technology. Go to the world of social media, and it’s a very different dynamic. There the dynamic is: I make more money if you look at this more often, so I’m going to intentionally engineer my product to make it as irresistible as possible. More active user minutes means more revenue. So there it’s a problem of an engineered result. So the solutions here are very different. In the workplace, we actually have to rethink notions about what we mean by productivity and how we want to collaborate. We need to explicitly account for how the human brain actually functions when we think about issues of productivity. It’s a business process issue, and the solutions will be business process focused.
In the world of social media, this is a cultural problem, right? This idea that we should have consolidated most of our internet activity onto a small number of private platforms that hundreds of millions of users use concurrently; we could argue that, from a cultural perspective, that is a bad way to think about and use the internet. And maybe what we need to do, from a cultural perspective, is stop making it seem so mandatory and ubiquitous that we have to use these platforms that are being engineered. In other words, the solution to something like using Twitter too much is typically much simpler: Stop using it. Whereas the solution to checking your inbox too much, ooh, that’s harder, because now we need an alternative way to keep track of workloads and to structure collaboration. So: similar impacts on attention, very different dynamics leading to those similar destinations.
Fuller: Yeah, we’re talking in the winter of 2025, and we’re seeing a growing movement in corporate America and even in the U.S. federal government to oblige workers to come back physically to work, and a reduction in or elimination of remote or hybrid work for a lot of white-collar workers of the type we’re talking about. Now, how does that play into this? Has remote or hybrid work been a benefit to advancing the type of focus and enhanced productivity that you advocate? Or is it just a different locale for getting the same frequency of messages and the same level of interruption? And how do you feel about companies’ decisions to revert? Does it make sense to you, or do you think they’re reverting out of a kind of laziness and the Type A personalities of a bunch of senior executives?
Newport: Well, I’m not surprised by the reversion. I predicted, I think, way farther back in the pandemic that we were going to see a reversion to much more in-office work. My further prediction, however, was that going forward from here, we might then see a return to more remote work, but it was going to look different. Why I think this is true (and this has been my argument since April of 2020, when I wrote my first article about this) is that, for remote work to be successful, you have to fix a lot of the problems that already exist in knowledge work. In particular, this idea that work is haphazard and ad hoc, that we just sort of throw tasks at each other over email and figure things out with ongoing unstructured conversations. This hyperactive hive-mind workflow that we’ve implicitly adopted in digital knowledge work doesn’t adapt well to remoteness, right? Because what happens is that remoteness takes the inefficiencies in that system and exaggerates them. So if I can no longer grab you in the hallway or for a couple minutes after a meeting to bounce this ball back and forth on some ad hoc project we’re working on, I now have to set up a Zoom call, and it’s going to sit there and take 30 minutes on my calendar, because I can’t make an appointment any smaller than that. And suddenly, my day is going to be full of these conversations. Suddenly, the quantity of email and Slack going back and forth is going to increase, because, again, we lose all these physical touchpoints for quick real-time synchronous interactions. And so what a lot of people suffered from during remote work was, “I feel even more overloaded with admin overhead and am spending even less time working on my work.” I mean, this was the big surprise of the summer of 2020: people saying, “Wait a second, I’ve cut like an hour of commuting out of my life, and yet I’m working longer hours and still feel less productive than before.” So the solution, I think, to remote work is this: if you have more structure—about here’s how we figure out who’s working on what and what a reasonable workload is—if you have more structure—about here’s how we communicate about work, let’s structure that so it’s not ad hoc, requiring you to monitor channels—if you have that structure, remote work works really well. And you not only avoid those traps that we fell into in the summer of 2020, but you actually get to reap all the benefits of it—the flexibility, the not having to commute, the lack of distraction. And in fact, as I argue, we’ve seen this. What is one of the sectors that, pre-pandemic, was already working successfully with remote work and already had a lot of case studies of remote-only companies? It was software development. Why? Because software developers had already solved all those problems. They use agile methodologies like Scrum that have a very clear approach for how we track work and assign work, with clear limits on how much you should be working on at the same time. There’s also structured communication. We have a daily stand-up. It takes six minutes. “What are you working on today? What do you need from other people to get it done? Go do that work now.” With that type of structure, remoteness works great. And actually, a lot of software companies pre-pandemic took advantage of this to reach talent pools that weren’t geographically proximate.
But we’re figuring out more, and especially the remote-native start-ups that launched during the pandemic are figuring out: “How do we bring this sort of software-engineer-style structure to knowledge work?” Those remote-only companies are going to be very successful. And I think those ideas are going to percolate back to the existing big players, and we’re going to see something like a U-shape. We’re going to see a return to more remote work, but it’s going to come with—I don’t even know what the acronyms will be—much more structured work methodologies that are going to become popular in three or four years and support a return to more remote work. At least that’s my theory.
Fuller: Well, I think we’ve done some research here that would confirm elements of your theory. One is that, in our estimation, a critical missing variable in making remote or hybrid work successful was training managers to supervise teams that were hybrid or remote. They find it frustrating if the way they supervise people is to manage by wandering around or to have that post-meeting conversation you illustrated, the five minutes of “What are we going to do differently?” We also see in some research we’ll be publishing in the spring that, among high-paid workers, a substantial percentage evaluate a job based in part on the availability of hybrid or remote work. That’s particularly true for higher-wage female knowledge workers. And as you well know, women are almost 60 percent of college enrollees and an absolute majority of all graduate degree enrollees. In a workforce that is going to require a substantial number of degrees, at least in jobs that can be done in a hybrid or remote fashion, responding to the needs of that demographic is going to become increasingly important as they gain more and more market share of higher-ed credentials. It’s inevitable—and I’m sure you’re asked this question probably more than once a day—but you are a computer scientist, and we are living in the age of generative AI: What should we be looking for in terms of generative AI? What are the key questions on your mind, and how is that going to reshape this world you’ve described in your research historically? Is it going to help us do fewer things? Is it going to help us improve our quality? Is it going to allow us to work at a more natural pace, to cite some of the themes in your book?
Newport: Well, something that’s been interesting to me about AI as I’ve been reporting on it recently is this idea of what I think of as the technology impact gap. So the underlying technology of these generative models has been keeping up with, if not exceeding, predictions about where it’s going to go. When it comes to predictions of impact, however, especially professional impact, we’re falling well short. So what’s actually going on here? And I’m echoing other people’s analysis here. But it seems what has happened is that there was a hope among the boosters of generative AI that these products would somehow be able to skip the normal, sort of annoying, experimental process of finding product-market fit, where you try to understand the actual form in which this is going to be useful to real markets and in workers’ lives. That work has to be done, and that takes time. I think Scott Galloway used this terminology: The interesting effort now is in the application layer, not in the actual technology layer. How do we actually connect something like generative AI into specific products? And this takes time. I mean, we’ve seen this before. This happened with the internet. You go to the mid- to late ’90s, and people were rightly pointing out: This technology is transformative. And they were right. It really did change the economy and our lives, even how civic life unfolded. But they were frustrated at first. These changes didn’t come all at once, because it took six or seven years to actually start to figure out the right product-market fit. So those initial big moves that received all these investments in the late ’90s went bust, because the hope that just being on the internet, just bringing the internet to people, was going to change everything didn’t pan out. It wasn’t so simple. We actually had to figure out the right form factors. Oh, Amazon figured something out. Web 2.0 figured something out. Social media figured something out. That’s where we are, I think, with generative AI. So it’s going to take more time, and I think the impacts are going to be more focused: Here’s an impact for this sector. Here’s an impact for this type of employee. The way I’ve been predicting it, this first wave of actual big impacts is going to look more like AI helping users take advantage of existing features in software that previously were too advanced for them to know how to use. We see this with what’s happening with computer programmers: It’s helping programmers who are already writing code in these IDEs write the code faster and access functions and libraries they might not have really known about before. I think we’re going to see this in other software as well. I could be an amateur user of Excel, but now I can use the advanced features of Excel, because I can just explain what I want to do without having to learn how macros work or which buttons to click. So I really think that first wave of notable professional impact is not going to be automation, and I don’t think it’s going to be introducing brand-new capabilities that people hadn’t known before. It’s going to be unlocking more of the latent productivity of existing tools. And that’s going to be sort of layer one. But, like the internet, these impacts are going to unfold in the moment slower than we’re predicting; and, also like the internet, it’s very possible that 10 or 15 years from now, the landscape of work is very substantially different.
Fuller: I think there’s a really interesting paradox here, which is, as you pointed out, that the technological progress has exceeded expectations, but the adoption in companies has been less than expected. And what our research indicates is that we are seeing companies adopt it in augmentative ways. So: “How can I improve the way I do the task I have been doing?” What we are seeing is a lot of rigidity in organizations when it comes to really embracing it for widespread automation of their current processes, because if you really build your processes to maximize the productivity of generative AI, it’s a complete rebuild. You’re going to change the job descriptions, the reporting relationships, the incentives, the metrics—a complete overhaul of every process in the company. It’s incredibly intimidating. And you’re getting that kind of J-curve effect, where the actual cost of deploying the AI while running your current systems and your current approaches to problem solving actually exceeds your base cost. So it’s going to be very, very interesting to see what happens when we start getting generative-AI-native companies, when we get companies that go ahead and bite the bullet. Because with these large models, those with the most data, that start using it better, sooner, win. And so it’s a big competitive risk not to do it, but it’s a big challenge to actually bite the bullet and embrace the technology in the way it ought to be used.
Newport: I think it’s just too early. I think the problem with our current model is that companies have to figure out how to build tools on their own using the relatively bare-metal layers here, the actual generative models. What we’re waiting for now is specific products. So if I run a shipping company, I shouldn’t be hiring people to work with the OpenAI API. What I’m waiting for is a really good product that uses AI that is aimed at shipping companies. This is why we’re not seeing the impact: It’s too hard to try to build your systems from scratch, and it’s disappointing. But if you have professional AI programmers build a system for your company, and 10 other players try to get into the sector as well, and this is the one that works the best, learning off what worked and what didn’t from the other nine competitors, that’s where you really begin to get impact. So that just takes time. But I think that extra layer of professional applications—again, this is Scott Galloway’s notion that the application layer is where the action is going to happen in AI in the next five years—I’m completely on board with that.
Fuller: It’s also interesting, Cal, that in our research, what we’re finding is that companies are saying one of the major inhibitors of their adoption is the lack of vendor support. I think in the last 20 years, companies have, in some ways, maybe in some kind of ode to Isaac Asimov, kind of forgotten how to do certain things, and they’re incredibly dependent on the customer success functions of their software vendors, particularly in the ERP systems. And they don’t particularly know how to do it for themselves. But the AI vendors don’t have big, effective customer success infrastructure. They don’t know how. The consultancies need time to figure out what to do and how to do it, although some of them are beginning to gain some momentum. So this notion of getting companies to be able to customize it in that application layer, per Galloway’s point, is going to have to come together almost organically. And that will, of course, take time, as you suggest.
Newport: Or we could just wait for the vendors to catch up, which I don’t think is a bad scenario. I mean, I think, yeah, it does take time to figure out how this technology is best used, what it can do, what it can’t do. What are the best practices? There’s a sense of urgency, I think, among certain people in certain companies, like, we need to be in this right now. Much as in 1996, a lot of individuals and companies were like, “We have to be on the internet.” But, actually, we didn’t see that explosion until, I think, a lot of these vendor-produced tools became more successful and polished. I’m okay with it taking time. When individuals ask me, “What should I be learning about AI so that I stay competitive?” I say, “Here’s what the history of technology and commerce tells us: Unless you’re in a very specific tech-forward job, it’s not really your job to figure out what the killer app for this technology is going to be. When that killer app comes along, it’s going to make itself unmistakably apparent.” It’s like when email came along or Google came along. When the tools become a killer app, it’ll be self-evident.
Fuller: So over time, how do you see AI changing work, especially for the type of workers that you’ve had in mind in your research, those white-collar, highly credentialed people who are surrounded by this technology and in the pseudo-productivity trap?
Newport: I’m most interested in what the second generation of changes could be. So the first generation is, like we’ve talked about, a natural language interface to the software tools you’re already using. The second generation, I think, is where things get interesting, which is where you could potentially get this model of what I think of as an AI chief of staff. Right now we have this highly interruptive, hyperactive, hive-mind interaction, where we have to constantly be tending to all of these ongoing conversations to keep information flows moving, which is a disaster cognitively. It makes us miserable and terrible at our jobs. If that could be outsourced to AI agents—so my AI agent can talk to your AI agent to get the information I need when I need it—that really could be a major change to what knowledge work jobs are like, and I think it would also be an exponential productivity boost. If work moves more toward that, much less time on the administrative overhead of my obligations and much more time spent on the actual obligations themselves, that really would be a revolution. AI can’t do this yet. The problem is this requires a planning capability, which requires keeping state and simulating the future, none of which is possible in a language model. You need federated models; you need a different ensemble of model types. But I know it’s a direction we’re probably heading in. That second generation of impacts, that really could change the complexion of knowledge work.
Fuller: Historically, we’ve talked about a digital divide often based on economic class in the United States. Do you see any threat of that emerging related to AI? And we’ve seen some interesting data that utilization of AI is much higher among men than women currently. Is that just a temporary phenomenon or is there something more to it in your view?
Newport: I think that’s a temporary phenomenon, because these tools are so unpolished at this point. Again, it’s just bare-metal model access. Who is using these tools? The same people that 50 or 75 years ago might’ve been playing with ham radios, just sort of messing around with the different circuitry. Tech hobbyists tend to be more male than female. I think that’s where that is coming from. I actually think the polished versions of these tools, the ones that actually are going to have impact, could close that divide. They could close the class divide, in particular, because of a couple of things you get out of these tools. One, again, is unlocking advanced features without having to rely on hard-to-obtain advanced training. They also could minimize some of what you might think of as the social acclimatization, the class acclimatization: that sense of, I know how to navigate this sort of complicated office interaction that’s happening here. When you have tools that are helping you manage information flow and get information on your behalf, you don’t have to navigate things through fraught back-and-forth email conversations where exactly how you word things is going to matter, and you really have to understand the social context of everyone involved. That also opens up work, I think, to a larger group of people, not just people from a similar social and educational background. So I think polished AI tools will actually potentially increase access to tech-forward jobs. Right now, I think, again, it’s just hobbyists, though. That’s where that gap’s coming from.
Fuller: There are at least two schools of thought that one hears. One is associated with the current U.S. administration, but with the U.S. more generally: We don’t want to overregulate this technology; we want the technology to expand. It’s contrasted with the EU’s approach, which seems to be much more attuned to regulation, much more concerned about the evolution of AI, and largely unburdened by having an AI sector of their own to worry about. As a friend of mine says, “The easiest way to find yourself writing a billion-dollar check to Brussels is to be a successful U.S. tech company.” How do you view the balance between allowing significant innovation and ensuring that the technology doesn’t take on a life of its own, get out of control, or get used by bad actors in ways that go beyond even our current colorful imagination? What type of measures do we need to be taking?
Newport: Yeah, I think regulatory efforts here are confused and inadequate right now. One, I think we have to consider the incentives of the major AI companies in the U.S. Clearly, they want regulatory capture, and they want to create this sense that useful AI requires these super huge language models, that you need trillion-parameter models being trained in these billion-dollar data centers, and that there should be a complicated burden of regulation, because there are maybe three companies who can carry it. And so that pushes off competition. The reality—as I’ve been arguing for a long time and I think more people now accept, seeing what happened with DeepSeek coming out of China—is that academics who’ve been studying these models for years have known you don’t need a trillion-parameter model to get really useful use cases out of language models; that relatively small models optimized for the use case you care about, or that use some interesting optimizing heuristics like DeepSeek did—where, for example, you can chunk together more tokens so you don’t need as big a token window and its expensive computation, at a small trade-off in performance—can be really useful models that do most of the things we’ve been talking about as big productivity boosters. You don’t need a 30,000-GPU data center that only two companies can afford. For me to train a model, for example, that can make me really good at Microsoft Excel, like we talked about, that’s probably a 20-billion-parameter model. An academic could do it with a couple of GPUs on a computer. If I want a model that helps me write code, like we see with the Copilot plug-in for GitHub, that doesn’t need [GPT-4o] or whatever to do that. I mean, you train this on a bunch of code. It’s not that big of a model. The other thing I think is a little bit misguided is that we focused too much—here in the U.S. but certainly in Europe—on the specific form factor of a chatbot; our concerns got captured by it. We quickly locked into, “Oh, this is what generative AI is. It’s an oracle you talk to through a chat interface.” And then we got obsessed with making sure that the oracle on the other side of that chat interface behaves itself, and it doesn’t say bad things, and it doesn’t express itself in a way that is going to be objectionable, and it’s not giving bad information. It’s an accident that ChatGPT got that popular. OpenAI thought that was just a demo of their underlying language model, and the real business was going to be building custom applications on it. So this chatbot-centric approach to thinking about language model regulation, I think, has been a distraction, because five years from now, the problem is not going to be whether I was able to convince Google Gemini to say something racist, because no one’s going to be talking into a chatbot. It’s going to be integrated into software applications as a natural language interface. Those issues aren’t going to be relevant. The final issue is the question of it growing too big and getting out of control. That is an interesting issue. I don’t think we’re on that trajectory yet. I do think it’s an issue that may come up at some point. So there, I think, is more interesting. But all of these threads weave together, I think, into a common tapestry here, which is that the regulatory picture is entirely confusing right now.
Fuller: Well, it’s certainly interesting to see the whole field wake up to the fact that you don’t need a pile driver to make every hole you want to dig. I mean, the LLMs are these incredible tools, but they’re really pile drivers. And we can use the idea of these models at appropriate scale to create things much more affordably, much faster, and to reduce this risk of regulatory capture, which, I must admit, under the previous administration, I feared we were headed toward. Well, Cal, you’re great to spend time with us. I’m curious: What can we expect to hear from you in the future? Do you have any important projects you’re working on, or yet another book?
Newport: There’s always some sort of book project lurking here and there. I’m actually working on a book now that’s a bit of a one-off. Almost everything I write about is directly about some sort of technological impact we’re grappling with. I’m working now on a book about cultivating a meaningful life, and here is the connection: This is a topic that got big on my podcast during the pandemic. I recognized I can’t just be talking about how technology is keeping us from the rest of our life if people don’t have a super appealing rest of their life to be trying to optimize. The best tool to get people to use their phone less is actually making the analog part of their life much better. And then I’m doing a lot of journalism still, especially for The New Yorker. Around the time we’re recording this, I’m in the editing phases of an article, which I’m sure will be out by the time this episode comes out, about how Silicon Valley has grappled with employee productivity over the last 70 years, like, the whole history of them trying to figure out how to keep track of what their employees are doing to make sure that they are productive. Certain members of Silicon Valley, around the time that we’re recording this, are trying to give the impression that Silicon Valley has it all figured out, and if only the government could follow their lead, it would be better off. Turns out it’s a hard problem, and so I’m writing about that, the history of productivity in Silicon Valley, and a bunch of other projects as well. So I’m staying busy for sure.
Fuller: Well, that’s great, because we always benefit from your work and enjoy spending time with you. One last question: Can some of the technology we’re talking about actually be used to make your analog life better?
Newport: Oh, sure. I mean, I just wrote an article on my newsletter not long ago where I talked about, for example, how, if you leave X and Instagram and TikTok and go to the small corners of the internet—the parts of the internet that still function like the 1996 dream—there’s all this great community and discovery to be had. And I gave the example in this article of how the Washington Nationals baseball team had played their first spring training game. And I had the radio call on. And I had a blog open, a community blog. And there’s this group of 20 or 30 characters who just gather on that comments thread and talk to each other as the game unfolds. They know each other. And, “Hey, how’s it going?” Or, “How’d your kid’s thing go?” And it’s just lovely. And it’s very warm, and it’s very connective, and it was enabled by the internet. Technology isn’t necessarily going to make your life worse. The problem is, if we’re going to be good techno-selectionists—to use a term that I like to lean on—we have to be willing to say, “Okay, here’s a place where technology is causing a problem. I’m willing to radically change how I engage with it over there,” but also find the places where it’s making a big difference. For too long, we’ve just shrugged our shoulders when the bad tech is dominant and said, “This is just what tech is. What am I going to do, be a Luddite?” But we can be more fine-tuned than that. And I love the internet, I love technology. I’m a computer scientist. I just don’t like bad technology. And so this has been kind of my rallying call: The internet, your phones, computers, this can all be part of a really rich life, but you just have to use them on your own terms.
Fuller: Well, Cal, we’re actually hopeful that Boston will have a competitive Major League team for the first time in a number of seasons this year. But you’ve left us with the hope that springs eternal and the hope that “good technology,” to use the term, can provide some answers to some of the questions we’ve been discussing. Thanks so much for joining us.
Newport: Oh, always a pleasure.
Fuller: We hope you enjoy the Managing the Future of Work podcast. If you haven’t already, please subscribe and rate the show wherever you get your podcasts. You can find out more about the Managing the Future of Work Project at our website hbs.edu/managingthefutureofwork. While you’re there, sign up for our newsletter.