Sam Schillace’s breakthroughs in collaboration technology and engineering leadership have helped transform the way we work. In this episode, the Microsoft CVP and Deputy CTO discusses his experiences developing (and using) AI tools, and how they have the potential to shift our productivity paradigm. He also shares his perspective on what leaders need to do to foster creativity and innovation, and how they can stay ahead of the curve in a moment of technological disruption and transformation.
Schillace is the second guest for season 5 of Microsoft’s WorkLab podcast, in which host Molly Wood has conversations with economists, technologists, and researchers who explore the data and insights about the work trends you need to know today—from how to use AI effectively to what it takes to thrive in a digital age.
Three big takeaways from the conversation:
Schillace believes that we’re on the cusp of a transition that could be as profound as the one at the dawn of the computer age, and the ability to use conversational speech with AI is a key reason. “We think the computer is a tool for helping us, but a lot of what we do is we are really helping the computer do stuff. If you don’t think that’s true, tell me how often you spend time trying to fix the formatting, not understanding why it’s not working right.” He says we’ve had to learn the syntax computers understand because “we had to teach the computers to do stuff. But now we’re moving to this more semantic realm where the computer can have context, it can know what you’re doing—it can actually help you instead of you helping the computer.”
Schillace encourages leaders to approach this moment of technological disruption and opportunity with open-mindedness and a growth mindset. “Being able to know what you don’t know, being able to ask questions in an environment where you have low information and be aware of things like biases and preconceptions that prevent you from getting good results [out of an AI tool], I think is useful. I think a growth mindset is going to be much more important now than it’s ever been.”
Schillace reflects on his own record of innovation and notes that breakthroughs often come from unexpected places and a positive approach. “Don’t ask the why not questions. What if is a better question,” he notes. “What does the world look like if this works? And if the what if question is compelling, then you work through the why not problems…the real prize comes from being optimistic and right. And the real penalty comes from being pessimistic and wrong.”
Follow the show on Apple Podcasts, Spotify, or wherever you get your podcasts.
Here’s a transcript of the episode 2 conversation.
MOLLY WOOD: This is WorkLab, the podcast from Microsoft. I’m your host, Molly Wood. On WorkLab, we hear from experts about the future of work, from how to use AI effectively to what it takes to thrive in a digital age. Today, I am very excited to be talking to Sam Schillace, who has been transforming the way we work for decades.
SAM SCHILLACE: Don’t ask the why not questions, ask what if? What if is a better question: What if this works? What does the world look like if this works? And if the what if question is compelling, then you work through the why not problems to get there. So, what if I could transform my business in a certain way? What if I didn’t need to make this kind of decision? What if this process, which is very manual, could be automated?
MOLLY WOOD: His trailblazing breakthroughs in collaborative software and engineering leadership have led him to his current role as Microsoft CVP and Deputy CTO, where he focuses on consumer product culture and the next phase of productivity, which are two topics that are pretty near and dear to our hearts on the WorkLab podcast. Here’s my conversation with Sam.
[Music]
MOLLY WOOD: So lots of people are saying that AI tools like the Bing Chat chatbot and Microsoft 365 Copilot are game changers for how we work. What are your thoughts on that?
SAM SCHILLACE: Yes and no. I find a lot of parallels to the beginning of the internet in the current moment. If you’re a practicing entrepreneur, programmer, or whatever, you could see that the world was going to change a lot. It wasn’t entirely clear which things were going to matter. Nobody knew what a social network was going to be. We didn’t have smartphones yet. It was hard to build websites, we didn’t really have the cloud yet… I mean, you can go on and on, right? I kind of feel like we’re in that moment with AI, like, clearly the world is going to change. Clearly, this is a very powerful and important programming tool. Clearly, there’s a lot of stuff to be done and a lot of new capabilities that are reachable now. But I still think it’s kind of early days, like we’re still trying to figure out how to put the pieces together. Yes, it’s going to massively change a lot of things. I don’t think we entirely know how yet. And I think we have a lot of both programming practices and tool chain to build still before we really understand it deeply.
MOLLY WOOD: You’ve written about and observed that as platforms emerge, we have a tendency to get stuck in old paradigms. We just use tools or programs the same way we always have, even though there’s technology that lets us do so much more. Can you talk a little bit about that, and how it’s tended to play out over time, and what it tells us about our current AI moment?
SAM SCHILLACE: I mean, I think it’s a very natural place to be. It’s hard to jump more than one or two jumps at a time conceptually, for anyone, for good reasons, right? So, you take a thing that is working and you iterate a little bit, you mutate it a little bit. And so I think that’s a natural thing to do to begin…
MOLLY WOOD: Well, you have a personal example. You founded a start-up decades ago that created what became a whole new kind of always-on interactive document. But at first, you and your colleagues, and even early users, couldn’t really get the full potential out of it. Can you talk about that evolution?
SAM SCHILLACE: Yeah, originally, it was really just a copy of the desktop. It took a few new affordances from the new world. It took ubiquity, you know, so it was always on, always there. And we did collaboration, because that was a new capability that you could have because you’re connected; it kind of took advantage of this. But we didn’t completely reinvent what a document was. Now that we’re used to these documents being more virtualized and abstracted, now we’re ready to go another step and maybe think about them not being static anymore. Maybe they’re fluid, maybe they’re something you talk to, maybe there’s actually a live thing that reconfigures how it looks and what’s inside—it’s fuzzier, things like that. And that’s the beginning of taking what we have now and adding one or two pieces of the affordances of the next platform, which is the AI platform. What happens is, you know, companies work through that, engineers work through that, one step at a time. You do one thing and it makes sense, and then you do another thing, and it makes sense. And then you kind of build on those. So I think that’s the other thing that happens a bit, is like, you try things that are new to the platform, and then you find problems that are new to the platform, and then you have to go solve those problems. And that’s how the solutions sort of evolve.
MOLLY WOOD: You are, I believe, one of the earliest users of Microsoft 365 Copilot, which is in a, no pun intended, pilot phase. Can you talk a little bit about how you’re seeing maybe a similar evolution, how it’s already maybe starting to change the way that you think about documents or—you know, you’re in such a perfect position to imagine where it could go in the future.
SAM SCHILLACE: Yeah, there’s this really interesting thing going on. I think we’re actually kind of at the beginning of the second version of the computer industry entirely. The first version of it was largely about syntax and these fixed processes, because we had to teach the computers to do stuff. But now we’re moving to this more semantic realm, where the computer can have context, it can know what you’re doing, it can actually help you instead of you, the person, helping the computer, which is a lot of what we do. A lot of what we do, even though we think the computer is a tool for us, we are really helping the computer do stuff, and like, if you don’t think that’s true, tell me how often you spend time trying to fix the formatting, you know, not understanding why it’s not working right, or whatever. So I think the natural next stage of evolution for the copilots is in that direction of fluidity, in the direction of helping, away from these fixed static artifacts and more towards, well, what do you need? What are you trying to do? Oh, I need to do this presentation, or brainstorm this thing with me. Oh, I need to cross back and forth between what we thought of as application boundaries—I need to go from Word to Excel, I need to build some, you know, decision or some process, I need to work with my team. I think that’s where we’re heading. Right now, if I gave you a document and I said, this can never be shared in any way—you can’t email it, you can’t collaborate on it, you can’t put it on the web—it would just be this weird, anachronistic thing. Like, why is that? Why would I want that? You know, documents are for sharing, collaborating. Non-online documents seem very anachronistic now. I think non-smart applications and documents are going to seem anachronistic in exactly the same way, in not very long. Like, why would I work with something where I can’t just tell it what I want?
MOLLY WOOD: Well, as documents and AI tools like Copilot get smarter, what sort of new capabilities are unlocked?
SAM SCHILLACE: We do these interesting things right now that are just a tiny little baby step in this direction. So we’ve been working on this project that we call, internally, the Infinite Chatbot. So it’s a chatbot, like any other copilot, and it just has a vector memory store that we use with it. And so these things are very, very long-running. Like, we have one that’s been in existence for six months that’s teaching one of the programmers music theory, and he talks to it every morning and it gives him ideas for what he can practice in the evening.
MOLLY WOOD: Oh, wow. So it’s not just that it remembers what you’ve asked it before, it remembers about you.
SAM SCHILLACE: Well, it can see the whole conversation, it can see the timestamps, and it remembers anything you told it. And the way that the system works is, it’ll pull relevant memories up, based on what it infers your intention to be moment to moment in a conversation. But one of the things we like to do with these that works really, really well is, you tell it, I’m working on this technical system, I want to describe the architecture to you, and then we’re going to write a paper together. You can set them up, you know, you can control their personalities and their memories and stuff, and you set them up to be interviewers. And so they’ll interview you, they’ll talk to you and ask you questions about this technical system for a while. And that’s of course recorded, it’s got a chat history, so you can see all of it. But that chat history has populated that bot’s memory. And so the next person can come in and just ask questions. And so that’s now a live document. You can ask it, like, give me an outline of this architecture. So that’s like a very small baby step. I think where we want to take that is you have more of a canvas that you’re sitting and looking at, so that, rather than a linear flow, you can just say, show me this, show me that. So that, to me, feels like the beginning of a live document. A friend of mine was talking about how she has a bunch of information about her elderly father’s medical history and status, and it’s not really a linear fixed thing. It’s more like a cloud of related ideas and facts. There’s his actual medical stuff, and there’s maybe how he’s doing day to day, or maybe there’s some diet stuff mixed in there, his caregivers.
And you might want to look through different lenses at that, right, you might want to be able to talk to that document about like, well, he’s coming over, what’s a dinner we should have that we haven’t had for a while that will fit with his medical diet, or I need to talk to his, you know, let me review his blood pressure over the last two weeks with his practitioner, if he’s got the right permissions for that. So that kind of thing, it’s less of a static linear list of characters that never changes and more of a, if you will, like a semantic cloud of ideas that you can interact with that can get presented in different ways.
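The retrieval mechanism Schillace describes — conversation turns stored with timestamps, with the most relevant memories pulled up for each new message — can be sketched in miniature. Everything below (class names, the toy bag-of-words embedding) is invented for illustration; a production system like the one discussed would presumably use a learned embedding model and a real vector database, but the retrieval logic has the same shape:

```python
import math
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

def tokenize(text: str) -> list[str]:
    # Lowercase and strip punctuation so "practice?" matches "practice".
    return "".join(c if c.isalnum() else " " for c in text.lower()).split()

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; stands in for a learned embedding model.
    return Counter(tokenize(text))

def cosine(a: Counter, b: Counter) -> float:
    # Similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

@dataclass
class Memory:
    text: str
    timestamp: datetime

class VectorMemoryStore:
    """Every conversation turn is stored with a timestamp; for each new
    message, the most relevant memories are recalled by similarity."""

    def __init__(self) -> None:
        self.memories: list[Memory] = []

    def remember(self, text: str) -> None:
        self.memories.append(Memory(text, datetime.now()))

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.memories,
                        key=lambda m: cosine(q, embed(m.text)),
                        reverse=True)
        return [m.text for m in ranked[:k]]

store = VectorMemoryStore()
store.remember("Practice the circle of fifths tonight")
store.remember("Student struggles with secondary dominants")
store.remember("Dinner on Thursday with family")
# The music-practice memory ranks first for a music-related question.
print(store.recall("What music theory should I practice?", k=1))
```

Because recall is driven by the inferred intent of the current message rather than by position in the chat, the same store can serve a six-month-running conversation or be handed to a new person who just asks questions.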
MOLLY WOOD: I don’t know how much of a sci-fi fan you are, but what you’re saying makes me think of the intelligent interactive handbook called “A Young Lady’s Illustrated Primer” in Neal Stephenson’s novel…
SAM SCHILLACE: Yes, The Diamond Age. Absolutely. It’s one of our North Stars.
MOLLY WOOD: It is?
SAM SCHILLACE: Yeah.
MOLLY WOOD: Because that’s what it sounds like. Apologies, listeners, if you have not read this, but you definitely should, because it gives you a sense of what we could be talking about here, this level of intelligence, the adaptation—a book that tells the reader a story, but can also respond to your questions and incorporate your suggestions. And it’s all super personalized in real time. And so, Sam, I think what you’re talking about with these live documents is the ability to, in a business setting, abstract away the time-consuming acts of creation, like, I don’t want to spend my time figuring out how to create a chart, right?
SAM SCHILLACE: Right. You want to express goals. So when I was talking about syntax versus semantics, that’s also expressing process versus expressing intention and goal. Syntax is about, I’m going to tell you how to do this thing one step at a time. That’s very tedious. You know, think about a simple example of driving to the store. If you had to specify in advance all of the steps of turning the wheel and pressing on the gas, you know, it’s brittle, it takes forever to specify, it’s very difficult. What you want to be able to say is, I want to drive my car to the store. And you want that for business, right? You don’t want to have to specify a business process, you want to be able to specify business intent. But the thing about the Primer from The Diamond Age: I joke with people that, with these highly stateful copilots, the stateful bots, I need to have a sign behind me that says it’s been this many days since I’ve accidentally proposed something I heard about in science fiction first. Because we’re constantly doing that. Like, there’s a thing in The Matrix about, now I know kung fu. And we actually do that: we have multiple agents that have different memories, and you can take the memory from one of them and give it to another one, read-only or read-write, and then that agent now knows both what it was trained on plus what that new memory has in it. There’s things like that.
MOLLY WOOD: You have taken a stab, a little bit, at publishing the process of refinement that could occur. You’ve got the Schillace’s Laws, a set of principles for large language model AI. One of them is, ask smart to get smart.
SAM SCHILLACE: Sure. So, first of all, somebody else called those “laws,” and I probably would have called them Schillace’s “best guesses at the current moment about writing code with these things.” But that’s a little bit hard to fit on a postcard. They’re just things we have observed trying to build software in the early stages of this transformation. Ask smart to get smart: one of the interesting things about these LLMs is that they’re big and high-dimensional in a way that you’re not used to. And so you might ask a simple question like, oh, explain to me how a car works, and you’ll get a simplified answer, because it’s sort of matching on that part of its very large space. And if you want to get a better answer out of it, you have to know how to ask a better question, like, okay, explain to me the thermodynamics of internal combustion, you know, as it relates to whatever, whatever. And I think that’s an interesting hint in the direction of what skills are going to be important in the AI age. I think you need to be able to know enough to know what you don’t know, and to know how to interrogate something in a space that you’re not familiar with to get more familiar with it. I think, you know, anyone who’s gone through college kind of understands that—you get to college, and the world is gigantic, and there’s all this stuff going on, and you don’t know any of it. You get these classes, you’re kind of swimming in deep water, and you have to develop these skills of making order out of that, and figuring out where the rocks are that you can stand on, and what questions you can ask, and what things you don’t even know, and all that stuff. So I think that’s fundamental to these systems, and I think a lot of people are not getting good results out of programming these because they’re expecting the model to do all the work for them.
And it can’t make that inference—you have to navigate yourself to the right part of its mental space to get what you want out of it. So that’s the ask smart to get smart.
MOLLY WOOD: I feel like that gets to a trust factor at work, too, which is you want to believe that the employee who’s interacting with this has asked three times—I’m actually a big fan of ask three times and then triangulate the answer from that, in real life and when dealing with AI. You need that in order to feel confident that the strategy you might be building on top of some of these agents is accurate.
SAM SCHILLACE: Yeah, I mean, I think there’s lots of examples starting to emerge of, you need to have good critical thinking or mental hygiene skills. There’s the example of the lawyer who got sanctioned; I think we all know about this guy. Some lawyer used ChatGPT to file his case, and it made up a bunch of cases. So, first of all, he didn’t check, which is a mistake. Second of all, when the judge challenged him, he doubled down on it and, you know, elaborated, which was also a mistake. That’s a good counterexample of putting too much trust in it and not using your critical thinking, right? The systems aren’t magic—maybe they’ll be magic eventually, but they’re not magic yet.
MOLLY WOOD: I think there’s this sense that, oh, this will save us all this time. But you still have to invest the time up front to get the product that you need.
SAM SCHILLACE: Well, and there’s different things, right? Some of it is saving time, and some of it is making entirely new things possible. Both can be happening in a situation, or only one. It may be that you’re much more capable of something, and maybe you can reach for a design point that you wouldn’t have been able to manage before because you couldn’t have kept all the points in your head, or something like that. Or, you know, I’ve got an old house in Wisconsin; it’s got a lot of spring water on the property, so it’s a good candidate for geothermal. I don’t know anything about geothermal, but I know enough about it to know which questions to ask. And I’ve been slowly designing a system, you know, with an AI helping me. I didn’t get to say, here’s my house, please design my geothermal system, but I am getting to explore the space and gain this new capability.
MOLLY WOOD: What do you think this does tell us about where employees and business leaders and managers should focus their efforts? What skills should we be developing in the workplace to make sure that these kinds of interactions are happening? Because it’s a big shift in thinking, you know, from how to interact with a dumb document to how to interact with a smart document, that’s a big leap.
SAM SCHILLACE: It is a big shift. Again, this is one of those things, it’s going to be hard to predict more than a little way down the road, right? There’s going to be a lot of changes that happen over time. What we know right now, I think, a little bit, is critical thinking is important, right? Being able to know what you don’t know, being able to ask questions in an environment where you have low information and extract information. And being aware of things like biases and preconceptions that prevent you from getting good results out of a system like that, I think, is useful—that kind of open-mindedness, growth mindset stuff. I think a growth mindset is going to be much more important now than it’s ever been. I think, you know, trying not to be attached to the status quo. It’s hard to get away from it. But I think having that mindset is really important. One of the things that I really like a lot and try to live as much as I can every day is, when we are confronted with disruptive things—and this is certainly a very disruptive thing—our egos are challenged, our worldviews are challenged. When your worldview is challenged, you kind of have this very stark choice of either I’m wrong or it’s wrong. And most people choose the it’s wrong path. And we’re good at telling stories, so we tend to tell these stories about why something isn’t going to work. I call these why not questions. There’s a lot of these why not stories—it’s not factually correct, it’s not smart, it made this mistake, I can jailbreak it. Those are all true, they’re real. But that doesn’t mean it’s never going to work. They’re just problems to be solved. So the question that I like to ask, and I think everybody should ask, to answer your question, is: don’t ask the why not questions, ask what if. What if is a better question—what if this works? What does the world look like if this works? And if the what if question is compelling, then you work through the why not problems to get there.
So what if I could transform my business in a certain way? What if I didn’t need to make this kind of decision? What if this process, which is very manual, could be automated with an LLM? Would that change my business? How would it change my business? That would be amazing. Okay, well, now I need to trust this thing. I need to be compliant, I need to do this and that—now I can do the why not. But the what if is the place to start.
MOLLY WOOD: Yeah, that’s the place to start today. As you’re starting to think about how to implement this, don’t jump to the end. I love it. I mean, you have said that actually, creative, interesting ideas almost always look stupid at first.
SAM SCHILLACE: Absolutely. They really do. One of my flags is if people call it a toy, you know, oh, that’s a toy, that’s never gonna work, or whatever. That’s always like, oh, that’s interesting. Like, that’s probably not a toy. Anything people dismiss as being unrealistic or being a toy, I’m almost always like, that’s okay, I can take a look at that, see what’s going on there.
MOLLY WOOD: So, big picture, before I let you go—what mindset should business leaders have when they’re looking ahead to a future with AI?
SAM SCHILLACE: You know, there’s not really much of a prize for being pessimistic and right; there’s not much of a penalty for being optimistic and wrong. So, the real prize is in the corner of the box that’s labeled optimistic and right. And the real penalty is pessimistic and wrong. So, you know, you can kind of do the game theory on this—the right place to be is optimistic and, you know, try lots of things. If you can, experiment a lot, have that what if mentality, and assume things are solvable rather than the other way.
MOLLY WOOD: Sam, thank you so much for joining me.
SAM SCHILLACE: Thank you. Glad to be here.
MOLLY WOOD: Thank you again to Sam Schillace, CVP and Deputy CTO at Microsoft. And that’s it for this episode of WorkLab, the podcast from Microsoft. Please subscribe and check back for the next episode, where I’ll be talking to Christina Wallace, a Harvard Business School instructor, a serial entrepreneur, and author of the book The Portfolio Life. We’ll talk about how leaders need to rethink skills and career growth in the age of AI. If you’ve got a question or a comment, please drop us an email at worklab@microsoft.com, and check out Microsoft’s Work Trend Indexes and the WorkLab digital publication, where you’ll find all of our episodes, along with thoughtful stories that explore how business leaders are thriving in today’s digital world. You can find all of that at microsoft.com/worklab. As for this podcast, please rate us, leave a review, and follow us wherever you listen. It helps us out a ton. WorkLab is produced by Microsoft and Godfrey Dadich Partners and Reasonable Volume. I’m your host, Molly Wood. Sharon Kallander and Matthew Duncan produced this podcast. Jessica Voelker is the WorkLab editor. Thanks for listening.
More Episodes
Futurist Amy Webb Shares the Most Plausible Outcomes for AI and Work
There’s a lot to look forward to if business leaders act out of excitement, not fear
Harvard Business School’s Christina Wallace on How AI Can Help Us Rebalance Our Lives
Advice for leaders on building a diversified “portfolio life,” and rethinking careers and growth for the AI-powered workplace.