WIRED Editor-in-Chief Nicholas Thompson interviewed YouTube CEO Susan Wojcicki on Tuesday at the South by Southwest festival in Austin, Texas. Here is an edited transcript of the talk.
Nicholas Thompson: So you have had a crazy year and a half. All the social media companies have had a crazy year and a half. Tell me about your evolution and your perception of what YouTube is as a platform, as you’ve had story after story, and crisis and response, congressional investigations—as all that’s gone on, how has your philosophy of what YouTube is changed?
Susan Wojcicki: YouTube started out with the tagline, “Broadcast Yourself.” And we modified it a little bit when I came to YouTube [in 2014]. We started talking about different freedoms that we believed in—freedom of expression, freedom of information, freedom of opportunity. And I think what this year has really shown is that sometimes those freedoms are in conflict with each other. Our goal with YouTube is really to be the next-generation platform: to deliver and distribute video globally, to do so with the latest technology, and with the broadest set of content. And one of the things that I’ve actually been thinking about recently is that we’re really more like a library in many ways, because of the sheer amount of video that we have, and the ability for people to look up any kind of information and learn about it. But I think what this year has really taught me is how important it is for us to get that right—to be able to deliver the right information to people at the right time. Now that’s always been Google’s mission, that’s what Google was founded on, and this year has shown it can be hard. But it’s so important to do that. And if we really focus, we can get it done.
NT: So you’re a very well-funded library with a lot of protestors and fires in the back.
SW: My grandmother, Jane Wojcicki, was actually a librarian at the Library of Congress during the Cold War. She headed the Slavic department. And I can’t imagine it was an easy time to figure out what books you actually hold during the Cold War in the Slavic department. And there have always been controversies if you look back at libraries. I mean, libraries celebrate the banned book. They have Banned Books Week. So there has always been, in the history of information, some information that people think other people shouldn’t have. And if you look at libraries, they actually have this bill of rights that corresponds a lot with our freedoms: that you should be able to offer information to everybody no matter what their background is, and that you should be able to distribute information, regardless of what that information is about, as broadly as possible, to enable the largest set of voices to be heard by the largest audience.
NT: So that’s very pro-free speech: maximize the amount of information available. But a lot of the tension around YouTube comes when it is fed inaccurate information, the equivalent of a library holding a book with something false in it. Let’s talk a little about misinformation and how you think about that, when the freedom to express something comes into conflict with the desire to have your readers, your viewers, accurately informed.
SW: This has been a year of fake news and misinformation, and we have seen the importance of delivering accurate information to our users. There was a lot of stuff happening in the world a year ago. And we said, look, people are coming to our homepage, and if something really significant happened in the world and we are just showing them videos of gaming or music and not showing it to them, then in many ways we’re missing this opportunity. We had this discussion internally where people said, you know, “What do those metrics look like, and are people going to watch that?” We came to the conclusion that it didn’t really matter. What mattered was that we had a responsibility to tell people what was happening in the world. So a year ago, we launched a few things. One of them was this top news shelf. So if you go to search, the information that we show at the top is from authoritative sources, and we limit that to authoritative sources. We also built it so that you can, for example, be in your home feed looking at news, gaming, music, and other content, and if something major happens in the world or in your region, we decide that we’re going to show it to you.
NT: What is authoritative?
SW: Being part of Google, we work with Google News. Google News has a program where different providers can apply to be part of Google News, and then we use a different set of algorithms to determine who within that we consider authoritative. And then based on that we use those news providers in our breaking news shelf, and in our home feed.
NT: And what goes into those algorithms? What are some of the factors you consider when deciding whether something is authoritative or not?
SW: We don’t release what those different factors are. But there could be lots of different things that go into it. These are usually complicated algorithms. You could look at like the number of awards that they have won, like journalistic awards. You can look at the amount of traffic that they have. You could look at the number of people committed to journalistic writing. So, I’m just giving out a few there, but we look at a number of those, and then from that determine—and it’s a pretty broad set. Our goal is to make that fair and accurate.
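She is explicit that the actual factors aren’t public, so any concrete version is guesswork. Purely to illustrate the kind of weighted-signal scoring she gestures at, here is a minimal Python sketch in which every field name, weight, and threshold is a hypothetical stand-in, not YouTube’s formula.

```python
from dataclasses import dataclass

@dataclass
class Publisher:
    # Hypothetical signals of the kind Wojcicki mentions; not YouTube's real inputs.
    journalism_awards: int   # count of recognized journalism awards
    monthly_traffic: float   # traffic, e.g. monthly visits in millions
    newsroom_staff: int      # people committed to journalistic writing

def authoritativeness_score(p: Publisher) -> float:
    """Toy weighted combination of publisher signals.

    The real system is described only as 'complicated algorithms' over a broad
    set of factors; these weights are made up for illustration.
    """
    return (
        2.0 * p.journalism_awards
        + 0.5 * p.monthly_traffic
        + 1.0 * p.newsroom_staff
    )

# A publisher would only feed the breaking-news shelf if its score cleared
# some threshold (again, a made-up number).
AUTHORITATIVE_THRESHOLD = 50.0

def is_authoritative(p: Publisher) -> bool:
    return authoritativeness_score(p) >= AUTHORITATIVE_THRESHOLD
```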
NT: It’s super complicated, because we don’t want to over-weight established places and make it harder for a new place to come up. Facebook has started evaluating outlets based on how trustworthy they are by giving out surveys. And one of the obvious problems is that if you give out a survey and ask, “Is that trustworthy?” about an outlet people have never heard of, they won’t say yes. And that makes it harder for a startup journalistic entity. YouTube is, of course, the place where people start, so that’s tricky.
SW: It is tricky. There are many factors to consider. But the other thing we want to consider here is that if there’s something happening in the world, and there is an important news event, we want to be delivering the right set of information. And so we felt that there was a responsibility for us to do that, and to do that well. We released that a year ago. But I think what we’ve seen is that it’s not really enough. There continues to be a lot of misinformation out there.
NT: So I’ve heard.
SW: Yes, so you’ve heard. And the reality is, we’re not a news organization. We’re not there to say, “Oh, let’s fact check this.” We don’t have people on staff who can say, “Is the house blue? Is the house green?” So really the best way for us to do that is for us to be able to look at the publishers, figure out the authoritativeness or reputation of that publisher. And so that’s why we’ve started using that more.
So one of the things that we want to announce today, that’s new and will be coming in the next couple of weeks, is that when there are videos around something that’s a conspiracy (we’re using a list of well-known internet conspiracies from Wikipedia), we will show, as a companion unit next to the video, information from Wikipedia about that event.
NT: YouTube will be sending people to text?
SW: We will be providing a companion unit of text, yes. There are many benefits of text. As much as we love video, we also want to make sure that video and text can work together.
NT: I love them both too.
SW: Yes, you must love text—as a writer.
So here’s a video. Let’s see… “Five most believed Apollo landing conspiracies.” There is clear information on the internet about the Apollo landings. We can actually surface this as a companion unit; people can still watch the videos, but then they have access to additional information, and they can click off and go see that. The idea here is that when there is something that we have listed as a popular conspiracy theory, we have the ability to show this companion unit.
NT: So the way you’ll identify that something is a popular conspiracy theory is by looking at Wikipedia’s list of popular conspiracy theories? Or you have an in-house conspiracy theory team that evaluates…and how does someone in the audience apply to be on that team? Because that sounds amazing.
SW: We’re just going to be releasing this for the first time in a couple of weeks, and our goal is to start with the list of internet conspiracies where there is a lot of active discussion on YouTube. But what I like about this unit is that it’s actually pretty extensible: you can watch a video where there’s a question about the information, and we can show alternative sources for you, as a user, to look at and to research other areas as well.
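Neither the matching logic nor the format of the topic list is described beyond “a list of well-known internet conspiracies from Wikipedia,” so the following is only a toy sketch of the idea: check a video’s title against such a list and, on a hit, attach the corresponding Wikipedia article as the companion unit. The topic list, keyword matching, and example are assumptions for illustration.

```python
from typing import Optional

# Hypothetical topic-to-article mapping; the talk only says the list of
# well-known internet conspiracies is drawn from Wikipedia.
CONSPIRACY_TOPICS = {
    "apollo moon landing": "https://en.wikipedia.org/wiki/Moon_landing_conspiracy_theories",
    "chemtrails": "https://en.wikipedia.org/wiki/Chemtrail_conspiracy_theory",
}

def companion_unit(video_title: str) -> Optional[str]:
    """Return a Wikipedia URL to show next to the video, or None if no topic matches."""
    title = video_title.lower()
    for topic, url in CONSPIRACY_TOPICS.items():
        # Naive keyword matching; a production system would need far richer signals
        # (tags, transcripts, channel history) to decide when to attach the unit.
        if any(word in title for word in topic.split()):
            return url
    return None

print(companion_unit("Five most believed Apollo landing conspiracies"))
# -> the Moon-landing conspiracy article URL, in this toy example
```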
NT: Let’s shift into something that came up this weekend. There’s a piece by a woman named Zeynep Tufekci who wrote in the New York Times about what she called the radicalization on YouTube. Her theory is that if you start to watch YouTube videos about running, you’ll eventually be led to ultra-marathons. If you start to watch videos about vegetarianism, you will be led to videos about veganism. If you start to watch videos about politics, you will eventually be led to videos about chemtrails. Is she right? And if she’s right, is that wrong? And she says very clearly in her piece that she doesn’t think YouTube’s a conspiracy—YouTube’s not trying to make us all ultra-marathoners. It’s just that that’s what people seem most interested in, and that’s how the algorithm works.
SW: We designed our recommendation system with a couple of different components in mind. One of them is that if you’re looking at one set of information, or you’re enjoying music, or here at SXSW you’re listening to one artist, we actually show you related songs, related artists, something that’s related to what you’re watching. Say you’re doing crafts. You’ve done one craft, here are a few more crafts associated with that. So that’s one principle that we have used, and I think she alludes to that more from a radicalization standpoint, but there is something to the fact that we show you a lot more content within the genre you’re currently looking at. But then there’s a second category of ways that we think about recommendations, which is about diversity. I’ve heard our engineers talk about it like you’re a chef and you’re providing a buffet. We don’t know what you necessarily always want; maybe you want to have a salad, but maybe you want to have the meat, and if so, here are a number of different options. So there’s both the consistency of keeping you within the genre, but we also want to know about you and the other things that you’re interested in, and keep that diversity there. I think some of the challenges come up—and were pointed out in this article—particularly when it comes to news or to politics: the concern about seeing only content within your own area. That’s definitely an area where we see an opportunity for us to figure out how we can continue to diversify the content that you’re seeing, continue to improve the recommendations, and rely on the authoritativeness of the publishers. There are many things we’re doing to continue improving how our recommendations work.
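Her description boils down to two pulls: keep showing content related to what you’re watching, and reserve part of the slate for the “buffet” of other interests. As a rough sketch of that idea only (the field names, slate sizes, and relevance scores below are invented, not YouTube’s recommender):

```python
from typing import Dict, List

def build_buffet(candidates: List[Dict], current_genre: str,
                 slate_size: int = 10, diverse_slots: int = 3) -> List[Dict]:
    """Fill most of the slate with videos related to what the user is watching,
    then reserve a few slots for other genres the user has shown interest in.

    Candidates are dicts like {"title": ..., "genre": ..., "relevance": float},
    where "relevance" stands in for whatever relatedness score a real system computes.
    """
    ranked = sorted(candidates, key=lambda v: v["relevance"], reverse=True)
    same_genre = [v for v in ranked if v["genre"] == current_genre]
    other_genres = [v for v in ranked if v["genre"] != current_genre]

    slate = same_genre[: slate_size - diverse_slots]   # stay within the genre
    slate += other_genres[:diverse_slots]              # the "buffet" of other interests
    return slate[:slate_size]
```

The tension Tufekci’s piece points at lives in exactly these knobs: how many slots the buffet gets, and how the relevance scores are computed.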
NT: So let’s assume that Zeynep’s hypothesis is true, and let’s assume that there were a way to not have that happen—to not have people become more radicalized in their political views, to have people not turn into ultra-marathoners, to not have people eventually lead down a path to chemtrails…but that would lose you money. How do you decide whether to make those changes or not?
Because Zeynep’s hypothesis is that the reason this happens is that YouTube makes money based on the amount of time people spend. It’s not that simple, because you have different CPMs and different categories. But it’s what people have shown their interest to be, so you get more time with the user if you do this—if you ultimately take them one step further, one step further, one step further. So if you were to make a change that had people spend less time, in general it would probably lead to less advertising revenue for the platform.
SW: When we make changes to recommendations, there’s nowhere in our formula that assesses the business impact that comes with the change. We want to do the right thing for the user. That’s easy to say and hard to do. What does the user really want to do, and what should we do? What metrics should we use that are associated with doing the right thing for the user? When we first started, we looked at clicks: is the user clicking on the video? And then we realized that’s not great, because there’s a bunch of clickbait, so that’s not a good solution. So we thought maybe we should use the number of views, which is similar to clicks, and we discarded that. Afterwards we started using watch time, because watch time means that the user has actually engaged with the video at that point. And then we also realized it is possible for users to watch a video and engage, but not be satisfied with the video. I have compared this to junk food. You go somewhere, and you eat a bunch of donuts, which I love, but afterwards you might not feel like that was a great idea, that you just ate all those donuts. So we started doing surveys, we started understanding our users much better, and we included satisfaction. And there was actually a very clear case where we had a bunch of content that was very clickbait-y, and it was coming up in our system. We built a classifier, which is a machine-learning way to identify it. What we saw is that our users were actually happier and coming back to us more when we removed this content. And the question is, what are the metrics that we should be optimizing on in the future, in addition to the satisfaction of the users?
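Read as an objective function, the progression she describes goes from click probability to watch time to watch time tempered by survey-based satisfaction. A toy version, purely to make the shift concrete; the inputs and the multiplication are assumptions, not YouTube’s formula:

```python
def ranking_score(expected_watch_minutes: float, predicted_satisfaction: float) -> float:
    """Toy objective reflecting the shift Wojcicki describes.

    Ranking on clicks rewarded clickbait, so clicks are dropped entirely here;
    watch time rewards engagement, and a satisfaction estimate learned from
    user surveys (scaled 0-1) discounts videos people regret watching.
    """
    return expected_watch_minutes * predicted_satisfaction

# Clickbait-y video: decent watch time, low reported satisfaction.
print(ranking_score(4.0, 0.2))   # 0.8
# Slower video that viewers actually value.
print(ranking_score(6.0, 0.9))   # 5.4
```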
NT: But what if the satisfaction of the user is optimized by radicalized content? Then what do you do? Where does morality come in in your own sense of, actually we just shouldn’t send them down that path?
SW: I have two answers for you. First of all, what we’re starting to do is to build in not just satisfaction, which is where we are right now. We’ve also been building in this concept of responsibility. We’re still in the process of figuring out exactly what that means, but that could be a variety of different factors that we look at, from the authoritativeness of the publisher, to the diversity of the content, to educational value; there could be a whole bunch of different ways. But it’s clearly difficult and complicated. The other way we look at that is we say, look, if people are moving from being a vegetarian to a vegan, does that really matter? Also, look, ultra-marathons are great. Who else are you going to recommend them to? You’re going to recommend them to someone who’s a marathoner, realistically. Because people are not going to go from the 1K to the ultra-marathon, right? So some of those make sense. Where the value is for us to invest first is really in understanding news and politics, and in making sure that that is an area where we’re showing a diversity of content and showing authoritative sources.
NT: Going back to the food analogy, because I think it maybe works as an analogy for politics. What if it wasn’t vegetarianism to veganism, but it was eating cookies, to eating donuts, to eating ever more extreme unhealthy foods? Where does your sense of, wait, that’s not what people should be doing and we’re leading people down that path, kick in?
SW: In general, we don’t want it to be necessarily us saying, we don’t think people should be eating donuts, right. It’s not our place to be doing that. That’s why we started using the surveys and the satisfaction that we got back from the users. Our goal is to think about how we build a systematic, scalable way of getting feedback from our users, figuring that out and based on that building an algorithmic system to be able to enforce that. And what we saw from our users is they were saying they were not satisfied after they saw this clickbait-y content.
NT: You’ve just hired 10,000 people to help with some of these issues. What is the balance between what the humans will do and what the machines will do? Because it’s kind of rare for Google—10,000 people is a lot of people.
SW: Yeah, 10,000 people is a lot of people, but we do need those people. About a year ago today, we started working on making sure we were removing violent extremist content. And what we started doing for the first time last year was using machines to find this content. So we built a classifier to identify the content, and lo and behold, once we started using this machine classifier, the machines found a lot more content than we had been finding before.
NT: And they’re looking for abuse, jihadism?
SW: There could be a whole set of content that violates our policies. But once they started finding this content, we were like, wow, we need people to review this. And so those people start reviewing the content to make sure: is this content violating the policies or not? Then we ask, are the machines doing a good job? When the machines don’t do a good job and the humans can say that, we can say, let’s change the algorithm based on what the humans just learned and on the data. What’s actually removed, and what’s actually good and stays because it’s not violating the policy, will then get fed back to the machines so that the machines can get smarter.
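The loop she describes, where machines flag, humans review, and the human decisions feed back into training, is a standard human-in-the-loop pattern. A schematic sketch under that assumption, with made-up function shapes and a made-up threshold rather than anything YouTube has disclosed:

```python
from typing import Callable, List, Tuple

def review_and_retrain(
    videos: List[str],
    classifier: Callable[[str], float],    # returns probability of a policy violation
    human_review: Callable[[str], bool],   # True if the reviewer confirms a violation
    flag_threshold: float = 0.8,
) -> List[Tuple[str, bool]]:
    """One round of the loop Wojcicki describes, in schematic form.

    1. The machine flags likely violations.
    2. Humans review each flagged video.
    3. The human decisions become labeled data to retrain the classifier.
    """
    flagged = [v for v in videos if classifier(v) >= flag_threshold]
    labels = [(v, human_review(v)) for v in flagged]
    # In a real pipeline these labels would be appended to the training set and
    # the classifier refit, so the machines "get smarter" over time.
    return labels
```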
NT: So, the humans are training the machines who will eventually replace all the humans?
SW: Well, I think we’ll always need humans. Even right now, with violent extremism, we remove 98 percent of the content with machines. But we need humans to review it and to make sure that that’s being done correctly. And the other place we really need humans is for the experts: the people who can say, we understand what is happening here, we’re experts in this area, we can tell you how your policies should be written and the kind of content you should look for.
NT: Let’s talk a little bit about comments, because that’s a place where there’s an interesting balance. So Jigsaw, part of Google, has developed software to identify toxicity in comments and determine whether something is a toxic comment or not. Instagram has done something similar: using the same process of humans training machines, they have identified what is a nice comment and what is an un-nice comment. Instagram has decided to optimize for nice comments. What is your philosophy on how to sort comments and how to use the new technology to do sentiment analysis of comments?
SW: The first thing we’ve been focused on with comments is making sure that we’re removing comments that have inappropriate terms, that we think are harassing in some way. And we have the philosophy that what we do is we actually queue them for the creator to review and to decide, do they want to have these comments on their site or not? And so, again, we are using machines so that the creator can make that decision a lot faster. And we have seen a drop in the number of flagged comments.
We’re also introducing ranking into our system, to take into account which comments people have voted up or down. We have introduced a lot more of that ranking to try to surface at the top the comments that we think will be most relevant and useful for our users.
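She doesn’t say how votes turn into an ordering. One common, generic way to do it, shown here only as an illustration and not as YouTube’s method, is the lower bound of the Wilson score interval, which keeps a comment with two upvotes and no downvotes from outranking one with hundreds of mostly positive votes:

```python
from math import sqrt

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the upvote proportion.

    A standard vote-ranking heuristic: comments with many votes and a high
    upvote ratio rank above comments with a perfect ratio but almost no votes.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    phat = upvotes / n
    return (phat + z * z / (2 * n)
            - z * sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)) / (1 + z * z / n)

# A comment with 2 upvotes and 0 downvotes ranks below one with 80 up / 20 down.
print(wilson_lower_bound(2, 0))    # ~0.34
print(wilson_lower_bound(80, 20))  # ~0.71
```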
NT: How do you think comments will be sorted in five years on YouTube?
SW: Comments are an important part of the YouTube experience. It’s the main way that the creator and the fans speak to each other. There’s a whole culture around it. So if you think of comments as the way the fan and the creator communicate with each other, how can you make them a whole lot better? Unfortunately, we’re still at the basics of ranking and filtering out the inappropriate comments. But I would really like comments to be much more time-based, for example. If someone’s watching this video, they can say, well, this was really boring, but this part was really exciting, and I have a lot to say about what Nick and Susan said at minute 12. You see that in the comments, where people will actually focus on a specific moment in time. We have prototypes where we actually scan all the comments, and then we can say, here are the 10 different topics that were discussed here; you’ll care about topic X, go here and see all the comments discussing just that. I think we can get a whole lot better with comments, it’s just going to take time.
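The prototype she mentions, scanning all the comments and surfacing the topics discussed, could be approximated many ways; here is a minimal sketch using off-the-shelf scikit-learn clustering, which is an assumption for illustration, not a description of YouTube’s prototype:

```python
from typing import Dict, List
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def comment_topics(comments: List[str], n_topics: int = 10) -> Dict[int, List[str]]:
    """Group comments into rough topics so viewers can jump to the ones they care about."""
    vectorizer = TfidfVectorizer(stop_words="english")
    vectors = vectorizer.fit_transform(comments)     # bag-of-words comment vectors
    n_clusters = min(n_topics, len(comments))        # can't have more clusters than comments
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(vectors)
    topics: Dict[int, List[str]] = {}
    for comment, label in zip(comments, labels):
        topics.setdefault(int(label), []).append(comment)
    return topics
```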
NT: Let’s talk a little bit about your education videos. Is there any system you have for evaluating—and let’s say it’s about, I don’t know, how to vacuum your carpet or fix your blinds—is there any system you have for finding out whether it’s actually good advice or bad advice?
SW: We do a billion views a day of education content. Everybody has a YouTube story of what they learned using YouTube, they fixed something, they did the bowtie, they fixed their washing machine, they fixed their car, they learned a new sport. I think you’re asking how do you think about it being authoritative? How do you think about it being accurate? And at the end of the day that probably has to come through some kind of reputation system. Understanding the publisher, understanding the feedback, understanding the comments that are there as well. Is it linked to other sites that we know are authoritative?
NT: Do you use those? I was teaching my kid how to throw a football. So, we looked up on YouTube: the best way to hold a football when you’re throwing it. And then there’s one with 600,000 views so you click on that. But that may just be because it had a lot of views so it’s risen to the top in the algorithm. Is there any way in the future that you will have humans or machines that really do analyze and say, “This is the way Nick should teach his kid how to throw a football”?
SW: I would hope with our systems, as we understand who’s really authoritative, who are the experts, we have an ability to better surface that. The good news is that throwing a football, that should probably be from a professional. And there are professional football players on YouTube throwing footballs. The good news is there’s not a lot of incentive to not do the educational part correctly. Most people are doing this out of the goodness of their heart. I’m so thankful to these creators who have posted and said, no matter what car you have, how old it is, and you want to replace a lightbulb somewhere in that car, there is a YouTube video telling you how to do that.
NT: News just broke yesterday that Facebook is going to bring news into Watch. So, Facebook will be going into news video. Is this a competitive threat that interests you? I’m seeing that somebody in the audience has said, “Facebook is looking to expand their video platform, and they’re promising to pay the creators better than YouTube.”
SW: Our goal has always been to have news videos on our platform. The way I think about it with creators is that it’s our job to find a way to give them the most promotion, the greatest number of views, and the highest income. And if someone does that better than us, then yes, they will go to that other platform. Or they’ll do it in addition to YouTube.
NT: What do you think YouTube is going to be like in five years? The growth over the last five years has been crazy. What are some things that you think are going to change in five years in how it’s used and what it looks like and what it does?
SW: I think it will continue to grow the way it has, with even more video, even higher production quality, and a larger, more diverse set of content from all over the world. But we’re also really investing in the idea of community and communication. And so I’m really hopeful that in the future we have very deep discussion between the fan and creator communities. What differentiates YouTube is that it can be this two-way conversation: if I care about a creator, how can I communicate with them, how can I post my questions to them, have them respond, have them go live, be able to have a sponsorship because I love them so much, get special live sessions, live communication, access to special videos with them, and be able to communicate with other fans who love the same creator? And we’re also really investing in AR/VR. TV was this very distant relationship between creators and fans, and now we have a closer connection. Will it be even closer in five years, where you can have much more dynamic communication, and VR in some ways will make you feel like you’re there?
NT: Let’s take it back to politics, where we started. YouTube’s first political influence was, I think you probably remember, the Macaca video, which was somebody filming George Allen saying something offensive. Then we had the 47 percent video. In fact, just yesterday we had that hilarious example of the woman in China rolling her eyes during the conference. So it was known as a place where you get funny, surreptitious videos. In five years, is its role in politics going to be still that, just more of it? Or will it be more like 60 Minutes-type shows? As you seem to be going more toward authoritative content, which side will win out, or will both: the person with the cellphone or the produced show with the talking heads?
SW: I think it will be both. I think that’s actually the amazing part of the platform, is that you don’t need to choose. And so, I would hope that in five years you have all this mobile video that people have uploaded of different discussions that they’ve had with candidates, but then you also have the candidates on the platform as well talking about what their platform is. And you would be using our tools and services for candidates to be communicating with potential voters, and asking questions, or doing sessions, or doing live town halls. And so, I think there is an opportunity to both have the candidates using it as well as just mobile uploads.
A video of the full conversation can be found here.