
LISTEN — Class Disrupted S4 E17: How Testing Boundaries, and Embracing a Child-Like POV, Could Transform School Innovations

There’s a beauty to embracing a child-like mind and exploring the limits, to understand where new ideas work — and importantly, where they break



Class Disrupted is a bi-weekly education podcast featuring author Michael Horn and Summit Public Schools’ Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system amid this pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on Apple Podcasts, Google Play or Stitcher (new episodes every other Tuesday).

In this week’s episode, Michael Horn and Diane Tavenner grapple with a concept that pushes their understanding of the test-and-learn approach in education innovation and see the beauty of embracing a child-like mind in learning and exploring boundaries to understand where new ideas work — and maybe more importantly, where they break.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane.

Tavenner: Michael, this is our penultimate episode of Season 4. Which is just a word I love, so I always have to use it when I can. It has me feeling a little bit like, where has this year gone? I’ll admit that’s a bit of a welcome change to the last few years that have been pandemic years. I’m pretty sure each of those years, by this time, I was like, can school please just end as fast as possible? You’ll keep me honest about that one, but I think this is different.

Horn: Well, and I suspect you’re not alone on that. I think a lot of folks felt that way. But it feels like the rhythms are returning more and more to what they are, and what they were, rather. That any scar tissue, shall we say, of past years, for better and for worse, frankly, is fading more and more into people’s memories it seems, Diane.

Tavenner: Yeah. Which is why, among other reasons, I think it’s only fair that we share with our listeners that this time of year has us wondering a bit. We started Class Disrupted shortly after the pandemic began, as a way to make sense of what was happening in the world, in education and in our schools. Honestly, in hopes of turning that really horrible time into an opportunity for the change in our schools that we both advocate for and believe in, and think is imperative. Now the pandemic is officially over, and I’m almost officially finished with my 20 years of leading a school system. I mean, I guess the big question that’s coming up for us is, where do we go next, if at all, with our conversations?

Horn: Wow. That, if at all, sort of lands with a thud for me, Diane. But we’re being transparent here with our listeners. We always are, but we’re especially being so right now. Because frankly, I don’t think we’d be credible, Diane, if we weren’t asking ourselves first, should we continue doing Class Disrupted? Second, if we do continue it, and hopefully people are listening right now and being like, “No, keep on it.” I think the question then becomes, if we do, how might we iterate on it? I’ll be the first to say, selfishly, I really enjoy working together on this project with you. I love learning from you. I feel like every episode I come out a better human. 

So I’m not ready to end this. I also want what we’re doing to be relevant and purposeful and helpful to those that we’re trying to reach, our listeners. I would love it, again, selfish request, but for all you listening, drop us a note. Get us on Twitter, LinkedIn, wherever you’re connected to us. Email us. What are the big questions on your mind, where we can be helpful? What would you like to see from us, if we continue this? You can tell us to stop, also, that’s OK. Any and all suggestions, ideas, feedback, advice, all frankly welcome as we figure out what’s next here, Diane.

Tavenner: Yeah, Michael, we definitely would love to hear from folks. It’s one of the highlights when we hear from people. Certainly, we always take the feedback to heart and try to incorporate it. I think it’s made us better. This concept of being purposeful is really important. There’s got to be meaning and intent in what we’re doing. Everyone’s time is extraordinarily valuable, and we know that there’s a ton of things to listen to out there. What do you need? What do you want? What do you wish from us? I’m really curious to hear. Speaking of curiosity, Michael, yours is going to lead us into the topic we’re going to talk about today. 

All this year, we’ve been getting into the weeds and tracking the details of a pilot we’ve been running at Summit, which has helped us really think about innovation and continuous improvement and all of that. As we wrap up the year, we’ve both been reflecting on what we’ve learned and taking a big step back from it. You got curious about a concept from your co-author of Choosing College, Bob Moesta, and how it might be useful in my reflections on the year. So we’re going to dig in to red lines and green lines today.

Horn: I suspect people are like, what the heck is a red line, and what the heck is a green line? I thought red lines were things you saw on papers. First, to give folks a bit of context. Bob, along with Clay Christensen, they’re the originators of this jobs-to-be-done theory, which has kind of taken on a life of its own. It’s actually, I think, googled more than disruptive innovation at this point. It’s basically a theory that says, people buy things or switch behavior because they have a struggling moment in their lives, and they want to make progress. They aren’t so much buying what you offer, as they’re looking to make progress. So the focus is really on what is that progress, and designing around that. If you’re lucky enough to have designed your product to match the progress they want, then you could get hired. That’s sort of the simplistic way. It’s more complex than that.

I’m not actually sure, Diane, I’ve ever described it the way I just did, but I think it’s a really powerful window into product or service development. Bob had frankly a couple other unbelievable mentors besides Clay. I feel like if you step back and you’re like, “You got mentored by Clay Christensen,” I pinch myself. Bob also had W. Edwards Deming, of continuous improvement fame, whom you know well. He also had Genichi Taguchi in his corner, a Japanese engineer and statistician famous for developing the Taguchi methods for design and statistics. He sort of had an embarrassment of riches around him in that way. Diane, Bob for years has been telling me that the real key in innovation is not to test and learn from what works, but to learn when something fails. This has felt like a Yoda-like statement to me. 

Tavenner: That’s what people say all the time. People are always talking about learning from failure. I’m so glad we’re digging in, because it is Yoda-like. What’s he talking about?

Horn: Exactly. Frankly, I never fully understood. I’m like, you test and learn to figure out success. I would push him a couple times, and not get it. I’d just feel really dumb, and then I’d forget about it. He has his own podcast called The Circuit Breaker. In a recent episode, he was talking about this concept again. It finally clicked for me, while I was at a gym at ASU+GSV, of all places, the education conference. We’ll link to the episode, which is titled “Red Line Green Line Development.” Essentially, what Bob learned from Taguchi stemmed from what he observed when he was helping build cars for Ford, and comparing it to how Toyota and the other Japanese automakers designed cars. I’ll do my best to summarize, because I suspect people didn’t come in for an automotive lesson.

In essence, the Japanese at the time were doing far more prototypes early in the process of design than the Americans were. Whereas the Americans were basically just trying to figure out the one “best process” that worked, and then just codify it, and do it again and again. The purpose of the prototypes in Japan was in essence to learn, where were the boundaries of the things they were building, where things would actually break? They would basically create something that worked, but then they would start to tweak it to see, where won’t it work? They would build lots of prototypes. It was all about learning a lot early, early on in the beginning stages of development. Now, we talked a lot about failure here on the podcast this year, but this was like they were intentionally in some ways trying to fail early on, so that they could learn. 

It almost flips it on its head a little bit, so that you didn’t have failure at the end of this process. It was really, I also think, about causality and theory building. What led to what, in what circumstances, and where didn’t it work? In essence, the green line was that approach that Bob observed in Japan, where there’s a lot of frenetic activity in the beginning. No one’s waiting on anyone else. Everyone’s working in parallel, doing lots of experiments and prototypes. You’re working on something, and I say, “Gee, what are the variables you’re playing with, so I can learn about how it might impact my part of the project?” As you get closer and closer to launch, things actually start to calm down. It’s sort of an explore, explore, explore upfront, and then settle down. 

Whereas the red line approach is the opposite. It’s all about, just make it work. Projects start slowly. They’re done in series. I wait for you to design your part, and then I get started on mine, which is sequential. All the tests are around verifying that stuff worked, rather than learning how it didn’t. Then as launch approaches, things get super frenetic, which I suspect people are familiar with. The deadline’s looming. Stuff’s breaking, we’ve got to fix it. Lots of changes, and changes are a lot more expensive toward the end of a project than they are at the beginning. There’d be a lot of fixes then, even once we’ve launched. 

The big philosophical differences I think from this green line versus red line is that, in the red line, we’re designing experiments to prove our hypotheses. Whereas with Taguchi and the green line, we’re doing experiments with no sense of what will happen. We’re just learning and finding limits, and we have a certain humility around it. The way I created something on a past project, it might inform what I’m doing, but I don’t take that as ground truth. I’m using it to learn more, and what may or may not be relevant. It’s all about discovering the boundaries. Once I’ve built something, where does it break, Diane? Let me stop there. I’ll give some maybe examples later to make it more concrete, but I’m curious how that lands. 

Tavenner: Well, so much is coming up for me. We’ve been talking about this for several days now. As you were talking this time, I was like, oh, click, click, click. A whole bunch of ideas were sparking for me. The first one, and I’ll be quick about this. I always want to bring it back to students. As you were talking, Michael, I was like, “Oh, my gosh. The green line you’re describing is what it looks like when a project that students are doing is really well designed and working really well.” Because at the beginning, there’s this frenetic energy where they’re trying things, trying things, getting feedback, getting feedback. A lot of it’s wrong and not good, because the learning’s happening, they’re building the skills.

But by the time you literally get to the end of the project, there’s essentially nothing to “grade,” if you will. Because as a teacher, you’ve seen everything all along. You’ve seen the growth, you’ve seen it come together. It’s calmer at the end. Versus the reverse, which is what I think we see most in schools. Which is nothing, nothing, nothing. It’s very linear, very ordered, very slow. Then the night before something’s due, it’s like this cramming of trying to put things together. Then we wonder why it’s not good, and the learning is not there when you do that.

Horn: Wow. The focus is all on process in the latter, versus on the learning in the former. Wow.

Tavenner: Exactly. I mean we’ve been talking for days, and that was the first time it clicked for me. I was trying to really get a visual in my head of the difference between these two lines, when you were talking. Clearly I think what’s at the root of these two things is mindset. It’s so key. On the green line, you really have a growth mindset, or a childlike mind. It’s filled with wonder. It’s filled with curiosity. It’s absent of ego and certainty. You’re really just front-loading the work. Here’s the key. It’s going to be messy. It’s going to look messy, it’s going to seem confusing. I think we tend to try to get away from that in schools, and in life. 

But that’s actually where you trace the outlines of the absolute limits of what’s going to be possible, for both how you’re going to build the thing, or your process, but also what it is you’re building. I do think there’s two parts there that are important. That’s really what you’re doing first, is figuring out that boundary. People often use this analogy of the sandbox. We play in a sandbox. When I think about a sandbox, I’m like, “Yeah, you’re pushing the walls of the sandbox, and making it as big as you possibly can.” Then what you can do sort of fits within those boundaries. After you’ve done that, then you can figure that out. This brings up something for me that I’m curious about. 

Instinctually, I am a leader who really likes to be involved heavily upfront, and I think people who work around me would say this. I know they would say it, because I know they talk to each other and say, “Hey, whatever you do, make sure you get Diane involved upfront.” Don’t get too far down the line when we’re designing a new program or a project, because that’s where I really like to mix it up. As I think about that process, it’s very quick and messy, and there’s a lot going on. I, in my mind, always call it alignment. Once we then get aligned after a little bit, then I’m like, “Good, run with it.” We can check in. I’m very hands-off at that point. I’m thinking about those instincts I have now, and they feel green line-ish? I don’t know. Yeah, I’m thinking they feel green line-ish.

Horn: Fascinating, fascinating. I’d be curious… I’m so curious to have Bob listen to this afterwards, because I think he’s similar, by the way. He’s very upfront. Then once the idea is sort of understood, other people bring it into the world. I want to move it into education a little bit more, another step, because you just went there with how children might approach something. I thought that was fascinating. Diane, I also had several ahas as I listened to this podcast of Bob’s, of how it applies to education. I’m going to get to one of them in a bit. But the first one I had was, frankly… I was at ASU+GSV. There’s a lot of ed tech companies around, and I kind of think that ed tech companies don’t really do this green line work at all. Almost everything you get, at least in the market, seems like it has to be implemented under perfect conditions to get these results. 

Almost all the research I see is very like, “Hey, the process works like this. If you don’t get the dosage, you don’t get the impact. It has to be in this model with this rotation, interacting with this curriculum, blah, blah, blah.” The real world just doesn’t work like that. I kind of think, the ed tech companies, are they spending a lot of time trying to figure out how to break what they create? The curriculum and stuff like that, to figure out, “Hey, these are the meaningful boundaries,” so that they could stretch beyond the one way to do it, the one dosage. Instead, maybe get a deeper understanding of causality, what really causes the learning outcomes that they see. What’s the most important thing? What truly are the non-negotiables that you have to have? Because they’ve looked at it through a lot of different circumstances, and developed accordingly. 

I kind of think if they did that, we might see a much more robust set of products out in the market. The word robust I use there, meaning it’d be resilient to a lot of different learning models and conditions and school types and so forth. As you know, I’m working on a book right now, again with Bob and with Ethan Bernstein at Harvard Business School. It’s actually adjacent to education. It’s about helping people find their next job, so career switching. We built a process actually to help people switch jobs, and it started working really well. It was a several-week-long process. It was different reflection exercises and so forth. There was a coach. Then we wanted to learn, how do we make it simpler? How do we make it so, when you’re reading it in a book, you can keep doing it? 

Then, where does it break? We did one sprint where we did it in two weeks, so we knew it was going to be too fast. We knew there was no way they were really going to get the impact out of it, but we wanted to understand how it broke. Then, when is it too long? When is it too self-guided? On and on. So we had all these questions just to understand, how do the results change in different ways of manifesting it? So we really can understand how to stretch it out, and build something more resilient, frankly, that has a better understanding of cause and effect ultimately. Can you imagine if ed tech companies did this?

Instead of coming into your schools and saying, “Gee, Diane. We’ve got this great math product, but you can only use it if you dedicate this many days and this many hours. And you have to change how your teachers do this thing over here.” I know you. You literally throw these folks out and say, “Sorry, it doesn’t work for our model.” Because I think you’ve known instinctively. I guess, what if companies came to you with a hardier set of products that actually worked in a range of circumstances, and handled the “real world” better, so they could work in a Summit model, but also in a traditional school model? Maybe it’s not possible, but I don’t think they’re even testing to see. How does that land?

Tavenner: Yeah, a couple things coming up for me. One, guilty as charged in terms of being not very tolerant of a lot of products that are on the market. Also simultaneously feeling a little, I want to be humble here. We weren’t an ed tech company for sure, but we did build a technology platform to support our school model. I’m now thinking about that work with the lens of the green line. It took me back to like 2010-11, when we were developing what would become the Summit learning model and platform. I think we had a lot of the green line mindset, Michael, but I’m not sure we did it the way you’re describing there. I mean, here’s what we had. We literally didn’t know what would work, so we just started trying things. 

At first, we were a little bit timid and slow, but honestly we just started going and picking up speed. Since education is pretty complex, and it was a whole school model, there were lots of different people trying lots of different things, all at the same time, which seems very green line to me, and very fast at the beginning. We had week-long cycles in the beginning. We were constantly watching how students and teachers would respond to different things we tried, and we were in continual dialogue with them through focus groups and surveys and feedback. I mean, you remember those days. I would walk through the room, and students would come up to me and give me feedback, and be like, “This isn’t working, and this is what we need.”

We actually developed a system for this, because things were moving so fast. It was literally posters. We had these posters on the wall, and they would change each week. They would say like, “Here’s the feedback we got from last week. Here’s the input you all got. Here’s what we’ve done to address it.” Then there was a space to gather more feedback and input. It was literally gathering all this data, and moving really quickly. So many fails, so many fails, but we were super transparent about it. Everyone could track all of the information. I feel like we were at least mostly on the green line. I remember simultaneously feeling totally exhilarated. 

I look back to that time, and it’s still some of the most fun, best learning. It felt like we were all learning together, all at once. Each day was driven by curiosity and discovery. It felt like we were making progress, and we were doing it together. At the same time, I always felt uneasy. It never felt comfortable, or like a place that we could stay, or even take a breath, really. We were really testing the boundaries of what might be possible, so we could figure out where they were and then design within them. These are the boundaries of a personalized learning model that we were really testing. One of the boundaries that we pushed really hard on was attendance. 

Specifically taking attendance, which has a whole lot of legal requirements and financial implications. It’s also a big time suck, and as we often talk about, really misaligned with the ways personalized and real world learning work. We really, really pushed on that one. We went so far as to try to install chip readers at the doors of the building, that would mark a student present when they detected the chip that we had adhered to each of their computers. That was just one of many things we tried. For the record, that was a step too far, and it didn’t work for a ton of reasons. But that’s the type of thing you do on the green line, I think.

Horn: Yeah. I mean this strikes me as exactly right. I won’t say it got pulled into the product that got developed so much as the model that you all built. I remember when you were doing all that prototyping, and you had literally numerous experiments going on in the same building sometimes. I’d go one place to the next, and see two totally different things going on. I think your description matches exactly what I saw. I guess if we take that and then extend it into what we’ve shared with the audience this year. Say the prototype you’ve done around better supporting your novice, you call them executive directors, EDs, or most people call them school principals. People might remember the basic idea, that you had the expert long-standing principals or EDs supporting the novice ones, and helping them understand the resources that were available to them and so forth. 

Part of this was these regular meetings between these people, sort of buddying them up, if you will. You started out with a hypothesis that maybe it’d be an hour-long meeting. What ultimately came out of it was the time was less important. It was more the regularity and the content of it. I guess on the one hand, someone who’s listening might say, “Well, did you start saying, ‘Can we stretch that out to two months instead of monthly,’ or, ‘Can we do it in 30 minutes, or do it ad hoc?’” I guess I want to hear your reflections there. Before you do, my other gut is that maybe what really happened is we left out a stage in the design process in describing what you did for our listeners that we may not have shared.

I don’t know; maybe it’s some of the upfront iterations you did before you had a design you were really ready to roll out and test. So let me ask the question maybe in a more open-ended way. When I listened to this, like I said, I had one reaction. I was like, “Oh my gosh, this is why so many ed tech companies don’t hit the mark.” Then the other one was like, “I just really want to know what Diane has to say on this, because I’m super curious.” It feels very different from a lot of the test-and-learn stuff we talked about in the beginning of the year. I guess, how might this concept change how you would design the pilots we talked about, this ED pilot, throughout the year? Maybe that’s the open-ended question. 

Tavenner: You ask. Yeah, it’s good to think about. I think what you’re surfacing, Michael, is that it can be hard to make transparent or be metacognitive about the things we do sometimes. In this case, I think you’re right. I think we probably left a whole beginning of our work out of our conversations, mostly because I just probably took it for granted. I’m so glad we’re getting into it now. I appreciate the framework for really triggering me to think about what we did in helpful ways. What’s coming to mind when I hear your question about what we would’ve done differently is actually something that Malia, who was our project lead, did instinctually that I think represents the green line. It wasn’t planned, but it’s what she did. There was this point early on in the pilot when our cooperating EDs and the onboarding EDs were meeting and using… You talked about the timing. 

That’s one thing we were testing at that time. Also, we had a template agenda for those meetings. We agreed on an agenda that they would use, that we thought would help surface the things we wanted to surface. Malia just kept getting a lot of folks saying, “Hey, we think the agenda needs revision. We think this should change, or that should change.” They had all these various inputs that they were giving her. Their mindset initially was a little bit I think red line, like, “Hey, Malia. You change the agenda, and then we can continue having meetings.” Malia was like, “Oh, heck no. You’re welcome to adjust the agenda. Just document what you’re doing, so we can learn. But this can totally be happening simultaneously with all the other things we’re testing.”

As I reflect on that moment, it feels more green line to me in mindset and approach. Certainly we didn’t have this language, and it was just a thing that Malia did. I remember the conversation with her after. She was like, “Hey, I did this. Do you think that’s fine? Do you think I messed up the pilot?” I was grateful that she had comfort with a bunch of people working on different parts at the same time. And that she wasn’t managing the project in a way that was so linear that people had to wait on each other. Which just conjures up these true assembly line ideas to me, which is like, I’m waiting for the person before me, and the person after me is waiting for me, and we’re handing these things off.

Horn: Yeah, I love it. Bob uses that language even, right? He’s like, “You might be the UX person doing the website, or something like that. You think you have to wait for the person who’s doing the programming on the frontend. Instead be like, ‘OK, what are the variables you’re playing with? Let me just design a bunch of prototypes off of what that could look like, so I can learn. Then when we’re ready to snap them together, I already know so much more.’” I do think it flips some of these test-and-learn ideas on its head a bit in interesting ways. Maybe it’s just what we both have reflected on, that it’d be more of like a, let’s try a bunch of upfront stuff as part of our learning agenda. Then as we start to understand the boundaries, as we start to understand what works and why, let’s start to lock down into something. 

At which point, we actually start to… We’re no longer, as Bob might say, hypothesis seeking. We start to actually really have some hypotheses. Now we’re testing and learning through those hypotheses as we get nearer and nearer to launch. We also, by the way, have probably a lot more knowledge upfront, so those hypotheses are really grounded in something. Compared to when we’re really designing something from scratch, but we haven’t prototyped it yet. I guess that’s the last piece of it. When you were reinventing the Summit model in 2010-11, there’s some things that stayed constant, but you were really, all first principles were on the table, right?

Tavenner: On the table. Yeah.

Horn: Yeah. Whereas in some ways there’s a difference, that continuous improvement brings you into a different zone, I think.

Tavenner: I think that makes sense to me, and helps me reconcile something that I was having a hard time holding while I was thinking through this. Specifically, it’s our use of if-then hypotheses, which you’ve just been referencing. Obviously they’re a part of continuous improvement. Bob trained with Deming, so I know he believes in continuous improvement, but I was not sure if they’re a part of the green line. In if-then hypotheses, you really are declaring what you think is going to happen. So it does feel a little bit different from this learning, trying to push failure in order to learn. As we’ve talked through this today, though, I think we are just really zeroed in on an early phase in the design, and a true sort of whiteboarding design. It’s not continuous improvements, when you’re really designing as we were a decade ago. 

That’s for very new products, versus continuous improvement. In the beginning, I’m using the word product there loosely, obviously. In the beginning, you’re really just trying to establish boundaries and rule out what won’t work. It doesn’t make much sense to have if-then hypotheses. You don’t know enough for that. As you do this early work, and get the boundaries and the clarity, then you can get to them later. It’s funny, I’m thinking back now. After we did a bunch of that early work, I remember sort of beating myself up for not having the if-then hypotheses. I flag that just because, as much as I know and have practiced this and done this, sometimes I listen to something or read something, and I’m like, “Oh, I forgot that again.” There’s a little bit of maybe giving ourselves a break and saying, “Oh, you use different things at different times.”

Before we wrap, I do want to surface one tension that I think is real. I don’t want people to leave this thinking we’re totally Pollyannaish. Part of the green line concept is that you move from theory to reality in this testing. You’ve brought it up, like what will really work and what won’t in all these circumstances. But I get this sense that most of the things that this theory is addressing are still dealing with reality in a lab setting versus, for example, with real kids and real schools. They’re simulating a variety of conditions that aren’t optimal, but they aren’t testing in real life. Now, I could be making that up, but that’s my sense. I think this is one of the huge tensions in education. No one wants to be tested on, but I just don’t know what the “lab setting” looks like in education.

I have a really hard time imagining one. For those ed tech companies that we just beat up on a little bit, what would that actually look like for them to do this? Maybe that’s why you get these products that have really only been very theoretical. As I say that, the field’s reverence for RCTs, which I think you loosely referred to earlier, is also a contributing factor here. Those being randomized controlled trials, studies to prove that something works. Which sounds good in theory. Of course we’d want things that work. But in practice, almost everything we do in education that is effective is nearly impossible to prove via an RCT. Because isolating variables in the way that’s required just doesn’t make sense, to me anyway. I could go on and on for a long time about RCTs. 

The reason I bring all of this up is I have personally experienced the pushback from parents and community activists, who get very angry if they feel their children are being experimented on, which makes doing innovation well and right super hard. I’ve talked extensively in the past about that tension. That same time I was just describing as being so exhilarating also included one of the worst nights of my life, with a group of parents who were really upset about what we were doing. I just think it’s important that we point out the tensions that are very real here and the challenge of this. It’s a set of constraints that I think helps lock in what we’ve always done in schools, because somehow that feels safe.

Horn: Yeah. I mean look, it’s a great point. Obviously, unfortunately, the truth is that doing the same old thing is itself risky and unfair, because it doesn’t work for so many. In many ways, I think it’s worse than experimenting. Knowingly saying, to use the language, “We know it’s not going to work for your child, but we’re still putting you in it.” Which is itself an experiment that I think is unfair. It goes back to, I think, needing to figure out places in education where we can do the kind of prototyping you were describing when you created that Summit model. Where it’s rapid, it’s a lot of things, we’re playing with it. Frankly, and this gets to the randomized controlled trial thing as well, we need to be finding those anomalies.

Where does it not work? The thing you just designed that you’re sure is gold standard, where does it break? Instead of seeing that as failure, which researchers often do, you know this, they try a million things to show, “Oh, that doesn’t apply because of this,” or, “We’re going to take it out of our dataset because…” Instead, realize, no, that’s showing us there’s a different circumstance here that requires a different approach. What a cool thing, because we can now say this approach works in this circumstance, and we’re trying to understand what will work for this other one. I just don’t think… We’ve got to be more real with parents about that as well, I think, and bring them into it.

Look, it’s another reason prototyping is the word. We’re not saying turn over everything tomorrow. So many superintendents want that headline so that, five years from now, they can say, “We transformed all our schools.” Well, maybe you’ll get that headline, but maybe you’ll get a bunch of small things that teach you a lot and help you serve 20% of your kids better than they otherwise would’ve been served. The others will sort of get the status quo, but at least you’re better off than you were. I don’t want to say that in a down way. I just think we have to find these opportunities, because we are not going to better serve kids otherwise.

Tavenner: Yeah. It’s almost like, right now, the perfect is the enemy of the good. Perfection is expected. If this doesn’t work for every single student in every single circumstance, then forget it, write it off. Versus, how can we reach 20% of the kids with this? Yeah. I think the middle ground here is collaboration, Michael. Students, parents, communities, educators, how do we all work together on innovation, so we aren’t doing to people, but with people? That’s much easier said than done, but it’s definitely worth it. Honestly, now I’m getting curious about how radical collaboration could evolve and improve these theories, because I don’t think any of them are really grounded in that type of collaborative-

Horn: Before you jump to where you’re going to go after that, just to stay on that for half a beat, I think that’s right. Look at all these microschool experiments right now, where educators are leaving public school districts and starting their own schools, and families are opting into designs that look very different, because they do not want the status quo. These are all your chances. Because there, parents have actively said, “No, we want to be part of something that’s different.” These are your opportunities, I think, to prototype. We’ve got to take advantage of them, because they’re the ones demanding the prototypes. They’re not scared of them.

Tavenner: Right, right. Willing participants actively engaged with us. I mean, I think that’s probably the place to leave it today. Otherwise, I’ll open a whole new can of worms. I’m just leaving with so much to think about, as always. Before we go, I want you to give me one more thing to think about. What are you reading, watching, listening to?

Horn: Oh gosh. More Harry Potter in our house, Diane. We’re going down that road. We’ve finished Chamber of Secrets. We’re now in Azkaban, or however you pronounce it. You’ll tell me afterwards. I finished Rick Hess’ new book, The Great School Rethink, which I really enjoyed. It’s a very ground-up thing. What I liked about it is, he’s not saying there’s one thing that’s right. You’ve really got to figure it out for your community. I really enjoyed that. Then, frankly, a lot of my nights are… By the time this comes out, my agony over the Boston Celtics is probably going to be over. But right now, it’s been an aggravating NBA Playoffs over here, Diane. What about you?

Tavenner: Well, I’m not following that, so I’m not feeling that. I’m sticking with fiction for a little bit here. I just picked up Demon Copperhead by Barbara Kingsolver. Michael, several of her novels have arrived in my life at moments when I felt like I needed them, and they really resonated. In this moment, when I’m feeling compelled to not look away from what many of our children still suffer on a daily basis, I’m going to give this one a go. Dig into what I hear is a pretty amazing novel, but a little bit challenging. Anything inspired by Dickens is probably a little bit hard on the heart, right?

Horn: Very much so, and report back when you’re done for us. Until then, I hope you all have gotten as much as we selfishly have gotten out of this conversation. We’ll see you all next time on Class Disrupted.
