Measuring the success of Ed Tech: Not all about test scores

Sometimes, technology gets thrown at struggling schools as a panacea for a variety of ills with predictably bad results. How should we be measuring those results, though, in the schools that are doing it right?

The New York Times ran a feature on Saturday called "In Classroom of Future, Stagnant Scores". The article is part of a series called "Grading the Digital School" and asked some tough questions about whether technology in schools can actually improve student achievement. Most significantly, it pointed to the lack of hard data around the quantifiable success of investing in technology-rich schools. While we have a responsibility to ensure that technology is adding value in schools, I'm inclined to believe that the lack of supporting data is the result of poor measures rather than poor results.

As noted in the article,

Since 2005, scores in reading and math have stagnated in Kyrene [the Arizona school district featured as an example of a technology-rich environment], even as statewide scores have risen.

To be sure, test scores can go up or down for many reasons. But to many education experts, something is not adding up — here and across the country. In a nutshell: schools are spending billions on technology, even as they cut budgets and lay off teachers, with little proof that this approach is improving basic learning.

Let me start by saying that I've seen too many technology implementations in schools that add no real educational value but put a sizable dent in taxpayer wallets. There are plenty of ways to make a school "technology-rich" that actually detract from the real business of learning. When rollouts are half-hearted, teachers and parents don't fully embrace the approach, students and teachers lack accountability, and teachers aren't given the right training and coaching, schools end up buying a lot of expensive toys. eSchool News recently highlighted schools in New Mexico that are saddled with hefty repair bills and aging, abused, failing computers from their 1:1 efforts.

I am not in the give-everyone-computers-and-watch-them-succeed camp.

However, I wouldn't be in the business of Ed Tech if I didn't think technology had the potential to help kids learn in new, engaging ways that prepare them for real-world challenges, and to better differentiate instruction so that every student can be well served in our public schools. So what is going on in a district like Kyrene, where everything seems to be unfolding the way educational technologists believe it should? Kyrene has solid community investment, good teacher buy-in, and progressive techniques. Why aren't standardized test scores following?

One counterpoint in the article sounded fairly familiar:

Karen Cator, director of the office of educational technology in the United States Department of Education, said standardized test scores were an inadequate measure of the value of technology in schools. Ms. Cator, a former executive at Apple Computer, said that better measurement tools were needed but, in the meantime, schools knew what students needed.

It would be nice to think that schools know what kids need, but we also need to find ways to measure the more intangible skills that students acquire using technology in relevant and (I believe) powerful ways. We see technology breaking down barriers to collaboration, improving writing and criticism, providing software that differentiates instruction and gives real-time feedback to teachers on student strengths and weaknesses, and allowing teachers to guide students through rich and varied resources. Standardized tests, all too often, measure students' abilities to take tests.

Some of the best standardized tests get at students' critical reasoning skills and their ability to tease out abstract concepts from real-world problems. Even these, though, may not be aligned with the way students are learning, particularly in more constructivist settings where technology enables a different kind of creativity. And even in states with tests widely acknowledged as "good", most students will see short-term bumps when schools teach to the test.

Real learning, the sort that many hope will happen in technology-rich environments, is rare when curricula are too closely aligned with test materials. Don't get me wrong: alignment is the name of the game, and tests should assess what students are learning. Too frequently, though, schools tweak and rework their curricula based on minute analyses of yearly test items. Schools that delve deeply into subjects may find their scores for the year distinctly lacking, even if their students are richer for the deep dive. Imagine, for example, a school-wide, year-long focus on statistics and data analysis, where students use spreadsheets, scientific probes, online surveys, and other tools to really explore the world around them. Scores on items relating to measurement, statistics, expository writing, and reporting will most likely improve. However, there won't be time that year to teach 27 other standards in English and math, no matter how many important skills and concepts students take with them.

Nor will the tests measure students' ability to manage projects independently or to track down research materials on the web, both of which would be key outcomes of the project-oriented learning I described above.

Yes, we need better tests. And we need data about the real value of tech in the classroom. But more than that, we need research into pedagogy that supports the use of tech in the classroom. We need students to focus on developing portfolios rather than racking up test scores. We need students to know how to tackle a project when they encounter one (not just Google "statistics," but really manage resources and develop and implement a plan). And teachers need more than a single day of professional development before school starts on how to have kids use Google Docs; they need ongoing coaching from experts in the field to ensure that all of these technology investments are adding real value, even if the tests left over from NCLB don't have the chops to assess that value.