There’s been considerable debate regarding a recent RFP to purchase a pair of electric vehicles for the St. John’s city fleet. As expressed by councillor Dave Lane, who spearheaded the project, the plan was to make the new vehicles available to the parking enforcement unit, treat the whole thing as a pilot project, and see where things went.
Initially this drew considerable, polarized interest. Those in favour noted that much could be learned from acquiring and using the vehicles and that, besides, they should be considerably cheaper to operate. Those against typically viewed the whole thing as not worth the bother; playing with newfangled toys. The debate was brought to a head, though, by a letter to council from former mayor Andy Wells, who slammed the plan as a waste of taxpayers’ money and concluded that electric vehicles are, in general, “‘driveway jewelry’ for the eco-affluent, who benefit from public subsidy to indulge their guilt about living in a fossil-fuel-dependent society.”
Then the fight started, the polar opposites moved yet further apart, and both sanity and reason, it seems, exited the building.
Still, it’s impossible to ignore the fact that, year by year, electric vehicles (EVs) are becoming more prevalent. Sales, while not exactly meeting the growth targets projected four years ago (they were expected to triple each year), have still shown decent growth. What’s more, most of the major manufacturers are in on the industry. Models are presently available from GM, Toyota, Nissan, Fiat, Daimler, Mitsubishi and, of course, Tesla, to name just a few.
With a little time on my hands I investigated the costs associated with the requested purchase. I asked just one question: does it make financial sense? In other words, does the expected reduction in operating cost translate to a lower cost of ownership? Not to spoil the rest of the post, but the short answer is “no.” It’s worth reading on, though, if you have the time and interest.
Let’s look at the cost for just one car and let’s leave the cost of the charging station out of it altogether since the car doesn’t really need a dedicated charging station as such; all it needs is access to a 110 V or a 220 V (preferred) standard outlet. Since the EV is to be used just around the city, all of the costs should be based on that type of driving.
Now—what car to choose? While no doubt some users would love to cruise around in something very nice such as the luxurious and trendy Tesla S, we have to be more pragmatic here and choose something better suited to the job at hand. It’s for parking enforcement and won’t be carrying significant cargo. It’s also bought on the taxpayers’ dime so it therefore needs to be inexpensive. For the sake of argument let’s choose the Nissan Leaf.
Let’s stick with the base model. According to Nissan’s website it can be had here for $33,788.00. Not exactly cheap but the expectation is that what we lose up-front we’ll gain back in the long term with lower operating costs.
Let’s figure them out, starting with charging the battery. According to Nissan, the battery capacity is 24 kWh, so you might assume it will take that much electricity to charge it. You need to take into account, though, the simple reality that no process is 100% efficient. You may notice that batteries warm up while charging and discharging; some of the energy is being wasted as heat. You can expect, therefore, to need more than 24 kWh.
Based on some data found here I am assuming 85% efficiency. This means that a full charge requires (24/0.85), or about 28 kWh, of energy. At NL Power’s current rate of $0.1178/kWh, a full charge will therefore cost about $3.30.
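The arithmetic is simple enough to sketch out. All of the inputs below come straight from the figures above; only the code itself is new.

```python
# Estimated cost of one full Leaf charge, using the figures from the text:
# 24 kWh battery, an assumed 85% charging efficiency, and NL Power's
# rate of $0.1178/kWh.

BATTERY_KWH = 24.0      # usable battery capacity, per Nissan
EFFICIENCY = 0.85       # assumed charging efficiency
RATE_PER_KWH = 0.1178   # NL Power rate, dollars per kWh

energy_from_wall = BATTERY_KWH / EFFICIENCY   # energy actually drawn from the outlet
cost_per_charge = energy_from_wall * RATE_PER_KWH

print(f"Energy drawn from outlet: {energy_from_wall:.1f} kWh")
print(f"Cost per full charge: ${cost_per_charge:.2f}")
```

Rounding the wall energy to 28 kWh, as in the text, gives the same roughly $3.30 per charge.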
Next you need to determine how far a charge will get you. Nissan’s stated figure is 135 km for a full charge. The US EPA, however, lists a much more conservative value of 117 km. Given that batteries function less efficiently in cold temperatures, it makes sense to question even this figure. Fortunately, some low-temperature data are available: the website fleetcarma.com has published empirically derived figures for how range per charge varies with temperature, so let’s use those.
We also need to use temperatures that are realistic for this setting. Average temperatures by season can be found here. Let’s assume that the vehicles will be driven 20,000 km/year and, further, let’s assume equal distances in each season. Since the temperatures will be different in each season let’s take that into account. The table below lists the anticipated costs for driving 5000 km in each season, as well as the yearly total.
| Season | Temperature (°C) | Range (km) | Cost per charge | Cost for 5000 km |
Table 1: Yearly charging cost for Leaf, assuming 20,000 km
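The seasonal calculation works like this: for each season, divide the 5000 km driven by the (temperature-dependent) range per charge to get the number of charges needed, then multiply by the cost per charge. The per-season ranges in this sketch are illustrative placeholders only, not fleetcarma.com’s actual figures, which the table above is based on.

```python
# Sketch of the seasonal charging-cost calculation. The range figures
# below are HYPOTHETICAL placeholders; the article's table uses empirical
# fleetcarma.com data. The $3.30 per-charge cost comes from the text.

COST_PER_CHARGE = 3.30   # dollars, from the charging calculation in the text
KM_PER_SEASON = 5000

# Hypothetical range-per-charge (km) by season -- placeholders only.
seasonal_range = {"Winter": 85, "Spring": 110, "Summer": 125, "Fall": 105}

total = 0.0
for season, rng in seasonal_range.items():
    charges = KM_PER_SEASON / rng          # charges needed for 5000 km
    cost = charges * COST_PER_CHARGE
    total += cost
    print(f"{season}: {charges:.1f} charges, ${cost:.2f}")

print(f"Yearly total: ${total:.2f}")
```

Note how the reduced winter range pushes the winter cost well above the summer cost; that is the whole reason for doing the calculation season by season.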
Now let’s compare the EV to something reasonable. Since we started with a small Nissan EV let’s compare it to a small conventional Nissan, the Versa. Once again, let’s stick with the base model but equip it with an automatic transmission to make it more functionally equivalent to the Leaf. According to Nissan’s website that vehicle should come in at $17,165.00.
Now we need to find the cost of fuel for 20,000 km. Based on US EPA figures the Nissan Versa is rated at 7.6 L/100 km in city driving so you can expect to use 1520 litres to drive the 20,000 km in a year. The variability in gas pricing makes it impossible to provide a definitive single cost so upper and lower figures will be used instead.
According to Gas Buddy the yearly low was $0.95/L and the high was $1.44/L. This gives two yearly fuel costs.
| | Based on $0.95/L (“low fuel”) | Based on $1.44/L (“high fuel”) |
| Yearly fuel cost | $1444 | $2189 |
Table 2: Fuel costs for Versa, assuming 20,000 km
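The fuel numbers can be reproduced in a few lines. All inputs are from the text: the EPA city rating, the yearly distance, and the Gas Buddy low and high prices.

```python
# Yearly fuel cost for the Versa: 7.6 L/100 km city rating over 20,000 km,
# priced at the year's low ($0.95/L) and high ($1.44/L) pump prices.

FUEL_RATE_L_PER_KM = 7.6 / 100   # EPA city rating, litres per km
KM_PER_YEAR = 20_000

litres = FUEL_RATE_L_PER_KM * KM_PER_YEAR   # about 1520 L per year
low_cost = litres * 0.95                    # yearly low price scenario
high_cost = litres * 1.44                   # yearly high price scenario

print(f"Fuel used: {litres:.0f} L")
print(f"Yearly cost: ${low_cost:.0f} (low) to ${high_cost:.0f} (high)")
```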
Now the maintenance. Data on this are available in US dollars from autoblog.com and are presented in the table below, converted to Canadian dollars. Interestingly enough, the figures for the two vehicles are roughly the same and could realistically have been omitted from the calculation.
| Repairs and Maintenance | $4844.20 | $4857.94 |
Table 3: Maintenance and repair
So, finally, let’s look at the total five year cost for the two vehicles. Insurance and licensing will be the same for both so we can omit them.
| Item | Leaf | Versa (low fuel) | Versa (high fuel) |
| Repair and Maintenance | $4844.20 | $4857.94 | $4857.94 |
Table 4: Five year cost of ownership, based on purchase with no resale.
Clearly, presented this way, there’s no contest. Based on straight-up purchase, the Leaf will cost anywhere from around $9,000 to around $14,000 extra to own over the five-year period.
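A rough reconstruction of that comparison follows. The purchase prices, fuel costs and maintenance figures are from the text; the yearly charging cost is an assumption on my part (the seasonal table above gives the real figure), so treat the exact gap as approximate.

```python
# Back-of-envelope five-year cost comparison, no resale. Purchase prices,
# fuel and maintenance figures are from the article; the yearly charging
# cost of $650 is an ASSUMPTION used here for illustration.

YEARS = 5
leaf_purchase = 33788.00
versa_purchase = 17165.00
maintenance = {"Leaf": 4844.20, "Versa": 4857.94}   # five-year totals
charging_per_year = 650.0                           # assumed
fuel_per_year = {"low": 1444.0, "high": 2189.0}     # from Table 2

leaf_total = leaf_purchase + maintenance["Leaf"] + YEARS * charging_per_year

diffs = {}
for scenario, fuel in fuel_per_year.items():
    versa_total = versa_purchase + maintenance["Versa"] + YEARS * fuel
    diffs[scenario] = leaf_total - versa_total
    print(f"{scenario} fuel: Leaf costs ${diffs[scenario]:,.0f} more over {YEARS} years")
```

Under these assumptions the Leaf comes out roughly $9,000 to $13,000 more expensive, consistent with the range quoted above.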
Now, you may be crying foul: “Wait a minute, you don’t ditch the car after 5 years. The Leaf will be worth more at the end of that period, so this is not a fair comparison.” Fine. Let’s factor in depreciation. Once again the estimates came from autoblog.
Table 5: Expected depreciation and resale values
Let’s just redo the total-cost table (Table 4 above), using depreciation instead of purchase cost.
| Item | Leaf | Versa (low fuel) | Versa (high fuel) |
| Repair and Maintenance | $4844.20 | $4857.94 | $4857.94 |
Table 6: Five year cost of ownership, based on purchase with resale
The Leaf is still considerably more expensive, even when you consider a worst case scenario for gasoline.
From a strictly cost-based perspective, then, it does not make sense to procure and use the EVs, if the assumptions used are valid.
That’s not really the end of the story, though, is it? It’s not my intention to be negative here, just reasonable, and since the main argument put forward was based on cost it needed to be pointed out that it was likely invalid. That said, there are far more compelling reasons that may still make the plan a good idea. Consider these:
First, you need to consider the overall environmental impact of EVs. They have the potential to be much cleaner and more environmentally friendly. Assuming that the batteries are correctly recycled and repurposed (and there’s cause for some optimism in that area; see here), the real environmental issue is the source of the electricity. Right now in NL, unfortunately, that’s just a bad joke, as the electricity is as non-green as it gets, coming from a dirty thermal generating plant. Later, though, when the feed is switched over to the hydro-based Muskrat Falls plant, that will be an entirely different, much greener, matter. Simply put, right now electric cars are just contributing to the pollution coming from the Holyrood plant, but that will change in a few years, right about the time those vehicles are ready to come off the road, as it turns out.
Second, you need to consider the value of foresight and planning, something often badly absent from the NL milieu. (As an aside, a good friend often half-jokes that the NL Government’s idea of long-term planning is “what’s for supper?” His words, not mine.) Based on the best available data it does seem likely that EVs will become more prevalent as time goes on. To what extent? I would suggest it is impossible to ascertain that right now, but it’s still worth considering. As such, not only the city but also the province needs to devote a reasonable amount of time and effort to gathering pertinent empirical data regarding use costs, reliability, safety and infrastructure needs. In that light the proposed plan, if altered and fleshed out appropriately as a rigorous pilot project and not just a vague idea, can easily be seen to have significant merit.
So maybe the best advice to those involved should be this: plan it all out a bit better, look again at the timelines and goals, and see if partnership assistance is available from the province. Don’t just mess around driving from meter to meter, asking the traffic enforcement officials “how’s it going with the new EVs?” from time to time. No, devise a proper plan and implement it. Log everything: kilometres driven, time needed to charge, energy transferred in each charge, maintenance and repairs. Put it up where we can all see it and benefit from it. Properly implemented, the project does have the possibility of yielding information that can be used by consumers and governments alike.
It would have been, in most respects, a normal day for an online distance education teacher in the early nineties. I settled into my spot in the studio and made sure everything was working. First the mics: all OK. Next the Telewriter: I picked up the pen, wrote on the screen and then remotely loaded the first ‘slide’ for the day’s lesson. Again everything was fine. As always, the first thing to do was to greet the students by name and just chat for a few minutes. Besides ensuring that the audio and graphics capabilities were working, this had the much more important function of getting the students to open up, to come out of their schools, defined as they were by the walls of the classroom, and enter the online one, defined only by who was present that day.
Today I had a new student. I was a bit surprised as it was several months into the school year. I asked her name but she did not reply. Eventually another student at that school answered for her, telling me her name and letting me know she was shy.
Over the next few weeks I did my best to get my new student—let’s call her Angela—involved, but all to no avail. She would not respond when asked a question and would never write on the electronic whiteboard when asked to contribute to the day’s work. Her first written assignment consisted mostly of blank sheets, so I decided it was time to contact the school. I called the principal and then learned the awful truth.
In a previous job, around 14 years ago, my designation was Program Implementation Specialist, and one of my initial tasks was to put together a team of online teachers who would lead the changeover from the distance education system used in my province since 1988 (the one described in part above, and in more detail here if you are interested). Together, the Program Development Specialist and I devised a recruitment strategy built around an online application system that produced a short-list of candidates. Those candidates would then be interviewed by a panel of three and subjected to a reference check. All components were scored, and the scores were used to rank the candidates, who would then be seconded.
This system was used by my colleagues and me for seven years and gave me significant experience in selecting those who would be well suited to online teaching. Through constant use I came to anticipate the response to one particular question, as it tended to give an almost instant measure of whether the interviewee was a suitable candidate. The question? “What would be your response if you noticed that a particular student was not doing well in the course? That is, if you noticed that a student was not engaged, not submitting work on time or doing work that was of sub-par quality?” Typical answers included putting on extra classes, creating tutorials, providing “worksheets” and maybe even involving disciplinary measures. None of those, however, was the one I sought. I wanted something else.
Oftentimes the truth or the best course of action is not the one that seems obvious. Take my own academic discipline—physics—for example. There’s nothing commonsensical about the majority of what is typically found in the high school physics curriculum despite the protestations of inexperienced (or just plain ignorant) instructors who claim they can “make it easy.” Newton’s first law (objects tend to remain at rest or in constant motion unless acted on by an unbalanced force) is about as counter-intuitive as it gets. Objects remain at rest—no they don’t! Just YOU try sliding a book across a floor; it comes to a stop in no time! No! Newton’s first law is the product of sheer genius; a fantastic off-the-charts insight made by a most unusual individual. Seeing or maybe creating ‘friction’ as a new construct but one that merely presents itself as a new unbalanced force—pure brilliance!
Physics is not easily absorbed. It is only understood through a skillfully constructed instructional framework: bringing students right up against their existing understanding of the world, clearly pointing out its deficiencies, ensuring that the student acknowledges those deficiencies, and then carefully rebuilding the worldview in a different way. Not simple at all, and certainly not something that happens in a day.
And so it goes with everything. To do better work you have to work hard to get beyond the obvious and, as just pointed out, this involves going up against your “comfort zone” then breaking through it with a whole new worldview. This involves breaking common sense.
Allied Bomber Command faced just such a situation in World War II.
Let me digress for a moment here. I am not one given to glorifying war. While I acknowledge that it is a reality and something that often cannot be avoided, I also want to point out that there is generally no “right” and “wrong” side but instead two opposing groups who have found themselves with no alternative but to act with extreme aggression. It is a reality. Ordinary people like you and me never wish to find ourselves in it but, alas, from time to time it happens and we are faced with no choice but to do what we must. Under the extreme conditions faced by the various sides often comes the need to dig down deep and to seize any opportunity that affects the balance of power. Frequently, then, wartime becomes a time of extreme innovation borne of necessity. I consider one case here because it illustrates a point I wish to make, and for no other reason.
Bombers, with their heavy deadly loads, are slow lumbering beasts and, as such, are easy targets for fighters who desperately seek to prevent them from achieving their missions. In WW2 many that set out did not return but were instead shot down by the fighter planes they encountered along the way. Those that returned were typically bullet riddled but still able to limp back to base for repair and refitting.
One of the responses to this loss of planes was to install armour that would protect the aircraft from the projectiles from the fighters. Armour, though, is heavy and reduces the load capacity and thus the military effectiveness of the aircraft. The solution, therefore, is to place the armour only where it is absolutely necessary. Bomber command subsequently engaged in a constant, careful study of its in-service aircraft. Each time an aircraft would return from a mission it would be inspected and the location of bullet holes obtained in that flight would be recorded. Typical returning aircraft resembled the drawing below. Notice where the bullet holes are; namely on the wings, tail and fuselage. Based on that it would make sense to place the armour there since, after all, that’s where the hits were occurring, right?
Wrong. The reasoning is unsound; fundamentally flawed, in fact.
Fortunately, so too thought Allied Bomber Command, thanks to the insight of mathematician Abraham Wald. He assumed that the bullets were not specifically aimed at any one part of the aircraft; aerial combat was much too chaotic an activity to allow for precision aiming. Fighter pilots instead aimed in the general direction of the aircraft and hoped that the bullets and cannon shells would have some negative effect. One would expect, therefore, that in an ideal situation the placement of bullet holes would be more or less uniform.
The placement wasn’t uniform, of course, as you already noticed from the image. Wald, however, went one step further, reasoning (correctly) that hits to vulnerable areas would result in downed aircraft, ones that would not make it back. Since the sample used in the study consisted only of aircraft that made it back, it was logical to conclude that they tended NOT to have hits in the vulnerable areas.
Take another look at the diagram. Where are there very few bullet holes? The engine and forward cockpit. Of course! A relatively small number of hits to the engines would render them inoperable. Likewise, hits to the cockpit could result in casualties to the flight crew. In either case the plane would be lost.
Simply put, instead of looking for where the bullets were you should look for where they were not. Those are the parts that need armour, and not the bullet-riddled parts.
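Wald’s reasoning can be demonstrated with a toy simulation. Hits land uniformly across the aircraft’s zones, but hits to the engine/cockpit zone down the plane far more often. Counting hits only on returning planes then systematically under-represents that zone. The zone list and loss probabilities below are invented purely for illustration.

```python
# Toy survivorship-bias simulation. Hits are distributed uniformly over
# four zones; hits to the engine/cockpit are far more likely to down the
# plane. All probabilities here are invented for illustration only.
import random

random.seed(42)

ZONES = ["wings", "tail", "fuselage", "engine/cockpit"]
P_LOSS = {"wings": 0.05, "tail": 0.05, "fuselage": 0.05, "engine/cockpit": 0.6}

observed = {z: 0 for z in ZONES}     # hits counted on planes that RETURNED
total_hits = {z: 0 for z in ZONES}   # all hits, returned or not

for _ in range(10_000):              # 10,000 simulated sorties
    hits = [random.choice(ZONES) for _ in range(random.randint(1, 5))]
    survived = all(random.random() > P_LOSS[z] for z in hits)
    for z in hits:
        total_hits[z] += 1
        if survived:
            observed[z] += 1

for z in ZONES:
    print(f"{z}: {observed[z] / total_hits[z]:.0%} of hits seen back at base")
```

Although hits are uniform across zones, the engine/cockpit hits largely vanish from the sample of returning planes, exactly the pattern in the diagram above.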
So what does this have to do with eLearning? In my previous career a significant part of my effort was dedicated to improving the quality of our instruction. I approached this in various ways: reading about things done differently elsewhere, researching new devices and attendant methods, conferring with teachers and interviewing successful students. These tended, at first, to be my main starting points. Over time, though, I slowly moved away from all of them somewhat.
It started in a somewhat unexpected fashion. Each year I would address all of the intermediate-secondary student teachers at Memorial University to explain how the province’s distance education program worked. As part of the presentation I would ask those in the audience who had received part of their high school program through distance education to identify themselves, and I would ask them to offer their perspectives on the experience.
Of course, in all honesty, I was, in part, “selling” the program. I was part of that same system and certainly took great pride in it and in my contribution to it. While I was making it look like I was seeking an unbiased assessment I know—now—that in the initial stages I was really seeking affirmation; an ‘independent’ external source that validated the program as being worthwhile.
To my great surprise that’s not exactly what I got. Yes, many of the students were quite positive about the experience they’d had in the distance education program, but not all of them were. Numerous students indicated that they’d not found it great or that they much preferred the more traditional face-to-face approach.
The first few times this happened I responded by downplaying the responses, assuming they were just the voices of the disgruntled few who had not enjoyed success, probably through their own lack of effort. In time, though, I came around. Rather than dismissing those voices or, worse, glossing over what they’d said, I began showing active interest in their points of view. I would not let their comments sit unacknowledged and unchallenged. Instead, I slowly settled on a practice of probing deeper whenever I got a negative response, attempting to determine just what had led to it.
It was enlightening, to say the least. Space does not permit a detailed exposition of what I found but, in general, here were a couple of items that were frequently encountered:
- The choice to enrol in a particular course, which also happened to be a distance education offering, was not made by the student but, rather, by the parents or, even more frequently, the school administrator or the school district office.
- The instructor had not made a concerted effort to reach out to the student but seemed, rather, either to teach to nobody in particular, seldom involving anyone in the class, or to play favourites.
- Technical issues had resulted in significant ‘down time.’
Now, lest you get the impression that this post is a mean-spirited barb at my former employer, let me assure you that nothing could be further from the truth. The pride I felt, and continue to feel in that program, is built on more than just emotion. It is, rather, something that is rooted in significant evidence that indicates its overall efficacy. The numbers don’t lie and they indicate that the students tend to do well. Just not all of them.
My point, rather, is that in the later part of my career I found much more value in finding out why students did not find success than in identifying the factors associated with success.
Like Wald, I found it useful to consider the planes that did not return.
As for that telling response to the question, “What would you do if a student is not having success in your course?”
The desired response: “I would find out what was wrong.” That’s a lesson I earned through long and often painful experience.
Never mind the extra classes, the tutorials and the varied approaches, just figure out why the student is not doing well and do what can be done.
But there’s still ‘Angela,’ the student I found in my class, the one who unexpectedly dropped in and who was not finding any success. Yes, I did seek to get to the bottom of it all.
And I did.
I learned that she had just returned to her home community, after living away for several years. Her mom was a single parent but had found a new boyfriend so she’d moved away to be with him, taking her daughter with her. It became an abusive relationship and one night, in a drunken rage, the boyfriend had murdered Angela’s mom while she was present there in the apartment. She’d returned to her home community and was placed in foster care and that’s why she’d been dropped unexpectedly in my grade eleven physics class.
I tried as best I could to make things work for Angela. Unfortunately I did not succeed. I did not end up giving her a passing grade and she was not in my online physics class the following year. I do not know how she fared in life after that but do think of her often, especially when I need a good dose of humility. Sometimes, even with hard work, skill and insight you still cannot get the success you hope for. Yes, you generally do, with effort and teamwork, but not always.
Angela did not have a good experience in my Physics class. It continues to be a humbling truth.
From time to time you will see institutions ranked according to various criteria. Generally this is done with the intention of demonstrating how well each is performing. It’s not unusual to see this done with schools and here’s a claim that is often made, and substantiated by the numbers:
Small Schools Tend to Lead the Ranks
This is consistent. For years I saw it in my own region, reflected in the annual report card issued by the Halifax-based Atlantic Institute for Market Studies (AIMS). Year after year, small schools led the provincial rankings. As a professional whose entire career was devoted to the betterment of small rural schools, I wanted to be able to brag about this, to puff out my chest and say, “Look, I told you that small schools were better for our children. It’s obvious that the extra care and attention they get on an individual basis, as well as the better socialization that comes from everyone knowing everyone else, is making a positive difference.”
I never did that, of course. It’s not because I don’t believe in small schools; I truly do. My silence on the matter was, to some degree, due to the fact that at the time the studies were published I was a non-executive member of the Department of Education, and as such I was not authorized to speak on its behalf. That was not the real reason, though.
No, I knew that a far more powerful force was afoot; something that affected the results much more than good teaching, a supportive (small) ecosystem or the presence of many brilliant bay women and men.
Although all of those are positive factors.
No, the most powerful effect was something else, something related to straightforward mathematical behaviour, and if you’ll spare a few minutes of your attention I will explain.
Simply put: small schools have an advantage in these rankings that is due only to the fact that they are small.
And there is an unexpected twist too, one not often mentioned in the discussion of the reports.
Let’s simplify the situation and assume that the rankings are based on the outcome of one test only. Furthermore, let’s say that the result of that test, for any given student, is completely random; that is, any student who writes it will get a random grade between 0 and 100. In other words let’s act like there’s really no difference between the students at all of the schools. The small ones will still come out on top.
Let’s see what this would mean for ten small schools (we’ll arbitrarily name them sml01 to sml10), each one having only fifteen students in grade twelve and writing the test on which our report is based. The results for all of the students are tabulated below. In reality the table was produced using a random number generator in Microsoft Excel. You don’t need to read the table in detail. It’s just here so you know I’m not making the whole thing up!
Table 1: School results for ten small schools.
That’s a huge pile of numbers, and we only care about the school-level results, so let’s redo the table showing only the schools and their averages.
Table 2: Small School Results
Notice that the results show a fair bit of variability. They cluster around an average of 50 but some schools had averages in the thirties while others were up around 60.
Now let’s do it all over again, but this time let’s see what would happen in larger schools (named big01 to big10). For the small schools we assumed there were only 15 students per grade level but for the larger ones let’s assume that there are in fact 120 students in grade 12 writing the test.
The rather long table is below just so you know I’m not pulling the numbers out of my head. As was the case with the small school simulation it was done using a random number generator in Microsoft Excel and just pasted directly into WordPress. Scroll to the bottom of the table :-)
Table 3: School results for ten big schools.
As before let’s just look at the averages for each school.
Table 4: Big School Results
Notice that, as in Table 2, the results cluster about an average of around 50. Notice, though, that the numbers are not spread nearly as much.
Let’s put the two tables side by side for a better look.
Table 5: Averages for both small and big schools
The thing to notice is that the big schools show much less variability. In small schools, individual students who do very well or very poorly (we call them outliers) tend to have a large effect on the average. In larger schools, the increased number of results tends to “smooth out” the results; to make them less variable.
This is something that is well-known in mathematics. It even has a name: the Law of Large Numbers. Simply put, as the number of results being averaged grows, the average tends to cluster more tightly about the expected value.
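The whole thought experiment can be reproduced in a few lines. Every student’s grade is a uniform random number from 0 to 100, so any school-to-school difference in averages is pure chance; the small schools (15 writers each) nevertheless show far more spread than the big ones (120 writers each).

```python
# Simulation of the school-ranking thought experiment. Grades are uniform
# random on 0-100, so all schools are "identical"; only school size differs.
import random
import statistics

random.seed(1)

def school_average(n_students: int) -> float:
    """Average grade for one school whose students all score at random."""
    return statistics.mean(random.uniform(0, 100) for _ in range(n_students))

small = [school_average(15) for _ in range(10)]    # sml01 .. sml10
big = [school_average(120) for _ in range(10)]     # big01 .. big10

print(f"Small schools: mean {statistics.mean(small):.1f}, "
      f"spread (stdev) {statistics.stdev(small):.1f}")
print(f"Big schools:   mean {statistics.mean(big):.1f}, "
      f"spread (stdev) {statistics.stdev(big):.1f}")
```

Run it a few times with different seeds and the pattern holds: both groups centre near 50, but the small-school averages scatter much more widely, which is exactly what puts small schools at both the top and the bottom of the rankings.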
Now, this is where things get interesting. Recall that this is all about the fact that small schools get a built-in advantage due only to the fact that they are small. Let’s see what it looks like when all twenty schools are ranked from highest to lowest.
Table 6: All twenty schools ranked from highest to lowest.
Did you see what happened? The top schools were all small schools. They reached the top due to nothing other than small-sample variability working in their favour: two or three bright students, or the absence of two or three weaker ones, had a profoundly positive effect on the school average.
Recall also that I mentioned there would be a twist. Notice that while the highest-ranking institutions were drawn from the pool of small schools, so too were the lowest-ranking ones, and for the same reason; namely, the presence of a few weaker students or the absence of a few strong ones.
So, based on this little experiment, it’s plain to see that when ranked this way small schools will tend to come out on top simply because they are small: with so few students, the law of large numbers cannot smooth out their results, and chance dominates.
As for the small schools at the bottom: it happens too, and these are likely rarely mentioned because of selection bias on the part of whoever wishes to weave the numbers into a narrative that suits their political ends. One wonders, though, how many small schools have been closed or otherwise penalized for nothing other than being the unfortunate victims of chance.
Closing note: this is in no way intended to cast AIMS in any negative light. To the best of my knowledge, neither they, nor the various Departments of Education, nor the various school districts ever tried to spin the reports into any grandiose claims regarding big and small schools. The false claims I have heard have generally been made by private individuals, each with their own axes to grind.
As for my own conclusion: ranking systems, regardless of context (health care, law enforcement, customer care or, as here, school-based student achievement), serve a useful purpose, but be wary of the law of large numbers before making any sweeping generalizations.
In a previous post I considered the possibility that much of what is presented as “Innovation” is anything but that. With access to some fairly new and attractive or otherwise popular products and armed with even a slight grasp of how to operate them it’s relatively easy to create an appearance of innovation. Simply put, if you can get your hands on some new gear, in even a short while you can present quite a convincing front.
Worse again, it is equally easy to generate what passes for proof; to an untrained eye you can make it look like your so-called innovation is creating some real differences. Any of these strategies can give you reams of what looks like convincing evidence:
- Deliberately pick enthusiastic students or teachers and pile on the anecdotes that endorse the desired point of view. People who rely on system-one (more or less intuitive) reasoning are easily swayed by stories so it won’t be hard to capitalize on that to get some people talking about how innovative the project is.
- Stage the project in a relatively well-off school or class and then compare the results from this highly-biased “treatment” group to the population in general. Very few will dig deep enough to see that the superior achievement results predated, and were independent of, the treatment.
- Rely on manufacturer or vendor supplied “research” when crafting reports, proposals and press releases.
- Bluff; just preface your claims with clauses like “decades of research shows…” and leave it at that. You might be surprised to see how few, if any, will call you out. Besides, it will be relatively easy to portray those who do as kooks or curmudgeons.
That said, you could instead opt to take the more difficult path and strive for some real gains.
Notwithstanding the cynical tone of the opening of this post, it needs to be emphasized that emerging technologies should be welcomed, albeit guardedly, in all places of learning. I’ve come by this knowledge the hard way, with ample first-hand experience of doing it both the right AND the wrong way, and for both the right and the wrong reasons, but I have generally benefited from the experience in either case. The lesson can be summed up succinctly: it’s best when you develop and refine an appropriate match between the technologies and the desired learning outcomes. This means, in particular, starting with the right sort of question:
- Bad Question: How can (insert gadget name here) be used in the (insert subject name) classroom?
- Better Question: What combination of equipment and methodology will foster better achievement in (insert subject name/outcome area)?
Notice the difference? Instead of placing the focus on the tools, place it on the learning.
You might say that, in the end, the two are the same. Yes, in both cases the goal is to do a better job. Take a closer look, though, and notice that the bad question is, in fact, all wrong. First, by selecting a particular device it sets serious limits on what can be done. This can even lead to the selection of inferior methods. Consider this: a teacher notices that some good physics simulations are available for tablets and asks, “how can I use tablets in the classroom?” With the best of intentions the approach is changed, replacing hands-on activities with simulations. Now, while simulations are an excellent way to introduce topics, especially ones that cannot be explored cheaply or safely, it makes little sense, when you think about it, to replace hands-on activities involving motion, sound, electricity and light with simulations in which the only physical interaction is sliding a finger along a glass screen! After all, physics is all about interacting with the physical world. How ironic! If, instead, the right question had been asked, no doubt the simulations would still have been used, but their use would have been balanced with follow-up real-world interactions.
Second, the selection of a particular device sets in place a condition in which demonstrable improvements are expected. That’s nice, but what if it’s the case that the new technology is in fact inferior? You might suggest, “no problem, the report will show this.” Think about it, though, and be careful to layer in some human nature.
Consider again the previous case involving tablets and physics. Suppose that the unit of study was about current electricity and the tablets were used to explore the topic through simulations in which students constructed virtual circuits involving batteries, resistors, lamps, switches and meters to measure voltage and current instead of doing the same with the real thing. At the end of the unit the evaluation would be based on what could be measured, either online or using pencil and paper, and NOT on actually constructing the circuits.
How likely would it be that students would be able to do the same with real circuits? Not likely.
How likely is it that they would do about the same on a test? Very likely.
What’s the difference? In which class would it be more likely that you would find someone who could help you wire your basement? If, on the other hand, the right question had been asked, then, in all likelihood, the simulations would still have been used, but their use would have been balanced and blended with hands-on activities too.
Focusing instead on the learning will have two likely outcomes:
- You will likely not get famous, as it’s clearly about learning and not about you.
- The project will show modest but useful results.
Whenever embarking on any effort to improve results in education it’s important to bear in mind one simple truth: you are not starting from scratch. The “traditional” methods that self-nominated reformers (most of whom have only limited classroom experience, other than the imagined kind) so love to mock are in fact reasonably effective. The huge majority of people, those who have never been the beneficiaries of such enlightened practice but who have managed to thrive nonetheless, bear testament to that. Existing methods are, perhaps, not as good as they could be, but they are effective all the same. Reformers should bear in mind that the traditional methods they so disdain have several important advantages over proposed new ones. First, they are understood: in all likelihood, existing practitioners not only use them now but were themselves taught using them. More importantly, traditional methods have been refined through extensive classroom use. Proposed methods, by contrast, are not well understood; they are raw and untested.
Far too often reformers boldly charge into classrooms armed with little more than vague ideas, shiny new equipment and an unhealthy combination of ignorance and arrogance. Students, parents, colleagues and administrators generally tolerate the ensuing activity since (a) it probably doesn’t interfere with them too much and (b) there is always the chance that some good might come of it. The proponent will usually get a little something—a write up in a journal, perhaps a trip to a conference, maybe even an award—but in the end the students will likely be left no better off and the effect on general classroom practice will be negligible.
It does not need to be that way, though. If, instead, the proponents asked the right question, one that focused on making some real improvement in student learning, then wins would be had all around. That is, better teaching and learning would result and, who knows, maybe the innovator’s career would get a boost anyway.
I came across something like this “unhelpful high school teacher” meme the other day and it got me thinking about the distracted landscape our students occupy.
All too often, the opinions you encounter on the web and in other parts of everyday life are one-sided: normally the work of someone with an axe to grind, someone wishing to present just one side of a rather complicated issue, and this is no exception. There are very valid reasons why educators should be skeptical about the unrestricted use of electronic devices such as laptops and tablets in class.
In my previous job my office was located on campus at a fairly large university. It gave ample opportunity to view the electronic habits of typical students and was a never-ending source of amazement—both the good and the bad kinds.
One incident in particular stands out. I wished to confer briefly with a colleague who was, at the time, teaching a large class (around 160 senior education students) in one of two large lecture theatres located in the basement of the building we both inhabited. I decided to just head over and chat with him before class started. Unfortunately, as is often the case, I was briefly distracted and, by the time I arrived at the door, the class had already started. Out of curiosity I looked in. My vantage point was from the centre back and, as the lecture theatre slopes toward the front, I had an excellent view of exactly what the students were doing.
Almost all of them had either a laptop or a tablet open and active. What was interesting was that the majority of the students were not just taking notes on the machines but also had a web browser open. Well over half of the students would periodically switch from the note-taking application (typically a word processor) to the browser. The browsers showed the usual suspects, of course (Facebook, Twitter and other social media applications), but a surprising number of students were also shopping online during class time. I’d estimate that somewhere between 10 and 20 of the roughly 150 students in view were doing this! Only a small fraction, around 20 to 25% by my estimate, seemed to be totally focused on the lecture, at least as evidenced by their keeping the notes application open throughout the five minutes or so I was watching.
I recall the moment quite well as it was one of those times when something became quite clear to me, and it has sparked a considerable number of subsequent informal observations. Right then and there I decided to take a look at the other lecture theatre as well. This one had a second-semester calculus class going on and, unlike the first, was one in which electronic devices were not that well suited to taking notes (unless, of course, you had a touch screen or a stylus such as a Wacom device with which you could render handwriting; after all, typing calculus notes is not something anyone can do on the fly). Guess what? Same thing! Once again I saw a sea of laptops and tablets. Not quite so many, of course; I’d estimate around 50% of the students had them open, as opposed to over 90% in the education class. Once again, though, the screens were dominated not just by social media but also by online shopping!
Just a thought: maybe someone should run their own set of observations and verify this. At any rate, this short anecdote lends a bit of credibility (yes, I know “piling on the anecdotes” is a very flawed form of research) to the notion we all have of how distracted our students really are.
Which brings us to the point: as educators it is in our best interests, and those of our students, to find effective ways of managing the many distractions that electronic gadgets bring to our classrooms. While it is certainly true that electronic devices hold incredible promise for all aspects of education, it must also be acknowledged that they are equally effective at pulling students away from the tasks at hand. The same conduit that brings research, information and activities right to students’ fingertips is equally adept at bringing in off-topic interactions, irrelevant information and other distractions, particularly games that have nothing to do with learning.
Blocking unrelated content is a strategy that will never work. Go ahead and block Facebook at the Wi-Fi router; the students will hardly be slowed at all. Some will switch to getting it through their phones, which you cannot block. Others will switch to a different social media platform (new ones pop up almost weekly), and still others will simply connect through a proxy server, circumventing the router and firewall rules entirely. It’s a losing game of cat and mouse.
Blocking the use of electronic devices is equally counterproductive. First of all, it drags instruction back to the 19th century—and we cannot afford to do that. More importantly, though, the whole practice of “blocking” or “banning” is anathema to the whole idea of schools as places of learning.
So what, then? What is the magic bullet? As you might expect, because it’s nearly always the case, there is no one simple solution. There are, however, general strategies that can be applied and that will prove effective. Here are some suggestions:
- Make personal contact with the students: When students turn to the web browser they are turning away from you, the instructor. The less personal you are to the students, the more they will do this.
- Communicate your values clearly: Typically around 80% of people will respect your wishes, so make sure they know what your wishes are. Make it clear to the students that you do value the use of electronic equipment but that they must also make the best use of their class time; to do this they should minimize distractions and, in particular, save the social networking and shopping for some other time. Of the remaining 20%, around three-quarters can be convinced to follow along too, especially if you move around the room to make it apparent that you are checking whether students are engaged. The small remainder, around 5% of the total, will do what they please regardless of what you do; this group may be regarded as beyond the point of diminishing returns, so long as they do not distract others with their off-topic pursuits.
- Find ways to leverage the potentially distracting technology: You can always find ways to put the devices to some good use. Examples include: (1) getting the students to install “clicker” applications and building “instant response” activities into your classes; (2) providing electronic versions of partial notes (sometimes referred to as “gap notes”) that the students can complete online if they have annotation software; (3) making effective use of simulations in class time where appropriate; (4) using appropriate application software for in-class activities.
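A quick back-of-envelope check shows that the compliance estimates in the second suggestion are internally consistent; the figures are the post’s own rough ballpark numbers, not research data:

```python
# Rough arithmetic behind the classroom-compliance estimates above.
# These are ballpark figures from the post, not research data.
comply_outright = 0.80                       # respect your stated wishes
persuadable = 0.75 * (1 - comply_outright)   # three-quarters of the rest
remainder = 1 - comply_outright - persuadable

print(f"comply outright: {comply_outright:.0%}")  # 80%
print(f"persuadable:     {persuadable:.0%}")      # 15%
print(f"remainder:       {remainder:.0%}")        # 5%
```

In other words, the “around 5% of the total” who will do as they please follows directly from the 80% and three-quarters estimates.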
The word Innovation is one that is tossed around so much that it’s lost much of its impact. In some ways it’s like “awesome,” isn’t it? Once, awesome meant something that literally took your breath away. These days it’s just a tired expression of assent; something deemed awesome is more likely just socially acceptable. Similarly, in a world where corporate press releases are ground out in volumes that rival unit sales, neither “innovation” nor “innovative” catches the reader’s attention much.
Add to that the point, already made, that scant few resources exist, whether people or money, to engage in the various activities that one might immediately recognize as innovative. Besides, in today’s busy, distracted world it’s often hard to spot innovation when it does occur.
That’s not to say it does not exist; it’s just generally buried under mounds of impressive-looking but essentially shallow efforts. A recent visit to the Unemployed Philosopher’s blog reminded me that most of the important work happens far away from fanfare. Day after day, professionals of all kinds, including educators, toil away developing the small but significant things that make practice just a bit better. It is a shame, really. Much of the attention goes to things that appear significant but really are not once you take the time to peer beneath the surface; stuff designed to grab attention and perhaps further some goal, just not the goals one would associate with positive change for all. Sure, it may look and sound great, but in the end you’re often left with the professional equivalent of election promises. The real innovations often lie elsewhere, buried among the many other details that take up our days. They do, nonetheless, exist and can be seen, if you look hard enough, in one of these four forms.
1. Structured Engineering: The kinds of planned changes that take place in a more-or-less orderly fashion. You identify a problem to be solved, plan a solution that involves more-or-less standardized equipment and procedures, then implement and test that solution.
For example, suppose you develop an online visual art course. You will carry out a procedure roughly like this:
- review the curriculum guide and outline the general instructional strategies, including the method by which they will be developed or acquired;
- assemble the development and implementation team; formulate the overall plan;
- select and assemble a system of effective tools and methods by which you will carry out the plan;
- field test the course and revise as necessary.
Pros:
- Good fit between need and response.
- Robust system once implemented.

Cons:
- Significant up-front cost.
- Often significant resistance to system-wide change and adaptation.
- Possibility of large-scale failure if wrong choices are made.
2. Structured Deepening: This involves extending an existing system in a purposeful way. As an example, perhaps you choose to modify the aforementioned system by which you are teaching visual art so that you can now teach music online too.
Pros:
- Significantly less costly than starting from scratch.
- Less likelihood of large-scale failure.

Cons:
- Less than optimal fit between need and response, since you are modifying an existing system rather than building one to meet specifications.
3. Radically novel: Every so often completely new approaches are developed. It can be argued that before “Star Trek: The Next Generation” nobody thought very seriously about multipurpose digital tablets such as Apple’s iPad or Google’s Nexus tablet. Now, however, these multipurpose devices are changing the way people interact with the Internet, with audio and video and, most importantly, with one another.
Pros:
- Often based on new devices; carries a shiny-and-new “wow” sense of interest.

Cons:
- Teaching and learning sometimes become a secondary activity.
- New devices often lack institutional tech support and have a short lifespan.
4. Entirely new bodies of knowledge and practice: Radically new devices lead, in turn, to entirely new ways of doing things. Consider English Language Arts. In the pre-digital age the focus was on reading, writing, listening and speaking. Now, with so many modes by which we can communicate an additional focus—Representing—is becoming very important. The mobile devices, mentioned above, are also changing the way we interact. Who knows what’s coming!
Pros:
- Generally a good fit for those who have had the benefit of the events that led to the new development.
- Often well-suited to the time and place in which they occur; “products of their times.”

Cons:
- Often adopted by evangelists who assume (incorrectly) that the new way is the best way for all.
Through it all, though, it remains as important as ever to maintain a focus on teaching and learning. While the new devices and methods are exciting, if the end result is not a strategically significant improvement in an identified area of concern in education, most notably increased achievement or cost savings, then the innovation is pointless.
How many times do you see “cost saving” touted as a reason for increased use of educational technology, most especially distance education? Time and again you will see the adoption of new technology justified by cost savings. All you can really do, most of the time, is roll your eyes, because you know that one of two things will happen: either (1, not so bad) the new technology will wind up costing somewhat more than budgeted, owing to training and other unanticipated costs associated with the adoption and integration process, or (2, bad) it will eventually be abandoned and left to lie, mostly unused, right next to all of the other money-wasters purchased through the years.
This does not need to be the case. Properly done, new technologies can be both more effective and cheaper; just not that much cheaper. Look around at the cellphones, fuel-injected engines, “green” heating systems and such that have made our lives that much better. The same can happen in our classrooms too, but we need to take a much longer view of what constitutes cost saving and just plain get over the fool’s quest for that elusive magic bullet.
Cost saving should NOT mean the slashing of departmental budgets and subsequent placement of course notes online just so deficits can be handled in the short term. (Although, admittedly, here in the real world that does have to happen from time to time, regardless of how high-minded we would like to be.) That helps nobody, as the result will only be a degradation of services, followed by a corresponding loss in enrolment. Cost saving might be better framed as the deliberate employment of suitable technologies so that, over time, better outcomes can be achieved at lower cost. Some examples:
- Joining classes at separate campuses or schools using videoconference or, even better, a combination of videoconferencing and web conferencing such that smaller student cohorts can be aggregated. In those instances, though, care must be taken such that the host site or the instructor site does not become the “main” site with the remote ones getting the scraps from the educational table.
- The replacement of non-interactive lectures with series of multimedia-based presentations, preferably with interactive components, such as embedded quizzes or simulations.
- The gradual replacement of some media types with others but only after a piloting process which (a) shows the worth of the new technology and (b) refines the methodology before full deployment. For example, it may be feasible to replace the printed materials used in a course with online versions, perhaps multimedia or eBooks.
How often has it happened: a new device and its associated procedures show up unannounced. Perhaps it’s a new set of Chromebooks, maybe it’s clickers, a handheld computer algebra system or a shiny new computer numerical control (CNC) machine for the shop class. Whatever. In it comes, and with it a feeling that you are expected, all of a sudden, to just change everything.
Before proceeding too far it needs to be said that the expectation that you need to change right away is often imagined. It’s been my experience that those responsible for high-level decisions tend to have a healthy sense of what everyone is up against. After all, the funding that permits that sort of upgrade itself takes years to put together. The problem is that the expectations that led to the upgrade are often not well understood by those who are expected to implement the change; there’s often a disconnect. Nonetheless, those on the front lines tend to be confronted by a somewhat intimidating set of equipment and feel a corresponding sense of stress on account of what they know needs to happen.
Of course that is just a bit silly. Change does not happen that way. Yes, we are all intelligent and capable of change but none of us is foolish enough to react to every new thing that comes our way, whether invited or not. The change and integration process happens in stages. Assuming that the technology is not another blind alley (and they do happen) it usually plays out something like this:
- Familiarization: You have to learn how the equipment works at the most basic level. What’s it for? What do the controls/menus do? What options do you have? In situations like this it’s good to have access to an expert. A demo followed by hands-on activities can be quite useful at this stage.
- Utilization: You have to become comfortable with using it. It’s not enough to know what each component does; you have to become adept in its use. Nobody wants to make clumsy or false moves in front of an audience, so you need time to practice. If, for example, the device in question is a handheld computer algebra system, then use it for your own purposes for a semester or so before even attempting to build lessons around it. If it’s an interactive whiteboard (IWB), then you need to take some time to engage in unstructured use (play) with the device in a non-threatening environment. Just close the classroom door and fly solo or, better still, gather a small posse of like-minded colleagues and have a collaborative session.
- Integration: Bit by bit you make the use of the technology a part of the natural routine. While you can bring it in all at once, it’s much less stressful to layer its use in here and there. If, for example, the device is an IWB, instead of ditching your existing lesson plans try to catch the low-hanging fruit; that is, redo some of the lessons that lend themselves best to an IWB approach. If one works well, try another, and so on.
- Reorientation: In time you may find that the “new” equipment and associated methodology becomes your standard approach. That set of Chromebooks you once despised may, in time, become a treasured addition to your classroom; perhaps even indispensable. This will not happen overnight, and the stages are best measured in semesters, maybe even years.
- Evolution: With new standards come new horizons. You may find unexpected applications of the once-unfamiliar technology. Perhaps you even spot yet another—and for now unfamiliar—set of methodologies that bears promise.
Of course equipment will still arrive unexpectedly and instructors will, to some extent, have to sort it out as best they can. The best advice is to realize that regardless of what else happens the integration process will come in stages, so act accordingly.