
CIO for Hire: Ignorance may be bliss, but it’s also dangerous

Guest post: TechRepublic's Jay Rollins talks about an instance in his company where the user view of “If it ain’t broken, don’t fix it” seriously affected the bottom line and what he had to do to correct it. You can find more posts like this in TechRepublic's CIO for Hire blog.
Written by Larry Dignan, Contributor

——————————————————————————————————————-

It’s kind of interesting how things work out. Many IT organizations struggle to get security or monitoring tools justified in a business case: “Everything is working fine, why do we need to upgrade our firewalls or buy an enterprise virus scanner?”

Then, when you get the tools in place, you find all kinds of issues that have you fighting fires left and right. One example in my shop had to do with an enterprise antivirus application. Most of the servers were running AVG Free or some other standalone, unmanaged security package. Once we put an enterprise-class system in place and deployed a layered security architecture, we had visibility we never had before and found malware, trojans, and viruses just sitting on the network without causing any visible symptoms.

Regardless of what you find, getting the tools in place is difficult. Sure, they can make you more productive, but the more detail you have visibility into, the more problems you find. The more problems you find, the more work you have to do with expectation setting. I’ll give an actual nightmare example here.

An ecommerce company had a spam gun that was internally developed. A Sendmail daemon was set up with some Perl scripts to take first name, last name, coupon code, and e-mail address and dynamically insert those fields into an outbound e-mail in a merge process.
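
The original implementation was Sendmail plus Perl, which I won’t reproduce here; purely as an illustration of the merge-and-send idea, a minimal sketch in Python might look like this (the template, field names, sender address, and SMTP host are hypothetical):

    # Hypothetical mail-merge sketch, not the original Perl/Sendmail scripts.
    import smtplib
    from email.message import EmailMessage
    from string import Template

    BODY = Template("Hi $first $last,\n\nUse coupon code $coupon at checkout.\n")

    def send_campaign(recipients, smtp_host="localhost"):
        # recipients: iterable of dicts with "first", "last", "coupon", "email" keys
        with smtplib.SMTP(smtp_host) as smtp:
            for r in recipients:
                msg = EmailMessage()
                msg["From"] = "offers@example.com"   # placeholder sender
                msg["To"] = r["email"]
                msg["Subject"] = "Your coupon is here"
                msg.set_content(BODY.substitute(
                    first=r["first"], last=r["last"], coupon=r["coupon"]))
                smtp.send_message(msg)               # one merged message per recipient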

Pretty straightforward. The business saw a spike in sales after the e-mail went out, and the marketing folks received their copy of the e-mail the moment a campaign launched. An important assumption here was that once you clicked the button, the e-mail went out instantaneously. It became all-important that e-mails arrive in everyone’s inbox at exactly noon for the biggest sales lift.

“Why do we need another e-mail system?” they asked. “It works fine.” The challenge was that it was nearly impossible to do the advanced segmentation and offer testing that marketing wanted with the home-built system. Additionally, the home-grown solution lacked detailed reporting, and the development staff had larger projects to focus on. So, like many applications that do not directly impact the core business (in this case, taking online orders), it received only cursory updates and attention.

From an IT standpoint, the old system also required a lot of hands-on work from development and IT operations to launch a campaign. A system that the marketers could manage themselves would be ideal. So we bought a really powerful e-mail campaign management tool. It does all kinds of things, and it gives you visibility into every little detail of the mailing process. Here’s what happened when the system was implemented.

We launched our first campaign. We could see e-mails going out, but the bounces were spiking. Marketing came to us three hours after the campaign launched and said they had not received the e-mail yet. We started looking under the hood. The merge process had completed and the spike in bandwidth had subsided, yet there were still thousands of e-mails in a queue that seemed to be draining at only about 25 e-mails per second. This new system was supposed to send out thousands per second. What gives?

Now for the details:

  • The old system did not have a bounce counter or any way to handle bounced e-mails.
  • The old system did not have a real-time queue report to see how many e-mails were going out and how quickly.
  • The new system “sprinkles” seed list e-mail addresses throughout the entire list for a better view into campaign performance.
  • The old system loaded the seed list (the marketing e-mail addresses) at the very beginning of the campaign (a rough sketch of the difference follows this list).
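
To make the seed-list difference concrete, here is a rough, hypothetical sketch (not the vendor’s code) of the two ordering strategies:

    # Old behavior: seeds first, so marketing sees the e-mail instantly.
    def front_load_seeds(recipients, seeds):
        return list(seeds) + list(recipients)

    # New behavior: sprinkle seeds evenly so they sample the whole send.
    def sprinkle_seeds(recipients, seeds):
        recipients, seeds = list(recipients), list(seeds)
        if not seeds:
            return recipients
        step = max(1, len(recipients) // len(seeds))
        merged = []
        for i, r in enumerate(recipients):
            if i % step == 0 and seeds:
                merged.append(seeds.pop(0))
            merged.append(r)
        merged.extend(seeds)   # any leftover seeds go at the end
        return merged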

When we got on the phone with the vendor, we all got a huge lesson in spamming. First was the discovery of the seed list pecking order. Of course marketing got their e-mails the second the campaign kicked off, because their addresses were loaded and sent first in the old system. Next, we found interesting behavior with Hotmail and MSN addresses: they would only accept 20 or so e-mails at a time before closing the connection. Apparently, you had to have a “whitelist” agreement with them to send mass e-mail. The new system waited for acceptance confirmations; the old system didn’t wait around to see whether a message was accepted or not. We also discovered that many of the other providers were blocking us outright. AOL had rules that a sender had to have an efficient way to handle bounced e-mail or their IP address would get blacklisted. We also found out that it would take an act of God to get off a blacklist once you were on it.
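
In other words, the new system treated each SMTP acceptance as a confirmation and respected per-connection limits, which is exactly why the queue drained so slowly. As an illustration only (the 20-message cap and the MX lookup below are simplified assumptions, not the vendor’s actual logic), the behavior is roughly:

    # Rough sketch: wait for each SMTP acceptance and cap how many messages go
    # over one connection to a given domain before reconnecting.
    import smtplib
    from collections import defaultdict

    PER_CONNECTION_LIMIT = 20   # roughly what we observed with Hotmail/MSN

    def send_with_confirmation(messages, mx_for_domain):
        # messages: EmailMessage objects; mx_for_domain: domain -> MX host name
        sent = defaultdict(int)
        connections = {}
        failures = []
        for msg in messages:
            domain = str(msg["To"]).rsplit("@", 1)[-1]
            if domain not in connections or sent[domain] >= PER_CONNECTION_LIMIT:
                if domain in connections:
                    connections[domain].quit()
                connections[domain] = smtplib.SMTP(mx_for_domain[domain])
                sent[domain] = 0
            try:
                # send_message blocks until the server accepts or refuses the mail;
                # the old spam gun never waited for this answer.
                connections[domain].send_message(msg)
                sent[domain] += 1
            except smtplib.SMTPException as exc:
                failures.append((str(msg["To"]), exc))
        for conn in connections.values():
            conn.quit()
        return failures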

Obviously, the old system could not manage bounced e-mail; it just ignored it. That behavior got the IP address blacklisted at a bunch of ISPs. So marketing’s perception of the old system was that it did what it was supposed to do, and did it quickly. That perception persisted because of the limited information and insight they had into how the system actually worked. Despite our exhaustive explanations and evidence that the old system didn’t work and that the new system was not actually slow, they still wanted to use the old system.
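
The fix the old system never had is simple in concept: record hard bounces and suppress those addresses on future sends, so the sending IP’s reputation survives. A small, hypothetical sketch of that idea (the bounce codes here are simplified):

    HARD_BOUNCE_CODES = {550, 551, 553}   # permanent failures, simplified

    def update_suppression_list(bounce_events, suppressed):
        # bounce_events: iterable of (address, smtp_code); suppressed: set of addresses
        for address, code in bounce_events:
            if code in HARD_BOUNCE_CODES:
                suppressed.add(address.lower())
        return suppressed

    def filter_recipients(recipients, suppressed):
        # Drop anyone who has hard-bounced before the next campaign goes out.
        return [r for r in recipients if r["email"].lower() not in suppressed]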

I couldn’t believe it. Because their limited measures were all green lights, they were satisfied with a system we thought we had just proved didn’t actually work.

It took several months to work through the lessons learned and track down the reasons behind the issues surfaced by the new system’s more detailed measures before we could start to turn things around. Only then, once we could demonstrate a higher conversion rate and after we switched IP addresses to start with a clean, non-spammer sending reputation, could we push the department past its denial.

Unfortunately, this level of denial is human nature. When a person is presented with change, and worse, with complexity in that change, the walls of denial go up. Even though the system may perform great, if the user doesn’t fully grasp it, the implementation can fail.

There were a few lessons I learned here.

  1. A better system with better information and better measurement will unearth issues you never thought existed.
  2. Be prepared to learn and learn quickly.
  3. Expectation setting is key. Many non-technology managers take the sales person literally. Be the voice of reason and communicate that it will get harder before it gets easier.
  4. Prepare for change. Be proactive and buy and distribute copies of “Who Moved My Cheese” by Spencer Johnson and work through the lessons with the business users. Help them recognize what denial is and how to avoid it.

There are more lessons here as well, and I am sure you have your own war stories about battling ignorance. Please share.
