WordPress: So Where are We Now?

It’s been four days since the deployment and we’ve been live for two. So how are things? Not too bad. We’re having problems with analytics and spiders finding us, but that’s expected. Changing DNS makes spiders pretty angry at the best of times. I’m just glad that Shelob isn’t after me – it’s not like I have an Elven Sword or Starlight.

Right now, things are reasonably stable. No reports of problems have come my way and backups are being taken without incident. I can take a backup and Git can tell me what changed between versions – a big plus if you have to manage content as a database. The ECLIPSE compare engine does take a long time, even on an i7, because the MySQL snapshot is so massive; Git’s own compare tools might be better here.
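For the curious, here is a minimal sketch of how that backup-plus-Git workflow can be wired together, assuming the mysqldump and git command-line tools are available; the database name, credentials, and paths are illustrative placeholders, not our real ones.

    #!/usr/bin/env python3
    """Sketch: dump the WordPress database to text and commit it to Git so that
    git diff can show what changed between backups. Database name, credentials,
    and paths below are illustrative assumptions."""
    import subprocess
    from datetime import datetime

    DB_NAME = "wordpress"                 # assumed database name
    DUMP_FILE = "backups/wordpress.sql"   # assumed path inside the Git work tree

    def backup_and_commit():
        # One INSERT per row and no dump timestamp keeps the text diff-friendly.
        with open(DUMP_FILE, "w") as out:
            subprocess.run(
                ["mysqldump", "--skip-extended-insert", "--skip-dump-date",
                 "--user=backup", "--password=secret", DB_NAME],
                stdout=out, check=True)
        subprocess.run(["git", "add", DUMP_FILE], check=True)
        subprocess.run(
            ["git", "commit", "-m",
             f"Content backup {datetime.now():%Y-%m-%d %H:%M}"],
            check=True)

    if __name__ == "__main__":
        backup_and_commit()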

Our one annoyance? Spambots. Yes, they caught up with us moments after the DNS switch happened. Fortunately, we are able to block their posts. We also had a large number of fake users that I have already deleted – hopefully I did not remove anyone real. Don’t worry: our editors won’t blindly delete users who have posts, so even if we accidentally delete a real user, the posts will be kept (or at least reviewed for spam first). We’re researching how to prevent the initial registrations, but that’s a harder problem than catching actual spam posts.

More tomorrow on a really cool idea one of my sons had for learning about disaster management training.

WordPress Deployment: Here is What We Did

It was a dark and stormy night…

The adventure started with building a local copy of WordPress using essentially MAMP (Mac, Apache, MySQL, and PHP), although I configured each component on its own rather than using the MAMP product. No real differences there. DNS on a Mac is a bit of a gotcha because it sometimes does not like localhost vs. 127.0.0.1 if the server is not itself a DNS host, but that’s a different problem. The local Apache instance had to be on a different port, so at least our development URLs were immediately recognizable. This turned out to be a good thing.

We then migrated whatever content we wanted from the old website. Forget Import. It doesn’t do what you’ll want. Probably never. At this point, some of the plug-ins worked, some didn’t. Anything with email was DOA because we are inside our firewall. No problem, we expected that.

So now the fun bits. There is no “deploy” button. FrontPage had one. DreamWeaver has one. Most things do. WordPress? Not so much. Sure, FTP worked to move the PHP code and ECLIPSE update sites. But there was more…

To move the content, we had to take a copy of the database. Oh, there’s nothing like TMF online dumps here, mind you; you dump to text via mysqldump, because WordPress requires MySQL. (Yes, that’s in red because of all the blood and pain.) Our web host provider then used one of their magic scripts to go through the text image of the database and change all the URLs. Are you afraid yet? They then created our production database from that modified dump. This was early last Friday.
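Their script isn’t ours to publish, but here is a hedged sketch of the kind of substitution it performs on the text image, with placeholder URLs. One caution: WordPress stores some options as PHP-serialized strings with embedded lengths, so a naive search-and-replace like this can corrupt them; a real migration tool handles that case.

    """Sketch of the URL rewrite performed on the mysqldump text image.
    OLD_ROOT/NEW_ROOT are placeholders. A naive replace like this can corrupt
    PHP-serialized values (which embed string lengths), so treat it purely as
    an illustration of the idea."""

    OLD_ROOT = "http://localhost:8080"    # assumed development site root
    NEW_ROOT = "https://www.example.com"  # assumed production site root

    def rewrite_dump(src_path, dst_path):
        replaced = 0
        with open(src_path, encoding="utf-8") as src, \
             open(dst_path, "w", encoding="utf-8") as dst:
            for line in src:
                replaced += line.count(OLD_ROOT)
                dst.write(line.replace(OLD_ROOT, NEW_ROOT))
        return replaced

    if __name__ == "__main__":
        count = rewrite_dump("wordpress.sql", "wordpress.prod.sql")
        print(f"Rewrote {count} occurrences of {OLD_ROOT}")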

Once that was done, we had to check everything. PayPal had to be used once before it stopped displaying errors to the user; that’s fixed now. The Contact plug-in had to be disabled and re-enabled. The Captcha plug-in had a security problem in its internal directory that we had to set and reset, and now it’s happy too. Not bad, but very manual.

48 hours later, the DNS caches all flushed and the new site is visible to one and all. We’re breathing a little easier, but only a little. When we make content changes, instead of pushing them up to the host, we have to make in-place changes and then take a backup – otherwise we lose comments and posts. I can force an on-demand backup any time I want (there are plug-ins like BackWPup to do that), but there’s still no effective way to push wholesale changes from development to production. So what’s the plan?

Well, we’re talking to people. WordPress hasn’t quite figured out this discipline yet; that or they are letting the community handle it. They do not have an official position on deployment. Yet. I know a blogger who is getting comments out there. You might know him. There are plug-ins being built that are starting to take this stuff into account. They aren’t free, and if I’m going to pay for something, it better be rock solid – you know, like we expect in the NonStop community. Maybe it is an opportunity for one of us to do something about it.

WordPress Lessons Learned – The Hard Way

Deep Breath

It’s difficult for me, used to a high level of operations management discipline, to look back on our website deployment with anything but shame. Here’s the story of our modest accidental success, or horrible failure, depending on your point of view.

Last fall, our web host provider informed us that support for FrontPage extensions, which we used on the previous incarnation of the website, was being dropped. Great! I finally had the motivation to get rid of the cobwebs and move to something modern. We’ve all been there. This started a long internal discussion of what to use for the new improved nexbridge.com. Our hosting provider suggested WordPress.

Moving the content was interesting, and difficult. Import did not work, so we had to move each page by hand. The WordPress editor is not 100% consistent with HTML, so simple copy-and-paste did not work out either. But that’s just a side note. We’re currently fixing some of the paste issues that accidentally carried over image links from the older site. Paste only text. Do not assume images are preserved.

But the work we did was all on a local server, to try things out. How to get the content up to the hosted environment? That was the question. The website has many moving parts:

  • Our ECLIPSE update site,
  • the PHP code for WordPress, its plug-ins, our customizations,
  • and the website content, which hides inside a MySQL database – and it has to be MySQL. That’s a WordPress requirement.

The ECLIPSE update site and our customizations are no problem. We’re good at that. Standard staging, install, fallback using Git. No worries.

The WordPress code has internal pointers to the root DNS of the website, so those have to be changed when you deploy, or you can use redirection code, which still needs to be modified. Cue fear here.
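For reference, the two best-known pointers live in the wp_options table as siteurl and home. Here is a rough sketch of resetting them after a deployment, assuming the standard wp_ table prefix and the mysql command-line client; the URL and credentials are placeholders.

    """Sketch: reset the WordPress site-root options (siteurl and home in
    wp_options) after deployment. Table prefix, credentials, and URL are
    assumptions; this simply shells out to the mysql client."""
    import subprocess

    NEW_ROOT = "https://www.example.com"  # assumed production site root
    SQL = ("UPDATE wp_options SET option_value = '{url}' "
           "WHERE option_name IN ('siteurl', 'home');").format(url=NEW_ROOT)

    subprocess.run(
        ["mysql", "--user=deploy", "--password=secret", "wordpress", "-e", SQL],
        check=True)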

Some of the plug-ins also have caches and internal pointers that depend on the DNS site root. Those had to be reset after we deployed. I was not happy about that.

An important lesson we learned is that security inside the PHP code area is crucial, and it usually ends up wrong. There are caches, you see. The security settings and users on the target system are usually different from those on your build machine. Enter annoyance.

And probably the worst part: the way to deploy website content is through SQL scripts. You basically dump the database to text, upload it, change a bunch of stuff in the scripts, and apply it to the target database, and every table is blown away and recreated. OK, I can accept that once. But what about our next set of changes?
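To make the cycle concrete, here is a hedged sketch of that dump/edit/apply round trip using the standard MySQL tools; host names, database names, and credentials are placeholders, and the edit step stands in for whatever URL and path rewriting your environment needs.

    """Sketch of the dump/edit/apply cycle described above, shelling out to the
    standard MySQL tools. Names and credentials are placeholders. Note that
    applying the dump drops and recreates every table it contains."""
    import subprocess

    def dump(db, outfile):
        with open(outfile, "w") as out:
            subprocess.run(["mysqldump", "--user=dev", "--password=secret", db],
                           stdout=out, check=True)

    def apply(db, infile):
        with open(infile) as sql:
            subprocess.run(["mysql", "--host=production.example.com",
                            "--user=deploy", "--password=secret", db],
                           stdin=sql, check=True)

    if __name__ == "__main__":
        dump("wordpress_dev", "site.sql")
        # ...edit site.sql here: rewrite URLs, paths, anything environment-specific...
        apply("wordpress_prod", "site.sql")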

In any event, we’re still working on it and are not live yet. We’ve made some of the site available for now. More to come on how we solved it…. [followup-post]

Website freeze is on!

The candidate website is now officially frozen pending DNS record deployment. Stay tuned! Randall will be posting our experiences under the Resources category once this adventure is completed and the new website is live.

Welcome!

Welcome to the new Nexbridge website, powered by WordPress.

We have completely overhauled our product and service offerings and the website that supports them. We hope you enjoy the new experience we are providing.

Installing Plug-ins under Indigo and JRE 1.7

As many of you may know, ECLIPSE 3.7 has been out for a while now. So has Java 7. What you may not know is that there is a problem with the ECLIPSE installer in 3.6 and 3.7.0 relating to Java 7. What happened is that Oracle made a change to Arrays.sort() in Java 7 that changes the way sort() works in a threaded environment. You will find a discussion of what the ECLIPSE contributors have done relating to this problem here: https://bugs.eclipse.org/bugs/show_bug.cgi?id=317785.

The problem was further diagnosed in https://bugs.eclipse.org/bugs/show_bug.cgi?id=297805, which relates to mirror ranking.

So far, we have found two temporary solutions:

  1. For ECLIPSE 3.6.x to 3.7.0, modify the config.ini file found in the ECLIPSE configuration directory (eclipse/configuration/config.ini) to add the following parameter:
    -Djava.util.Arrays.useLegacyMergeSort=true
  2. Run ECLIPSE 3.6.x to 3.7.0 using the Java 6 JRE/JDK. Revisions 6_20 and
    upward are acceptable to ECLIPSE. You can still build with the Java 7 JDK
    by configuring your compilers inside ECLIPSE through Window/Preferences.

This is of particular importance to NSDEE 2.x users, who are limited to ECLIPSE 3.6.x with CDT 5.0.0. If you are an NSDEE user, we suggest that you stay on Java 6 until you move to NSDEE 3.0 or NSDEE 4.0. [Note: this comment is obsolete.]

The problem has been resolved in ECLIPSE 3.7.1. What we don’t know is whether there are any compatibility issues between NSDEE 3.0 and Java 7, although it is unlikely. Fix #1 above should work if the issue persists after ECLIPSE 3.7.1. [Note: this comment is obsolete.]

Updated 21 Jan 2014: The problem continues to exist or has been reintroduced in Juno/Java 1.7.0_51 and has caused some items in this support note to be made obsolete.

Updated 20 Feb 2014: The problem with the installer appears to have been resolved under Kepler. See the updated support note for details.

Updated 25 Feb 2014: Not everyone may see this problem. The issue relates to threading, and is, as a result, timing sensitive.

The Indestructible Intersect Point

I’m writing this blog while watching the first game of the Montreal Canadiens’ playoff hopes. There was a time when I was growing up when they were virtually undefeatable. We can only hope now. Yes, I’m a fan. Is there a relationship? Probably not, but who knows? I’m also still snickering at the maintenance outage that the hosting site for this blog had on Thursday morning. Does anyone see the irony in that?

To continue where I was last week, improving availability is typically an exponential cost function, where each 9 of availability costs substantially more than all the previous 9’s combined. So at some point you get to diminishing returns where it costs you more than it’s worth.
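To put rough numbers on that, here is a small sketch that converts nines into a monthly downtime budget and applies a purely illustrative exponential cost model; the dollar figures are invented to show the shape of the curve, not taken from any real project.

    """Sketch: convert nines of availability into a monthly downtime budget and
    apply an illustrative exponential cost model in which each extra nine costs
    more than all the previous nines combined. The dollar figures are made up."""

    MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200

    def downtime_minutes_per_month(nines):
        availability = 1 - 10 ** -nines          # e.g. 4 nines -> 0.9999
        return MINUTES_PER_MONTH * (1 - availability)

    def illustrative_cost(nines, base=100_000, factor=2.5):
        # Each nine multiplies the cumulative spend; these numbers are not real data.
        return base * factor ** (nines - 2)

    for n in range(2, 7):
        print(f"{n} nines: {downtime_minutes_per_month(n):8.2f} min/month down, "
              f"~${illustrative_cost(n):,.0f} cumulative spend (illustrative)")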

But what if you looked at the problem from the other direction, specifically assuming that the system will never go down, and worked backwards? Wouldn’t that be unattainable and cost an infinite amount of money? If you come at it from the wrong direction, starting from an unreliable system and trying to make it perfect, you’ll never get there – yes, that’s a debatable point, so go ahead and argue with me. So, start from the assumption that outages are unacceptable right from inception. There’s a lot you’ll need and have to invest in, but it’s actually a quantifiable cost that you can get to using traditional project budgeting techniques. Traffic routing, reliable platforms, and sophisticated version control are all elements of it. Infrastructure is a huge part of the cost, as is cultural change in the operations and development groups – we’ll go there later – but is it worth it?

[Figure: iIP cost graph, the cost of indestructibility added to the availability cost curve]

Here’s another cost graph where the cost of indestructibility is added to last week’s picture. Strangely, it’s a straight line. It makes no difference in the cost whether you run an indestructible solution 7x24x365.25 or 20 hours a day. So again, why bother?

In many situations these days, installation can take days, not hours. Try renormalizing a multi-terabyte database in your normal outage window. You can’t. It doesn’t matter whether you are at 99.9999% or 99.99%. Indestructibility means you have to be able to perform the renormalization while the system is up – a daunting task, but possible.

What the cost curve shows is what I call the Indestructibility Intersect Point, or iIP to coin an acronym. It’s where the availability curve and the indestructibility curve meet. If your indestructibility investment is less than your outage cost, which it can easily be (again, you know who you are out there), why bother chasing the 9’s curve? Sometimes the iIP will be above the outage cost; that’s when you don’t bother. So the question for you to think about – who knew there would be homework in a blog – is this: do you know what your three costs are, so that you can decide what to do?
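As a worked example, here is a sketch that finds the iIP using the same illustrative exponential curve as above and an assumed flat indestructibility cost; all of the parameters are invented, so only the shape of the comparison matters.

    """Sketch: locate the Indestructibility Intersect Point (iIP) as the first
    number of nines at which chasing another nine costs more than a flat
    indestructibility investment. All cost parameters are invented examples."""

    INDESTRUCTIBILITY_COST = 1_500_000  # assumed flat, duty-cycle-independent cost

    def availability_cost(nines, base=100_000, factor=2.5):
        # Illustrative exponential curve: each nine costs more than all previous ones.
        return base * factor ** (nines - 2)

    def iip(max_nines=9):
        for n in range(2, max_nines + 1):
            if availability_cost(n) >= INDESTRUCTIBILITY_COST:
                return n  # beyond this point the 9's curve is the worse buy
        return None

    print("iIP at", iip(), "nines (with these illustrative numbers)")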

What’s coming next? Well, we’re starting to get past the introduction, so I’m going to have to write about details soon. Stay tuned and give me feedback.

Indestructibility vs. Availability

An interesting perspective came from a discussion I had recently with Richard Buckle of the Real Time View (http://itug-connection.blogspot.com/). We talked about a major difference in perspective between highly/continuously available systems and indestructible systems. In the case of indestructibility, people take the position that the system will always be running, forever, right from the beginning. It’s the core assumption. How you build your infrastructure, software, and platforms needs to keep that firmly in mind right from the initial concept. With highly available systems, people start with a trade-off of what is good enough vs. cost, whether that’s four 9’s, five 9’s, or better. The trade-offs made during a project come down to how many nines people are willing to pay for. So it gives marketing people a really interesting pitch:

We can give you a brand new system that has five 9’s of availability, and it will only cost you ___.

Sure they can, but who pays for the other changes? You know, the small stuff, like retraining everyone, change management, business process re-engineering (BPR), and testing cycles – all the “your mileage may vary” costs that somehow are always much bigger than anyone expects and often larger than the technology outlay to get you that extra 9 in the first place. At some point, and it’s very specific to your organization, the cost of indestructibility is actually less than chasing the exponential 9’s curve when you start from a system that is fundamentally fragile.

Now, suppose you’ve got a system that is happily running along at 99.99% of the time, and somebody figured out that every minute of outage costs your company twenty million dollars in penalties (You know who you are out there), and you’ve had outages. Or worse, suppose your outages are larger than that, and you cross the critical fifteen-minutes-down-and-lose-your-charter line. In order to add another 9 to your availability numbers, you’re going to have to rework your environment, maybe change platforms, change your processes, rewrite your software, build new deployment technology, get user acceptance testing signoffs, and worse, try to find funding in the organization to make all that happen. That’s pretty daunting. The fear of having to go through that for every nine is what led me down the indestructibility path in the first place. Organizational change is far harder than technological change, but that’s often what we have to do to add that elusive and expensive additional 9.

[Figure: sample graph of the risks and rewards of availability]

To illustrate this point, here’s a sample graph of the risks/rewards of availability. Next time, I’ll talk more about this cost function and what it looks like in the indestructible world. You might be very surprised.

What Is Indestructible Computing Anyway?

I was recently asked a good question by one of the readers: “What is indestructible computing, and why should I care?” Here are a few common terms. What you should keep firmly in mind is that whatever aspect of a system you look at, the actual service level you experience is usually set by the weakest of your components. Guess which one is almost always the weakest? If you’ve been following the blog, you already know: software change.
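A quick worked example of that weakest-link point, assuming components in series (each one required for the service) with invented availability figures:

    """Sketch: when every component is required (in series), end-to-end
    availability is the product of the component availabilities, so it can never
    beat the weakest link. The figures below are invented for illustration."""
    from math import prod

    components = {
        "platform": 0.99999,
        "network": 0.9999,
        "software change process": 0.999,   # usually the weakest link
    }

    end_to_end = prod(components.values())
    print(f"Weakest component: {min(components.values()):.5f}")
    print(f"End-to-end:        {end_to_end:.5f}")  # slightly worse than the weakest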

General Purpose Computing

Well, you’re probably reading this blog from a general purpose environment. A workstation or laptop can be considered general purpose hardware. Your browser could probably be considered general purpose software. The combination of the two gives you a general purpose environment.

Highly Available Systems

These systems are available most of the time – generally 99.99% of the time, or slightly under 5 minutes of unplanned or planned downtime a month. Banking systems are typical of these. Fitting maintenance into even a five minute window is difficult, particularly when you’re upgrading disks or restructuring your Operational Data Store (ODS).

Continuously Available Systems

These systems are available virtually all of the time – generally 99.999% of the time, or about 30 seconds of downtime a month. Extensive use of independent components allows these systems to operate virtually without any unplanned outages. Planned outages do occur for upgrades, but the window for these outages is very small. There’s a lot of confusion between Highly Available and Continuously Available systems; the lines are pretty blurry, and I won’t really differentiate between them much. That there is even a distinction is arguable.

Critical Systems

These systems include some of the obvious life-critical systems: flight control systems; rockets; many health monitor devices. Systems like this do not have the same level of long-term availability that continuously available systems have, but during their duty cycle, no outage is permitted at all. Fortunately, no changes are generally permitted while the systems are up. How many launches were delayed because of sensor or software issues?

Long-Life Systems

In long-life systems, reliability is the number one priority. Unscheduled maintenance is usually impossible or cost-prohibitive. Scheduled maintenance is possible but not desirable, and usually involves only software components. During maintenance, rigorous testing is done to ensure that the system will function reliably when back online. Communication satellites and the Mars Explorers fall into this category. Even then, subtle defects, like mixing imperial and metric units in a calculation, can cause disastrous failures.

Indestructible Systems

A truly indestructible system builds on the best of all of these systems. The systems are expected to be long-life, yet dynamic. Change is not only possible, but expected. Yet there are no unplanned outages and no planned outages. Not only small components, but major components like data centres can go offline without a perceived outage or a noticeable reduction in service levels. Maintenance is done while the system is up.

And I don’t blame anyone for thinking indestructibility is unattainable. It’s very hard to get right and even then, it’s always possible that something will go wrong. In future posts I’ll go into what it takes to make this work. Hopefully you’ll see that indestructible systems are practical in the real world and understand what it takes to make them work for you.

The next post will go into the starting points of view for building these systems and how money gets wrapped up in it.

Bringing DevOps to Legacy Platforms