
What's your Test Environment Like?

I know we all come from different walks of life and use different strategies when it comes to testing configuration changes before they reach production. I wanted to share what I've found works well in my environment, and to hear your success stories or worst nightmare scenarios.

My test environment has 2 tiers.

Tier 1 - Four virtual workstations running in VMware Workstation. I power these on and run patches, software deployments, and scripts through them to make sure nothing is majorly broken. The four machines span three operating systems and run several different business applications. If something breaks here, it's easy to revert to the pre-install snapshot and try again.
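
If you want to script that revert, VMware's vmrun command-line tool makes it painless. Here's a minimal sketch, assuming vmrun is on the PATH; the .vmx paths and the "preinstall" snapshot name are placeholders for whatever your own lab uses:

    # Revert the tier-1 VMs to their pre-change snapshots and power them back on.
    # The .vmx paths and the snapshot name are hypothetical placeholders.
    import subprocess

    VMS = [
        r"C:\VMs\win7-finance\win7-finance.vmx",
        r"C:\VMs\winxp-hr\winxp-hr.vmx",
    ]

    for vmx in VMS:
        # Roll the VM back to its known-good, pre-install state...
        subprocess.run(["vmrun", "-T", "ws", "revertToSnapshot", vmx, "preinstall"], check=True)
        # ...then power it on; "nogui" keeps the console window from popping up.
        subprocess.run(["vmrun", "-T", "ws", "start", vmx, "nogui"], check=True)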


Tier 2 - A test group of 10 physical production workstations at the same location as my department. Provided nothing broke in Tier 1, this is where I go next. I have 10 end users across multiple departments with different requirements. Once I've verified that they have received the change and have gone through a checklist of the tasks they perform on a typical day (prioritized by business need), our changes usually go to production.
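
To give a concrete (and entirely hypothetical) picture of what "prioritized by business need" means, the checklist is really just tasks ranked by how badly the business hurts if they break:

    # A sketch of a tier-2 verification checklist, ordered by business need.
    # Departments, tasks, and priority values are hypothetical examples.
    checklist = [
        # (priority, department, task) -- lower number = more critical
        (1, "Accounting", "Post a transaction in the ERP client"),
        (1, "Dispatch", "Print a work order"),
        (2, "HR", "Open payroll spreadsheets in Excel"),
        (3, "Front desk", "Send and receive email"),
    ]

    for priority, dept, task in sorted(checklist):
        print(f"[P{priority}] {dept}: {task}")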


One success story: our Office 2010 deployment was fully debugged this way. Across the two operating systems we deployed to, everything worked as intended and the install remained silent.
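
For anyone who hasn't fought that battle yet: Office 2010's setup.exe takes a /config switch pointing at a config.xml, and setting Display Level to "none" in that file is what keeps the install silent. A hedged sketch of kicking off such a push (the share path is a made-up example, not necessarily the exact approach used here):

    # Launch a silent Office 2010 install. setup.exe's /config switch is
    # standard Office 2010 setup syntax; the share path is hypothetical.
    import subprocess

    SHARE = r"\\fileserver\deploy\Office2010"  # hypothetical distribution point

    result = subprocess.run([
        SHARE + r"\setup.exe",
        "/config", SHARE + r"\ProPlus.WW\config.xml",  # contains <Display Level="none" ... />
    ])
    print("setup.exe exit code:", result.returncode)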


One disaster was a software deployment that went to production and triggered an automatic reboot that hadn't been caught in the virtual environment. The push to the test group happened late in the business day, and rather than report the reboot, the end users called it a day. The reboot also affected some production servers and critical systems, which were down for somewhere between 10 and 15 minutes. Fortunately, when they came back online nothing else was wrong.
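
One way to catch this class of problem is to suppress installer-initiated reboots outright and surface the "reboot required" state instead. For an MSI-based package, a minimal sketch looks like this (the package path is a made-up example; 3010 is Windows Installer's standard "success, reboot required" exit code):

    # Run an MSI silently with its automatic reboot suppressed, then check
    # whether it wanted one. The package path is a hypothetical example.
    import subprocess

    result = subprocess.run([
        "msiexec", "/i", r"\\fileserver\deploy\app\setup.msi",
        "/qn",                    # fully silent
        "REBOOT=ReallySuppress",  # never let the MSI reboot on its own
    ])

    if result.returncode == 3010:   # ERROR_SUCCESS_REBOOT_REQUIRED
        print("Installed OK, but a reboot is pending -- schedule it deliberately.")
    elif result.returncode != 0:
        print(f"Install failed with exit code {result.returncode}")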


Comments

  • We use a 2-tier approach also. The first tier is a test rack with 12 machines in it: I have 1-2 of every model that we have in the field. We also have a couple of laptops and Macs in the R&D area, matching what's in the field, to test with. If the push fails, we reimage with the K2000 and try again.

    Using real machines even lets us test BIOS changes and Wake-on-LAN (WOL) before deploying (a quick WOL sketch follows the comments). We push to those, and if all goes well the ITO staff, about 20 users, gets the deployment next (tier 2). - SMal.tmcc 10 years ago
  • Thank you for sharing your strategy. Any success/horror stories? - GeekSoldier 10 years ago
  • We are still ironing out our testing procedure. We have 3000 Windows XP workstations and about 400 Windows servers.

    Currently we take a 4-tier approach to software deployments/upgrades for workstations.

    Tier 1 - test deploy to virtual and some physical workstations in a test lab

    Tier 2 - test deploy to IT, which can be less than useful since almost everyone in IT runs Windows 7 instead of XP

    Tier 3 - production deploy to Clinics (about 700 workstations)

    Tier 4 - production deployment to the rest of the network which includes in-patient and ambulatory areas of the hospital

    I wish we had a Tier 2 more like yours. It sounds much more useful than our approach. I think I'll look into trying something like that.

    I absolutely have a horror story. We had to do an overnight upgrade of a software application that is managed in house; the backend and frontend had to be upgraded the same evening. The morning after the scheduled deployment I came in and found that almost none of the ~3000 deployments had worked. I've learned since then that, to minimize issues, you should always create a label ahead of time that hits only the machines that don't already have the new software. That seems obvious, but I was a total newb when I started. Experience is the best teacher, right? O_O

    Recently I had a success story with an overnight deployment of UltraVNC. I targeted machines that had an old version or no VNC installed at all. I scheduled the install to run, then scheduled a "force check-in", and then scheduled one more deployment well after the check-in, directed at the same label (a sketch of this pattern follows the comments). This seemed to yield a MUCH higher deployment success rate.

    Thanks for your post, GeekSoldier! It's good to be thinking about ways to make testing better! - awingren 10 years ago
  • I agree that experience can be a great and merciless teacher, but those of us who get that experience generally come out better for it. I enjoy hearing how others approach their testing. I've found a virtual bench to be particularly useful and time-saving: before any user is impacted and before I have to reimage anything, I've got a fast way to revert to a pre-change state. My Office 2010 deployment took many tries before I finally got the syntax in KACE correct, but since then it has opened up a world of possibilities for software deployment and updates. - GeekSoldier 10 years ago
  • My first week with KACE, I wanted to run a Detect and Deploy patching schedule on about 10 user PCs. I set the schedule for 11pm that night and hit Save and Run Now, thinking it would queue the schedule for 11pm rather than actually run it at that moment.

    Needless to say, I knocked 10 users off of their computers as their machines just restarted on them. I work in a very large law firm, so it was a great learning experience: just hit SAVE, not SAVE AND RUN NOW.

    Keep that in mind, any newcomers reading this. - areiner 10 years ago
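
For anyone wanting to try the WOL-then-deploy idea from SMal.tmcc's comment, sending the magic packet takes only a few lines: it's 6 bytes of 0xFF followed by the target's MAC address repeated 16 times, broadcast over UDP. The MAC address below is a placeholder:

    # Send a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC repeated 16 times.
    # The MAC address used here is a hypothetical placeholder.
    import socket

    def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        clean = mac.replace(":", "").replace("-", "")
        payload = bytes.fromhex("FF" * 6 + clean * 16)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, (broadcast, port))

    wake("00:11:22:33:44:55")  # placeholder MAC for a test-rack machine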
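
And to make awingren's deploy / force check-in / redeploy pattern concrete, here is a plain-Python illustration (not KACE syntax; the machine names, versions, and inventory are all hypothetical):

    # Illustrates the pattern: build the target label from machines that lack
    # the new version, deploy, refresh inventory, then deploy again to
    # whatever is still on the label. All data here is made up.
    TARGET_VERSION = "1.2.1"

    inventory = {  # machine -> installed UltraVNC version (None = not installed)
        "pc-001": None, "pc-002": "1.0.9", "pc-003": "1.2.1", "pc-004": "1.1.8",
    }
    offline_first_pass = {"pc-004"}  # pretend this one missed the first window

    def label(inv):
        # naive string compare is fine for these sample version numbers
        return sorted(m for m, v in inv.items() if v is None or v < TARGET_VERSION)

    def deploy(machines, inv, offline=frozenset()):
        for m in machines:
            if m not in offline:
                inv[m] = TARGET_VERSION

    deploy(label(inventory), inventory, offline=offline_first_pass)
    # ...a scheduled "force check-in" refreshes inventory here...
    retry = label(inventory)  # catches whatever the first pass missed
    deploy(retry, inventory)
    print("second pass had to retry:", retry)  # -> ['pc-004']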