Archive for the 'Tech' Category

MacBook Pro – Gaming Rig


I love my rMBP. I have the “Late 2013” model, which means I’m one generation back from current – the “Mid 2014” model.

OS X with Citadel

The difference is essentially that I have the 2.3 GHz (i7-4850HQ) with 6 MB on-chip L3 cache processor, instead of the 2.5 GHz (i7-4870HQ) with 6 MB on-chip L3 cache processor. All the other specs are effectively the same between versions.

With that processor, 16 GB of onboard RAM, a 512 GB SSD, and an Nvidia GeForce GT 750M video card with 2 GB of GDDR5 memory, this sounds like a reasonably capable spec for playing video games. The catch, of course, is that most games are made for PC instead of Mac.

I haven’t been a gamer for a while (15 or so years). However, my son is now old enough to play Mass Effect, a series I’ve wanted to play since the first game was released. My son currently plays it on his Xbox 360, and although it is fun to spend time with him while he plays and discuss strategy and options, I wanted to play as well. I had no interest in purchasing another game console, so that meant I would be playing the PC version. As my only “personal use” computer is my MacBook, that meant a Windows install. There are many ways to run Windows on a Mac now. I use, or have used, most of them, so this goal wasn’t frightening.

Virtualization is the easiest way to run Windows on a Mac. The user continues to run OS X, and the Windows instance lives in a Type 2 hypervisor (software running on top of the host OS). Parallels, VMware Fusion, and VirtualBox are all hypervisors I’ve used in an OS X environment. For the last few years I’ve been using VirtualBox exclusively on the various Macs I’ve owned to host my virtual machines. With the purchase of my current rMBP, I’ve also started running my virtual environments from SD cards, so as not to take up valuable real estate on the onboard SSD.

Where the VMs live

These cards host my Windows XP, Windows 7 x64, Windows 8.1 x64, and Ubuntu virtual machines. Since there are 2 VMs per card and only one SD slot in the Mac, with this method I can only run two of my virtual machines at a time.
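
Pointing VirtualBox at a card is simple, if you want to replicate this layout. A minimal sketch, assuming the card mounts at /Volumes/VMCARD (your volume name will differ, and “Win7x64” is just an example VM name):

VBoxManage setproperty machinefolder /Volumes/VMCARD/VMs    # new machine folders default to the card
VBoxManage createvm --name "Win7x64" --ostype Windows7_64 --register    # this VM now lives on the card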

My trusted virtualization model doesn’t work in this case anyway, as the games need to talk directly to the hardware. That means no virtualization – the Windows OS needs to be installed as a local OS.

Apple supports a local Windows install very easily with their Bootcamp product. Bootcamp allows the user to partition the local hard drive, and then select the boot partition (Windows or OS X) at power-on by holding down the Option key. The negative to this model is that I did not want to sacrifice any of my precious SSD space to a Windows partition. The 512 GB is all I have at the moment, and there are no aftermarket drives available to expand the late-2013/mid-2014 rMBPs – Apple uses a proprietary, non-M.2 PCIe blade SSD.

Now I’m to my third requirement (after no virtualization, and don’t partition my onboard storage) – I have to run this Windows installation from an external drive.

Bootcamp is no help. Bootcamp does not support installing / booting Windows from an external drive. However, there are several people who have done this with slightly older Macs, and I was able to take their work and make small changes for the current rMBP.

First, ignore this post:

Windows To Go

There are several reasons this is a bad choice for this operation. You have to use a USB 3.0 drive that is certified for WTG. This is a real requirement, not marketing – the USB stick has to present itself as an internal disk. WTG requires volume activation (no retail users allowed). Finally, even if you do build it, configure it for UEFI boot, and otherwise make it all happy, the Mac won’t boot to it anyway.

Here is the first useful post:

How Not to Install Windows on your Mac’s External Disk

This is a great / fun read that goes over the differences between BIOS and EFI, and explains why many of the things you’re going to want to try won’t work. He DOESN’T go as far as explaining how to actually accomplish your task, though.

Here is the second useful post:

install windows 8.1 to external disk

This one kind of works, but it resulted in a lot of bugginess for me. Your mileage may vary. It’s very “cut to the chase”, but it doesn’t give a lot of detail or hand-holding for non-technical users.

Here’s the third and most useful post:

Mac: Install Windows 7 or 8 on an external USB3 or Thunderbolt drive without using bootcamp

Yay! Helpful info! Let me save you a little bit of time. First, you can’t install Windows 7 on a USB3 drive, and if you were thinking of installing on a USB2 drive and then moving it to a USB3 enclosure, the rMBP only has USB3 ports, so it won’t boot to it anyway. Second, you should go for the LaCie Thunderbolt drive, not USB. It’s faster, and works better for the install process. For me it was as easy as following the steps in that post (using an existing Windows 7 machine, with the drive attached over USB) to make the external disk UEFI bootable, deploying the installation image, and then booting to it. I also downloaded the Bootcamp drivers to that disk, to allow installation of the hardware drivers when I knew I wouldn’t be able to get on the Internet due to the NIC not being recognized.
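
For the curious, here is the broad shape of that process – a sketch only, with example drive letters (the linked post has the real details). From an elevated command prompt on the helper Windows machine, with the Windows install media mounted as E: and the external drive showing as disk 1:

diskpart
select disk 1 (the external drive – confirm with list disk first)
clean
convert gpt
create partition efi size=200
format fs=fat32 quick
assign letter=S
create partition primary
format fs=ntfs quick
assign letter=W
exit
imagex /apply E:\sources\install.wim 1 W:\ (lay the Windows image down on the data partition)
bcdboot W:\Windows /s S: /f UEFI (write the UEFI boot files – the /f switch needs a Windows 8-era bcdboot)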

If you don’t want an EFI partition on both your internal storage and your external disk, this post also looks interesting:

Guide: create external Windows 7 boot drive for Macbook

I couldn’t test it, as my internal disk is encrypted. Having an EFI partition on both disks doesn’t really bother me, but it should work.

With the information I’ve linked to, a Thunderbolt/USB combo drive, and a copy of Windows 8.1, I now have a working Windows 8.1 install for my MacBook, and I’ve been using it for games for about three and a half weeks. So far no blue screens or any unusual behavior. I call this endeavor a success.

Windows 8.1 with Liara

And here we have the happy gaming machine…

OSX Yosemite – first week (not so) fun


Yosemite is characterized by granitic rocks and remnants of older rock. Perhaps that’s why Apple chose it as the name for their latest operating system. Anything that turns a working computer into a rock should have a relevant name.

I’m just ranting a little bit here. I recently upgraded my Late 2013 rMBP to OS X 10.10 Yosemite, and it returned the favor by enticing me to dust off my troubleshooting skills.

All that being said, the root cause of my issues technically wasn’t part of OS X or the Apple ecosystem, but I wasn’t happy after my upgrade just the same.

The problem I faced made my Mac extremely difficult to use. Within 2–10 minutes of power-on, any running application became unresponsive: pinwheel at mouseover, force quit impossible, and if Activity Monitor was already open, it showed every running app – and the Finder itself – as “Application Not Responding”. I was also unable to open any additional application once the laptop reached this state. The behavior existed both after an upgrade to OS X 10.10 and after a clean install (once my applications were also installed). The only way to get a functioning machine again was a hard reset.

I tried to diagnose and repair using all the usual suspects. Configuration changes I tried or checked included:

  • Verified FileVault was disabled – encrypting the drive will slow your machine until that process is complete
  • Reduced transparency in accessibility options – one common thread among people reporting slowness was the suspicion that it was graphics-controller related, so reducing overhead may help in some cases
  • Disabled graphics switching in power options – again, if there is a graphics controller issue, staying on one controller or the other may help
  • Reduced the number of items that Spotlight was indexing

None of these helped. Although I consider myself a power user, I don’t tend to have many applications running at a time. Typically just Safari, the Microsoft Office 2011 Suite, Terminal, and Remote Desktop. With a Core i7, 16 GB of RAM, and an SSD with 50% free space, it was painful to watch my rMBP struggle just to paint the screen with a minimal number of applications open.

Still, everything pointed at the Finder or a graphics issue; it seemed unreasonable to presume that several applications had developed the same problem independently and simultaneously.

I downloaded and ran EtreCheck and saw nothing unusual. I went through all of my applications to see if any had been updated in the two weeks since I’d last checked. One had – Dropbox – for a Finder-related issue, no less.

I downloaded and installed the (beta) update, hoping that it would resolve my problem. It didn’t, but it did make me take a long hard look at the only two Launch Daemons that had been consistent through all of my changes.

The first was Dropbox itself. It was always present because, although I could switch browsers, productivity suites, shell applications, and RDP tools, I couldn’t imagine living without Dropbox. I use it to keep everything synced across 8 devices on 4 different operating systems (iCloud isn’t a good fit for me). It turns out that Dropbox modifies the Finder to add green checkmarks to files that have synchronized. Turning off that feature doesn’t impact Dropbox functionality, but it isn’t exposed as a preference. Users have to remove the resource from the Dropbox app with the following commands:

# remove the Finder-integration helper that Dropbox installs system-wide
sudo rm -rf /Library/DropboxHelperTools

# remove the helper installer from the app bundle so it isn't put back on the next launch
rm /Applications/Dropbox.app/Contents/Resources/DropboxHelperInstaller.tgz

The second daemon was DisplayLink. I use the DisplayLink application to drive USB and Ethernet graphics devices – usually for displays that are physically distant from my laptop. I’ve used it for years and never had an issue. It turns out they now have an issue with Yosemite. It wasn’t the issue I was having, but I’d found another app with Finder compatibility problems in Yosemite, and that made it suspect.

After removing DisplayLink and disabling the Finder modifications in Dropbox, all of my issues with Yosemite have disappeared (other than that it’s ugly). My rMBP doesn’t run hot, hang, or have trouble painting the screen. I’ve had to disable auto-update in Dropbox to avoid reintroducing the problem, but that’s a small price to pay.

I’m confident that I won’t have to wait long to be able to use both of my problem child applications again. Each of them is mainstream and under active development.

But I’m not rushed.

WSUS, Drive Space, and Pain


Today’s annoyance started earlier this week when I happened to notice that the server I run WSUS 3.0 SP2 on at home was starting to run a little low on disk space. A few minutes of checking revealed that yes, it was the WSUS directory that was the culprit (at over 120 GB). No worries, thought I. I’ll just run the WSUS server cleanup wizard and all will be well.

Checking on the server a couple of hours after firing off the wizard revealed that very little progress had been made. The progress bar had moved perhaps 3% toward completion and seemed to be stuck on “deleting unused updates”. I thought perhaps the process was hung, something was holding a file open, or the server had hiccuped. I stopped the process, rebooted the server, started it again, and went to bed.

By the next afternoon there was more progress – to perhaps 10 or 11%. Since I’m fairly patient, the server wasn’t in immediate danger of running out of space, and the process was progressing, I decided to wait it out. Four days later the process aborted at just under 60% completion.

OK, the “wait it out” method didn’t seem to be working. A few quick Google searches revealed many admins recommending that you run the cleanup wizard often (weekly) to prevent just such an occurrence caused by an overly large WSUS file store, and correspondingly large database. Thanks guys.

Since the “cleanup everything” method didn’t seem to be working, I tried individual options in the cleanup wizard to see what would work. “Decline superseded updates” worked without error. “Decline expired updates” and “Delete computers not contacting the server” also executed quickly and flawlessly. “Delete unneeded update files” took about 40 minutes, but it also executed, freeing up about 6 GB of space in the process.

Since the issue seemed to be one of efficiency (the server was running too slowly to execute such a large process), I went looking for ways to either make it work less hard, or give it less to do.

With that in mind, I attacked the disk itself. This server has been running for about 3 years now, and since my home server supports a whopping 4 users, it doesn’t get a lot of preventative maintenance or performance tuning. WSUS uses the Windows Internal Database (formerly SQL Server 2005 Embedded) as the back-end engine, and I’d never given it any attention. First, I took a look at the database files on the disk. I discovered the database was about 10 GB, and the log was almost as large at 8 GB. This made sense, as the files had been growing on demand for nearly 3 years.

I downloaded SQL Server Management Studio Express to take a look at the utilization of the files. You can download the version for 2005, but I went ahead with 2008 R2 instead to stay closer to current. Installing the software was no problem, and then I just needed to connect to the database engine.

Once you open the management studio, there are 2 caveats to connecting to the internal database engine. First, the only connectivity option is named pipes, so the server name needs to be in the format \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query. Second, you need to be logged in as a server administrator and use Windows Authentication – you’ll get a logon failed error unless you launch the console with “Run as administrator”.
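
As an aside, the sqlcmd utility can hit the same named pipe if you’d rather skip the GUI – again from an elevated, administrator command prompt. A quick sanity check looks something like:

sqlcmd -S \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query -E -Q "SELECT name FROM sys.databases"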

Once I had access to the database engine, I selected the WSUS database and shrank the log and database files. I reclaimed 60% from the database and 97% from the log. Since autogrowth had, over time, created fragmentation and diminished performance, I then used SQL Server Configuration Manager to stop the Windows Internal Database in preparation for defragmenting the disk.
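
For reference, the command-line equivalent of that shrink is a couple of DBCC calls against SUSDB (the WSUS database). The logical file names below are the WSUS defaults and the target sizes (in MB) are just examples – verify your file names with sp_helpfile before running anything:

sqlcmd -S \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query -E -d SUSDB -Q "DBCC SHRINKFILE (SUSDB, 4000)"
sqlcmd -S \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query -E -d SUSDB -Q "DBCC SHRINKFILE (SUSDB_log, 256)"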

With SQL stopped, I also stopped the IIS web server and the Update Services service. With everything related to WSUS offline, I then defragged the disk with Defraggler.

Defragmentation took most of a day, after which I restarted the server. Upon restart I re-ran the wizard, selecting only “Unused updates and update revisions”. The process still took about 4 hours to run, but it did finish, and freed up about 70 GB of space.

I’ll be automating the cleanup wizard tasks in the future to avoid having this happen again…
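
(When I do, it will probably be nothing fancier than a weekly scheduled task wrapping a cleanup script – a sketch, where C:\Scripts\WsusCleanup.cmd is a placeholder for whatever ends up driving the cleanup:)

schtasks /create /tn "WSUS Cleanup" /tr C:\Scripts\WsusCleanup.cmd /sc weekly /d SUN /st 02:00 /ru SYSTEM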

New Toy – Kindle Fire HD 7″


I’m just pulling my head up from about 8 mind bending months, and have decided my mind is a little too bent.

In an effort to unbend at least a little, I went out and indulged myself a bit today, and purchased a Kindle Fire HD.

I’ve written about the original Fire, and the positives still hold true:

1) Cheap
2) Small and light
3) Built-in cloud storage

It still doesn’t come with Google Play, cellular connectivity, or a reasonable UI, but the following items are fixed:

1) OS is now a more recent version of Android – 4.0 Ice Cream Sandwich (Yes, I know – not Jelly Bean)
2) Bluetooth
3) Camera
4) 16 GB of storage (I still use Dropbox, so this was more of a nice-to-have than a need).

Now we have new positives! The screen compares favorably to the iPad retina display. I’ve been watching Dr. Who episodes on Amazon Prime all evening, and the device plays them as well as my MacBook Pro – and far better than my original Fire. The new physical design is thinner, lighter, easier to hold, and you no longer blind yourself with glare. Although some reviewers have said the apps are laggy, I have found it to be significantly snappier than my original Fire. (The 1.2 GHz dual core CPU and 1 GB of RAM are large increases over the version 1 model).

I still don’t like the Amazon UI overlay, but the good news is that it’s no longer so slow that it makes you cry. The bad news is that it’s still so ugly it makes you cry.

This is more of “hey, I got a new Kindle” than an actual review – but so far, I have to say I endorse it. As with the original Kindle Fire, the biggest benefit to the Fire HD is the Amazon ecosystem behind it – especially if you have Amazon Prime. It’s comfortable, it’s a value proposition, it looks and sounds great.

Is it an awesome computing device? Absolutely not. For that, and to stay in the price point, you could go to the Nexus 7, but then you have to trade away the Fire HD’s screen, speakers, and extra storage to get that UI and the full Android environment.

The Fire works for me. Other than media consumption, I use a tablet for e-mail, note taking, and Facebook; everything else I take to the MacBook Pro. The Fire does all of these things just fine…

Verizon misses customer service opportunity…


I recently accepted a position with a new employer, and with that position came a company-issued cell phone. I’ve been managing my own phone for a long time; at my last several jobs, I would simply expense the portion of my mobile bill that applied to my individual phone. Now, for the first time in a long while, it was “here’s your phone”, as opposed to “this is how you expense your phone”.

Since I am now the somewhat disgruntled owner of an iPhone 4, there seemed no need to keep paying close to $100 a month to also have my Droid 2, so I went to the Verizon Wireless store to cancel it.

I’d been a Verizon customer for several years, and had been using this phone for about 2 of those, so I expected no issue in having the phone shut off. I wasn’t closing the account, as my wife and son would continue to have their service through Verizon. I expected the entire process to be fast and painless. Surprise! It wasn’t.

First, I was told that I was under contract on my phone until early 2014. I was a bit surprised by that. The agent explained that when my son washed his (no features) phone, and my wife had it replaced with another (no features) phone, Verizon used my smartphone’s reduced-price upgrade/renewal instead of his. So I had paid $100 to get a standard phone, and had also extended my 2-year-old phone’s contract out an additional 2 years. The agent then informed me that this was not at all unusual – that it happens all the time. I asked, since he could see what had happened, whether it could be fixed; it wasn’t doing me any good to have my son’s new phone already eligible for an upgrade. I was told no, we would have had to catch it when it happened. We didn’t, so we’re locked in, and I have to pay a cancellation fee.

Since the agent was unwilling to do what I believed made sense, I asked him what he suggested. Was there any way I could avoid a cancellation fee? He said no. However, his suggestion was that if I wanted to move my existing number to a standard phone, there would be no charge, and the new cost would only be $9.99 per month. My daughter doesn’t have a phone, so I asked him to show me the cheapest standard phone they had. He did – it was $150. So, to cancel would be $155, and to move to a less expensive service would be $150, plus $9.99 per month for a 2 year minimum. That made the decision fairly easy – I spent the extra $5, and cancelled my service.

Yesterday Verizon customer service called to ask why I cancelled, how it went, and if they could do anything to bring me back. I told them I couldn’t think of anything. (They did try to upsell me on additional services though).

So, here are the fails:

1) Verizon made an account error (using the wrong phone’s upgrade eligibility). They were able to see that, and were unwilling or unable to fix it.

2) A follow-up call made for no other reason than to have made the call. The caller had no information as to why I cancelled or how the cancellation process had gone, but someone somewhere decided they should call all cancelling customers. That’s fine, but call with a suggestion before leading with “how can we bring you back”. I had explained earlier how to keep my business, and it had been turned down. To call later and ask the same question is more annoying than good customer service – but it lets someone place a mark on a checklist somewhere.

As usual, we’re giving lip service to good customer service, but not actually empowering employees to provide it.

There is a good article on Forbes relating to the same issue. And I can relate to the author’s pain in trying to cancel XM radio after I traded my car in for one without a satellite receiver.

One more example that leads me to conclude that the company that actually gets customer service right will have a huge advantage over their competition.

Take two tablets and call me when you’re ready for three…


We’ve been a one tablet household for nearly two years now. That tablet, my wife’s iPad, has been her primary computing device for nearly the entire time she’s owned it. The iPad pushed her laptop to her desk, and her desktop to the garage. For anything less than actual content creation such as largish documents or web and graphics work, she almost never goes to her Windows machine.

Since my computing needs tend to extend beyond content consumption, I have always carried my laptop with me. However, more and more moments seem to arise lately – watching TV, waiting for a child to finish an activity – where pulling out the laptop is just inconvenient enough that I choose not to do it. My Droid 2 (which I still love) filled that gap somewhat, but with its small screen and relatively short battery life, it just wasn’t a good substitute for a dedicated device that wouldn’t leave me without a phone when the battery died.

So, I started thinking about a tablet for myself.

The first question was iPad or Android tablet? I decided pretty quickly that I would go Android. Although my primary computing device is a MacBook Pro, and my wife loves her iPad, I couldn’t bring myself to go iPad. First, I consider the iPad far too expensive – especially for the occasional use I envisioned. I went expensive on my primary computing device, the MBP; that was enough. I don’t need (and honestly couldn’t afford) every device in my life to be at the high-end premium level. Second, I have some investment in Android apps. They didn’t cost much, but as I will continue to use an Android phone, I don’t want to have to buy every app I want once for Android and once for iOS.

Since I wanted Android – what tablet did I want? After evaluating several online, as well as using the local Best Buy as a showroom, I really liked the Galaxy Tab 8.9 (probably because the Tab 2 7.0 wasn’t on display yet). It was reasonably light, snappy, had a vivid screen, and overall seemed comfortable to use. The problem I had with it was the same as I had with the iPad – price.

OK, given that price was always going to be my sticking point, what was the cheapest tablet I could find? I really wanted something I wouldn’t cringe over when handing it to my 9-year-old daughter. By that criterion there was only one choice – the Kindle Fire. With refurbs from Amazon going for a little over $100, they were practically disposable. If I bought one and didn’t like it, I’d just return it without remorse.

Now, before everyone cringes at the thought of using the Fire as their primary tablet, here were my main pros and cons of the device:

Pros

1) PRICE
2) Did I mention price?
3) The 7″ display (10″, and even the 8.9″, was a little large for me)
4) Cloud storage via Amazon

Cons

1) OS – uses a customized (crippled) version of Gingerbread
2) No Google Market (Google Play), can only buy apps from Amazon Market
3) Only 8 GB storage
4) No bluetooth
5) No cellular connectivity
6) No camera

The pros don’t need a lot of discussion. I am notoriously cheap, so price is a huge factor. I liked the size, and would have ended up with a 7″ tablet if at all possible anyway. The cloud storage is nice, but honestly, in the week I’ve used the device I’ve stuck with Dropbox and never touched the Amazon cloud.

As for the cons:

Amazon’s OS. Hated it. It was EXTREMELY responsive, mind you – I just hated the UI. Easily fixed: Go Launcher will replace the default UI without even rooting the device, and it took under a minute to install. It doesn’t change the underlying OS, but it does change the interface to something almost exactly like the Android 2.3.4 interface on my Droid 2.

No Google Play was also a little annoying, but fairly easy to overcome. Putting Google Play apps on the Kindle Fire (sideloading) isn’t difficult at all. In the device menu of the Fire, turn on “Allow Installation of Applications from Unknown Sources”. Now all you have to do is get the .apk files to the Fire. You can do this via USB, but I find that cumbersome. My method is to use the ASTRO File Manager on my Droid 2 to make a copy of any app I want on the Fire. After backing up an app using ASTRO, the .apk file ends up in \backups\apps on the SD card. From there I move it to my Dropbox folder, and voila, the .apk is available to the Fire. Tap to install from Dropbox, and Angry Birds lives on the Fire without paying another $2.99.
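
If you already have the Android SDK handy, adb over USB is another route. The catch is that the Fire won’t show up until you add Amazon’s USB vendor ID (0x1949) to adb’s device list – roughly like this on a Mac or Linux box, with AngryBirds.apk standing in for whatever you’re installing:

echo 0x1949 >> ~/.android/adb_usb.ini    # teach adb about Amazon hardware
adb kill-server                          # restart adb so it rereads the file
adb devices                              # the Fire should now be listed
adb install AngryBirds.apk               # sideload the app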

Only 8 GB storage. Can’t do much about that with no media card reader, but between Dropbox and USB I can’t see this being an issue with a device that will primarily be used for e-mail and web browsing.

No bluetooth also doesn’t have a workaround, but I only use a bluetooth headset for phone calls, and haven’t felt a lack yet.

No cellular is actually a benefit for me. I don’t want to pay for another data plan, and if I did, it would be far more useful to enable my phone as a hotspot than to buy a data plan for each of my devices individually.

No camera is the only item I have actually felt the lack of so far.

All in all, I feel I’ve ended up with a decent Android tablet. Once I consider the price, it is a screaming awesome tablet. (I love how that works.) Amazon has just announced that the next version of the Fire should be out soon. Will I buy it again? Probably not. If I have to pay the $199 full price, then the Samsung Galaxy Tab 2 7.0 at $249 overcomes all the negatives with only a $50 differential, and the Fire can be passed on to my daughter, who already claims an ownership stake in it anyway…

Deploying Windows 7 With Stone Knives and Bearskins


So, one of the IT corporate objectives for 2012 was the deployment of Windows 7 to the userbase – in a virtual environment.

We couldn’t go virtual everywhere, of course. We have sales people and other traveling folks who use laptops. We also have developers and such who have a great many monitors and who need horsepower at the desktop. However, since 80% of our on-site staff are “call center” types, virtual would be a perfect fit.

As usual, plans changed at the last minute.

I started pricing out the hardware – Windows terminals (I was leaning towards HP) in the cube farm, and a Dell back end, because all of our other servers are Dell and I didn’t see the need to go overly crazy with my hardware spend. We don’t currently have a SAN, so storage would have been the biggest part of the hardware expense. I also planned to go XenDesktop because of the negative experiences we’d had with VMware View when I was with Wright Medical – particularly with local USB printers, of which we have a great many.

The cost was creeping higher – but nothing terribly unexpected. However, I do work for a company where we buy almost all of our technology equipment used or refurbished from either the Dell Outlet or from Dell Financial Services. Needless to say, we’re very price sensitive.

My manager found some Dell OptiPlex 790s available on the DFS site. These units had 8 GB of RAM, a Core i5 processor, and the ultra-small form factor, and were $640 with an additional 25% off coupon. At that price point, they were significantly cheaper than the virtual solution. With this new option, the Windows 7 migration changed from a migration to a virtual environment into a desktop replacement.

With my objective adjusted, I now needed to come up with a deployment plan. Our environment isn’t terribly large – under 100 workstations would be deployed. When I was with Wright Medical and Warehouse 86, I would multicast with Ghost. When I was with IT Workshop, we never had deployments large enough to need multicasting.

“Back in the day” at FiestaNet, I created an “ad hoc” imaging environment using DOS USB boot disks built from the universal TCP/IP network boot disk at netbootdisk.com and Ghost 7. I know how to drop updated DOS drivers into the boot disk, so unsupported network cards aren’t an issue. I would boot to the boot disk, map to a share on a Windows server, and pull the image across. This is a little more problematic with Windows 7 and Windows Server 2008 R2: first, you can’t map to a 2008 R2 network share from a DOS client without some security policy changes on the server, and second, you’ll need to do a quick repair on the Windows 7 client after the image comes across, because with such an old version of Ghost the partitions will be off by one and the machine won’t boot. Still, it works (I use it at home for builds and rebuilds), and I may document it one day, just because it’s funny that something I put together in 2001 still works – especially since Ghost 7 is in no way supported for Windows 7 deployments.

I knew I wasn’t going to get a commercial deployment tool approved for this project. I also (honestly) didn’t think I’d need one for a deployment this small. So, this would be done with free tools. Next, would I be moving images over the LAN, or performing the installs locally? Due to the limited space in the IT work area, I could only prep 4 workstations at a time, which meant no need for multicasting. Also, since I carry 10 USB sticks in my backpack ranging from 8 to 32 GB, I decided I would do everything from a stick rather than add the delay of installing across the LAN. So, no need for Windows Deployment Services (although I did think it would have been fun to try it).

So, basically I’m going to install Windows 100 times using images and installers created using the Windows AIK.

I downloaded the WAIK, and since my workstation is 64 bit, installed the 64 bit version from the DVD using the wAIKAMD64.msi installer. Next I created a bootable USB drive by using the following steps:

Create bootable USB

Click Start, point to All Programs, and then click Microsoft Windows AIK.
Right-click Deployment Tools Command Prompt, and then click Run as administrator.
Type copype.cmd amd64 C:\winpe_amd64 – press ENTER.
Type copy C:\winpe_amd64\winpe.wim C:\winpe_amd64\ISO\sources\boot.wim – press ENTER.
Type copy “C:\Program Files\Windows AIK\Tools\amd64\ImageX.exe” C:\winpe_amd64\ISO\ – press ENTER.
Type diskpart – press ENTER.
Type list disk – press ENTER.
Identify the USB stick (usually by size – in this case it was #2).
Type select disk 2 – press ENTER.
Type clean – press ENTER.
Type create partition primary – press ENTER.
Type select partition 1 – press ENTER.
Type format fs=fat32 quick – press ENTER.
Type active – press ENTER.
Type exit – press ENTER.

Next, I converted the file system from FAT32 to NTFS with the command convert H: /fs:ntfs (H: being my USB stick). I did this to support the WIM files I would create, which would be larger than the 4 GB file size limit of FAT32. I converted instead of formatting the sticks as NTFS in the first place because formatting as NTFS would cause the format to hang; converting after the fact always worked, so that is the process I followed.

Finally, I used the command:

xcopy /s C:\winpe_amd64\ISO\*.* H:\ (because my USB drive again was “H”)

Now I have a bootable USB stick with which I can copy images off of workstations for redeployment using ImageX.

Create Images

With the environment out of the way, I needed to create those images. I decided I needed three different images. One image included Office 2010, one image did not include Office, but did include Outlook, and one included neither Office nor Outlook, but did include OWAtray with the expectation that the user would use OWA. All images were fully patched and updated, and also included things like Java, Flash, Shockwave, PDF readers, antivirus, and Firefox.

Creating the image was always the same process. Install everything, patch everything, and then sysprep the system.

The sysprep process I used was as follows:

Click Start, type C:\Windows\System32\sysprep\sysprep.exe in the search box, and press Enter.

You then get the System Preparation Tool dialog box.

Sysprep needs to be performed twice, so be careful to perform the steps in the right order.

BEFORE THE FIRST SYSPREP, THE DEFAULT ADMINISTRATOR ACCOUNT NEEDS TO BE ENABLED AND IT STILL NEEDS TO BE NAMED ADMINISTRATOR. Audit mode logs in as administrator, and if it cannot, then the result is a system that cannot be logged into.

In the System Cleanup Action list, select Enter System Audit Mode.

In the Shutdown Options list, select Reboot.

Click OK to restart the computer in Audit mode.

After the restart, Windows 7 automatically logs in as Administrator – if it cannot, then you can go no further.

This session is used to delete any and all accounts and profiles that were needed to install software.

Once that is complete, run sysprep again, and this time perform the following:

Open Sysprep.

In the System Cleanup Action list, select Enter System Out-of-Box Experience (OOBE).

Select the Generalize check box.

In the Shutdown Options list, select Shutdown.

Click OK

This is now an image that can be captured and redeployed.

Capture Image

This is an easy part – boot to the created USB stick, and use it to capture the image locally to the stick. The PC needs to be booted to the USB stick by either changing the USB boot order in the BIOS, or using the one time boot selector (usually F12).

Once the PC has booted to the memory stick, use ImageX to capture the image.

In my case, the command I used was as follows:

F:\imagex /compress fast /check /flags “Professional” /capture D: F:\install.wim “Windows 7 Professional” “Windows 7 Professional Custom”

Where “F:” was the memory stick (confirmed using “dir f:”) and “D:” was the partition with the Windows installation (confirmed using “dir d:”).

ImageX is the command-line tool in Windows 7 that you can use to create and manage Windows image (.wim) files. /compress specifies the compression type: maximum, fast, or none. /check verifies the integrity of the .wim file. /flags is required if you are going to deploy the .wim file with Windows Setup (I did); otherwise you do not need to specify it. /capture performs the actual collection of the image: D: is the partition to capture, F:\install.wim is where to save the .wim file and what to name it (hopefully you’re using at least a 16 GB USB stick in this case), “Windows 7 Professional” is the image name, and “Windows 7 Professional Custom” is the image description.

In my case, it took about 20 minutes to capture the image.

Create Deployment Media (using bootable USB)

Follow the same diskpart steps as above to create a bootable USB stick – everything except copying the WinPE files. (I did this 4 times.)

In an elevated command prompt:

Type diskpart – press ENTER.
Type list disk – press ENTER.
Identify the USB stick (usually by size – in this case it was #2).
Type select disk 2 – press ENTER.
Type clean – press ENTER.
Type create partition primary – press ENTER.
Type select partition 1 – press ENTER.
Type format fs=fat32 quick – press ENTER.
Type active – press ENTER.
Type exit – press ENTER.
Convert the file system to NTFS.

Now insert your Windows 7 Volume Licensing disk into your optical drive. (Or mount the .ISO, or whatever method you choose to get to the install files).

In the elevated command prompt window, type xcopy /s D:\*.* H:\*.*, where D is the drive letter of the Windows 7 Volume Licensing media (optical drive) and H is the drive letter of the USB stick you just formatted and made bootable.

In the elevated command prompt window, type xcopy /r J:\install.wim H:\sources\install.wim, where H is the drive letter of the USB stick you created in the previous step and J is the original USB stick with ImageX. (Or you could have previously copied that install.wim file to another location). If prompted, type Y to confirm that you want to overwrite the file.

Eject the USB stick containing your new install files, and you are ready to deploy.

Deploy Image (using bootable USB Deployment Media)

Boot the PC to the deployment USB stick.

Follow the prompts to install Windows 7.

That’s really all there is, so here are the caveats:

We have Key Management Service servers for our Windows 7 keys, so the workstations self-activate (no need to enter a license key).
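
(KMS clients retry activation every couple of hours on their own; if a machine is being stubborn, you can nudge it from an elevated prompt:)

cscript slmgr.vbs /ato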

I didn’t use an unattend.xml file to apply settings instead of entering them at setup. First, this wasn’t a large deployment, and I could only do 4 at a time – the extra few mouse clicks didn’t slow me down, since I was always waiting on the next computer. Second, as our naming convention uses the service tag as the computer name, I had to type that in on every computer anyway. Joining the domain was no additional trouble, and everything else we customize is applied through Group Policy.
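
(Handy trick for that step: on a Dell, the service tag is exposed as the BIOS serial number, so you can pull it from the machine itself instead of reading the sticker:)

wmic bios get serialnumber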

I didn’t use CopyProfile, either. Our environment is very plain vanilla, and even using Windows Easy Transfer to move profiles, the other person doing this with me was able to put new machines on desks as quickly as I was creating them.

The whole process essentially took 2 weeks from when we got the hardware until all the hardware was in use. Not bad, especially since this wasn’t the only thing we were working on…

