Archive Page 2

Deploying Windows 7 With Stone Knives and Bearskins

So, one of the IT corporate objectives for 2012 was the deployment of Windows 7 to the userbase – in a virtual environment.

We couldn’t go virtual everywhere, of course. We have sales people and other traveling folks who use laptops. We also have developers and such who use a great many monitors, and who need horsepower at the desktop. However, since 80% of our on-site staff are “call center” types, virtual would be a perfect fit.

As usual, plans changed at the last minute.

I started pricing out the hardware: Windows terminals (I was leaning towards HP) in the cube farm, and a Dell back end, because all of our other servers are Dell and I didn’t see the need to go overly crazy with my hardware spend. We don’t currently have a SAN, so storage would have been the biggest part of the hardware expense. I also planned to go XenDesktop because of the negative experiences we had had with VMware View when I was with Wright Medical, particularly with local USB printers, which we have a great many of.

The cost was creeping higher – but nothing terribly unexpected. However, I do work for a company where we buy almost all of our technology equipment used or refurbished from either the Dell Outlet or from Dell Financial Services. Needless to say, we’re very price sensitive.

My manager found some Dell OptiPlex 790s available on the DFS site. These units had 8 GB of RAM, a Core i5 processor, and an ultra-small form factor. They were $640, with an additional 25% off coupon. At that price point, they were significantly cheaper than the virtual solution. With this new option, the Windows 7 migration was changed to a desktop replacement rather than a migration to a virtual environment.

With my objective adjusted, I now needed to come up with a deployment plan. Our environment isn’t terribly large – under 100 workstations would be deployed. When I was with Wright Medical and Warehouse 86, I would multicast with Ghost. When I was with IT Workshop, we never had deployments large enough to need multicasting.

“Back in the day” at FiestaNet, I created an “ad hoc” imaging environment using DOS USB boot disks built from the universal TCP/IP network boot disk and Ghost 7. I know how to drop updated DOS drivers into the boot disk, so unsupported network cards aren’t an issue. I would boot to the boot disk, map to a share on a Windows server, and pull the image across. This is a little more problematic with Windows 7 and Windows 2008 R2 servers. First, you can’t map to a 2008 R2 network share from a DOS client without some security policy changes on the 2008 server. Second, you’ll need to do a quick repair on the Windows 7 client after the image comes across; with such an old version of Ghost the partitions will be off by one, and the system won’t boot otherwise. Still, it works (I use it at home for builds and rebuilds), and I may document it one day just because it is funny that something I put together in 2001 still works, especially since Ghost 7 is in no way supported for Windows 7 deployments.

I knew I wasn’t going to get a commercial deployment tool approved for this project. I also (honestly) didn’t think I’d need one for a deployment this small. So, this would be done with free tools. Next, would I be moving images over the LAN, or performing the installs locally? Due to the limited space in the IT work area, I could only prep 4 workstations at a time, which meant no need for multicasting. Also, since I carry 10 USB sticks in my backpack ranging in size from 8 to 32 GB, I decided that I would just do everything from a stick rather than add the delay of performing the install across the LAN. So, no need for Windows Deployment Services (although I did think it would have been fun to try it).

So, basically I’m going to install Windows 100 times using images and installers created using the Windows AIK.

I downloaded the WAIK and, since my workstation is 64-bit, installed the 64-bit version from the DVD using the wAIKAMD64.msi installer. Next, I created a bootable USB drive using the following steps:

Create bootable USB

Click Start, point to All Programs, and then click Microsoft Windows AIK.
Right-click Deployment Tools Command Prompt, and then click Run as administrator.
Type copype.cmd amd64 C:\winpe_amd64 – press ENTER.
Type copy C:\winpe_amd64\winpe.wim C:\winpe_amd64\ISO\sources\boot.wim – press ENTER.
Type copy “C:\Program Files\Windows AIK\Tools\amd64\ImageX.exe” C:\winpe_amd64\ISO\ – press ENTER.
Type diskpart – press ENTER.
Type list disk – press ENTER.
Identify the USB stick (usually by size – in this case it was #2).
Type select disk 2 – press ENTER.
Type clean – press ENTER.
Type create partition primary – press ENTER.
Type select partition 1 – press ENTER.
Type format fs=fat32 quick – press ENTER.
Type active – press ENTER.
Type exit – press ENTER.

Next, I converted the file system from FAT32 to NTFS with the command convert H: /fs:ntfs. I did this to support the WIM files I would create, which would be larger than the 4 GB file size limit of FAT32. I converted instead of formatting the sticks as NTFS in the first place because formatting as NTFS would cause the format to hang. Converting the file system after the fact always worked, so that is the process I followed.
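The 4 GB figure is worth spelling out: FAT32 caps individual files at 2**32 - 1 bytes. A quick sketch of the check:

```python
# FAT32 cannot hold a file of 4 GiB or larger (the limit is 2**32 - 1
# bytes), which is why the stick gets converted to NTFS before any large
# install.wim is copied onto it.
FAT32_MAX_FILE = 2**32 - 1  # 4,294,967,295 bytes

def needs_ntfs(file_size_bytes: int) -> bool:
    """True if a file is too large to store on a FAT32 volume."""
    return file_size_bytes > FAT32_MAX_FILE

print(needs_ntfs(6 * 1024**3))  # a 6 GiB image: True, must be NTFS
print(needs_ntfs(3 * 1024**3))  # a 3 GiB image: False, FAT32 is fine
```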

Finally, I used the command:

xcopy /s C:\winpe_amd64\iso\*.* H:\ (because my USB drive again was “H”)

Now I have a bootable USB stick with which I can copy images off of workstations for redeployment using ImageX.

Create Images

With the environment out of the way, I needed to create those images. I decided I needed three different images. One image included Office 2010, one image did not include Office, but did include Outlook, and one included neither Office nor Outlook, but did include OWAtray with the expectation that the user would use OWA. All images were fully patched and updated, and also included things like Java, Flash, Shockwave, PDF readers, antivirus, and Firefox.

Creating the image was always the same process. Install everything, patch everything, and then sysprep the system.

The sysprep process I used was as follows:

Click Start, type C:\Windows\System32\sysprep\sysprep.exe in the search box, and press ENTER.

You then get this:

System Preparation Tool dialog box

Sysprep needs to be performed twice, so be careful to perform the steps in the right order.

BEFORE THE FIRST SYSPREP, THE DEFAULT ADMINISTRATOR ACCOUNT NEEDS TO BE ENABLED AND IT STILL NEEDS TO BE NAMED ADMINISTRATOR. Audit mode logs in as administrator, and if it cannot, then the result is a system that cannot be logged into.

In the System Cleanup Action list, select Enter System Audit Mode.

In the Shutdown Options list, select Reboot.

Click OK to restart the computer in Audit mode.

After the restart, Windows 7 automatically logs in as Administrator – if it cannot, then you can go no further.

This session is used to delete any and all accounts and profiles that were needed to install software.

Once that is complete, run sysprep again, and this time perform the following:

Open Sysprep.

In the System Cleanup Action list, select Enter System Out-of-Box Experience (OOBE).

Select the Generalize check box.

In the Shutdown Options list, select Shutdown.

Click OK

This is now an image that can be captured and redeployed.

Capture Image

This is an easy part – boot to the created USB stick, and use it to capture the image locally to the stick. The PC needs to be booted to the USB stick by either changing the USB boot order in the BIOS, or using the one time boot selector (usually F12).

Once the PC has booted to the memory stick, use ImageX to capture the image.

In my case, the command I used was as follows:

F:\imagex /compress fast /check /flags “Professional” /capture D: F:\install.wim “Windows 7 Professional” “Windows 7 Professional Custom”

Where “F:” was the memory stick (confirmed using “dir f:”) and “D:” was the partition with the Windows installation (confirmed using “dir d:”).

ImageX is the command-line tool in Windows 7 that you can use to create and manage Windows image (.wim) files. /compress specifies the compression type: maximum, fast, or none. /check verifies the integrity of the .wim file. /flags is required if you are going to deploy the .wim file with Windows Setup (I did); otherwise you do not need to specify it. /capture performs the actual collection of the image: D: is the partition to capture, F:\install.wim is where to save the .wim file and what to name it (hopefully you’re using at least a 16 GB USB stick in this case), “Windows 7 Professional” is the name of the new image, and “Windows 7 Professional Custom” is the description.
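Since I was repeating the same capture for each of the three images, the command line lends itself to templating. A hypothetical helper sketch; the drive letters and image names are just the values from my capture above:

```python
def imagex_capture_cmd(source_drive: str, wim_path: str,
                       name: str, description: str,
                       flags: str = "Professional") -> str:
    """Build an ImageX capture command line like the one above.

    /compress fast - compression type (maximum, fast, or none)
    /check         - verify the integrity of the .wim file
    /flags         - required when deploying the .wim via Windows Setup
    /capture       - capture source_drive into wim_path
    """
    return (f'imagex /compress fast /check /flags "{flags}" '
            f'/capture {source_drive} {wim_path} "{name}" "{description}"')

print(imagex_capture_cmd("D:", "F:\\install.wim",
                         "Windows 7 Professional",
                         "Windows 7 Professional Custom"))
```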

In my case, it took about 20 minutes to capture the image.

Create Deployment Media (using bootable USB)

Follow the same diskpart steps as above to create a bootable USB stick. (I did this 4 times.)

In an elevated command prompt:

Type diskpart – press ENTER.
Type list disk – press ENTER.
Identify the USB stick (usually by size – in this case it was #2).
Type select disk 2 – press ENTER.
Type clean – press ENTER.
Type create partition primary – press ENTER.
Type select partition 1 – press ENTER.
Type format fs=fat32 quick – press ENTER.
Type active – press ENTER.
Type exit – press ENTER.
Convert the file system to NTFS.

Now insert your Windows 7 Volume Licensing disk into your optical drive. (Or mount the .ISO, or whatever method you choose to get to the install files).

In the elevated command prompt window, type xcopy /s D:\*.* H:\*.*, where D is the drive letter of the Windows 7 Volume Licensing media (optical drive) and H is the drive letter of the USB stick you just formatted and made bootable.

In the elevated command prompt window, type xcopy /r J:\install.wim H:\sources\install.wim, where H is the drive letter of the USB stick you created in the previous step and J is the original USB stick with ImageX. (Or you could have previously copied that install.wim file to another location). If prompted, type Y to confirm that you want to overwrite the file.

Eject the USB stick containing your new install files, and you are ready to deploy.

Deploy Image (using bootable USB Deployment Media)

Boot the PC to the deployment USB stick.

Follow the prompts to install Windows 7.

That’s really all there is, so here are the caveats:

We have Key Management Service servers for our Windows 7 keys, so the workstations will self-activate (no need to enter the license key).

I didn’t use an unattend.xml file to apply settings; I entered them at setup instead. First, this wasn’t a large deployment, and I could only do 4 at a time. The extra few mouse clicks didn’t slow me down, since I was always waiting on the next computer. Second, as our naming convention is to use the service tag as the computer name, I had to type that in on every computer anyway. Joining the domain was no additional trouble, and everything else we customize we apply through Group Policy.

We also didn’t use CopyProfile. Our environment is very plain vanilla, and even using Windows Easy Transfer to move the profiles, the other person doing this with me was able to put new machines on desks as quickly as I was creating them.

The whole process essentially took 2 weeks from when we got the hardware until all the hardware was in use. Not bad, especially since this wasn’t the only thing we were working on…

Here User, User, User…

When, like today, the 5:00 AM wakeup call comes in that someone cannot get to the Internet, it is always nice to have a little information – like what computer the user is on, where can’t they go, etc.

So, of course this morning’s call contained none of that. Just so-and-so can’t get to the Internet, please fix. Click.

Not having the computername, I had to go through the chore of finding the computer based on the username logged into it. Fortunately, there are a ton of ways to do so:

1) Back in the days of WINS, I could have used the winscl.exe command. (Sorry, we don’t have any WINS servers now.) Not really a choice in this example.

2) I could set a Domain Policy to audit account logon events, and then look at the logs on all my domain controllers. It works, but unless I have a tool to consolidate my logs, (I don’t), it can be time consuming to find the domain controller that authenticated the user, and the workstation that sent the logon request.

3) PSLoggedOn from Sysinternals (Microsoft) is a great little tool, but since it won’t scan every machine in my network in one pass, it isn’t perfect. If the machine I need is in the first several dozen, great! If not, I’m out of luck.

4) NBTscan is a great tool for this kind of thing, and I have used it often.

It gives you great output like:

Doing NBT name scan for addresses from

IP address      NetBIOS Name    Server      User    MAC address
---------------------------------------------------------------
                WKS-01          <server>    Bob     12-34-ba-c0-52-32
                WKS-02          <server>    Sam     00-0f-1f-b3-b5-89


When I have had occasion to use it, it has never let me down.

5) Spiceworks is also a nice system, not specifically for this, but if you are sweeping your network with it, the inventory function will tell you the last logged on user of a given workstation.

6) In today’s case I used User Locator. This tool is one of my favorites. Not only does it return a list of computer(s) that the user is logged onto, but it can bind tools to the remote computer for one-click management of that computer. You can download it for free at

Anyway, there are many ways to find out what computer a user is logged into. These were just the choices I ran down on the way to my selection this morning. You may have free or pay tools you prefer, but this is just another example of how many ways there are to do the same thing in IT. (The best way, of course, is to get the user to tell us what workstation they’re on in the first place…)
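As an aside, output in the NBTscan style shown above is easy to search programmatically. A small sketch; the column layout is an assumption based on the sample output:

```python
# Sketch: map a username back to a workstation, given NBTscan-style output
# lines of the form:  <NetBIOS name> <server flag> <user> <MAC address>
# The column layout is an assumption based on the sample output above.

def find_user_workstation(scan_lines, username):
    """Return the NetBIOS names of machines where `username` is logged on."""
    hits = []
    for line in scan_lines:
        fields = line.split()
        if len(fields) >= 4 and fields[2].lower() == username.lower():
            hits.append(fields[0])
    return hits

sample = [
    "WKS-01 <server> Bob 12-34-ba-c0-52-32",
    "WKS-02 <server> Sam 00-0f-1f-b3-b5-89",
]
print(find_user_workstation(sample, "bob"))  # ['WKS-01']
```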

Synchronizing IIS on Windows 2008 R2 to Apache on OSX using Dropbox

I’ve hosted websites in my home for a long time. I started back in the mid-90s when I was a partner at FiestaNet (now ViaWest). First I was on dialup with a static IP, then ISDN, and finally DSL. The nice thing was that up until 2001, although I had to pay for the connectivity, my bandwidth was free.

Since FiestaNet was an all-Microsoft shop, I started with IIS 1.0 on Windows NT 3.51. Basically, it could host static pages. Since no one ever went to my site, it didn’t matter all that much.

Over time I moved through every version of IIS. 2.0 came with NT 4.0. Then the upgrade to IIS 3.0 followed, finally replaced by 4.0 in the NT Option Pack. Windows 2000 brought us IIS 5, 2003 IIS 6, 2008 IIS 7, and finally Windows Server 2008 R2 brought IIS 7.5. I put my web server first behind Proxy Server 2, then ISA 2000, 2004, and 2006. The latest version, Threat Management Gateway 2010, protects my home network even now.

This kept up through many employers, several homes, and a few cities. I always upgraded a little ahead of whatever company I happened to be working for at the time. That way I could have some hands-on experience before using new versions in production at the workplace. It was a great way to have a lab that I couldn’t neglect or let fall out of “fully functional”.

This finally came to an end in late Summer of 2010. We were selling our home in Germantown, Tennessee, and did not yet have a new home lined up in Phoenix. My wife and children came out ahead of me to a rental home, and I followed once all the various tasks involved with selling the house were complete. Since my servers had no connectivity (or even a home) for nearly 3 months, it was time to move my sites to a hosting provider – in this case GoDaddy.

My sites have been with GoDaddy for about 17 months now, while first we bought a home, remodeled it, moved in, and eventually got around to such tasks as setting up the server room. Now my sites could have a place to live again, but I have not yet purchased a static IP, so for now the sites are still at GoDaddy. I’m torn between having to maintain uptime at home (since CenturyLink’s connection seems fairly unreliable) and the fact that AreMySitesUp reports downtime from GoDaddy at least once a day (as I’m sure they are fairly oversubscribed in the servers that provide the dirt cheap hosting package I purchased).

Since I don’t have a static IP, I can’t easily remote into home to get any files when I need them. (Yes, I could use DynDNS or something like it, but I hate spending money). That also means I can’t use my home web server for development if I am not actually at home.

If only there were a way to work on my sites at home in a dev environment and still have them available to me to play with when I am not at home. Oh wait, there is. Dropbox. (Please sign up and get me more free storage!)

So, my sites are on my IIS web server at home, with the WEBDEV folder shared into my Dropbox account. That means those files are also available on my MacBook Pro all the time, with real time updates no matter which device I make a modification on. The sites are hosted on IIS to internal users at home. Now I just need to be able to serve them locally from the MacBook when I’m not at home. Sounds easy, right?

OK, let’s use my site as the example. When I am at home and I browse to the site’s public name, I get the public site at GoDaddy. If I go to its internal name, I get my internal IIS server. That works for any device on my LAN. If I leave my LAN and try the internal name, I get a “server not found” error and I am sad. That is because the zone lives on my home DNS server, but nowhere else.

First, I need to make the site’s name resolve from my MacBook Pro, even when I am not on my LAN. That’s pretty easy: I just need an entry in my HOSTS file pointing the name at the MacBook itself. Since my virtual machines all use bridged connections, when I am at home I can use Windows 7 to see the site on IIS, and OS X to see it on the local instance of Apache on the MBP.

I’m running OS X 10.6.8 Snow Leopard, so I have a couple of options.

I can open a terminal window, and type

sudo nano /private/etc/hosts

enter my root password

and I get the hosts file, which I can edit.

If you prefer to use the finder, use the go to folder function, go to /private/etc and open HOSTS. I recommend TextWrangler as the editor if you’re going to go that route. I’m a big fan of Notepad++ in Windows, and TextWrangler is the closest thing to an OS X equivalent.

In this case I add

and save the file.

Now to actually set up the Apache webserver on the MBP.

Apache isn’t enabled by default, but it is very easy to enable. Either go to System Preferences::Sharing and check “Web Sharing”, or open a Terminal window and enter

sudo apachectl start

If you go through the Terminal, you will again be asked to enter the root password. At this point, if you open

http://localhost

in a browser, you should see the text “It works!”.

The directory you are seeing when browsing http://localhost is:


In my case my user files are available at http://localhost/~josephking/ and that directory is located at:


which is fine, but not where my web site files are.

All of my website files reside at paths like:


Where WEBDEV is a directory synchronized from my Windows server.

Since I don’t want to move WEBDEV to be under /Sites/ and I don’t want to change the default pathing in Apache, I create a symbolic link in Sites to WEBDEV by opening a terminal and typing:

ln -s /Users/josephking/Dropbox/WEBDEV /Users/josephking/Sites

which creates the link, and if I go to the Sites directory in the Finder, I see the link to WEBDEV.
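The same link can be created programmatically if you prefer. A sketch of what that ln -s does, using throwaway temp directories instead of my real paths:

```python
import os
import tempfile

# Sketch of what that ln -s does, using throwaway temp directories in place
# of my real Dropbox/WEBDEV and Sites paths.

def make_sites_link(webdev_dir: str, sites_dir: str) -> str:
    """Create sites_dir/WEBDEV -> webdev_dir, like `ln -s`, and return it."""
    link = os.path.join(sites_dir, os.path.basename(webdev_dir))
    os.symlink(webdev_dir, link)
    return link

root = tempfile.mkdtemp()
webdev = os.path.join(root, "WEBDEV")
sites = os.path.join(root, "Sites")
os.makedirs(webdev)
os.makedirs(sites)
link = make_sites_link(webdev, sites)
print(os.path.islink(link))  # True
```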

If I go to http://localhost/~josephking/WEBDEV/ however, I get the following:


You don’t have permission to access /~josephking/WEBDEV/ on this server.

That is because Apache needs access not only to WEBDEV, but to the entire chain of directories from the root all the way down to WEBDEV, and also because Apache needs to be configured to follow symbolic links.

First, permissions:

I need to enter the following in a terminal window:

chmod 755 /Users/
chmod 755 /Users/josephking/
chmod 755 /Users/josephking/Dropbox/
chmod 755 /Users/josephking/Dropbox/WEBDEV/

which then lets Apache actually get to where I want it to go.
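The list of directories that need the execute bit is just every ancestor of WEBDEV. A small sketch that enumerates that chain, using my paths from above:

```python
import os.path

# Apache runs as an unprivileged user, so it needs execute (search)
# permission on every directory from the root down to WEBDEV, not just on
# WEBDEV itself. This sketch lists that chain so each can be chmod'd 755.

def permission_chain(target: str) -> list:
    """Return every directory from the top level down to `target`."""
    chain = []
    path = target.rstrip("/")
    while path and path != "/":
        chain.append(path)
        path = os.path.dirname(path)
    return list(reversed(chain))

for d in permission_chain("/Users/josephking/Dropbox/WEBDEV"):
    print(f"chmod 755 {d}")
```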

Next, to allow symbolic links:

I edit my username config file for Apache at:


by making it look like the following:

<Directory "/Users/josephking/Sites/">
Options +Indexes +MultiViews +Includes +FollowSymLinks
AllowOverride None
Order allow,deny
Allow from all
</Directory>
<Directory "/Users/josephking/Dropbox/WEBDEV/">
Options +Indexes +MultiViews +Includes +FollowSymLinks
AllowOverride None
Order allow,deny
Allow from all
</Directory>

I then save the file and restart apache by opening a Terminal window and typing:

sudo apachectl restart

and now I can see all of my site documents listed at


How 1995 of us! Look at that URL!

Almost there, just a few changes.

First, I want to go to that directory using

I go into httpd.conf at /private/etc/apache2/ and remove the # in front of:

Include /private/etc/apache2/extra/httpd-vhosts.conf

I then edit httpd-vhosts.conf at /private/etc/apache2/extra/ by adding:

<VirtualHost *:80>
 DocumentRoot "/Users/josephking/Dropbox/WEBDEV/"
# ErrorLog "/private/var/log/apache2/"
# CustomLog "/private/var/log/apache2/" common
</VirtualHost>

I actually added a lot of these, one for each site I have set up in the HOSTS file, but you get the idea.

Last, I go back to httpd.conf for a couple quick tweaks:

modify DirectoryIndex

<IfModule dir_module>
 DirectoryIndex index.html index.htm index.shtml index.php default.html default.htm default.shtml default.php
</IfModule>

to list all my default documents

Remove the “#” in front of

LoadModule php5_module libexec/apache2/


LoadModule fastcgi_module libexec/apache2/

to enable PHP and CGI.

Restart Apache again, and voila!

I can get to my site from my MacBook Pro on the local Apache server. If I had other machines that needed to see it, I could edit their hosts files or use DNS to point them to my laptop as well.

Easy, right? Now everyone will want to sync IIS and Apache…

Back to technology for a moment…

As we start 2012, I find it a good time to reflect; in this case, on some of the technologies in my life. Some are little things, like introducing one of my colleagues to the concept of multitouch just last Friday. Some are larger (for me), like ending the year without a PC in my home dedicated to my use for the first time since 1994. Plus, it’s just nice to think about tech for a while as opposed to some of the other things we have going on.

I have a very technology-centric life and lifestyle. My family are all very plugged in. Many of my friends are either technology workers or enthusiasts. Despite that, several of the technologies we’re using at home have changed this year.

Thing number one – as a family we’ve gone mobile more than I expected. At the start of the year, we had 10 computers in the home (not counting virtual machines). Now we’re down to 6. VMs and mobile devices have filled the gap.

Number two – Apple is more insidious than I suspected. My wife the PC lover now rarely uses her laptop in favor of her iPad.

Number three – although the total number of devices on our network continues to grow, we’re no longer adding computers, but instead new consumer electronic devices that are network enabled.

I was really surprised that our computer count went down. We’ve had 10 computers in one form or another for a long time. While our count of physical servers remained steady at two, each of the family members has gone from two computers to one. My wife went first, giving up her workstation in favor of her laptop. This was not much of a stretch, as her PC had received little use since we moved from Tennessee. It was seldom used in the house she and the kids rented in August of 2010, and we didn’t even set it up in the new house we moved into in June. It now lives in the garage, eventually to be stripped for parts. Both of our kids had a Mac mini and a PC laptop. The Macs came because I got tired of rebuilding their PCs when they were too young not to click on everything the Internet sent their way. The laptops came later, when the kids became enamored with games and programs that weren’t available for the Macs. The Macs saw less and less use, and eventually my wife (who was never a Mac fan) was happy to put the minis up on Craigslist, leaving the children with one computer apiece.

Finally, I lost my workstation when the power supply failed. Since I didn’t want to invest in a new power supply, and also replace the computer’s case and video card fans, which were loud and begging for replacement, I pulled the hard drive and connected it to my MacBook with a USB dock.

With all the VMs on my MacBook, the only regrets so far have been the monumental task of moving my iTunes library, and the fact that I can no longer leave apps running when I’m not home. I take my MacBook everywhere, which means, for example, that the family can’t access my iTunes library when I am away.

My son doesn’t seem to miss his Mac mini because (surprise) he now has an iPod touch. He now only goes to his laptop for homework and for a couple of games he plays. Much of his web surfing, game playing, and email happens on the touch. Sometimes we have to work to get his attention, but he also spends more time out with the family, and less time in his room (other than to play with his Legos).

My daughter uses her laptop less as well. Not because she has moved to a mobile device, but because she doesn’t need it as much. Her use of her laptop is mostly videos – DVDs, Netflix, Hulu, and YouTube. She can do all of these things from either of our blu-ray players, and get a bigger screen to boot. Like my son she still uses the laptop for games, but she doesn’t find it convenient to lug the laptop around just to watch iCarly clips.

My wife was the biggest change. Windows fan and Mac hater, she gave up her PC and now almost never uses her Dell laptop. Instead, at home, at the ice rink, and at the karate studio, her iPad is her constant companion. When she is otherwise out and about, she uses her iPhone.

All of this may sound very much like I have jumped on the “The PC is Dead” bandwagon. I admit, there was a time when I was always about the bigger and better PC. I used my workstation for entertainment, for work, to consume media, and to keep in touch with others. My desk was usually my first and last stop whenever I wanted to perform any of those tasks. While my MacBook Pro has replaced my PC, I still use it as my go-to device for almost all of my technology needs. This makes me more of an outlier these days.

Despite this, I don’t believe that we are moving to an explicitly “mobile computing” environment. I think instead that we are moving to a “convenience computing” model. Yes, tablets, smart phones, and netbooks have made mobile computing the new buzzword. The addition of the cloud to the mix only accelerates the speed at which people are willing to cast their traditional computers aside in favor of portable devices. However, I think what people really desire is the ability to access their data of choice no matter what device may be handy. Pandora and Netflix built into televisions, web surfing from gaming consoles, and e-mail from Kindles are all examples of moving not necessarily towards mobility, but towards convenience.

Just because I can post a tweet from a PSP doesn’t mean that is the optimal use for that device – it just means that’s what was in my hand when the mood struck me to do so. In a society of convenience, we no longer want the best tool for the job, we want the tool in hand to do whatever we want at that moment.

And we only want to carry one tool at a time…

Ranting is a great way to release frustration…

My daughter Alex was released from the hospital late Christmas afternoon, so she’s been home a little more than a day. She is adjusting to her new reality, as are we all. Other than the trauma of every injection, I think she is taking it better than my wife and I are. Of course, if the last week had been handled better, although the transition would still have been difficult, perhaps we might have had things slightly easier. I’m not ready to rant about the diabetes itself, so instead I’ll rant about her hospital stay.

Of course the hospital stay was rough, but so many things went wrong that I want to get at least some of them out of my system.

First, the ER was very hard on all of us. Alex was diagnosed by her primary care physician, so instead of going back to school Alex found herself in the ER. She was dehydrated, so it took a few tries (over nearly an hour) to get an IV into her. The child life assistant tried very hard to distract her, but overall it was pretty awful. Each new person would come in saying they were the final word in IVs and would get a line in, try for a bit, and end up calling someone else up the food chain to have them give it a try. Alex lost faith in new faces quickly.

By early afternoon we were moved up to the ICU. That also could have been better. She needed multiple IVs, so again we had multiple people try to get one in (although it took fewer people and fewer tries than the ER). They eventually had to sedate her to get the second line going.

A few things bothered me about the IVs. She had one in her right hand where she was being medicated (saline, potassium, glucose, insulin, and others – at one point she had 5 separate bags on the line). She had another in her right arm at about the elbow, which they were using for blood draws, although eventually they moved the line there when they stopped drawing blood so often.

What bothered me first was that once they added glucose to the mix, they couldn’t understand why her blood sugar shot up: she had dropped to the high 200s, then climbed past the high 300s she was admitted with, to beyond the 550 the blood sugar meter was able to read. The ER nurse and the endocrinologist who stopped by were both stumped. They tried 2 different meters, and eventually sent her blood out to the lab. I had to point out to them that they were drawing blood from the same arm where they were introducing glucose. Oh. They started using finger pokes on the other arm, and got a reading in the 150s. This really made Alex mad, because she had allowed them to put in another IV specifically for blood draws (she was so afraid of the finger pokes, and she needed 4 per hour). So now she had a second IV, and still got the finger pokes. Another loss of credibility in her mind. (And in mine too, because again, I’m not the medical professional, but the issue seemed pretty obvious.)

The second IV issue I only heard about, but never saw. After about a day, once Alex was stabilized, the bags were moved to the arm IV, and when they needed blood, they would disconnect, flush, wait a bit, and draw blood from the IV. All fine and dandy, except one of the nurses would take the blood she needed and push what was left of the draw back into her arm. Back into her arm? What? Has anyone ever heard of a blood clot?

The last IV issue was that even though she eventually came off the drip, they didn’t remove her IV until discharge – that’s 6 days. They didn’t even flush it the last two. No one wanted to deal with the terrified screaming child until it fell to the nurse who was there for our discharge (who fortunately was fantastic).

Finally we moved to the diabetes ward, and although things were not as traumatic, we still had issues. First, everyone thought someone else was taking care of things, so everything got missed. In the ICU we were told that we would meet with child life and a psychologist to have the discussion about what diabetes was, and how she would need shots every day (see previous post about needle phobia). That never happened. She found out because the diabetes nurses discussed it in front of her. She cried for a long time. In fact, we almost never saw the child life team again, as they were all working on the Christmas toy drive. I realize that’s a big thing, but should it come before patient care?

Since things were not getting better, we asked for a psych consult. We met with a psychiatrist, who after a 90 second evaluation recommended Prozac. For a 9 year old? Really? His reply was about how useful it was, so my rebuttal was to ask whether our first attempt to address her terror and despair would really be setting a 9 year old girl up for SSRI discontinuation syndrome. He left shortly after that.

We did have a nutritionist come in, who was very nice, but other than a very nice lady on the diabetes team who let Terri try shots out on her, we didn’t see many people but the nurses. The nurses each had a different theory about diabetic care, so we got conflicting advice. The nurses (because each thought the previous 12 hour shift was taking care of things) would never check or empty Alex’s hat (we were collecting urine for output measurement and ketone testing) until we asked. The nurses never changed her sheets in the entire stay, despite them getting covered in blood spots. The nurses left me with a girl whose nosebleed was so bad that we went through an entire box of tissues in 25 minutes. I had the call light on, and kept ducking out to find someone, but never could (they were back chatting at the nurses’ station around the time I ran completely out of paper products). One nurse was so nervous that their shaking hands and sharp intake of breath caused Terri to drop the syringe the first time she tried to give Alex insulin. (Terri had to go home for most of a day to get her confidence back.)

I don’t blame them entirely, and we did have 2 fantastic nurses during our stay. The process was what was bad. There is a “newly diagnosed diabetic” kit. They were out, so we never got one. We spent nearly $400 on her prescriptions in the hospital pharmacy – when we got back to the room, we saw they had forgotten the syringes we had paid for. When we went back, the pharmacy was closed for 3 days for the holiday. There is an education and counseling process. We only got a couple pieces of it because some people were working on the Christmas activities, and others were out for the holidays. As my wife and I were both basket cases, I know I would have thought twice about entrusting Alex’s care to us after the 6 days in the hospital.

On the flip side, Alex’s primary care physician came by 3 times during her stay. You may not be aware that PCPs can’t do that anymore – now there are PCPs and there are hospitalists. We told the nurse who was discharging us (because she asked when our PCP follow-up was), and she was flabbergasted and said she had never heard of such a thing.

Despite it all, we’re making progress. Alex was home for Christmas. We’ve bought our first round of carb countable groceries, and we’re managing her blood sugar. We hope to have Alex (and probably us) set up with a therapist shortly after New Year’s.

But it’s still hard.

And then everything changes…

As we close out 2011, I am wrapping up 24 months of my life that I can only refer to as “upheaval”. I’m a pretty stable guy – I tend to stay long term with jobs, I’m married with 2 kids and a dog, I’m a homeowner, and I have a viewpoint that trends a little right of center.

In the last 24 months, I’ve changed jobs 3 times, moved twice (3 times if you count a few months in an extended stay as a “residence”), and been hit financially by the housing bubble. I’m now working a new job with quality people and a customer ethic I agree with (even if the hours tend towards “crushing”), and I was just starting to hope that 2012 would be the year my family and I would finally swing back to the normal that I vaguely remember from the 2000 – 2007 period.

I was so completely wrong.

I apologize in advance, as this post is significantly more personal and less coherent than usual. It’s 3:30 in the morning on 12/24, and I’m in a hospital room at Phoenix Children’s Hospital.

On Tuesday the 20th, my 9 year old daughter was diagnosed with type 1 diabetes, and my wife and I became the parents of a diabetic.

Alex is a rambunctious child. She’s smart, stubborn, creative, opinionated, and has a temper that doesn’t end. She and her mother don’t have the relationship I would hope for them to have (or probably them either, for that matter). She’s not an angel – but she’s my angel.

I just had to wake her up at 3:00 AM to take her blood for a blood sugar test. She has diagnosed anxiety issues, and is terrified of needles. Every one of these draw-blood-then-give-an-insulin-shot events (5 times a day, with an extra long-term insulin shot every day thrown in for good measure) creates 15 minutes of screaming terror for her. She doesn’t mean it of course, but having to hold her down and hurt her while she screams “I hate you” and “It’s going to hurt”, and then just shrieks until she runs out of breath, is an experience I can’t describe to someone who hasn’t gone through something similar.

I didn’t know much about diabetes before this week. I “knew” there was a juvenile diabetes that had a genetic component, and an “adult” diabetes caused more by lifestyle. I know a lot more about diabetes now. For instance, these are actually called type 1 and type 2. I’ve spent a lot of time researching diabetes that I would normally have spent working or keeping up with technology (or getting ready for Christmas). There are a lot of good people working on this, and new technologies come out all the time that allow diabetics to do everything a non diabetic can – so long as they use those technologies correctly to manage their blood sugar. I’ve already checked out several support groups. You can bet I am going to become the lay person’s version of an expert on diabetes and diabetes research.

At the same time, today, I need to help my little girl through something she should never have had to face – and I need to keep my game face on while I do it, because if she knew how I felt it would be even worse for her. I need to help my wife and son cope with our new reality, as my wife is taking this even worse than I am (of course), and my son is so worried that he spent Alex’s first night of hospitalization at home throwing up.

I’m just really tired…

Privacy is dead – long live privacy…

Privacy was dead long before Mark Zuckerberg of Facebook decided that everyone should be able to see everything about everyone else. (He has tiptoed back a little from that position). He was blasted in the press because the public seemed to think that they had private lives until Zuckerberg opened them up to the world.

That was far from true. In fact, there hasn’t been much privacy for a long time – people just behaved as if they had privacy.

“Back in the day” when I was still with FiestaNet, I was interviewed on one of the local news shows about online privacy. This would have been around 1999 – 2000 or so. I told them then that privacy was a myth. I got the usual smile and nod that I would expect from a comment like that. So, I turned back to my desk and in under a minute gave them the home address of their news anchor, and what they paid for their house. (That part of the interview didn’t make it on TV).

So I was surprised not long ago when one of my most technically savvy friends was horrified that I had posted my home address in Foursquare. As if that wasn’t available on the county property tax site, by looking up my domain names, or any number of other public databases. I explained that I didn’t see a need to avidly protect data that was so readily available – I protect myself in other ways.

If you accept the premise that one of the drawbacks of today’s database driven, social media centric society is lost privacy, then there are some behaviors that should be changed. What caused me to think about that was this XKCD comic. As it says, we have trained ourselves to use passwords that only make things hard on ourselves, but actually don’t add any security, only perceived security. Our account and privacy policies do the same thing.

Not long ago, my wife’s iTunes account was used by someone other than her to purchase some apps. It was obvious that she wasn’t the one who did it, as the apps were Vietnamese. She contacted Apple, and after several days of runaround, she was able to get her account first locked, then re-enabled with a new password, and finally her money refunded. She then changed all of her passwords.

Technically her account wasn’t “hacked”; someone reset her password and then used her account as their own. It is the typical account recovery policy that allows this.

Most web sites have a password recovery policy. This is because people set passwords that are difficult to remember and difficult to type. Add to that the fact that most people then save those passwords in their browser, so once a new browser or computer comes into play, they cannot log into the site they are trying to access because they have long since forgotten the password. So, they follow the password reset process that exists to allow them to regain access to their account.

However, without your password, how do you prove that you are who you say you are? Typically with a verification question. Many of the predefined questions have answers that are publicly available. What is your mother’s maiden name? What is your address? Where did you go to high school? Some are not quite that public, but still easy to find. What is your pet’s name? Who was your favorite teacher?

Not all of them can be found – but many of them can. A simple Facebook post or otherwise innocuous tweet can provide the answer to many of the questions that are not specifically public. This gives third parties ways to access the credentials you use for e-mail, purchasing, banking, etc.

The way I get around this is simple – I lie. As far as the Interwebs are concerned, my pet’s name is Megatron, my favorite color is vermillion, my favorite teacher was Dr. Death, and so on. (Of course, now I have to keep track of this information as well as my passwords).
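For the technically inclined, the track-your-lies approach can even be automated. Here’s a minimal Python sketch of the idea: generate a random fake answer for each security question and keep it in the same kind of store you’d use for passwords. The word list, function names, and store format are all illustrative assumptions, not anything a real site or tool defines.

```python
import secrets
import json

# Illustrative word list for generating fake security-question answers.
WORDS = ["megatron", "vermillion", "zanzibar", "quixote", "nimbus", "falcon"]

def fake_answer():
    """Return a random fake answer like 'nimbus-7342'."""
    # secrets gives cryptographically strong randomness, unlike random.
    return f"{secrets.choice(WORDS)}-{secrets.randbelow(10000):04d}"

def record_answer(store, site, question):
    """Generate a fake answer for a site's security question and remember it."""
    answer = fake_answer()
    store.setdefault(site, {})[question] = answer
    return answer

store = {}
record_answer(store, "example-bank.com", "What is your pet's name?")
# This JSON blob is what you'd keep alongside your passwords (ideally
# inside a password manager, not in a plain file).
print(json.dumps(store, indent=2))
```

The point isn’t the code, it’s the discipline: the fake answer is just another credential, so treat it like one.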

It isn’t an elegant solution, but at the same time it is one more way to enhance security in my online life. Do I have privacy? No. But my lack of privacy is less of a target than some others’. Of course no one is hack proof. Systems that actually hold credential sets are compromised all the time. However, my method does make one of the many ways to steal personal data slightly more difficult…
