
My take on ‘The Post PC Era’

So we’re calling this new thing “the post PC era”.  Dell bought Wyse for thin clients.  HP has made thin clients seemingly forever.  Where are we headed here:

  • VDI is here to stay
  • Even cheap PCs are too expensive to run

Let’s start with the first point: VDI is here to stay.  VDI is an odd duck in that it transfers cost from OpEx (support) into CapEx (data center hardware).  NetApp/EMC (and to a degree Dell and HP) are committing borderline assault on hard disk pricing (much less a shelf – go price a NetApp shelf as an add-on and you will cry).  Even assuming you can put 4-12 users on a single spindle, the drive costs 4-12x as much, excluding the expensive storage array you have to slam behind it.  Servers may save you a bit of money, assuming the VM host licensing doesn’t take it all back: assuming a single server processor core is ‘enough’, a single 4-way, 12-core box can handle 60 users at a sub-$30k cost.  Total power draw is lower too, since you are running two big 1100-watt power supplies instead of 60 250-watt ones.  Networks have to be beefed up (10G Ethernet to the server, 8G Fibre Channel, and converged networks) to support this kind of data load.
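To put rough numbers on that, here is a quick back-of-the-envelope sketch. The user count, host cost, and wattages are the estimates from the paragraph above, not vendor quotes:

```python
# Back-of-the-envelope VDI host math, using the rough figures from this post.
# All inputs are estimates from the paragraph above, not vendor quotes.

users_per_host = 60          # users packed onto one 4-way, 12-core box
host_cost = 30_000           # sub-$30k server (excluding storage and network)
desktop_watts = 250          # assumed draw of each desktop PSU
host_psu_watts = 1_100       # each of the two big server power supplies
host_psu_count = 2

capex_per_user = host_cost / users_per_host
desktop_power = users_per_host * desktop_watts
host_power = host_psu_count * host_psu_watts

print(f"Server capex per user: ${capex_per_user:,.0f}")            # ~$500/user
print(f"Power, 60 desktops:    {desktop_power:,} W")               # 15,000 W
print(f"Power, one VDI host:   {host_power:,} W")                  # 2,200 W
print(f"Power saved:           {desktop_power - host_power:,} W")
```

The server side looks cheap on its own; it’s the storage array behind it that eats the savings.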

But if we are tying up so much in server expense, where are we saving money?  Ongoing operations.  Instead of 10 desktop support technicians to cover a campus, you have two or three.  The help desk can resolve issues from their own desks, be it restoring a ‘golden’ image or freeing up resources.  This saves real money over time.

Even with the CapEx talk, there are vendors ready to take VDI into the ‘cloud’, making this an OpEx investment for smaller shops (I’d say anyone under 150 desktops should consider a cloud solution if their network connectivity supports it).  These offerings work a bit differently thanks to Microsoft licensing restrictions (you have to use Windows Server sessions instead of native Windows 7 desktops), but they should meet most lower-end needs, especially for users who require mobility.  Give a sales guy a tablet and a thin client for his desk.  His ‘desktop’ is always with him, and if he loses the tablet, just kill it over the air.  More secure, better end user experience.

So what do we lose?  End users may feel they’ve stepped ‘back’, especially those old enough to remember terminals.  They lose the feeling of watching things boot up.  Who cares about those things?  Users.  But the gains will probably outweigh the losses.


Goodbye to the laptop (and good riddance)

The laptop.  No other tool has changed how we work in the past 15-20 years.  It took us away from our desks and out into the coffee shops, homes, and airports we work in today.

Which is why it’s a good time to kill it, or at least replace it with something a bit different.

Do I mean force everyone to use a tablet?  Maybe.  Or maybe I mean give users a choice: if they want a tablet, let them use a tablet.  Maybe provide them a lower-end desktop PC plus a tablet for travel.  Desktops have longer useful lives, are less likely to be the cause of data loss, and are generally cheaper to operate and maintain over the life of the device.  Tablets are portable, easily maintained, connect back to the office easily, and generally require a lot less worry for an IT organization of any size.  For non-web apps, we can provide Citrix/Terminal Services/other solutions for end users to access their software (or data we do not want to have an easy exit path out of our data center).  Innovative solutions like Oxygen Cloud let us link remote users into our existing NAS investment, so end users keep access to data within reasonable constraints.  Mobile Device Management tools – including simple, low-cost hosted services – reduce management overhead and give end users, IT, and others the means to manage devices and handle data loss incidents faster and more easily than with current laptop platforms.  For staff in far-flung locales, hands-on tablet support may be easier to come by: instead of dispatching a PC back to the central office or sending a technician, the employee can visit a mobile carrier or tablet vendor storefront for assistance.

For users where a tablet may not make sense, BYOD combined with VDI/application virtualization and other technologies lets staffers provide their own machines, reducing costs.  CIOs see BYOD as a solution to a variety of IT budgeting issues: a mechanism to reduce expensive end user hardware spend, freeing money for more useful functions both inside and outside of IT.  Fewer company-owned devices also mean happier users: they get to ‘pick’ what they use, be it an Android tablet or a Windows laptop.  In many cases, what I’ve said about tablets rings true for BYOD: the compute and storage remain in the computer room.

What could be the savior of the laptop?  Licensing.  Today, understanding the licensing for Microsoft Office is easy: one license per machine.  The problem with Office under VDI is that ‘one per machine’ applies to every device that connects, and each license sticks to that device for 90 days.  So if an end user connects to a VDI instance from their home PC and their tablet, that is two licenses required to remain compliant.  The OnLive Desktop saga shows that this is not an area to take lightly.

So where does all this go?  If you look at where companies like Dell and HP are pointing their R&D budgets, it’s at a mix of higher-end consumer systems (XPS 13) and enterprise technology (storage, networks) to improve the data center.  The writing is on the wall: it’s time to start burying the laptop.

Thoughts on the end of the Social Boom

First, it’s been a while since I’ve blogged here. I think I’ll bring this back for some discussions not just on IT, but on the technology sector in general.

Tonight, I will point you toward Omar Gallaga’s discussion of SXSWi and how this feels like the end of the boom around social media. First, Omar is right: it seems more and more like this boom will end sooner rather than later. We’re seeing technologies gain attention (albeit less crazily than in the late 1990s – the market has learned) that should never have made it past the angel phase.

Second is the bigger (and more lucrative) question: what’s next? We know tablets, smartphones, and the other tools that took us to the “Web 2.0” era are here to stay. Facebook isn’t going anywhere. Twitter isn’t either. The next evolution of Internet technology must work within the context of these tools. Based on that, it can be inferred that we will see an era where desktops become less important, both in the home and the workplace, as smaller (and specialized) devices take those tasks over. While the social boom is ending, the cloud and aaS eras are not. More and more data will be moved off PCs and into the data center (creating its own boom: not in physical data center space, but in solutions to make existing data centers more efficient within current physical constraints). The TV may become the center of some homes again: not as a box that gets 4 channels or even 400 channels, but as a device that connects the home to a variety of services over IP, be it streaming media, Facebook, or the bank.

In the enterprise, the promise of tablets may finally be delivered upon: as a cost-savings tool. In my analysis, the capital expense of a tablet is around half that of a similarly equipped laptop. While the traditional desktop will never go away, we will see two growth drivers in this area: businesses that issue significant numbers of severely underutilized desktops and laptops looking to tablets or other solutions as a way to reduce costs, and smaller businesses that identify services available online, such as point of sale solutions (Square), as a mechanism to reduce initial and ongoing capital costs.

We are again at a crossroads. The boom in social media will reach an end soon. The capital and resources in that market must find a new home once the bust happens. Many old-line businesses will identify ways to adapt to the new paradigm; however, in a significant number of arenas, old-line businesses will simply fail to adapt because of cultural issues: be it a fear of cannibalizing existing cash cow products, or a simple lack of will to take on risk.

It’s not the servers, stupid

In 1992, James Carville hung three statements on the wall of the Clinton presidential campaign HQ in Little Rock.  The second was famously “It’s the economy, stupid.”  Bush went from 90+% approval in early 1991 to losing to Clinton in November 1992 (some say Perot played spoiler, but we’ll not go there).

While, much like 1992, the economy isn’t too hot, in IT one thing has gone from hot to very cold: servers.  With virtualization, the server stopped mattering.  The server is a commodity platform.  The core differentiator between Dell, HP, IBM, and whoever else is simply what you want (Blades? Okay. Dell has ’em, HP has ’em, IBM has ’em, Cisco has ’em (UCS).  Traditional rackmounts? You’ve got more choices than you’ll ever want).  You see this in the M&A behavior of these firms.  Would HP have paid an ungodly amount of money for 3PAR to keep Dell from owning them if servers mattered?  Probably not.  Why?  Because servers sell storage.  And what sells servers (especially in SMB)?  Administrator preferences.  After some experiences I had with HP DL360s in a former life, I’d never buy HP.  The Dell stuff worked.  The HP stuff didn’t.  The few IBM and Rackable Systems (now SGI) boxes we had were okay (I’d buy IBM again, but IBM sees the writing on the wall and puts little focus on commodity hardware).  Is HP better today?  Their market share says yes.  Will I be running them in the near future?  Probably not (and the server volumes I buy don’t warrant testing them).

Today, the sole differentiator (outside of being on HCLs) is remote management: iDRAC vs. iLO vs. whatever IBM/foo has.  When you have 10-20 physical servers, you might just have a remote KVM switch that can tie into the old hardware you inevitably have (legacy systems – I know big businesses with old beige Dell OptiPlexes on their data center floor running legacy apps that won’t virtualize and can’t die, either for legal reasons or because one person is still using them) as well as the new stuff.  Sure, the remote management cards do more, but when you consider that the remote console feature on a Dell costs an extra $200-300 per server, it doesn’t take long to pay for a good KVM and more besides (RAM, drives, etc.).  The basic cards cover most of the rest, or your systems management tool will discover it, either through agent-based management or within ESX(i) on virtualization hosts.
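As a quick sanity check on that trade-off, here is a sketch of the arithmetic. The per-server add-on cost is the figure from the paragraph above; the KVM price is a hypothetical placeholder for whatever unit you would actually buy:

```python
# Rough break-even: per-server remote console add-ons vs. one shared KVM switch.
# The $200-300 per-server figure is from this post; the KVM price is hypothetical.

servers = 15                        # a typical 10-20 physical server shop
remote_console_per_server = 250     # midpoint of the $200-300 add-on
kvm_switch_cost = 1_500             # hypothetical IP KVM price

spend_on_cards = servers * remote_console_per_server
left_over = spend_on_cards - kvm_switch_cost

print(f"Remote console add-ons: ${spend_on_cards:,}")    # $3,750
print(f"Shared KVM instead:     ${kvm_switch_cost:,}")
print(f"Left for RAM/drives:    ${left_over:,}")          # $2,250
```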

The Wedding – The Cloud and the Network

Here is one simple fact: cloud computing is really good for ISPs.  Really good.  Moving tools from the data center – where either your LAN or a dedicated inter-office WAN covered your data needs – to the cloud – a remote data center where access is handled either by pure IP access or (hopefully) IPsec tunnels – creates more network needs.  As your business becomes more dependent on services held in far-off data centers, you become more dependent on your IP provider.

The irony here is that most ISPs are pretty monolithic, old-school organizations (and cloud is a new-school, dynamic way of thinking).  Try to make AT&T or Verizon move fast.  You can’t.  Even your tier-2 ISPs, who should be a bit more responsive, are still slow and, half the time, miss their own SLAs.  T1 pulls take weeks to install.  Metro Ethernet, DS-3, and any form of optical are even more expensive and take even longer to get installed.  If you have fairly high uptime requirements, getting BGP up so you can multi-home is a pain in the rear – getting two of these firms on the phone to get something going is a pain (never mind that you have to get a /24 no matter what – ARIN is pretty laid back on this, but it contributes to IP space burnout since you’re probably NAT’ing your desktops anyway).  Even if you are small enough to only need cable or DSL, it’s still a rough go – count yourself lucky you’ve not grown into high-uptime, high-dollar, high-headache network services.

It’s time for a tip: be careful about your ISP selection.  Make sure they are responsive to your needs.  I have one provider where the salesperson returns my emails quickly, but their NOC would probably enjoy hearing of my violent demise on the phone.  Another where I rarely hear from the sales guy, but every ticket I’ve ever opened was handled quickly – 3 days to a BGP spin-up sounds good after the SLA-busting, week-and-a-half BGP nightmare I’m still going through with their peer provider.  If you’re a small enough business to only have a single provider, stick with the cheaper consumer stuff for as long as possible, then break out for as good a service as you can afford – and if Metro Ethernet is in your area, give it a shot.  Is it going to be as reliable as more traditional T/DS/OC links?  No.  But dollar for dollar, you’re not going to beat it.  Tie it with a T1 or two from another provider, do BGP, and off you go.

Another point: with your cloud provider, remember that geographical distance and AS distance are both important.  For example, from Austin, Rackspace is at most two AS hops away.  On Time Warner Cable, it’s TWC to DFW -> Above DFW -> Rackspace in DFW.  Amazon?  Austin -> Houston -> Dallas -> Level 3 to Atlanta(?) -> Level 3 Washington (DC?) -> Amazon.  Which one is going to store files faster?  Of course, the one only a hop or two away.
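If you want to check this for your own office before committing data to a provider, a minimal sketch is to time a TCP connection to each candidate endpoint (the hostnames below are placeholders, not real provider URLs), and then run a traceroute or check your provider’s looking glass to count the networks actually in the path:

```python
# Quick-and-dirty check of which provider is "closer" from your office:
# time a TCP connection to each candidate endpoint.
# The hostnames are placeholders; swap in your actual cloud storage endpoints.
import socket
import time

endpoints = {
    "provider-a": ("storage.example-provider-a.com", 443),
    "provider-b": ("storage.example-provider-b.com", 443),
}

for name, (host, port) in endpoints.items():
    try:
        start = time.perf_counter()
        conn = socket.create_connection((host, port), timeout=5)
        elapsed_ms = (time.perf_counter() - start) * 1000
        conn.close()
        print(f"{name}: {elapsed_ms:.1f} ms TCP connect")
    except OSError as exc:
        print(f"{name}: unreachable ({exc})")
```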

Too often, the internetworking portion of cloud is forgotten (the ISPs just hope they don’t have to build out much new capacity – just sell new capacity, which means more money and more BMWs) – the systems and storage guys drive this conversation.  In SMB IT, where you may wear every hat (I know I do), you don’t have a choice: you have to consider, first, how your ISP will react and respond to your new data needs, and second, how your cloud provider will interact with your ISPs.  The nature of these providers means they peer with lots of firms – if you’re on, say, Sprint or AT&T, you’re probably one hop away, and if you’ve got the bandwidth, they will probably let you use it.  If you’re on Bob’s ISP and Video Rental (don’t laugh – in the late ’90s the video rental place back home in Small Town, USA was the local ISP.  It had a T1 and some modems – and everyone had to be happy with that unless you could afford a T1 of your own!), be aware that your needs and their abilities are probably not going to sync up.  Identifying an ISP is kind of like a marriage: you’ve got to find someone who makes you happy.  The sparks of early romance won’t always be there, but you don’t want to go home at night and think about leaving your ISP for a pillow and some cheap wine.

Clouding up: getting out of ‘dirty’ IT

First off, I’m going to define ‘dirty’ IT.  Dirty IT is IT you don’t want: the parts of IT that you dread when you’re in bed at night and know someone else can do better.  Email.  Remote collaboration.  Offsite backups (D2D2C instead of D2D2T).  The kinds of technology that enable business but have significant expertise requirements – would you rather worry about improving your network security, or about doing routine Exchange updates and handling the inevitable failure (because you can’t afford big clusters, fast SAN, and the like for 200 users)?  I’m taking the former 7 days a week, 52 weeks a year.  And anyway, you’re going into the cloud whether you like it or not – the cost savings are too good for that PHB down the hall controlling the budget.

So now you’re headed into the cloud.  What do you get rid of?  Of course you’re not dumping everything into the cloud unless you meet some very specific cases: say, retail.  Retail can go into the cloud, especially SMB retail.  As long as you’re not handling card data – and thus PCI – there is no reason your POS systems can’t be cloud based.  You can put Salesforce to work, track your customer base, do all the analysis you need, and not own one server.  Not a single solitary server.  Maybe at most a small NAS appliance for storing files, but even a 5-10 store shop could easily use Box.net or a comparable service and boom – your files are accessible everywhere.  Even bigger than that, it’s still easy.  It may not scale as well, but it’s easy to implement.

Email is my favorite app to put in the cloud.  Exchange is a beast to administer.  I don’t want to own two servers for Exchange, hardware for BES, hardware for whatever else I need, tons of storage, and lots of headaches at 3 am when something goes wrong.  I want a web portal our administrative assistant can use to create email accounts and set up BlackBerry/ActiveSync devices from her desk.  This reduces IT spend more than anything.  Every penny we could save with an in-house email server – even a pure POP/IMAP solution – is spent back 3-4 times over in administrator overhead.  Trouble at 3 am?  Someone working third shift will answer the phone and take care of it.  I don’t have to crawl out of bed and deal with it, other than calling in.

My second favorite cloud app is document tools.  Now, I love keeping unstructured data on NAS, and there is a case for that.  But there is also a case for users with very simple needs – a small spreadsheet to track time, a word processing document here and there – to use tools like Google Docs instead of buying $700+ Microsoft Office licenses.  What is the ROI on a $700 license over 2 years for a user who fires up Word twice a night and Excel twice a week?  It’s a negative number.  You’re losing money there.  You don’t have to think long to realize that $700 is 14 times $50 – 14 years of Google Apps at a 0% cost of capital.  Assuming a 10% cost of capital, you’re never going to come out ahead on that Office license – even assuming insane license lifetimes compared to that $50-a-year Google Apps for Business subscription.  Oh, and you get email with Google Apps (see #1).
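The discounting math behind that claim looks like this (a sketch; the $700 license and $50/year subscription are the figures above):

```python
# Discounted cost of a $50/year subscription vs. a $700 one-time license.
# At a 0% cost of capital the subscription catches up in 700 / 50 = 14 years;
# at 10%, its present value never reaches $700 (it tops out near 50 / 0.10 = $500).

license_cost = 700.0
annual_subscription = 50.0
cost_of_capital = 0.10

pv_subscription = 0.0
for year in range(1, 31):   # 30 years is longer than any Office license will live
    pv_subscription += annual_subscription / (1 + cost_of_capital) ** year

print(f"PV of 30 years of subscription at 10%: ${pv_subscription:,.2f}")  # ~$471
print(f"One-time license:                      ${license_cost:,.2f}")
```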

Third is DR.  I’ve talked about DR in the cloud before, and it’s a wonderful way to get rid of tape.  D2D2C is the new backup mechanism.  We’re going to it.  Some shops are even skipping the second D and going D2C.  The only advantage to owning my own storage environment is that I can use it for virtualization, and SATA drives are pretty cheap for desktop backup.  For more critical items, it’s off to the cloud in a secure, encrypted fashion.  The beauty of it is that either ‘cloud drives’ like Rackspace’s JungleDisk and Box.net, or a true cloud offering like Symantec’s services, drive lots of value.  Certainly cheaper than sending tapes off to Iron Mountain or whomever your retention vendor is, and I control ‘tape rotation’ from my desk.  Tapes ready to disappear?  Check a few boxes, and poof – gone.  Life made easy.
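The mechanics of D2D2C don’t have to be exotic, either. Here is a minimal sketch, assuming an S3-compatible object store and the boto3 and cryptography libraries; the endpoint, bucket, and paths are placeholders, not any particular provider’s:

```python
# Minimal disk-to-disk-to-cloud sketch: encrypt a local backup file before it
# leaves the building, then push it to an S3-compatible bucket.
# Endpoint, bucket, and key names are placeholders for illustration only.
import io

import boto3
from cryptography.fernet import Fernet


def push_backup(local_path: str, bucket: str, object_key: str, secret_key: bytes) -> None:
    """Encrypt local_path client-side and upload it to the object store."""
    cipher = Fernet(secret_key)

    with open(local_path, "rb") as handle:
        ciphertext = cipher.encrypt(handle.read())  # fine for modest backup files

    s3 = boto3.client("s3", endpoint_url="https://object-store.example.com")
    s3.upload_fileobj(io.BytesIO(ciphertext), bucket, object_key)


if __name__ == "__main__":
    # Generate and store this key somewhere safe; without it the backup is gibberish.
    key = Fernet.generate_key()
    push_backup("/backups/nightly/files.tar.gz", "offsite-backups",
                "nightly/files.tar.gz.enc", key)
```

‘Tape rotation’ then becomes a retention script or a few checkboxes in the provider’s console instead of a call to the courier.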

5 Simple Rules for picking a storage vendor

I’m a big fan of simplicity.  Most everyone is, except storage vendors (with a few exceptions on the low end – players like Iomega (EMC for small biz) and Data Robotics (Drobo)).  Simplicity gets harder if you need a storage solution with enough spindles to deliver serious IOPS out of SATA, SAS, or FC drives (I know iSCSI and FCoE are all the rage, but FC still has its uses, especially in businesses where 10 Gbps hardware isn’t a real option and you need every last IOP of performance).  After 3+ months of looking at vendors, I’ve come up with a few simple rules.

1 ->  Look outside the box (from your ‘usual’ systems vendor).
If you’re like most SMB IT shops, you probably get most of your IT purchases from a single company – be it Dell, HP, IBM, CDW, or whomever.  For the most part, you need to be willing to look at least a little outside the box.  If you’re a Dell customer, call up NetApp and Compellent.  If you’re an HP customer, call up Dell and IBM.  While it’s an easy path, your usual vendor isn’t always going to meet your needs exactly every time.  While most everyone has a converged storage solution, is it the right one?  It may seem like I’m being un-simple here – but trust me, I’m not.  Calling up a vendor and getting some information doesn’t hurt.  They get turned down every day.  You might be surprised at what you find just by taking a look outside the box.

2 -> Does the storage fit your growth plans?
If you’re like me, your entire IT system depends on virtualization to keep costs low.  Spinning up new VMware systems has a marginal cost of zero – especially in my world, where we run lots of projects on Linux (thanks, CentOS!).  How does your storage vendor fit into this?  If you’re a Hyper-V shop, the same goes for you.  You need to make sure your vendor’s story fits in.  Is there onboard management – provisioning from the vSphere client, for example?  Of course EMC has it – they own 70%+ of VMware.  NetApp/IBM N series has it.  I don’t know if HP does, but a quick search says no.  These tools save time (see rule #5).  Beyond that, you need to understand how the system grows and how that will affect future costs.  You do not want to be stuck doing a forklift upgrade because you hit an arbitrary limit on disk count 15 months into use.

3 -> TCO.  Oh boy.
Storage isn’t cheap.  It never has been.  It’s cheaper than it was in the past – SATA and SAS are cheaper than FC and SCSI – but make sure you fully understand where you’re going.  Don’t assume the sales guy isn’t lying to you.  If you’ve got crazy bucks and you’re buying an EqualLogic box, know that you’re buying another one once it’s full.  I can add trays to my AX4-5i or my FAS2020 – not to that EqualLogic (but you can to the Dell MD3020i/3000i).

4 -> Convergence = where it’s at
We have NAS today.  That NAS box sucks.  I mean sucks.  I pray for the day when I have converged storage appliances with support, so I can pull the plug on it and do what these guys did to their old Symmetrix.  In SMB, in my opinion, a large storage initiative is much more likely to take flight when NAS value can be shown – users understand unstructured files, and making those easy to serve is value that non-technical management – the ones holding your purse strings – can understand.  Do they care about iSCSI or FC support?  Not likely.  Do they like having their files easily accessible?  Of course.  Will SAN-only machines stick around?  Of course.  Dell is selling EqualLogic left and right.  But one look at EMC’s unified storage love-fest (and 20% guarantee) indicates that even the biggest SAN player sees the writing on the wall.

5 -> Management overhead
My first line stands true: I’m a big fan of simplicity.  For all my ragging on Dell’s EqualLogic being an expensive one-trick pony, that one-trick pony has a good side: it’s easy to manage.  Snapshotting is easy.  It doesn’t take a lot of work to get something going quickly.  It’s not painful.  You don’t have to be a dedicated storage administrator to make it work.  But other vendors are catching up.  NetApp’s tools, especially in VMware, make provisioning storage easy.  It’s not rocket science to get a FAS or V-Series filer working, even on the ONTAP command line.

After all my ranting, I just want to leave you with one final thought: whatever you do, do your homework.  I almost broke this rule on a project.  I’m glad it took three times as long to get budget as I planned, because that time let me find something that provided more bang for not many more bucks, just by looking down a different path.

Update:  I hate the phrase IT solutions.  Sounds like marketing drivel.  Replaced it.