The Data Center in a Post Virtualization World @ AZ Tech Summit Sept 17th in Phoenix, Arizona

How Fast Can This Go?

The speed of change is changing. It's getting faster and faster, and it sometimes feels that if you blink you can miss an important development in technology. A prime example is the proliferation of Virtualization in the Data Center. Always wary of proclamations such as "this is the year of VDI" or "everything is moving to the Cloud", I do think it is now valid to characterize the situation today as "Post Virtualization". Virtual machines are now ubiquitous, and there is widespread knowledge about how to configure and optimize the storage and network to support them. In other words, we know how to do this.

So what comes next? I suggest that the next phase is the Data Center Re-born: a dynamic pool of resources and productivity for the business to consume. We are moving out of the days when services and solutions are hard-coded, built individually and not re-usable. Up until now, as new applications and resources have come online, there has simply been more to do, more to know and more to manage. People like to talk about "The Cloud" as the answer, and maybe in time it will be. What we need NOW are real ways to converge and streamline the data center and grant easy, secure access to users and data in support of the organizational mission. As a wise man I know once said, "They just want to press the button and get a Banana". Up until now it's all been way too complicated…

The Data Center Re-born

OK, we are not going to just press a button and get everything we want out of a data center quite yet. But there are now many straightforward ways to get pretty close to that vision. I have been designing and deploying these solutions since the 1990s, and we are at the best point ever to balance the triangle of Cost-Performance-Capacity. In short, this means that for a very reasonable cost, organizations can now adopt strategies and technologies that get them much closer to the dream. It is now completely possible to configure your storage, network, operating systems, applications, data and user access as fully Dynamic Services. Three major characteristics of these systems are:

Deploy By Assignment: Deploy users, devices and applications simply by assigning resources, not by the brute force of building machines, installing applications, locking down systems, maintaining hardware and so on.

Built Once, Re-used Infinitely: Yes, it's real!

Dynamic Allocation of Resources: Storage, compute, applications, user data and remote access are all available to be consumed as needed on top of a highly available, fluid platform. This platform costs less, and its components can be used, re-used and re-purposed as needed (for example, no more new SAN every three years; reuse that storage in new ways). This is not magic; it follows from building the infrastructure and platform services using these new approaches. Once the foundation is properly established, it becomes easy to serve up the applications, tools, data and ability to collaborate that your users need to serve the mission of the organization.


Join us, and a select group of core technology partners, on September 17th for the AZ Tech Summit in Phoenix to explore these concepts. We will be hosting an Innovative Data Center Pavilion at the entry to the Main Event Hall.

Come speak with experts and learn how our clients are running these streamlined operations and gaining the benefits 24×7. Informal discussions will be going on throughout the day, as well as a Main Conference session:

12:00 pm - 1:00 pm, Tech Theater II
Lunch & Learn: The Data Center in a Post Virtualization World
Presented by: Steve Greenberg, Thin Client Computing

…and an Executive VIP Presentation/Discussion:

2:45 pm - 3:45 pm, VIP Executive Track
Executive Strategies for Mobility and Virtual Data Centers
Presented by: Steve Greenberg, Thin Client Computing

REGISTER HERE and enter the code thin to receive a complimentary registration to this year's conference. We look forward to seeing you there!

Keeping it Real in Tech: Marketing vs MarkT-ing

Just got back from Citrix Synergy 2014 happy, inspired and exhausted! It was a great week of learning, collaboration, conversations and great times with friends and colleagues from around the world. It was an overload of ideas and input, but one thing stands out above all else: the character and heart of Citrix President and CEO Mark Templeton.

After a short leave of absence, this was Mark's highly anticipated return to deliver the keynote at Synergy 2014, ahead of his announced retirement within the next year. It is hard to describe the effect that MarkT (this is what we all call him) has on people. At first I thought it was just me, as my career has directly paralleled Citrix and Mark's leadership and I am deeply grateful for that. However, I spoke with countless attendees about this after the keynote, and absolutely everyone said the same thing: that they are moved and inspired by Mark in a very special way. I heard this same sentiment across the board, from first-time attendees to old-timers, geeks, sales people, partners and more. Feeling this very strongly myself, and hearing it echoed over and over again throughout the week, I set my mind to figuring out exactly what was going on. After much deliberation, here is my conclusion:

Some people are very skilled at speaking, at presenting a message in a clear and impactful way. Some people have great skills of persuasion or inspiration; they can get you excited by what they say and how they say it. Some people understand the technology behind products, or the business value, the use case, and so on. When you listen, you can be impressed or motivated to act. Mark is not any of those; he is something so much more…

MarkT has a heart the size of an ocean liner. You can't help but be genuinely drawn in, not by the hypnotic sound of a practiced speaker, but by the genuineness of a person who loves what they do and means what they say. He wants to share the exciting developments at Citrix because of what they can bring to YOU, how they can help YOU. He cares about others and is happy and honored to be able to share it.

In the end, it is integrity, honesty and heartfelt sincerity that excite people. They cut away the hype, pretense, agendas and spin and replace them with genuine beliefs. When you experience the real thing, you just know it; everyone feels it, and this year's Synergy keynote was the prime example. Next to this, standard marketing/spin/positioning looks like a thin veil of charlatanism. The "secret" is a sincere desire to make the world a better place and to lift up those around us in the process.

The Tech World, the Business World, and the Whole World for that matter, will be much better places if we can learn from his example and actively reach out to replace all this superficial (i.e. self-serving) Marketing, to make it Real, and to question our own values and re-align them so that they truly help others.

I hereby pronounce the End of Marketing and usher in a new era of Sincerity and "Keeping it Real" called the Age of MarkT-ing.

Thanks for everything, Mark. Now it is our turn to carry this forward…


Citrix 3D Graphics Cheat Sheet (and how to do Community right!)

One of the most exciting recent developments in the virtualization world is the emergence of mature and highly performant remote 3D graphics solutions. As expected, Citrix and NVIDIA are leading the charge here, with full support for virtualized GPUs in the XenServer hypervisor. This is revolutionizing the remote delivery of high end graphical computing workloads that, until recently, required dedicated local hardware to perform adequately. There is a groundswell occurring in the industry, and among my consulting peers, in learning the best practices and approaches. In this regard, NVIDIA has done an outstanding job of collecting and sharing the relevant information. I received the data below from John Rendek at NVIDIA yesterday and was really pleased to see what they have assembled and shared here in full. Thank you, NVIDIA, for "Getting It"! **UPDATE** Jared Cowart filled me in that most of this data was compiled by Angelo Oddo, Senior Sales Engineer at Citrix. Mad props to Angelo!

Citrix 3D Graphics Cheat Sheet (2/04/2014)

Guides and Optimizations:

NVIDIA Resources:

VMware HDX Resources:

XenServer HDX Resources:

XenServer GPU commands:

List GPUs:

lspci | grep VGA

Validate IOMMU is enabled:

xe host-param-get uuid=<uuid_of_host> param-name=chipset-info param-key=iommu

Attach a VM to a GPU:

First, shut down the VM:

xe vm-shutdown uuid=<uuid_of_vm>

Find the UUID of the GPU group:

xe gpu-group-list

Attach the GPU:

xe vgpu-create gpu-group-uuid=<uuid_of_gpu_group> vm-uuid=<uuid_of_vm>

Validate the GPU is attached:

xe vgpu-list

Start the VM:

xe vm-start uuid=<uuid_of_vm>

Detach a GPU:

First, shut down the VM:

xe vm-shutdown uuid=<uuid_of_vm>

Find the UUID of the vGPU attached to the VM:

xe vgpu-list vm-uuid=<uuid_of_vm>

Detach the GPU from the VM:

xe vgpu-destroy uuid=<uuid_of_vgpu>
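To tie these together, here is a minimal attach script. It is a sketch only: it assumes it runs in dom0, that the VM name below is replaced with your own, and that the first GPU group listed is the one you want:

#!/bin/bash
# Hypothetical helper (sketch): attach a vGPU to a VM by name on XenServer.
VM_NAME="Win7-3D"                                            # example name; substitute your own
VM_UUID=$(xe vm-list name-label="$VM_NAME" --minimal)        # look up the VM's UUID
GPU_GROUP_UUID=$(xe gpu-group-list --minimal | cut -d, -f1)  # take the first GPU group
xe vm-shutdown uuid="$VM_UUID"                               # the VM must be halted first
xe vgpu-create gpu-group-uuid="$GPU_GROUP_UUID" vm-uuid="$VM_UUID"
xe vgpu-list vm-uuid="$VM_UUID"                              # validate the attachment
xe vm-start uuid="$VM_UUID"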


How to implement Citrix 3D Graphics Pack

Download Citrix XenServer 6.2 + SP1
Download NVIDIA GRID vGPU Pack for GRID K1 or GRID K2
Download Citrix XenDesktop 7.1 99 user trial or licensed software here (requires a MyCitrix ID)

1) Start with a fresh XenServer 6.2 installation on GRID supported hardware

2) Install XenServer 6.2 SP1

3) Download the NVIDIA GRID vGPU Pack and install the NVIDIA GRID manager in XenServer from the CLI

4) Create a base Windows 7 VM

5) From XenCenter, assign a vGPU type to the base image

6) Install the NVIDIA GPU guest OS driver in the base image (available in the NVIDIA GRID vGPU Pack). Note: the driver will not install if a GPU has not been assigned to the VM

7) Install the XenServer Tools

8) Install the latest version of the Citrix HDX 3D Pro VDA 7.1

9) Create a Machine Catalog using MCS or PVS

10) Create a Delivery Group, assign users and publish the desktops
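Before building the base image, a few quick checks from the XenServer console can confirm the card and GRID manager are in place. These are standard commands, though the output will vary with your hardware:

lspci | grep -i nvidia   # the GRID card should be visible in dom0
nvidia-smi               # installed with the NVIDIA GRID manager pack
xe vgpu-type-list        # lists the vGPU profiles available for assignment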


Tweaks for XenDesktop VDA:

  • The following registry key setting will increase Frames per Second (FPS):

[HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\Graphics]
"EncodeSpeed"=dword:00000001

  • The following registry key setting ensures the screen is refreshed and eliminates artifacts of previously opened windows:

[HKEY_LOCAL_MACHINE\Software\Citrix\HDX3D\BitmapRemotingConfig]
"HKLM_EnabledDirtyRect"=dword:00000000
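If you would rather script these tweaks into the base image than edit the registry by hand, the equivalent reg add commands are below (run inside the VDA from an elevated prompt, then restart the VDA):

reg add "HKLM\SOFTWARE\Citrix\Graphics" /v EncodeSpeed /t REG_DWORD /d 1 /f
reg add "HKLM\Software\Citrix\HDX3D\BitmapRemotingConfig" /v HKLM_EnabledDirtyRect /t REG_DWORD /d 0 /f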


Hotfixes, Drivers and Tool Downloads

Microsoft App-V 5.0 Load Balancing

I have had the pleasure of working with Microsoft App-V for a while now, and HA has always been a very important item. Load balancing was a breeze in App-V 4.x environments: all you needed was a load balancer that could pass * for the port and * for the protocol, and everything worked great. Yes, you can argue that RTSP used 554 TCP, but the random port it chose afterwards was the killer.

That has all changed in App-V 5.0. Now Kerberos is a huge deal. Anyone who has worked with SQL clusters will understand how temperamental Kerberos can be when it is not set up properly. Having had the fun of translating Microsoft language into a usable format, I figured I would document, to the best of my ability, how to set up App-V 5 to use Kerberos and be load balanced.

Before I start, I would like to share some of the articles that were used (or discarded) in getting this to work.

Microsoft has a "Planning for High Availability" article, which can be found at http://technet.microsoft.com/en-us/library/dn343758.aspx. It covers HA for the entire environment and is a pretty good read, except for the web services load balancing.

Microsoft has another article, "How to provide fault tolerance and load balancing in Microsoft App-V v5" (http://support.microsoft.com/kb/2780309). I didn't find this article very useful.

After combining the two articles above and many others, I have found the following steps to be straightforward and easy to do.

Assumptions: I am assuming you have two or more App-V 5 servers installed, with Management and Publishing working in the environment. I put both Management and Publishing on the same servers, but that is up to your design. I performed these steps on Windows Server 2012 R2 Standard.

I will be using the following as examples

Server Names:  vAppV01 and vAppV02
Load Balanced Name:  AppV
FQDN:  dummy.lcl
App-V Management port: 8080
App-V Publishing port: 8081

Step 1:  Have a Load Balancer and DNS A record

I tend to use Citrix NetScalers for load balancing on the projects I work on, but any load balancer should work. Just like with App-V 4.x, it is easiest to use a load balancer with * for ports and * for protocols. Again, the security guys will argue that you are opening up too much. My point is that this is internal traffic and is not transferring company data; all that is being transmitted is the bits to launch an application.
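For reference, here is a minimal sketch of that wildcard configuration on a NetScaler, using the example server names and IPs from this post (the svc_/vs_ names are arbitrary, and ANY/* mirrors the App-V 4.x "any port, any protocol" approach):

add server vAppV01 192.168.1.1
add server vAppV02 192.168.1.2
add service svc_vAppV01 vAppV01 ANY *
add service svc_vAppV02 vAppV02 ANY *
add lb vserver vs_AppV ANY 192.168.1.3 *
bind lb vserver vs_AppV svc_vAppV01
bind lb vserver vs_AppV svc_vAppV02

The DNS A record for AppV then points at the vserver IP (192.168.1.3 in this example).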

Step 2:  Setup an AD Computer Account

Create a computer account in Active Directory with the Load Balanced Name. This will be used to assign the SPNs to later.
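If you prefer to script this, the Active Directory PowerShell module can create the account; a one-line sketch using the example Load Balanced Name:

Import-Module ActiveDirectory
New-ADComputer -Name "AppV" -Description "App-V 5 load balanced name"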

Step 3:  Change the IIS ApplicationPool Identity

This is where all the confusion comes in. If you read all the information out there regarding the ApplicationPool identity, it leads you to believe that you need to change it to run as a service account. Performing that step will break the syncing of your Publishing servers with the Management service. We will just skip that part and allow kernel mode to take care of Kerberos for you:

  • Navigate to c:\windows\system32\inetsrv\config and make a backup of ApplicationHost.config
  • Now we need to edit 2 parts of this file. Both are found at the bottom of the file; the <windowsAuthentication> line in each (shown below) is the one to change:
    <location path="Microsoft App-V Management Service">
    <system.webServer>
    <security>
    <authentication>
    <digestAuthentication enabled="false" />
    <basicAuthentication enabled="false" />
    <anonymousAuthentication enabled="false" />
    <windowsAuthentication enabled="true" />
    </authentication>
    </security>
    <webdav>
    <authoring enabled="false" />
    </webdav>
    </system.webServer>
    </location>
    <location path="Microsoft App-V Publishing Service">
    <system.webServer>
    <security>
    <authentication>
    <digestAuthentication enabled="false" />
    <basicAuthentication enabled="false" />
    <anonymousAuthentication enabled="false" />
    <windowsAuthentication enabled="true" />
    </authentication>
    </security>
    </system.webServer>
    </location>
  • In both locations, these lines need to read as follows:
    <windowsAuthentication enabled="true" useKernelMode="true" useAppPoolCredentials="true" />
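As an alternative to hand-editing the file, the same attributes can be set with appcmd. This is a sketch that assumes your App-V site names match the location paths shown above; run it once per service:

%windir%\system32\inetsrv\appcmd.exe set config "Microsoft App-V Management Service" -section:system.webServer/security/authentication/windowsAuthentication /useKernelMode:"True" /useAppPoolCredentials:"True" /commit:apphost

%windir%\system32\inetsrv\appcmd.exe set config "Microsoft App-V Publishing Service" -section:system.webServer/security/authentication/windowsAuthentication /useKernelMode:"True" /useAppPoolCredentials:"True" /commit:apphost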

Now reboot your server to verify that changes have taken effect.

Step 4:  Adding SPNs to Active Directory

Now that the file has been changed, we need to set up the following SPNs to allow AD to provide Kerberos authentication for both the App-V Publishing and Management roles.

Run the following commands with Domain Admin rights

setspn -a http/<server>:<port> <domain>\<LB Name>
setspn -a http/<server.FQDN>:<port> <domain>\<LB Name>

Examples below

  • setspn -a http/appv:8080 dummy\appv
  • setspn -a http/appv:8081 dummy\appv
  • setspn -a http/appv.dummy.lcl:8080 dummy\appv
  • setspn -a http/appv.dummy.lcl:8081 dummy\appv
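You can verify the registrations afterwards by listing the SPNs on the computer account:

setspn -L dummy\appv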

Step 5:  Your Database

Nothing to add or change to the DB

Step 6:  Your Content Share

Nothing to add or change to the Content Share

Step 7:  Final Step

Now, to make sure the Publishing servers don't go across to the other Management server, I make one final change.

Edit the Hosts file on each App-V Server to point to its own IP for the LB name

example:

If the IP for vAppV01 is 192.168.1.1 and IP for vAppV02 is 192.168.1.2 and the LB Name of AppV is 192.168.1.3, the hosts files should read like this:

Hosts File vAppV01:

192.168.1.1                 AppV

Hosts File vAppV02:

192.168.1.2                  AppV
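If you want to script this entry, an elevated command prompt on each server can append it; a sketch for vAppV01 (use 192.168.1.2 on vAppV02):

echo 192.168.1.1 AppV >> %SystemRoot%\System32\drivers\etc\hosts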


Conclusion:

Now you have successfully set up load balancing for App-V 5. It is not as complicated as it seemed when I first started this journey, but again, I found no single place that documented everything needed for App-V.

“You had me at Login”

This was one of my favorite moments on the expo floor at Citrix Synergy 2013 this week in Anaheim. I took an important client down to spend some time with Pierre and team from Norskale to do a deep dive into their VUEM product.

Norskale VUEM Booth at Synergy

We have been getting stellar results with this product at our client sites and thought it would be perfect for this particular environment. This client is a widely dispersed national company with a large PC desktop footprint and a well established XenApp and XenDesktop deployment that is growing rapidly.

We went through the range of features in the framework, first from the user viewpoint, by logging in, loading a complex set of user settings and modifying them in real time. We looked at various use cases: users populating their own Start Menus and Desktops from an admin-provided list of apps, and users easily assigning their own printers and setting their own defaults. We then looked into the admin console and the details of the integration with MS Roaming Profiles and Citrix User Profile Manager, i.e. how to implement folder redirection and VUEM-provided default settings to exclude files that cause known issues with common apps such as Chrome. We examined the SQL backend and reviewed the architecture for placing the broker services across multiple locations, along with options for SQL clustering and mirroring. We went through the clever CPU and memory optimizations that VUEM provides (which can increase scalability by 25%) and talked at length about integrating the framework into an existing login script/GPO-based environment. The discussion got deeper and more technical and went on until they closed the expo floor.

It was a great session, and when it was all over I turned to my client and asked what he thought. He said, and I quote, "You had me at Login!" While I am used to it by now, a sub-15-second login time is a major revelation (and productivity boost) for most organizations.

He went on to explain that with all the requirements of their environment, from security to drive mappings to user and machine policies, login times were often well over a minute and sometimes quite a bit more. While they value and need all the other great features, just giving users all their settings with a smoking fast login was enough!

VUEM is definitely a fantastic product for managing your user environment, whether it is physical, virtual or both. It is my personal "Best in Show" from Synergy this year. If you are interested in learning more, contact us for a demo.

The Impending IT Crisis (and what to do about it!)

In our consulting group we spend a lot of time discussing, dissecting and analyzing each project we do. This leads to long debates around what ultimately are the best practices in everything from app virtualization, to VDI vs SBC, to storage, networking, hypervisors and "physical versus virtual". While this is personally and professionally very satisfying, it pretty much means that we don't do any "cookie cutter" solutions. Each new project gets the benefit of lessons learned and is uniquely tailored and shaped to be ideal for that particular client environment.

Over time, however, this cycle has been turning over faster and faster. It used to be measured in a few years, and there was a relatively small set of technologies to master and keep up on. Then it accelerated to about a year or so, but with an order of magnitude more details to learn and integrate. Now it seems to be happening in months and weeks, and there is more and more complexity at each turn. There are even times when it seems that important elements of solutions are evolving and changing within just a matter of days! Oh, and once you figure it all out, new versions of the products get released and all-new best practices are needed!

When you do this full time for a living, try really hard and have an "A" Team like we do at Thin Client Computing, you can just about keep up. However, most of our clients are not in the I.T. business; their missions are in other important areas such as healthcare, education, finance and manufacturing. They do I.T. because it is necessary to run, support, enhance and grow their core mission.

In a recent group retreat, Brenda Tinius shared a concern, and a phrase, that pretty much stopped us all in our tracks. She described with great concern what she sees as "The Impending IT Crisis". The crisis is an inflection point at which the technology advances beyond what people can readily absorb and assimilate into their daily processes. IT professionals are kept very busy with the day-to-day tasks of maintenance, repair and updates, and with responding to the daily needs of the business and its users. How can they possibly stay ahead of trends and innovate in a climate of change that is happening faster than human speed?

One example is the fact that the technology industry has been pushing organizations to virtualize servers and desktops for years now. It is becoming generally accepted, and is the stated policy of many organizations today, to virtualize every workload. Enter rapid change: that was a great idea when most workloads were running on legacy 32-bit operating systems and servers had sprawled out all over the data center in a mess of inefficient configurations and underutilized hardware. Hardware virtualization, i.e. the hypervisor, emerged as a useful and effective tool. Over time it has become the central focus of many IT initiatives, but in the time it took to become mainstream, a lot has already changed!

Now there are well proven ways to virtualize at all layers of the stack: hardware, disk, operating system, application, user and presentation. Hardware virtualization is only one solution in a range of options, and it often strikes me as the technology equivalent of Monty Python's classic skit "Mosquito Hunting with a Cannon".

Some would say that this is the whole point of Cloud Computing: you no longer have to buy, build and maintain Information Technology yourself; you simply consume the resources you need and let the provider worry about all the details. That's a great thing, and I agree that in time this is exactly how the world will work, but it is clearly in the future. For now, I just don't see comprehensive offerings with which organizations can completely outsource all their needs to a Cloud Provider and have them truly met.

Just like in the days when mainframes and minis ruled IT, I see users wanting, needing and expecting more than IT can often deliver. Today it is common for users to have better capabilities on their personal smartphone/tablet and their home computer than they have at the office! Every day now we hear about departments within our client companies skirting around the internal IT department to deploy the technologies they need and want themselves. Meanwhile, IT is working harder than ever to provide what it can, and with smaller and smaller budgets. There is a real crisis brewing here, but what can we do about it?

In short, it is time for a new era of innovation, fueled by taking a fresh look at the technology landscape and being willing to let go of old assumptions and ideas. We have to start over again in 2013, wipe the slate clean and take a fresh approach. While most people regard Cloud as hype and self-serving marketing on the part of many industry players, it has taught us the key to avoiding the crisis:

Build Once and Leverage Infinitely

The hardware today is astoundingly powerful and software capabilities are at an all-time high. Tools are readily available to create advanced systems, whether internally or externally hosted, that can deliver virtually any application to any user, device or location. There is no longer any need to hard-code the hardware to the OS, the OS to the apps, the apps to the user or the user to a device.

The key is to rethink how to accomplish this in your own organization. Take a step back, learn what is possible, leverage what is available and flip this whole crisis on its head. I.T. can become a valuable service to the organization once again by adopting these new ideas, rising to the challenge of the Cloud by rethinking and redesigning internal systems to provide seamless and ubiquitous services to all who need them. It is time to stop doing things the old way just because it is familiar, and to take a bold step forward into technologies and designs that let you get ahead of the curve by creating versatile platforms and not just point solutions.

Announcing our Annual Event for 2012!

Join us for “Soar Beyond The Cloud”, Friday, February 24th 2012

For 15 years now it has been a tradition at Thin Client Computing to give back to our customers and the community through special events. Our concept is to eliminate the talking heads, sales pitches and self-serving agendas and simply share real experience about what works best in practice (and what does not work so well!).

We are truly grateful that each year more people attend and tell us how valuable these events are to them. We have continued to seek out unusual and interesting venues and important, relevant topics to explore. We share real-world feedback about technology implementation and best practices, and introduce new, forward-looking concepts and approaches. We also arrange the event so that the majority of time is spent in peer interactions, hands-on demos and deep-dive small group discussions.

This year we are pleased to take this to the next level, based on an idea by our superb Technical/Business Analyst Brenda Tinius, by occupying the Commemorative Air Force Museum in Mesa, Arizona.

Standing among these great machines, created in the Golden Age of American Innovation and Technology, we are honored to share our vision for the future. This is a future in which we are able to bring jobs back to the USA through well proven uses of Virtualization/Cloud Technologies. In 2012 we are at the point where the technology, when properly implemented, simply works.

As a result, businesses and organizations of all kinds can cut costs dramatically while improving productivity, retention, lifestyle and job satisfaction, and can truly compete on a global level in a whole new way.

Please come out and join us for "Soar Beyond The Cloud" on Friday, February 24th 2012; we believe you will find it an inspiring and educational day!

Unsung Heroes…the best of SE Troubleshooting and Technical Data in one place

Today's world of open information exchange is very different from just a few years ago, when practical technical information was hard to come by. In those dark times, companies feared that acknowledging product issues, flaws or workarounds was a source of negative publicity that would hurt sales and affect stock prices. Trade shows were closed to open discussion and web-based information was tightly controlled. The goal then was to make it all seem simple and magical and, above all, to ignore the man behind the binary curtain.

Back in those Dark Ages of the 1990s and early 2000s, it was the Systems Engineers who broke convention, risked their livelihoods and shared the needed information with the community. Doug Brown, Brian Madden and Roy "I am about to be fired for what I am going to say" Tokeshi are great examples, but perhaps the most widely known was (then) rock star Citrix SE Rick Dehlinger. Rick's "MetaFrame Tuning Tips" circulated the globe (often via modem) as THE practical guide to making Citrix MetaFrame installations work. The idea was simple: 1) find out, from your own experience and from others, what works; 2) put the information in one place; and 3) share it with the world.

Nowadays we have open social media, the CTP community, BriForum, Geek Speak, and even real-time free support with Citrix IRC! There are countless ways to find information. However, 140-character conversations and quick-fix blog posts don't always provide the depth of knowledge needed to be successful, and they are not even close to being located in one place.

This week I was attending our Citrix Reseller Technical Briefing and, lo and behold, I heard the words from the mouth of SE rock star Jared Cowart: "I might get fired for what I am about to say…". I felt all warm and fuzzy inside! Later JC shared a set of documents that he, and several others within the SE community, have been developing. I am happy to see that the spirit lives on!

These documents include:

- Citrix Troubleshooting (a fantastically deep PowerPoint that, if you enter, you may never come out of)

- Citrix Troubleshooting Tools (a list of available tools, with the article #s!)

- External Links (to important sites and resources)

- Recommended Training Videos

- Citrix XenDesktop Tools (with article numbers)

You can download these documents as a single zip file here. Please provide feedback; if there is sufficient interest, we will gladly create a centralized repository to maintain and update this data.

Citrix Acquires RingCube- My Ears Must Be Ringing

You know when you are thinking of someone and then they call you? Well, that is how I felt today when I received the announcement that Citrix has acquired RingCube.

Just yesterday I wrote about the "Data Problem" around virtual desktops and applications (see the blog post "C.R.A.P. Is King"). This announcement from Citrix signals an important move in the right direction. What RingCube brings to VDI is the ability to capture all of the Computer Residue of Applications and Personalization (C.R.A.P.) from a standalone PC and layer it on top of a shared, read-only VDI instance. In practice this means that the IT shop can manage a single image for a large number of users and yet provide each user a fully personalized environment (including apps they have installed themselves).

The RingCube approach is to capture all the data created by the user in a standard VHD file container. At runtime this set of data is layered over the shared, read-only desktop instance. With this approach you get a "best of both worlds" scenario: a single desktop image can be shared by many users, e.g. through Provisioning Services, and yet the user experience is fully customizable. We have deployed other solutions to address this problem, but they come with high system costs and add considerable complexity to the environment.

While this doesn't address the larger issue of persisting this data across multiple operating systems and platforms, it does potentially provide a very elegant solution to the "Data Problem" in a pure VDI environment. Although Citrix has not yet made any specific product announcements, I predict that this functionality will drive adoption among organizations that want a simple and cost-effective way to move existing PCs into a centralized VDI solution.

This could potentially be a more elegant answer to the question posed by Gabe Knuth: "Is P2V-ing your existing machines into a VDI environment really an option?" In that article, Gabe explores this and cites one of our customer case studies in which P2V was actually the best way to transition the desktop into VDI. Only time will tell how well this works in practice, but we will be watching carefully and would love to hear your thoughts on the subject in the meantime!

VDI- One Man’s Trash is another Man’s Treasure, or, Why Crap is King….

[Please note, thinclient.net is under renovation; some content and links are still in progress]

I.T. professionals and consultants who have worked for any period of time on hosting (or virtualizing) applications and desktops are acutely aware of the unstructured data that becomes part of a user's environment. On a standalone PC it goes pretty much unnoticed, as it "blends into the woodwork" of the overall system, spreading itself across the registry, file system and user profile. However, when you virtualize applications and desktops, you are faced with trying to capture and re-apply this data as users move across diverse systems. Tim Mangan identified this issue in his 2008 BriForum session "The Data Problem", an early recognition of the problem and a great explanation of its sources and impacts (PS: that's the back of my bald head in the audience). He also has a more recent article on the subject, "How to Describe Layering: the blob, cake, or 3D Tetris".

Over many years of working with Roy Tokeshi, a leading Citrix SE, I heard him refer to this set of data in his technical/business presentations as "Crap". In an effort to validate this concept, and to be able to actually use the word "Crap" in presentations, I came up with the following acronym:

Computer Residue of Applications and Personalization (C.R.A.P.)

I was pretty proud of this one, and then Ron Oglesby pointed out on Twitter: "I love your acronym. But Users are like Hoarders. Some guy's CRAP is their meaningful 'stuff'."

As a result I am releasing an alternate version:

Carefully Retained Applications and Personalization (C.R.A.P.)

So now we can use "Crap" in any context, positive or negative, to refer to this same set of undefined data that attaches itself to users and applications.

This is a strange problem because, on the one hand, our inclination is to simply retain all this data and carry it across whatever environment the user wants to run in. Whenever possible we like to have the settings that a user expects automagically appear (because then people are happy and we are heroes). Yet large portions of this data may be irrelevant (at best) or even incompatible (at worst). This problem shows itself most acutely in mixed environments where applications are delivered across multiple operating systems, and when using other tools such as App-V. For example, a user may have a local desktop OS (i.e. XP), a hosted VDI desktop OS (Win7) and apps or desktops hosted on Windows 2003 and 2008 R2. In these cases there will be corrupted settings, locked sessions, broken profiles, etc. when this data is indiscriminately mixed across platforms.

What is the solution? Well there is no simple answer that can be applied in all cases, but it comes down to knowing your applications and including/excluding the correct portions of the data for the target platform. The details will follow in a future entry, but for now we have identified and understand the challenge this presents….