Citrix Session Recording is Great!!!

I love that SmartAuditor has come back… er… I mean Session Recording. This is an amazing tool. The only issues I have with this product are configuring it without SSL, managing recording retention, and the return to multiple consoles.

I could complain about the multiple consoles, but that would be beating a dead horse. We will leave that alone and hope that Citrix consolidates them eventually.

Citrix has documented very thoroughly how to install Session Recording with SSL. But what if you are working with a client that doesn't have an internal PKI solution and doesn't want to buy a third-party certificate just for this?

To configure Session Recording without SSL, don't choose a certificate during the installation. You would think this would be enough, except that when the website is installed, it is set up to require SSL. To fix this setting, open IIS Manager and navigate to the SessionRecordingBroker site. Choose SSL Settings and uncheck Require SSL.
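If you prefer to script that IIS change, the same setting can be flipped from an elevated prompt with appcmd. This is just a sketch; the application path below assumes the Broker was installed under the Default Web Site, so adjust it to match your IIS layout:

```powershell
# Clear the "Require SSL" flag on the Session Recording Broker application.
# The site/application path is an assumption based on a default install.
& "$env:windir\System32\inetsrv\appcmd.exe" set config `
    "Default Web Site/SessionRecordingBroker" `
    -section:system.webServer/security/access -sslFlags:"" -commit:apphost
```

Setting sslFlags to an empty string is equivalent to unchecking every SSL option for that application in the IIS Manager GUI.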


The main problem is that there is no interactive way to set up archiving of the recordings. If Citrix developed a utility that made it easy to configure the management of recordings, it would be much nicer. As of now, the only way to manage the recordings is with the ICLDB utility.


Citrix has only listed the main commands in its documentation. If you would like to learn more, here is a full list of the options for each command:





ICLDB ARCHIVE /RETENTION:<days> [/LISTFILES] [/MOVETO:<dir>] [/NOTE:<note>]
              [/L] [/F] [/S] [/?]

Archive session recording files older than the retention period specified.
This will mark files in the database as archived. Physical files will not
be moved unless the /MOVETO option is specified. Archiving a large number
of files may take some time.

/RETENTION:<days>  The retention period for session recording files. Files
                   older than this will be marked as archived in the
                   database. Retention period must be greater than 2 days.
/LISTFILES         List the path of files as they are being marked as
                   archived.
/MOVETO:<dir>      Specify a destination directory to which files are to be
                   physically moved. If this option is omitted, files will
                   remain in their original location.
/NOTE:<note>       Attach a text note to the database record for each
                   file that is archived.

/L           Log results and errors to the Windows event logs.
/F           Force command to run without prompting.
/S           Suppress copyright message.
/?           Display command help.
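For example, to mark everything older than 30 days as archived and move the physical files to a separate disk, the invocation looks something like this (the paths and retention value are made up for illustration, and the default install path of the Session Recording server may differ in your environment):

```powershell
# Run from the Session Recording server's Bin directory.
cd "C:\Program Files\Citrix\SessionRecording\Server\Bin"
.\icldb archive /RETENTION:30 /MOVETO:E:\SRArchive /NOTE:"Monthly sweep" /LISTFILES /L /F
```

The /L switch is worth keeping for scheduled runs, since the event log entries are the only record of what the sweep actually did.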




ICLDB DORMANT [/DAYS:<days> | /HOURS:<hours> | /MINUTES:<minutes>]
              [/LISTFILES] [/L] [/F] [/S] [/?]

Display or count the session recording files that are deemed as dormant.
Dormant files are session recordings that never completed due to data loss.
The search for dormant files can be made across the whole database or only
recordings made within the specified last number of days, hours, or minutes.

/DAYS:<days>       Limit the range of the dormant file search to the last
                   number of days specified.
/HOURS:<hours>     Limit the range of the dormant file search to the last
                   number of hours specified.
/MINUTES:<minutes> Limit the range of the dormant file search to the last
                   number of minutes specified.
/LISTFILES         List the file identifier for each dormant file found.
                   If this is omitted, only the count of dormant files will
                   be displayed.

/L           Log results and errors to the Windows event logs.
/F           Force command to run without prompting.
/S           Suppress copyright message.
/?           Display command help.
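A quick way to check for incomplete recordings after an outage, for instance, is to scope the search to the last day:

```powershell
# Count dormant (never-completed) recordings from the last 24 hours,
# listing the file identifier of each one found.
.\icldb dormant /DAYS:1 /LISTFILES /S
```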






ICLDB IMPORT [/LISTFILES] [/RECURSIVE] [/L] [/F] [/S] [/?]
             [<file> …] [<directory> …]

Import session recording files into the database. The metadata contained
within each file will be read and database records created. Once a file is
imported, the file must not be moved or deleted.

/LISTFILES         List the files before importing.
/RECURSIVE         For directories specified, recursively search for files
                   in all sub-directories.
<file>             Name of file to import (wildcards permitted).
<directory>        Name of directory to search for files to import. Files
                   must have an .ICL extension. Sub-directories will be
                   searched if the /RECURSIVE switch is specified.

/L           Log results and errors to the Windows event logs.
/F           Force command to run without prompting.
/S           Suppress copyright message.
/?           Display command help.
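Import is handy after restoring recordings from backup media. A sketch, with an illustrative restore path:

```powershell
# Re-import restored .ICL files, walking all sub-directories,
# and list the files before the import starts.
.\icldb import /LISTFILES /RECURSIVE /L E:\RestoredRecordings
```

Remember the warning above: once imported, leave the files where they are, because the database records point at those physical paths.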





ICLDB LOCATE /FILEID:<id> [/L] [/F] [/S] [/?]

Locate and display the full path to a session recording file given a file
identifier.

/FILEID:<id>   Session recording file identifier or file name to search
               for. This may be specified in either of the following two
               formats:

               File identifier
               (example: 545e8304-cdf1-404d-8ca9-001797ab8090)

               File name
               (example: i_545e8304-cdf1-404d-8ca9-001797ab8090.icl)

/L           Log results and errors to the Windows event logs.
/F           Force command to run without prompting.
/S           Suppress copyright message.
/?           Display command help.
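Using the example identifier from the help text above, either form finds the same file:

```powershell
# By file identifier (GUID)
.\icldb locate /FILEID:545e8304-cdf1-404d-8ca9-001797ab8090 /S

# By file name
.\icldb locate /FILEID:i_545e8304-cdf1-404d-8ca9-001797ab8090.icl /S
```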





ICLDB REMOVE /RETENTION:<days> [/LISTFILES] [/DELETEFILES]
             [/L] [/F] [/S] [/?]

Remove references to session recording files older than the retention
period specified. This will only remove records from the database, unless
the /DELETEFILES option is specified.

/RETENTION:<days>  The retention period for session recording files.
                   Database records older than this will be removed.
                   Retention period must be greater than 2 days.
/LISTFILES         List the path of files as their database record is
                   being removed.
/DELETEFILES       Specify that the associated physical file is to be
                   deleted from disk.

/L           Log results and errors to the Windows event logs.
/F           Force command to run without prompting.
/S           Suppress copyright message.
/?           Display command help.




ICLDB REMOVEALL [/L] [/F] [/S] [/?]

Removes all records from the Session Recording Database and returns the
database to its original state. This command, however, does not remove
physical session recording files from disk. On large databases this
command may take some time to complete.

Use this command with caution, as removal of database records can only be
reversed by restoring from backup.

/L           Log results and errors to the Windows event logs.
/F           Force command to run without prompting.
/S           Suppress copyright message.
/?           Display command help.




ICLDB VERSION [/L] [/F] [/S] [/?]

Display the Session Recording Database schema version.

/L           Log results and errors to the Windows event logs.
/F           Force command to run without prompting.
/S           Suppress copyright message.
/?           Display command help.
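Until Citrix ships a proper retention UI, the closest thing to "set and forget" is chaining the archive and remove commands in a small script and running it from Task Scheduler. The retention values and paths below are illustrative, not recommendations; pick numbers that match your compliance requirements:

```powershell
# Weekly retention sweep (sketch): archive at 30 days, purge database
# records and the physical files at 180 days. /F suppresses prompts so
# it can run unattended; /L writes results to the Windows event logs.
cd "C:\Program Files\Citrix\SessionRecording\Server\Bin"
.\icldb archive /RETENTION:30 /MOVETO:E:\SRArchive /L /F /S
.\icldb remove /RETENTION:180 /DELETEFILES /L /F /S
```

Note that /DELETEFILES is destructive, so test the two commands interactively (without /F) before scheduling them.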


Citrix messes with SQL Always On

XenDesktop 7.9 FMA has issues with SQL Always On….

Databases have been a source of controversy since Citrix released XenDesktop. With the merger of XenApp and XenDesktop, the main solution for database availability is SQL Always On. With SQL Always On you get the benefit of a cluster for OS and SQL protection while still keeping the benefits of a standalone SQL Server. I have deployed XD 7.x countless times using these technologies for many customers, and I never had an issue with SQL Always On and Citrix technologies until 7.9.

Using SQL Always On, I have been able to fail over my SQL server and configure and manage my XD environment without issues. I recently discovered that with 7.9 you are unable to extend the environment while utilizing SQL Always On. The symptoms are simple:

  • Add a new Delivery Controller to an existing XD/XA 7.9 deployment utilizing SQL Always On
  • Receive an innocuous error stating that it is unable to connect to the SQL server
  • The Datastore is now corrupt

When you read the error, the wizard is trying to connect directly to an individual SQL server in your Always On cluster, and the error details state that it is unable to update the security in the database. This is to be expected, since the individual node it is trying to connect to is a secondary replica in the Always On cluster. Weird…

Run the Connect to a Site wizard again, and it will give an error stating that the database cannot be updated, this time showing the correct Always On listener name.

What has happened is that the Datastore is now corrupt. The tables housing the information about your Delivery Controllers are the only part affected. The following screenshot shows the Controller node of Citrix Studio:


Once this has occurred, all aspects of XD/XA continue to work; however, you will be unable to get information about your Delivery Controllers. To resolve this issue, you will need to clear out of the database any information about the new controller that was added.

Citrix has a handy article on how to remove Delivery Controllers manually. The simple explanation is:

  • Open PowerShell and run Get-BrokerController
  • Make note of the SID of the offending Delivery Controller
  • Run the script provided in the article on a Delivery Controller.
    • Populate $DBName with your Site Database name
    • Populate $EvictedSID with the offending Delivery Controller SID
  • This script will create a SQL script that will need to be run against the Datastore
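The first two steps look something like this from any working Delivery Controller (selecting just the two properties you need for the eviction script):

```powershell
# Load the Citrix snap-ins, then list each registered Delivery
# Controller with its SID; note the SID of the broken controller.
Add-PSSnapin Citrix*
Get-BrokerController | Select-Object DNSName, SID
```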

The way to avoid all this hassle is to simply remove your XD/XA databases from the SQL Always On availability group. Leave the databases on the primary server and extend your Delivery Controllers. After you have extended your site, put the databases back in the Always On Availability Group.
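If the SQL Server PowerShell module is available, pulling the databases out of the availability group and adding them back can be scripted. This is only a sketch; the instance path, availability group name, and database names below are placeholders for whatever your environment actually uses:

```powershell
# On the primary replica: remove the Citrix databases from the AG,
# extend the site in Studio, then add the databases back.
$ag = "SQLSERVER:\SQL\PRIMARYNODE\DEFAULT\AvailabilityGroups\CitrixAG"
$dbs = "CitrixSite", "CitrixLogging", "CitrixMonitoring"

foreach ($db in $dbs) {
    Remove-SqlAvailabilityDatabase -Path "$ag\AvailabilityDatabases\$db"
}

# ... add the new Delivery Controller here, then re-join the AG ...

foreach ($db in $dbs) {
    Add-SqlAvailabilityDatabase -Path $ag -Database $db
}
```

The databases stay online on the primary the whole time; only the Always On protection is suspended while you extend the site.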

I have submitted detailed information and logs to Citrix Technical Support and am working with them toward a permanent resolution. Stay tuned!

The Data Center in a Post Virtualization World @ AZ Tech Summit Sept 17th in Phoenix, Arizona

How Fast Can This Go?

The speed of change is itself changing. It's getting faster and faster, and it sometimes feels that if you blink you can miss an important development in technology. A prime example is the proliferation of virtualization in the Data Center. Always wary of proclamations such as "this is the year of VDI" or "everything is moving to the Cloud", I do think it is now valid to characterize the situation today as "Post Virtualization". Virtual machines are now ubiquitous, and there is widespread knowledge about how to configure and optimize the storage and network to support them; in other words, we know how to do this.

So what comes next? I suggest that the next phase is the Data Center re-born: a dynamic pool of resources and productivity for the business to consume. We are moving out of the days where services and solutions are hard-coded, built individually and not re-usable. Up until now, as new applications and resources come online, there is simply more to do, more to know and more to manage. People like to talk about "The Cloud" as the answer, and maybe in time it will be. What we need NOW are real ways to converge and streamline the datacenter and grant easy, secure access to users and data in support of the organizational mission. As a wise man I know once said, "They just want to press the button and get a Banana". Up until now it's all been way too complicated…

The Data Center Re-born

OK, we are not going to just press a button and get everything we want out of a Data Center quite yet. But now there are many straightforward ways to get pretty close to that vision. I have been designing and deploying these solutions since the 1990s, and we are at the best point ever to balance the triangle of Cost-Performance-Capacity. In short, this means that for a very reasonable cost, organizations can now adopt strategies and technologies that get much closer to the dream. It is now completely possible to configure your storage, network, operating systems, applications, data, and user access as fully dynamic services. Three major characteristics of these systems are:

Deploy by Assignment: Deploy users, devices and applications simply by assigning resources, not by the brute force of building machines, installing applications, locking down systems, maintaining hardware, and so on.

Build Once, Re-use Infinitely: Yes, it's real!

Dynamic Allocation of Resources: Storage, compute, applications, user data, and remote access are all available to be consumed as needed on top of a highly available and fluid platform. This platform is lower cost, and its components can be used, re-used and re-purposed as needed (for example, no more new SAN every three years; reuse that storage in new ways). This is not magic; it follows from building the infrastructure and platform services using these new approaches. Once the foundation is properly established, it becomes easy to serve up the applications, tools, data, and ability to collaborate that your users need to serve the mission of the organization.


Join us, and a select group of core technology partners, on September 17th for the AZ Tech Summit in Phoenix to explore these concepts. We will be hosting an Innovative Data Center Pavilion at the entry to the Main Event Hall.

Come speak with experts and learn how our clients are running these streamlined operations and gaining the benefits 24×7. Informal discussions will be going on throughout the day, as well as a Main Conference session:


12:00 pm – 1:00 pm
Tech Theater II
Lunch & Learn: The Data Center in a Post Virtualization World. Presented by: Steve Greenberg, Thin Client Computing


…and an Executive VIP Presentation/Discussion:


2:45 pm – 3:45 pm
VIP Executive Track
Executive Strategies for Mobility and Virtual Data Centers. Presented by: Steve Greenberg, Thin Client Computing


REGISTER HERE and enter the code thin to receive a complimentary registration to this year’s conference. We look forward to seeing you there!


The Impending IT Crisis (and what to do about it!)

In our consulting group we spend a lot of time discussing, dissecting and analyzing each project we do. This leads to long debates around what ultimately are the best practices in everything from app virtualization, to VDI vs. SBC, to storage, networking, hypervisors and "physical versus virtual". While this is personally and professionally very satisfying, it pretty much means that we don't do any "cookie cutter" solutions. Each new project gets the benefit of lessons learned and is uniquely tailored and shaped to be ideal for that particular client environment.

Over time, however, the turnaround on this process has been rapidly shrinking. It used to be measured in a few years, and there was a relatively small set of technologies to master and keep up on. Then it accelerated to about a year or so, but with an order of magnitude more details to learn and integrate. Now it seems to be happening in months and weeks, and there is more and more complexity at each turn. There are even times when it seems that important elements of solutions are evolving and changing within just a matter of days! Oh, and once you figure it out, new versions of the products get released and all new Best Practices are needed!

When you do this full time for a living, try really hard and have an "A" Team like we do at Thin Client Computing, you can just about keep up. However, most of our clients are not in the I.T. business; their missions are in other important areas such as healthcare, education, finance and manufacturing. They do I.T. because it is necessary to run, support, enhance and grow their core mission.

In a recent group retreat, Brenda Tinius shared a concern and a phrase that pretty much stopped us all in our tracks. She described with great concern what she sees as "The Impending IT Crisis". The crisis is an inflection point at which the technology advances beyond what people can readily absorb and assimilate into their daily processes. IT professionals are kept very busy with the day-to-day tasks of maintenance, repair, updates, and responding to the daily needs of the business and its users. How can they possibly stay ahead of trends and innovate in a climate of change that is happening faster than human speed?

One example is the fact that the technology industry has been pushing organizations to virtualize servers and desktops for years now. It is becoming generally accepted, and the stated policy of many organizations today, to virtualize every workload. Enter rapid change: that was a great idea when most workloads were running on legacy 32-bit operating systems and servers had somehow sprawled out all over the data center in a mess of inefficient configurations and underutilized hardware. Hardware virtualization, i.e. the hypervisor, emerged as a useful and effective tool. Over time it has become the central focus of many IT initiatives, but in the time it took to become mainstream, a lot has already changed!

Now there are well-proven ways to virtualize at all layers of the stack: hardware, disk, operating system, application, user and presentation layers. Hardware virtualization is only one solution in a range of options, and it often strikes me as the technology equivalent of Monty Python's classic skit "Mosquito Hunting with a Cannon".

Some would say that this is the whole point of Cloud Computing: you no longer have to buy, build, and maintain Information Technology yourself; you simply consume the resources you need and let the provider worry about all the details. That's a great thing, and I agree that in time this is exactly how the world will work, but that is clearly in the future. For now, I just don't see comprehensive offerings in which organizations can completely outsource all their needs to a Cloud Provider and have them truly met.

Just like in the days when mainframes and minis ruled IT, I see users wanting, needing and expecting more than IT can often deliver. Today it is common for users to have better capabilities on their personal smartphone, tablet and home computer than they have at the office! Every day now we hear about departments within our client companies skirting around the internal IT department to deploy the technologies they need and want themselves. Meanwhile, IT is working harder than ever to provide what it can, with smaller and smaller budgets. There is a real crisis brewing here, but what can we do about it?

In short, it is time for a new era of innovation, and I see this as fueled by taking a fresh look at the technology landscape and being willing to let go of old assumptions and ideas. We have to start over again in 2013, wipe the slate clean and take a fresh approach. While most people regard Cloud as hype and self-serving marketing on the part of many industry players, it has taught us the key to avoiding the crisis:

Build Once and Leverage Infinitely


The hardware today is astoundingly powerful and software capabilities are at an all-time high. Tools are readily available to create advanced systems, whether internally or externally hosted, that can deliver virtually any application to any user, device or location. There is no longer any need to hard-code the hardware to the OS, the OS to the Apps, the Apps to the User or the User to a device.

The key is to rethink how to accomplish this in your own organization. Take a step back, learn what is possible, leverage what is available and flip this whole crisis on its head. I.T. can become a valuable service to the organization once again by adopting these new ideas, rising to the challenge of the Cloud by rethinking and redesigning internal systems to provide seamless and ubiquitous services to all who need them. It is time to stop doing things the old way just because they are familiar, and to take a bold step forward into technologies and designs that let you get ahead of the curve by creating versatile platforms, not just point solutions.

Citrix Acquires RingCube- My Ears Must Be Ringing

You know when you are thinking of someone and then they call you? Well, that is how I felt when I received the announcement today that Citrix has acquired RingCube.

Just yesterday I wrote about the "Data Problem" around Virtual Desktops and Applications (see the blog entry C.R.A.P. Is King). This announcement from Citrix signals an important move in the right direction. What RingCube brings to VDI is the ability to represent all of the Computer Residue of Applications and Personalization (C.R.A.P.) from a standalone PC and layer it on top of a shared, read-only VDI instance. In practice this means that the IT shop can manage a single image for a large number of users and yet provide each user a fully personalized environment (including apps they have installed themselves).

The RingCube approach is to capture all the data created by the user in a standard VHD file container. At runtime this set of data is layered over the shared, read-only desktop instance. In this approach you get a "best of both worlds" scenario: a single desktop image can be shared with many users, e.g. through Provisioning Services, and yet the user experience is fully customizable. We have deployed other solutions to address this problem, but they come with high system costs and add considerable complexity to the environment.

While this doesn’t address the larger issue of persisting this data across multiple operating systems and platforms, it does potentially provide a very elegant solution to the “Data Problem” in a pure VDI environment. Although Citrix has not yet made any specific product announcements, I predict that this functionality will influence adoption for organizations that want a simple and cost effective way to move existing PC’s into a centralized VDI solution.

This could potentially be a more elegant solution to the question posed by Gabe Knuth: "Is P2V-ing your existing machines into a VDI environment really an option?" In that article, Gabe explores this and cites one of our customer case studies in which P2V was actually the best way to transition the desktop into VDI. Only time will tell how well this works in practice, but we will be watching carefully and would love to hear your thoughts on the subject in the meantime!



VDI- One Man’s Trash is another Man’s Treasure, or, Why Crap is King….

[Please note, this site is under renovation- some content and links are still in progress]

I.T. professionals and consultants who have worked for any period of time on hosting (or virtualizing) applications and desktops are acutely aware of the unstructured data that becomes part of a user's environment. On a standalone PC it goes pretty much unnoticed as it "blends into the woodwork" of the overall system, spreading itself across the registry, file system and user profile. However, when you virtualize applications and desktops, you are faced with trying to capture and re-apply this data as users move across diverse systems. Tim Mangan identified this issue in his 2008 BriForum session "The Data Problem", an early recognition of the problem and a great explanation of its sources and impacts (PS: that's the back of my bald head in the audience). He also has a more recent article on the subject, "How to Describe Layering: the blob, cake, or 3D Tetris".

Over many years of working with Roy Tokeshi, a leading Citrix SE, I noticed he would refer to this set of data in his technical and business presentations as "Crap". In an effort to validate this concept, and to be able to actually use the word "Crap" in presentations, I came up with the following acronym:

Computer Residue of Applications and Personalization (C.R.A.P)

I was pretty proud of this one, and then Ron Oglesby pointed out on Twitter: "I love your acronym. But Users are like Hoarders. Some guy's CRAP is their meaningful 'stuff'."

As a result I am releasing an alternate version:

Carefully Retained Applications and Personalization (C.R.A.P)

So now we can use "Crap" in any context, positive or negative, to refer to this same set of undefined data that attaches itself to users and applications.

This is a strange problem, because on the one hand our inclination is to simply retain all this data and carry it across whatever environment the user wants to run in. Whenever possible we like to have the settings that a user expects automagically appear (because then people are happy and we are heroes). Yet large portions of this data may be irrelevant (at best) or even incompatible (at worst). This problem shows itself most acutely in mixed environments where applications are delivered across multiple operating systems, and when using other tools such as App-V. For example, a user may have a local desktop OS (i.e. XP), a hosted VDI desktop OS (Win7) and apps or desktops hosted on Windows 2003 and 2008 R2. In these cases there will be corrupted settings, locked sessions, broken profiles, etc. when this data is indiscriminately mixed across platforms.

What is the solution? There is no simple answer that applies in all cases, but it comes down to knowing your applications and including or excluding the correct portions of the data for the target platform. The details will follow in a future entry, but for now we have identified and understand the challenge this presents…