App-V 5 and Citrix XenDesktop/XenApp 7.x

Citrix has been working hard to make sure that App-V 5 is fully supported in XA/XD. This is greatly appreciated and a key integration for a wide range of users. However, it doesn’t quite work the way we would hope, as there is a pretty big “Gotcha”. Read on….

I deploy App-V and Citrix on a daily basis and teach hands-on classes to many of our clients, and we have approached this a number of different ways. For users wanting a published desktop, the traditional approach was to create published desktops in XA/XD and then deploy applications to the desktop using the standard App-V tools. The App-V client configuration for desktop use would include setting GPOs with the following settings (a PowerShell equivalent is sketched after the list):

  • Enable Package Scripts
  • Reporting Server
  • Configure publishing servers
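For reference, the same settings can also be applied with the App-V 5 client's PowerShell module instead of GPOs. A minimal sketch, assuming the AppvClient module is present; the server names and URLs are placeholders:

Import-Module AppvClient

# Enable package scripts (equivalent to the Enable Package Scripts setting)
Set-AppvClientConfiguration -EnablePackageScripts $true

# Point the client at a reporting server (placeholder URL)
Set-AppvClientConfiguration -ReportingEnabled $true -ReportingServerURL 'http://appv-rpt.corp.local:8082'

# Register a publishing server (placeholder URL) and refresh it
Add-AppvPublishingServer -Name 'AppVPub01' -URL 'http://appv-pub.corp.local:8083'
Sync-AppvPublishingServer -ServerId 1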

 

Simple, effective, and it worked great: configure Citrix, add the App-V client to the image, configure App-V, and let each product do its job. This is still the method to follow for published desktops utilizing App-V applications.

For published applications, however, if you configure your environment this way, things just don’t function properly. The generic message “Cannot start app” starts to appear. This is where the problem currently lies.

To help with this, Citrix recently introduced a node in Desktop Studio that handles app publishing for App-V applications from within the Citrix console. According to the document released by Citrix (https://docs.citrix.com/en-us/xenapp-and-xendesktop/7-7/install-configure/app-v.html), the guidance is essentially to follow the Microsoft best practices and then turn the Microsoft best practices off.

Why would I configure all the best practices from Microsoft only to disable them?! According to the Citrix document, leaving the best practices enabled will stop published applications from launching. What the article neglects to say is why that happens. The Citrix-built tool CtxAppVLauncher.exe is designed to perform the launch of the App-V application; it is what handles the communication with the App-V publishing server and packages.

If you want to use published applications, you will need to follow the Citrix documentation to invoke CtxAppVLauncher. OK, so I will configure it for publishing applications. The key is to make sure that the UserRefreshOnLogon setting for the publishing server is set to FALSE. When this is set to false, during the login process Citrix will contact the publishing server, download and add the package to the client, and launch. This is great, but now I can’t have users refresh their applications on logon for a desktop.
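A quick way to verify and flip this setting on the client is through the App-V PowerShell module; a minimal sketch, assuming a single publishing server with ServerId 1:

# Check the current settings of each registered publishing server
Get-AppvPublishingServer

# Disable user refresh on logon, per the Citrix guidance (ServerId 1 assumed)
Set-AppvPublishingServer -ServerId 1 -UserRefreshOnLogon $false

Note that if the value is being delivered by GPO, the GPO setting wins, so make the change there as well.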

This translates to needing 2 delivery groups:

  • One for applications
  • One for Desktops

 

My feeling is that if App-V is used correctly, I should be able to share the resources and consolidate my environment.

The other issue is load balancing with App-V. Using Citrix’s GUI, it is impossible to add more than one server. If you were to use a load-balanced name or DNS round robin, XA/XD will not be able to communicate with the App-V publishing servers. This means I have a single point of failure because Citrix wanted to create its own method of publishing App-V. I know you can add more servers through POSH, but they do not show in the GUI, and most admins are going to look in the GUI rather than double-check the config in POSH.

Why was it necessary to create their own publishing method instead of using the built-in solution already provided by Microsoft? This seems like wasted effort on their part.

In the end, I have found many ways for this setup to fail.

App-V integration is supported and needed by the Citrix products. Just be sure to follow the Citrix document to the letter and everything should work… except for the gotchas above.

Welcome to Citrix Workspace Environment Manager

By Steve Greenberg and Hal Lange

Citrix recently acquired Norskale and in the process added the VUEM (Virtual User Environment Manager) and Transformer software to their portfolio. The new name for VUEM is Citrix Workspace Environment Manager (WEM), and it is available as a free entitlement for existing XenApp/XenDesktop customers. Check out Hal’s new video series to help get started using and optimizing WEM in your environment.

We have been working closely with WEM since 2011 and are proud to have contributed to key design elements and features of the product over the years. We have many customers running WEM across a wide range of industries and use cases on XenApp, XenDesktop and physical PCs. Regardless of environment size or type, all of our customers have experienced one key outcome: an incredible user experience. It is not unusual for us to be called in to help clients with login times of 90 seconds or even 2-3 minutes (or more). WEM routinely provides login times of 10-15 seconds that stay constant over the course of years, i.e. they do not degrade the way standard profile types and solutions do.

Beyond fast logins, though, it is often the uncanny performance optimizations WEM brings that get the attention. WEM can expand the scalability of a typical XenApp server by 50-60%, and not just by packing more users in: it does so while actually IMPROVING THE USER EXPERIENCE ON THE VERY SAME HARDWARE, MEMORY, STORAGE AND SERVER IMAGE!

These optimizations apply to CPU, RAM and I/O and touch every aspect of the user experience for the better, whether on XenApp, VDI or even physical PCs. When a process takes unnecessary CPU cycles, it is coaxed down; when an app maps out unnecessary/unused RAM, WEM returns it to the system; and when a process selfishly consumes IOPS, it gets its hand slapped and is made to behave. The end result is high-performance server-based hosted computing like you have always wanted, and VDI/physical desktops that can FULLY leverage the powerful hardware they run on. If you were wondering where all the performance is with our spectacular new PC/server hardware, WEM brings it home!

But wait, there’s more……

As a GPO replacement for the user environment, it increases login speed and gives granular control of user settings. With users able to manage their own applications and printers without access to the system, it strikes a great balance between security and user experience.

One of the best parts of this product is also its best-kept secret: the database is in a standard readable format. User settings are normalized into standard readable/writable values, not a file system or database of mysterious binary hives and blobs. With simple SQL commands you can easily edit, import, export and customize user settings at will, individually or in batches. It has been simple for us and our customers to import print servers, add registry settings, and create applications and settings with very basic SQL actions (a sketch follows below).
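To illustrate, here is a minimal sketch using Invoke-Sqlcmd. The instance, database, table and column names are illustrative rather than the exact WEM schema, so inspect your own database before running anything like this:

Import-Module SqlServer

# Peek at the application shortcuts defined in the (illustrative) VUEMApps table
Invoke-Sqlcmd -ServerInstance 'SQL01' -Database 'VUEM' -Query "SELECT TOP 10 * FROM VUEMApps"

# Batch-update a setting, e.g. repoint shortcut targets after a server move
# (TargetPath is a hypothetical column name)
Invoke-Sqlcmd -ServerInstance 'SQL01' -Database 'VUEM' -Query "UPDATE VUEMApps SET TargetPath = REPLACE(TargetPath, '\\oldsrv\', '\\newsrv\')"

As always, back up the database before making batch edits.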

In addition, WEM provides an easy to use interface with a vast set of filters/settings that provide all of the most commonly used Citrix settings right from the GUI. It is easy to learn, easy to own and lends itself to delegation of common administrative tasks to other groups within the organization such as Desktop Admin/Techs and the Help Desk to provide direct and speedy resolution for End Users.

The acquisition by Citrix not only helps solidify XenApp/XenDesktop as the leading application delivery platform but also adds to it a best-in-class user experience. The cherry on top is that after 6+ years of large-scale experience in live enterprise environments, we can count the number of product support tickets on one hand. This technology just works, and it is THAT GOOD.

Let us know if you have any questions!

Hal and Steve

 

Citrix Session Recording is Great!!!

I love that SmartAuditor has come back….. er… I mean Session Recording. This is an amazing tool. The only issues I have with this product are configuring it without SSL, managing retention, and, once again, being back to multiple consoles.

I could complain about the multiple consoles, but that would be beating a dead horse again and again. We will leave that alone and hope that Citrix will consolidate eventually.

Citrix has documented very thoroughly how to install Session Recording with SSL. But what if you are working with a client that doesn’t have an internal PKI solution and doesn’t want to buy a third-party certificate just for this?

To configure Session Recording without SSL, don’t choose a certificate during the installation. You would believe this to be enough, except that when the website is installed, it is set up to require SSL. To fix this setting, open IIS Manager, navigate to the SessionRecordingBroker site, choose SSL Settings, and uncheck Require SSL.
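If you would rather script that last step, here is a minimal sketch with the WebAdministration module; the application path under the Default Web Site is an assumption, so confirm it in IIS Manager first:

# Clear the Require SSL flag on the SessionRecordingBroker application
Import-Module WebAdministration

Set-WebConfigurationProperty -PSPath 'IIS:\' -Location 'Default Web Site/SessionRecordingBroker' -Filter 'system.webServer/security/access' -Name 'sslFlags' -Value 'None'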


The main problem is that there is no interactive way to set up archiving of the recordings. If Citrix could develop a utility that made managing recordings easy, it would be much nicer. As of now, the only way to manage the recordings is with the icldb utility (a scheduled-task workaround is sketched after the command reference below). https://docs.citrix.com/en-us/xenapp-and-xendesktop/xenapp-6-5/xenapp65-w2k8-wrapper/ps-sa-library-wrapper-v2/ps-sa-reference-wrapper-v2.html

 

Citrix has only listed the main commands in their document. If you would like to learn more, here is a full list of the options for each command:

 

ARCHIVE:

ICLDB ARCHIVE /RETENTION:<days> [/LISTFILES] [/MOVETO:<dir>] [/NOTE:<note>] [/L] [/F] [/S] [/?]

Archive session recording files older than the specified retention period. This marks files in the database as archived. Physical files are not moved unless the /MOVETO option is specified. Archiving a large number of files may take some time.

/RETENTION:<days>  The retention period for session recording files. Files older than this will be marked as archived in the database. The retention period must be greater than 2 days.
/LISTFILES         List the path of each file as it is marked as archived.
/MOVETO:<dir>      Destination directory to which files are physically moved. If this option is omitted, files remain in their original location.
/NOTE:<note>       Attach a text note to the database record of each file that is archived.
/L                 Log results and errors to the Windows event logs.
/F                 Force command to run without prompting.
/S                 Suppress copyright message.
/?                 Display command help.

DORMANT:

ICLDB DORMANT [/DAYS:<days> | /HOURS:<hours> | /MINUTES:<minutes>] [/LISTFILES] [/L] [/F] [/S] [/?]

Display or count the session recording files that are deemed dormant. Dormant files are session recordings that never completed due to data loss. The search for dormant files can be made across the whole database or only within recordings made in the last specified number of days, hours, or minutes.

/DAYS:<days>       Limit the dormant file search to the last number of days specified.
/HOURS:<hours>     Limit the dormant file search to the last number of hours specified.
/MINUTES:<minutes> Limit the dormant file search to the last number of minutes specified.
/LISTFILES         List the file identifier of each dormant file found. If this is omitted, only the count of dormant files is displayed.
/L                 Log results and errors to the Windows event logs.
/F                 Force command to run without prompting.
/S                 Suppress copyright message.
/?                 Display command help.

IMPORT:

ICLDB IMPORT [/LISTFILES] [/RECURSIVE] [/L] [/F] [/S] [/?] [<file> …] [<directory> …]

Import session recording files into the database. The metadata contained within each file is read and database records are created. Once a file is imported, it must not be moved or deleted.

/LISTFILES         List the files before importing.
/RECURSIVE         For the directories specified, recursively search for files in all sub-directories.
<file>             Name of a file to import (wildcards permitted).
<directory>        Name of a directory to search for files to import. Files must have an .ICL extension. Sub-directories are searched if the /RECURSIVE switch is specified.
/L                 Log results and errors to the Windows event logs.
/F                 Force command to run without prompting.
/S                 Suppress copyright message.
/?                 Display command help.

LOCATE:

ICLDB LOCATE /FILEID:<id> [/L] [/F] [/S] [/?]

Locate and display the full path to a session recording file given a file identifier.

/FILEID:<id>       Session recording file identifier or file name to search for. This may be specified in either of the following two formats:
                   xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx (example: 545e8304-cdf1-404d-8ca9-001797ab8090)
                   i_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.icl (example: i_545e8304-cdf1-404d-8ca9-001797ab8090.icl)
/L                 Log results and errors to the Windows event logs.
/F                 Force command to run without prompting.
/S                 Suppress copyright message.
/?                 Display command help.

REMOVE:

ICLDB REMOVE /RETENTION:<days> [/LISTFILES] [/DELETEFILES] [/L] [/F] [/S] [/?]

Remove references to session recording files older than the specified retention period. This only removes records from the database, unless the /DELETEFILES option is specified.

/RETENTION:<days>  The retention period for session recording files. Database records older than this will be removed. The retention period must be greater than 2 days.
/LISTFILES         List the path of each file as its database record is removed.
/DELETEFILES       Also delete the associated physical files from disk.
/L                 Log results and errors to the Windows event logs.
/F                 Force command to run without prompting.
/S                 Suppress copyright message.
/?                 Display command help.

REMOVEALL:

ICLDB REMOVEALL [/L] [/F] [/S] [/?]

Remove all records from the Session Recording database and return the database to its original state. This command does not remove physical session recording files from disk. On large databases this command may take some time to complete. Use this command with caution, as removal of database records can only be reversed by restoring from backup.

/L                 Log results and errors to the Windows event logs.
/F                 Force command to run without prompting.
/S                 Suppress copyright message.
/?                 Display command help.

VERSION:

ICLDB VERSION [/L] [/F] [/S] [/?]

Display the Session Recording database schema version in the format <major>.<minor>.<build>.<patch>.

/L                 Log results and errors to the Windows event logs.
/F                 Force command to run without prompting.
/S                 Suppress copyright message.
/?                 Display command help.
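Since there is no built-in scheduler for archiving, one workaround is a scheduled task that runs ICLDB ARCHIVE nightly. A minimal sketch; the icldb.exe path, retention value and archive share are assumptions to adjust for your installation:

# Register a nightly task that archives recordings older than 30 days
$icldb   = 'C:\Program Files\Citrix\SessionRecording\Server\Bin\icldb.exe'
$action  = New-ScheduledTaskAction -Execute $icldb -Argument 'ARCHIVE /RETENTION:30 /MOVETO:\\fileserver\SRArchive /F /S /L'
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName 'SessionRecordingArchive' -Action $action -Trigger $trigger -User 'SYSTEM'

The /F switch keeps the run unattended, and /L writes the results to the Windows event logs.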

 

Citrix messes with SQL Always On

XenDesktop 7.9 FMA has issues with SQL Always On….

Databases have been a source of controversy since Citrix released XenDesktop. With the merger of XenApp and XenDesktop, the main solution for database availability is SQL Always On. With SQL Always On you get the benefit of a cluster for OS and SQL protection while still having the benefits of a standalone SQL Server. I have deployed XD 7.x countless times using these technologies for many customers and never had an issue with SQL Always On and Citrix technologies until 7.9.

Using SQL Always On, I have been able to fail over my SQL server and configure and manage my XD environment without issues. I have recently discovered that with 7.9 you are unable to extend the environment while utilizing SQL Always On. The symptoms are simple:

  • Add a new Delivery Controller to an existing XD/XA 7.9 deployment utilizing SQL Always On
  • Receive an innocuous error stating it is unable to connect to the SQL server
  • The datastore is now corrupt

Read the error closely and you see that the wizard is trying to connect directly to an individual SQL server in your Always On cluster. The error details state that it is unable to update the security in the database. This is to be expected, since the node it is trying to reach is a secondary node in the Always On availability group. Weird…

Run the connect-to-a-site wizard again and it gives another error stating that the database cannot be updated, this time showing the correct Always On name.

What has happened is that the datastore is now corrupt. The tables housing the information about your Delivery Controllers are the only part affected. The following screenshot shows the Controller node of Citrix Studio:

[screenshot: the Controller node in Citrix Studio]

Once this has occurred, all aspects of XD/XA continue to work; however, you will be unable to get information about your delivery controllers. To resolve the issue, you will need to clear any information about the newly added controller out of the database.

Citrix has this handy article (https://support.citrix.com/article/CTX139505/) to remove Delivery Controllers manually.  The simple explanation is:

  • Open PowerShell and run Get-BrokerController (see the sketch after this list)
  • Make note of the SID of the offending Delivery Controller
  • Run the script provided in the article on a delivery controller:
    • Populate $DBName with your site database name
    • Populate $EvictedSID with the offending Delivery Controller SID
  • This script will create a SQL script that will need to be run against the datastore
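The first two steps look like this; a minimal sketch using the XD 7.x SDK snap-in:

# Load the 7.x Broker SDK and list the controllers with their SIDs
Add-PSSnapin Citrix.Broker.Admin.V2
Get-BrokerController | Select-Object DNSName, SID, State

Copy the SID of the offending controller into the $EvictedSID variable of the CTX139505 script.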

The way to avoid all this hassle is simply to remove your XD/XA DBs from the SQL Always On availability group first. Leave the DBs on the primary server and extend your delivery controllers. After you have extended your site, put the DBs back into the Always On availability group (a T-SQL sketch follows).
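The T-SQL involved is short. A minimal sketch run against the primary replica via Invoke-Sqlcmd, with the instance, availability group and database names as placeholders:

# Pull the site database out of the availability group (placeholder names)
Invoke-Sqlcmd -ServerInstance 'SQL01' -Query "ALTER AVAILABILITY GROUP [CitrixAG] REMOVE DATABASE [CitrixSite];"

# ...extend the site in Studio, then put the database back
Invoke-Sqlcmd -ServerInstance 'SQL01' -Query "ALTER AVAILABILITY GROUP [CitrixAG] ADD DATABASE [CitrixSite];"

Repeat for the logging and monitoring databases if they are in the availability group as well.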

I have submitted detailed information and logs to Citrix Technical Support and am working with them toward a permanent resolution- Stay Tuned!

Hyper-convergence, Zerto and Disaster Recovery- a Quiet Revolution is underway…

In technology we tend to notice the big headlines that change the I.T. world, but sometimes the biggest changes come along quietly. Think about the early days of Citrix or VMWare, both started as niche technologies that steadily grew over time to change, well, everything. As stand-alone technologies they were powerful, but when combined together the potential became much greater than the sum of the parts.

A new revolution is stirring now with Hyper-Convergence and most have heard of it and many have deployed it already. At the same time though a quieter revolution is occurring, courtesy of Zerto, that focuses on new approaches to Replication and Disaster Recovery. Although this sounds dangerously close to “Backup”, i.e. BORING, it is one of the most exciting developments in recent times.

The basic idea is simple. Just as we have done with compute, storage and network virtualization, the power of any given function shifts from proprietary hardware/software combinations to a simplified software solution. In the case of Zerto, replication and DR functions have moved up the stack to provide replication at the Virtual Machine level. So instead of replication strategies that are dependent on specific hardware and hypervisor products/configurations, Zerto allows the replication and activation of protected VMs across hypervisors, and, on any type of storage or compute platform.

So how does this relate to hyper-convergence? Well, the power is in the combination of features: now you can shrink your entire infrastructure to a fraction of its current size, lower the cost, eliminate most of the complexity, minimize the number of infrastructure vendors and platforms, and have nearly instantaneous DR capabilities! Here’s a quick case study from an internal memo:

Arizona Tile maintains (2) data centers across their campus; each one has about 3-4 racks of gear, much of it older equipment. In the past we built a highly redundant storage/hypervisor/Citrix environment that spans these two data centers via direct linked fiber. Citrix has been the key enabler for their geographically dispersed operations, and they rely heavily on thin client devices. They expressed the desire to move to a co-location facility about a year ago, and we assisted by setting up a third storage node for them at the COLO. This allowed them to have an off-site copy of their data volumes and a future location for primary production.

Now here is where Citrix, Zerto and Atlantis HyperScale together took this to a new level:

  • We are able to run ALL OF THEIR PRIMARY WORKLOADS across all cities and states on only (2) Atlantis CX12 units–4U total!
  • These are placed at COLO significantly reducing the infrastructure burden on the staff
  • Zerto will migrate existing VMs from on campus to the COLO with just a short reboot per VM. NO SPECIAL STORAGE PRODUCTS REQUIRED, no SRM, NetApp mirroring, etc. This same software, less than $1000 per VM, will be used to fail back in DR and testing scenarios. The customer can even migrate to Hyper-V if they choose, using the same product
  • The existing on-site SAN and compute resources are re-purposed as the DR facility (at no additional cost)!

The combination of these two technologies has brought one of the most rapid advances of my career to the Architect’s Toolbox. The ability to shrink an entire data center environment and move it to a professional COLO, all while lowering costs, increasing performance, implementing DR and reducing complexity, is staggering. Oh, did I mention at a lower cost? Welcome to the Revolution!!

 

 

Why Citrix Workspace Cloud gets it right

Ok, I am a server hugger, there I said it. I like my servers local, connected by high bandwidth, right where I can see them. I keep multiple copies of my data on-site and use cloud based backup too. I love cloud technology- it’s new, it’s cool and I leverage it in many ways- but the meat and potatoes for me stays on site.

 


My home office servers: right where I can see them, touch them, and, even hug them

 

At TCC, our practice is about giving our clients the most reliable, high performance systems possible, and, we are asked to constantly drive costs down in the process. Our solutions power hospitals, financial institutions, colleges, not-for-profit organizations, and, businesses of many different types and sizes. Cloud based solutions are becoming more widely adopted, but in our slice of the world they have mostly been either specialized solutions, i.e. Salesforce.com, or mundane services like email which many clients prefer not to have to maintain themselves. For the mission critical systems, the best performance and business benefit comes not from public cloud based hosting, but from where the data lives: most often privately owned on-premise or co-located solutions.

Before you call me a luddite, consider that while we like to focus on desktops and apps, in the end it’s about the data organizations need to function. This data no longer exists in silos, i.e. AN app and A database sitting neatly beside each other. For most of our clients their day to day “system” is actually an amalgam of many systems and constantly changing data sources. To pick up part of that environment and move it to “the cloud” would introduce latency and new points of failure for critical resources. Over time this will certainly change, but for now it is a practical reality for the majority of our client environments.

Yet, we have tasted the benefits of scale, availability, on demand resources and ease of updates that Cloud based services bring. So what’s a Server Hugger to do? Enter, Citrix Workspace Cloud  (…cue dreamy music)

While many in the industry have been busy trying to push organizations into the cloud with a ‘hybrid’ approach, i.e. splitting resources between on and off prem, it doesn’t solve for what the customer really needs and wants. Customers want the best of both worlds: run their resources where the data lives AND manage those resources from the cloud just like they would with any subscription based Cloud service. Citrix Workspace Cloud does exactly that; it offloads the complex backend systems that make up traditional Citrix environments and replaces them with a very simple Cloud service. The nasty back-end stuff is taken care of by them; your workloads run where YOU want them to run. This frees the customer up to focus on what they really care about: the business, user experience, user productivity and innovation.

We got in early as an Alpha/Beta tester. With the help of Harsh Gupta, Joe Shonk and I installed, tested and pounded on it. At first, it wasn’t hard to break which was okay because Joe likes to give Harsh feedback 🙂

Then, before our eyes, we watched as the product iterated, changed and improved (without any action or effort on our side). It got to the point where we would send Harsh an issue report and he would say things like “Oh, we updated the service last night, try it again”, and sure enough it was fixed. In more than 20 years of working with Citrix and related products, I had never experienced an enterprise product being improved and innovated like you would expect of a website, Gmail, iTunes, etc. On top of it, these updates didn’t require any user intervention!

While you can’t eat your cake and have it too, you CAN have both on premise workloads and all the characteristics of Cloud Based management with Citrix Workspace Cloud , take a look, it’s the future….

The Dream of the Thin Client 90’s is alive with NVIDIA GRID 2.0

 

 

Back in the day we called it Thin Client/Server Based Computing, and the dream was ubiquitous access to desktops and apps from anywhere, on any device. We worked really hard over the years and made great strides toward that goal. Every project since has been a laboratory for further improving compatibility, scalability, reliability and user experience. Great technologies from Citrix, Microsoft, VMWare, etc., have jet-fueled these efforts, and the systems we deliver today are amazing. Yet for some use cases, the user experience of remote desktops and applications didn’t quite reach 100% that of modern, high-powered local GPU based devices.

I have been an enthusiastic supporter of NVIDIA Grid capabilities as it overcame this very limitation for high end use cases. This has enabled scientists, engineers, designers, etc. to gain the same benefits mainstream businesses have enjoyed with Thin Client technologies for many years.

 


A laptop accessing an array of remote applications including automotive design, 3D modeling, image processing and video editing, all leveraging NVIDIA GPUs

 

NVIDIA upped the ante today with their announcement at VMworld 2015 of GRID version 2.0. They have expanded the serviceable use cases in important ways by increasing user density and application performance, supporting a wider range of servers and blade systems, and adding Linux guest OS support. There is new software, and there are new Maxwell based cards as well:

 

                       Tesla M60                   Tesla M6
GPU                    Dual High-end Maxwell       Single High-end Maxwell
CUDA Cores             4096                        1536
Memory Size            16 GB GDDR5                 8 GB GDDR5
H.264 1080p30 streams  36                          18
GRID vGPU CCU          2 / 4 / 8 / 16 / 32         1 / 2 / 4 / 8 / 16
Form Factor            PCIe 3.0 Dual Slot          MXM
Power                  240W / 300W (225W opt)      100W (75W opt)
Thermal                active / passive            bare board

 

These new products from NVIDIA not only expand the existing high-end capabilities but, even more significantly, provide a big experience boost for us humble business users. Why does that even matter, aren’t we just doing flat 2D applications? Well, more and more these days, browsers and productivity software incorporate graphical and 3D processing, and it’s not just “eye candy”. Take for example the richness of the info you gain from your daily internet browsing and the time you spend on your latest PowerPoint presentation. Your experience is significantly enhanced by the addition of vGPU, making your work much more productive (and enjoyable!). You get all the benefits of virtualization and centrally hosted compute with the visual experience you expect from a powerful local graphics processor.

The ability to support up to 128 users per system democratizes this experience for the larger user sets we commonly deploy for corporate clients. When it comes to high-end users, i.e. the rocket scientists, we expect to leverage expensive hardware for smaller, more specialized groups. Now awesome GPU goodness is available to all on commonly used rack servers and blade systems. So let’s go out and re-start our XenApp/XenDesktop engines with vGPU and crank up the User Happiness factor. The Dream of the 90’s is Alive with NVIDIA GRID 2.0!

 

Down the Rabbit Hole of IoT Part II, or, how an innocent hobby leads to creating an IoT Robo Laser Octoblu Smoke Breathing Kitty on Splunk!

 

If you are reading this, then you have read Part I and taken the BluOctoPill. Welcome down into the Rabbit Hole of IoT!

 

Go with the Flow

The heart of this demo is the Octoblu framework, an amazing mesh network that acts as a powerful IoT gateway between generators and consumers of data. Running in a highly resilient cloud framework, it supports multiple protocols, programming languages and platforms. The basic unit of operation is a “Flow”, an instance of Octoblu that can do whatever you tell it to and can be created and managed in a drag-and-drop, web-based interface. In the simplest case, you might create a flow which says “When I post ‘#bluelight’ on my Twitter account, change my lights to blue”. What happens there is the Octoblu flow scanning Twitter for a post by you; when it sees the data “#bluelight”, it triggers the preset action you described of changing your lights to blue. How did it do that? You run an instance of the gateway on some device of your own and connect it to your wifi-based light. Now any condition, input, data, etc. that you define can control your lights via the Internet.

You can run an instance of the gateway on many different devices such as PC, Mac, Android, iOS, Arduino, etc. Based on the ambitious I/O requirements of this project, we chose a Raspberry Pi for its processing capabilities and its number of GPIOs (General Purpose Input/Output pins) for controlling the various devices.

 


Moheeb Zara building the Raspberry Pi image running Gateblu

Here is a screenshot of the Flow we used in Octoblu to operate this demo. You can see that it is a drag-and-drop, flowchart type of interface.

 

[screenshot: the Octoblu flow used in the demo]

Any pre-defined object can simply be dropped into the flow and connected with the mouse. Bringing up the properties of an object lets you set its value, i.e. “blue” for the light. The big button is a simple trigger that allows you to start the actions linked to it manually. In the light example, clicking the trigger in the Octoblu web page would turn the light in your room blue.

In our demo, this trigger was used for testing; during the demo itself, the real trigger data was coming from Splunk monitoring the data center. This is explained beautifully in Jason Conger’s blog on how to trigger an Octoblu Flow from Splunk (a rough sketch of the mechanics follows).
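Under the hood it boils down to an HTTP POST against the flow’s trigger. A minimal sketch, where the trigger URL and payload shape are illustrative; the real URL comes from your own flow:

# Fire an Octoblu trigger with the current system state
$triggerUrl = 'https://triggers.octoblu.com/flows/<flow-id>/triggers/<trigger-id>'
Invoke-RestMethod -Uri $triggerUrl -Method Post -ContentType 'application/json' -Body (@{ state = 'GREEN' } | ConvertTo-Json)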

The monitoring of our Citrix Datacenter had 4 basic states and outputs defined as follows:

  • Green – Everything is normal and running great
  • Yellow – System is experiencing some issues
  • Red – Something is really wrong!
  • Defcon5 – Crashed!

Let’s take a look at a simple component to make it clear how this works: the SMS message sender. When everything was good and Splunk was providing data indicating the GREEN state, it triggered the SMS node as follows:



So, when the system was normal, I received a text that said “System State GREEN Everything is Good!”. I just had to define the phone number to send to and the message payload; that’s it, Octoblu took care of the rest! Now, when the system went into DEFCON5, I received a very different message: “STATUS: RESUME GENERATING EVENT – RESUME POSTED TO MONSTER.COM”!


So, taking this simple example, we extended this to all the things that make up the IoT Workspace as follows:

  • A Raspberry Pi B+ running the Pi image with Gateblu, with Wifi and Bluetooth adapters
  • A LIFX wifi lightbulb in the desk lamp
  • A FadeCandy controller board running the NeoPixel LEDs
  • A Philips Hue bulb lighting the glass plaques
  • Two servo motors mounted inside the Lucky Cat to turn the body left/right and the arm up/down
  • A relay controlling the laser projector mounted in the Kitty’s chest
  • A relay controlling vibration motors placed inside the mini file cabinet and mounted inside the foam rubber robot figure
  • A Punch Through LightBlue Bean hacked to be my keychain
  • An iPad acting as a digital photo frame showing pictures that reflect the system state

All of these devices were defined as nodes and connected through the Octoblu gateway running on the Raspberry Pi. The difficulty ranged from the SMS example above (drag and drop) to hand-coding custom nodes, such as the one for servo control using the Johnny-Five machine control library (thanks to Moheeb, Chris Matthieu and the whole Octoblu team for their help on this!).

In addition to the software, I worked out all the power supply requirements, logic/power/grounding cabling, relay control, board layout, etc. This included drilling holes in the cat for LEDs in the eyes and repackaging the circuit boards of a mini laser show projector in the body so it would project through a hole drilled in the chest. I learned all kinds of cool new tools in the process, like ceramic drill bits, glue guns and soldering tiny things with magnifying lenses! Perhaps the single biggest challenge was getting all this stuff across the country to Orlando intact, figuring out how to mount it to the desk, and actually having it working by show time!

I don’t know what else to include here, so please feel free to reach out to me @stevegreenberg and let me know if there is any other info that would be useful. We are also planning to hold a webinar in August to cover this info in an interactive format. In the meantime, enjoy some pictures below of the project and Geek Speak Tonight!

(Click on images to expand them to see more detail)
[photo gallery: the IoT Workspace desk build, Doug Brown, the kitty laser show, the Splunk ICA round trip and metrics dashboards, and the Virtual Twins]

If you made it this far you are a hero, please let me know and I will buy you the beverage of your choice at our next industry meetup!

SG

Down the Rabbit Hole of IoT Part I, or, how an innocent hobby leads to creating an IoT Robo Laser Octoblu Smoke Breathing Kitty on Splunk!

When I left off in the last blog, my son and I were working on our Laser Kitty. I am happy to report that the project was a success! We completed our laser kitty by experimenting and learning all we needed to program an Arduino, servo motors and a laser, and then shrinking it all into a small package that fits inside a plastic Lucky Cat we bought at a local Chinese gift shop. You place the kitty on the edge of a table or counter top, and it fires the laser against the ground, drawing a pattern of light for kitties to play with. Here is what it looks like before final assembly:

 

[photo: the laser kitty before final assembly]

 

 

and in action:

(The flipped and distorted image happened by accident, but it’s perfect because it looks like a bad old kung fu movie opening sequence! How cool is that?!)

Mission #1 accomplished, and the story would have ended here, except we then slipped far down into the Rabbit Hole of IoT! Immediately after, and very rapidly, a number of things happened that poured jet fuel all over this and ignited two months of manic late-night hacking:

– I totally freakin’ love IoT and everything about the Maker culture

– @JoeShonk volunteered us to plan, execute and emcee Geek Speak Tonight! at Synergy 2015, and the theme was to be all about IoT. This was to include an opening comedy script for @Hal_Lange and me to perform, which was not to be revealed to us until two days before!

– He set the bar for this to be a “Legendary” event

– People at Citrix, as usual and understandably, started to worry about this band of ridiculous geeks taking charge of a featured event at their annual worldwide conference (makes sense!). The main question coming at us: “What is the practical business application of this stuff?”

– I love to rise to such challenges!

To satisfy all of the above conditions I came up with the IoT Workspace. The idea is that the Internet of Things is about stuff in our environment generating and receiving data. People tend to think of fitness trackers, internet connected refrigerators, or turning on and off your lights from your smartphone. Those are example applications, but the implications are much greater than that. I thought to myself, hey self, yes you: “What is the real core of Citrix? What do we implementers and Citrix users care most about? What would be simply awesome and make it all better?”

Well, the core of Citrix is delivering applications and desktops to anyone, anywhere, over any type of connection, on any type of device. What we care about is how well that is running! Make it fast, make it “just work”. If something goes wrong, how do I find out? How do I isolate the problem? How can I act on it? While we know how to do these things, it always ends up requiring someone to actively monitor stuff, logging into various consoles and systems and combing through data and indicators. If the Internet of Things is about connected devices, why can’t I make my own familiar environment work for me? Why can’t all the stuff on my desk be active consumers of IoT data? Why not have the things around me monitor the data center for me? Instead of logging in and looking around, why don’t they proactively get my attention and tell me exactly what is going on in my data center? (And yes, laziness is often the real mother of invention!)

Like most big ideas, if I actually knew what it would take I never would have started. But in my naivety I knew I could control motors and microcontrollers and use Octoblu to consume data and talk to devices. So I ought to be able to create this, right? What I didn’t have was a ready way to get real world data out of a real Citrix Enterprise environment to trigger these devices.


When you need help, it is always a good idea to turn to the best, so I reached out to my friend and fellow CTP @JasonConger. Jason has a long history of mastering data access and code development around Citrix enterprise systems.


 Jason pondering the IoT Workspace data flow…..

Let’s start with the end result, here is the video from Geek Speak Tonight! of the IoT Workspace. Note that the lamp, the glow of lights around the desktop, the pen set and glass desk plaque, the cat statue, picture frame, key chain, file cabinet and, um, ‘atmospheric conditions’, and, SMS messages to my iPhone are all receiving monitoring data coming from a system composed of XenApp, XenDesktop, Hyper-V and XenServer, Cisco UCS hardware and a storage array from a major enterprise manufacturer (name withheld because we knowingly allowed it fail and do not want to unfairly reflect negatively upon on the product!). Also, to understand some of the comments made in the video, you should be aware that in the previous two days a number of high profile demos had failed during keynote and presentations, especially in trying to demo the Citrix X1 Mouse in large wifi/radio saturated rooms. The same thing was happening to us as wifi was not working due to interference and the preceeding demos had not gone to well as a result…..

 

Be sure to read Jason’s blog on the same demo for more detail on how he got the data from Splunk to interact with Octoblu and trigger the flows I created to control the devices.

Now you can take the RedOctoPill and end here, having enjoyed the demo. Or, you can take the BluOctoPill and jump further down the rabbit hole of IoT with us in Part II…

A Journey to IoT w/Father, Son, a Laser and Cats…Phase One

As I wrote about in The Internet of Things, or, the Consumerization of Engineering, last month my son, Joe Shonk and I attended the IoTPhx meetup here in Tempe, Arizona, hosted by the awesome Chris Matthieu of Octoblu.

Without a doubt, we were all deeply inspired by the technology and this great group of people. Last night we attended the next meetup and watched two robot cars race, using Twitter hashtags to move them forward (or backward):

 


 

We saw awesome 3D printed parts and control systems, killer LED matrices and circuits. If that wasn’t enough, Moheeb Zara brought IoT-controlled LED pyramids that can be controlled from a smartphone, or from an insane control board with motorized faders that definitely came from an alien spaceship! This is for a display with Intel at the upcoming SXSW. If you are wondering where the passion for tech, tinkering, hacking and innovation is: This Is The Place!


 

So we got our first Arduino board at the event last month and thought: Now what?

What we needed was some goal, our own personal MoonShot, an idea, a project to inspire us to learn, develop skills, and build something that we could take back to the next meeting. But what?

Well, despite his gruff exterior, Joe Shonk is quite a softy and loves cats and kittens; he has four of them at home, and cats literally show up at his house asking to be adopted! My son and I came up with the idea that we could create an Arduino-controlled laser pointer game to entertain cats. Why not? You can’t always be home to play with them; wouldn’t just a little automation help here?

 


@JoeShonk with Grumpy Cat

 

My son and I spent the next four weeks immersing ourselves in the process. The first step was to achieve “Hello World”. In the Arduino space that is most often represented by attaching an LED to pin 13 and creating a basic sketch (i.e. program) to turn the LED on and off.

 


 

After we accomplished that, we continued following tutorials on adding switches, scanning for the state of a switch, manipulating timings, etc. Each effort involved reading a tutorial, wiring some components on a breadboard and writing the code to achieve the desired outcome.

We worked our way up to controlling servos (little motors whose position you can control with commands) using the RadioShack Motor Pack for Arduino. We needed two servos: one to control X (left and right) and one to control Y (up and down). The combination of these two movements, steering the laser pointed at the floor, would draw the “target” for the kitty to chase.

 


 

 

So to make a long, interesting and fascinating story short (involving hacking a laser sight off a toy gun, super glue, interviewing cat owners… and other fun stuff), here is a video clip of the basic mechanism of the KIT:

 

 

 

And here is a video of the system in action. Note that we upgraded the laser with a small, stand-alone laser module we got from Amazon.com and then hacked a ‘wall wart’ power supply to juice it (the rabbit hole goes deep once you jump into it…):

 

 

 

 

This Project has Three Phases:

KIT (Kitten Interaction Terminal): local processing, NOT internet connected (complete)

KIT-T (Kitten Interaction Terminal, Twitter Connected): turn it on by posting a Twitter #HashTag such as #PlayKitty!

KIT-N (Kitten Interaction Terminal, Nano Edition): this version would employ much smaller and less expensive components and come in a convenient casing

Happy to say that we made our goal of showing our Phase One project at the IoTPhx meetup and received great feedback. Our question to the group was how to get to Phase Two and connect KIT to the Internet. There were suggestions about doing it connected to a computer and/or doing it all on the board. There is some new code coming soon to provide TCP/IP connectivity within the ChipKit boards that looks promising and could make it stand-alone… Good times!

It made us feel great, like we were Batman. Well, we can’t both be Batman, but you know what I mean… Then today I received this intriguing communication:

[tweet screenshot] …and Alisa is?

 

[tweet screenshot]

Did I just create this person out of some weird ability to manifest what I was thinking into the material universe? How cool is that?!

Stay tuned, same Bat/Kat time, same Bat/Kat channel, for the next episode where we connect our Kitten Interaction Terminal to Twitter!

 

@stevegreenberg