Down the Rabbit Hole of IoT Part II, or, how an innocent hobby leads to creating an IoT Robo Laser Octoblu Smoke Breathing Kitty on Splunk!

 

If you are reading this, then you have read Part I and taken the BluOctoPill. Welcome down the Rabbit Hole of IoT!

 

Go with the Flow

The heart of this demo is the Octoblu framework, an amazing mesh network that acts as a powerful IoT gateway between generators and consumers of data. Running in a highly resilient cloud framework, it supports multiple protocols, programming languages and platforms. The basic unit of operation is a “Flow”, an instance of Octoblu that does whatever you tell it to and can be created and managed in a drag-and-drop, web-based interface. In the simplest case, you might create a flow that says “When I post ‘#bluelight’ on my Twitter account, change my lights to blue”. What happens there is that the Octoblu flow scans Twitter for a post by you. When it sees the data “#bluelight”, it triggers the preset action you described of changing your lights to blue. How did it do that? You are running an instance of the gateway on some device of your own and have connected it to your wifi-based light. Now any condition, input, data, etc. that you define can control your lights via the Internet.
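In Octoblu you wire this up visually rather than in code, but the “#bluelight” flow boils down to a filter plus an action. Here is a minimal sketch of that logic in plain Python; the function names, the `lifx-bulb` device name and the command dictionary are all illustrative stand-ins, not Octoblu APIs:

```python
# Hedged sketch of the "#bluelight" flow: scan incoming tweets and,
# when the trigger hashtag appears, fire the configured action.
# In Octoblu this is a Twitter node wired to a device node in the
# drag-and-drop editor; everything below is illustrative only.

def make_flow(trigger_hashtag, action):
    """Return a function that processes one tweet at a time."""
    def process(tweet_text):
        if trigger_hashtag in tweet_text:
            return action()  # e.g. tell the wifi bulb to change color
        return None          # no trigger seen, nothing happens
    return process

def turn_light_blue():
    # Stand-in for the light-control node's output message.
    return {"device": "lifx-bulb", "command": "set_color", "color": "blue"}

flow = make_flow("#bluelight", turn_light_blue)
print(flow("just got home #bluelight"))  # fires the action
print(flow("nothing to see here"))       # no trigger, returns None
```

The drag-and-drop editor hides all of this: you drop a Twitter node and a device node on the canvas and draw a line between them.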

You can run an instance of the gateway on many different devices such as PC, Mac, Android, iOS, Arduino, etc. Based on the ambitious I/O requirements of this project, we chose a Raspberry Pi for its processing capabilities and its number of GPIOs (General Purpose Input/Output pins) for controlling the various devices.

 

IMG_0432

Moheeb Zara building the Raspberry Pi image running Gateblu

Here is a screenshot of the Flow we used in Octoblu to operate this demo. You can see that it is a drag-and-drop “flowchart” style of interface.

 

OctoBlu-Flow (click to expand and open in a new window)

Any pre-defined object can simply be dropped into the flow and connected with the mouse. Bringing up the properties of any object lets you set its value, i.e. “blue” for the light. The big button is a simple trigger which allows you to start the actions linked to it manually. In the light example, clicking the trigger in the Octoblu web page would turn the light in your room blue.

In our demo, this trigger was used for testing; during the demo itself, the real trigger data came from Splunk monitoring the datacenter. This is explained beautifully in Jason Conger’s blog on how to trigger an Octoblu Flow from Splunk.
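The hand-off from Splunk works because a Splunk alert can call an HTTP endpoint when it fires, and an Octoblu trigger node exposes a URL that starts the flow. Here is a hedged sketch of that call; the trigger URL format and the payload keys below are assumptions for illustration, not the exact ones from Jason’s blog:

```python
# Hedged sketch: POST a datacenter state to an Octoblu trigger URL.
# The URL shape and the {"payload": {...}} envelope are illustrative
# assumptions; placeholders like <flow-id> are left unfilled on purpose.
import json
import urllib.request

TRIGGER_URL = "https://triggers.octoblu.com/flows/<flow-id>/triggers/<trigger-id>"

def build_payload(state):
    """Wrap the datacenter state so the flow's nodes can route on it."""
    return {"payload": {"state": state}}

def fire_trigger(state, url=TRIGGER_URL):
    # Requires network access and a real trigger URL; not run here.
    data = json.dumps(build_payload(state)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

print(build_payload("GREEN"))
```

From Splunk’s side, the alert action only needs the URL and the JSON body; everything after that is handled inside the flow.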

The monitoring of our Citrix datacenter had four basic states and outputs, defined as follows:

Green- Everything is normal and running great

Yellow- System is experiencing some issues

Red- Something is really wrong!

Defcon5- Crashed!

Let’s take a look at a simple component to make it clear how this works: the SMS message sender. When everything was good and Splunk was providing data indicating the GREEN state, it triggered the SMS node as follows:


SMSRED

So, when the system was normal, I received a text that said “System State GREEN Everything is Good!”. I just had to define the phone number and the message payload; Octoblu took care of the rest! When the system went into DEFCON5, I received a very different message: “STATUS: RESUME GENERATING EVENT – RESUME POSTED TO MONSTER.COM”!

SMSDEFCON5
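Conceptually, each state just selects a message for the SMS node. As a minimal sketch of that mapping: the GREEN and DEFCON5 texts are quoted from above, while the YELLOW and RED texts are placeholders I made up, since the post does not quote them:

```python
# State-to-SMS mapping sketch. In Octoblu each state drives its own
# flow branch into the SMS node; a plain dictionary captures the idea.
# Only the GREEN and DEFCON5 strings come from the post itself.

SMS_MESSAGES = {
    "GREEN":   "System State GREEN Everything is Good!",
    "YELLOW":  "System State YELLOW Experiencing some issues",   # placeholder
    "RED":     "System State RED Something is really wrong!",    # placeholder
    "DEFCON5": "STATUS: RESUME GENERATING EVENT - RESUME POSTED TO MONSTER.COM",
}

def sms_for(state):
    """Return the text to send for a given datacenter state."""
    return SMS_MESSAGES.get(state, "Unknown state: " + state)

print(sms_for("GREEN"))
print(sms_for("DEFCON5"))
```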

So, taking this simple example, we extended it to all the things that make up the IoT Workspace:

Raspberry Pi B+ with the Pi image running Gateblu with Wifi and Bluetooth adapters

A LIFX wifi lightbulb in the desklamp

FadeCandy controller board running the NeoPixel LEDs

A Philips Hue bulb lighting the glass plaques

Two servo motors mounted inside the Lucky Cat to turn the body left/right and the arm up/down

A relay controlling the laser projector mounted in the Kitty’s chest

A relay controlling vibration motors placed inside the mini file cabinet and mounted inside the foam rubber robot figure

A Punch Through LightBlue Bean hacked to be my keychain

An iPad acting as a digital photo frame showing pictures that reflect the system state

All of these devices were defined as nodes and connected through the Octoblu gateway running on the Raspberry Pi. The difficulty ranged from the SMS example above (drag and drop) to hand-coding custom nodes, such as one for the servo control using the Johnny-Five machine control library (thanks to Moheeb, Chris Matthieu and the whole Octoblu team for their help on this!).
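The actual servo node was written against Johnny-Five, which is a JavaScript library. As a language-neutral sketch of what any servo driver ultimately does, here is the angle-to-pulse-width math in Python. The 1.0–2.0 ms pulse range on a ~50 Hz signal is a common hobby-servo convention, not a value taken from this project:

```python
# Hedged sketch of servo control math: a hobby servo's position is set
# by the width of a repeated PWM pulse. Mapping 0-180 degrees onto a
# 1.0-2.0 ms pulse is a typical convention (some servos use 0.5-2.5 ms).

def angle_to_pulse_ms(angle, min_ms=1.0, max_ms=2.0):
    """Map 0-180 degrees to a pulse width in milliseconds."""
    angle = max(0.0, min(180.0, angle))  # clamp to the servo's range
    return min_ms + (angle / 180.0) * (max_ms - min_ms)

print(angle_to_pulse_ms(0))    # 1.0
print(angle_to_pulse_ms(90))   # 1.5
print(angle_to_pulse_ms(180))  # 2.0
```

Johnny-Five hides this behind `servo.to(angle)`; the custom node just had to translate incoming Octoblu messages into those position commands.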

In addition to the software, I worked out all the power supply requirements, logic/power/grounding cabling, relay control, board layout, etc. This included drilling holes in the cat for LEDs in the eyes and repackaging the circuit boards of a mini laser show projector in the body so it would project through a hole drilled in the chest. I learned all kinds of cool new tools in the process, like ceramic drill bits, glue guns and soldering tiny things under magnifying lenses! Perhaps the single biggest challenge was getting all this stuff across the country to Orlando intact, figuring out how to mount it to the desk, and having it actually working by show time!

I don’t know what else to include here, so please feel free to reach out to me @stevegreenberg and let me know if there is any other info that would be useful. We are also planning to hold a webinar in August to cover this info in an interactive format. In the meantime, enjoy some pictures below of the project and Geek Speak Tonight!

(Click on images to expand them to see more detail)
IMG_0617

IMG_0651

IMG_0597

IMG_0751

Desk4

Desk1

desk2
Desk5

Desk7 storage

DougBrown

IMG_0839

Virtual Twins
kitty laser cool

laser1



Splunk ICA round trip

Splunk Metrics

Virtual Twins

If you made it this far you are a hero, please let me know and I will buy you the beverage of your choice at our next industry meetup!

SG

Down the Rabbit Hole of IoT Part I, or, how an innocent hobby leads to creating an IoT Robo Laser Octoblu Smoke Breathing Kitty on Splunk!

When I left off in the last blog, my son and I were working on our Laser Kitty. I am happy to report that the project was a success! We completed our laser kitty by experimenting and learning all we needed to program an Arduino, servo motors and a laser, and then shrank it all into a small package to fit inside a plastic Lucky Cat we bought at a local Chinese gift shop. You place the kitty on the edge of a table or counter top and it fires the laser at the ground, drawing a pattern of light for kitties to play with. Here is what it looks like before final assembly:

 

IMG_0192

 

 

and in action:

(the flipped and distorted image happened by accident, but it’s perfect because it looks like a bad old kung fu movie opening sequence! How cool is that?!?)

Mission #1 accomplished, and the story would have ended here, except we then slipped far down into the Rabbit Hole of IoT! Immediately after, and very rapidly, a number of things happened that poured jet fuel on the fire and ignited two months of manic late-night hacking:

- I totally freakin’ love IoT and everything about the Maker culture

- @JoeShonk volunteered us to plan, execute and emcee Geek Speak Tonight! at Synergy 2015, and the theme was to be all about IoT. This included an opening comedy script for @Hal_Lange and me to perform, which was not to be revealed to us until two days before!

- He set the bar for this to be a “Legendary” event

- People at Citrix, as usual and understandably, started to worry about this band of ridiculous geeks taking charge of a featured event at their annual worldwide conference (makes sense!). The main question coming at us was “What is the practical business application of this stuff?”

- I love to rise to such challenges!

To satisfy all of the above conditions, I came up with the IoT Workspace. The idea is that the Internet of Things is about stuff in our environment generating and receiving data. People tend to think of fitness trackers, internet-connected refrigerators, or turning your lights on and off from your smartphone. Those are example applications, but the implications are much greater than that. I thought to myself, hey self, yes you: “What is the real core of Citrix? What do we implementers and Citrix users care most about? What would be simply awesome and make it all better?”

Well, the core of Citrix is delivering applications and desktops to anyone, anywhere, on any device, over any type of connection. What we care about is how well that is running! Make it fast, make it “just work”. If something goes wrong, how do I find out? How do I isolate the problem? How can I act on it? While we know how to do these things, it always ends up requiring someone to actively monitor stuff, logging into various consoles and systems and combing through data and indicators. If the Internet of Things is about connected devices, why can’t I make my own familiar environment work for me? Why can’t all the stuff on my desk be active consumers of IoT data? Why not have the things around me monitor the data center for me? Instead of logging in and looking around, why don’t they proactively get my attention and tell me exactly what is going on in my data center? (And yes, laziness is often the real mother of invention!)

Like most big ideas, if I had actually known what it would take, I never would have started. But in my naivety I knew I could control motors and microcontrollers and use Octoblu to consume data and talk to devices, so I ought to be able to create this, right? What I didn’t have was a ready way to get real-world data out of a real Citrix enterprise environment to trigger these devices.

Desk4

When you need help, it is always a good idea to turn to the best, so I reached out to my friend and fellow CTP @JasonConger. Jason has a long history of mastering data access and code development around Citrix enterprise systems.

IMG_0761

 Jason pondering the IoT Workspace data flow…..

Let’s start with the end result: here is the video from Geek Speak Tonight! of the IoT Workspace. Note that the lamp, the glow of lights around the desktop, the pen set and glass desk plaque, the cat statue, picture frame, key chain, file cabinet, um, ‘atmospheric conditions’, and SMS messages to my iPhone are all receiving monitoring data from a system composed of XenApp, XenDesktop, Hyper-V and XenServer, Cisco UCS hardware and a storage array from a major enterprise manufacturer (name withheld because we knowingly allowed it to fail and do not want to unfairly reflect negatively upon the product!). Also, to understand some of the comments made in the video, you should be aware that in the previous two days a number of high-profile demos had failed during keynotes and presentations, especially when trying to demo the Citrix X1 Mouse in large wifi/radio-saturated rooms. The same thing was happening to us, as wifi was not working due to interference and the preceding demos had not gone too well as a result…

 

Be sure to read Jason’s blog on the same demo for more detail on how he got the data from Splunk to interact with Octoblu and trigger the flows I created to control the devices.

Now you can take the RedOctoPill and end here, having enjoyed the demo. Or you can take the BluOctoPill and jump further down the rabbit hole of IoT with us in Part II….

A Journey to IoT w/Father, Son, a Laser and Cats…Phase One

As I wrote about in The Internet of Things, or, the Consumerization of Engineering, last month my son, Joe Shonk and I attended the IoTPhx meetup here in Tempe, Arizona, hosted by the awesome Chris Matthieu of OctoBlu.

Without a doubt, we were all deeply inspired by the technology and this great group of people. Last night we attended the next meetup and watched two robot cars race, using Twitter hashtags to move them forward (or backward):

 

robot-race

 

We saw awesome 3D-printed parts and control systems, killer LED matrices and circuits. If that wasn’t enough, Moheeb Zara brought IoT-connected LED pyramids that can be controlled from a smartphone, or from an insane control board with motorized faders that definitely came from an alien spaceship! This is for a display with Intel at the upcoming SXSW. If you are wondering where the passion for tech, tinkering, hacking and innovation is: This Is The Place!

LED-Pyramids

 

So we got our first Arduino board at the event last month and thought: now what?

What we needed was some goal, our own personal MoonShot, an idea, a project to inspire us to learn, develop skills, and build something that we could take back to the next meeting. But what?

Well, despite his gruff exterior, Joe Shonk is quite a softy and loves cats and kittens. He has four of them at home, and cats literally show up at his house asking to be adopted! My son and I came up with the idea of creating an Arduino-controlled laser pointer game to entertain cats. Why not? You can’t always be home to play with them; wouldn’t just a little automation help here?

 

Joe-and_GrumpyCat

@JoeShonk with Grumpy Cat

 

My son and I spent the next four weeks immersed in the process. The first step was to achieve “Hello World”. In the Arduino space, that is most often represented by attaching an LED to pin 13 and creating a basic sketch (i.e. program) to turn the LED on and off.
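The real thing is a C-like Arduino sketch; as a hedged, language-neutral illustration of the same loop structure, here it is in Python with a simulated pin (the `FakePin` class is purely a stand-in for the hardware, so there is something to observe without a board):

```python
# Hedged simulation of the Arduino "blink" Hello World: set the pin
# high, wait, set it low, wait, repeat. The FakePin just records the
# writes instead of driving a real LED; delays are omitted.

class FakePin:
    def __init__(self):
        self.states = []          # history of values written to the pin
    def write(self, value):
        self.states.append(value)

def blink(pin, cycles, on=1, off=0):
    for _ in range(cycles):
        pin.write(on)    # like digitalWrite(13, HIGH); delay(1000);
        pin.write(off)   # like digitalWrite(13, LOW);  delay(1000);

led = FakePin()
blink(led, 3)
print(led.states)  # [1, 0, 1, 0, 1, 0]
```

On a real Arduino the same two writes plus delays live inside `loop()`, which the firmware calls forever.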

 

IMG_0036

 

After we accomplished that, we continued following tutorials on adding switches, scanning the state of a switch, manipulating timings, etc. Each effort involved reading a tutorial, wiring some components on a breadboard and writing the code to achieve the desired outcome.

We worked our way up to controlling servos (little motors whose position you can set with commands) using the Radio Shack Motor Pack for Arduino. We needed two servos: one to control X (left and right) and one to control Y (up and down). The combination of these two movements, steering the laser pointed at the floor, gives the “target” for the kitty to chase.
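Sweeping the two servo angles with out-of-phase waves is one simple way to trace a pattern on the floor. This is an illustrative sketch only, not the project’s actual code; the center angle, sweep range and sine frequencies are all made-up values:

```python
# Hedged sketch: generate (x_angle, y_angle) pairs for the two servos.
# Two sine waves at different frequencies trace a Lissajous-style
# figure; the constants below are illustrative, not from the project.
import math

def laser_pattern(steps, center=90.0, sweep=30.0):
    """Yield (x_angle, y_angle) servo pairs for one full pattern cycle."""
    for i in range(steps):
        t = 2 * math.pi * i / steps
        x = center + sweep * math.sin(2 * t)   # left/right servo
        y = center + sweep * math.sin(3 * t)   # up/down servo
        yield x, y

points = list(laser_pattern(8))
print(points[0])  # starts at the center: (90.0, 90.0)
```

Feeding these pairs to the two servos in a timed loop (and re-seeding the frequencies now and then) keeps the dot unpredictable enough to interest a cat.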

 

IMG_0038

 

 

So, to make a long, interesting and fascinating story short (involving hacking a laser sight off a toy gun, super glue, interviewing cat owners… and other fun stuff), here is a video clip of the basic mechanism of the KIT:

 

 

 

and here is a video of the system in action. Note that we upgraded the laser with a small, stand-alone laser module we got from Amazon.com and then hacked a ‘wall wart’ power supply to juice it (the rabbit hole goes deep once you jump into it….)

 

 

 

 

This Project has Three Phases:

KIT (Kitten Interaction Terminal): local processing, NOT internet connected (complete)

KIT-T (Twitter Connected): turn it on by posting a Twitter hashtag such as #PlayKitty!

KIT-N (Nano Edition): this version would employ much smaller and less expensive components and come in a convenient casing

Happy to say that we made our goal of showing our Phase One project at the IoTPhx meetup and received great feedback. Our question to the group was how to get to Phase Two and connect KIT to the Internet. There were suggestions about doing it connected to a computer and/or doing it all on the board. There is some promising new code coming soon to provide TCP/IP connectivity within the ChipKit boards, which could make it stand-alone… good times!

It made us feel great, like we were Batman. Well, we can’t both be Batman, but you know what I mean…. Then today I received this intriguing communication…

tweet1

and Alisa is?

 

tweet2

Did I just create this person out of some weird ability to manifest what I was thinking into the material universe? How cool is that?!?

Stay Tuned, same Bat/Kat time, same Bat/Kat channel for the next episode where we connect our Kitty Interaction Terminal to Twitter!

 

@stevegreenberg

The Internet of Things, or, the Consumerization of Engineering

The Internet of Things…IoT

I had heard about it, but aside from thinking of everything around me having an IP address, and imagining doing silly things with my refrigerator, I didn’t really know what it meant. Recently we heard that Citrix had acquired OctoBlu, a leader in the IoT space, and saw some impressive demos of human/device interaction at Citrix Summit 2015 in Vegas.

Then I became aware that this team, and co-founder Chris Matthieu, are based right here in AZ, and down the rabbit hole we went. It turns out that in addition to doing amazing stuff like starting a GoToMeeting when you enter a room (via Moheeb Zara) and automatically sending everyone a recording via ShareFile when the meeting ends, they do things like hack high-performance cars! And if that is not cool enough, they are also very active in the community right here in my backyard.

We (me, my 13-year-old son and Joe Shonk) attended the IoTPhx meetup and met a fascinating bunch of developers, hackers, engineers and enthusiasts. Here is just one project that was demoed:

Microchip was there showing their development products and some cool expanded Arduino-compatible boards. My 13-year-old son received one in the giveaway… this could be the start of something big!

So what does it all mean? According to WikiPedia:

The Internet of Things (IoT) is the interconnection of uniquely identifiable embedded computing devices within the existing Internet infrastructure. Typically, IoT is expected to offer advanced connectivity of devices, systems, and services that goes beyond machine-to-machine communications (M2M) and covers a variety of protocols, domains, and applications.[1] The interconnection of these embedded devices (including smart objects), is expected to usher in automation in nearly all fields, while also enabling advanced applications like a Smart Grid.[2]

OK, but I’m not sure that really helps. What I saw at the IoTPhx meetup was the ability of people at all levels of skill and experience to get into coding and executing ideas on low-cost and easily accessible platforms. These platforms are relatively powerful single-board systems that are easily programmable, provide broad connectivity, and include inputs for sensors of light, sound, movement, gravity, etc. and outputs for controlling external things like lights, motors, wifi, etc.

Is that new? Not really; professional engineers have been coding and embedding functionality in microcomputers and doing device control for decades. However, those are highly educated, well-paid professionals working mostly in large companies with expensive and proprietary equipment. The revolution here is that, starting at around $50, anyone can put together their own system, using their own ideas, and realize results that previously took teams of men and women, many thousands of dollars, deep expertise and months of work. In addition, these ideas can now communicate easily across the entire internet using simple code and even graphical drag-and-drop interfaces.

Just as we have seen with computers evolving over the decades, we now carry massively powerful systems in our hands. What was not even possible just a few years ago, we now experience in a portable telephone: Star Trek-like global information, communication and access!

The last several years have been characterized by many as “The Consumerization of Information Technology”. This is a time in which people have tremendous power in their hands. Technology products are targeted at individual, everyday, “normal” (i.e. non-expert) users. With inexpensive devices and cheap apps, we can surpass the capabilities of large enterprise-class systems from just yesterday. We can do more, we expect more, and we simply don’t need or want to be held back.

From my point of view, I see the “Internet of Things” as the Consumerization of Engineering. Now for just a few bucks, some creative ideas and a little of your own free time you can create engineered results. You can instruct a controller to do things, to communicate between whatever devices you like, to talk across the Internet to do whatever YOU envision.

What do you think? A bright future? Just a bunch of toys? The tech version of “We the People”? The best way to fight the coming war with machines? Let’s start a conversation….

@stevegreenberg

 

Check out the New EUC Podcasts!

Over the last several years, many of us in the industry have discussed the need for community driven End User Computing podcasts focusing on virtualization topics for people designing, deploying, and using Citrix, Microsoft, VMware and surrounding technologies. I am excited to share that this month, two new Podcasts are being launched! First, a warm congratulations to Jarian Gibson and Andy Morgan on the successful launch of their Podcast, Frontline Chatter. Here’s to many years of continued success! Next, allow me to introduce the End User Computing Podcast!

Announcing the End User Computing Podcast!

The End User Computing Podcast (www.eucpodcast.com) is a community-driven podcast for IT professionals. The content covered on the EUC Podcast is primarily geared toward community support and enablement for application, desktop, and server virtualization technologies. Comments and community interactions are strongly encouraged to keep the authors honest and unbiased toward the vendors and technologies being covered. While the EUC Podcast is an independent, community-driven podcast, SMEs’ vendor preferences and strengths may be presumed based on active projects and topic areas covered. As unaffiliated technologists, the EUC Podcast encourages the authors to discuss a wide variety of vendors and products based on current or upcoming engagements.

The first episode of the EUC Podcast will be streamed live on Monday, February 16th at 20:30 GMT (12:30 PM Pacific, 3:30 PM Eastern). To watch and participate live, go to www.eucpodcast.com. It will be recorded live via Google Hangout and delivered in audio podcast format with EUC experts from around the world, including: DANE YOUNG (@youngtech) | STEVE GREENBERG (@stevegreenberg) | CLÁUDIO RODRIGUES (@crod) | ANDREW WOOD (@gilwood_cs) | CHRIS ROGERS (@citrixjedi) | DWAYNE LESSNER (@dlink7) | BARRY COOMBS (@virtualisedreal) | THOMAS POPPELGAARD (@_poppelgaard) | MIKE NELSON (@nelmedia) | ALEXANDER ERVIK JOHNSEN (@ervik)

More about the EUC Podcast…

thecrew_forweb

This episode will be available via iTunes and other RSS/podcast applications on iOS, Android, Windows, Mac, etc. To subscribe, go to http://eucpodcast.itvce.com/subscribe/. To participate in the live stream, go to www.eucpodcast.com during the live stream. We will be interacting via the Twitter hashtag #EUCPodcast with an embedded CrowdChat: https://www.crowdchat.net/EUCPodcast. Click here to add this event to your calendar (download .ICS file).

As part of the podcast, we will be doing introductions, talking about news and announcements, and introducing a segment called Ask the EUC Experts!, where audience and community members have an opportunity to submit questions or podcast topics via the web form. If you have any comments or questions, or want to learn more, feel free to use the comments section below to leave us your feedback! Thanks, and we look forward to seeing you on Monday the 16th at 20:30 GMT!

–The EUC Podcast Crew

The Data Center in a Post Virtualization World @ AZ Tech Summit Sept 17th in Phoenix, Arizona

How Fast Can This Go?

The speed of change is changing. It’s getting faster and faster, and it sometimes feels that if you blink you can miss an important development in technology. A prime example is the proliferation of virtualization in the data center. Always wary of proclamations such as “this is the year of VDI” or “everything is moving to the cloud”, I do think it is now valid to characterize the situation as “post-virtualization”. Virtual machines are now ubiquitous, and there is widespread knowledge about how to configure and optimize the storage and network to support them; in other words, we know how to do this.

So what comes next? I suggest that the next phase is the Data Center Re-born: a dynamic pool of resources and productivity for the business to consume. We are moving out of the days where services and solutions are hard-coded, built individually and not re-usable. Up until now, as new applications and resources come online, there is simply more to do, more to know and more to manage. People like to talk about “The Cloud” as the answer, and maybe in time it will be. What we need NOW are real ways to converge and streamline the datacenter and grant easy, secure access to users and data in support of the organizational mission. As a wise man I know once said, “They just want to press the button and get a banana”. Up until now it’s all been way too complicated…

The Data Center Re-born

OK, we are not going to just press a button and get everything we want out of a datacenter quite yet. But there are now many straightforward ways to get pretty close to that vision. I have been designing and deploying these solutions since the 1990s, and we are at the best point ever to balance the triangle of cost, performance and capacity. In short, this means that for a very reasonable cost, organizations can now adopt strategies and technologies that get much closer to the dream. It is now completely possible to configure your storage, network, operating systems, applications, data, and user access as fully dynamic services. Three major characteristics of these systems are:

Deploy by Assignment: deploy users, devices and applications simply by assigning resources, not by the brute force of building machines, installing applications, locking down systems, maintaining hardware, etc.

Built once, Re-use infinitely- Yes, it’s real!

Dynamic Allocation of Resources: storage, compute, applications, user data and remote access are all available to be consumed as needed on top of a highly available, fluid platform. This platform is lower cost, and its components can be used, re-used and re-purposed as needed (for example, no more new SAN every three years; reuse that storage in new ways). This is not magic; it follows from building the infrastructure and platform services using these new approaches. Once the foundation is properly established, it becomes easy to serve up the applications, tools, data and collaboration capabilities that your users need to serve the mission of the organization.

Data-Center-Pavillion

Join us, and a select group of core technology partners, on September 17th for the AZ Tech Summit in Phoenix to explore these concepts. We will be hosting an Innovative Data Center Pavilion at the entry to the Main Event Hall.

Come speak with experts and learn how our clients are running these streamlined operations and gaining the benefits 24×7. Informal discussions will be going on throughout the day, as well as a Main Conference session:

 

12:00 pm – 1:00 pm
Tech Theater II
Lunch & Learn: The Data Center in a Post Virtualization World
Presented by: Steve Greenberg, Thin Client Computing

 

…and an Executive VIP Presentation/Discussion:

 

2:45 pm – 3:45 pm
VIP Executive Track
Executive Strategies for Mobility and Virtual Data Centers
Presented by: Steve Greenberg, Thin Client Computing

 

REGISTER HERE and enter the code thin to receive a complimentary registration to this year’s conference. We look forward to seeing you there!

 

Keeping it Real in Tech: Marketing vs MarkT-ing

Just got back from Citrix Synergy 2014 happy, inspired and exhausted! It was a great week of learning, collaboration, conversations, and great times with friends and colleagues from around the world. It was an overload of ideas and input, but one thing stands out above all else: the character and heart of Citrix President and CEO Mark Templeton.

After a short leave of absence, this was Mark’s highly anticipated return to deliver the keynote at Synergy 2014 before his announced retirement within the next year. It is hard to describe the effect that MarkT (this is what we all call him) has on people. At first I thought it was just me, as my career has directly paralleled Citrix and Mark’s leadership and I am deeply grateful for that. However, I spoke with countless attendees after the keynote, and absolutely everyone said the same thing: that they are moved and inspired by Mark in a very special way. I heard this same sentiment across the board, from first-time attendees to old-timers, geeks, sales people, partners, etc. Feeling this very strongly myself, and hearing it echoed over and over again throughout the week, I set my mind to figuring out exactly what was going on. After much deliberation, here is my conclusion:

Some people are very skilled at speaking, at presenting a message in a clear and impactful way. Some people have great skills of persuasion or inspiration; they can get you excited by what they say and how they say it. Some people understand the technology behind products, or the business value, the use case, etc. When you listen, you can be impressed or motivated to act. Mark is not any of those; he is something so much more…

MarkT has a heart the size of an ocean liner. You can’t help but be genuinely drawn in, not by the hypnotic sound of a practiced speaker, but by the genuineness of a person who loves what they do and means what they say. He wants to share the exciting developments at Citrix because of what they can bring to YOU, how they can help YOU. He cares about others and is happy and honored to be able to share it.

In the end, it is the integrity, honesty and heartfelt sincerity that excites people. It cuts away the hype, pretense, agendas and spin and replaces them with genuine beliefs. When you experience the real thing, you just know it; everyone feels it, and this year’s Synergy keynote was the prime example. Next to this, the standard marketing/spin/positioning looks like a thin veil of charlatanism. The “secret” is a sincere desire to make the world a better place, and to lift up those around us in the process.

The tech world, the business world, and the whole world for that matter, will be much better places if we can learn from his example and actively work to replace all this superficial (i.e. self-serving) marketing, to make it real, to question our own values and re-align them so that they can truly help others.

I hereby pronounce the End of Marketing and usher in a new era of sincerity and “Keeping it Real” called the Age of MarkT-ing.

 

Thanks for everything Mark, now it is our turn to carry this forward….

steveg-markt

Citrix 3D Graphics Cheat Sheet (and how to do Community right!)

One of the most exciting recent developments in the virtualization world is the emergence of mature and highly performant remote 3D graphics solutions. As expected, Citrix and NVIDIA are leading the charge here with full support for virtualized GPUs in the XenServer hypervisor. This is revolutionizing the remote delivery of high-end graphical computing workloads that, until recently, required dedicated local hardware to perform adequately. There is a groundswell occurring in the industry, and among my consulting peers, in learning the best practices and approaches. In this regard, NVIDIA has done an outstanding job of collecting and sharing the relevant information. I received the data below from John Rendek at NVIDIA yesterday and was really pleased to see what they have assembled and shared here in full. Thank you, NVIDIA, for “Getting It”! **UPDATE** Jared Cowart filled me in that most of this data was compiled by Angelo Oddo, Senior Sales Engineer at Citrix. Mad props to Angelo!

 

Citrix 3D Graphics Cheat Sheet   2/04/2014

Guides and Optimizations:

 

NVIDIA Resources:

 

 

NVIDIA-vGPU

VMware HDX Resources:

XenServer HDX Resources:

 

XenServer GPU commands:

 

List GPUs

lspci | grep VGA

 

Validate iommu is enabled

xe host-param-get uuid=<uuid_of_host> param-name=chipset-info param-key=iommu

 

Attach a VM to a GPU

First, shut down the VM:

xe vm-shutdown uuid=<uuid_of_vm>

 

Find the UUID of the GPU Group

xe gpu-group-list

 

Attach GPU

xe vgpu-create gpu-group-uuid=<uuid_of_gpu_group> vm-uuid=<uuid_of_vm>

 

Validate GPU is Attached

xe vgpu-list

 

Start the VM

xe vm-start uuid=<uuid_of_vm>
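The attach sequence above can be wrapped in a small shell sketch. The function and variable names, and the XE override, are my own additions; the xe subcommands are the ones listed above and are assumed to run in the XenServer control domain.

```shell
#!/bin/sh
# Sketch of the attach sequence above. Supply your own UUIDs
# (from "xe vm-list" and "xe gpu-group-list"). XE can be
# overridden, e.g. XE=echo for a dry run.
XE="${XE:-xe}"

attach_vgpu() {
    vm_uuid="$1"
    gpu_group_uuid="$2"
    $XE vm-shutdown uuid="$vm_uuid"      # VM must be powered off first
    $XE vgpu-create gpu-group-uuid="$gpu_group_uuid" vm-uuid="$vm_uuid"
    $XE vgpu-list vm-uuid="$vm_uuid"     # validate the vGPU is attached
    $XE vm-start uuid="$vm_uuid"
}

# Usage: attach_vgpu <uuid_of_vm> <uuid_of_gpu_group>
```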

 

Detach a GPU

First, shut down the VM: xe vm-shutdown uuid=<uuid_of_vm>

 

Find the UUID of the vGPU attached to the VM by entering the following:

xe vgpu-list vm-uuid=<uuid_of_vm>

 

Detach the GPU from the VM

xe vgpu-destroy uuid=<uuid_of_vgpu>
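The detach sequence can be sketched the same way. The helper name and XE override are my own; the xe subcommands are the ones given above.

```shell
#!/bin/sh
# Sketch of the detach sequence above. The vGPU UUID comes from
# "xe vgpu-list vm-uuid=<uuid_of_vm>". XE can be overridden
# (XE=echo) for a dry run.
XE="${XE:-xe}"

detach_vgpu() {
    vm_uuid="$1"
    vgpu_uuid="$2"
    $XE vm-shutdown uuid="$vm_uuid"     # VM must be off before detaching
    $XE vgpu-destroy uuid="$vgpu_uuid"
}

# Usage: detach_vgpu <uuid_of_vm> <uuid_of_vgpu>
```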

 

How to implement Citrix 3D Graphics Pack

Download Citrix XenServer 6.2 + SP1
Download NVIDIA GRID vGPU Pack for GRID K1 or GRID K2
Download Citrix XenDesktop 7.1 99 user trial or licensed software here (requires a MyCitrix ID)

1)     Start with a fresh XenServer 6.2 installation on GRID supported hardware

2)     Install XenServer 6.2 SP1

3)     Download the NVIDIA GRID vGPU Pack & install NVIDIA GRID manager in XenServer from CLI

4)     Create a base Windows 7 VM

5)     From XenCenter, assign a vGPU type to the base image

6)     Install NVIDIA GPU guest OS driver in the base image (available in the NVIDIA GRID vGPU Pack)

7)     Note: Drivers will not install if a GPU has not been assigned to the VM

8)     Install the XenServer Tools

9)     Install the latest version of Citrix HDX 3D Pro VDA 7.1

10)   Create a Machine Catalog using MCS or PVS

11)   Create a Delivery Group, assign users and publish the desktops


Tweaks for XenDesktop VDA:

  • The following Registry key setting will increase Frames per Second (FPS)

[HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\Graphics]
"EncodeSpeed"=dword:00000001

 

  • The following registry key setting will ensure the screen is refreshed and eliminate artifacts of previously opened windows:

[HKEY_LOCAL_MACHINE\Software\Citrix\HDX3D\BitmapRemotingConfig]
"HKLM_EnabledDirtyRect"=dword:00000000

 

Hotfixes, Drivers and Tool Downloads

*Primary Source

Microsoft App-V 5.0 Load Balancing

I have had the pleasure of working with Microsoft App-V for a while now, and HA has always been a very important item. Load balancing was a breeze in App-V 4.x environments: all you needed was a load balancer that could pass * for the port and * for the protocol, and everything worked great. Yes, you can argue that RTSP used 554 TCP, but the random port it chose afterward was the killer.

That has all changed in App-V 5.0. Now Kerberos is a huge deal. Anyone who has worked with SQL clusters will understand how temperamental Kerberos can be when it is not properly set up. Having had the fun of translating Microsoft language into a usable format, I figured I would document, to the best of my ability, how to set up App-V 5 to use Kerberos and be load balanced.

Before I start, I would like to share some of the articles that were used (or discarded) in getting this to work:

Microsoft has a “Planning for High Availability” article, which can be found at http://technet.microsoft.com/en-us/library/dn343758.aspx. It covers HA for the entire environment and is a pretty good read, except for the web services load balancing.

Microsoft has another article, “How to provide fault tolerance and load balancing in Microsoft App-V v5” (http://support.microsoft.com/kb/2780309). I didn’t find this article very useful.

After combining the two articles above and many others, I have found these steps to be pretty straightforward and easy to do.

Assumptions: I am assuming you have two or more App-V 5 servers installed, with Management and Publishing working in the environment. I put both Management and Publishing on the same servers, but that is up to your design. I performed these steps on Windows Server 2012 R2 Standard.

I will be using the following as examples:

Server Names:  vAppV01 and vAppV02
Load Balanced Name:  AppV
FQDN:  dummy.lcl
App-V Management port: 8080
App-V Publishing port: 8081

Step 1:  Have a Load Balancer and DNS A record

I tend to use Citrix NetScalers for load balancing on the projects I work on, but any load balancer should work. Just like App-V 4.x, it is easiest to use an LB with * for ports and * for protocols. Again, the security guys will argue that this opens up too much. My point is that it is internal traffic and is not transferring company data; all that is transmitted are the bits needed to launch an application.

Step 2:  Setup an AD Computer Account

Create a computer account in Active Directory with the Load Balanced Name. This will be used to assign the SPNs to later.

Step 3:  Change the IIS ApplicationPool Identity

This is where all the confusion comes in. The information out there regarding the ApplicationPool identity leads you to believe that you need to change it to run as a service account. Doing so will break the syncing of your Publishing servers with the Management service, so we will skip that part and let kernel mode handle Kerberos for us:

  • Navigate to C:\Windows\System32\inetsrv\config and make a backup of ApplicationHost.config
  • Now we need to edit two parts of this file, both found at the bottom and shown below.
    <location path="Microsoft App-V Management Service">
    <system.webServer>
    <security>
    <authentication>
    <digestAuthentication enabled="false" />
    <basicAuthentication enabled="false" />
    <anonymousAuthentication enabled="false" />
    <windowsAuthentication enabled="true" />
    </authentication>
    </security>
    <webdav>
    <authoring enabled="false" />
    </webdav>
    </system.webServer>
    </location>
    <location path="Microsoft App-V Publishing Service">
    <system.webServer>
    <security>
    <authentication>
    <digestAuthentication enabled="false" />
    <basicAuthentication enabled="false" />
    <anonymousAuthentication enabled="false" />
    <windowsAuthentication enabled="true" />
    </authentication>
    </security>
    </system.webServer>
    </location>
  • In both locations, change the <windowsAuthentication> element to read:
    <windowsAuthentication enabled="true" useKernelMode="true" useAppPoolCredentials="true" />

Now reboot your server to verify that changes have taken effect.

Step 4:  Adding SPN’s to Active Directory

Now that the file has been changed, we need to set up the following SPNs so AD can provide Kerberos authentication for both the App-V Publishing and Management roles.

Run the following commands with Domain Admin rights:

setspn -a http/<server>:<port> <domain>\<LB Name>
setspn -a http/<server.FQDN>:<port> <domain>\<LB Name>

Examples below:

  • setspn -a http/appv:8080 dummy\appv
  • setspn -a http/appv:8081 dummy\appv
  • setspn -a http/appv.dummy.lcl:8080 dummy\appv
  • setspn -a http/appv.dummy.lcl:8081 dummy\appv
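The four commands above can be generated from your own names with a small shell sketch. The helper name and argument order are my own; the values passed at the bottom are the example names from this post (LB name appv, domain dummy, FQDN appv.dummy.lcl, ports 8080 and 8081).

```shell
#!/bin/sh
# Sketch: print the setspn commands for a load-balanced App-V pair.
# Pass the LB short name, the NetBIOS domain, the LB FQDN, then one
# or more ports; run the printed commands with Domain Admin rights.
gen_spn_cmds() {
    lb_name="$1"; domain="$2"; fqdn="$3"; shift 3
    for port in "$@"; do
        echo "setspn -a http/${lb_name}:${port} ${domain}\\${lb_name}"
        echo "setspn -a http/${fqdn}:${port} ${domain}\\${lb_name}"
    done
}

gen_spn_cmds appv dummy appv.dummy.lcl 8080 8081
```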

Step 5:  Your Database

Nothing to add or change in the database.

Step 6:  Your Content Share

Nothing to add or change on the Content Share.

Step 7:  Final Step

Now, to make sure the Publishing servers don’t go across to the other Management server, I make one final change:

Edit the hosts file on each App-V server to point the LB name at the server’s own IP.

example:

If the IP for vAppV01 is 192.168.1.1, the IP for vAppV02 is 192.168.1.2, and the LB name AppV resolves to 192.168.1.3, the hosts files should read like this:

Hosts File vAppV01:

192.168.1.1                 AppV

Hosts File vAppV02:

192.168.1.2                 AppV

 

Conclusion:

You have now successfully set up load balancing for App-V 5. It is not as complicated as it seemed when I first started this journey, but again, I found no single place that documented everything needed for App-V.

 

“You had me at Login”

This was one of my favorite moments on the expo floor at Citrix Synergy 2013 this week in Anaheim. I took an important client down to spend some time with Pierre and team from Norskale to do a deep dive into their VUEM product.

Norskale VUEM Booth at Synergy

We have been getting stellar results with this product at our client sites and thought it would be perfect for this particular environment. This client has a widely dispersed national company with a large PC desktop footprint and a well established XenApp and XenDesktop deployment that is rapidly growing.

We went through the range of features in the framework, first from the user viewpoint, by logging in, loading a complex set of user settings and modifying them in real time. We looked at various use cases: users populating their own Start Menus and Desktops from an admin-provided list of apps, and users easily assigning their own printers and setting their own defaults. We then looked into the admin console and the details of the integration with Microsoft Roaming Profiles and Citrix Universal Profile Manager, i.e. how to implement folder redirection and use VUEM-provided default settings to exclude files that cause known issues with common apps such as Chrome. We examined the SQL backend and reviewed the architecture for placing the broker services across multiple locations, along with options for SQL clustering and mirroring. We went through the clever CPU and memory optimizations that VUEM provides (which can increase scalability by 25%) and talked at length about integrating this framework into an existing login script/GPO-based environment. The discussion got deeper and more technical and went on until they closed the expo floor.

It was a great session, and when it was all over I turned to my client and asked what he thought. He said, and I quote, “You had me at Login!” While I am used to it, a sub-15-second login time is a major revelation (and productivity boost) for most organizations.

He went on to explain that with all the requirements of their environment from security, to drive mappings, to user and machine policies, that login times were often well over a minute and sometimes quite a bit more. While they value and need all these other great features, just giving the users all the settings with a smoking fast login was enough!

VUEM is definitely a fantastic product for managing your user environment, whether it is physical, virtual or both. It is my personal “Best in Show” from Synergy this year. If you are interested in learning more, contact us for a demo.