Saturday, January 18, 2014

A Dual-Pronged Approach To Helping The Cancer Fight

Most of you, the readers of this blog, are of the "tech geek" persuasion. We look to use our analytical skills and Jedi arts with technology to design solutions and improve quality of life. Heck, we even put our engineering skills to work making people's lives more fun and enjoyable.

But, let me put the fun aside for just a minute...

I spend most of my day job helping organizations and businesses of all sizes solve problems in the hopes of driving more profit and making IT operations more efficient. There is absolutely nothing wrong with this at all. But occasionally, I get the pleasure of presenting to an organization that is looking to employ technology for a different overall goal. These organizations most likely rely on philanthropic means to drive a good portion of their revenue and operating budget, and they use technology to make our lives better at the most basic (and necessary) level, whether that is providing more effective education or employing high performance computing environments and petabytes of data storage to accelerate finding cures for disease and illness.

Sometimes solving a problem of illness such as cancer requires simultaneous efforts from every one of us. If every person follows this basic mentality, the level of support multiplies exponentially. In my case, I look to better inform these organizations, through my career, about the technology they can purchase and utilize to arm their researchers with state-of-the-art tools to devise cures for disease, but I also look to give back personally, through my own financial means, to help them acquire that technology.

Giving back can be done in multiple ways, be it financially or by volunteering your time. Either way, if we all pull together our various talents and resources, we can hope to accelerate research and ultimately save lives.

Starting just before the holiday season, I joined my best friend back home in Alabama in an effort to raise money for the American Cancer Society's Relay For Life team she and her family and friends will be coordinating. As part of this team, I am hoping to contribute and raise funds in memory of her mother, who passed away after a long battle with cancer.

If you are looking for ways to give back to causes such as this, I encourage you to donate your time and help a local organization for whatever cause you feel strongly about. Make a difference from multiple angles!

For those who would like to donate and help me in my campaign, you can do so here.

Wednesday, January 15, 2014

EMC Elect 2014 Announced

First off, I wanted to say how honored and humbled I am to be a returning EMC Elect member in 2014. I learned so much in my first year as an Elect that I can only imagine what is in store as I learn from this fantastic group of new and returning members.

For those readers who are not sure what "EMC Elect" is about, or who have never heard of it, let me explain. As posted on our EMC Community Page (login/account required):

"EMC Elect is a community-driven recognition and thank you for individual's engagement with EMC as a brand over the last calendar year."

While that is the most concise and accurate high-level description of the program, what it takes to become EMC Elect is, in my mind, where the true value of this program shines through.

There are three pillars that each member selected into this program stands on:
  • Engagement - EMC Elect revolves around a person's social engagement and advocacy for the EMC brand, its products, and its philosophy. Engagement usually takes place through social media outlets such as Twitter, Facebook, and blogs, but it can also include the level of engagement and contributions (speaking opportunities, etc.) that a member makes at industry events and conferences.
  • Commitment - This attribute comes in the form of being involved in industry conversation on a consistent basis, particularly surrounding EMC technology topics. More importantly, commitment is shown through constructive feedback that helps improve not only EMC Elect and EMC's products but the technology industry as well.
  • Leadership - To put it simply, it's all about initiative. EMC Elect members are always ready to take the opportunity to engage and represent for the betterment of the community and the EMC brand.
I want to go ahead and congratulate those that have been announced this morning as EMC Elect 2014. In particular, my fellow co-workers in the EMC Data Protection and Availability Division:
If you have a minute, take a look at their respective blogs and give them an EMC Elect congratulatory Twitter follow!

While all of the EMC Elect 2014 should be recognized, the real winners in all of this are EMC's customers and the industry as a whole. I speak for the group of Elect in saying that our main mission is to be advocates for the EMC brand. You are probably thinking, "What does this mean for me?"
Our mission of advocacy is twofold: to provide you, the readers of our blogs and tweets, knowledge and a transparent view of the world of EMC, and, more importantly, to listen and provide EMC with valuable feedback that ultimately means better service and solutions for you, the customers.

So if you visit an EMC Elect blog, comment and share your views and feedback. Or, if you see an EMC Elect member at an industry event, feel free to engage, converse, and share knowledge; it's what we as members are committed (and eager) to do!

UPDATE: Here are blogs from other EMC Elect 2014 members and what they are saying about the program and today's announcement...

Friday, November 22, 2013

11/22/2013: When DataCenter Tech Entered The Living Room


This will come as no surprise to those of you who have been following my Twitter feed these past few weeks: I am obsessed with the launch of Microsoft's Xbox One! The video game entertainment industry is exploding, and recent gaming releases have raked in revenue that rivals the biggest Hollywood blockbuster ticket sales. So it's no surprise that the level of investment entertainment companies continue to make in games is increasing.

While which gaming system to get is a religious war for a lot of folks, I think whether you go with Sony's PlayStation 4 or Microsoft's Xbox One, you are walking into a world that quite frankly will raise the bar when it comes to the level of immersion in these games. Case in point: game developers can leverage APIs that talk to an app on your smartphone to actually call you. That's right, your virtual smartphone rings in the game, and the smartphone sitting next to you rings!

So, aside from resolution up-scaling controversies and "pixel shader" unit-count debates, there are many more subtle aspects that I feel are crucial to making this new generation of video game entertainment great.

What excites me more about the Xbox One is what is under the covers, and where the underlying technology Microsoft is using originated: Microsoft's Windows Hyper-V and Windows Azure components. One can think of it this way: essentially, there are pieces of enterprise data center technology sitting next to your television. I'll even stretch the definition and say that it could be considered a mini hybrid cloud!

So what on earth am I talking about? Well, as demonstrated, the Xbox One can quickly and seamlessly switch between various entertainment modes. You can go from driving the Lotus E21 Formula One car in Forza Motorsport 5 to watching ESPN live through your cable provider in seconds. Then, if you decide, you can switch to Netflix in seconds, all while the video game and ESPN keep running in the background. So what is the technology enabling this feature? It's Hyper-V, arguably making the Xbox One a mini private cloud.

Now, you aren't running a full-blown implementation of Windows Server 2012 R2 with the Windows Azure Pack inside the box. Rather, through intelligent code re-use, the ability to run three different OSes simultaneously on the Xbox One hardware is clearly where using Hyper-V technology makes sense.
So where is the "public cloud" component of my hybrid cloud statement? Well, it's essentially Xbox Live, Microsoft's online gaming platform. It isn't a secret that Xbox Live leverages Microsoft's Azure public cloud infrastructure. According to Microsoft, Xbox Live utilizes 300,000 servers deployed globally in Azure. By leveraging Xbox Live services in the cloud, the compute power of the Xbox One doesn't stop at the console hardware.
Just like in an enterprise data center, you can augment the processing power available to you by leveraging public cloud resources. In the case of Xbox game developers coding to the Azure Development Platform, they now have the power of programming in-game capabilities that can off-load certain compute functions to the public cloud. This is where the possibilities can be limitless.
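
To make this concrete, here is a minimal, purely illustrative sketch of the pattern in Python. To be clear, this is not an actual Xbox or Azure API; the endpoint, payload shape, and function names are all hypothetical. It simply shows how a game client might hand an expensive computation (say, AI pathfinding) to a cloud service in the background while the local game loop keeps running:

    # Illustrative sketch only -- NOT a real Xbox or Azure API.
    # Pattern: hand an expensive computation to a cloud endpoint in a
    # background thread so the local game loop is never blocked.
    import json
    import threading
    import urllib.request

    # Hypothetical cloud service that performs server-side pathfinding.
    CLOUD_ENDPOINT = "https://example-game-compute.invalid/api/pathfind"

    def offload_pathfinding(world_state, on_result):
        """Send the heavy computation to the cloud; invoke on_result
        with the parsed response when it arrives."""
        def worker():
            payload = json.dumps(world_state).encode("utf-8")
            req = urllib.request.Request(
                CLOUD_ENDPOINT, data=payload,
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                on_result(json.loads(resp.read()))
        threading.Thread(target=worker, daemon=True).start()

    def apply_cloud_path(result):
        # Fold the cloud-computed result back into local game state.
        print("cloud returned", len(result.get("waypoints", [])), "waypoints")

    # The console keeps rendering at full frame rate while the cloud
    # crunches the numbers in the background.
    offload_pathfinding({"npc_id": 42, "goal": [10, 3]}, apply_cloud_path)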

Launches like this only come around every 7-10 years. But this particular console refresh cycle is different and more exciting to me, since I am seeing enterprise technology that I directly work with in my day job not simply making people productive, but bringing fun and joy to life as well.

Happy Gaming!

Tuesday, November 12, 2013

Getting Your Backup and Recovery Process On Foils - Part 2



In the first part of this series of blog posts, I described how I think snapshots are misunderstood and, quite frankly, abused. This abuse can ultimately make you look like a fool given the right disaster scenario. In this second part, I want to dive a little deeper into the bits and bytes with an example of how snaps can actually be implemented to make you a backup hero!

Let's take a look at the data flow of what I am alluding to here. This is a visual example of how you can leverage a Backup and Recovery architecture (in this case using EMC NetWorker Snapshot Management together with the NetWorker Module for SAP) to truly offload the Backup and Recovery burden from your SAP with Oracle landscape. The best part is that, aside from automating the snap orchestration, the entire end-to-end process is designed to take advantage of protection storage with deduplication, making it more cost-efficient than using production capacity for versioning and longer-term data retention.

Let's look at this process flow step-by-step (a conceptual code sketch of the orchestration follows the list):
  1. Starting at the production host, with the NetWorker Module for SAP and NetWorker Snapshot Management installed, interfacing with the application is done seamlessly through SAP BR*Tools' brbackup mechanism. It is up to brbackup (or any application's native backup utility) to prepare the data in a consistent state so that an application-consistent copy of the data can be created. In this same step, brbackup makes calls directly into NetWorker's backint process, providing NetWorker with a detailed list of all the data structures for backup processing.
  2. NetWorker Snapshot Management interfaces directly with the array (and, depending on the use case, certain types of disaster recovery solutions such as EMC RecoverPoint) to orchestrate the actual snap creation of the production volumes coinciding with the list of data structures received from SAP itself. These snaps can be local snaps on the same array subsystem, or they can be replicated clones on a remote array, as I have highlighted here.
  3. The interaction with the production host (which is servicing our application for our end users) is now complete, and the host is left undisturbed. Processing shifts to a designated proxy/mount host that automatically mounts the replica or snap of the production data structures and is tasked with processing the full deduplicated backup transfer to protection storage.
    In this particular case, we are leveraging advanced NetWorker integration with EMC Protection Storage based on Data Domain via the DD Boost transport over an IP or Fibre Channel network. Keep in mind, however, that you could leverage any storage supported as a backend storage device by NetWorker; you just won't get all the DD Boost goodies. :)
  4. With the backup process complete on our proxy host, the NetWorker Snapshot Management module communicates with its counterpart over on the production host. This starts the "success" reporting chain back up the stack through NMSAP's backint and finally back to brbackup, completing the backup cycle with the application.
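
To tie the four steps together, here is a conceptual sketch in Python of the orchestration logic. This is emphatically not the real NetWorker/NMSAP API; every class and method name is hypothetical, and the stubs exist only to make the control flow executable:

    # Conceptual sketch of the four-step flow above -- NOT the real
    # NetWorker/NMSAP API. Every class and method name here is
    # hypothetical; the stubs exist only to make the flow executable.
    from dataclasses import dataclass

    @dataclass
    class Snapshot:
        volume: str
        snap_id: str

    class Array:
        """Stand-in for the storage array's snapshot interface."""
        def resolve_volumes(self, file_list):
            # Map application files to the production volumes backing them.
            return sorted({path.split("/")[1] for path in file_list})

        def create_snapshot(self, volume):
            return Snapshot(volume, "snap-" + volume)

    class ProxyHost:
        """Stand-in for the designated proxy/mount host."""
        def mount(self, snaps):
            return ["/mnt/" + s.snap_id for s in snaps]

        def backup(self, mounts, target):
            # The dedup-aware transfer (e.g. over DD Boost) happens here.
            print("backing up", mounts, "to", target)

    def run_backup(array, proxy, target, brbackup_file_list):
        # Step 1: brbackup has quiesced the database and handed us
        # (via backint) the list of data structures to protect.
        volumes = array.resolve_volumes(brbackup_file_list)
        # Step 2: orchestrate array-level snaps of exactly those volumes.
        snaps = [array.create_snapshot(v) for v in volumes]
        # Step 3: the production host is left undisturbed; the proxy
        # host mounts the snaps and streams a deduplicated backup.
        proxy.backup(proxy.mount(snaps), target)
        # Step 4: report success back up the stack
        # (NMSAP -> backint -> brbackup) to close the backup cycle.
        return [s.snap_id for s in snaps]

    print(run_backup(Array(), ProxyHost(), "protection-storage",
                     ["/oradata/system01.dbf", "/oralog/redo01.log"]))

The point to notice is that the production host's involvement ends after step 1; steps 2 through 4 run entirely against the array and the proxy host.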

Given the integrated and coordinated nature of NetWorker managing this entire process with the application, NetWorker also offers the ability to manage snapshot retention, so you can truly gain the flexibility in RPO and RTO I mentioned in the third bullet item of my previous post.

The key with this approach is that we minimized the amount of processing and "tasks" added to the production host, and we leveraged the full potential of our storage investment to move backups to the next level. While all of this was going on, your application end users didn't even notice! Trust me, when there is a disaster, they will notice just how quickly you can get them back up and running again; it's only a matter of time.

Going back to the sailing analogy in my first post, the main goal here is to design a backup and recovery solution that, like a regatta crew, does everything possible to make the boat go faster, never slow it down. Looking for a backup solution that not only fully protects your mission critical application data, but does so in a manner that allows you to enhance your application SLAs (gaining boat speed, if you will) is certainly advantageous.

Wednesday, October 30, 2013

Making the switch to mirrorless...


The DSLR in my photo kit has officially been replaced. I started out with a Canon Rebel, moved to a Nikon system with the D300 as my workhorse, and now am losing some pounds, without sacrificing quality (in my opinion), by going mirrorless with the Olympus E-M1.

In the photography market, the "mirrorless movement" is starting to accelerate in the news, and I can really see why: practicality and convenience in the form of weight savings and smaller size. In fact, this idea has gained much greater traction outside the US than it has here, but I think that is about to change going into 2014.

The Olympus E-M1 I just purchased is literally about half the weight of, and much smaller than, my previous DSLR. But don't let size and weight fool you. The Micro Four Thirds system this camera is based on provides an incredible library of quality glass to attach to a camera body with a state-of-the-art sensor! In this video I unbox the new E-M1 and mention some of the great features setting it apart in the industry.

Last but not least, I want to thank the folks at Newtonville Camera for fantastic service! In this day of e-commerce, it is great to have a real camera store to walk into and make my purchases from!


Thursday, October 10, 2013

Getting Your Backup and Recovery Process On Foils - Part 1

Photo taken by Alex Almeida during an Oracle Team USA practice session, San Francisco Bay, CA, August 2013
A couple of weeks ago, we witnessed Oracle Team USA make a heroic comeback in the 34th America's Cup on San Francisco Bay. It was clearly the biggest comeback in the regatta's history, and arguably one of the biggest comebacks in all of sports! I was certainly pulling for the team that would keep the cup here in America, but I always look forward to these competitions to see the world's elite sailors work together as a team to flawlessly and efficiently navigate the fastest path around the course. It is amazing to me how each sailor has put in years of practice with their respective crews so that each tack and jibe is executed perfectly and each team member moves purposefully in concert with the rest of the crew. It also goes without saying that the new AC72s are a big reason why the racing action is much more exciting than in years past; the boats themselves are a marvel of technology alone.

I found the America's Cup to be an appropriate backdrop and analogy for discussing backup and recovery of an SAP with Oracle environment. Not because of the clear ties to the database vendor sponsoring Team USA, but rather because of how a sailing team in any regatta operates. There are lots of moving parts, and the winning solution has the most efficient design (the fastest boat) and a crew that integrates like clockwork toward a main goal. In fact, you can apply a lot of what I am going to talk about here to any mission critical application use case.

When backing up mission critical transactional data, the ideal recipe calls for the following:
  • Create a full retention copy on cost-effective storage as quickly as possible 
  • Minimize the impact of the backup process on the application and, more importantly, its end users (NO IMPACT is really what we are going for)
  • Provide for an "as easy as Apple Time Machine" recovery mechanism, where my Recovery Point Objective (RPO) is also as granular and flexible as possible.
In talking to more and more DBAs and IT administrators, I hear that one of the biggest pain points is tied particularly to the second bullet point above. As most of them are experiencing (and maybe you are as well), most transactional systems now essentially operate at peak load around the clock, so any task that consumes processing cycles in your application environment is scrutinized.

To continue the sailing analogy, the boat and the crew are the two main components that together combine for a winning solution. Each one operates to benefit the other, but they never get in each other's way. The crew on the boat clearly puts a lot of effort into making sure sails are trimmed correctly and tacks and jibes are executed flawlessly, but that is all done without slowing down the boat. This is most obvious when crew members don't get into position fast enough or don't make properly timed adjustments, leaving precious speed on the water. Clearly, you also want to protect your business data in a fashion that doesn't cause the business to slow down. So how do we go about performing essential backups without upsetting the datacenter waters?

One of the approaches we are hearing a lot about in the data protection industry is "snap and replicate." This provides data versioning, which offers a flexible RPO and a quick Recovery Time Objective (RTO), plus replication of those copies to another array, most likely at another physical location, providing some degree of offsite protection.

As you start to dig deeper into this approach, you have to ask, "What happens if I lose my primary volume?" The answer is inevitably that the metadata recipe for re-creating the version of data I need (the snap) is useless to me. That metadata still references data blocks residing on the original storage logical unit number (LUN), blocks that are essential for rebuilding any part of the dataset at any given RPO.

Let's forget about actual primary LUN failure for a second and just concentrate on application/software data corruption. By the time you realize that corruption has taken place, it is often too late to recover from snaps, because the corruption has already propagated through too many of them.
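
To illustrate why, here is a toy model of copy-on-write snapshot metadata in Python. It is grossly simplified and hypothetical (no real array implements snaps this way), but it shows the core dependency: unchanged blocks in a snap are just read-throughs to the primary LUN, so losing the LUN invalidates the snap, and corruption that lands before a snap is taken gets preserved right along with the good data.

    # Toy model of copy-on-write snapshot metadata -- grossly
    # simplified and hypothetical, but it shows why a snap depends
    # on the primary LUN it was taken from.

    class Lun:
        def __init__(self, blocks):
            self.blocks = dict(blocks)   # block_id -> data
            self.snaps = []

        def snapshot(self):
            # A snap starts out as pure metadata: an empty set of
            # preserved blocks plus a reference back to this LUN.
            snap = {"preserved": {}, "base": self}
            self.snaps.append(snap)
            return snap

        def write(self, block_id, data):
            # Copy-on-write: preserve the old block in every snap that
            # hasn't already preserved it, then overwrite in place.
            for snap in self.snaps:
                snap["preserved"].setdefault(block_id, self.blocks[block_id])
            self.blocks[block_id] = data

    def read_from_snap(snap, block_id):
        if block_id in snap["preserved"]:
            return snap["preserved"][block_id]
        # Unchanged blocks still live only on the primary LUN...
        return snap["base"].blocks[block_id]

    lun = Lun({0: "clean-A", 1: "clean-B"})
    snap = lun.snapshot()
    lun.write(0, "new-A")
    print(read_from_snap(snap, 0))   # "clean-A" -- preserved copy
    print(read_from_snap(snap, 1))   # "clean-B" -- read-through to the LUN

    # Failure mode 1: lose the primary LUN and block 1 is gone; the
    # snap's metadata points at storage that no longer exists.
    lun.blocks.clear()
    try:
        read_from_snap(snap, 1)
    except KeyError:
        print("snap is useless without the primary LUN")

    # Failure mode 2: if corruption hits the LUN *before* a snap is
    # taken, the snap faithfully preserves the corrupt blocks -- so
    # once the corruption is old enough, every snap in retention has it.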

But as much as folks may think I am "snap bashing," snaps aren't all bad! They can actually be very helpful! The key is how they are used. When leveraged as a complement to backup, and not as the lone backup process, you really start to see your application Backup and Recovery processes step out of the limelight, just like the crew on an AC72 in the America's Cup. To further validate my point, can you name for me the "grinders" on the Oracle Team USA boat?

The right Backup and Recovery architecture for mission critical transactional data separates the winning businesses from the losing ones and can make your IT team look like champions, essential to business success. On the other hand, if it is not implemented with the right technology, you quickly have a team that looks like serious ballast that not even the strongest winds can move!

Stay tuned for a future post where I expose the bits and bytes of one way to go about implementing this principle.

Wednesday, October 2, 2013

What Started It All

Stay Away From the Light

I posted this to Flickr in January 2006, on a cold New England winter night. This photo, taken on a Sony Mavica that wrote to mini CDs, is what led to my obsession with photography. It was "pure exposure luck," by the way. I knew very little about photography and exposure theory when I snapped it, but I remember noticing the effect the lamppost was having given the foggy conditions. Hiding the lamppost behind the tree created this awesome sci-fi-like scene. It was only when I got the photo onto my PC that I realized the camera had captured exactly what my newly discovered photographic eye intended. But repeatable? Probably not. I clearly needed to understand the technicalities and art of exposing light, and I was hooked. As some of you may have experienced, capturing that perfect photo doesn't happen by sheer luck too often, and more often than not frustration ensues at the PC monitor during the photo review process. So off to learn about this new hobby I went!

Usually one would learn the proper theory and settings behind a camera from books, manuals, or maybe a photography course in school, but I didn't go about it that way. Most of my learning came from other photographers who shared their experience on "The Interwebs," photographers I didn't even know personally and to this day still haven't met. It was through social media channels in early 2006, before social media, Facebook, and Instagram were in the general public's vocabulary, that I came across this website called a "blog," titled Thomas Hawk's Digital Connection. Through Thomas's blog, my passion for photography grew, along with my passion for technology and social media. He discussed all of those subjects on one single site and had an extensive blogroll which included some world-class photographers. I leveraged this community as my "photography course," enhancing my eye through their photos and blog posts. It also didn't hurt that Thomas blogged occasionally about technology in a general sense.

So why did I choose to start "Exposing Tech"? Essentially, I thought it was time for me to break out on my own in the tech blogging scene and get more eyes not only on my photography, but also on my analysis of what I observe in my professional career as a technologist. There is a lot out there, and I certainly won't be able to cover it all, but I hope that through this channel I will be able to inform other like-minded readers about what I feel matters in today's technological and geek-filled world. I can't promise that my subject matter will be focused on one particular area, but at the same time, I hope to grow my writing and analytical skills with whatever subject I blog about, which hopefully means each blog post will get better. At least that is the idea. ;)

My goal for this blog is to tell the "technology story" through my still photographs and videos in an original way. I feel the visual arts can certainly be a lot more powerful than the written word. 

I look forward to journeying through the technological metropolis we live in together, hopefully striking a nice balance between visual, audible, and written communication to filter what truly matters from the hype and noise.