Up in the Air in the World of Social Media

Today I am honored to share with you a guest post by Allison Rice, Marketing Director at Amsterdam Printing. Allison writes about what is going on in the online marketing industry. This industry made its first big step into the cloud through a wealth of SaaS offerings and is now moving toward more integrated solutions driven largely by Big Data and Marketing Automation. I will follow up on these topics in separate blog posts.

In some ways, the world of social media is starting to look an awful lot like the U.S. airline industry: AirTran is now part of Southwest, Northwest is the same as Delta, United and Continental are one and the same, and there’s no longer any difference between US Airways and American Airlines.

A Social Media Buying Frenzy

Meanwhile, the giants of social media have launched a buying frenzy that mirrors what’s happening with America’s air carriers. Last year, Salesforce.com purchased Buddy Media, Oracle announced that it would acquire Vitrue and Google paid a reported $250 million for Wildfire, a company that develops software which targets the personal data in social media user profiles to help companies reach the people who are most likely to buy their products.

Why Wildfire?

Why would Google plunk down a quarter of a billion dollars for a four-year-old company that started out with just two employees? The obvious answer is that the purchase gives Google a stronger position in its battle to dominate the social media universe. Since it was established in 2008, Wildfire has become a leader in software that facilitates contests, promotions and a wide range of marketing campaigns on social media giants such as Facebook, Twitter and LinkedIn. When Google bought the company, Wildfire’s software was the force behind social media marketing for more than 16,000 businesses, some small and some, like Amazon, Target and Electronic Arts, really large.

Wildfire’s software gives companies of all sizes the wherewithal to interact with their fans, followers, customers and potential customers by managing their apps, pages, tweets and ads all at one time. For example, you can find out what’s being said about you, your industry and your competition on thousands of websites, and you can monitor and analyze hundreds of social media conversations. In addition, you can publish text, images and videos and monitor their impact on various social media platforms.

Possibly the most important way this technology helps your business is that you can respond to potential sales leads from hundreds of social networking sites, blogs and forums, and get more than just an educated guess at how much return you are getting on your advertising dollar.

Not Without Competition

Google obviously hopes its new acquisition will continue to spread like Wildfire, but the new partners are not without competition. Salesforce.com bought both Buddy Media and Radian6, at a total price in the neighborhood of $1 billion, and combined the best features of both to create the Salesforce Marketing Cloud. You can use the Cloud to glean data from Facebook, Twitter, YouTube, LinkedIn, blogs and online communities and to connect with your potential customers with videos, images and links. In addition, the Cloud allows you to launch ad campaigns on Facebook and elsewhere while you gather and respond to content posted on social media sites.

Not to be left out of the industry’s merger madness, Oracle acquired Vitrue, a platform developed to provide the tools necessary to easily manage social marketing campaigns on Facebook, Twitter, YouTube and Google.

Regardless of whether you choose to make your marketing more efficient by using the latest social media tools, there are certain guidelines you should follow to get the best possible return:

Listen to what your potential customers are saying by finding out what they are posting online.

Focus on what you do best. Using a "rifle" approach to marketing is better than using a "shotgun."

Quality is more effective than quantity. You’ll be better off if 1,000 people read your content, share it with their friends and talk about you online than if 10,000 people connect with you once and never come back.

Patience is important. You probably won’t attain overnight success. Instead, you should be prepared for a long-term marketing campaign.

Sharing is one key to success. If you publish quality content, your followers will share it with their own audiences on Twitter, Facebook and LinkedIn, which will also do wonders for your Google rankings. And don’t be afraid to share and talk about content published by others. After all, you expect them to do the same for you.

Connect with people who can help you succeed. Build online relationships with those who have quality audiences that might be interested in your products and services. Maybe they will share your content with their own followers.

Value is important. Don’t spend all your online time directly pushing your products and services. Focus on creating great content and nurturing relationships. If you are lucky, these relationships eventually will lead to word-of-mouth marketing for your company.

Acknowledge people who reach out to you. I think we’ve already determined that online relationships are important. Don’t ignore people who take the time to get in touch with you.

Be accessible to your audience. Publish content continuously and participate in online conversations. If you fall off the face of the earth for a few weeks, your followers will find someone else to follow.

Allison Rice is the Marketing Director for Amsterdam Printing, a leading provider of custom pens and other promotional products to grow your business and thank customers. Allison regularly contributes to the Promo & Marketing Wall blog, where she provides actionable business tips.

Scaling very large Clouds with Swarm Computing

In my previous post ‘Swarm Computing – Cloud 2.0?’ I wrote about the basic principles of Swarm Computing. Now I want to dig a bit deeper into why I believe Swarm Computing is a natural evolution of Cloud Computing, and specifically what benefits it can bring to the Cloud.

Cloud Computing, although the industry has been talking about it for many years and major vendors have put lots of effort into developing Cloud solutions, still has significant untapped potential. In particular, most vendors and most enterprises still look at the Cloud as a sourcing model only: you run your workloads on some other platform, and you save cost along the way. Innovation, however, comes not from cost saving but from agility. Spending wisely helps you stay agile, but it is not enough. Business agility, as far as IT is concerned, comes from the ability to develop, deploy and operate new, complex IT services (such as an online store) quickly and efficiently. This is enabled through Service Modeling, which connects all IT disciplines from planning and development to automation and monitoring. This untapped potential will be recognized and realized over the coming years.

The increased adoption of Cloud Computing will turn two problems of Cloud infrastructures into limiting factors. One is scalability, the other is handling the unknown. I will write about the second problem in a later post. For now, let us focus on scalability.

Automated scalability is one of the value propositions of Cloud Computing. You put your IT service or application out there, and it grows and shrinks with demand. You pay for just what you use, and there is virtually no limit to the size to which you can scale. So how can scalability be a problem of Cloud Computing? I am talking about a different perspective here: the Cloud provider’s, not the Cloud customer’s.

If you think about how IT infrastructures are managed today, it is essentially a matter of centralized control. Compute instances (e.g. virtual servers, virtual network devices) send information to one or more central tools and receive commands in return. They can do very little on their own. They might ask their hypervisor for more RAM, but that’s about it. Almost everything else is centralized. It’s a bit like running a team through micro-management. I confess I do believe computers have a personality – they certainly tend to have a weird sense of humor – and don’t like being talked down to. But the real problem with the centralized approach is scale. It works well up to a certain number of compute instances. Beyond that point it becomes impossible to maintain the same level of efficiency: polling intervals get too long, latencies get too high, and the amount of information received no longer fits onto any reasonable screen. Slicing an extremely large IT infrastructure into more manageable chunks can help, but it’s not a solution, merely a workaround.

Teams become efficient when the rules are meaningful and understood, and when team members are given as much freedom as possible to choose how to do their jobs. The same is true for compute instances. Imagine if virtual servers had enough artificial intelligence to take care of their own requirements (e.g. securing sufficient compute resources to run their workloads), and also had the capability to collectively make the right decisions as a compute swarm to manage the swarm infrastructure they run on. Just as bird swarms can collectively decide to turn in a certain direction, compute swarms could be enabled to switch grid hardware on or off (or even place orders with the hardware vendor). They could decide to migrate to another physical site when too many physical hardware failures occur or network conditions deteriorate. They could evict a member that behaves badly. They could pass along patches and hotfixes.
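What might such a rule look like in practice? Below is a minimal, purely illustrative sketch (hypothetical names and thresholds, not any real product API) of one simple rule a swarm member could follow: each instance watches the hardware failure rate it observes, gossips its vote to its peers, and the swarm migrates to another site only once a quorum of members agrees.

```python
# A minimal, hypothetical sketch of one rule a swarm member might follow: each
# compute instance tracks the failure rate it observes, casts a vote to migrate
# once its local view crosses a threshold, and gossips that vote to its peers.
# The swarm acts only when a quorum of members agrees; no central controller is
# involved. All names and thresholds are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class SwarmMember:
    name: str
    failure_threshold: float = 0.05   # tolerated hardware failure rate
    quorum: float = 0.6               # fraction of peers that must agree
    peer_votes: dict = field(default_factory=dict)

    def observe(self, failures: int, samples: int) -> bool:
        """Local rule: vote for migration if the observed failure rate is too high."""
        rate = failures / max(samples, 1)
        vote = rate > self.failure_threshold
        self.peer_votes[self.name] = vote
        return vote

    def receive_gossip(self, peer_name: str, vote: bool) -> None:
        """Record a peer's vote received via gossip; no central tool collects these."""
        self.peer_votes[peer_name] = vote

    def swarm_decision(self) -> str:
        """Each member evaluates the same rule independently, so the swarm converges."""
        votes = list(self.peer_votes.values())
        if votes and sum(votes) / len(votes) >= self.quorum:
            return "migrate-to-alternate-site"
        return "stay"


# Example: three members gossip their local observations to one another.
a, b, c = SwarmMember("a"), SwarmMember("b"), SwarmMember("c")
a.observe(failures=4, samples=50)   # 8% failures: votes to migrate
b.observe(failures=1, samples=50)   # 2% failures: votes to stay
c.observe(failures=6, samples=50)   # 12% failures: votes to migrate
for member in (a, b, c):
    for peer in (a, b, c):
        if peer is not member:
            member.receive_gossip(peer.name, peer.peer_votes[peer.name])
print(a.swarm_decision(), b.swarm_decision(), c.swarm_decision())
# all three print 'migrate-to-alternate-site' (2 of 3 votes meets the 60% quorum)
```

The point of the sketch is not the numbers but the shape of the logic: every member applies the same local rule, no central tool collects the data, and the collective decision emerges from the exchange of very small messages.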

It may rightfully be objected that all of the above is possible with a centralized approach as well. The difference is: the swarm approach scales extremely well. Size doesn’t really matter anymore once control is distributed instead of centralized. The main question then becomes: how does one define the ground rules in a way that both allows enough freedom and prevents bad things from happening? It will be interesting to see the first practical experiences with the swarm approach!

With kind regards!

 

Ralf Schnell

Swarm Computing – Cloud 2.0?

In the latest edition of CloudViews Unplugged, Andi Mann and George Watt discuss, amongst other topics, research done at the University of California, Berkeley. The topic of this research is Swarm Computing, which appears to be somewhat related to research on robots. Swarm Computing goes way beyond simply interconnecting many devices through the Internet and enabling them to exchange information. It creates a self-organizing network of entities whose rules of interaction and reaction can produce something resembling a living organism.

Most of you will know the book ‘The Swarm’ by Frank Schätzing, in which protozoa work together to form a collective intelligence that manifests itself by forming bigger organisms. The same principles are applied when robots in remote locations are enabled to work together to accomplish common tasks, e.g. gathering in one location after individual exploration. All robots need to perform some active tasks to complete that mission, but the master algorithm controlling it all sits above the individual algorithms owned by each robot.

The idea behind Swarm Computing is the same: individual devices all have their own software and capabilities, but they are enabled to collectively pursue a bigger goal, an overarching purpose, something that is beyond their individual reach. This is not accomplished through some sort of centralized control instance that has access to all devices, understands their capabilities and instrumentalizes them. Remember the robots, maybe on the Moon or on Mars? There is no practical way of exercising centralized remote control – too much distance from Earth, no command center on site, and far too little knowledge about the situation and condition of every individual robot. Rather, the individual devices need to be able to take care of themselves and to understand and communicate their ability to contribute to an external command, without necessarily understanding the purpose behind that command.

Let’s try to get practical. Imagine we would like to optimize energy generation and consumption in a given country, and let’s limit this to private households for simplicity’s sake. There are nuclear plants, coal plants, solar and wind energy, there are the power grids, and there is a large number of devices demanding power. Now imagine that every device (or at least one per household) were able to understand its own power demand patterns and exercise some measure of control. My central heating, for example, knows I want the water to be at 65°C, but I don’t really care at exactly what time it is heated up, so it has the freedom to decide this on its own within the limits I specify.

Now enable all those devices to exchange some basic messages. Rather than trying to control everything from one central location, devices within a certain proximity (thus sharing a common part of the power grid) can start adjusting their power usage, e.g. shifting it to when the most solar and wind energy is available within that area (say, from solar panels on private roofs), and so each logical part of the national grid starts optimizing itself. On the next level up, grid controllers can start talking to other grid controllers to coordinate power requirements, and talk to the devices below to make them rearrange their power consumption if needed. None of the devices involved needs to understand the whole picture, yet we’d get a self-organizing mesh of devices that is highly flexible and scalable.
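To make this a bit more tangible, here is a tiny, purely illustrative sketch of the local rule a single flexible device might follow (all names, units and numbers are assumptions on my part): it simply shifts its run time to the hour with the highest forecast of renewable supply within its own grid segment.

```python
# A purely illustrative sketch of the local rule one flexible household device
# (say, a hot-water heater) might follow. It knows only its own constraints and
# a renewable-supply forecast shared by devices in its own grid segment; nobody
# orchestrates it centrally. Names, units and numbers are assumptions.

def choose_run_hour(flexible_hours, local_renewable_forecast_kw):
    """Pick the allowed hour with the highest forecast of local renewable supply."""
    return max(flexible_hours, key=lambda hour: local_renewable_forecast_kw.get(hour, 0.0))


# The heater must finish heating before the evening but may choose when to start.
flexible_hours = [10, 11, 12, 13, 14, 15]

# Forecast of solar/wind output within the same grid segment (kW per hour),
# e.g. aggregated from rooftop panels via simple messages between neighbours.
local_renewable_forecast_kw = {10: 1.2, 11: 2.5, 12: 3.8, 13: 4.1, 14: 3.0, 15: 1.8}

print(choose_run_hour(flexible_hours, local_renewable_forecast_kw))   # prints 13
```

A real swarm would of course negotiate iteratively with its neighbours rather than decide once, but the principle stands: each device applies a simple local rule, and the segment-level optimization emerges from many such rules.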

How would this work in IT? Imagine Swarm Computing applied to Infrastructure Management, Automation, Monitoring, Performance and Capacity Management. Rather than following the traditional approach of defining, provisioning and registering instances, we’d have those instances equipped with some basic self-awareness and the ability to communicate with and react to other instances. Cloud isn’t that far away from this today; think about automatically adjusting virtualized workloads across hypervisor grids. The big difference is that with Swarm Computing we wouldn’t use one central piece of automation software. Instead, the virtualized instances would talk to each other and to their nearest grid controller, those controllers would talk to other grid controllers (potentially even outside the perimeter of their own datacenter), and workloads would then adjust themselves.
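As a rough illustration of that hierarchical variant (again with made-up names, not a real automation API), the sketch below shows a grid controller that places workloads locally while it has headroom and otherwise negotiates the placement with a neighbouring controller, possibly in another datacenter.

```python
# An illustrative sketch (hypothetical names, no real product API) of the
# hierarchical variant described above: virtual instances report only to their
# nearest grid controller, and controllers negotiate overflow among themselves.
# No single central automation tool holds the whole picture.

class GridController:
    def __init__(self, name, capacity_units):
        self.name = name
        self.capacity_units = capacity_units
        self.workloads = {}          # workload name -> demanded capacity units
        self.neighbours = []         # other controllers this one can talk to

    def used(self):
        return sum(self.workloads.values())

    def request_placement(self, workload, units):
        """Place locally if there is headroom, otherwise ask a neighbouring controller."""
        if self.used() + units <= self.capacity_units:
            self.workloads[workload] = units
            return self.name
        for neighbour in self.neighbours:
            if neighbour.used() + units <= neighbour.capacity_units:
                neighbour.workloads[workload] = units
                return neighbour.name
        return None                  # swarm-wide shortage: time to switch on more hardware


# Two controllers, possibly in different datacenters, peer with each other.
site_a = GridController("site-a", capacity_units=100)
site_b = GridController("site-b", capacity_units=100)
site_a.neighbours.append(site_b)

print(site_a.request_placement("web-shop", 80))    # prints site-a
print(site_a.request_placement("analytics", 40))   # prints site-b (site-a is full)
```

Nothing in this sketch knows the state of the whole infrastructure; decisions stay as local as possible, which is exactly what makes the approach scale.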

 

 

With kind regards!

 

Ralf Schnell

Give me the blues – Innovation and Standards in Cloud Computing

My colleague Jacob Lamm wrote this wonderful piece on our ‘Innovation Today’ blog. It is about the fundamental question of whether industry standards will accelerate or slow down innovation in Cloud Computing. Personally, I am glad to hear that there’s yet another musician in IT and at CA! You can find the original post here.

Standards promoting innovation? That’s one of the ideas Marv Waschke puts forth in his new book, “Cloud Standards: Agreements That Hold Together Clouds.” Marv makes the case that the existence of standards, whether for cloud computing or any other developing technology, actually accelerates advancement and enables innovation. An interesting idea that got me thinking. (Read my related blog on the topic here.)

I am a blues harmonica player. The premise of the blues, and of improvisational music in general, is that there is a simple agreed-upon rhythm and chord structure that participants work within. As long as you remain within that structure, you can be as creative as you like – piecing together riffs (sequences) in innovative ways to create something unique in real time. Without the structure – or standards – the result would be noise, not the velvety and soulful sound that is the hallmark of blues. When all musicians adhere to the agreed-upon structure, the resulting music can sound as though it were the product of hours of practice by a long-standing band, when in fact it may have been improvised merely seconds ago by musicians who met only minutes before that. True creativity and innovation require a backdrop of structure to work within – especially when the innovation requires more than one person, let alone entire industries.

A comparison can be made between those agreed-upon rhythm and chord structures and cloud computing standards. Unnecessary delay and discord are prevented when musicians understand some common terms and can start playing by agreeing to a few understood premises, such as those dictated by the “12-bar Blues chord progression.” Not a blues artist? Here is the Wikipedia explanation: "The basic 12-bar lyric framework of a blues composition is reflected by a standard harmonic progression of 12 bars in a 4/4 time signature. The blues chords associated to a twelve-bar blues are typically a set of three different chords played over a 12-bar scheme. They are labeled by Roman numbers referring to the degrees of the progression. For instance, for a blues in the key of C, C is the tonic chord (I) and F is the subdominant (IV)."

Imagine if every blues jamming session had to start with a tutorial on those principles? Though daunting to newbies, these principles are second nature to my jamming buddies and me and have been the platform for some pretty impressive blues, if I do say so myself.

The principles are rigorously defined. While on the surface this may seem to stifle creativity, in practice it is the clarity of those principles that allows creativity to flourish. Similarly, a person or organization with an innovative idea will find that by adhering to standards early on, they increase the likelihood of overcoming the obstacles that can prevent an innovative idea from reaching fruition. Recent research from CA Technologies also supports this premise: organizations with a structured, formalized approach to innovation experienced more success than those that took an ad hoc approach.

As Marv writes in his book, “Although clarity and ease of understanding are always desirable qualities in a standard, lack of ambiguity is paramount. A standard that means different things to different people will cause no end of problems when developers try to create systems that comply with the standard.”

Cloud stakeholders can avoid false starts and show-stopping integration issues, and get to the business of service innovation, by agreeing to play by the standards. Of course, the blues have been around a bit longer than cloud computing, and cloud standards are still in their relatively early stages. Over time, and with careful tuning, cloud standards will evolve to ensure harmony among cloud stakeholders and to set the stage for innovation.

Service-based Capacity Management

Capacity Management is an important discipline; it contributes both to cost effectiveness and to service quality. While service disruptions certainly can be caused by code defects, quite a lot of them actually have their roots in insufficient capacity on one of the underlying components. Solving this problem by adding more hardware is a costly approach, so one would assume that the need for good capacity management is a no-brainer for all IT departments, particularly in companies whose reputation and revenue rely heavily on the quality of their IT services.

I admit I was taken by surprise when two customers I recently talked to both let me know that they prefer to ‘just add hardware’ in order to avoid capacity-related problems. After a moment of complete disbelief I found the real reason behind this: those customers did not deny the correlation between capacity management, service quality and cost effectiveness; they simply believed that there is no practical solution to capacity management.

Now why would one think that way? Well, both customers have broad experience in running IT. They both did try to solve this problem. And they both failed. They failed not because they weren’t smart enough, they failed because the traditional approach to capacity management does not work.

Capacity management is usually done bottom-up. You look at individual components – network connections, CPU, RAM, disk space etc. – you monitor usage, you build reports and do a trend analysis, and you plan additional load or add capacity based on this. And you frequently find that you still run into issues. Reality does not fit this model. To avoid problems, IT departments end up adding a lot of capacity headroom, and the capacity management effort becomes inefficient – the amount of budget saved is no longer big enough to justify the effort.

The root question then becomes: why does reality not fit this model? Why do we still run into capacity-related problems? The answer is quite simple: capacity does not scale evenly; it is driven by the capacity demand of your individual IT services.

In order to really plan and predict capacity, you need to understand the capacity requirements of your IT services. This is not a trivial exercise. To succeed, you need to understand all of the components of each service, their dependencies, and their individual behavior. What physical components do these service components run on, how do those scale, and what else is running there? What kind of load, on which component, is caused by a particular service demand?

What is really required is service-based capacity management. The key to establishing this capability is having a service model: one unified model that links all disciplines, from developing and building a service, provisioning it, running it, monitoring and measuring it, all the way up to service portfolio management. A model that is updated automatically in real time when changes in the infrastructure occur, and that then triggers all required changes in other tools. An isolated solution that provides a model for just one single discipline is not going to be very efficient. A unified service model ensures that all disciplines actually use the same model for their tasks and planning exercises. In other words: everyone finally has the same understanding of each of your services.
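As a rough illustration of what ‘service-based’ means in practice, here is a minimal sketch with entirely made-up figures: the service model records how much load one unit of business demand places on each component of a service, so a business projection can be translated directly into expected component utilization.

```python
# A rough, hypothetical sketch of service-based capacity planning: the service
# model records how much load one unit of business demand (here: one order per
# hour in an online store) places on each underlying component. A business
# projection can then be translated directly into component utilization.
# All figures are made up for illustration.

# Per-transaction resource demand of each component of the "online store" service.
service_model = {
    "web-frontend": {"cpu_cores": 0.002, "ram_gb": 0.004},
    "app-server":   {"cpu_cores": 0.005, "ram_gb": 0.010},
    "database":     {"cpu_cores": 0.003, "ram_gb": 0.008},
}

# Capacity currently provisioned for each component.
provisioned = {
    "web-frontend": {"cpu_cores": 16, "ram_gb": 64},
    "app-server":   {"cpu_cores": 32, "ram_gb": 128},
    "database":     {"cpu_cores": 24, "ram_gb": 192},
}

def project_utilization(orders_per_hour):
    """Translate a business projection into expected utilization per component."""
    report = {}
    for component, per_tx in service_model.items():
        report[component] = {
            resource: orders_per_hour * demand / provisioned[component][resource]
            for resource, demand in per_tx.items()
        }
    return report

# Business projection: marketing expects 5,000 orders per hour after the campaign.
for component, usage in project_utilization(5000).items():
    print(component, {resource: f"{value:.0%}" for resource, value in usage.items()})
# highlights the tightest resource (here app-server CPU at 5000 * 0.005 / 32, about 78%),
# so capacity can be added where the service actually needs it.
```

Once the per-transaction figures come out of a real, automatically maintained service model rather than out of a spreadsheet, this kind of projection is what turns capacity planning from educated guessing into a repeatable calculation.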

Once a good service model is in place, predictive analysis becomes an exact science. IT is able to respond confidently and reliably to business demands. Based on business projections, IT capacity can be planned precisely and accurately. The calculation of the cost of new services will prove correct. The return on investment of this approach will be quick and significant.

With kind regards!

Ralf Schnell

5 Tips For Choosing a Managed Hosting Provider [Part Five]

Guest Post by Tarun Bhatti from Rackspace Hosting

Part Five: Apps, control, and cost

You’ve found a provider that offers enough uptime and support to meet your needs. But your work’s not done yet!

Tip: Ask how many other customers are hosting your favorite application with the hosting provider

I’m surprised how many people forget to ask whether their preferred scripting language is supported by a provider, and if so, which version is supported.

Ask your prospective provider these questions:

• How many other customers are using the application that you want to use—say WordPress, Drupal, or Joomla?

• Do they provide Windows Server® or Linux?

• Which versions of PHP, .NET, Python, and Ruby does the hosting provider support?

Verifying that the provider offers the languages and apps you want will prevent a lot of heartache when you go to create your site.

Tip: Consider the level of server control you need

Do you need full server control? Or do you never want to deal with the server at all?

Some advanced developers need SSH access to customize their environments. But if you don’t want to get into the real nitty gritty on a server, then you don’t need this. (Hint: If you don’t know what SSH access is, then you don’t need full server control.)

Tip: Consider cost

Of course, you want a provider you can afford. But while price per month is definitely a factor, it should not be the deciding factor. Know your budget, get answers to the questions above, and then choose the provider.

Also, don’t forget to ask what is included in the package:

• How many domains can you host without paying extra?

• How much bandwidth and storage is included?

Knowing this will help justify the cost of a provider who may offer better service over the long term, but cost more.

I hope you found these tips helpful in choosing the right managed hosting solution for your websites.

 

Tarun Bhatti is Senior Product Marketing Manager for Rackspace Hosting. He is a technology product marketer with more than 10 years of experience in the industry. Tarun is enthusiastic about entrepreneurship and technology, and has worked in various functions, from product marketing/management to IT consulting to software development.

The ‘new’ iPhone …

Ok, this has nothing whatsoever to do with Cloud Computing. But it is just so funny … customers comparing the iPhone 4S with the iPhone 4S! No, that is not a typo.

And by the way, it does have to do with Cloud Computing after all: how do you brand your product and your company in a way that makes your customers admire and adore your product, instead of critically looking for flaws, holes and alternatives? To that end, I recommend watching Simon Sinek on YouTube (http://www.youtube.com/user/simonsinek).

Enjoy!

 

Kind regards!

Ralf Schnell
