Competition for tech pros skilled in cloud technologies is fiercer than ever, according to a new report in The New York Times.
In Silicon Valley, six-figure salaries are common for those with backgrounds in cloud infrastructure; data from the Times suggests that anyone with five years of experience can earn an annual salary of $300,000 (if not more), sweetened with stock options and other perks. Workers with the right combination of skills, meanwhile, face a near-constant barrage of recruiting phone calls and emails.
As Amazon, Microsoft, Oracle and Google build out their respective cloud platforms, the demand for those skilled in building and maintaining cloud-system architecture may only increase. That makes things more difficult for smaller tech firms, which may not have the capital to offer highly skilled workers a competitive salary. (One startup co-founder, speaking to the Times, referred to stratospheric compensation as a “Facebook tax.”)
According to Dice’s most recent salary survey, the highest-paying tech skills (by average annual salary) that relate to cloud include:
- PaaS (Platform-as-a-Service): $140,894
- OpenStack (used with IaaS deployments): $138,579
- CloudStack: $138,095
- Chef: $136,850
In a highly competitive environment such as the Bay Area, however, salaries only go higher. Over the past year, other tech hubs such as Boston have undergone similar hiring binges, as companies large and small seek the cloud professionals who can help them build out next-generation services.
Interested in working for a healthcare IT startup? While the potential rewards are vast, so are the challenges.
“In healthcare, many great ideas falter because of technology—or more specifically, the difficulty in integrating to legacy systems,” John Sung Kim, founder of Five9 and DoctorBase, wrote in a new TechCrunch column. “Whether you’re selling to a small doctor’s office or a large hospital, healthcare organizations of any size are juggling multiple software systems, many of which do not speak to each other.”
Although many experts blame the woes of the healthcare IT industry on a lack of integration between healthcare databases and software platforms, there’s also the issue of regulations. Every app that interacts with patient data needs to follow the Health Insurance Portability and Accountability Act (HIPAA), which protects health data both in movement between databases and at rest. Hospitals and other entities that handle such data must ensure that they can maintain necessary privacy and security standards.
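HIPAA itself prescribes safeguards, not code, but the in-motion/at-rest distinction maps naturally onto configuration checks. As a rough illustration only (the field names below are hypothetical and not drawn from any real compliance library), here is a minimal Python sketch that audits a storage configuration for those protections:

```python
# Hypothetical sketch: audit a storage config for HIPAA-style technical
# safeguards -- encryption in transit and at rest, plus access logging.
# Field names are illustrative, not from any real compliance framework.

def audit_phi_config(config: dict) -> list[str]:
    """Return a list of gaps found in a (hypothetical) storage config."""
    gaps = []
    if not config.get("encrypt_at_rest", False):
        gaps.append("PHI is not encrypted at rest")
    if not config.get("tls_in_transit", False):
        gaps.append("PHI is not encrypted in transit (no TLS)")
    if not config.get("access_logging", False):
        gaps.append("access to PHI is not being logged")
    return gaps

# Example: a database that encrypts its disks but moves data in cleartext.
print(audit_phi_config({"encrypt_at_rest": True, "tls_in_transit": False}))
```

In practice a real audit would cover far more (key management, audit trails, business associate agreements), but the shape is the same: every path patient data takes needs an explicit, checkable control.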
According to Kim, startups in healthcare IT face entrenched competition from Electronic Health Record (EHR) vendors, whose executives have no desire to find their business “disrupted” by some tiny company with an innovative new platform.
Whether working for a tiny startup or a massive vendor, tech pros interested in the healthcare IT field need to familiarize themselves with not only the basic building blocks of any software platform—programming languages such as C# and Python, and management methods including Agile—but also the sort of creative thinking that allows people to solve thorny problems.
That being said, much of the software employed in healthcare is complex and unique to the industry, making it hard for tech pros to get a handle on it until they have a number of years of experience under their belts. Health Level 7 (HL7, a framework and set of standards for exchanging electronic health data) and DICOM (a standard for medical imaging) are just two of the technologies workers will need to get familiar with.
But given the importance of data protection, perhaps the most important skill to learn is everything HIPAA-related. Whatever the nature of your startup, there’s nothing more important than ensuring patient data is shielded.
Having recently attended VMworld, VMware’s annual user conference, I came away ruminating on VMware’s existential future. (Disclosure: VMware covered my travel and expenses to attend the event.) I spent much of the day-one keynote trading tweets with other members of the commentating classes. My tweets were, admittedly, somewhat snarky […]
The price of cloud services has dipped in recent years, thanks in large part to increased competition among the big providers. That competition might get even fiercer if Oracle has its way.
Oracle CEO Larry Ellison announced this week that his company will begin competing with Amazon.com on price, according to Reuters. That’s a pretty bold proposition, as Amazon Web Services (AWS) has continually snipped prices over the past few years, but Oracle has precious little choice: sales of the company’s legacy database software licenses are down, as more businesses turn to the cloud to support backend infrastructure.
AWS is widely viewed as the competitor to beat in the cloud-storage space, its services more widely used than similar offerings from Microsoft and Google. That hasn’t stopped other potential rivals, notably China’s Alibaba, from eyeing an increased market presence. If Oracle offers more commercial alternatives to AWS services, it could drive the price of cloud services—already dipping at a fairly rapid clip—even lower.
For those who need cloud services, that’s a good thing; pennies spent on compute capacity can just as easily go to salary and other expenses. A more diverse marketplace for cloud computing and storage can also serve a greater variety of customers. But for the cloud providers themselves, this steady erosion in margins may eventually force a few of them—especially smaller competitors—out of the business entirely.
“Cloud architect” isn’t exactly a new role within most companies, but that hasn’t slackened employer interest in hiring for the position.
According to Dice’s data, the number of job postings for cloud architects has more than doubled over the past two years. While it hasn’t been a steady climb—in mid-2014 and early 2015, the number of postings dipped before regaining momentum—the overall trend is clear.
It’s easy to see why there’s so much employer interest in hiring cloud architects. The past few years have seen more and more companies shifting from on-premises datacenters and servers to cloud-based services, generating a need for tech pros who can effectively deploy and maintain next-generation architecture.
Firms that have embraced a hybrid approach—combining on-premises servers and datacenters with the cloud—can face incredible complexity in ensuring that everything runs smoothly, which is where the cloud architect comes in.
Tech pros who want to get into cloud architecting need to know not only the latest cloud (and hybridized) platforms, but also the software that will make their jobs easier. For example, usage of configuration-management tools such as Puppet, Chef, Ansible, and SaltStack is on the rise, as pros look for ways to automate system configuration and software deployment.
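The common thread across Puppet, Chef, Ansible, and SaltStack is declarative, idempotent configuration: you describe the desired end state, and the tool applies only the changes needed to reach it. A toy Python sketch of that idea (this mimics the model, not any real tool's API):

```python
# Toy illustration of the desired-state model behind configuration-management
# tools like Puppet/Chef/Ansible: declare what files should contain, then
# apply only the differences. Not any real tool's API.
import os
import tempfile

def apply_desired_state(desired: dict[str, str], root: str) -> list[str]:
    """Bring files under `root` to the desired contents; return what changed."""
    changed = []
    for name, content in desired.items():
        path = os.path.join(root, name)
        current = None
        if os.path.exists(path):
            with open(path) as f:
                current = f.read()
        if current != content:          # only touch files that differ
            with open(path, "w") as f:
                f.write(content)
            changed.append(name)
    return changed

root = tempfile.mkdtemp()
state = {"motd": "welcome\n", "app.conf": "port=8080\n"}
print(apply_desired_state(state, root))  # first run: both files created
print(apply_desired_state(state, root))  # second run: idempotent, nothing to do
```

Running the same state twice changes nothing the second time; that idempotency is what makes these tools safe to run repeatedly across large fleets.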
Cloud architects also need good soft skills in order to talk and negotiate with stakeholders throughout an organization. Because cloud deployments touch so many points within your average company, an architect ends up interacting with quite a few people. There’s also a proactive element here: By working with others to tailor a company’s internal processes and culture to the cloud, the architect’s job becomes much more streamlined (although not necessarily easier, given the complexities involved in the role).
Small and medium-sized businesses currently have a lot on their plates. Some are concerned their infrastructures are not secure enough to combat cybersecurity attacks or not modern enough to sustain success. Cloud computing could be one way for SMBs in the United States to address these potential shortcomings.
CompTIA polled 500 U.S. SMBs and discovered that 42 percent cited security as one area in which they must improve. In addition to protecting critical data, SMBs are also eager to use that data to their advantage: the survey found 42 percent of participants want to improve how they accumulate and maintain information.
"Small businesses are not immune to attacks simply because their data sets are smaller," said CompTIA Technology Analysis Director Seth Robinson. "Cyberattacks stem from a variety of motivations. Attacks of the smallest firms, where defenses are often weak, occur just as often as attacks on larger companies."
For SMBs to safeguard data and use it in an advantageous fashion, companies will require some upgrades to their core systems. Nearly 40 percent of SMBs admitted to CompTIA that their firms must modernize their IT environments. Before the advent of cloud computing, accomplishing this feat may have been far more difficult. CompTIA explained, however, that businesses can procure affordable cloud solutions to achieve functionality even large enterprises enjoy.
Cloud offers world of benefits to SMBs
Cloud computing is affordable due to its flexible pricing model. SMBs that have purchased on-site hardware may have had to allocate upfront capital investments for the systems, paying flat rates for solutions that could never be used to full capacity. The cloud, on the other hand, is scalable, so firms no longer have to worry about wasting resources during low-traffic periods or not having enough computing power to satisfy peak workloads. All of these updates occur on the server side of the cloud hosting provider.
In addition to offering subscription packages that can be monthly or longer, SMBs can procure cloud applications through other means. Some vendors include pay-as-you-go models, so clients are only charged for the resources they consume. This service model is ideal for companies that experience fluctuating traffic periods, such as retailers, which are flooded with activity during peak shopping seasons and special promotions.
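The economics in the two paragraphs above reduce to simple arithmetic. As a hedged sketch (all rates and usage figures below are invented for illustration), compare a flat-rate model provisioned for peak demand against pay-as-you-go billing for a retailer-like workload:

```python
# Illustrative cost comparison: flat-rate capacity sized for peak load
# vs. pay-as-you-go billing. All prices and usage are made-up numbers.

def flat_rate_cost(monthly_usage: list[int], price: float) -> float:
    """Provision for the peak month; pay for that capacity every month."""
    return max(monthly_usage) * price * len(monthly_usage)

def pay_as_you_go_cost(monthly_usage: list[int], price: float) -> float:
    """Pay only for what each month actually consumed."""
    return sum(u * price for u in monthly_usage)

# A retailer-like profile: quiet most of the year, spiking in Nov/Dec.
usage = [10, 10, 10, 10, 10, 10, 10, 10, 12, 15, 40, 50]  # capacity units
price = 100.0  # dollars per unit per month (illustrative)

print(flat_rate_cost(usage, price))      # 60000.0
print(pay_as_you_go_cost(usage, price))  # 19700.0
```

Under these made-up numbers, the flat-rate buyer pays for peak capacity all year and spends roughly three times as much; the gap is exactly the idle capacity the article describes.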
Companies interested in flexible environments can also extend employee productivity with cloud computing. Staff members, regardless of location, can use Internet-connected devices to access cloud suites and view the data needed to perform their jobs.
SMBs are a driving force behind cloud market
As more SMBs realize their IT infrastructures depend heavily on solutions such as cloud computing, the market for hosted services will undoubtedly increase at a healthy pace. A TechNavio report suggested the global SMB cloud industry will achieve a compound annual growth rate of 20.2 percent between 2014 and 2019.
The research firm noted SMBs are implementing cloud environments to reduce operating costs and capital investments.
Despite all of the advantages associated with cloud computing, SMBs can still experience hiccups with their deployments. The sheer number of vendors available, as well as the various service models, makes every decision critical to achieving an efficient implementation.
Technavio indicated SMBs do not always need the most robust cloud suites. However, they still require personalized suites that account for their unique brands and respective markets.
Hybrid clouds in particular may support SMBs' specialized demands. Faisal Ghaus, vice president of Technavio, noted vendors are already creating hybrid environments that deliver control over critical corporate assets.
SMBs eager to achieve successful cloud deployments should seek third-party vendors that take advantage of cloud-readiness tools. Service providers with these solutions can determine how customers' critical infrastructure – networks and servers – will perform in cloud environments prior to launching suites, so they can help their clients make the most informed decisions about the products available.
The post Cloud computing may help SMBs transition to modern infrastructure appeared first on RISC Networks.
This is a very interesting video from Miha Kralj, Principal Consultant, AWS Professional Services, presented at AWS re:Invent 2014 this past November.
What is most interesting is Miha’s take on CMDB applications. Miha stated: “CMDBs lie. If you think you have any good or meaningful information in your CMDB, good luck. We didn’t find anything of use and don’t trust anything that is in the CMDB. Agents are not really being taken care of in CMDBs. Most enterprises already have a dynamic virtualized infrastructure, and CMDBs are just too slow to catch up. CMDBs are a snapshot in time, and that snapshot can’t be taken just once a week. Enterprises say, ‘That snapshot is two weeks old.’ Do you know how much I can spin up and spin down on Amazon in two weeks? That’s why CMDBs won’t work to prepare for migrations.”
We get the question from many analysts, systems integrators and customers as to why they need our tool instead of their existing CMDB. Miha’s presentation gives a great explanation of why a tool like our CloudScape is so valuable during the discovery process, and why using it on an ongoing basis is critical to cloud projects.
Software-defined networking has advanced from a technology used mainly in data centers to one with broader applications. Forbes contributor Swift Liu recently explained that SDN has helped organizations update their networks and configure virtual machines so they are not overwhelmed by ever-changing data center advancements. Prior to the age of SDN, completing such tasks required stitching together a blend of available technologies, including the Internet of Things, social media, mobile devices, cloud computing and big data.
Despite making it easier for businesses to address their data center needs, SDN still has some issues that must be ironed out before the solution can achieve its full potential.
According to Liu, companies have to upgrade their hardware to accommodate SDN. Traditional traffic flow is handled by tables that support device-forwarding functionality: the Forwarding Information Base (FIB) table supports up to 20,000 traffic flows, while the Media Access Control (MAC) table handles hundreds of thousands.
Unfortunately, Liu indicated FIB and MAC do not support OpenFlow service networks, which demand larger tables for extensive traffic flows. This is significant, given that OpenFlow is a standard SDN communications protocol. Organizations that want to overcome the shortcomings of FIB and MAC can implement Ethernet Network Processor chips that facilitate traffic flows between the three standards.
Should organizations address the challenges presented by SDN, they are in position to benefit significantly from the solution. Liu wrote that industries are able to customize SDN protocols because they were created with open standards, allowing companies to have control over how quality-of-service policies, domains and rights are handled to support their unique demands.
In terms of performance, SDN is a boon to bandwidth use. Liu reported Google managed to improve its average Wide Area Network utilization from 40 percent to more than 90 percent by integrating SDN.
Looking ahead, the SDN market is poised for rapid growth. A Mind Commerce report indicated the industry will expand at a compound annual growth rate of 53 percent between 2015 and 2020, surpassing $11 billion by the end of the forecast period. The report explained SDN both consolidates and simplifies network functionality by managing networks and interfaces through unified systems, as opposed to disparate solutions.
Perform thorough checklist
Businesses expecting to adopt SDN sometime soon may need to configure their IT infrastructures to take full advantage of the networking tool's capabilities. Managed service providers with access to RISC Networks IT HealthCheck can minimize migration challenges while optimizing performance for their clients by aligning tech solutions with corporate needs.
The Software-as-a-Service IT operations analytics system performs more than 10 million network checks and forms simple lists that MSPs can view to see recommendations for customers' IT infrastructure problems. This level of insight also applies to documentation and reports that enable vendors to show customers analyses of their architectures. Such capabilities can form the foundation of a stronger relationship between the two parties.
With SDN poised for rapid growth, service providers prepared to assist clients with their migrations today will have a jump on the competition in this expanding marketplace.
The post Software-defined networking must address challenges to reach potential appeared first on RISC Networks.
IT professionals seem to feel more comfortable about their economic security these days, and a lot of them are looking around for new jobs, increasing competition among employers for tech pros.
According to a new survey, conducted online by Harris Poll on behalf of Randstad Technology, some 41 percent of U.S. IT workers said they’re likely to seek new employment opportunities over the next 12 months, while 52 percent said they were confident in their ability to find a new job.
At the same time, however, the poll found that 57.4 percent of IT employees were confident in the health of the overall economy, a slight dip from a record high noted in the third quarter of 2014. Forty-one percent of IT workers thought the economy was getting stronger, and 32 percent thought there were more jobs available. (Some 44 percent thought there were fewer jobs.)
Bob Dickey, group president of technology and engineering at Randstad, suggested that employee confidence stems from their skills and what part of the country they live in. “Overall, IT employee confidence remains pretty high,” he said. “For that reason, a lot more needs to go into attracting the right IT talent.”
Dickey noted that the cost of living in the region, training opportunities, and the culture of the organization all play a major role in attracting IT talent. But he also conceded that consolidation of data centers in the age of the cloud, along with increased IT automation, could affect the demand for certain types of IT jobs, as well as their location.
In addition to datacenter consolidation, many lower-level IT administrative functions are being automated. The end result is more demand for IT people with higher-level skill sets. “We’re constantly getting poached,” said Steve Hellmuth, executive vice president of operations and technology at NBA Entertainment in New York. “Investment banks are where a lot of our people wind up going, so we’re always recruiting at the college level.”
Factoring in that rate of turnover is now part of the organization’s basic business plan, Hellmuth added.
Like it or not, the days when a soft economy gave employers leverage over IT staff are pretty much over. Some organizations may be trying to lower their costs by moving IT jobs to different locations. But for every IT professional who might be attracted by the lower cost of living in a particular region, there will always be another that can’t get enough of the bright lights of a major city.
Given that need for talent, tech companies are focusing their recruiting efforts in areas outside of Silicon Valley and Silicon Alley. Louisiana announced that, as part of a 10-year deal involving IBM and CenturyLink, it will spend $4.5 million to expand computer science programs at the University of Louisiana, Louisiana Tech and Grambling State. As part of that arrangement, IBM committed to creating 400 jobs in Monroe, La., where it will open a service center to develop security, data analytics and mobile applications. Meanwhile, CenturyLink will transfer 350 employees to IBM, where they will become full-time employees employed in the new Monroe facility.
With IT professionals fairly confident in their ability to find new jobs, expect more tech companies to make similar moves in order to increase their pools of IT talent.
Cloud computing looks to be making the transition from a development-and-test option to a primary consideration for many businesses, but many are finding the pricing models and options overwhelming and complicated to figure out.
In the fall of 2014, 451 Research released a report at its Hosting & Cloud Transformation Summit that included information about the top cloud computing-related pain points, in which pricing/budget/cost rated as the number-two pain point, behind security.
How does an organization determine the best cost for its environment? What should it buy? Which cloud providers should it consider? Even though Google, AWS and Microsoft appear to be waging a pricing battle for the title of lowest-priced public cloud offering, cloud computing decisions are no longer based on price alone. IT leaders are weighing a variety of criteria in their decision-making process, and are looking for processes, tools and ideas to simplify this challenge.
Tale of Two Approaches
Our data shows that most organizations could save an average of 66% on their IaaS infrastructure simply by gathering performance data from their infrastructure and making decisions based on application and workload usage, versus using an inventory approach (buying in the cloud exactly what you have in the data center). Then there is reserved capacity vs. on-demand capacity in AWS, which can cause some IT buyers a little anxiety.
We recommend that anyone moving to AWS for the first time, or deploying a new application, purchase On-Demand instances to see how the application runs in AWS; then, if you are happy with the results, you can consider buying Reserved Instances.
Something to consider when buying reserved versus on-demand: you are committing to a one-year or three-year term when buying Reserved Instances, which can deliver 30-50+% savings over on-demand pricing, but you are stuck with the hardware characteristics of that instance. What does this mean? When you buy reserved capacity, you are buying the specific technology on which you will run your application for the next one or three years, with no upgrade options. In some cases, organizations have purchased Reserved Instances of a specific hardware type, only to see AWS lower the price on newer hardware technology a month later, leaving them stuck at a higher price on older technology.
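The reserved-versus-on-demand trade-off above ultimately comes down to expected utilization over the commitment term. A back-of-the-envelope Python sketch (the rates are illustrative, not actual AWS prices):

```python
# Back-of-the-envelope: at what utilization does a reserved commitment
# beat on-demand? Rates below are illustrative, not real AWS prices.

def break_even_utilization(on_demand_hourly: float, reserved_discount: float) -> float:
    """Fraction of hours you must actually run the instance for a
    reserved commitment (paid 24/7 for the term) to cost less than
    paying on-demand only for the hours used."""
    reserved_hourly_equivalent = on_demand_hourly * (1 - reserved_discount)
    return reserved_hourly_equivalent / on_demand_hourly  # = 1 - discount

# With a 40% reserved discount, reserving wins only if the instance
# would otherwise run on-demand more than ~60% of the time.
print(round(break_even_utilization(0.10, 0.40), 4))
```

Idle, bursty, or short-lived workloads sitting below that utilization threshold are usually cheaper on-demand, which is one more reason to measure actual usage before committing.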
If your use case is suitable for that option, and it is the right fit for your business, then go with it. But if your application is dynamic in nature and you need better performance on newer hardware technology throughout the year or over the next three years, then on-demand pricing may make the most sense for your organization.
Most of these issues relate specifically to AWS, but Microsoft has developed its model to align with AWS and has pledged to stay comparable on pricing. Do not assume that AWS or Microsoft will be the best-priced option for your business; we see situations where AWS and Microsoft are not necessarily the best price for larger instances. In the cloud we are not paying for what we use, we are paying for what we reserve, and in some cases we are paying too much.
Network I/O and Percentage of Daily Cost
We looked at just network I/O across 148 of our clients, and analyzed it as a percentage of daily cost. If you are going to spend $10 a day on a cloud server, what percentage of that cost, based on your network usage, is made up of network I/O?
Our analysis showed that for most organizations, network I/O made up less than 25% of their cloud cost. In some cases, however, the network I/O of a particular server can make up 50-75% of the total daily cost of running that server in the cloud. And in 2% of the cases we evaluated, organizations had servers/workloads where network I/O would make up greater than 75% of the total cost.
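The analysis above is easy to reproduce for your own workloads. A hedged sketch (prices and usage figures are invented for illustration) that computes network I/O's share of a server's daily cloud cost and flags the outliers:

```python
# Illustrative sketch: what fraction of a server's daily cloud bill is
# network I/O? Prices and usage figures below are invented.

def network_share(compute_cost_per_day: float,
                  egress_gb_per_day: float,
                  price_per_gb: float) -> float:
    """Network I/O cost as a fraction of total daily cost."""
    network_cost = egress_gb_per_day * price_per_gb
    return network_cost / (compute_cost_per_day + network_cost)

PRICE_PER_GB = 0.09  # illustrative egress rate, $/GB

servers = {
    "web-01":     (8.00,   20),   # (compute $/day, egress GB/day)
    "db-replica": (2.00,  700),   # chatty replication traffic
}

for name, (compute, egress) in servers.items():
    share = network_share(compute, egress, PRICE_PER_GB)
    flag = "  <-- network-dominated" if share > 0.75 else ""
    print(f"{name}: {share:.0%} of daily cost is network I/O{flag}")
```

In this invented example the replication-heavy database lands in the network-dominated bucket described above, exactly the kind of workload where cloud cost can become prohibitive.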
If you have large database servers with data replication and high network communication to those applications, the cost of running those applications in the cloud may be prohibitive. We believe that before you can truly determine if the cloud is the right place for your application, you should complete a thorough readiness assessment of each application and their dependent workloads. Know before you go!
Keys to Success with Cloud Pricing
How can you be prepared when moving to public Infrastructure-as-a-Service? You need to make sure you do not overbuy from a capacity standpoint.
- General usage information for CPU, memory and disk I/O – You need a clear picture of the workloads you plan to move to the cloud, including CPU, memory and disk I/O, to ensure you do not overpay for capacity.
- Current network usage and impact on performance – Including network usage when planning cloud cost models is critical. In some cases, a server's network I/O may make up 75% of the total cost of moving to the cloud. It would be great to know this in advance.
- Application dependency mapping – Prior to pricing out your cloud implementation, it is important to understand all of the associated dependent workloads that may have to move with your application. It is much easier to price an application stack than individual workloads.
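The dependency-mapping point above can be sketched as a simple graph traversal: given which workloads talk to which, find everything that has to move with an application. (The dependency data here is invented for illustration.)

```python
# Sketch: application dependency mapping as graph traversal. Given
# observed workload-to-workload communication, find the full stack
# that must move (and be priced) together. Dependency data is invented.
from collections import deque

def full_stack(app: str, depends_on: dict[str, list[str]]) -> set[str]:
    """All workloads reachable from `app` via its dependencies."""
    seen, queue = {app}, deque([app])
    while queue:
        node = queue.popleft()
        for dep in depends_on.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

depends_on = {
    "crm-web":  ["crm-app"],
    "crm-app":  ["crm-db", "auth-svc"],
    "auth-svc": ["ldap"],
}
print(sorted(full_stack("crm-web", depends_on)))
```

Pricing "crm-web" alone would miss four other workloads; traversing the graph surfaces the whole stack, which is why pricing the stack beats pricing individual servers.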
The great thing about IT today is that organizations have plenty of choices of where and how to run their applications. The problem is that with so many choices, selecting the best option for your business becomes harder.
We recommend that prior to selecting a cloud provider, you have a clear picture of your entire environment, including full application dependency visualization for each application and a full communication-density map of all of your host and client connections. Don't make the mistake that plenty of organizations have made before you; avoiding it could save you hundreds of thousands of dollars each year in unnecessary cloud costs.
Cloud computing has taken hold throughout the IT industry, especially in critical market verticals. A new IDC report indicated the technology accounted for one-third of Ethernet, disk storage and server infrastructure budgets during the third quarter of 2014. Overall year-over-year cloud revenue for the period improved 16 percent to $6.5 billion, with public clouds accounting for almost half of this figure, achieving 18 percent year-over-year increases.
Richard Villars, vice president of data center and cloud research at IDC, explained public and private cloud computing make up a part of the 3rd Platform, which also includes mobile devices, social networking, and big data and analytics. Cloud solutions are "digital content depots" and "compute factories" that will usher in this next IT phase.
"Whether internally owned or 'rented' from a service provider, cloud environments are strategic assets that organizations of all types must rely upon to quickly introduce new services of unprecedented scale, speed and scope. Their effective use will garner first-mover advantage to any organization in a hyper-competitive market," Villars said.
Cloud computing's impact could be even greater
The number of organizations using or planning to launch cloud services is already healthy. Should more businesses overcome their data concerns, the sky might be the limit for the technology's adoption rate.
A joint NCC Group and Vanson Bourne study found 40 percent of companies have not yet implemented cloud computing because of data-loss fears. These worries center primarily on cloud outages: roughly 75 percent of respondents reported it would take more than one week to put contingency plans into effect following such an outage, and another 5 percent said their recovery would require between two and three months.
The interesting part of the survey is that cloud computing can actually enhance disaster recovery if the proper service is available. Cloud environments enable end-users to access corporate content through PCs, tablets and smartphones, so employees can get back to work following a disruption. Firms may be forced to close their offices for a period of time, so having a somewhat productive workforce in place goes a long way toward generating revenue during difficult periods.
Daniel Liptrott, managing director of NCC Group, noted 2e2, an IT services provider, is the perfect example of how an organization can struggle without the necessary disaster recovery protocols.
"In a sector where time equates to large sums of money, organizations should ensure that they have comprehensive and effective disaster recovery plans in place to avoid costly delays if something goes wrong," Liptrott added.
Support your IT infrastructure, disaster recovery with the right cloud model
Businesses that research potential cloud offerings before any implementation can determine whether certain services offer the necessary data protection. The key to achieving this goal is through testing different cloud environments prior to launch.
Cloud-readiness tools enable companies to compare services and see how their unique IT environments will interact with each solution. First-time adopters that want to minimize complications can do so from the beginning of the process with RISC Networks CloudScape.
The post Cloud computing responsible for fair share of IT infrastructure market appeared first on RISC Networks.