Monday 12 October 2015

(Image: JurgaR/iStockphoto)

Amazon Re:Invent: AWS Takes The Next Step

AWS announced a bunch of new services at Re:Invent, but more noticeable was the emergence of a more assertive Amazon.

Andy Jassy, the senior vice president most responsible for the business plan behind AWS's infrastructure technology, wasn't just talking up Amazon's new services as usual. He was also casually, and in passing, swatting at the competition.

It was a more confident, competitive Amazon Web Services (AWS) at this year's Re:Invent in Las Vegas, which wrapped up Friday, Oct. 9. That confidence showed up in a number of ways, from the jaunty way Amazon executives conducted themselves on stage, to the DJ spinning tunes in the registration lobby, to the "swag" counter dispensing AWS hoodies to attendees.
As AWS wades into business intelligence, a field with many well-established players including Cognos, Information Builders, and MicroStrategy, Jassy said AWS had designed the user interface for its BI entry, QuickSight, to be easy for non-technical people to use. He then showed a user interface from "an old-guard BI vendor" that was "hard to use" and "rather janky." He didn't say whose it was, but dropped the hint, "the name rhymes with hog nose."

Building on Success

Earlier this year, Gartner cloud analyst Lydia Leong doubled her estimate of the margin by which AWS leads its competitors. She said AWS has 10 times the compute power of its next 14 largest competitors combined. Still, it's been reported that Microsoft and Google are growing at a faster rate, percentage-wise, than Amazon.
Given that Amazon is so much larger to begin with, it's hard for its growth rate to reach the lofty percentages of smaller competitors. So by what percentage rate is it growing? Instance use is up 95%; storage use, 120%; and database services, including caching and data warehouse, 127%, said Jassy. Meanwhile, AWS revenues, reported at $4.64 billion for 2014, are growing at 81%, Jassy said.
In case you didn't get the full message, "Amazon by far is the fastest growing, multi-billion dollar technology company in the US," he said.
We tend to think of the cloud as offering different sizes of virtual servers, like Starbucks coffee cup sizes. But the lineup actually varies in both size and operational characteristics, as AWS keeps tuning it to appeal to new types of customers in the market.
At Re:Invent, AWS introduced the T2.nano, a virtual server that's smaller than the T2.small or T2.micro. The T2.small consists of one virtual CPU (equivalent to a 2007 Xeon processor) and 2GB of memory for 2.6 cents an hour; the even smaller T2.micro is one virtual CPU and 1GB of memory for 1.3 cents per hour. The T2.nano still has the single virtual CPU but only 512MB of memory. All of the T2s use a "burstable" CPU, meant to run at a steady state on one virtual CPU, a fraction of the total physical core, for a low-traffic website or little-used database. But AWS allows users to amass credits during these periods of low use, then burst into use of the full physical core for short periods when needed. Each credit is equal to a minute of use of the full CPU.
The T2 family can amass up to 24 hours of credits, a big cushion for future traffic, and an attractive operational characteristic for many owners of small websites and online business applications.
In July 2014, AWS chief evangelist Jeff Barr published a blog post explaining credits and the T2's burstable operational characteristics.
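As a rough illustration of how that credit mechanism works, here is a minimal Python sketch. The accrual rate below is a hypothetical number, not AWS's published figure for any instance size; only the 24-hour cap and the one-credit-per-minute-of-full-CPU rule come from the description above.

```python
# Simplified model of T2-style CPU credit accounting (illustrative only, not AWS's exact rules).
# Assumptions: credits accrue at a fixed hourly rate while the instance sits at its baseline,
# one credit buys one minute of full-CPU burst, and the balance caps at 24 hours' worth.

CREDITS_PER_HOUR = 6            # hypothetical accrual rate at baseline load
MAX_CREDITS = 24 * 60           # cap: 24 hours of full-CPU minutes, per the article

def burst_budget(hours_at_baseline: int, burst_minutes_wanted: int):
    """Return (burst minutes covered, credits left) after a burst request."""
    balance = min(hours_at_baseline * CREDITS_PER_HOUR, MAX_CREDITS)
    covered = min(balance, burst_minutes_wanted)
    return covered, balance - covered

if __name__ == "__main__":
    covered, remaining = burst_budget(hours_at_baseline=48, burst_minutes_wanted=200)
    print(f"Full-CPU burst covered: {covered} minutes; credits remaining: {remaining}")
```

In other words, a site that idles most of the day can bank enough credits to absorb a short traffic spike at full CPU without moving up to a larger instance.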
So if Amazon is willing to go small, why is it playing so hard at the other end of the spectrum as well?
CTO Werner Vogels introduced the X1, a supersize-me virtual server with up to four Haswell-generation Intel processors and 2TB of memory. That's 2,000GB of RAM, more than some data warehouses hold. Previously, the largest amount of memory Amazon offered with any instance was 244GB. Who needs 2TB in a virtual server? Someone, apparently. The new virtual server also takes away Microsoft's bragging rights to the largest virtual machines available in the public cloud, a claim Scott Guthrie made Sept. 29 during AzureCon, a virtual event. He cited Azure's G series, with the high-end G5 topping out at 448GB of RAM.
Probably the most overlooked item in the blizzard of announcements Oct. 7 was Jassy's quick reference to Amazon Inspector. It's well known among cloud users that Amazon runs incoming workloads through an automated inspection, looking for malware or suspicious behaviors in the code. Inspector may not be the only reason, but so far there have been no reports of malware getting inside EC2 and then running amok. Now Amazon is making Inspector available to customers. AWS will place an agent on the resources that a customer tags as part of one application. The agent will watch network traffic, file system use, and active processes, looking for security or compliance issues.
This operational data is then compared against a set of security rules and best practices. Findings are grouped by severity and reported back to the customer. Amazon is making the service available on a preview basis. Letting customers run the service themselves, when and for as long as they choose, may both expose hidden vulnerabilities that have crept into a workload and save the security staff time on routine checks.
More information is to come, Barr said in a blog post.
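For developers curious what driving the service programmatically might look like once it leaves preview, here is a minimal sketch using the boto3 Inspector client as it later shipped. The tag values, names, rules-package ARN, and one-hour duration are placeholder assumptions, not values from Amazon's announcement.

```python
# Sketch: tag-based Amazon Inspector assessment via boto3 (placeholder names and ARNs).
import boto3

inspector = boto3.client("inspector", region_name="us-east-1")

# 1. Group the EC2 resources the customer tagged as one application.
group = inspector.create_resource_group(
    resourceGroupTags=[{"key": "app", "value": "my-web-app"}]  # hypothetical tag
)

# 2. Register that group as an assessment target.
target = inspector.create_assessment_target(
    assessmentTargetName="my-web-app-target",
    resourceGroupArn=group["resourceGroupArn"],
)

# 3. Create a template pointing at a rules package (the ARN here is a placeholder).
template = inspector.create_assessment_template(
    assessmentTargetArn=target["assessmentTargetArn"],
    assessmentTemplateName="my-web-app-baseline",
    durationInSeconds=3600,  # one-hour run, an arbitrary choice
    rulesPackageArns=["arn:aws:inspector:us-east-1:123456789012:rulespackage/0-EXAMPLE"],
)

# 4. Start the run; the agent collects network, file system, and process data,
#    and findings come back grouped by severity.
run = inspector.start_assessment_run(
    assessmentRunName="my-web-app-run-1",
    assessmentTemplateArn=template["assessmentTemplateArn"],
)

findings = inspector.list_findings(assessmentRunArns=[run["assessmentRunArn"]])
print(findings["findingArns"])
```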
[Want to learn more about Re:Invent? See Amazon Securing IoT Data With Certificates.]
Finally, AWS spent a lot of time talking about database services -- for good reason. Amazon can keep expanding its database-as-a-service offerings. It already offers Oracle and SQL Server as proprietary systems, MySQL and PostgreSQL as open source systems, and its own MySQL-compatible Aurora. At Re:Invent, it announced the addition of MariaDB, a community-developed fork of MySQL, as a third open source option. It also offers its Redshift data warehouse, DynamoDB NoSQL system, and ElastiCache caching service.

Taking On New Competition

These services have proven so attractive to its customers that AWS senses the disruptive role it might play in the traditional database market. It launched its AWS Database Migration Service as a preview technology last week.
Jassy said Amazon is already a $1 billion database company based on its existing services. AWS Database Migration Service is likely to expand that figure because it can help move an on-premises system to the same system in the cloud, or help migrate away from a proprietary system to a different system in the cloud. The latter gives customers a chance to test Jassy's assertion that AWS Aurora delivers comparable performance at a tenth of the cost. And don't forget open source PostgreSQL's ambition to be an Oracle replacement by presenting itself as a compatible, look-alike system to Oracle applications.
It's been notoriously hard to get a database customer to move away from an established vendor, partly because most customers have little confidence their data can be migrated smoothly into the schemas of the would-be new provider's system. Amazon is trying to address that with a database move costing roughly $3 per terabyte, using the Migration Service along with its free Schema Conversion Tool.
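As a rough sketch of what such a move could look like programmatically, here is a minimal example using the boto3 DMS client as it later shipped (the service was still in preview at the time); the hostnames, credentials, identifiers, and engine choices below are placeholders, not values from the announcement.

```python
# Sketch: Oracle-to-Aurora (MySQL-compatible) migration via AWS Database Migration Service.
# All endpoints, credentials, and identifiers are placeholders.
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# A small replication instance does the copying between source and target.
repl = dms.create_replication_instance(
    ReplicationInstanceIdentifier="demo-migration",
    ReplicationInstanceClass="dms.t2.medium",
    AllocatedStorage=50,
)["ReplicationInstance"]

# Wait until the replication instance is available before creating a task.
dms.get_waiter("replication_instance_available").wait(
    Filters=[{"Name": "replication-instance-id", "Values": ["demo-migration"]}]
)

source = dms.create_endpoint(
    EndpointIdentifier="onprem-oracle", EndpointType="source", EngineName="oracle",
    ServerName="oracle.example.internal", Port=1521,
    Username="migrator", Password="example-password", DatabaseName="ORCL",
)["Endpoint"]

target = dms.create_endpoint(
    EndpointIdentifier="aurora-target", EndpointType="target", EngineName="aurora",
    ServerName="demo-cluster.cluster-xyz.us-east-1.rds.amazonaws.com", Port=3306,
    Username="admin", Password="example-password",
)["Endpoint"]

# Copy every table in every schema (a catch-all selection rule).
table_mappings = {"rules": [{
    "rule-type": "selection", "rule-id": "1", "rule-name": "all-tables",
    "object-locator": {"schema-name": "%", "table-name": "%"},
    "rule-action": "include",
}]}

task = dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora",
    SourceEndpointArn=source["EndpointArn"],
    TargetEndpointArn=target["EndpointArn"],
    ReplicationInstanceArn=repl["ReplicationInstanceArn"],
    MigrationType="full-load",
    TableMappings=json.dumps(table_mappings),
)["ReplicationTask"]

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```

The Schema Conversion Tool handles the trickier part of translating one vendor's schema and stored code into another's; the migration task above only moves the data once that schema exists on the target.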
Customers who like the idea of conversion should remember that it's free to ship data into Amazon; fees apply when they want to take it out. Amazon is making a serious bid to build a database business based on hours of usage and nothing else. That means Amazon will maintain and upgrade its database systems but won't charge an annual maintenance fee -- 20%-22% of the database license cost -- as proprietary vendors do.
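To put that billing difference in rough numbers, here is a back-of-the-envelope comparison; the license price and hourly rate are entirely hypothetical figures chosen for illustration, and only the 20%-22% maintenance range comes from the paragraph above.

```python
# Back-of-the-envelope: perpetual license plus maintenance vs. pay-by-the-hour database billing.
# The license cost and hourly rate are hypothetical; only the maintenance rate echoes the article.

years = 3
license_cost = 100_000              # hypothetical up-front license
maintenance_rate = 0.21             # 20%-22% of license per year, per the article
licensed_total = license_cost + license_cost * maintenance_rate * years

hourly_rate = 1.50                  # hypothetical all-in hourly charge for a managed instance
hourly_total = hourly_rate * 24 * 365 * years

print(f"License + maintenance over {years} years: ${licensed_total:,.0f}")
print(f"Hourly billing over {years} years:        ${hourly_total:,.0f}")
```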
This shift in the billing model is the agent of disruptive change for database customers, and AWS appears to think that, with the introduction of its Aurora system over the last year, it has spotted a crack in the dam that has always held back any flood of such migrations.