Tuesday, November 17, 2009

Illegal Immigration: There's an App for That

Posted via web from chetty's posterous

Sunday, November 15, 2009

Cisco Fans Are Pretty Passionate About Their Brand

Posted via web from chetty's posterous

Monday, November 2, 2009

Inside one of the world's largest data centers

CHICAGO--On the outside, Microsoft's massive new data center resembles the other buildings in the industrial area.

Even the inside of the building doesn't look like that much. The ground floor looks like a large indoor parking lot filled with a few parked trailers.

It's what's inside those trailers, though, that is the key to Microsoft's cloud-computing efforts. Each of the shipping containers in the Chicago data center houses anywhere from 1,800 to 2,500 servers, each of which can be serving up e-mail, managing instant messages, or running applications for Microsoft's soon-to-be-launched cloud-based operating system--Windows Azure.

Upstairs, Microsoft has four traditional raised floor server rooms, each roughly 12,000 square feet and consuming, on average, 3 megawatts of power. It's all part of a data center that will eventually occupy 700,000 square feet, making it one of the world's largest.

"I think, I'm not 100 percent sure, but I think this could be the largest data center in the world," said Arne Josefsberg, general manager of infrastructure services for Microsoft's data center operations.

Even with only half the site ready for computers, the center has 30 megawatts of capacity--many times that found in a typical facility.

On a hot day, Microsoft would rely on 7.5 miles' worth of chilled-water piping to keep things cool, but general manager Kevin Timmons smiled as he walked in for the facility's grand opening in late September. It was around 55 degrees outside.

"When I stepped out, I said 'what good data center weather'," he said. "I knew the chillers were off."

Although Microsoft is open about many of the details of its data centers, there are others it likes to keep quiet, including the site's exact location, the names of its employees, and even which brand of servers fill its racks and containers.

The software maker also won't say exactly which services are running in each facility, but the many Bing posters inside the upstairs server rooms in Chicago offer a pretty good indication of what is going on there.

Microsoft originally intended to open the Chicago facility last year, but the company slowed its data center expansion somewhat amid the weaker economy and an array of companywide cutbacks. Instead, the facility had its grand opening in late September.

Of Sidekick--and Azure
Within a month, though, Microsoft's data centers were attracting attention for a wholly different reason. A massive server failure at an older facility--one that Microsoft acquired as part of its Danger acquisition--left thousands of T-Mobile Sidekick owners without access to their data as part of an outage that is now stretching into its second month.

Although Sidekick uses an entirely different architecture, the failure was a tangible example of the biggest fear about cloud computing--that users will wake up one day to find their data gone.

Microsoft is quick to highlight the differences between the Sidekick setup and what Microsoft is building in Chicago and elsewhere. "We write multiple replicas of user data to multiple devices so that the data is available in a situation where a single or multiple physical nodes may fail," Windows Azure general manager Doug Hauger said in a statement after the Sidekick failure.
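
That replication strategy is easy to illustrate in miniature. The sketch below is a conceptual toy, not Azure's actual storage code: it simply writes each blob to several nodes and only counts the write as durable once a quorum of them has acknowledged it.

```python
# Conceptual sketch only -- not Microsoft's implementation.
# Write a blob to several storage nodes and require a quorum of acks,
# so the data survives the failure of a single node.

class StorageNode:
    def __init__(self, name):
        self.name = name
        self.blobs = {}

    def write(self, key, data):
        self.blobs[key] = data
        return True  # acknowledge the write

def replicated_write(nodes, key, data, quorum):
    """Return True only if at least `quorum` nodes acknowledged the write."""
    acks = 0
    for node in nodes:
        try:
            if node.write(key, data):
                acks += 1
        except Exception:
            continue  # a failed node just costs one ack
    return acks >= quorum

nodes = [StorageNode(f"node-{i}") for i in range(3)]
ok = replicated_write(nodes, "user:42:contacts", b"...", quorum=2)
print("write durable:", ok)
```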

As for Azure, Microsoft is expected to talk about its commercial launch at this month's Professional Developers Conference in Los Angeles, including offering more details on how the system will provide its redundancy. Microsoft has already announced some new Azure details, noting last week that it will begin charging for Azure as of February 1.

Microsoft is still trying to figure out just how much capacity, in Chicago and elsewhere, it needs to assign to Azure.

"Azure is incredibly hard to forecast," said Josefsberg. "We're probably erring toward having a little more capacity than we need in the short term."

What is clear is that, over time, Microsoft will need even more capacity. That's what has Josefsberg returning to a custom "heat map" that identifies the best places to build data centers based on the cheapness, greenness, and availability of power, as well as the political climate, weather, and networking capacity. Choosing the right spot is critical, Microsoft executives say, noting that 70 percent of a data center's economics are determined before a company ever breaks ground.
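
To get a feel for how such a "heat map" might rank sites, here is a hypothetical weighted-scoring sketch. The factor names, weights, and scores are invented for illustration; Microsoft's actual model is not public.

```python
# Hypothetical illustration of a weighted "heat map" for siting a data center.
# Factor names, weights, and candidate scores are invented for illustration.

WEIGHTS = {
    "power_cost": 0.30,          # cheap electricity dominates operating cost
    "power_green": 0.15,         # share of low-carbon generation
    "network": 0.20,             # fiber capacity and latency to users
    "climate": 0.15,             # cool weather means fewer chiller hours
    "political": 0.10,           # tax incentives, permitting, stability
    "land_construction": 0.10,   # land and build cost
}

def site_score(scores):
    """Weighted sum of normalized (0-1) factor scores for one candidate site."""
    return sum(WEIGHTS[factor] * value for factor, value in scores.items())

candidates = {
    "Site A": {"power_cost": 0.9, "power_green": 0.4, "network": 0.7,
               "climate": 0.8, "political": 0.6, "land_construction": 0.7},
    "Site B": {"power_cost": 0.6, "power_green": 0.9, "network": 0.9,
               "climate": 0.5, "political": 0.8, "land_construction": 0.5},
}

for name, scores in sorted(candidates.items(), key=lambda kv: -site_score(kv[1])):
    print(f"{name}: {site_score(scores):.2f}")
```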

Josefsberg said he already has the next spot picked out.

"We know exactly where it is going to be but I can't tell you right now," he said.

But Microsoft has indicated how the next generation of data center will improve upon the Chicago design.

Moving to containers allows Microsoft to bring in computing capacity as needed, but still requires the company to build the physical building, power and cooling systems well ahead of time. The company's next generation of data center will allow those things to be built in a modular fashion as well.

"The beauty of that is two-fold," Josefsberg said. "We commit less capital upfront and we can then accommodate the latest in technology along the way."

Posted via web from chetty's posterous

Friday, September 18, 2009

Enterprise Content Management in the Cloud

Provides Developer Kit Including Alfresco on Amazon EC2-Ready Application Stacks

London, UK – September 17, 2009 – Alfresco Software Inc., the leader in open source enterprise content management (ECM), today launched its Cloud Content Application Developer Program. Alfresco will provide an open source Amazon EC2-ready stack and developer kit for customers and partners to develop, deploy and monetize cloud service architecture (CSA) content applications on the EC2 platform.

Tweet This: The Alfresco Cloud Content Application stack is available for Amazon EC2 at: http://bit.ly/46EhtI

Managing content effectively at low cost while adhering to regulatory controls has become an indispensable part of running an efficient business. According to the study "Above the Clouds: A Berkeley View of Cloud Computing," very large data centers can buy hardware, network bandwidth, and power at one-fifth to one-seventh the prices available to a medium-sized data center. Alfresco is designed to take full advantage of a cloud service architecture and deliver cost-effective high availability and scalability. Alfresco is portable across internal and external clouds through the use of the Content Management Interoperability Services (CMIS) specification. Those organizations wishing to use secure but shared data models also have the option of deploying multi-tenant solutions, offering compliant access between multiple legal entities.
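
For a sense of what that CMIS portability looks like in practice, here is a minimal sketch using the Apache Chemistry cmislib client; the endpoint URL and credentials are placeholders, not a real deployment, and the same code works whether the repository runs on premise or on EC2.

```python
# Minimal CMIS sketch using Apache Chemistry's cmislib client.
# The endpoint URL and credentials below are placeholders for illustration.
from cmislib import CmisClient

# Point this at any CMIS-compliant repository -- on premise or on EC2;
# the application code itself does not change.
client = CmisClient(
    'http://ec2-example.compute.amazonaws.com:8080/alfresco/cmisatom',
    'admin', 'admin')
repo = client.defaultRepository

# The standard CMIS query language works against any compliant repository.
results = repo.query(
    "SELECT * FROM cmis:document WHERE cmis:name LIKE 'Invoice%'")
for doc in results:
    print(doc.name)
```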

“Content growth requires a sophisticated ECM solution that can scale users and content volumes simply and at low cost without massive up-front capital expenditure. Alfresco today has customers in the cloud with millions of users, terabytes of data and hundreds of millions of documents,” commented John Powell, CEO, Alfresco Software. “Legacy applications may run in the cloud, but modern content service approaches consuming services resident in the same cloud are required to inherit the full benefits of a cloud service architecture. Only then can enterprises and governments achieve the cost efficiencies of on-demand scalability, fault tolerance and cloud-wide network security for documents, records and collaboration.”

The Alfresco Cloud Developer Program offers partners “early adopter” advantages to deliver cloud-ready content applications for collaboration, document and records management. Alfresco will also offer a subscription for those requiring expert Enterprise 24/7 support.

For further information regarding Alfresco's Cloud Content Application Developer Program, visit http://wiki.alfresco.com/EC2 or view this free webinar for more ideas: http://www.alfresco.com/about/events/2009/06/content-as-a-service/.

 

Posted via web from chetty's posterous

Tuesday, September 15, 2009

Obama Administration unveils cloud computing initiative

The administration's cloud computing initiative is getting started immediately, at least in small measure, on the brand-new Apps.gov Web site.

(Credit: Apps.gov)

MOUNTAIN VIEW, Calif.--The Obama administration on Tuesday announced a far-reaching and long-term cloud computing policy intended to cut costs on infrastructure and reduce the environmental impact of government computing systems.

Speaking at NASA's Ames Research Center here, federal CIO Vivek Kundra unveiled the administration's first formal efforts to roll out a broad system designed to leverage existing infrastructure and in the process, slash federal spending on information technology, especially expensive data centers.

According to Kundra, the federal government today has an IT budget of $76 billion, of which more than $19 billion is spent on infrastructure alone. And within that system, he said, the government "has been building data center after data center," resulting in an environment in which the Department of Homeland Security alone, for example, has 23 data centers.

Obama administration CIO Vivek Kundra on Tuesday unveiled the government's new cloud computing initiative.

(Credit: Daniel Terdiman/CNET)

All told, this has resulted in a doubling of federal energy consumption from 2000 to 2006. "We cannot continue on this trajectory," Kundra said.

That's why the administration is now committed to a policy of reducing infrastructure spending and instead, relying on existing systems, at least as much as is possible, given security considerations, Kundra said.

As an example of what's possible with cloud computing, Kundra pointed to a revamping of the General Services Administration's USA.gov site. Using a traditional approach to add scalability and flexibility, he said, it would have taken six months and cost the government $2.5 million a year. But by turning to a cloud computing approach, the upgrade took just a day and cost only $800,000 a year.

But while some of the benefits of the administration's cloud computing initiative are on display today--mainly at the brand-new Apps.gov Web site--Kundra's presentation was short on specifics and vague about how long it may take the government to transition fully to its new paradigm.

Indeed, Kundra hinted that it could take as much as a decade to complete the cloud computing "journey."

Three parts to initiative

While repeatedly acknowledging that many government efforts must make allowances for security in their IT plans, Kundra argued strongly that in many other cases there is little reason federal agencies cannot turn to online resources for quick, easy, and cheap provisioning of applications.

As a result, the first major element of the initiative is the brand-new Apps.gov site, a clearinghouse for business, social media, and productivity applications, as well as cloud IT services. To be sure, the site isn't fully functional yet, and in fact, a brief survey of it resulted in a series of error messages. But it's evident that the administration hopes that for many agencies, the site will eventually be a one-stop shop for the kinds of services that to date have required extensive IT spending, and Kundra said he believes the Department of Energy has already been using the site for some of its needs.

Kundra said that the second element of the effort will be budgeting. For fiscal year 2010, the administration will push cloud computing pilot projects, reflecting the effort's priority and the hope that many lightweight workflows can be moved into the cloud; for fiscal 2011, it will issue guidance to agencies throughout government.

Finally, the initiative will include policy planning and architecture that will be made up of centralized certifications, target architecture and security, privacy and procurement concerns. Kundra said that every effort will be made to ensure that data is protected and secure, and that whatever changes are made are "pragmatic and responsible."

Clearly, though, the administration has seen benefits in the way private industry uses cloud computing, and it intends to mirror those benefits. Ultimately, he added, the idea is to make it simple for agencies to procure the applications they need. "Why should the government pay for and build infrastructure that may be available for free?" Kundra said.

One inspiration, he explained, is advances the government has already seen in the streamlining of student aid application forms. The so-called FAFSA (Free Application for Federal Student Aid) form is "more complicated" than the federal 1040 tax form, Kundra said. But in a joint effort between the IRS and the Department of Education, it has become possible with one click of a mouse button for IRS data to populate the FAFSA form, Kundra said, eliminating more than 70 questions and 20 screens.

That, then, should be the kind of thing that the government seeks to do across the board, ultimately delivering large savings to taxpayers and significantly reducing the environmental impact of government IT systems.

Posted via web from chetty's posterous

Monday, September 14, 2009

Does cloud computing affect innovation?

Earlier this summer, Jonathan Zittrain wrote a New York Times OpEd piece that discussed his concerns with the cloud computing paradigm. Zittrain stated that, while it may seem on the surface that cloud adoption is as "inevitable as the move from answering machines to voice mail," he sees some real dangers.

Zittrain covered the usual concerns about data ownership, privacy, and the access that data placed in the cloud gives governments all over the world--a concern I certainly share. He went on to point out that these problems are solvable "with a little effort and political will," a view that I also adhere to.

To Zittrain, however, the biggest threat the cloud posed to computing wasn't privacy, but innovation--or a lack thereof:

This freedom is at risk in the cloud, where the vendor of a platform has much more control over whether and how to let others write new software. Facebook allows outsiders to add functionality to the site but reserves the right to change that policy at any time, to charge a fee for applications, or to de-emphasize or eliminate apps that court controversy or that they simply don't like. The iPhone's outside apps act much more as if they're in the cloud than on your phone: Apple can decide who gets to write code for your phone and which of those offerings will be allowed to run. The company has used this power in ways that Bill Gates never dreamed of when he was the king of Windows: Apple is reported to have censored e-book apps that contain controversial content, eliminated games with political overtones, and blocked uses for the phone that compete with the company's products.

When I first read Zittrain's words, I was concerned that he had a point--that the future of software would be a few big platform owners controlling user experience and functionality much the same way that Apple has for both the Mac and the iPhone. Where would the next radical disruption come from if none of the platforms would allow disruption?

However, my eyes were opened a little bit last week when I met with Paul Albright and Tom Fischer, CEO and CTO, respectively, of a business productivity software vendor called SuccessFactors. I went to the meeting expecting to talk cloud and to hear something about how SuccessFactors was going to change the cloud landscape. I'm not a business process guy (anymore), and I thought I was going to be bored to tears by Albright's explanation.

Instead, SuccessFactors wanted to talk to me about what they claimed was a new class of business software--one that would greatly improve business productivity. They called it "business execution software."

My eyes rolled.

To my surprise, however, what I found appeared to be a very innovative way of dealing with organizations, their goals, and their measurement. The new SuccessFactors business execution software and its related SuccessCloud SaaS (software as a service) offering aim to leverage the company's extensive experience in business performance measurement to create an unobtrusive, real-time monitor of business execution against goals.

Sounds innovative to me.

(It is important to note that SuccessFactors isn't a "pure" SaaS play, but the SuccessCloud services are critical to the business execution story, so it qualifies as a SaaS cloud to me.)

So the question arises: why would one of the most successful SaaS offerings--if not the most successful--work so hard to disrupt the way performance measurement was done? And are there any guarantees that this innovation will continue if cloud computing becomes a dominant paradigm?

The answer came to me this weekend as I contemplated how to measure the SuccessFactors story against Zittrain's prediction. Innovation in SaaS, and probably in PaaS (platform as a service) and IaaS (infrastructure as a service) as well, is driven by competition--the same competition that drove shrink wrap vendors to innovate in the past. However, SaaS has an advantage: upgrades are easier, and all customers are guaranteed to upgrade at the same time, enabling rapid, iterative development and deployment.

Think the barrier to building and releasing an alternative is insurmountable? Think again. If an entrepreneur thinks they can beat SuccessFactors at its own game, they can build their alternative on Amazon Web Services, Google App Engine, Microsoft Azure, or any of a number of cloud platforms and infrastructures. They can build their own SaaS business around that software, or they can simply make the image/code available to be replicated by whoever wants their own instance. The important point is that SuccessFactors' ownership of its own data center gives it little advantage.

In the end, it is the ability to deliver application functionality that makes a cloud desirable, not simply the technology used to deliver functionality. So, in the enterprise SaaS category at least, innovation remains alive and well.

Posted via web from chetty's posterous

Tuesday, September 8, 2009

Contextvoice-the real time search

The Search API is here. Full Conversation Search and Powerful Filtering among top features

Posted in News | by Dragos ILINCA | September 8th, 2009

Today is party time at ContextVoice / uberVU as we're officially launching our Search API as part of the ContextVoice suite of APIs.

Why another Search API? Don’t we have blog search and Twitter search already?

Yes, we do. But we do something completely different. We do conversational search. A conversation is usually a story (blog post, news article) and all its related comments from all over the Web. So a blog post that has comments on Digg, has been retweeted by people and has got comments on FriendFeed is, for us, a single conversation.

Blog search only indexes blog posts and leaves out comments of any kind. And it’s usually slow.

Twitter indexes only tweets that contain the keyword you’re searching for. It’s close to real-time.

We index both the story content and all the distributed comment content and we return as a result complete conversations. And we’re close to real-time. The best of both worlds.

And this is useful because…?

The Web right now is all about conversations. However, until now, search engines have mostly returned results by relevance (Google), recency (Twitter) or some sort of combination (blog search). But out of those recent results, which one is more important? Where’s all the attention? What’s hot right now as opposed to what’s just fresh?

What’s very important to know is what Conversations (not blog posts) are HOT right now – so you can participate and make decisions in close to real-time. That’s what our API is focusing on.

What to expect from our search

Full conversation search. Because we don't index just page content but all the reactions and comments from all over the web, our results offer a better understanding of the conversation than a regular blog search. The blog post may be about the Google OS, but the comments are mostly about Microsoft. Our Search makes that visible to you.

Close-to-real-time results. We monitor the top social media sites in real time, so if something happens there, you know instantly.

Filtering options. We can show you the most relevant, the newest or the hottest conversations. We calculate the hotness based on conversation acceleration – how many reactions it got in the last time-frame. So a conversation with 10 reactions in the last minute may be hotter than one with 1500 reactions distributed over a month. But the hot conversation matters more, because that’s the one that has the attention and momentum.
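
A toy version of that "hotness by acceleration" idea fits in a few lines of Python. The 60-second window and the scoring rule below are my own assumptions, not ContextVoice's actual formula, but they reproduce the 10-reactions-in-a-minute example above.

```python
import time

# Toy sketch of "hotness by acceleration": score a conversation by how many
# reactions landed in a recent window, not by its lifetime total.
# The 60-second window is an assumption, not ContextVoice's real formula.
WINDOW_SECONDS = 60

def hotness(reaction_timestamps, now=None):
    now = now or time.time()
    return sum(1 for t in reaction_timestamps if now - t <= WINDOW_SECONDS)

now = time.time()
# 10 reactions in the last minute vs. 1,500 spread evenly over a month.
fresh = [now - i * 5 for i in range(10)]
stale = [now - i * (30 * 24 * 3600 / 1500) for i in range(1500)]

print(hotness(fresh, now), "vs", hotness(stale, now))  # the fresh conversation wins
```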

What to build with the API

The Search we provide can be used in a lot of products and dashboards. uberVU.com, for example, is entirely built on top of this API. You should check it out as it’s a great starting point for what’s possible with the Search API.

Social media dashboards – want to search entire conversations for your brand or product and get an insight into what's hot right now and where the attention is – whether on Twitter, FriendFeed or blogs? The Search API is a great place to start.

Memetrackers – interested in building a meme site for the NFL or the latest gadgets? Just use the Search API and get the hottest conversations about that topic in close to real-time.

Community management projects – want to bring fresh content and community conversations from all over the Web to your site or community? Search for the topics you’re interested in with our Search API and expose the hottest conversations on your site. We don’t just get the stories, we get the comments. And the comments are the most important part of a community. People comment if other people have commented. That’s what you want.

Financial analytics – search for your favorite company and graph the comment activity against the evolution of its stock quote. You may be intrigued by what you find.

These are just some starting ideas, I’m sure there’s a lot more you can think of.

Posted via web from chetty's posterous

Forget Silicon, This Teenager's Solar Panel Uses Human Hair as a Conductor [Solar Energy]

By Rosa Golijan, 11:15 PM on Tue Sep 8 2009

If eighteen-year-old Milan Karki doesn't turn out to be the next Tesla or Edison, then I'll chop off my locks. This kid invented a solar panel which uses human hair as a conductor and could solve an energy crisis.

As a teenager in a rural village in Nepal, Milan Karki knows just how desperately developing countries need an affordable, renewable energy source. But rather than whine about the availability of electricity or the cost of batteries, he sat down and came up with a solution: Low-cost solar panels with human hair conductors.

Solar energy isn't anything new, but solar panels themselves can be pricey to produce because they use silicon. Karki solved the cost issue by using human hair instead, since it turns out that melanin, the color pigment in hair, is a good conductor. Oh, and did we mention that it's cheaper than silicon?

Half a kilo of hair can be bought for only 16p in Nepal and lasts a few months, whereas a pack of batteries would cost 50p and last a few nights.

I don't know why they're selling hair by the kilo, but this idea is absolutely brilliant and I can't wait to see if it turns into something widely used. [Daily Mail]
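
For a rough sense of the economics, here is a back-of-the-envelope comparison using the quoted prices; "a few months" is assumed to mean about 90 nights and "a few nights" about three, so treat the ratio as an order-of-magnitude estimate.

```python
# Back-of-the-envelope cost per night, using the figures quoted above.
# "A few months" ~= 90 nights and "a few nights" ~= 3 are assumptions.
hair_pence, hair_nights = 16, 90
battery_pence, battery_nights = 50, 3

hair_per_night = hair_pence / hair_nights            # ~0.18p per night
battery_per_night = battery_pence / battery_nights   # ~16.7p per night

print(f"hair: {hair_per_night:.2f}p/night, batteries: {battery_per_night:.2f}p/night")
print(f"batteries cost ~{battery_per_night / hair_per_night:.0f}x more per night")
```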

Posted via web from chetty's posterous

Enterprise cloud computing coming of age

One of the most interesting aspects of the weeks leading up to and including this year's VMworld was the incredible innovation in cloud-computing service offerings for enterprises--especially in the category of infrastructure as a service. A variety of service providers are stepping up their cloud offerings and giving unprecedented capabilities to their customers' system administrators.

In this category, enterprises are most concerned about security, control, service levels, and compliance; what I call the "trust" issues. Most of the new services attempt to address some or all of these issues head on. Given that this is the infancy of enterprise cloud computing, I think these services bode well for what is coming in the next year or two.

Here is a brief analysis of the offerings that recently caught my eye:

  1. Amazon Web Services Virtual Private Cloud: There is no doubt that the smart people at Amazon continue to innovate at a breathtaking pace. The last three years have seen a whirlwind of new and upgraded services, ranging from storage and server capacity, to payment processing and content delivery.

    Amazon's new Virtual Private Cloud offering is just another example of how the company listens to its customers when it builds solutions. It is not so much unique or innovative as a near-perfect execution of a simple solution to a raft of thorny problems: Amazon's VPC service is essentially a powerful VPN gateway that allows Amazon resources to be added to the customer's own network (a minimal provisioning sketch appears after this list).

    Now, this doesn't directly address security, compliance, or service levels, but it gives enterprise customers a level of control over network configuration that was previously unavailable from Amazon, which in turn gives them greater latitude to address those issues.

  2. Savvis "Project Spirit": Available in beta "by the end of this year," Savvis's Project Spirit adheres to a "Virtual Private Data Center" (VPDC) concept very similar to the Virtual Data Center vision espoused by Sun. In a video providing an overview of the service, Savvis indicates that Project Spirit provides three tiers of service, each with an increasing set of capabilities and improved quality of service (QoS).

    The video demonstrates wizard-based provisioning and drag-and-drop resource topology design, both of which are similar to features from GoGrid and Sun, though perhaps a little more aligned with the latter than the former.

    What I like about Project Spirit is its sense of configurability; something that I think has been missing from many IaaS offerings to date.

  3. Terremark vCloud Express: Terremark is one of the first out of the gate with a basic "one server at a time" offering based on VMware's vCloud Express infrastructure. Targeted at the same users who find Amazon's EC2 so easy to use, the service is meant as a simple, low-risk way for customers to acquire compute capacity.

    In a video recorded at VMworld, Simon West, Terremark's VP of marketing, demonstrates provisioning a server with the service. Like other services in its class, it lets you select a server image from a menu of possibilities, click a button, and boot the resulting server in a few minutes. Pricing starts at $0.036/hr for a 1 "VPU," 0.5GB server, but as Chris Fleck of Citrix Systems notes in a blog post, Terremark charges differently than Amazon, so a lower CPU price does not necessarily mean cheaper overall operating costs.

    Terremark's new service complements its existing Enterprise Cloud service, which is targeted at larger, more sophisticated infrastructure needs.

  4. OpSource Cloud: Hosting vendor OpSource is taking a more network-centric approach to defining a cloud, similar to the "subnets" that Amazon allows customers to create in its VPC offering. The OpSource cloud is in pre-beta now, with an October target for "public release." When the OpSource team demonstrated their user interface to me, they showed me a metaphor that begins with the definition of a "network," which is isolated through custom routing capabilities at the OpSource data centers.

    Each network comes with eight public IP addresses (more can be added), and you can add resources such as servers, storage, and firewalls as you see fit. You can also create as many networks as you'd like for each account.
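
As a concrete illustration of the VPC idea in item 1, the sketch below provisions a VPC, a subnet, and the VPN-gateway pieces that tie it back to an on-premise network. It uses the modern boto3 SDK rather than the API available in 2009, and the CIDR ranges and gateway address are placeholders.

```python
# Sketch: carve out a Virtual Private Cloud and hook it to an on-premise
# network over a VPN gateway. Uses boto3 (the current SDK) rather than the
# 2009-era API; CIDR ranges and the customer gateway IP are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Private address space that becomes an extension of the corporate network.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# The VPN side: a gateway attached to the VPC, a customer gateway describing
# the on-premise device, and an IPsec connection between the two.
vgw_id = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw_id, VpcId=vpc_id)

cgw_id = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="203.0.113.12", BgpAsn=65000
)["CustomerGateway"]["CustomerGatewayId"]

vpn = ec2.create_vpn_connection(
    Type="ipsec.1", CustomerGatewayId=cgw_id, VpnGatewayId=vgw_id
)
print("VPC", vpc_id, "reachable over VPN", vpn["VpnConnection"]["VpnConnectionId"])
```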

Obviously, there are many more offerings like these in the market today. However, it is interesting to note that the common theme here seems to be security, whether through network "isolation" or through the availability of enterprise-class firewalls, load balancers, and the like. The expansion of virtual data center offerings is also interesting, as I think it shows the early growth of what will likely be the true enterprise cloud-computing space.

Access control and user account management was a little sketchy in most of the services I saw, although some showed real promise.

However, as application architectures adjust to cloud computing, one has to wonder how much longer they will remain tightly coupled to data center architectures. At what point will it no longer be advantageous for application owners to define infrastructure in terms of servers, storage, and security devices?

That being said, the independence of distributed applications from underlying architecture is a long way off, even from the enterprise perspective. I expect that by this time next year, we will see a stable of very strong enterprise public cloud offerings, with support for various compliance standards, sophisticated networking, and cloud-centric security services and technologies.

This is just the beginning of a long evolution, folks.

Posted via web from chetty's posterous

Friday, September 4, 2009

Not every cloud has a silver lining

The tech press is full of people who want to tell you how completely awesome life is going to be when everything moves to "the cloud" – that is, when all your important storage, processing and other needs are handled by vast, professionally managed data-centres.

Here's something you won't see mentioned, though: the main attraction of the cloud to investors and entrepreneurs is the idea of making money from you, on a recurring, perpetual basis, for something you currently get for a flat rate or for free without having to give up the money or privacy that cloud companies hope to leverage into fortunes.

Since the rise of the commercial, civilian internet, investors have dreamed of a return to the high-profitability monopoly telecoms world that the hyper-competitive net annihilated. Investors loved its pay-per-minute model, a model that charged extra for every single "service," including trivialities such as Caller ID – remember when you had to pay extra to find out who was calling you? Imagine if your ISP tried to charge you for seeing the "FROM" line on your emails before you opened them! Minitel, AOL, MSN — these all shared the model, and had an iPhone-like monopoly over who could provide services on their networks, and what those service-providers would have to pay to supply these services to you, the user.

But with the rise of the net – the public internet, on which anyone could create a new service, protocol or application – there was always someone ready to eat into this profitable little conspiracy. The first online services charged you for every email you sent or received. The next generation kicked their asses by offering email flat-rate. Bit by bit, the competition killed the meter running on your network session, the meter that turned over every time you clicked the mouse. Cloud services can reverse that, at least in part. Rather than buying a hard-drive once and paying nothing – apart from the electricity bill – to run it, you can buy cloud storage and pay for those sectors every month. Rather than buying a high-powered CPU and computing on that, you can move your computing needs to the cloud and pay for every cycle you eat.

Now, this makes sense for some limited applications. If you're supplying a service to the public, having a cloud's worth of on-demand storage and hosting is great news. Many companies, such as Twitter, have found that it's more cost-effective to buy barrel-loads of storage, bandwidth and computation from distant hosting companies than it would be to buy their own servers and racks at a data-centre. And if you're doing supercomputing applications, then tapping into the high-performance computing grid run by the world's physics centres is a good trick.

But for the average punter, cloud computing is – to say the least – oversold. Network access remains slower, more expensive, and less reliable than hard drives and CPUs. Your access to the net grows more and more fraught each day, as entertainment companies, spyware creeps, botnet crooks, snooping coppers and shameless bosses arrogate to themselves the right to spy on, tamper with or terminate your access to the net.

Alas, this situation isn't likely to change any time soon. Going into the hard-drive business or the computer business isn't cheap by any means – even with a "cloud" of Chinese manufacturers who'll build to your spec – but it's vastly cheaper than it is to start an ISP. Running a wire into the cellar of every house in an entire nation is a big job, and that's why you're lucky if your local market sports two or three competing ISPs, and why you can buy 30 kinds of hard drive on Amazon. It's inconceivable to me that network access will ever overtake CPU or hard-drive for cost, reliability and performance. Today, you can buy a terabyte of storage for £57. Unless you're recording hundreds of hours' worth of telly, you'd be hard-pressed to fill such a drive.

Likewise, you can buy a no-name quad-core PC with the aforementioned terabyte disc for £348. This machine will compute all the spreadsheets you ever need to tot up without breaking a sweat.

It's easy to think of some extremely specialised collaborative environments that benefit from cloud computing – we used a Google spreadsheet to plan our wedding list and a Google calendar to coordinate with my parents in Canada – but if you were designing these applications to provide maximum utility for their users (instead of maximum business-model for their developers), they'd just be a place where encrypted bits of state information were held for periodic access by powerful PCs that did the bulk of their calculations locally.

That's how I use Amazon's S3 cloud storage: not as an unreliable and slow hard drive, but as a store for encrypted backups of my critical files, which are written to S3 using the JungleDisk tool. This is cheaper and better than anything I could do for myself by way of offsite secure backup, but I'm not going to be working off S3 any time soon.
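
That pattern--encrypt locally, treat S3 as dumb remote storage--is easy to reproduce without JungleDisk. The sketch below uses the cryptography and boto3 Python libraries; the bucket and file names are placeholders.

```python
# Sketch of "encrypt locally, store remotely": S3 only ever sees ciphertext.
# Uses the cryptography and boto3 libraries; bucket/key names are placeholders.
import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this safe offline; losing it loses the backup
fernet = Fernet(key)

with open("critical-files.tar", "rb") as fh:
    ciphertext = fernet.encrypt(fh.read())

s3 = boto3.client("s3")
s3.put_object(Bucket="my-offsite-backups", Key="critical-files.tar.enc", Body=ciphertext)

# Restore: fetch the object and decrypt locally with the same key.
blob = s3.get_object(Bucket="my-offsite-backups", Key="critical-files.tar.enc")["Body"].read()
plaintext = fernet.decrypt(blob)
```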


Posted via web from chetty's posterous

Wednesday, September 2, 2009

VMware service links public and private clouds

VMware has introduced a service for developers who want to test out building cloud-based applications that will work with virtualized environments based on its products.

The infrastructure service, vCloud Express, will be offered via a number of cloud service providers that have signed up as partners, the company said in its announcement at the VMworld conference on Tuesday.

vCloud Express is based on the company's vSphere virtualization platform. As with other recently launched services, such as the Xen Cloud Platform, it aims to allow a business' internal cloud to work with an external cloud. It offers developers a way to prototype and test applications using pay-as-you-go cloud services that are compatible with IT deployments based on VMware's platform. They can then run those applications in the cloud and on the business' virtualized infrastructure.

Terremark, BlueLock, Hosting.com, Logica, Melbourne IT, and several other cloud service providers have signed up to provide vCloud Express. These companies are currently offering the service as a beta.

"Terremark's vCloud Express services will provide our customers pay-as-you-go, on-demand access to enterprise-class infrastructure that is flexible enough to offer unmatched compatibility with their own internal IT platforms," Terremark chief executive Manuel Medina said in a VMware statement.

Gartner research director Stewart Buchanan told ZDNet UK on Wednesday that enterprises could benefit from the flexibility and availability of the cloud, but he also warned against the pitfalls that could arise because of licensing terms.

"You have your operating system, your utilities and management tools--all the kind of things you would ordinarily run in-house on this virtual machine," Buchanan said. "Will you be able to lift them up and take them into the cloud? According to (most software vendors' licensing terms), the short answer is no."

Buchanan explained that with most vendors' databases, for example, licensing is based on hardware capacity--the number of processors, cores, or hosts. He said that, unless software vendors themselves had a stake in the cloud, it would be "quite a difficult challenge" to take advantage of the cloud while respecting existing licensing models.

"As we move into the cloud, we are starting to see more cloud service providers being able to license technology," Buchanan said. "Instead of buying your software and running it on the cloud, we'll see a different model where you build your application but, instead of you providing the licenses, you will buy a license from the cloud service provider."

Buchanan noted that the current licensing environment would limit applications for vCloud Express to products that are safe to license to the cloud--particularly open-source applications.

"Open-source licenses tend not to have issue unless you want to have maintenance," he said. "You will need to negotiate an alternative solution with your maintenance provider, but, in general terms, licensing your product isn't going to be such a problem."

The majority of cloud service providers have been focusing on open-source implementations because of these licensing challenges, Buchanan said, suggesting that the maturation of the cloud market would bring more enterprise-class commercial offerings.

"As the cloud is (established) on a more commercial basis, we are going to see opportunities for companies like VMware to expand their presence through cloud implementations," Buchanan said. "VMware has a well-established brand in enterprise virtualization in the data center, but they haven't really exploited the cloud opportunity yet. That's effectively what they are doing now."


Posted via web from chetty's posterous

Virtualization and the cloud: Tech, talk to converge

Posted via web from chetty's posterous

Bye bye to the 100W bulb

Shanta Barley, reporter

Europeans bid farewell to the 100 watt bulb today: from now on, Edison's brainchild can no longer be legally made in or imported into the European Union, thanks to a Union-wide ban.

Shed a tear, but don't let your sentimentality tempt you into smuggling one into the EU under your jumper: you'll be hit with a £5000 fine, according to The Daily Telegraph. That's the price for individuals caught transgressing the ban. Companies will face unlimited fines.

The EU hopes that the ban on incandescent light bulbs will force businesses and consumers to invest in low-carbon light-emitting diodes (LEDs) and compact fluorescent lamps (CFLs), which use up to 80 per cent less energy.

The ban could save the EU anywhere from 15 to 53 million tonnes of carbon dioxide, says Matt Prescott, founder of the Ban the Bulb campaign.

And the UK could save 2 to 5 million tonnes of the stuff, he says. Will it make a difference? You decide: the ban will cut Britain's yearly emissions of CO2 by - oh, about 0.643737355 per cent.


Posted via web from chetty's posterous

Tuesday, September 1, 2009

Snow Leopard Combines Minor Improvements, Major Future-Proofing

Posted via web from chetty's posterous

OpenCalais to Add Semantic Metadata to Oracle Databases

Enterprise giant Oracle released its Database 11g Release 2 today, and it now supports OpenCalais, the Semantic Web service from Thomson Reuters. Native support for OpenCalais means users can now extract rich semantic metadata about people, places, companies, and events. Oracle directly calls the OpenCalais API through your normal database administration, though users will still have to grab an API key from Reuters.
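
Outside the database, the same extraction is just an HTTP call. The sketch below posts raw text to OpenCalais with Python's requests library; the endpoint URL and header names follow the 2009-era REST interface as I recall it and should be treated as assumptions to check against the current documentation.

```python
# Sketch: send raw text to OpenCalais and get entities/events back.
# The endpoint URL and header names reflect the 2009-era REST interface and
# are assumptions -- verify them against the current OpenCalais documentation.
import requests

CALAIS_URL = "https://api.opencalais.com/tag/rs/enrich"   # assumed endpoint
API_KEY = "YOUR-OPENCALAIS-KEY"                           # obtained from Thomson Reuters

text = "Oracle released Database 11g Release 2 in Redwood Shores, California."

response = requests.post(
    CALAIS_URL,
    data=text.encode("utf-8"),
    headers={
        "x-calais-licenseID": API_KEY,   # assumed header name
        "Content-Type": "text/raw",
        "Accept": "application/json",
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())   # people, places, companies, events as structured JSON
```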

OpenCalais began as the ClearForest service and was acquired by Reuters back in 2007. By pairing with a leading enterprise-class database like Oracle, OpenCalais will prove that it can handle increasingly large document transactions, providing better search indexing and other semantic know-how to businesses as well as the consumer Web.

Posted via web from chetty's posterous

Expected Org Adoption of Server Virtualization

http://bit.ly/46ln1k

Posted via email from chetty's posterous

Sunday, April 26, 2009