I’ve been talking to a number of people about our virtyou Opensim cluster recently, and everybody seems to know a lot about cloud computing and virtualisation. Somehow, most of these guys think that cloud computing is the magic bullet for all load problems. Let’s think about that: what is cloud computing anyway?
Maybe you know one or two services in the so-called cloud computing bubble from first-hand experience: Amazon EC2, “the elastic compute cloud”, or Google App Engine with its GQL. What both of them do is create an abstraction layer above the database, the storage, and the machines, so you talk to some virtual XY instead of connecting directly to an Oracle, DB2, or Apache instance, which in turn lets these companies use a given number of racks more effectively.
- Virtual database? So far, so good; there are several solutions for parallelizing databases for throughput.
- Virtual storage sounds good too, like Amazon S3. So storage would be the second thing that is somewhat cloudable. OK.
- And then there’s something missing: a CPU. So you would go to Amazon again, rent a virtual machine and an operating system image, and start that beast up, in the cloud. No, actually not. The virtual machine is neither smeared over several locations, nor does it become faster (or cheaper when idle) through virtualisation.
No matter how many people tell me about the magic of clouds, the fact remains that I have to rent one particular machine, with so-and-so many virtual cores and BogoMIPS, and I get a bill for this very machine at the end of the month. Not so elastic after all! In fact, any sort of CPU virtualisation has a pretty high overhead, so I never get as many MIPS out of a virtualised machine as they put in the socket. No cloud for CPU, sorry.
Now, just two weeks ago, Opensim was making headlines, because Intel has found that Opensim and the 3D Internet are the coming killer application for supercomputers. As examples of the importance of interactive 3D worlds, they had two very interesting guests: Crista Lopes, better known as Diva Canto, one of the core developers of Opensim and the inventor of Hypergrid, and Shenlei Winkler, who runs the famous x00,000-prim regions in the Intel ScienceSim.
Not the cloud, you see? Intel is big in both the cloud movement and the big irons, but they talked about Opensim at the supercomputing conference SC09 in Portland for a reason.
They simply need those big irons for simulating x00,000 prims and their interaction with x00 avatars in real time, streamed out to all those hundreds of clients, you know? Not some sort of smeared cloud stuff, but massive multiplayer realtime. That’s why you just cannot split up a sim between several computers (or else it would be several smaller sims with lots of borders, and we don’t want no borders!). At least under Linux/Mono, an Opensim process threads perfectly, so it can use all the virtual cores you might have on your big iron; but you cannot split those threads up to run on several machines, or you would lose the simultaneity of your simulation.
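To make that concrete, here is a minimal sketch (in Go, not actual Opensim code) of why threads scale to all cores of one machine but no further: all the workers mutate one shared scene in one address space, and there is simply no address space spanning two machines to move them to.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	// One shared "scene" in a single address space: this is what
	// every thread touches, and what cannot span two machines.
	scene := make([]float64, 1<<20)
	workers := runtime.NumCPU() // all logical cores, e.g. 8 on an i7

	var wg sync.WaitGroup
	chunk := len(scene) / workers // toy split; ignores any remainder
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(lo, hi int) {
			defer wg.Done()
			for i := lo; i < hi; i++ {
				scene[i] += 0.1 // in-place update on shared memory
			}
		}(w*chunk, (w+1)*chunk)
	}
	wg.Wait()
	fmt.Printf("stepped %d cells on %d cores\n", len(scene), workers)
}
```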
Intel have those big irons, and tada, we at virtyou have some of the bigger irons too, for you. We have been testing Intel i7 CPUs for the last two months; with Hyperthreading, these present 8 cores with 5,300 BogoMIPS each. And I must say, I’m amazed how smoothly all those single processes thread across all of my 8 virtual CPUs.
Just a short word on Hyperthreading: under full load you still have only 4 physical cores in that machine (ranking 11th among the PassMark high-end CPUs, in our case), but the designers have duplicated most of the peripheral circuitry as if there were 8 cores, so under mid/low load it behaves more like an 8-core machine.
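If you want to see the 4-versus-8 split on your own Linux box, here is a small sketch (assuming the usual /proc/cpuinfo layout, where “physical id” precedes “core id” in each processor block) that counts logical CPUs versus unique physical cores:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/cpuinfo")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	logical := 0                  // one "processor" line per logical CPU
	physical := map[string]bool{} // unique (physical id, core id) pairs
	var phys string

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		parts := strings.SplitN(scanner.Text(), ":", 2)
		if len(parts) != 2 {
			continue
		}
		key := strings.TrimSpace(parts[0])
		val := strings.TrimSpace(parts[1])
		switch key {
		case "processor":
			logical++
		case "physical id":
			phys = val
		case "core id":
			physical[phys+"/"+val] = true
		}
	}
	// On a Hyperthreading i7 this prints 8 logical, 4 physical.
	fmt.Printf("logical CPUs: %d, physical cores: %d\n", logical, len(physical))
}
```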
And no, I won’t virtualise any generic CPUs on these big irons; I use them just as they come (with a modern GCC 4.4 and 64-bit Gentoo Linux, as usual). They do their job very, very well. No clouds for me, just some shiny pieces of silicon, please.
Oh, did I forget about security? Yeah, we all know: you need separate virtual machines, or else they are not secure (I call it the “Wir werden alle sterben” criterion, “we are all going to die”). Wrong. Each of our processes runs under a different Linux user, which is considerably better security than I usually hear about from the Windows guys. These users cannot read each other’s data, but they happily share the CPU, each to his taste. And if one of our big irons burns out, don’t worry: the asset and configuration data is mirrored between two computing centers in two European countries, and we just rent the next big iron for you.
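For the curious, one sim process per Linux user looks roughly like this Go sketch. The user IDs and home directories are made-up placeholders, the parent process needs root to switch credentials, and “mono OpenSim.exe” stands in for whatever the real launch command is:

```go
package main

import (
	"os/exec"
	"syscall"
)

// startSimAs launches one simulator process as a dedicated Linux user.
// With the data directory mode 0700 and owned by that user, no other
// sim's user can read it, yet all sims schedule freely on the shared CPUs.
func startSimAs(uid, gid uint32, dir string) error {
	cmd := exec.Command("mono", "OpenSim.exe")
	cmd.Dir = dir // e.g. /home/sim01, mode 0700
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Credential: &syscall.Credential{Uid: uid, Gid: gid},
	}
	return cmd.Start()
}

func main() {
	// Two sims, two users: isolated data, shared silicon.
	_ = startSimAs(1001, 1001, "/home/sim01")
	_ = startSimAs(1002, 1002, "/home/sim02")
}
```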