International News

18.11.2004

America: Check Point IPS defends against attacks

By Victor R. Garza, InfoWorld (US)

At first, Check Point Software Technologies Ltd.'s InterSpect IPS seems no different from many of the IDS/IPS products we tested this summer. InterSpect defends against worms and quarantines suspicious computers or networks. Its LAN protocol protection and preemptive attack mitigation are competitive, and it can do something most of its competitors can't: physically divide the internal LAN into organizational security zones.

Dividing the LAN across organizational boundaries is a great idea in theory. Not all network traffic is created equal, and it's useful to be able to apply different security policies to different segments, but this is not necessarily easy to do in practice. We'll explore this issue further as we test the product in the next few weeks. Another cool feature we'll be looking at during testing is InterSpect's facility for working with Check Point Integrity clients. Formerly a Zone Labs product, Integrity combines a management server and client-side agents to enforce security policies on each endpoint. Integration with InterSpect should extend that policy enforcement to the network level.

From a reporting standpoint, InterSpect isn't as complex as Lancope Inc.'s StealthWatch, nor does it seem as informative as StillSecure's Border Guard. InterSpect's SmartView Monitor presents some interesting and nicely laid out statistics, but it's one of four separate applications presenting information. Normally, I'd prefer to have everything available from one console, so we'll see how this pans out. InterSpect is a promising IPS solution from a leading security vendor. Check Point has certainly earned its stripes in the firewall market; we're eager to see how it fares in the IPS arena.

Cost: InterSpect 210, US$9,000; InterSpect 410, $18,000; InterSpect 610, $36,000; InterSpect 610F, $39,000

(Victor R. Garza is a freelance author and network security consultant in the Silicon Valley.)

America: VMware delivers a datacenter in a box

By Tom Yager, InfoWorld (US)

VMware Inc., now owned by EMC Corp., created its ESX Server virtualization product for businesses that need truly enterprise-class virtualization. ESX Server 2.1.1 implements the consolidation, dynamic provisioning, resource pooling, and all-bases-covered availability assurance of expensive system and storage hardware. But ESX Server does it with ordinary servers, modular SANs, and vanilla operating systems.

I started testing with a pair of dual-processor rack servers - one Opteron and one pre-Nocona Xeon - but then moved to a single Opteron DP server and a pair of stand-alone Athlon FX (single-processor desktop Opteron) systems to get a better feel for ESX Server's approach to distributed management.

My expectation going into this review was that ESX Server would perform similarly to VMware's lower-end GSX Server product, just scaled for higher-volume environments. It will serve that purpose, but limiting it to the typical consolidation/isolation role strikes me as a poor investment. What's revolutionary about this product is that it creates a fabric of physical servers, VMs, and networked storage volumes that connect in any-to-any, many-to-many fashion.

VMware strongly advised me to use a heterogeneous SAN for my tests. I put an Apple Computer Inc./LSI Logic Corp. dual-port Fibre Channel adapter in each server and used an Emulex Corp. 355 storage switch to link the servers to a pair of Apple Xserve RAID disk arrays. In practice, setting up the SAN took longer than installing ESX Server and the guest operating systems, but I can't overstate ESX Server's brilliant use of networked storage. It implements its own SAN file system, replete with leading-edge features such as read/write volume sharing, file-level locking, and multipathing for transparent fail-over and volume spanning. ESX Server's virtualization layer delivers all this SAN goodness even to operating systems that don't have Fibre Channel drivers; to each guest OS the SAN looks like a simple SCSI adapter.

ESX Server handles the LAN transparently, too. When it routes around network traffic jams and card failures or relocates VMs from place to place, the guest OS is clueless. It sees the same set of network cards and the same fixed IP addresses.

VMware licenses ESX Server on a per-CPU basis. Its host core is a custom Linux kernel with a limited set of bulletproof device drivers, for reasons of stability. The hardware compatibility list for ESX Server is thus very short, but my dual-processor Opteron and Xeon systems proved compatible without alteration.

Although they are not a specific focus of this review, I used three optional VMware products in my testing: VirtualCenter, a scalable provisioning solution; Virtual SMP, which creates dual-processor VMs (a significant advance); and VMotion, which allows you to move a running VM from one physical location to another without interrupting its execution.

VMotion serves a compellingly practical purpose. In a service-oriented environment, it can reprovision services and all their dependencies, from databases to IP addresses, grabbing and releasing resources from a pool. For example, a service that's managing a large quantity of XML data needs a fast path to storage. VMotion can move that service to a system that's loaded with Fibre Channel ports. When the service's needs return to a nominal level, VMotion can move the service back to the pool. No connections are broken, nor are any IP addresses reassigned.

Using my setup, ESX Server's SQL Server database performance was what I'd expect from a dedicated server with a slower CPU but fabulous I/O. In fact, after my research, I'd be less likely to run multiple instances of SQL Server or Oracle on one physical machine than to run one instance each in multiple VMs.

When considering ESX Server, it's vital not to lose sight of one inescapable reality: PC servers are not designed for virtualization or hardware partitioning. Although VMware ESX Server conveys capabilities to x86 systems that come strikingly close to those of bigger iron, the performance overhead of doing all the virtualization work in software is substantial. Also, keep in mind that even on 64-bit hardware, ESX Server creates virtual 32-bit x86 systems, limiting the workload that each VM can take on. And ESX Server's network interconnects can't match the compute cycle aggregation offered by monolithic multiprocessing servers and blades with fast backplanes. But so much for the bad news.

For all the time I've spent with ESX Server, it will take a lot longer to uncover all of its complexities, but I know this much: There is nothing PC-like about x86 servers running this product. Those coming down to x86 from Sparc, Power, or PA-RISC hardware should consider no option other than ESX Server. And those running more than a rack's worth of x86 servers should think seriously about trading some raw performance, so often wasted, for the high-availability, ultimately reconfigurable server infrastructure that this product enables. It's remarkable - even marvelous - to see VMware carry IT so far with software that fits on two CDs.

(Tom Yager is technical director of the InfoWorld Test Center.)

America: Virtual Server 2005 is Windows on Windows

By Tom Yager, InfoWorld (US)

Microsoft Corp.'s Virtual Server 2005 is probably best viewed as a direct competitor to VMware Inc.'s well-entrenched GSX Server, but the degree to which Virtual Server integrates with other Microsoft server products puts it in a class of its own.

To test the product, I configured a variety of systems. The primary server bank was a pair of dual-processor Opteron rack servers, one with 4GB of RAM and one with 8GB. I focused on the Opterons because both Microsoft and VMware have adapted their products to run on Opteron with enhanced capabilities. But, similar to VMware's products, Virtual Server 2005 is 32-bit software that doesn't take advantage of the extended registers and math capabilities of Opteron or Intel's EM64T extensions.

With the exception of memory, Virtual Server 2005's system requirements are easy to meet. Windows Server 2003 is the only supported host OS, so its hardware compatibility list sets the rules. Virtual Server 2005 Standard Edition works with as many as four CPUs, whereas the Enterprise Edition supports an unlimited number of processors in a single machine.

RAM is the most significant requirement. You need what you'd ordinarily put in a server - I consider 1GB to be the minimum for servers in the Opteron/Xeon class - plus as much physical memory as you plan to dedicate to all of your running virtual servers combined. That adds up fast: If you only intend to run four virtual servers simultaneously, dedicating a scant 512 MB to each, you're still looking at 3GB to 4GB of RAM.
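The sizing rule above is simple enough to sketch. The following is an illustrative calculation only; the function name and the 1GB host baseline are assumptions based on the figures quoted in this review, not a Microsoft formula.

```python
# Rough host-RAM sizing for a virtual server host, per the rule of
# thumb above: the RAM you'd ordinarily give the server, plus the
# memory dedicated to every VM that runs at the same time.

def required_host_ram_mb(base_mb, vm_allocations_mb):
    """Host baseline RAM plus the sum of all concurrent VM allocations."""
    return base_mb + sum(vm_allocations_mb)

# Four VMs at a scant 512MB each, on top of a 1GB (1024MB) host minimum:
total = required_host_ram_mb(1024, [512] * 4)
print(total)  # 3072 MB, i.e. 3GB - the low end of the range quoted above
```

Push the same four VMs to 768MB each and the total climbs past 4GB, which is why memory dominates the hardware budget here.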

By default, Virtual Server 2005's virtual hard drives grow as needed; even if you allocate 20GB of disk space to a VM, it will initially occupy only as much real disk space as the installed software requires. Because storage space wasn't an issue in my setup, however, I was able to squeeze out markedly improved VM performance by using dedicated volumes on a Fibre Channel SAN.

In addition, Virtual Server's "differencing disks" feature supports an install-once, run-many configuration. You can launch as many VMs as you please from a single disk image without interfering with the others. Virtual Server will store each machine's data in a separate file that contains only that data which differs from the original machine's image.
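The mechanism behind differencing disks is classic copy-on-write. The toy sketch below illustrates the concept only; it is not Virtual Server's actual VHD format, and all class and method names are invented for illustration.

```python
# Copy-on-write sketch of a differencing disk: every VM reads blocks
# from a shared, read-only base image until it writes one, at which
# point the changed block lands in that VM's private delta file.

class DifferencingDisk:
    def __init__(self, base_image):
        self.base = base_image   # shared parent image, never modified
        self.delta = {}          # this VM's private changed blocks

    def read(self, block):
        # Prefer this VM's own copy; fall back to the base image.
        return self.delta.get(block, self.base.get(block))

    def write(self, block, data):
        self.delta[block] = data  # only the delta grows

base = {0: b"boot", 1: b"system"}
vm_a = DifferencingDisk(base)
vm_b = DifferencingDisk(base)

vm_a.write(1, b"patched")
print(vm_a.read(1))  # b'patched' - from vm_a's delta
print(vm_b.read(1))  # b'system'  - vm_b still sees the pristine base
```

Discarding a VM's delta is what makes "snapping back" to a known-good image instantaneous: the shared base never changed.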

I focused most of my testing on the majority case: hosting Windows. Using each operating system's ordinary CD-boot installation methods, I built VMs for Windows NT 4.0, Windows 2000 Server, and Windows XP. On the 8GB Opteron server, the performance of Windows burst-demand applications - I primarily used IIS, Exchange Server, Terminal Services, and Visual Studio .Net - was acceptable when running four virtual servers. I could push it to six by reducing the memory allocated to each VM. I was also able to balance CPU resources to favor either interactive or background sessions.

Virtual Server 2005 offers a Web-based administrative interface, but this UI won't handle the sort of active management needed in a demanding production environment or in situations where you're closely monitoring a number of VMs for testing. In these settings, the best course of action is to use the supplied set of Virtual Server 2005 management extensions for MOM (Microsoft Operations Manager). MOM handles VMs exactly as it does physical ones but with an added awareness that links the operating status of a VM to the health of the real hardware in its physical host.

Unfortunately, Virtual Server's potential is hobbled. Microsoft doesn't document or officially support the use of Linux or BSD as guest OSes - a departure from the policies of Connectix, from whom Microsoft bought the Virtual Server technology. Also, Virtual Server restricts each VM to a single virtual processor, limiting its best-case performance.

What's more, guest OSes can't balance the use of I/O, processor cache, and memory. Microsoft claims this is less of a problem on Opteron, whose NUMA (non-uniform memory access) architecture doesn't require the OS to handle the minute arbitration of multiple streams of data across a single bus. I was not able to bring in an EM64T-enhanced Xeon system for this review, so I can't say how much real difference there is between the two architectures.

Microsoft's primary contributions to Virtual Server since purchasing it from Connectix in early 2003 have been in the areas of management, enterprise integration, and NUMA tuning. The differencing disks feature alone enables myriad large-scale testing, lab isolation, and extreme security scenarios, thanks to its capability of snapping back to a known-good or known-safe boot disk image in an instant.

I'm quite sure that virtualization will eventually become a standard feature of Windows servers. As it is, with Virtual Server priced affordably and given the management integration Microsoft crafted for it, it's more than worth the cost.

America: NVidia's new chip challenges rivals

By Eric Dahl, PC World.com (US)

PC World's first tests of NVidia Corp.'s just-announced high-end mobile graphics chip, the GeForce Go 6800, show that it is one of the first notebook graphics components to deliver performance rivaling that of desktop boards.

The GeForce Go 6800 is based on NVidia's line of high-end desktop graphics chips. Scheduled to be available in gaming notebooks, it promises to let gamers take desktop performance on the road.

The PC World Test Center put a ProStar 9095 notebook equipped with a 3-GHz Pentium 4 processor, 1GB of RAM, and NVidia's new chip through our graphics tests - and recorded some impressive results.

Test results

In Doom 3, for example, the notebook we tested posted scores similar to those achieved by a 3.66-GHz PCI Express desktop graphics test machine running a GeForce 6600 GT graphics board. At 1024 by 768 resolution, the notebook managed a frame rate of 46 frames per second, while the desktop clocked in at 55 fps. But with anti-aliasing turned on, the gap narrowed sharply: The notebook finished at 32 fps to the desktop's 33 fps. Anti-aliasing removes jagged edges from computer-generated graphics to make them appear smoother.

Performance was impressive in other current games, as well. Running at 1024 by 768 resolution with anti-aliasing turned on, the chip averaged 39 fps in our Far Cry test; and at the same resolution without anti-aliasing, it posted 60 fps in our Halo test.

Desktop technology

The GeForce Go 6800 features much the same 3D graphics technology found in GeForce 6800 desktop chips. This includes full support for Shader Model 3.0 in Microsoft's DirectX 9.0. Shader Model 3.0 lets the GPU run more-complex programs for processing data in 3D scenes, and it allows developers to write more-efficient code to do a better job of rendering effects such as displacement mapping.

NVidia offers the GeForce Go 6800 in two configurations - one that runs both the chip and the video memory at 300 MHz, and another that combines a 450-MHz graphics chip with 600-MHz memory. Our test system featured the 300/300 version. Laptops that use the faster configuration should perform better.

The new chip also contains a technology called PureVideo that NVidia claims will improve the quality of DVD and video playback. PureVideo handles several DVD decoding features and accelerates such advanced video codecs as MPEG-4 and Microsoft's WMV HD.

ATI Technologies Inc. (NVidia's main competitor in the graphics chip market) plans to launch a new mobile graphics chip of its own in coming weeks.
