Lucasfilm: The Real Magic is in the Data Center
Kevin Clark, director of IT operations for Lucasfilm, discusses how their data center works:
* Linux-based platform running SUSE (which they're looking to change), with a lot of proprietary and open source applications for content creation.
* 4,500-processor render farm in the data center; artist workstations are added to the farm during off hours.
* Developed their own proprietary scheduler to schedule the 5,500 processors available (farm plus off-hours workstations). A minimal sketch of the idea appears after this list.
* Render nodes are Verari blade racks running dual dual-core Opteron chips with 32GB of memory on board, and they are expanding those to quad-core. They're an AMD shop.
* 400TB of storage online for production.
* Every night they write out 10-20TB of new data during a render. A project will use upwards of a hundred terabytes of storage.
* Incremental backups are a challenge because the data changes up to 50 percent over a week (see the back-of-the-envelope calculation after this list).
* NetApp filers are used for storage. They like the global namespace in the virtual file system (a sketch of what that buys them follows this list).
* Foundry Networks architecture shop. One of the larger 10-GbE-backbone facilities on the West Coast, with 350-plus 10 GbE ports used for distribution throughout the facility and on the back end.
* Grid computing has been in use for over four years.
* A 10-Gig dark fiber connection links San Rafael and their home office, enabling rendering and storage to be shared between the two facilities. Artists see no difference in performance regardless of which facility holds their data and their shots.
* Artists get server-class machines: HP 9400 workstations with dual dual-core Opteron processors and 16GB of memory.
* The challenge now is to better segment storage so they don't keep sinking costs into high-cost disk.
* VMware is used to host a lot of development environments, allowing quick turn-up of testing since tests can be allocated across VMs (see the sketch after this list).
* Power-over-Ethernet (PoE) is provided from the switches out to all of their Web farms.
* The next push is on the facilities side: becoming more efficient at airflow management and power utilization.
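
The scheduler itself is proprietary and the interview gives no detail about how it works, but a minimal sketch of the basic idea is easy to imagine: a priority queue of render jobs matched against a processor pool that grows when idle workstations join off hours. Everything below (class names, job names, the assumption of one processor per frame, and the 1,000-workstation figure implied by 5,500 minus 4,500) is illustrative, not their implementation.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical sketch: priority dispatch over a render farm whose capacity
# grows at night as idle artist workstations join the schedulable pool.

@dataclass(order=True)
class RenderJob:
    priority: int                        # lower value = more urgent
    name: str = field(compare=False)
    frames: int = field(compare=False)   # simplification: one processor per frame

class FarmScheduler:
    def __init__(self, farm_processors: int):
        self.capacity = farm_processors  # dedicated render farm
        self.in_use = 0
        self.queue: list[RenderJob] = []

    def add_workstations(self, processors: int) -> None:
        """Off hours, idle workstations are added to the schedulable pool."""
        self.capacity += processors

    def submit(self, job: RenderJob) -> None:
        heapq.heappush(self.queue, job)

    def dispatch(self) -> list[tuple[str, int]]:
        """Grant free processors to the highest-priority queued jobs."""
        grants = []
        while self.queue and self.in_use < self.capacity:
            job = heapq.heappop(self.queue)
            granted = min(job.frames, self.capacity - self.in_use)
            self.in_use += granted
            grants.append((job.name, granted))
            if granted < job.frames:
                # Not enough free processors; requeue the remaining frames.
                heapq.heappush(self.queue,
                               RenderJob(job.priority, job.name,
                                         job.frames - granted))
        return grants

# Example: the 4,500-processor farm plus ~1,000 workstation processors off hours.
sched = FarmScheduler(farm_processors=4500)
sched.add_workstations(1000)
sched.submit(RenderJob(priority=1, name="shot_042_fx", frames=2400))
sched.submit(RenderJob(priority=2, name="shot_007_sim", frames=4000))
print(sched.dispatch())   # [('shot_042_fx', 2400), ('shot_007_sim', 3100)]
```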
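
On the backup point, a quick back-of-the-envelope calculation using only the figures quoted above (400TB online, up to 50 percent of it changing in a week) shows why incrementals are hard at this scale:

```python
# Back-of-the-envelope: why 50% weekly churn makes incremental backups hard.
online_tb = 400        # production storage online (from the interview)
weekly_change = 0.50   # up to half the data changes over a week

changed_tb_per_week = online_tb * weekly_change
changed_tb_per_day = changed_tb_per_week / 7

print(f"~{changed_tb_per_week:.0f} TB of changed data per week")
print(f"~{changed_tb_per_day:.0f} TB per day that an incremental would have to capture")
```

That works out to roughly 200TB of changed data per week, on the order of 30TB a day, which is in the same range as the 10-20TB a nightly render writes out.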
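
The interview doesn't describe how the NetApp global namespace is actually laid out, but as a rough illustration of what a single namespace across two sites buys, here is a hypothetical path-resolution sketch. The site names, volume names, and junction table are all invented; the point is only that client paths stay stable no matter which facility physically serves the data.

```python
# Hypothetical sketch of one global namespace spanning two facilities:
# clients always reference /shows/<show>/..., and a junction table decides
# which site's filer volume actually backs that part of the tree.

JUNCTIONS = {
    "/shows/showA": ("san_rafael", "vol_showA_prod"),   # invented names
    "/shows/showB": ("home_office", "vol_showB_prod"),
}

def resolve(path: str) -> str:
    """Map a namespace path to the filer volume that backs it."""
    for prefix, (site, volume) in JUNCTIONS.items():
        if path == prefix or path.startswith(prefix + "/"):
            rest = path[len(prefix):]
            return f"{site}:{volume}{rest}"
    raise FileNotFoundError(path)

# Artists use the same path regardless of where the data lives; over the
# 10-Gig dark fiber link the location is invisible to them.
print(resolve("/shows/showA/shot_042/frame_0001.exr"))
print(resolve("/shows/showB/shot_007/cloth_cache.bin"))
```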
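
Finally, a sketch of the "quick turn-up of testing" idea with VMware: independent test suites get spread across a pool of already-running development VMs, so a new test run needs a free VM rather than new hardware. The VM names, suite names, and dispatch helper below are invented and stand in for whatever tooling Lucasfilm actually uses around VMware.

```python
from concurrent.futures import ThreadPoolExecutor
import itertools

# Hypothetical sketch: round-robin independent test suites over a pool of
# development VMs and run them concurrently.

VM_POOL = ["devvm01", "devvm02", "devvm03", "devvm04"]   # invented names
TEST_SUITES = ["pipeline_io", "shader_compile", "asset_db",
               "render_submit", "license_server", "scheduler_api"]

def run_suite_on_vm(suite: str, vm: str) -> str:
    # In reality this would log into the VM (or drive the hypervisor's API)
    # and execute the suite; here we just report the assignment.
    return f"{suite} -> {vm}"

def dispatch(suites, vms):
    """Assign suites to VMs round-robin and run them concurrently."""
    assignments = list(zip(suites, itertools.cycle(vms)))
    with ThreadPoolExecutor(max_workers=len(vms)) as pool:
        futures = [pool.submit(run_suite_on_vm, s, v) for s, v in assignments]
        return [f.result() for f in futures]

print(dispatch(TEST_SUITES, VM_POOL))
```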