Monday
May 4, 2009

STRUCTURE 09 IS BACK!

The GigaOM Network today announces its second Structure conference after the runaway success of the 2008 event. The Structure 09 conference returns to San Francisco, Calif., on June 25th, 2009. Structure 09 (http://structureconf.com) is a conference designed to explore the next generations of Internet infrastructure. Over a year ago, The GigaOM Network Founder Om Malik saw that the platforms on which we have done business for over a decade were starting to provide diminishing returns, and smart money was seeking new options. Structure 09 looks at the changing needs and rapid growth in the Internet infrastructure sector, and this year's event will consider the impact of the global economy.

"I cannot remember a time when a new technology had so much relevance to our industry as cloud computing does in the current economic climate," said The GigaOM Network Founder Om Malik. "We all need to find ways to leverage what we have and cut costs without compromising future options. Infrastructure On Demand and Cloud Computing are very strong avenues for doing so and we will look for what practicable advice we can bring to our audience."

"Structure 08 was a great experience for our audience and partners, and I am very pleased to be bringing it back again this year," said Malik. "Along with GigaOM Lead Writer Stacey Higginbotham and the conference program committee, I am bringing together what I intend to be one of the most authoritative programs for the cloud computing and Internet infrastructure space."

The GigaOM Network is also announcing early speaker selections. Confirmed speakers include:

  • Marc Benioff, Chairman and CEO, Salesforce.com
  • Paul Sagan, President and CEO, Akamai
  • Werner Vogels, CTO, Amazon
  • Russ Daniels, VP and CTO, Cloud Services Strategy, HP
  • Raj Patel, VP of Global Networks, Yahoo!
  • Jonathan Heiliger, VP, Technical Operations, Facebook
  • Greg Papadopoulos, CTO and EVP, Research and Development, Sun Microsystems
  • Jack Waters, President, Global Network Services and CTO, Level 3 Communications
  • Michael Stonebraker, PhD, CTO and Co-Founder, Vertica Systems
  • David Yen, EVP and GM, Data Center Business Group, Juniper Networks
  • Vijay Gill, VP Engineering, Google
  • Yousef Khalidi, Distinguished Engineer, Microsoft Corporation
  • Tobias Ford, Assistant VP, IT, AT&T
  • Richard Buckingham, VP of Technical Operations, MySpace
  • Lew Tucker, VP and CTO, Cloud Computing, Sun Microsystems
  • Lloyd Taylor, VP Technical Operations, LinkedIn
  • Michael Crandell, CEO and Founder, RightScale
  • Jim Smith, General Partner, MDV-Mohr Davidow Ventures
  • Bryan Doerr, CTO, Savvis
  • Doug Judd, Principal Search Architect, Zvents
  • Brandon Watson, Director, Azure Services Platform, Microsoft
  • Jeff Hammerbacher, Chief Scientist, Cloudera
  • Jason Hoffman, PhD, CTO, Joyent
  • Mayank Bawa, CEO, Aster Data
  • James Urquhart, Market Manager, Cloud Computing and Infrastructure, Cisco Systems
  • Kevin Efrusy, General Partner, Accel
  • Lew Moorman, CEO and Founder, Rackspace
  • Joe Weinman, Strategy and Business Development VP, AT&T Business Solutions
  • Peter Fenton, General Partner, Benchmark Capital
  • David Hitz, Founder and Executive Vice President, NetApp
  • James Lindenbaum, Co-Founder and CEO, Heroku
  • Joseph Tobolski, Director of Cloud Computing, Accenture
  • Steve Herrod, CTO and Sr. VP of R&D, VMware

Further details can be found at the Structure 09 website: http://events.gigaom.com/structure/09/

High Scalability readers can register with a $50 discount at http://structure09.eventbrite.com/?discount=HIGHSCALE


Friday
May 1, 2009

FastBit: An Efficient Compressed Bitmap Index Technology

Data mining and fast queries are always in that bin of hard-to-do things where doing something smarter can yield big results. Bloom filters are one such do-it-smarter strategy; compressed bitmap indexes are another. In one application "FastBit outruns other search indexes by a factor of 10 to 100 and doesn’t require much more room than the original data size." The data size is an interesting metric. Our old standard, the b-tree, can be two to four times larger than the original data. In a test searching an Enron email database, FastBit outran MySQL by 10 to 1,000 times.

FastBit is a software tool for searching large read-only datasets. It organizes user data in a column-oriented structure which is efficient for on-line analytical processing (OLAP), and utilizes compressed bitmap indices to further speed up query processing. Analyses have proven the compressed bitmap index used in FastBit to be theoretically optimal for one-dimensional queries. Compared with other optimal indexing methods, bitmap indices are superior because they can be efficiently combined to answer multi-dimensional queries whereas other optimal methods can not.
It's not all just map-reduce and add more servers until your attic is full.
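
To make the bitmap-index idea concrete, here is a minimal, uncompressed sketch in Python. It is not FastBit's API or its WAH-compressed format, just the core trick: one bit vector per distinct column value, so multi-dimensional predicates reduce to cheap bitwise ANDs and ORs across indexes.

```python
# Minimal bitmap-index sketch (no compression). FastBit layers
# word-aligned-hybrid (WAH) compression on top of the same idea;
# this illustrative code is not FastBit's actual API.

class BitmapIndex:
    def __init__(self, values):
        self.nrows = len(values)
        self.bitmaps = {}                      # one bit vector per distinct value
        for row, v in enumerate(values):
            self.bitmaps[v] = self.bitmaps.get(v, 0) | (1 << row)

    def eq(self, value):
        """Bit vector of rows where the column equals value."""
        return self.bitmaps.get(value, 0)

# Two columns of the same five-row table.
status = BitmapIndex(["open", "closed", "open", "open", "closed"])
region = BitmapIndex(["us",   "us",     "eu",   "us",   "eu"])

# Multi-dimensional query: status == 'open' AND region == 'us'
# is a single bitwise AND over the two per-column bit vectors.
hits = status.eq("open") & region.eq("us")
print([r for r in range(status.nrows) if (hits >> r) & 1])   # -> [0, 3]
```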

Related Articles

  • FastBit: Digging through databases faster. An excellent description of how FastBit works, especially compared to b-trees.


    Wednesday
    Apr 29, 2009

    Presentations: MySQL Conference & Expo 2009

    The presentations from the MySQL Conference & Expo 2009, held April 20-23 in Santa Clara, are available at the above link.

    They include:

    • Beginner's Guide to Website Performance with MySQL and memcached by Adam Donnison

    • Calpont: Open Source Columnar Storage Engine for Scalable MySQL DW by Jim Tommaney

    • Creating Quick and Powerful Web Applications with MySQL, GlassFish, and NetBeans by Arun Gupta

    • Deep-inspecting MySQL with DTrace by Domas Mituzas

    • Distributed Innodb Caching with memcached by Matthew Yonkovit and Yves Trudeau

    • Improving Performance by Running MySQL Multiple Times by MC Brown

    • Introduction to Using DTrace with MySQL by Vince Carbone

    • MySQL Cluster 7.0 - New Features by Johan Andersson

    • Optimizing MySQL Performance with ZFS by Allan Packer

    • SAN Performance on an Internal Disk Budget: The Coming Solid State Disk Revolution by Matthew Yonkovit

    • This is Not a Web App: The Evolution of a MySQL Deployment at Google by Mark Callaghan

    Wednesday
    Apr 29, 2009

    How to Choose and Build the Perfect Server

    There are a lot of questions about server components and how to choose and/or build the perfect server with power consumption in mind, so I decided to write about this topic.

    Key Points:

    • What kinds of components a server needs

    • Green computing and server components

    • How much power the server consumes (a rough estimate sketch follows this list)

    • Choosing the right components: processors, HDDs, RAID, memory

    • Build a server, or buy one?
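
    As a rough illustration of the power question above, here is a back-of-envelope sketch. All wattage, efficiency, and price figures are placeholder assumptions for illustration, not measurements of any particular hardware.

```python
# Back-of-envelope server power and energy-cost estimate.
# Every number below is an illustrative assumption, not a measurement.

COMPONENT_WATTS = {
    "cpu (quad-core, under load)": 95,
    "memory (8 x 4 GB DIMMs)":     8 * 4,
    "disks (4 x 7200 rpm SATA)":   4 * 10,
    "raid controller":             15,
    "motherboard, fans, misc":     60,
}

PSU_EFFICIENCY = 0.85      # assumed 85% efficient power supply
PRICE_PER_KWH  = 0.12      # assumed electricity price in $/kWh

dc_watts   = sum(COMPONENT_WATTS.values())
wall_watts = dc_watts / PSU_EFFICIENCY
yearly_kwh = wall_watts * 24 * 365 / 1000.0

print(f"DC load:       {dc_watts:.0f} W")
print(f"At the wall:   {wall_watts:.0f} W")
print(f"Yearly energy: {yearly_kwh:.0f} kWh, "
      f"about ${yearly_kwh * PRICE_PER_KWH:.0f}/year per server")
```
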
    Monday
    Apr 27, 2009

    Some Questions from a Newbie

    Hello highscalability world. I just discovered this site yesterday in a search for a scalability resource and was very pleased to find such useful information. I have some questions regarding distributed caching that I was hoping the scalability intelligentsia trafficking this forum could answer. I apologize for my lack of technical knowledge; I'm hoping this site will increase said knowledge! Feel free to answer all or as much as you want. Thank you in advance for your responses and thank you for a great resource!

    1. What are the standard benchmarks used to measure the performance of memcached, or of MySQL and memcached working together (from web 2.0 companies etc.)?

    2. The little research I've conducted on this site suggests that most web 2.0 companies use a combination of MySQL and a hacked memcached (and potentially sharding). Does anyone know if any of these companies use an enterprise vendor for their distributed caching layer? (At this point in time I've only heard of Jive Software using Coherence.)

    3. In terms of a web 2.0 oriented startup, what are the database/distributed caching requirements typically needed to get off the ground and grow at a fairly rapid pace?

    4. Given the major players in the web 2.0 industry (Facebook, Twitter, MySpace, PoF, Flickr, etc.; I'm ignoring Google/Amazon here because they have a proprietary caching layer), what is the most common, scalable back-end setup (MySQL/memcached/sharding etc.)? What are its limitations/problems? What features does said setup lack that it really needs?

    Thank you so much for your insight!
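
    For context on what the "MySQL plus memcached" combination in these questions usually looks like in practice, here is a minimal cache-aside (read-through) sketch. It assumes the python-memcached client; the key format, TTL, and the query_database() placeholder are illustrative assumptions, not any particular company's stack.

```python
# Cache-aside pattern commonly used in front of MySQL with memcached.
# Assumes the python-memcached client. query_database() is a stand-in
# for a real SQL query and exists only to keep the sketch self-contained.

import memcache

mc = memcache.Client(["127.0.0.1:11211"])
CACHE_TTL = 300   # seconds; arbitrary illustrative value

def query_database(user_id):
    # Placeholder for: SELECT ... FROM users WHERE id = %s
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = "user:%d" % user_id
    user = mc.get(key)                        # 1. try the cache first
    if user is None:
        user = query_database(user_id)        # 2. fall back to MySQL on a miss
        mc.set(key, user, time=CACHE_TTL)     # 3. populate the cache
    return user

def update_user(user_id, fields):
    # Placeholder for: UPDATE users SET ... WHERE id = %s
    # After the write, invalidate the stale cache entry so the next
    # read repopulates it from the database.
    mc.delete("user:%d" % user_id)
```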


    Sunday
    Apr 26, 2009

    Map-Reduce for Machine Learning on Multicore

    We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speed up.
    In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time.
    Specifically, we show that algorithms that fit the Statistical Query model can be written in a certain “summation form,” which allows them to be easily parallelized on multicore computers. We adapt Google’s map-reduce paradigm to demonstrate this parallel speed up technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.
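
    To illustrate the paper's "summation form" idea, here is a hedged sketch of linear regression via the normal equations, split into map and reduce steps over data shards. It assumes NumPy and is not the paper's code: each mapper computes partial sums of x xᵀ and x y over its shard, and the reducer adds them and solves for the parameters.

```python
# "Summation form" sketch for linear regression: per-shard partial sums
# (map) are added and solved once (reduce), giving the same answer as the
# single-core computation. Uses NumPy; illustrative only.

import numpy as np

def map_shard(X_shard, y_shard):
    # Partial sufficient statistics for this shard:
    #   A_k = sum_i x_i x_i^T,   b_k = sum_i x_i * y_i
    return X_shard.T @ X_shard, X_shard.T @ y_shard

def reduce_shards(partials):
    A = sum(p[0] for p in partials)
    b = sum(p[1] for p in partials)
    return np.linalg.solve(A, b)              # theta = A^-1 b

# Toy example: split one dataset across four "cores".
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=1000)

shards = zip(np.array_split(X, 4), np.array_split(y, 4))
theta = reduce_shards([map_shard(Xs, ys) for Xs, ys in shards])
print(theta)   # close to [1.5, -2.0, 0.5]
```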

    Read more about this study here (PDF; you can also download it).


    Sunday
    Apr 26, 2009

    Scale-up vs. Scale-out: A Case Study by IBM using Nutch/Lucene

    Scale-up solutions in the form of large SMPs have represented the mainstream of commercial computing for the past several years. The major server vendors continue to provide increasingly larger and more powerful machines. More recently, scale-out solutions, in the form of clusters of smaller machines, have gained increased acceptance for commercial computing.
    Scale-out solutions are particularly effective in high-throughput web-centric applications. In this paper, we investigate the behavior of two competing approaches to parallelism, scale-up and scale-out, in an emerging search application. Our conclusions show that a scale-out strategy can be the key to good performance even on a scale-up machine.
    Furthermore, scale-out solutions offer better price/performance, although at an increase in management complexity.

    Read more about scaling up versus scaling out and the study's conclusions here (PDF; you can also download it).


    Sunday
    Apr 26, 2009

    Poem: Partly Cloudy

    As any reader of this site knows, we're huge, huge supporters of the arts. To continue that theme, here's a visionary poem by Mason Hale. Few have reached for inspiration and found their muse in the emotional maelstrom that is cloud computing, but Mason has, and the results speak for themselves:

    Partly Cloudy

    We have a dream
    A vision
    An aspiration
    To compute in the cloud
    To pay as we go
    To drink by the sip
    To add cores at our whim
    To write to disks with no end
    To scale up with demand
    And scale down when it ends
    Elasticity
    Scalability
    Redundancy
    Computing as a utility
    This is our dream
    Becoming reality
    But…
    There’s a hitch.
    There’s a bump in the road
    There’s a twist in the path
    There’s a detour ahead on the way to achieving our goal
    It’s the Database
    Our old friend
    He is set in his ways
    He deals in transactions to keep things consistent
    He maintains the integrity of all his relations
    He eats disks for breakfast
    He hungers for RAM
    He loves queries and joins, and gives each one a plan
    He likes his schemas normal and strict
    His changes are atomic
    That is his schtick
    He’s an old friend as I said
    We all know him well
    So it pains me to say that in this new-fangled cloud
    He doesn’t quite fit
    Don’t get me wrong, our friend can scale as high as you want
    But there’s a price to be paid
    That expands as you grow
    The cost is complexity
    It’s more things to maintain
    More things that can go wrong
    More ways to inflict pain
    On the poor DBA who cares for our friend
    The one who backs him up and, if he dies, restores him again
    I love our old friend
    I know you do too
    But it is time for us all to own up to the fact
    That putting him into the cloud
    Taking him out of the rack
    Just causes us both more pain and more woe
    So…
    It’s time to move on
    Time to learn some new tricks
    Time to explore a new world that is less ACIDic
    It’s time to meet some new friends
    Those who were born in the cloud
    Who are still growing up
    Still figuring things out
    There’s Google’s BigTable and Werner’s SimpleDB
    There’s Hive and HBase and Mongo and Couch
    There’s Cassandra and Drizzle
    And not to be left out
    There’s Vertica and Aster if you want to spend for support
    There’s a Tokyo Cabinet and something called Redis I’m told
    It’s a party, a playgroup of newborn DB’s
    They scale and expand, they re-partition with ease
    They are new and exciting
    And still flawed to be sure
    But they’ll learn and improve, grow and mature
    They are our future
    We developers should take heed
    If our databases can change, then maybe
    Just maybe
    So can we


    Friday
    Apr 24, 2009

    INFOSCALE 2009 in June in Hong Kong

    In case you are interested, here's the info: INFOSCALE 2009, the 4th International ICST Conference on Scalable Information Systems, 10-12 June 2009, Hong Kong, China.

    In the last few years we have seen the proliferation of heterogeneous distributed systems, ranging from simple networks of workstations to highly complex grid computing environments. Such computational paradigms have been preferred due to their reduced costs and inherent scalability, which pose many challenges to scalable systems and applications in terms of information access, storage, and retrieval. Grid computing, P2P technology, data and knowledge bases, distributed information retrieval technology, and networking technology should all converge to address the scalability concern. Furthermore, with the advent of emerging computing architectures (e.g. SMTs, GPUs, multicores), designing techniques that explicitly target these systems is becoming more and more important.

    INFOSCALE 2009 will focus on a wide array of scalability issues and investigate new approaches to tackle problems arising from the ever-growing size and complexity of information of all kinds.

    For further information visit http://infoscale.org/2009/


    Friday
    Apr 24, 2009

    Heroku - Simultaneously Develop and Deploy Automatically Scalable Rails Applications in the Cloud

    Update 4: Heroku versus GAE & GAE/J

    Update 3: Heroku has gone live! Congratulations to the team. It's difficult right now to get a feeling for the relative cost and reliability of Heroku, but it's an impressive accomplishment and a viable option for people looking for a delivery platform.

    Update 2: Heroku Architecture. A great interactive presentation of the Heroku stack. Requests flow into Nginx, used as an HTTP reverse proxy. Nginx routes requests into a Varnish-based HTTP cache. Then requests are injected into an Erlang-based routing mesh that balances requests across a grid of dynos. Dynos are your application "VMs" that implement application-specific behaviors. Dynos themselves are a stack of: POSIX, Ruby VM, App Server, Rack, Middleware, Framework, Your App. Applications can access PostgreSQL. Memcached is used as an application caching layer.

    Update: Aaron Worsham Interview with James Lindenbaum, CEO of Heroku. Aaron nicely sums up their goal: Heroku is looking to eliminate all the reasons companies have for not doing software projects.


    Adam Wiggins of Heroku presented at the lollapalooza that was the Cloud Computing Demo Night. The idea behind Heroku is that you upload a Rails application into Heroku and it automatically deploys into EC2 and it automatically scales using behind the scenes magic. They call this "liquid scaling." You just dump your code and go. You don't have to think about SVN, databases, mongrels, load balancing, or hosting. You just concentrate on building your application. Heroku's unique feature is their web based development environment that lets you develop applications completely from their control panel. Or you can stick with your own development environment and use their API and Git to move code in and out of their system.

    For website developers this is as high up the stack as it gets. With Heroku we lose that "build your first lightsaber" moment marking the transition out of apprenticeship and into mastery. Upload your code and go isn't exactly a hero's journey, but it is damn effective...

    I must confess to having an inherent love of Heroku's idea because I had a similar notion many moons ago, but the trendy language of the time was Perl instead of Rails. At the time though it just didn't make sense. The economics of creating your own "cloud" for such a different model wasn't there. It's amazing the niches utility computing will seed, fertilize, and help grow. Even today when using Eclipse I really wish it was hosted in the cloud and I didn't have to deal with all its deployment headaches. Firefox based interfaces are pretty impressive these days. Why not?

    Adam views their stack as:
    1. Developer Tools
    2. Application Management
    3. Cluster Management
    4. Elastic Compute Cloud

    At the top level developers see a control panel that lets them edit code, deploy code, interact with the database, see logs, and so on. Your website is live from the first moment you start writing code. It's a powerful feeling to write normal code, see it run immediately, and know it will scale without further effort on your part. Now, will you be able to toss your Facebook app into the Heroku engine and immediately handle a deluge of 500 million hits a month? It will be interesting to see how far a generic scaling model can go without special tweaking by a certified scaling professional. Elastra has the same sort of issue.

    Underneath, Heroku makes sure all the software components work together in Lennon-McCartney style harmony. They take care of (or will take care of) starting and stopping VMs, deploying to those VMs, billing, load balancing, scaling, storage, upgrades, failover, etc. The dynamic nature of Ruby and the development and deployment infrastructure of Rails is what makes this type of hosting possible. You don't have to worry about builds. There's a great infrastructure for installing packages and plugins. And the big hard one, database upgrades, is tackled with the new migrations feature.

    A major issue in the Rails world is versioning. Given the precambrian explosion of Rails tools, how does Heroku make sure all the various versions of everything work together? Heroku sees this as their big value add. They are in charge of making sure everything works together. We see a lot of companies on the web taking on the role of curator ([1], [2], [3]). A curator is a guardian or an overseer. Of curators Steve Rubel says: They acquire pieces that fit within the tone, direction and - above all - the purpose of the institution. They travel the corners of the world looking for "finds." Then, once located, clean them up and make sure they are presentable and offer the patron a high quality experience. That's the role Heroku will play for their deployable Rails environment.

    With great automated power come great restrictions. And great opportunity. Curating has a cost for developers: flexibility. The database they support is Postgres. Out of luck if you want MySQL. Want a different Ruby version or Rails version? Not if they don't support it. Want memcache? You just can't add it yourself. One forum poster wanted, for example, to use the command line version of ImageMagick but was told it wasn't installed and to use RMagick instead. Not the end of the world. And this sort of curating has to be done to keep a happy and healthy environment running, but it is something to be aware of.

    The upside of curation is stuff will work. And we all know how hard it can be to get stuff to work. When I see an EC2 AMI that already has most of what I need, my heart goes pitter patter over the headaches I'll save because someone already did the heavy curation for me. A lot of the value in services like rPath, for example, is in curation. rPath helps you build images that work, that can be deployed automatically, and can be easily upgraded. It can take a big load off your shoulders.

    There's a lot of competition for Heroku. Mosso has a hosting system that can do much of what Heroku wants to do. It can automatically scale up at the webserver, data, and storage tiers. It supports a variety of frameworks, including Rails. And Mosso also says all you have to do is load and go.

    3Tera is another competitor. As one user said: It lets you visually (through a web UI) create "applications" based on "appliances". There is a standard portfolio of prebuilt applications (SugarCRM, etc.) and templates for LAMP, etc. So, we build our application by taking a firewall appliance, a CentOS appliance, a gateway, a MySQL appliance, glue them together, customize them, and then create our own template. You can specify, down to the appliance level, the amount of CPU, memory, disk, and bandwidth each is assigned, which lets you scale up your capacity simply by tweaking values through the UI. We can now deploy our Rails/Java hosted offering for new customers in about 20 minutes on our grid. AppLogic has automatic failover so that if anything goes wrong, it redeploys your application to a new node in your grid and restarts it. It's not as cheap as EC2, but much more powerful. True, 3Tera won't help with your application directly, but most of the hard bits are handled.

    RightScale is another company that combines curation along with load balancing, scaling, failover, and system management.

    What differentiates Heroku is their web based IDE that allows you to focus solely on the application and ignore the details. Though now that they have a command line based interface as well, it's not as clear how they will differentiate themselves from other offerings.

    The hosting model has a possible downside if you want to do something other than straight web hosting. Let's say you want your system to insert commercials into podcasts. That sort of large scale batch logic doesn't cleanly fit into the hosting model. A separate service accessed via something like a REST interface needs to be created. Possibly double the work. Mosso suffers from this same concern. But maybe leaving the web front end to Heroku is exactly what you want to do. That would leave you to concentrate on the back end service without worrying about the web tier. That's a good approach too.

    Heroku is just getting started so everything isn't in place yet. They've been working on how to scale their own infrastructure. Next is working on scaling user applications beyond starting and stopping mongrels based on load. They aren't doing any vertical scaling of the database yet. They plan on memcaching reads, implementing read-only slaves via Slony, and using the automatic partitioning features built into Postgres 8.3. The idea is to start a little smaller with them now and grow as they grow. By the time you need to scale bigger they should have the infrastructure in place.
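
    As a minimal sketch of the "read-only slaves" part of that plan, here is a read/write-splitting wrapper. It assumes ordinary DB-API connection objects (e.g. psycopg2) and a naive random replica-selection policy; it is not Heroku's implementation.

```python
# Read/write splitting: writes go to the master, reads can be served by
# any read-only replica (e.g. one kept in sync via Slony). The connection
# objects and the pick-a-replica policy are assumptions for illustration.

import random

class SplitConnection:
    def __init__(self, master, replicas):
        self.master = master        # read-write connection to the primary
        self.replicas = replicas    # list of read-only replica connections

    def execute_read(self, sql, params=()):
        cur = random.choice(self.replicas).cursor()   # naive balancing
        cur.execute(sql, params)
        return cur.fetchall()

    def execute_write(self, sql, params=()):
        cur = self.master.cursor()
        cur.execute(sql, params)
        self.master.commit()
```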

    One concern is that pricing isn't nailed down yet, but my gut says it will be fair. It's not clear how you will transfer an existing database over, especially from a non-Postgres database. And if you use the web IDE, I wonder how you will handle normal project stuff like continuous integration, upgrades, branching, release tracking, and bug tracking? Certainly a lot of work to do and a lot of details to work out, but I am sure it's nothing they can't handle.

    Related Articles

  • Heroku Rails Podcast
  • Heroku Open Source Plugins etc