Friday, June 26, 2009

PlentyOfFish Architecture

Update 5: PlentyOfFish Update - 6 Billion Pageviews And 32 Billion Images A Month
Update 4: Jeff Atwood costs out Markus' scale up approach against a scale out approach and finds scale up wanting. The discussion in the comments is as interesting as the article. My guess is Markus doesn't want to rewrite his software to work across a scale out cluster so even if it's more expensive scale up works better for his needs.
Update 3: POF now has 200 million images and serves 10,000 images per second. They'll be moving to a 250,000 IOPS RamSan to handle the load. They also upgraded the core database machine to 512 GB of RAM, 32 CPUs, SQL Server 2008, and Windows Server 2008.
Update 2: This seems to be a POF Peer1 love fest infomercial. It's pretty content free, but the production values are high. Lots of quirky sounds and fish swimming on the screen.
Update: by Facebook standards Read/WriteWeb says POF is worth a cool one billion dollars. It helps to talk like Dr. Evil when saying it out loud.

PlentyOfFish is a hugely popular on-line dating system slammed by over 45 million visitors a month and 30+ million hits a day (500 - 600 pages per second). But that's not the most interesting part of the story. All this is handled by one person, using a handful of servers, working a few hours a day, while making $6 million a year from Google ads. Jealous? I know I am. How are all these love connections made using so few resources?

Site: http://www.plentyoffish.com/

Information Sources

  • Channel9 Interview with Markus Frind
  • Blog of Markus Frind
  • Plentyoffish: 1-Man Company May Be Worth $1Billion

    The Platform

  • Microsoft Windows
  • ASP.NET
  • IIS
  • Akamai CDN
  • Foundry ServerIron Load Balancer

    The Stats

  • PlentyOfFish (POF) gets 1.2 billion page views/month, and 500,000 average unique logins per day. The peak season is January, when traffic grows 30 percent.
  • POF has one single employee: the founder and CEO Markus Frind.
  • Makes up to $10 million a year on Google ads working only two hours a day.
  • 30+ Million Hits a Day (500 - 600 pages per second).
  • 1.1 billion page views and 45 million visitors a month.
  • Has 5-10 times the click through rate of Facebook.
  • A top 30 site in the US based on Compete's Attention metric, top 10 in Canada and top 30 in the UK.
  • 2 load balanced web servers, each with 2 Quad Core Intel Xeon X5355s @ 2.66GHz, 8 GB of RAM (about 800 MB in use), 2 hard drives, running Windows x64 Server 2003.
  • 3 DB servers. No data on their configuration.
  • Approaching 64,000 simultaneous connections and 2 million page views per hour.
  • Internet connection is a 1Gbps line of which 200Mbps is used.
  • 1 TB/day serving 171 million images through Akamai.
  • 6TB storage array to handle millions of full sized images being uploaded every month to the site.

    What's Inside

  • Revenue model has been to use Google ads. Match.com, in comparison, generates $300 million a year, primarily from subscriptions. POF's revenue model is about to change so it can capture more revenue from all those users. The plan is to hire more employees, hire sales people, and sell ads directly instead of relying solely on AdSense.
  • With 30 million page views a day you can make good money on advertising, even at a 5 - 10 cent CPM.
  • Akamai is used to serve 100 million plus image requests a day. If a page has 8 images and each takes 100 msecs, that's nearly a second of load time just for the images, so distributing the images makes sense.
  • Tens of millions of image requests are served directly from their servers, but the majority of these images are less than 2KB and are mostly cached in RAM.
  • Everything is dynamic. Nothing is static.
  • All outbound data is gzipped at a cost of only 30% CPU usage (a sketch of the idea follows this list). This implies a lot of processing power on those servers, but it really cuts bandwidth usage.
  • No caching functionality in ASP.NET is used. It is not used because as soon as the data is put in the cache it's already expired.
  • No built in components from ASP are used. Everything is written from scratch. Nothing is more complex than simple if-thens and for loops. Keep it simple.
  • Load balancing
    - IIS arbitrarily limits the total connections to 64,000, so a load balancer was added to handle the large number of simultaneous connections. Adding a second IP address and using round robin DNS was considered, but the load balancer provided better redundancy and allowed easier swap-in of more web servers. Using ServerIron also enabled advanced functionality like bot blocking and load balancing based on cookies, session data, and IP data.
    - The Windows Network Load Balancing (NLB) feature was not used because it doesn't do sticky sessions. A way around this would be to store session state in a database or in a shared file system.
    - 8-12 NLB servers can be put in a farm and there can be an unlimited number of farms. A DNS round-robin scheme can be used between farms. Such an architecture has been used to enable 70 front end web servers to support over 300,000 concurrent users.
    - NLB has an affinity option so a user always maps to a certain server, thus no external storage is used for session state and if the server fails the user loses their state and must relogin. If this state includes a shopping cart or other important data, this solution may be poor, but for a dating site it seems reasonable.
    - It was thought that the cost of storing and fetching session data in software was too expensive. Hardware load balancing is simpler. Just map users to specific servers and if a server fails have the user log in again.
    - The cost of a ServerIron was cheaper and simpler than using NLB. Many major sites use them for TCP connection pooling, automated bot detection, etc. ServerIron can do a lot more than load balancing and these features are attractive for the cost.
  • Has a big problem picking an ad server. Ad server firms want several hundred thousand a year plus they want multi-year contracts.
  • In the process of getting rid of ASP.NET repeaters and instead using string appends or Response.Write. If you are doing over a million page views a day, just write out the code that spits the markup to the screen.
  • Most of the build out costs went towards a SAN. Redundancy at any cost.
  • Growth was through word of mouth. Went nuts in Canada, spread to UK, Australia, and then to the US.
  • Database
    - One database is the main database.
    - Two databases are for search. Load balanced between search servers based on the type of search performed.
    - Monitors performance using task manager. When spikes show up he investigates. Problems were usually blocking in the database. It's always database issues. Rarely any problems in .net. Because POF doesn't use the .net library it's relatively easy to track down performance problems. When you are using many layers of frameworks finding out where problems are hiding is frustrating and hard.
    - If you call the database 20 times per page view you are screwed no matter what you do.
    - Separate database reads from writes (see the sketch after this list). If you don't have a lot of RAM and you do reads and writes, paging gets involved and can hang your system for seconds.
    - Try and make a read only database if you can.
    - Denormalize data. If you have to fetch stuff from 20 different tables try and make one table that is just used for reading.
    - Something that works today may stop working once your database doubles in size.
    - If you only do one thing in a system it will do it really really well. Just do writes and that's good. Just do reads and that's good. Mix them up and it messes things up. You run into locking and blocking issues.
    - If you are maxing the CPU you've either done something wrong or it's really really optimized. If you can fit the database in RAM do it.
  • The development process is: come up with an idea. Throw it up within 24 hours. It kind of half works. See what user response is by looking at what they actually do on the site. Do messages per user increase? Do session times increase? If people don't like it then take it down.
  • System failures are rare and short lived. Biggest issues are DNS issues where some ISP says POF doesn't exist anymore. But because the site is free, people accept a little down time. People often don't notice sites down because they think it's their problem.
  • Going from one million to 12 million users was a big jump. He could scale to 60 million users with two web servers.
  • Will often look at competitors for ideas for new features.
  • Will consider something like S3 when it becomes geographically load balanced.
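POF does its compression on IIS/ASP.NET; as a language-neutral illustration of the gzip point above (compress every outbound page, spending some CPU to save a lot of bandwidth), here is a minimal sketch using the JDK's built-in com.sun.net.httpserver. The handler and page content are hypothetical, and a real server would check the client's Accept-Encoding header first.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

// Minimal sketch: gzip all outbound pages, trading CPU for bandwidth.
// Assumes clients accept gzip; production code would negotiate this.
public class GzipPageServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            byte[] page = "<html><body>dynamic page</body></html>"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/html");
            exchange.getResponseHeaders().set("Content-Encoding", "gzip");
            exchange.sendResponseHeaders(200, 0); // 0 = length unknown, chunked
            try (OutputStream raw = exchange.getResponseBody();
                 GZIPOutputStream gz = new GZIPOutputStream(raw)) {
                gz.write(page); // compressed on the way out
            }
        });
        server.start();
    }
}
```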
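On the database side, here is a minimal sketch of the read/write split and the denormalized read-only table mentioned above, written as plain JDBC. POF actually runs SQL Server behind ASP.NET; every name here (connection URLs, tables, columns) is hypothetical and only illustrates the pattern of writing to one database while page views read a single pre-joined table from another.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch of separating reads from writes: a write master plus a read-only
// copy holding a denormalized table. All names are hypothetical.
public class ProfileStore {
    private static final String WRITE_URL = "jdbc:sqlserver://db-master;databaseName=pof";
    private static final String READ_URL  = "jdbc:sqlserver://db-read;databaseName=pof_read";

    // Writes always go to the master database.
    public void updateHeadline(long userId, String headline) throws SQLException {
        try (Connection c = DriverManager.getConnection(WRITE_URL);
             PreparedStatement ps = c.prepareStatement(
                 "UPDATE profiles SET headline = ? WHERE user_id = ?")) {
            ps.setString(1, headline);
            ps.setLong(2, userId);
            ps.executeUpdate();
        }
    }

    // Reads hit a denormalized, read-only table so a page view needs one
    // query instead of joining 20 tables.
    public String loadProfilePage(long userId) throws SQLException {
        try (Connection c = DriverManager.getConnection(READ_URL);
             PreparedStatement ps = c.prepareStatement(
                 "SELECT page_html FROM profile_read_cache WHERE user_id = ?")) {
            ps.setLong(1, userId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}
```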

    Lessons Learned

  • You don't need millions in funding, a sprawling infrastructure, and a building full of employees to create a world class website that handles a torrent of users while making good money. All you need is an idea that appeals to a lot of people, a site that takes off by word of mouth, and the experience and vision to build a site without falling into the typical traps of the trade. That's all you need :-)
  • Necessity is the mother of all change.
  • When you grow quickly, but not too quickly, you have a chance to grow, modify, and adapt.
  • RAM solves all problems. After that it's just growing using bigger machines.
  • When starting out keep everything as simple as possible. Nearly everyone gives this same advice and Markus makes a noticeable point of saying everything he does is just obvious common sense. But clearly what is simple isn't merely common sense. Creating simple things is the result of years of practical experience.
  • Keep database access fast and you have no issues.
  • A big reason POF can get away with so few people and so little equipment is they use a CDN for serving large heavily used content. Using a CDN may be the secret sauce in a lot of large websites. Markus thinks there isn't a single site in the top 100 that doesn’t use a CDN. Without a CDN he thinks load time in Australia would go to 3 or 4 seconds because of all the images.
  • Advertising on Facebook yielded poor results. Out of 2000 clicks only 1 signed up. With a CTR of 0.04% Facebook gets 0.4 clicks per 1000 ad impressions (worked out in the short calculation after this list). At a 5 cent CPM that's 12.5 cents a click, at 50 cents it's $1.25 a click, at $1.00 it's $2.50 a click, and at $15.00 it's $37.50 a click.
  • It's easy to sell a few million page views at high CPM’s. It's a LOT harder to sell billions of page views at high CPM’s, as shown by Myspace and Facebook.
  • The ad-supported model limits your revenues. You have to go to a paid model to grow larger. To generate 100 million a year as a free site is virtually impossible as you need too big a market.
  • Growing page views via Facebook for a dating site won't work. Having a visitor on your own site is much more profitable. Most of Facebook's page views are outside the US and you have to split the 5 cent CPMs with Facebook.
  • Co-req (co-registration) is a potentially large source of income. This is where your site's sign up offers to send the user more information about mortgages or some other product.
  • You can't always listen to user responses. Some users will always love new features and others will hate them. Only a fraction will complain. Instead, look at what features people are actually using by watching your site.
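The advertising arithmetic above is easy to check. This small sketch just recomputes the cost per click from the CTR and CPM figures quoted in the list; nothing beyond those numbers is assumed.

```java
// Cost per click = CPM / clicks-per-thousand-impressions.
// With a 0.04% CTR there are 0.4 clicks per 1000 impressions.
public class AdMath {
    public static void main(String[] args) {
        double clicksPerThousand = 0.0004 * 1000;   // 0.4 clicks per 1000 impressions
        double[] cpms = {0.05, 0.50, 1.00, 15.00};  // dollars per 1000 impressions
        for (double cpm : cpms) {
            double costPerClick = cpm / clicksPerThousand;
            System.out.printf("CPM $%.2f -> $%.3f per click%n", cpm, costPerClick);
        }
        // Prints 0.125, 1.250, 2.500, 37.500 -- matching the figures above.
    }
}
```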

    Related Articles

  • MySpace also uses Windows to run their site.
  • Markus Frind's posts on Webmaster World.
  • And the Money Comes Rolling In by Max Chafkin
  • How I started A Dating Empire by Markus Frind

    Thanks to Erik Osterman for recommending profiling PlentyOfFish.
    Wednesday, June 24, 2009

    Habits of Highly Scalable Web Applications 

    Nick Belhomme wrote up an excellent summary of a talk given by Eli White on building scalable web applications. Eli worked at digg.com and is now the PHP Community Manager & DevZone Editor-in-Chief at Zend Technologies. Eli takes us on a grand tour through various proven scaling strategies. On the trip you'll visit:

  • What is scalable application design
  • Tip 1: load balancing the webserver
  • Tip 2: scaling from a single DB server to a Master-Slave setup
  • Tip 3: Partitioning, Vertical DB Scaling
  • Tip 4: Partitioning, horizontal DB Scaling
  • Tip 5: Application Level Partitioning
  • Tip 6: Caching to get around your database
  • Resources
    Tuesday, June 23, 2009

    Learn How to Exploit Multiple Cores for Better Performance and Scalability

    InfoQ has this excellent talk by Brian Goetz on the new features being added to Java SE 7 that will allow programmers to fully exploit our massively multi-processor future. While the talk is about Java it's really more general than that and there's a lot to learn here for everyone.

    Brian starts with a short, coherent, and compelling explanation of why programmers can't expect to be saved by ever faster CPUs and why we must learn to exploit the strengths of multiple core computers to make our software go faster.

    Some techniques for exploiting multiple cores are given in an equally short, coherent, and compelling explanation: why divide and conquer is the secret to multi-core bliss, fork-join, how the Java approach differs from map-reduce, and lots of other juicy topics.

    The multi-core "problem" is only going to get worse. Tilera founder Anant Agarwal estimates by 2017 embedded processors could have 4,096 cores, server CPUs might have 512 cores and desktop chips could use 128 cores. Some disagree saying this is too optimistic, but Agarwal maintains the number of cores will double every 18 months.

    An abstract of the talk follows though I would highly recommend watching the whole thing. Brian does a great job.

    Why is Parallelism More Important Now?

  • Coarse grain concurrency was all the rage for Java 5. The hardware reality has changed. The number of cores is increasing so applications must now search for fine grain parallelism (fork-join)
  • As hardware gains more and more cores, software has to find more and more parallelism to keep that hardware busy.
  • Clock rates have been increasing exponentially over the last 30 years or so. That allowed programmers to be lazy because a faster processor would soon be released to save your butt. There wasn't a need to tune programs.
  • That wait-for-a-faster-processor game is up. Around 2003 clock rates stopped increasing. We hit the power wall: faster processors require more power, which requires thinner chip conductor lines, and the thinner lines can't dissipate the increased power without overheating, which affects the resistance characteristics of the conductors. So you can't keep increasing the clock rate.
  • Fastest Intel CPU 4 or 5 years ago was 3.2 Ghz. Today it's about the same or even slower.
  • It's easier to build 2.6 GHz or 2.8 GHz chips. Moore's law wasn't repealed, so we can still cram more transistors onto each wafer. That extra transistor budget goes into putting more and more processing cores on a chip. This is multicore.
  • Multicore systems are the trend. The number of cores will grow at an exponential rate for the next 10 years: 4 cores at the low end, with 256 core (Sun) and 800 core (Azul) systems at the high end.
  • More cores per chip instead of faster chips. Moore's law has been redirected to multicore.
  • The problem is it's harder to make a program go faster on a multicore system. A faster chip will run your program faster, but if you have 100 cores your program won't go faster unless you explicitly design it to take advantage of those cores.
  • No free lunch anymore. You must now be able to partition your program so it can run faster by running on multiple cores, and you must be able to keep doing that as the number of cores keeps growing.
  • We need a way to specify programs so they can be made parallel as topologies change by adding more cores.
  • As hardware evolves, platforms must evolve to take advantage of the new hardware. We started off with coarse grain tasks, which was sufficient given the number of cores, but this approach won't work as the number of cores increases.
  • Must find finer-grained parallelism. Example: sorting and searching data. The opportunities are around the data. For sorting, the data can be chunked, each chunk sorted, and the results brought together with a merge. Searching can be done in parallel by searching subregions of the data and merging the results (see the sketch after this list).
  • Parallel solutions use more CPU in aggregate because of the coordination needed and that data needs to be handled more than once (merge). But the result is faster because it's done in parallel. This adds business value. Faster is better for humans.
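To make the "search subregions and merge the results" idea concrete, here is a minimal sketch using a plain thread pool, the coarser fixed-partition style the talk starts from. It counts matches in chunks of an array and merges the partial counts; all names and the chunking scheme are illustrative, not from the talk.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Split the data into chunks, search each chunk on its own thread,
// then merge the partial results.
public class ParallelSearch {
    public static long countMatches(int[] data, int target, int chunks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(chunks);
        int chunkSize = (data.length + chunks - 1) / chunks;
        List<Future<Long>> partials = new ArrayList<>();
        for (int i = 0; i < data.length; i += chunkSize) {
            final int from = i;
            final int to = Math.min(i + chunkSize, data.length);
            partials.add(pool.submit((Callable<Long>) () -> {
                long count = 0;
                for (int j = from; j < to; j++) {
                    if (data[j] == target) count++;   // search this subregion
                }
                return count;
            }));
        }
        long total = 0;
        for (Future<Long> f : partials) total += f.get(); // merge step
        pool.shutdown();
        return total;
    }
}
```

Note the aggregate cost: the chunking and merge add work, but the wall-clock time drops because the subregions are searched at the same time.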

    What has Java 7 Added to Support Parallelism?

  • Example problem is to find the max number from a list.
  • The coarse grained threading approach is to use a thread pool, divide up the numbers, and let the task pool compute the sub problems. A shared task pool slows down as the number of tasks increases, which forces the work to be more coarse grained. There's no way to load balance, the code is ugly, and it doesn't match the problem well. The runtime is dominated by how long the longest subtask takes, and you have to decide up front how many pieces to divide the problem into.
  • Solution using divide and conquer: divide the set into pieces recursively until the problem is so small the sequential solution is more efficient, solve the pieces, then merge the results. O(n log n), but the problem is parallelizable. Scales well and can keep many CPUs busy.
  • Divide and conquer uses fork-join to fork off subtasks, wait for them to complete, and then join the results (a sketch follows this list). A typical thread pool solution is not efficient: it creates too many threads, and creating threads is expensive and uses a lot of memory.
  • This approach is portable because it's abstract. It doesn't know how many processors are available. It's independent of the topology.
  • The fork-join pool is optimized for fine grained operations whereas the thread pool is optimized for coarse grained operations. It's best used for problems without IO, just CPU-bound computations that tend to fork off sub problems. It allows data to be shared read-only and used across different computations without copying.
  • This approach scales nearly linearly with the number of hardware threads.
  • The goals for fork-join: avoid context switches; have as many threads as hardware threads and keep them all busy; minimize queue lock contention for data structures; avoid a common task queue.
  • The implementation uses work-stealing. Each thread has a work queue that is a double ended queue. Each thread pulls work from the head of its own queue and processes it. When there's nothing to do it steals work from the tail of another queue. There's no contention on the head because only one thread accesses it, and rare contention on the tail because stealing is infrequent and the stolen work is large, which takes time to process. The process starts with one task, which breaks up the work; other threads steal work and continue the same process. It load balances without central coordination, with few context switches and little coordination.
  • The same approach also works for graph traversal, matrix operations, linear algebra, modeling, and generating moves and evaluating the results. Latent parallelism can be found in a lot of places once you start looking.
  • Supports higher level operations like ParallelArray. You can specify filtering, transformation, and aggregation options. It's not a generalized in-memory database, but it has a very transparent cost model: it's clear how many parallel operations are happening, so you can look at the code and quickly know what's a parallel operation and what it costs.
  • Looks like map reduce, except this scales across a multicore system within a single JVM, whereas map reduce scales across a cluster. The strategy is the same: divide and conquer.
  • Idea is to make specifying parallel operations so easy you wouldn't even think of the serial approach.
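A minimal sketch of the divide and conquer, fork-join approach for the find-the-max example, written against the fork-join API as it eventually shipped in java.util.concurrent in Java SE 7 (at the time of the talk it lived in the jsr166y preview package). The threshold value is an arbitrary illustration, not a recommendation.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Divide and conquer with fork-join: recursively split the array until a
// piece is small enough to scan sequentially, then join the partial maxima.
public class MaxTask extends RecursiveTask<Integer> {
    private static final int THRESHOLD = 10_000; // arbitrary cut-off for the sequential case
    private final int[] data;
    private final int from, to;

    public MaxTask(int[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Integer compute() {
        if (to - from <= THRESHOLD) {             // small enough: solve sequentially
            int max = Integer.MIN_VALUE;
            for (int i = from; i < to; i++) max = Math.max(max, data[i]);
            return max;
        }
        int mid = (from + to) >>> 1;
        MaxTask left = new MaxTask(data, from, mid);
        MaxTask right = new MaxTask(data, mid, to);
        left.fork();                              // run the left half asynchronously
        int rightMax = right.compute();           // work on the right half ourselves
        return Math.max(rightMax, left.join());   // merge the two partial results
    }

    public static void main(String[] args) {
        int[] numbers = new java.util.Random(42).ints(5_000_000).toArray();
        int max = new ForkJoinPool().invoke(new MaxTask(numbers, 0, numbers.length));
        System.out.println("max = " + max);
    }
}
```

Because the recursion keeps generating subtasks, idle worker threads can steal work from busy ones, which is what lets this scale with the number of hardware threads without deciding the partitioning up front.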

    Related Articles

  • The Free Lunch Is Over - A Fundamental Turn Toward Concurrency in Software By Herb Sutter
  • Intuition, Performance, and Scale by Dan Pritchett
  • "Multi-core Mania": A Rebuttal by Ted Neward
  • CPU designers debate multi-core future by Rick Merritt
  • Multicore puts screws to parallel-programming models by Rick Merritt
  • Challenges in Multi-Core Era – Part 1 and Part 2 by Gaston Hillar.
  • Running multiple processes to understand multicore CPUs power.
  • Learning to Program all Over Again by Vineet Gupta
    Monday, June 22, 2009

    Improving performance and scalability with DDD

    Distributed systems are not typically a place domain driven design is applied. Distributed processing projects often start with an overall architecture vision and the idea about a processing model which basically drives the whole thing, including object design if it exists at all. Elaborate object designs are thought of as something that just gets in the way of distribution and performance, so the idea of spending time to apply DDD principles gets rejected in favour of raw throughput and processing power. However, from my experience, some more advanced DDD concepts can significantly improve the performance, scalability and throughput of distributed systems when applied correctly.

    This article is a summary of the presentation titled "DDD in a distributed world" from the DDD Exchange 09 in London.

    Saturday, June 20, 2009

    Building a data cycle at LinkedIn with Hadoop and Project Voldemort

    Update: Building Voldemort read-only stores with Hadoop.

    A write up on what LinkedIn is doing to integrate large offline Hadoop data processing jobs with a fast, distributed online key-value storage system, Project Voldemort.

    Friday, June 19, 2009

    GemFire 6.0: New innovations in data management

    GemStone has unveiled GemFire 6.0 which is the culmination of several years of development and the continuous solving of the hardest data management problems in the world. With this release GemFire touts some of the latest innovative features in data management.

    In this release:

    - GemFire introduces a resource manager to continuously monitor and protect cache instances from running out of memory, triggering rebalancing to migrate data to less loaded nodes or allowing a dynamic increase/decrease in the number of nodes hosting data, for linear scalability without impeding ongoing operations (no contention points).

    - GemFire provides explicit control over when rebalancing can be triggered, on what class of data and even allows the administrator to simulate a "rebalance" operation to quantify the benefits before actually doing it.

    - With built in instrumentation that captures throughput and latency metrics, GemFire now enables applications to sense changing performance patterns and proactively provision extra resources and trigger rebalancing. The end result is predictable data access throughput and latency without the need to overprovision capacity.

    - We continue down the path of making the product more resilient than ever before - dealing with complex membership issues when operating in large clusters and allowing thresholds to be set in terms of consumption of memory in any server JVM that significantly reduces the probability of "stop the world" garbage collection cycles.

    - Advanced Data Partitioning: Applications are no longer restricted by the memory available across the cluster to manage partitioned data. Applications can pool available memory as well as disk and stripe the data across memory and disk across the cluster. When the data fabric is configured as a cache, partitioned data can be expired or evicted so that only the most frequently used data is managed.

    - Data-aware application behavior routing: There are several extensions added to the GemFire data-aware function execution service - a simple grid programming model that allows the application to synchronously or asynchronously execute application behavior on the data nodes. Applications invoke functions hinting at the data they depend on, and the service parallelizes the execution of the application function on all the grid nodes where that data is being managed. Applications can now define relationships between different classes of data to colocate all related data sets, and application functions routed to the data nodes can execute complex queries on in-process data. These and other features offered in the 'Function execution service' offer linear scalability for compute and data intensive applications. Simply add more nodes when demand spikes to rebalance data and behavior and increase the overall throughput for your application.

    - API additions for C++ and C#: support for continuous querying, client side connection pooling, dynamic load balancing, and the ability to invoke server side functions.

    - Cost based query optimization: A new compact index to conserve memory utilization and an enhanced query processor design with cost-based optimization have been introduced as part of this release.

    - Developer productivity tools: It can be daunting when developers have to quickly develop and test their clustered application. Developers need the capability to browse the distributed data using ad-hoc queries, apply corrections, or monitor resource utilization and performance metrics. A new graphical data browser permits browsing and editing of data across the entire cluster, execution of ad-hoc queries, and even the creation of real-time table views that are continuously kept up-to-date through continuous queries. The GemFire Monitor tool (GFMon) also has several enhancements making the tool much more developer friendly.

    For more information on GemFire, view our newly rewritten technical white paper at:
    http://community.gemstone.com/download/attachments/4752318/GemFire+Data+Fabric+-+Technical+White+paper.pdf?version=1

    Monday, June 15, 2009

    Large-scale Graph Computing at Google

    To continue the graph theme Google has got into the act and released information on Pregel. Pregel does not appear to be a new type of potato chip. Pregel is instead a scalable infrastructure...

    ...to mine a wide range of graphs. In Pregel, programs are expressed as a sequence of iterations. In each iteration, a vertex can, independently of other vertices, receive messages sent to it in the previous iteration, send messages to other vertices, modify its own and its outgoing edges' states, and mutate the graph's topology.

    Currently, Pregel scales to billions of vertices and edges, but this limit will keep expanding. Pregel's applicability is harder to quantify, but so far we haven't come across a type of graph or a practical graph computing problem which is not solvable with Pregel. It computes over large graphs much faster than alternatives, and the application programming interface is easy to use. Implementing PageRank, for example, takes only about 15 lines of code. Developers of dozens of Pregel applications within Google have found that "thinking like a vertex," which is the essence of programming in Pregel, is intuitive.
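    Pregel's API is not public, so the following is purely a hypothetical sketch of what "thinking like a vertex" might look like, written with invented interface and method names; it is not Google's API. The point is the shape of the computation: per-vertex code plus message passing between supersteps. In each superstep a vertex sums the rank messages it received, updates its own rank, and sends its share along its outgoing edges.

```java
// Hypothetical, invented API -- Pregel itself is not public.
interface Context {
    long superstep();                    // which iteration we are on
    int numVertices();
    int numOutEdges();
    void sendToNeighbors(double value);  // delivered in the next superstep
    void voteToHalt();                   // stop when every vertex has voted
}

class PageRankVertex {
    private double rank;

    // Called once per superstep with the messages received from the previous one.
    void compute(Iterable<Double> messages, Context ctx) {
        if (ctx.superstep() == 0) {
            rank = 1.0 / ctx.numVertices();
        } else {
            double sum = 0;
            for (double m : messages) sum += m;
            rank = 0.15 / ctx.numVertices() + 0.85 * sum;
        }
        if (ctx.superstep() < 30) {
            ctx.sendToNeighbors(rank / ctx.numOutEdges());
        } else {
            ctx.voteToHalt();
        }
    }
}
```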

    Pregel does not appear to be publicly available, so it's not clear what the purpose of the announcement could be. Maybe it will be a new gmail extension :-)

    Monday, June 15, 2009

    starting small with growth in mind

    Hello all,

    I'm working on a web site that might totally flop or it might explode to be the next facebook/flickr/digg/etc. Since I really don't know how popular the site will be I don't want to spend a ton of money on the hardware/hosting right away but I want to be able to scale it easily if it does grow rapidly. With this in mind, what would be the best approach to launch the site?

    Thanks,
    Dan

    Sunday, June 14, 2009

    Cloud & Grid Event by the Online Gaming High Scalability SIG

    The first meeting of the Online Gaming High Scalability SIG will be on the 9th of July 2009 in central London, starting at 10 AM and finishing around 5 PM.

    The main topic of this meeting will be the potential for using cloud and grid technologies in online gaming systems. In addition to experience reports from the community, we have invited some of the leading cloud experts in the UK to discuss the benefits (such as resource elasticity) and challenges (such as storage and security) that companies from other industries have experienced. We will have a track for IT managers focused on business opportunities and issues, and a track for architects and developers more focused on implementation issues.

    The event is free but up-front registration is required for capacity planning, so please let us know in advance if you are planning to attend by completing the registration form on this page.

    To propose a talk or for programme enquiries, contact meetings [at] gamingscalability [dot] org.

    Note: The event is planned to finish around 5 PM so that people can make their way to Victoria in time for CloudCamp London. CloudCamp is a meeting of the cloud computing community with short talks; it is also free, but you will have to register for it separately.

    PROGRAMME: http://skillsmatter.com/event/cloud-grid/online-gaming-high-scalability-sig/wd-99

    Sunday, June 14, 2009

    Kngine 'Knowledge Engine' Milestone 2

    Kngine is a knowledge web search engine designed to provide meaningful search results, such as: semantic information about keywords/concepts, answers to the user's questions, the relations between keywords/concepts, and links between different kinds of data, such as movies, subtitles, photos, prices at sale stores, user reviews, and influenced stories.

    Goals

    Kngine's long-term goal is to make all of humanity's systematic knowledge and experience accessible to everyone. I aim to collect and organize all objective data and make it possible and easy to access. Our goal is to build on the advances in web search engines, the semantic web, and data representation technologies to create a new form of web search engine that will unleash a revolution of new possibilities.

    Kngine tries to combine the power of web search engines with the power of semantic search and data representation to provide meaningful search results that meet user needs.

    Status

    Kngine started as a research project in October 2008. Over time I have succeeded in collecting, representing, and indexing a lot of humanity's systematic knowledge, but it is just the start. As of now, Kngine contains 500+ million pieces of data for 4,000+ domains. Kngine's knowledge base and capabilities already span a great number of domains, such as:

    • 60,000+ Companies
    • 700,000+ Movies
    • 750,000+ People
    • 400,000+ Locations
    • 115,000+ Books
    • About 5,000,000 concepts.

    Future

    Kngine, as it exists today, is just the beginning. I have both short- and long-term plans to dramatically expand all aspects of Kngine: its quality, the breadth and depth of our data, and more.

    I just released Kngine Milestone 2 (our first public release). Soon a preview of a section called ‘Labs’ will be presented, which will include a set of new research projects and technologies for accessing the knowledge.

    Milestone 2

    Milestone 2 is the first public release. This release includes some useful features that help users reach what they want directly, such as:

    • Smart Information
    • Answer your questions
    • Link the data, and view direct data

    For more information about Milestone 2, go there.

    Check this out:

    Hash-Table, The dark knight, When did Toronto-Dominion Bank Tower opened
    profession of Alexander the Great, the director of up movie, I 'm Alive lyrics