Sunday, January 31, 2016

Back to Windows (Minimalist Edition)

Nearly two years later, I have decided to revisit my Back to Windows blog post. Here is a new, minimalist list:

Tuesday, December 15, 2015

2015 Reading List

In the spirit of Mark Zuckerberg's "A Year of Books" initiative, I thought I would share some of the books I have read this year.

Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future by Ashlee Vance

Ashlee Vance's well-researched biography of Elon Musk makes a Musk fan respect him even more. If you do not have a chance to read the book, at least read this excerpt.

Flash Boys: A Wall Street Revolt by Michael Lewis

There is a fair bit of financial jargon in this book, and it can be tough to follow for those unacquainted with the stock market. Nevertheless, Lewis does a good job explaining the winner-take-all, competitive culture of Wall Street and the absurdity of high-frequency trading.

The Boys in the Boat: Nine Americans and Their Epic Quest for Gold at the 1936 Berlin Olympics by Daniel James Brown

The Boys in the Boat is a biographical account of the University of Washington rowing team and its improbable journey from beating Cal, to winning the Poughkeepsie Regatta, to representing the country at the 1936 Berlin Olympics. The book is superbly written for the man on the street -- one does not need any specialized knowledge of rowing to appreciate this uniquely American story of hard work and determination.

The Idea Factory: Bell Labs and the Great Age of American Innovation by Jon Gertner

From the 1930s to the late 1990s, Bell Labs was the center of American innovation. What started out as a research lab to preserve Ma Bell's monopoly gave way to numerous technological innovations that changed the course of human history. Gertner argues that the modern idea of "innovation" was born at Bell Labs. Mervin Kelly's leadership and vision created a defining environment -- an idea factory -- where new ideas and discoveries were facilitated by deliberate process rather than chance. Naturally, the book devotes a good section to the discovery of the transistor. I was extremely delighted that a full chapter was devoted to Claude Shannon, a man who founded the entire field of information and communication theory but is often unknown to the general public.

Marissa Mayer and the Fight to Save Yahoo! by Nicholas Carlson

A well-written biography on Marissa Mayer and the history of Yahoo!. At a time when the tenure of Marissa Mayer hangs in the balance, this book might be relevant again.

The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century by Steven Pinker

The Sense of Style presents a contemporary view on writing and attempts to explain the beauty and joy of writing creatively. It is an alternative to the plain, unimaginative style espoused in Strunk and White's The Elements of Style.

Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers by Geoffrey Moore

This is a must-read for anyone attempting to market an innovative new product. Moore argues that the smooth Technology Adoption Lifecycle is an illusion, and proposes a revised model with gaps (or chasms) between adjacent market groups.

Mindset: The New Psychology of Success by Carol Dweck

Everyone from Bill Gates to Satya Nadella has been talking about the importance of having a growth mindset. Having a growth mindset is perhaps easier said than done, but Dweck's book is accessible for all ages and worth the read.

Business Adventures: Twelve Classic Tales from the World of Wall Street by John Brooks

Personally, I found this book too meticulous and long-winded for my taste, but the stories are interesting. There has to be a reason Gates calls this book the best business book he has ever read.

Friday, December 26, 2014

2014 Reading List

It's that time of the year when I get away from hacking and instead spend quality time on a Think Week (more of a Read Week) of sorts.

Book Reviews

This is a compilation of some of the books I have been reading this year.

Mature Optimization Handbook (pdf) by Carlos Bueno

Bueno draws on his 19 years of software engineering experience to write this handbook on approaching performance optimization within the framework of a telemetry-based feedback system (aka Scuba) from the perspective of Facebook. The handbook provides fascinating insights into what pieces should constitute a modern telemetry stack for effective performance optimization and monitoring. Bueno left Facebook for MemSQL after the publication of this handbook.

The Hard Thing about Hard Things by Ben Horowitz

This is a must-read for any serious entrepreneur. The first half of the book focuses on Loudcloud and Opsware, and how Horowitz led the company through some really tough times with sheer determination and a ton of luck. The second half of the book is more broadly relevant: Horowitz dispenses advice on everything from management to leadership.

Zero to One: Notes on Startups, or How to Build the Future by Peter Thiel and Blake Masters

This book is supposed to be a summary and extension of the original CS183 course that Thiel taught at Stanford and some of his talks. I felt this book was over-hyped and was disappointed by Thiel's generalizations. It took me a while to really understand the crux of Thiel's thinking: he starts from a contrarian philosophy, postulating a non-conventional or seemingly incongruent viewpoint and rationalizing it to explain worldly and human behavior within the confines of a specific domain. Such a methodology requires one to be fairly comfortable thinking from first principles and flexible enough to avoid the pitfalls of framework-based thinking.

Marc Andreessen pretty much sums up the way to approach Thiel's writings: believe "exactly half" of whatever Thiel says [Fortune article] (i.e., take it with a pinch of salt). That said, Thiel does have some keen and valid observations, such as his theory on mafias.

Blake Masters's original CS183 notes made for more interesting reading.

The Everything Store: Jeff Bezos and the Age of Amazon by Brad Stone

Brad Stone put together impressive research for this biography of Jeff Bezos and Amazon. Reading it helped me understand the growth of Amazon, its many flops, successes and business secrets, and the almost Jobsian personality of Bezos.

How Google Works by Eric Schmidt and Jonathan Rosenberg

To me, this book felt like an attempt by Eric Schmidt to cement his achievements as CEO of Google and to take some credit for Google's operating and management methods. However, the book did have some compelling and insightful ideas (which might work at a cash-rich Google, but not elsewhere). I found Schmidt's definition of "smart creatives" enlightening and relevant in the context of the contemporary technology company. A fairly accurate TL;DR of the book is on Slideshare.

World Order by Henry Kissinger

This is a pretty dense book written in classic Kissinger style with great insights. It is my second Kissinger book after On China (which some say is his seminal piece). I am a newbie to International Relations, so I am taking this slowly.

How to Start a Startup (Stanford CS183b) by Sam Altman

This isn't exactly a book, but an online course I took. I have mixed reactions to it -- some lectures were structured and valuable, while others were unstructured Q&As whose questions tended to be off-tangent or boring. The Q&A format usually leads to impromptu responses from the speakers, which sometimes lack insight. My favorite lectures were Growth, Building for the Enterprise, and How to Operate.

Y Combinator is perhaps one of the few seed accelerators to have taken a distinctive approach to mentoring startups. Paul Graham and his team are able to break down the process of starting a startup into components that can be easily understood, and to engineer success by adhering to a set of guiding methodologies backed by real historical data compiled over a decade of the YC program. The CS183b course and the continued success and recognition of the YC program are a result of that unique effort.

Bonus: A Brave New World In Which Men Ruled by Jodi Kantor

This is a NYT interactive article that explores gender inequality from the perspective of Stanford's graduating class of 1994, which entered society 20 years ago, before the Internet came of age. It features soundbites and (contrarian) viewpoints from the PayPal Mafia. The article does not explore gender inequality in great detail; rather, it uses Stanford alumni (as actors) and Silicon Valley (as the stage) to tell an intriguing story of careers and lives.

Lastly, wishing everyone a Merry Christmas and a Happy 2015!

Monday, April 28, 2014

Common Allergies and Medications

A summary of common medication for treating allergies grouped by their purpose:

Expectorant
Active ingredient: Guaifenesin (400-600 mg)
Thins and loosens mucus, making it easier to cough out. [1]

(Nasal) Decongestant
Active ingredient: Phenylephrine (10 mg), Pseudoephedrine (part of Decondine)
Relieves nasal congestion in the upper respiratory tract by constricting blood vessels and reducing the blood supply to nasal mucous membranes. This reduces nasal congestion, stuffiness, and runny noses. [1]

Antihistamine
Active ingredient: Loratadine (10 mg), Triprolidine (part of Decondine)
Reduces or blocks histamines. Histamine is an organic compound produced by local immune responses; it results in a runny nose and sneezing. [1] [2] [3]

Cough Suppressant
Active ingredient: Dextromethorphan (20 mg)
Used for temporary cough relief; suppresses the cough reflex. [1]

Pain Reliever
Active ingredient: Acetaminophen (650 mg)
Main purpose is to treat headaches and minor body pain. Also reduces fever.

Note: This blog post is not an accurate source of medical information. Consult a doctor if you have a medical concern.

Sunday, April 27, 2014

ASP.NET MVC Notes

I have been trying out ASP.NET MVC recently. Naturally, I hit a few roadblocks and found some solutions.

NuGet Packages
Make sure you update your NuGet packages after initializing the project.

Logging
I settled on using NLog. Darren has an entire series of blog posts on MVC logging.

Custom Model Binding and Validation
This is a great introduction, but I settled on the solution on Stack Overflow. Always prefer DefaultModelBinder over implementing IModelBinder directly.

Passing Multiple Models into a View
Again, a great article on SO. There are two main methods I considered: PartialViewResults and passing a Tuple. The tuple method is very much a hack, and causes a bunch of issues if you need to display validation messages from the ModelState.

Lowercase URLs
Article. Routing is ugly. Do the web a favor and set routes.LowercaseUrls = true;

Custom Error Pages
After many hours of trying out different solutions, I am still stuck on this one.

Finally, stuff I liked:
- Script bundling and compression out of the box
- CSRF token and validation
- Model validation using data annotation attributes
- Nuget packages: MVC HTML5 Toolkit

Monday, April 14, 2014

Robert Shiller's Financial Markets

I enjoyed Robert Shiller's Coursera class on Financial Markets. Shiller gives his students a broad overview of the principles behind finance, touching on a wide range of topics in contemporary finance such as stock markets, monetary policy and behavioral finance.

His course is more philosophical than technical, the pace is easy, and it is well suited for the man on the street. His weekly introductions set the learning expectations and, as a bonus, give us a window into the beautiful Yale campus. Shiller is a 2013 winner of the Nobel Prize in Economics, and he puts his stature to good use, inviting eminent weekly guest speakers such as Maurice Greenberg, Larry Summers and Carl Icahn.

Shiller is a keen proponent of finance, and he tries to dispel conventional myths popularized by the Occupy Wall Street movement. He argues that finance is an instrument of good, and that greed and selfishness are problems with society, not finance. The course gave me a more appreciative understanding of finance, and convinced me that finance is a creative invention for goodness and opportunity.

Monday, April 07, 2014

.NET and the Unification of Languages

I was looking back at Steve Yegge's EE380 GROK talk. Steve is, as usual, both entertaining and provocative.

It is difficult to describe GROK, but at a high level it is a compiler toolchain. As polyglot programmers, we have used different IDEs and editors for different languages, but there is no one text editor or IDE to rule them all -- simply because each language has its own idiosyncrasies. Grok tries to solve the toolchain parity problem:
My project is accomplishing this lofty and almost insanely ambitious goal through the (A) normative, language-neutral, cross-language definitions of, and (B) subsequent standardization of, several distinct parts of the toolchain: (I) compiler and interpreter Intermediate Representations and metadata, (II) editor-client-to-server protocols, (III) source code indexing, analysis and query languages, and (IV) fine-grained dependency specifications at the level of build systems, source files, and code symbols.
-- Steve Yegge, Notes from the Mystery Machine Bus
It is interesting to note that in the Microsoft .NET world, the toolchain problem is less pervasive. In .NET, all languages share a common library (the .NET Framework), compile to one CIL (Common Intermediate Language) standard, and run on a single CLR (Common Language Runtime), and development is driven by a single IDE, Visual Studio. This unification of languages into a standard toolchain showcases the beauty of .NET.

In other news, the .NET Foundation has open-sourced a significant part of the .NET platform, including the .NET Compiler Platform (Roslyn). (Language differences are still tricky, however -- Roslyn provides two distinct compiler APIs for C# and VB.) Microsoft also released CTP3 (Community Technology Preview 3) of RyuJIT (a .NET JIT compiler).

Sunday, April 06, 2014

Back to Windows

I recently switched back to Windows 8.1 after several years on *nix-based operating systems (Ubuntu, OS X). Windows 8.1 has some "design flaws"; here are my recommended installs to fix them:

  • Google Chrome (64 bit)
  • Cygwin (64 bit)
  • Clover 3: multi-tab functionality in Windows Explorer
  • Pokki: brings back the start menu on Windows 8
  • Launchy: nothing close to Alfred for Mac, but sometimes does the job
  • f.lux
  • Spotify
  • Notepad++, Sublime Text, GVim
  • Git with Git Bash
  • Java
  • Adobe Reader
PS: First blog post in quite a while, maybe I am back from my blogging hiatus!

Saturday, August 25, 2012

Down the Rabbit Hole with Kafka


Kafka is a "distributed publish-subscribe messaging system". In this post, I will discuss the nitty-gritty details of how the producer and consumer mechanisms work in Kafka 0.7.1.

In a typical setup, you have a single ZooKeeper instance and a cluster of Kafka servers (e.g. 3) on a high-numbered RAID array. Here, we strive to understand the dynamic perspective of a Kafka setup.

Starting Up Kafka
  • For each server, a unique brokerid is specified in the broker's properties file. This serves to uniquely identify a broker while allowing the broker to change host or port.
  • By default, when a Kafka broker starts up, it registers itself with ZooKeeper (unless enable.zookeeper is set to false in the properties file). The ephemeral ZK znode /brokers/ids/ contains the (host, port) 2-tuple.
  • Kafka will then attempt to read the topics and load the logs from log.dir (which defaults to /tmp/kafka-logs if you use the provided properties file). If the log.dir path does not exist, a new log directory is created.
  • Note that at this point, no topics or partitions are created. Topics and partitions are only created when a producer registers with the broker. The number of partitions is specified via num.partitions, and this applies to each topic on that particular server. If you wish to specify partitions on a per-topic basis, you can override the default number of partitions in the properties file. Once again, this is server-specific.
  • The Kafka broker is registered with ZK once the LogManager is instantiated. All existing topics and partitions are also registered with ZK.
  • A folder is created in log.dir for each combination of topic+partition. For instance, if you have a topic "test-topic" with 3 partitions, then you will have the following folders: "test-topic-0", "test-topic-1", "test-topic-2".
  • What is stored in ZooKeeper regarding your Kafka broker?
    Basically the mapping from broker id to (host, port) and the mapping from (topic, broker id) to number of partitions.
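The topic+partition folder naming described above can be sketched as follows. This is an illustrative helper, not Kafka source code; the class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not Kafka source) of the directory naming described
// above: one folder per topic+partition combination, e.g. "test-topic-0".
public class LogDirNames {
    static List<String> logDirs(String topic, int numPartitions) {
        List<String> dirs = new ArrayList<>();
        for (int p = 0; p < numPartitions; p++) {
            dirs.add(topic + "-" + p); // folder name is "<topic>-<partition>"
        }
        return dirs;
    }

    public static void main(String[] args) {
        // prints [test-topic-0, test-topic-1, test-topic-2]
        System.out.println(logDirs("test-topic", 3));
    }
}
```

For the "test-topic" example with 3 partitions, this produces exactly the three folders listed above.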

Kafka Producer

When a new Producer is instantiated, it looks at either zk.connect (for automatic broker discovery) or broker.list (for static list of kafka brokers defined by the 3-tuple (brokerId, host, port)).

Internally, the producer client keeps a local copy of the list of brokers and their number of partitions. If you are using zookeeper, then this copy changes over time when brokers are added or dropped.

Assume that you have the following code:

ProducerData data = new ProducerData("test-topic", "test-message");

Which partition/broker does the message go to? It depends. The request gets funneled into a send method in kafka.producer, which routes the request to a different function depending on whether you have zookeeper enabled.
  • If you go with the zookeeper option...
  • the Producer retrieves the broker list from ZooKeeper and maintains a pool of connections to the brokers, one per broker.
  • In zkSend(), the topicPartitionsList is fetched for the specified topic "test-topic" via a call to getPartitionListForTopic(). This returns a Scala sequence of (brokerId, partitionId) pairs. For instance, if we have two brokers with 3 and 4 partitions respectively, then getPartitionListForTopic may return Seq( (0,0),(0,1),(0,2), (1,0),(1,1),(1,2),(1,3) ). This result is sorted in ascending order, with brokerId as the primary key and partitionId as the secondary key.
  • The length of that sequence is assigned to totalNumPartitions
  • Now, we want to pick a partitionId in the range [0, N-1], where N is totalNumPartitions
    • If the semantic key is specified in the ProducerData, i.e.:
      new ProducerData("test-topic", "test-key", "test-message"),
      • If partitioner.class was specified: then the partition(key, numPartitions) method is called which returns the required partitionId.
      • Else the partitioner.class was not specified and the default partitioner class (kafka.producer.DefaultPartitioner) is used. This returns the equivalent of
        math.abs(key.hashCode) % numPartitions
    • Otherwise, partitionId = random.nextInt(totalNumPartitions)
  • Using this partitionId, we can find which broker that partition belongs to by looking up the sequence using the partitionId as the index.
  • Now that we have the broker information, the client proceeds to send the message over the wire.
  • The process of finding a broker+partition to send the message to repeats up to zkReadRetries times, and each trial other than the first re-reads information from ZK.
  • If you go with the static broker list option...
  • getPartitionListForTopic() is called which returns a sequence as described earlier.
  • Now we have: partitionId = random.nextInt(totalNumPartitions)
  • Using this partitionId, we can retrieve the broker information by looking up the sequence using partitionId as index.
  • Now that we have the broker information, the client proceeds to send the message over the wire.
Note that for async producers, messages are simply batched before sending, and batch.size and queue.time provide SLA guarantees for message delivery.
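The partition selection described above can be sketched as follows. This is an illustrative approximation, not the Kafka source; the class and method names are hypothetical. forKey mirrors the default partitioner's math.abs(key.hashCode) % numPartitions, and forNoKey mirrors the random fallback used when no semantic key is given:

```java
import java.util.Random;

// Sketch of the producer's partition choice (not Kafka source).
// Note: Math.abs(Integer.MIN_VALUE) is itself negative, an edge case
// this naive hash-based formulation shares.
public class PartitionChoice {
    private static final Random random = new Random();

    // Keyed message: deterministic partition derived from the key's hash.
    static int forKey(Object key, int totalNumPartitions) {
        return Math.abs(key.hashCode()) % totalNumPartitions;
    }

    // Unkeyed message: uniformly random partition in [0, totalNumPartitions).
    static int forNoKey(int totalNumPartitions) {
        return random.nextInt(totalNumPartitions);
    }
}
```

Because forKey is deterministic, all messages with the same semantic key land on the same partition, which is what makes keyed ordering possible.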

Kafka Consumer

There are two consumer APIs you should be aware of: the high-level API (aka ConsumerConnector) and the low-level API (SimpleConsumer). The big difference is that the high-level API does broker discovery and consumer rebalancing, and keeps track of state (i.e. offsets) in ZooKeeper, while the low-level API does not.

If you have a consumer that needs to do fancy stuff such as replaying from specific offsets (e.g. a Storm spout or a Hadoop job which may fail), then that consumer needs to keep track of state manually, and so you should use the low-level API.

Kafka Consumer High Level API

The high-level API stores state in ZooKeeper and groups consumers together for load balancing using a unique group_id provided by the client. To simplify our understanding of state management in ZK, we can think of the znodes as a hash table storing the following information:

key :: value
owners(group_id, topic, broker_id, partition_id) :: consumer_node_id
offsets(group_id, topic, broker_id, partition_id) :: offset counter value
consumer(group_id, consumer_id) :: map({topic, num of streams})

The consumer_id is a 2-tuple of the form (hostname, uuid). This allows for threaded consumers on a single host. The owners(...) key acts as a lock and simplifies offset management by ensuring that no more than one consumer is reading from the same combination of (group_id, topic, broker_id, partition_id).
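The lock behavior of the owners(...) znode can be modeled with a toy in-memory map. This is not Kafka source; the class, key format, and method names are hypothetical, and a real ephemeral znode creation plays the role of putIfAbsent here:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model (not Kafka source) of the ZK state above: znode paths as map
// keys. A claim succeeds only if no other consumer already owns that
// (group, topic, broker, partition) combination.
public class ZkStateModel {
    final Map<String, String> znodes = new HashMap<>();

    static String ownerKey(String group, String topic, int brokerId, int partitionId) {
        return "owners/" + group + "/" + topic + "/" + brokerId + "-" + partitionId;
    }

    // Returns true if this consumer acquired the partition lock.
    boolean claim(String group, String topic, int brokerId, int partitionId, String consumerId) {
        return znodes.putIfAbsent(ownerKey(group, topic, brokerId, partitionId), consumerId) == null;
    }
}
```

A second consumer claiming the same (group, topic, broker, partition) fails, which is exactly the at-most-one-reader guarantee described above.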

A consumer here refers to an instance of ConsumerConnector. A ConsumerConnector instance can have multiple KafkaStreams to allow for multi-threaded consumption.

Because each broker partition can be matched to only one consumer at any given time, you will have idle consumers if you have more consumers than broker partitions. The benefit of using the high-level API is that a consumer will not be starved if a broker fails in a given cluster of Kafka brokers. When the failed broker is restored, messages will then be consumed from that broker.

The consumer rebalancing algorithm is triggered via ZK watchers on either of the following conditions:
- addition/removal of broker
- addition/removal of consumer
- new allowed topic

This rebalancing algorithm is triggered for every ConsumerConnector instance in the consumer group (hopefully around the same time, but this isn't guaranteed). So how does the rebalancing work? Effectively, for each ConsumerConnector:

First, syncedRebalance() is called. syncedRebalance() effectively loops around rebalance() for a maximum of rebalance.retries.max (defaults to 4) times. For each rebalance attempt, it is possible for a ZK exception to be thrown due to changing ZK states. If there is an exception, it is safely caught and the consumer backs off for a configured number of milliseconds before retrying. In rebalance(), a number of actions happen:
  1. The consumer closes all fetch requests (to avoid data duplication) and offsets are flushed out to ZK
  2. Release all partition ownership from ZK by deleting the znodes for owners(group_id, topic, broker_id, partition id)
  3. Get the partitions per topic mapping
  4. For each topic that the ConsumerConnector is subscribed to:
    1. Using the partitions per topic mapping, get the partitions for that topic, which are of the form (broker-partition). This list is sorted.
    2. Get the total number of consumers for that topic. This is the total number of KafkaStreams subscribing to that topic in the consumer group, which might be more than the number of ConsumerConnector instances.
    3. For each KafkaStreams in the ConsumerConnector: 
      1. Range-partition the sorted partitions across the consumers as equally as possible, with the first few consumers getting an extra partition if there are leftovers (note: the consumers are sorted).
        Example 1: If you have 5 partitions with 2 ConsumerConnector instances of 1 stream each, then consumer 0 gets [p0, p1, p2] and consumer 1 gets [p3, p4].
        Example 2: If you have 5 partitions with 2 ConsumerConnector instances of 4 streams each, then consumer 0 gets [p0, p1, p2, p3], and consumer 1 gets [p4].
      2. Note that range partitioning allows for locality, where there is a higher chance for a consumer to fetch data from multiple partitions from a broker rather than all the brokers.
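The range-partitioning step above can be sketched as follows. This is an illustrative reconstruction, not the Kafka source; the class and method names are hypothetical. Sorted partitions are split as evenly as possible across the sorted consumer streams, with the first (nPartitions % nStreams) streams taking one extra partition each:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch (not Kafka source) of range-partitioning sorted partitions
// across sorted consumer streams.
public class RangeAssignment {
    // Returns the partition ids assigned to stream `i` out of `nStreams`.
    static List<Integer> assign(int i, int nStreams, int nPartitions) {
        int base = nPartitions / nStreams;          // partitions every stream gets
        int extra = nPartitions % nStreams;         // leftover partitions
        int start = i * base + Math.min(i, extra);  // first partition for stream i
        int count = base + (i < extra ? 1 : 0);     // streams i < extra get one more
        List<Integer> parts = new ArrayList<>();
        for (int p = start; p < start + count; p++) parts.add(p);
        return parts;
    }
}
```

For Example 1 above, assign(0, 2, 5) returns [0, 1, 2] and assign(1, 2, 5) returns [3, 4].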

Kafka Consumer Low Level API

In the low level api, you provide everything -- broker host+port, partition id, and offset.

long offset = 0;
// SimpleConsumer(host, port, socketTimeoutMs, bufferSize)
SimpleConsumer consumer = new SimpleConsumer("", 9092, 10000, 1024000);
// FetchRequest(topic, partition, offset, maxSize)
FetchRequest fetchRequest = new FetchRequest("test", 0, offset, 1000000);
ByteBufferMessageSet messages = consumer.fetch(fetchRequest);

The low-level API does not talk to ZooKeeper; you are responsible for figuring out which broker and partition to connect to, and for keeping track of your offsets.

Achieving High-Availability
  • Local Redundancy
    Using RAID mirroring will provide local data redundancy, while striping provides increased performance. According to the docs, LinkedIn uses RAID 10 on Ext4.
  • Non Local Redundancy
    As of this writing, Kafka does not support non-local data redundancy. Work is in progress to support intra-cluster replication (KAFKA-50) in v0.8 with automatic recovery.
    While inter-cluster mirroring via MirrorMaker has been supported since v0.7, MirrorMaker does not do failover.
Packaged Tools
  • Additional tools are available in the distribution. Such tools include:
    • ConsumerOffsetChecker: Shows how much the consumer is lagging behind
    • MirrorMaker: discussed earlier
    • ProducerShell/ConsumerShell: explained in the quickstart guide
    • ExportZKOffsets/ImportZkOffsets: Manual configuration of ZK state.
    • etc
  • If you wish to benchmark Kafka's performance, the following entry points are provided in bin/
  • There is a bug in each of those scripts: (* should be kafka.perf.*)
  • These helper scripts might be useful.
  • However, you probably want to take a look at the sources for usage.
  • All these being said, the defaults provided are actually pretty reasonable.

Information for the above write up comes from a few sources:

Sunday, August 12, 2012

Making Eclipse Usable

As a Vim user, I have found it really hard to go back to using Eclipse. But for big Java projects, Eclipse can improve productivity tremendously. Here's what I have done to make Eclipse usable: