Tuesday, February 28, 2012

Synchronicity

In mathematics, functions are beautiful things. They represent lines, curves, surfaces, and objects in higher-dimensional spaces that defy verbal description. With applications spanning practical statistics to abstruse string theory, no one can dispute their importance to daily life.

In programming, functions are just as important (I would have said "if not more important", but that seemed needlessly tautological). Interesting, though sometimes overlooked, are the synchronicity aspects of function invocation. Since we already have a post on location based services, we can leverage it to make our points here more clearly. So let's embark on this journey.

Most functions in computer science are synchronous. By this we mean: we build a function, we call it with zero, one, or more arguments, and it returns a value. The calling code populates the arguments, makes the function invocation, waits patiently until the call completes, assigns the returned value to a program variable, and continues execution. Of course, depending on whether the arguments were passed by value or by reference, and whether any global variables were also involved, the invocation may have a number of other "side effects" on program execution, but we blithely ignore these here for the sake of simplicity.

Synchronous function execution is thus the simplest and most intuitive way that functions (and yes, subroutines and the like) are used. But is that all functions can be used for? Let's study a couple of other models.

Enter the Asynchronous function call
When we look at complex APIs - such as, for example, the OSA/Parlay APIs for mobile network application development - we find function calls where what the function is asked to do cannot be done immediately. It is unreasonable to expect a program to wait for as long as such a function might take to return. One way out of this dilemma is to embed the call in a separate thread from the main thread of execution and have that thread block on the function while the rest of the program goes on its merry way. An arguably more elegant approach is the use of asynchronous methods.

So how do these work? Software platforms like CORBA and Java RMI (does anyone still use this?) permit one to invoke a function and register a "call-back". The call-back is a reference to a function in the main program that the called function (typically running across the network on another machine somewhere) can invoke. So the logic flow proceeds something like this:

  1. main program A invokes function x on program B, passing along a call-back reference c.
  2. B returns an immediate return value, typically indicating whether the request was well formed and whether it will be executed.
  3. the function x returns synchronously to A the immediate return value from (2) above.
  4. B continues executing the function and, when it has an update for A, passes this value to A by invoking call-back c. c returns immediately (i.e. c is synchronous to B).
  5. at this point, c is either still open or is closed, based on the pre-determined call-back protocol between A and B specified in the API.
All this might seem needlessly complicated. But it can be extremely useful. So we look at an example below.
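The five-step flow above can be made concrete with a toy in-process sketch, where a worker thread plays the role of the remote program B. All names here (invoke_async and friends) are illustrative, not any real middleware API:

```python
import threading
import time

# Toy sketch of the asynchronous call-back flow. A worker thread stands in
# for remote program B; invoke_async and all other names are made up.
def invoke_async(request, callback):
    # step 2: B validates the request and decides whether to execute it
    if not isinstance(request, int):
        return "rejected"

    def work():
        # step 4: B keeps computing, then delivers the result via the call-back
        time.sleep(0.01)           # stand-in for a long-running computation
        callback(request * 2)      # the call-back returns immediately to B

    threading.Thread(target=work).start()
    return "accepted"              # step 3: the synchronous immediate return

results = []                               # A's state, updated by call-back c
ack = invoke_async(21, results.append)     # step 1: A invokes x, passing c
# ... A goes on its merry way here ...
time.sleep(0.1)                            # give the worker time to finish
print(ack, results)                        # -> accepted [42]
```

In real CORBA or RMI the call-back crosses the network, but the shape of the exchange - immediate acknowledgement, later delivery via c - is the same.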

Using Asynchronous Functions - An Example
Let's say you (an application) want periodic location updates for a mobile handset. You could poll the network for location every 30 seconds or so synchronously, speeding up the poll interval if the handset is traveling faster and slowing it down if the handset is slower-moving. But this puts the burden of keeping timers and the like on the application. A more elegant way to implement this is to have the application register a call-back with the underlying network node saying: "here's the terminal whose location I need: 555-555-1111. send it to me every 30 sec at this address http://updatesRUs/5555551111x will you? till I tell you to stop, thanks. and if you think it's not really moving that much, feel free to slow down updates a bit"

And the app can go about its business and process updates as it receives them. When the user tied to the app decides that he has, for example, reached his destination and does not need location information anymore, the app can send another message to the network saying "remember my request for 555-555-1111? cancel that now please". The call-back address specified above then disappears, so the network cannot post any more updates. Useful, isn't it?
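The register/cancel exchange described above can be sketched in miniature; here the network node is simulated in-process, and all names (LocationServer and so on) are invented for illustration:

```python
# Minimal in-process sketch of the register/cancel call-back protocol above.
# LocationServer stands in for the network node; all names are invented.
class LocationServer:
    def __init__(self):
        self.subs = {}                       # terminal id -> call-back

    def register(self, terminal, callback):  # "send me updates for this terminal"
        self.subs[terminal] = callback

    def cancel(self, terminal):              # "remember my request? cancel that"
        self.subs.pop(terminal, None)

    def push_update(self, terminal, location):
        # the network posts updates only while a call-back is registered
        if terminal in self.subs:
            self.subs[terminal](location)

seen = []
net = LocationServer()
net.register("555-555-1111", seen.append)
net.push_update("555-555-1111", (40.7, -74.0))
net.cancel("555-555-1111")
net.push_update("555-555-1111", (40.8, -74.1))   # dropped: call-back is gone
print(seen)                                      # -> [(40.7, -74.0)]
```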

From callbacks to "recruiting"
An erstwhile colleague of mine (the author thanks A. Sahuguet) once showed me another model, called "recruiting", that can be even more useful. The way it works: A invokes a method on B, B invokes one on C, and C gets back directly to A. In other words, B "recruits" C to respond to A's request. Typically, networked software implementations require the request/response flow to be A->B->C->B->A, but it is shortened to A->B->C->A in this scenario. Of course there are issues with security etc., and yes, there are ways to address them, but we do not cover those here. How one might set up asynchronous function calls to support a recruiting model is left as an exercise for the reader.

Hope this made fun reading! 

Fifty Nifty Pytools

In this post, we look at several simple but useful python code snippets. While we say "fifty" in the post title, it is hoped that over time the count will grow much larger.

[Snippet 0001] Particularly when working in finance, one needs very strong date libraries and tools. Python gives us datetime, which is excellent, and additional functionality can be built atop it. When one has to tie together data from Excel spreadsheets, databases (e.g. MySQL), and other sources, and maintain a single data set that can be used for data-mining or back-testing, having good date functions at hand becomes critical. Our first snippet below is one such example; more date-related snippets will likely follow.

import os, sys;
from datetime import *;

def dtConv(x,s="OBJ"): # converts dates from one form to another
 def s2(n): # nested helper function. returns str(n) if n>=10, else "0"+str(n)
  if n<10: return "0"+str(n);
  return str(n);

 # first, parse the input depending on type, collecting year, month, day as int
 # styles (s) supported: 
 # s="OBJ" return type is a date object. the default return type
 # s="TXT" return type is of the form "yyyymmdd" e.g. "20120131"
 # s="XL"  return type is of the form "m/d/yyyy" e.g. "2/3/2012"
 # s="XL0" return type is of the form "mm/dd/yyyy" e.g. "02/03/2012"
 # s="DB"  return type is of the form "yyyy-m-d" e.g. "2012-2-3"
 # s="DB0" return type is of the form "yyyy-mm-dd" e.g. "2012-02-03"
 if type(x)==date: y,m,d=x.year,x.month,x.day;
 else: 
  if x.count("/")==2: y,m,d=int(x.split("/")[2]),int(x.split("/")[0]),int(x.split("/")[1]);
  if x.count("-")==2: y,m,d=int(x.split("-")[0]),int(x.split("-")[1]),int(x.split("-")[2]);
  if x.count("/")==0 and x.count("-")==0 and len(x)==8: y,m,d=int(x[:4]),int(x[4:6]),int(x[6:]);
  
 # next, we generate output in the form requested
 if s=="OBJ": return date(y,m,d);
 if s=="XL": return "/".join([str(m),str(d),str(y)]);
 if s=="DB": return "-".join([str(y),str(m),str(d)]);
 if s=="XL0": return "/".join([s2(m),s2(d),s2(y)]);
 if s=="DB0": return "-".join([s2(y),s2(m),s2(d)]);
 if s=="TXT": return s2(y)+s2(m)+s2(d);
 return -1;



Examples of use:
dtConv("1/2/2012") gives datetime.date(2012,1,2)
dtConv("1/2/2012","DB0") gives "2012-01-02"
dtConv("1/2/2012","TXT") gives "20120102"
dtConv("20120102") gives datetime.date(2012,1,2)
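For comparison, much of dtConv can be expressed with the standard strptime/strftime machinery. A sketch (the function and table names are my own; it covers only the zero-padded output styles, since strftime has no portable format code for non-padded months and days):

```python
from datetime import datetime, date

# Alternative sketch of dtConv built atop strptime/strftime. The style
# names (OBJ, TXT, XL0, DB0) mirror the snippet above.
_IN_FORMATS = ["%Y%m%d", "%m/%d/%Y", "%Y-%m-%d"]
_OUT_FORMATS = {"TXT": "%Y%m%d", "XL0": "%m/%d/%Y", "DB0": "%Y-%m-%d"}

def dt_conv2(x, s="OBJ"):
    if isinstance(x, date):
        d = x
    else:
        # try each accepted input format in turn
        for fmt in _IN_FORMATS:
            try:
                d = datetime.strptime(x, fmt).date()
                break
            except ValueError:
                continue
        else:
            raise ValueError("unrecognized date string: %r" % (x,))
    return d if s == "OBJ" else d.strftime(_OUT_FORMATS[s])

print(dt_conv2("1/2/2012"))          # -> 2012-01-02 (a date object)
print(dt_conv2("1/2/2012", "DB0"))   # -> 2012-01-02
print(dt_conv2("20120102", "XL0"))   # -> 01/02/2012
```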

[Snippet 0002] A simple object example with output. We define a circular list: with ordinary list objects, accessing beyond the end of the list raises errors; clists overcome this problem through modular arithmetic on the index location.

class clist(object):
# creates a circular list object
 num_inst=0; # enforces singleton pattern

 def __init__(self,arg1=[]): # constructor
  if clist.num_inst==0: 
   clist.num_inst+=1;
   self.L=arg1;
  else: 
   print "cannot have more than one instance of class clist";
   self.__del__();

 def __del__(self): # object destructor
  pass;

 def __len__(self): # get length of clist
  return len(self.L);

 def __getitem__(self,key): # get an item of clist
  pos=key%len(self.L);
  return self.L[pos];

 def __contains__(self,key):
  if key in self.L: return True;
  return False;

 def __reversed__(self): # reverse clist contents
  self.L.reverse();
  
 def content(self): # accessor for clist contents
  return self.L;

 def redef(self,L): # reset clist contents
  self.L=L;

# sample use:
>>> execfile("clist.py");
>>> b=clist([1,2,3,4,'a','b','c']);
>>> len(b)
7
>>> reversed(b)
>>> b.content();
['c', 'b', 'a', 4, 3, 2, 1]
>>> b.redef([1,2,3,4,5,6,7]);
>>> b.content();
[1, 2, 3, 4, 5, 6, 7]
>>> len(b);
7
>>> b[1]
2
>>> b.content()
[1, 2, 3, 4, 5, 6, 7]
>>> 'a' in b
False
>>> 1 in b
True
>>> c=clist([1,2,3]);
cannot have more than one instance of class clist
>>> b[-13]
2
>>> b[13]
7
>>>

[Snippet 0003] Here we combine the previous two snippets to design a date class dt that behaves like the examples from snippet 0001, i.e. we can store the date any way we like and seamlessly convert from one form to another as desired. Notice how clean the object-oriented approach looks in comparison to the purely procedural technique. Of course, for more complex examples there is probably a greater up-front price to pay in designing things the OO way as opposed to coding procedurally, but the former is arguably easier to maintain. The code (dt.py) and output follow:


import os, sys;
from datetime import *;

def s2(x): # returns "03" if x==3, or "10" if x==10
 if x<10: return "0"+str(x); # i.e. adds leading zeroes as needed
 else: return str(x);

class dt(object):
 # creates a dt object and provides means to view it in different ways

 def __init__(self,x): # constructor
  if type(x)==str and x.count('/')==2: # covers XL and XL0 types
   m,d,y=x.split("/");
   self.m,self.d,self.y=int(m),int(d),int(y);
  if type(x)==str and x.count('-')==2: # covers DB and DB0 forms
   y,m,d=x.split("-");
   self.y,self.m,self.d=int(y),int(m),int(d);
  if type(x)==date: self.y,self.m,self.d=x.year,x.month,x.day;
   # covers the date object format
  
 def __del__(self): # destructor
  pass;

 def OBJ(self): # returns the date object
  return date(self.y,self.m,self.d);

 def TXT(self): # returns the text representation
  m,d=s2(self.m),s2(self.d);  
  return str(self.y)+m+d;

 def XL(self): # returns the Excel date type
  return "/".join([str(self.m),str(self.d),str(self.y)]);

 def XL0(self): # returns Excel date type with leading 0s
  return "/".join([s2(self.m),s2(self.d),str(self.y)]);

 def DB(self): # returns the MySQL DB date type
  return "-".join([str(self.y),str(self.m),str(self.d)]);

 def DB0(self): # returns the MySQL DB date type with LZs
  return "-".join([str(self.y),s2(self.m),s2(self.d)]);

# sample output generated as below
>>> execfile("dt.py");
>>> a=dt("4/10/2012");
>>> a.OBJ();
datetime.date(2012, 4, 10)
>>> a.TXT();
'20120410'
>>> a.DB();
'2012-4-10'
>>> a.DB0();
'2012-04-10'
>>> a.XL0();
'04/10/2012'
>>> a=dt(date(2012,4,10));
>>> a.OBJ()
datetime.date(2012, 4, 10)
>>> a.TXT()
'20120410'
>>> a.XL0();
'04/10/2012'
>>> a.XL();
'4/10/2012'
>>> a.DB();
'2012-4-10'
>>> a.DB0();
'2012-04-10'
>>>

[Snippet 0004] Python provides native support for a number of different and useful data-types. However, sometimes we may need to solve problems for which the required data types are not readily available. For instance, sometimes we may want to store data in a tree or a trie. How would we go about doing this? Luckily, Python is very intuitive in its support for these kinds of derived data structures. We present a simple implementation below that can be used to build graphs, DAGs, tries etc along with some sample examples of its use.


class node(object): # the class that creates every node object
 allNodes=[]; # static class variable to store all nodes created

 def __init__(self,val=-1): # constructor that creates node with specified tag
  self.val=val;
  self.parent=[];
  self.child=[];
  node.allNodes+=[self];

 def getVal(self): # accessor function that returns the tag value of a node
  return self.val;

 def addChild(self,n): # mutator function that connects a node to a child
  if self.getVal()!=n.getVal():
   self.child+=[n];

 def addParent(self,n): # mutator function that connects a node to a parent
  self.parent+=[n];

 def getChildren(self): # returns a list of child nodes for a node
  return self.child;

 def getChildVals(self): # returns a list of child node values for a node
  t=self.getChildren();
  r=[i.getVal() for i in t];
  return r;

 def getChildByVal(self,val): # returns a particular child node of a node by value
  p=self.getChildren();
  q=self.getChildVals();
  if val not in q: return None;
  else: return p[q.index(val)];

# Example usage

a=node(2);
b=node(3);
c=node(4);
d=node(5);
e=node(6);
f=node(7);
g=node(8);

a.addChild(b);
a.addChild(c);
b.addChild(g);
c.addChild(d);
c.addChild(e);
e.addChild(f);

b.addParent(a);
c.addParent(a);
d.addParent(c);
e.addParent(c);
f.addParent(e);
g.addParent(b);


def getDFSChain(n): # prints the chain of node values (in breadth-first order)
 if type(n)!=node: return -1;
 r=[n];
 for i in r: r+=i.getChildren(); # sweep r, appending each node's children as we go
 for i in r: print i.getVal(),;

getDFSChain(a);

# output follows:
2 3 4 8 5 6 7
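Strictly speaking, the chain above comes out in breadth-first order: each node's immediate children are appended to the sweep list before any deeper descendants are visited. A recursive depth-first variant could look like the sketch below (a stripped-down node class is repeated so the example runs on its own):

```python
# Self-contained sketch of a recursive depth-first traversal, using a
# minimal stand-in for the node class above.
class Node:
    def __init__(self, val):
        self.val, self.child = val, []
    def addChild(self, n):
        self.child.append(n)

def dfs_chain(n, out=None):
    if out is None:
        out = []
    out.append(n.val)
    for c in n.child:
        dfs_chain(c, out)   # descend fully before visiting siblings
    return out

# the same tree as in the example above
a, b, c, d, e, f, g = (Node(v) for v in (2, 3, 4, 5, 6, 7, 8))
a.addChild(b); a.addChild(c)
b.addChild(g); c.addChild(d); c.addChild(e); e.addChild(f)

print(dfs_chain(a))   # -> [2, 3, 8, 4, 5, 6, 7]
```

Compare the two orderings: breadth-first visits all of a node's siblings before any grandchildren, while depth-first exhausts each branch in turn.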

[Snippet 0005] This is a very interesting code snippet I remember reading on the web somewhere. It comes from a Google software engineer with the screen name "Tryptych". I don't remember which website I read it on, but the idea was so clean and elegant that I remember it clearly. It is an algorithm to factorize numbers. The code follows, along with example output:

import os, sys;

def F(x): # this function factors a number x into its prime factors
 r=[];
 i=2;
 while x>1:
  while (x % i)==0: 
   r+=[i];
   x/=i;
  i+=1;
 return r;

print "prime factors of 100 are: ",F(100);
print "prime factors of 1024 are: ",F(1024);
print "prime factors of 1789 are: ",F(1789);
print "prime factors of 2013 are: ",F(2013);
print "prime factors of 11204243 are: ",F(11204243);
print "prime factors of 112042431 are: ",F(112042431);
print "prime factors of 1120424311 are: ",F(1120424311);

# output follows:
prime factors of 100 are:  [2, 2, 5, 5]
prime factors of 1024 are:  [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
prime factors of 1789 are:  [1789]
prime factors of 2013 are:  [3, 11, 61]
prime factors of 11204243 are:  [19, 23, 25639]
prime factors of 112042431 are:  [3, 3, 101, 123259]
prime factors of 1120424311 are:  [1120424311]
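A small refinement (not in the original snippet) stops trial division at the square root of the remaining cofactor, since anything left over at that point must itself be prime; this turns the worst case from O(x) divisions into O(sqrt(x)):

```python
def factor(x):
    # trial division, stopping once i*i exceeds the remaining cofactor
    r, i = [], 2
    while i * i <= x:
        while x % i == 0:
            r.append(i)
            x //= i
        i += 1
    if x > 1:          # whatever remains at this point is itself prime
        r.append(x)
    return r

print(factor(100))          # -> [2, 2, 5, 5]
print(factor(1120424311))   # -> [1120424311]
```

On the large prime 1120424311 from the example above, this loops to about 33,000 instead of over a billion.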

[Snippet 0006] 

Sunday, February 26, 2012

On The Design of Location Based Services

In this post, we present a broad-brush, 10,000-foot survey of developments in the Location Based Services domain. This post is a work in progress; several other ideas merit coverage here, including privacy and security concerns, other means of tracking user location, and leading-edge location services application scenarios.

As mobile Internet devices become more prevalent, there is a larger pull for mobility characteristics to be blended into the end-user experience. Earlier, people would log into the nearest computer and find directions to the nearest ATM, restaurant, or POI (Point Of Interest) by providing their current geographical location. Later, they graduated to providing their location (i.e. the location of their computer) once, which the computer would "register" (either using cookies or by saving with the user profile on the application website) and send out seamlessly to applications that needed this information to provide value-added services.

This in turn gradually led to more "sophisticated" usage scenarios where applications would recognize the geographic location of the RAS (remote access server) or other IP address pool from which the user's computer was assigned an address, and divine the user's location from that (though due to idiosyncrasies of how some ISP networks are set up and the details of the underlying infrastructure used - DSL vs. cable, head-end locations, etc. - this can sometimes lead to embarrassing errors).

As Location Based Services (or LBS for short henceforth) evolved further, cellular networks employed technologies that factored in the time difference of arrival of signals from the handset, or the angle of arrival of the signal from a handset or combinations thereof to determine the location of a handset in 3-dimensional space relative to a network of stationary cell-phone towers. During this time, acronyms like PDEs (Position Determining Equipment), MPCs (Mobile Positioning Centers), GMLCs, TDOA (Time Difference Of Arrival), EFLT, AFLT, .... (and others too numerous to mention, and perhaps not all worth even googling for anymore) were widely prevalent. But once GPS-capable hardware became available in handsets, LBS really took off in a big way.

Aside: Note how the above describes scenarios where sets of mobile handsets communicate with each other via a network of stationary nodes (cell towers). Of course, more ad-hoc arrangements also exist, where mobile handsets talk directly to each other in true peer-to-peer fashion (like CB radios or walkie-talkies), including the kinds of networks covered under the IETF MANET standards; for simplicity we ignore them here, since determining location in such systems can get much more complicated. <end of Aside>

Some issues persisted even with GPS. A common problem in large metropolitan areas is the "urban canyon": between tall buildings, say in NYC, or inside structures surrounded by sky-scrapers, there is limited access to satellites, so one has to revert to the older ways of determining location (see the first paragraph above, for example) to be able to provide value-added services to consumers.

As Wifi proliferated, methods akin to those from the second paragraph above were employed at Wifi hotspots: the relative strengths of signals from different hotspots (if these were part of a single larger infrastructure network) were measured and used as a basis to compute the likely location of the mobile client. This alleviated, to some extent, the problems with urban canyons, since hybrid technologies could now effectively location-enable applications in areas where such coverage would previously not have been possible. Hybrid as in: use GPS where GPS is available; else, if the handset has Wifi capability and Wifi networks are located in the vicinity, use those either to get a fix or to improve the accuracy of a fix obtained by other methods. Once this base location capability was available, service logic could be overlaid atop it.

This is where we are today. What matters more than the technologies used to support location data is the applications that reside atop the stack. One can deploy some very useful or even some very "cool" (even if unuseful) applications today, all thanks to location. Examples include geocaching, location-based games, automated mobile tourist guides, and of course GPS-enabled driving directions, as well as research projects like "street stories" (yes, this was very cool in 2002).


Tuesday, February 7, 2012

Facebook posts Sentiment Analysis

In a previous post, we examined the utility of mining social media such as the micro-blogging site Twitter. Unrestricted, light, friendly, uncensored, and sometimes trivial and uninteresting information is shared by people with one another on such media. At the same time, there are insightful posts and tweets - reports from the site of an event, from the people closest to the action - which give this information a certain relevance, an immediacy, a currency, an accuracy, and even a certain un-pretentiousness that comes from being delivered by ordinary people who want to get their voice out there, get heard, and share a view. This is what makes social media so fascinating.

People don't usually care that Alice had chicken for dinner. Perhaps even Alice's closest friends do not. But people do care what someone - say a civilian on the ground in a war-zone - has to say about what she has seen and is experiencing, along with video footage where available, breaking a news-story to the broader world. In this latter sense, social media's contribution to the world at large is immense.

In this post, we look at legally mining data from that other large social media engine, Facebook. As of this writing, Facebook has ~800M subscribers, which would make it a medium-sized to large country if it were a geographic agglomeration of people. And these people tend to interact with each other, some quite frequently: posting messages, sharing photographs, using applications, "liking", "poking", and doing other Facebook-specific actions, all with a view to having fun, "keeping in touch", and generally having a good time being social.

Mining Facebook data can be done in two ways:
1. Searching through Facebook public domain posts and messages. This does not require one to log into Facebook; it can be achieved merely by using the published public domain Facebook API to access data that many Facebook users do not necessarily even know they have placed in the public domain, though lately at least part of the user population seems to be waking up to data ownership and privacy issues.
2. Searching Facebook posts not publicly accessible - this requires that one log in, but provides a deeper access to the Facebook "graph" that connects various Facebook objects including messages, groups, pictures, links etc. all together into a structure that can be queried via a REST-ful API.

We implement a very simple version of (1) in the code below. Again, we note that mining data here can have various practical applications, such as performing sentiment analysis of the user population towards world or local events, e.g. the crisis in Europe or Greece, the Arab Spring, elections in the US, the price of oil, Madonna's performance during the Super Bowl, etc. Sentiment analysis can even be used to help with marketing of products and services, and for applications like investment management; there have been news-stories of how sentiment analysis was used to predict the direction of Netflix stock.

Our implementation stops short of performing the actual sentiment analysis because we have already implemented simple sentiment analysis in our earlier Twitter post. The same approach can be replicated here by the interested reader with minimal effort. Several additional enhancements can also be made if (2) above is used to determine how "connected" a post-writer is to the rest of the Facebook graph (we can do this in the Twitter context using the number of followers one might have). Facebook also offers additional media like pictures that can be also used to add additional context to the story.

One issue we face with sentiment analysis in the Facebook context that we do not see with Twitter is that Facebook posts tend to be longer on average, not being limited to the 140 characters of a typical Twitter micro-blog tweet. This means that even our simple sentiment analysis algorithm needs to be tweaked: compute the overall sentiment of a post by calculating the relative percentages of positive, negative, and neutral sentiment key-words, then interpret these in the larger context of the post. An additional hurdle is that longer posts offer greater room for one to express his/her creativity, which may mean more posts that are sarcastic or satirical in nature; text-based analysis, unless very sophisticated, is likely to miss these nuances in meaning, making things more difficult from a classification stand-point.
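As a sketch of that tweak, keyword-share scoring might look as follows; the two keyword sets here are tiny stand-ins, not a real sentiment lexicon:

```python
# Toy keyword-based sentiment scoring for longer posts. The keyword sets
# below are illustrative stand-ins, not a real sentiment lexicon.
POS = {"good", "great", "useful", "win", "up"}
NEG = {"bad", "fraud", "madness", "fail", "down"}

def post_sentiment(text):
    # count positive vs negative keywords and compare their relative shares
    words = [w.strip(".,!?'\"") for w in text.lower().split()]
    pos = sum(w in POS for w in words)
    neg = sum(w in NEG for w in words)
    if pos == neg:                  # includes the no-keyword case
        return "neutral"
    return "positive" if pos > neg else "negative"

print(post_sentiment("The system is FRAUD!"))           # -> negative
print(post_sentiment("a great and useful analysis"))    # -> positive
print(post_sentiment("Alice had chicken for dinner"))   # -> neutral
```

Sarcasm, of course, sails straight past this kind of counting, as noted above.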

Code for simple Facebook data-mining is posted below, for a sample query with the key-words "quantitative easing" and the generated output file. Enjoy!

Some other issues with the code (an optimist might have titled this section "Future Work" or "Next Steps"):

  1. This is a very simple, unpretentious implementation focused on the core issue of mining Facebook posts for data and parsing the results into a human-readable, usable input form for sentiment analysis. It can easily be made much more sophisticated, but we just hit the highlights here and move on to other things.
  2. I did not build in a processor for special characters in the larger Unicode data set, so these appear as noise in the output.
  3. I do not check for messages that repeat and filter them out, with good reason. Sometimes messages with minimal changes are re-posted by other people, sometimes with, and sometimes without attribution to the original post. I guess the general rules about plagiarism vs. original thought do not apply as much to social media. 


Sample Source Code:

import os, sys, urllib2; # include standard libraries of interest


# helper function: takes two lists of indices a and b; for each index i
# in a, returns a tuple (i+2, t) where t is the smallest element of b
# larger than i+2. we use this to slice the message bodies out of the
# parsed output of the query to Facebook.
def L(a,b):  
 r=[];
 for i in a: 
  t=[j for j in b if j>i+2][0];
  r+=[(i+2,t)];
 return r;


# program sample usage is: python fbmine.py "quantitative easing" qe2.txt
# here fbmine.py is this source code file
# "quantitative easing" is the string of space separated keywords we mine for
# qe2.txt is the output file generated by this data-mining exercise.
wrds=sys.argv[1]; # wrds is the string of words we want to filter for
wrds=wrds.split(" ");
s="";
for i in wrds[:-1]: s+=i+"%20"; # populate the query 


s+=wrds[-1];
#print "query: http://graph.facebook.com/search?q="+s+"&type=post&limit=1000\n\n"; # create the query string and launch it
req=urllib2.Request("http://graph.facebook.com/search?q="+s+"&type=post&limit=1000");
response=urllib2.urlopen(req); # collect the query results
txt=response.read();
txt=txt.replace("\\n","").replace("\\",""); # some simple cleanup of read data


p=txt.split("\"");
m1=[i for i in range(len(p)) if p[i]=="message"]; # parsing the messages
m2=[i for i in range(len(p)) if p[i]==","];
R=L(m1,m2); # using the helper function
g=open(sys.argv[2],"w"); # generating and writing out the output
for i in R: 
 s="";
 for j in range(i[0],i[1]): s+=p[j]+" \n";
 g.write("-----------------------------------------------------------------\n");
 g.write(s+"\n");


g.write("------------------------------------------------------------------\n");


Sample Output File: (file was generated around 1815 hrs Friday Feb 10 2012)

--------------------------------------------------------------------------------
The Bank of England has announced another round of 'quantitative easing', this time printing u00a350 billion of money.Keep it up lads; at this rate soon we'll all be billionaires, just like everyone in Zimbabwe.Turns out that smashing a stake through a vampire's heart works, even if your neighbours cat's not a vampire. 


--------------------------------------------------------------------------------
The Bank of England has announced another round of 'quantitative easing', this time printing u00a350 billion of money.Keep it up lads; at this rate soon we'll all be billionaires, just like everyone in Zimbabwe. 


--------------------------------------------------------------------------------
Neat, expression, So why the blithering flip.....Very interesting article on printing money..  It's English, but applies equally here, I think:  (Be sure to read the last link in article)http://blogs.telegraph.co.uk/news/danielhannan/100136397/quantitative-easing-has-failed-and-failed-again-what-madness-has-seized-our-leaders/ 


--------------------------------------------------------------------------------
The Bank of England has announced another round of 'quantitative easing', this time printing u00a350 billion of money.Keep it up lads; at this rate soon we'll all be billionaires, just like everyone in Zimbabwe. 


--------------------------------------------------------------------------------
just saw ths funny joke....... The Bank of England has announced another round of 'quantitative easing', this time printing u00a350 billion of money.Keep it up lads; at this rate soon we'll all be billionaires, just like everyone in Zimbabwe. 


--------------------------------------------------------------------------------
http://www.zerohedge.com/news/obama-revises-cbo-deficit-forecast-predicts-110-debt-gdp-end-2013quantitative easing is not the panacea that Obama is hoping for 


--------------------------------------------------------------------------------
The system is FRAUD! 


--------------------------------------------------------------------------------
The Bank of England has announced another round of 'quantitative easing', this time printing u00a350 billion of money. Keep it up lads; at this rate soon we'll all be billionaires, just like everyone in Zimbabwe. 


--------------------------------------------------------------------------------
The bank of england has just announced another round of 'quantitative easing', this time printing u00a350 billion in notes.Keep it up lads, at this rate we'll soon all be billionaires. Just like everyone in zimbabwe. 


--------------------------------------------------------------------------------
u2018u201cQuantitative Easing is a transfer of wealth from the poor to the rich,u201d he says, u201cIt floods banks with money, which they use to pay themselves bonuses. The banks have money, and assets, so they can borrow easily. The poor guy, who is unemployed and can't borrow, is not going to benefit from it.u201d The QE process pushes asset prices up, he says, which is great for those who own stocks, shares and expensive houses. u201cBut the state is subsidising the rich. It is the top 1 per cent who benefit from Quantitative Easing, not the 99 per cent.u201du2019 -- Nassim Taleb 


--------------------------------------------------------------------------------
Quantitative easing is now more vile on the lips than any four letter word http://tgr.ph/zNpxTg 


--------------------------------------------------------------------------------
http://blogs.telegraph.co.uk/news/danielhannan/100136397/quantitative-easing-has-failed-and-failed-again-what-madness-has-seized-our-leaders/ 


--------------------------------------------------------------------------------

[...] I've truncated this to save space.