Open-source software (OSS) is a catalyst for growth and change in the IT industry, and one can’t overestimate its importance to the sector. Quoting Mike Olson, co-founder of Cloudera, “No dominant platform-level software infrastructure has emerged in the last ten years in closed-source, proprietary form.”
Apart from independent OSS projects, an increasing number of companies, including the blue chips, are opening their source code to the public. They start by distributing their internally developed products for free, giving rise to widespread frameworks and libraries that later become an industry standard (e.g., React, Flow, Angular, Kubernetes, TensorFlow, V8, to name a few).
Adding to this momentum, there has been a surge in venture capital dollars invested in the sector in recent years. Several high-profile funding rounds have been completed, with multimillion-dollar valuations emerging (Chart 1).
But are these valuations justified? And more importantly, can the business perform, both growth-wise and profitability-wise, as venture capitalists expect? OSS companies typically monetize with a business model based around providing support and consulting services. How well does this model translate to the traditional VC growth model? Is the OSS space in a VC-driven bubble?
In this article, I assess the questions above, and find that the traditional monetization model for OSS companies based on providing support and consulting services doesn’t seem to lend itself well to the venture capital growth model, and that OSS companies likely need to switch their pricing and business models in order to justify their valuations.
By definition, open source software is free. This of course generates obvious advantages to consumers, and in fact, a 2008 study by The Standish Group estimates that “free open source software is [saving consumers] $60 billion [per year in IT costs].”
While providing free software is obviously good for consumers, it still costs money to develop. Very few companies are able to live on donations and sponsorships. And with fierce competition from proprietary software vendors, growing R&D costs, and ever-increasing marketing requirements, providing a “free” product necessitates a sustainable path to market success.
As a result of the above, a commonly seen structure related to OSS projects is the following: A “parent” commercial company that is the key contributor to the OSS project provides support to users, maintains the product, and defines the product strategy.
Latched on to this are the monetization strategies, the most common being the following:
Historically, the vast majority of OSS projects have pursued the first monetization strategy (support and consulting), but at their core, all of these models allow a company to earn money on their “bread and butter” and feed the development team as needed.
An interesting recent development has been the huge inflows of VC/PE money into the industry. Going back to 2004, only nine firms producing OSS had raised venture funding, but by 2015, that number had exploded to 110, raising over $7 billion from venture capital funds (Chart 2).
Underpinning this development is the large addressable market that OSS companies benefit from. Akin to other “platform” plays, OSS allows companies (in theory) to rapidly expand their customer base, with the idea that at some point in the future they can leverage this growth by beginning to tack on appropriate monetization models in order to start translating their customer base into revenue and profits.
At the same time, we’re also seeing an increasing number of reports about potential IPOs in the sector. Several OSS commercial companies, some of them unicorns with $1B+ valuations, have been rumored to be mulling a public markets debut (MongoDB, Cloudera, MapR, Alfresco, Automattic, Canonical, etc.).
With this in mind, the obvious question is whether the OSS model works from a financial standpoint, particularly for VC and PE investors. After all, the venture funding model necessitates rapid growth in order to fit the typical 7-10 year fund life cycle. And with a product that is at its core free, it remains to be seen whether OSS companies can pin down the correct monetization model to justify the dollars invested into the space.
Answering this question is hard, mainly because most of these companies are private and therefore do not disclose their financial performance. Usually, the only sources of information that can be relied upon are the estimates of industry experts and management interviews where unaudited key performance metrics are sometimes disclosed.
Nevertheless, in this article, I take a look at the evidence from the only two public OSS companies in the market, Red Hat and Hortonworks, and use their publicly available information to try and assess the more general question of whether the OSS model makes sense for VC investors.
Red Hat is an example of a commercial company that pioneered the open source business model. Founded in 1993, the company went public in 1999, shortly before the Dot-Com Bubble burst, achieving the eighth-biggest first-day share price gain in the history of Wall Street at that time.
At the time of their IPO, Red Hat was not a profitable company, but since then has managed to post solid financial results, as detailed in Table 1.
Instead of chasing multifold annual growth, Red Hat has followed the “boring” path of gradually building a sustainable business. Over the last ten years, the company increased its revenues tenfold from $200 million to $2 billion with no significant change in operating and net income margins. G&A and marketing expenses never exceeded 50% of revenue (Chart 3).
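For scale, a tenfold increase over ten years works out to roughly 26% compound annual growth, solid but far from the multifold annual growth VC funds target. A quick back-of-the-envelope sketch (revenue figures approximated from the above, in $ millions):

```python
# Implied compound annual growth rate (CAGR) for Red Hat's
# roughly $200M -> $2B revenue growth over ten years.
start_revenue = 200    # $ millions, approximate
end_revenue = 2_000    # $ millions, approximate
years = 10

cagr = (end_revenue / start_revenue) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 26% per year
```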
The above indicates therefore that OSS companies do have a chance to build sustainable and profitable business models. Red Hat’s approach of focusing primarily on offering support and consulting services has delivered gradual but steady growth, and the company is hardly facing any funding or solvency problems, posting decent profitability metrics when compared to peers.
However, what is clear from the Red Hat case study is that such a strategy can take time—many years, in fact. While this is a perfectly reasonable situation for most companies, the issue is that it doesn’t sit well with venture capital funds who, by the very nature of their business model, require far more rapid growth profiles.
More troubling still for venture capital investors is that the OSS model may in and of itself not allow for the type of growth that such funds require. As Marten Mickos, MySQL’s longtime CEO, put it, MySQL’s goal was “to turn the $10 billion a year database business into a $1 billion one.”
In other words, the open source approach limits the market size from the get-go by making the company focus only on enterprise customers who are able to pay for support, forgoing revenue from the long tail of SME and retail clients. That may help explain the company’s less-than-exciting stock price performance post-IPO (Chart 4).
If such a conclusion were true, this would spell trouble for those OSS companies that have raised significant amounts of VC dollars along with the funds that have invested in them.
To further assess our overarching question of OSS’s viability as a venture capital investment, I took a look at another public OSS company: Hortonworks.
The Hadoop vendors’ market is an interesting one because it is completely built around the “open core” idea (another comparable market being the NoSQL database space, with MongoDB, DataStax, and Couchbase).
All three of the largest Hadoop vendors—Cloudera, Hortonworks, and MapR—are based on essentially the same OSS stack (with some specific differences) but interestingly have different monetization models. In particular, Hortonworks—the only public company among them—is the only player that provides all of its software for free and charges only for support, consulting, and training services.
At first glance, Hortonworks’ post-IPO path appears to differ considerably from Red Hat’s in that it seems to be a story of rapid growth and success. The company was founded in 2011, tripled its revenue every year for three consecutive years, and went public in 2014.
Immediate reception in the public markets was strong, with the stock popping 65% in the first few days of trading. Nevertheless, the company’s story since IPO has turned decisively sour. In January 2016, the company was forced to access the public markets again for a secondary public offering, a move that prompted a 60% share price fall within a month (Chart 5).
Underpinning all this is the fact that despite top-line growth, the company continues to incur substantial, and growing, operating losses. It’s evident from the financial statements that its operating performance has worsened over time, mainly because operating expenses have grown faster than revenue, leading to increasing losses as a percent of revenue (Table 2).
In each of the periods in question, Hortonworks spent more on sales and marketing than it earned in revenue. On top of that, the company incurred significant R&D and G&A expenses as well (Table 2).
On average, Hortonworks is burning around $100 million cash per year (less than its operating loss because of stock-based compensation expenses and changes in deferred revenue booked on the Balance Sheet). This amount is very significant when compared to its $630 million market capitalization and circa $350 million raised from investors so far. Of course, the company can still raise debt (which it did, in November 2016, to the tune of a $30 million loan from SVB), but there’s a natural limit to how often it can tap the debt markets.
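A back-of-the-envelope runway calculation puts that burn rate in context (figures approximated from the above; it ignores cash actually on hand, future revenue growth, and any additional fundraising):

```python
# Rough runway estimate at Hortonworks' stated burn rate.
# Illustrative only: assumes all raised capital is available as cash
# and that the burn rate stays constant.
capital_raised = 350     # $ millions raised from investors to date
svb_loan = 30            # $ millions of debt raised in November 2016
annual_cash_burn = 100   # $ millions of cash burned per year

runway_years = (capital_raised + svb_loan) / annual_cash_burn
print(f"Implied runway: {runway_years:.1f} years")  # about 3.8 years
```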
All of this might of course be justified if the marketing expense served an important purpose. One such purpose could be the company’s need to diversify its customer base. In fact, when Hortonworks first launched, the company was heavily reliant on a few major clients (Yahoo and Microsoft, the latter accounting for 37% of revenues in 2013). This has since changed, and by 2016, the company reported 1,000 customers.
But again, even if this were the reason, one cannot ignore the costs required to achieve it. After all, marketing expenses increased eightfold between 2013 and 2015. And how valuable are the clients that Hortonworks has acquired? Unfortunately, the company reports little information on the makeup of its client base, so it’s hard to assess other important metrics such as client “stickiness.” But in a competitive OSS market where “rival developers could build the same tools—and make them free—essentially stripping the value from the proprietary software,” strong doubts loom.
With all this in mind, returning to our original question of whether the OSS model makes for good VC investments, while the Hortonworks growth story certainly seems to counter Red Hat’s—and therefore sustain the idea that such investments can work from a VC standpoint—I remain skeptical. Hortonworks seems to be chasing market share at exorbitant and unsustainable costs. And while this conclusion is based on only two companies in the space, it is enough to raise serious doubts about the overall model’s fit for VC.
Given the above, it seems questionable that OSS companies make for good VC investments. So with this in mind, why do venture capital funds continue to invest in such companies?
Apart from going public and growing organically, an OSS company may find a strategic buyer, providing a good exit opportunity for its early-stage investors. And in fact, the sector has seen several high-profile acquisitions over the years (Table 3).
What makes an OSS company a good target? In general, the underlying strategic rationale for an acquisition might be as follows:
What about the financial rationale? The standard transaction-multiples valuation approach completely breaks down when it comes to the OSS market. Multiples reach 20x and even 50x price/sales and are therefore largely irrelevant, leading to the obvious conclusion that such deals are not financially but strategically motivated, and that the financial health of the target is more of a “nice to have.”
With this in mind, would a strategy of investing in OSS companies with the eventual aim of a strategic sale make sense? After all, there seems to be a decent track record to go on.
My assessment is that this strategy on its own is not enough. Pursuing such an approach from the start is risky—there are not enough exits in the history of OSS to justify the risks.
While the promise of a lucrative strategic sale may be enough to motivate VC funds to put money to work in the space, as discussed above, it remains a risky path. As such, it feels like the rationale for such investments must be reliant on other factors as well. One such factor could be returning to basics: building profitable companies.
But as we have seen in the case studies above, this strategy doesn’t seem to be working out so well, certainly not within the timeframes required for VC investors. Nevertheless, it is important to point out that both Red Hat and Hortonworks primarily focus on monetizing through offering support and consulting services. As such, it would be wrong to dismiss OSS monetization prospects altogether. More likely, monetization models focused on support and consulting are inappropriate, but others may work better.
In fact, the SaaS business model might be the answer. As per Peter Levine’s analysis, “by packaging open source into a service, […] companies can monetize open source with a far more robust and flexible model, encouraging innovation and ongoing investment in software development.”
Why is SaaS a better model for OSS? There are several reasons for this, most of which are applicable not only to OSS SaaS, but to SaaS in general.
First, SaaS opens the market for the long tail of SME clients. Smaller companies usually don’t need enterprise support and on-premises installation, but may already have sophisticated needs from a technology standpoint. As a result, it’s easier for them to purchase a SaaS product and pay a relatively low price for using it.
As MongoDB’s VP of Strategy, Kelly Stirman, puts it: “Where we have a suite of management technologies as a cloud service, that is geared for people that we are never going to talk to and it’s at a very attractive price point—$39 a server, a month. It allows us to go after this long tail of the market that isn’t Fortune 500 companies, necessarily.”
Second, SaaS scales well. SaaS creates economies of scale for clients by allowing them to save money on infrastructure and operations through aggregation of resources and a combination and centralization of customer requirements, which improves manageability.
This, therefore, makes it an attractive model for clients who, as a result, will be more willing to lock themselves into monthly payment plans in order to reap the benefits of the service.
Finally, SaaS businesses are more difficult to replicate. In the traditional OSS model, everyone has access to the source code, so the support and consulting business model offers the incumbent little protection from new market entrants.
In the SaaS OSS case, the investment required for building the infrastructure upon which clients rely is fairly onerous. This, therefore, builds bigger barriers to entry, and makes it more difficult for competitors who lack the same amount of funding to replicate the offering.
Importantly, OSS SaaS companies can be financially viable on their own. GitHub is a good example of this.
Founded in 2008, GitHub was able to bootstrap the business for four years without any external funding. The company has reportedly always been cash-flow positive (except for 2015) and generated an estimated $100 million in revenue in 2016. In 2012, the company accepted $100 million in funding from Andreessen Horowitz, and later, in 2015, $250 million from Sequoia at an implied $2 billion valuation.
Another well-known successful OSS company is Databricks, which provides commercial support for Apache Spark but—more importantly—allows its customers to run Spark in the cloud. The company has raised $100 million from Andreessen Horowitz, Data Collective, and NEA. Unfortunately, we don’t have much insight into its profitability, but the company is reported to be performing strongly and already had more than 500 companies using its technology as of 2015.
Generally, many OSS companies are in one way or another gradually drifting towards the SaaS model or other types of cloud offerings. For instance, Red Hat is prioritizing PaaS over support and consulting, as evidenced by OpenShift and the acquisition of AnsibleWorks.
Different ways of mixing support and consulting with SaaS are common too. We unfortunately don’t have detailed statistics on Elastic’s on-premises vs. cloud product offerings, but we can see from the presentation of its closest competitor, Splunk, that their SaaS offering is gaining scale: Its share of revenue is expected to triple by 2020 (Chart 6).
To conclude, while recent years have seen an influx of venture capital dollars poured into OSS companies, there are strong doubts that such investments make sense if the monetization models being used remain focused on the traditional support and consulting model. Such a model can work (as seen in the Red Hat case study) but cannot scale at the pace required by VC investors.
Of course, VC funds may always hope for a lucrative strategic exit, and there have been several examples of such transactions. But relying on this alone is not enough. OSS companies need to innovate around monetization strategies in order to build profitable and fast-growing companies.
The most plausible answer to this conundrum may come from switching to SaaS as a business model. SaaS allows a company to tap into the long tail of SME clients and improve margins through better product offerings. Quoting Peter Levine again, “Cloud and SaaS adoption is accelerating at an order of magnitude faster than on-premise deployments, and open source has been the enabler of this transformation. Beyond SaaS, I would expect there to be future models for open source monetization, which is great for the industry”.
Whatever ends up happening, the sheer amount of venture investment into OSS companies means that smarter monetization strategies will be needed to keep the open source dream alive.
This article was originally published on Toptal.
More likely than not, you have recently found yourself barraged with headlines regarding the border adjustment tax (BAT), a portion of the Republican House Tax Reform Blueprint intended to overhaul the current U.S. corporate tax code. The proposal has emerged in response to common criticism that the current corporate tax rate of 35% and offshore tax deferrals create incentives for multinational companies to outsource jobs, make offshore investments, and take on unnecessary domestic debt.
While there would surely be winners, losers, and an estimated $1 trillion raised in revenue with the implementation of the proposed tax code, it is difficult to determine its exact implications without the actual legislative language, which has yet to be provided. With the nation coming off the heels of a failed healthcare reform attempt, the GOP will be making tax reform its top priority. Regardless of which side you sit on, you will want to understand the potential implications.
According to the nonpartisan Tax Foundation, a border-adjustment tax conforms to the “destination-based” principle whereby the tax is levied based on where the good is consumed (destination), instead of where it was produced (origin). Put simply, a BAT taxes imports but not exports, creating incentives for companies to import less and export more—a significant shift for the U.S. economy, which is heavily dependent on global supply chains.
The House proposal applies a border adjustment to the U.S. corporate income tax. Per the plan, U.S. corporations would no longer be able to deduct the cost of purchases from abroad (imports) and would no longer be subject to taxes on revenue attributable to international sales (exports).
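To make the mechanics concrete, here is a simplified sketch with hypothetical figures: a U.S. company with $100M in revenue ($30M from exports) and $50M in costs ($20M of imported inputs), taxed at the proposed 20% rate. Real tax computations involve many more adjustments; this only illustrates the two border adjustments.

```python
# Simplified taxable-income comparison: current origin-based rules
# vs. the proposed border-adjusted (destination-based) rules.
# All figures are hypothetical, in $ millions.
revenue = 100          # total revenue
export_revenue = 30    # revenue attributable to international sales
costs = 50             # total costs
imported_costs = 20    # cost of purchases from abroad
rate = 0.20            # proposed corporate rate

# Current system: all revenue is taxable, all costs are deductible.
current_taxable = revenue - costs                                    # 50

# Border-adjusted: exports are excluded from revenue,
# and imports are no longer deductible.
bat_taxable = (revenue - export_revenue) - (costs - imported_costs)  # 40

print(f"Current taxable income: {current_taxable}")         # 50
print(f"Border-adjusted taxable income: {bat_taxable}")     # 40
print(f"Tax owed under the BAT: {bat_taxable * rate:.0f}")  # 8
```

In this example the company is a net exporter, so its taxable income falls; for a net importer (imports exceeding exports), the same two adjustments would push taxable income up instead.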
Despite common misconceptions, the border adjustment tax is neither a tariff nor a value-added tax. A tariff is a tax imposed only upon imports, and can be applied selectively to certain products, companies, or countries. In contrast, the border-adjustment tax in consideration would affect all imports and exports, and all countries.
In addition, the border adjustment tax is not a value-added tax (VAT), a taxation system widely adopted across the globe (employed by 140 of the world’s 193 countries). Corporations under a VAT are not permitted to deduct payroll from taxable income, while the proposed plan does permit payroll deductions. This seemingly insignificant detail could have crucial compliance implications for existing World Trade Organization (WTO) agreements, which will be discussed later in the article.
The major components of the House proposal include:
It is important to understand, then, that the border adjustment is only one element of the broader House proposal, a point some commentators tend to confuse.
With the changes outlined above, the new tax system would essentially become a “destination-based cash flow tax” (DBCFT). Here is a breakdown:
One other consideration in this scenario is the potential appreciation in the value of the dollar. According to economic theory, by exempting U.S. exports from taxes, the border adjustment would initially create higher demand for U.S. goods and U.S. dollars. Simultaneously, by taxing imported goods, there would be lower demand for foreign goods and currencies.
Thus, the expected combined result would be a rise in the value of the dollar, though economists are split on whether this would actually occur. If the currency rates work as intended, however, the value of the dollar would appreciate and the cost of purchasing imported goods would decrease.
Raise tax revenue: In the context of the broader proposal, a border adjustment would generate an estimated $1.1 trillion over the next ten years, which could be used to offset the loss in revenue resulting from the lower corporate tax rate.
Eliminate incentives to move profits offshore: It would neutralize the profit-shifting strategies currently utilized by multinational companies such as Apple and its Irish subsidiaries. Since import expenses cannot be deducted from taxable income, shifting costs to foreign subsidiaries no longer reduces a company’s domestic tax liability. On the flip side, exports are excluded from taxable income, so routing sales through low-tax jurisdictions similarly leaves tax liability unaffected. The proposal would thus eliminate incentives to place intellectual property abroad or load up domestic operations with debt.
Simplify the current tax code: This may seem counterintuitive given the seemingly complicated mechanics of border adjustment taxes. However, the main reason it would simplify the tax code is that it is easier for corporations to determine where their sales occurred than where production occurred. According to the Tax Foundation:
“It will probably turn out to be much less complicated than the byzantine tax rules that currently govern businesses today. The border adjustment would eliminate the need for firms to comply with our complex rules governing controlled foreign corporations (CFCs), passive foreign income (Subpart F), transfer pricing, interest allocation, foreign tax credits, and accounting for deferred taxes. Under a border adjustment, all companies would need to account for is what items they purchase from abroad and what products they send abroad.”
WTO violation: While the proposed plan is inspired by the consumption-based VAT, the possibility of it being income-based rather than consumption-based is at the root of much controversy. Consumption taxes do not allow for payroll, interest, or depreciation deductions, as they pertain not to taxable income but to consumption. The House proposal, crucially, includes a provision allowing payroll deductions from taxable income.
Consequently, according to KPMG, it is unclear whether the proposal would replace the current income tax with a consumption tax, or whether it would technically remain an income tax that closely mimics a consumption tax. This distinction has the potential to create inconsistencies with existing World Trade Organization commitments against protectionism. Compliance hinges on whether or not labor costs can be deducted from gross revenue to determine taxable income. If so, the reform would effectively be a corporate income tax with immediate 100% depreciation, disqualifying it as value-added, and would thus be considered a violation.
Increased consumer prices: Experts are divided as to whether the border adjustment tax would cause increased consumer prices. Some experts argue that businesses would almost certainly pass the cost increases on to consumers, who would experience price hikes in imported goods (including everything from foreign cars and gas to avocados and clothing). David French, SVP of government relations at the National Retail Federation, recently commented, “I really hope everybody understands that what they’re really talking about is a 20% tax on the U.S. consumer.”
There is a fear that this cost burden will be particularly difficult for working class and middle class families to shoulder. For example, if the tax includes oil imports, rural Americans will likely be more affected than the more affluent residing in cities.
Others argue that though the 20% import tax might be passed on to customers in the short to medium term, it would concurrently cause an appreciation in the value of the dollar that would eventually neutralize the additional consumer cost. Harvard economist Martin Feldstein believes that, in accordance with economic theory, the U.S. dollar would appreciate to 125% of its current value—enough to counter the expected 20% increase in the price of imported consumer goods.
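The arithmetic behind Feldstein's argument can be sketched as follows (a stylized model, not a forecast: it assumes the exchange rate fully and instantly adjusts). Losing the import deduction at a 20% corporate rate raises the effective after-tax cost of imports by a factor of 1/(1 - 0.20) = 1.25; an appreciation of the dollar to 125% of its current value cuts the pre-tax dollar price of imports by the same factor, leaving importers where they started:

```python
# Stylized illustration of the currency-offset argument.
# Hypothetical numbers; assumes full, immediate currency adjustment.
tax_rate = 0.20
import_cost = 100.0  # today's pre-tax dollar cost of imported goods

# With the deduction lost, the effective after-tax cost rises by 1/(1-t).
cost_after_bat = import_cost / (1 - tax_rate)    # 125.0

# If the dollar appreciates to 125% of its value, the same foreign
# goods cost 1/1.25 as many dollars as before.
cost_after_appreciation = import_cost / 1.25     # 80.0

# Both effects together: the effective cost is back where it started.
net_effective_cost = cost_after_appreciation / (1 - tax_rate)
print(round(net_effective_cost, 2))  # 100.0
```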
However, this assertion has met with skepticism, as critics doubt Washington’s ability to accurately predict future foreign currency exchange rates. Skeptics emphasize the sheer number of factors influencing such rates, including federal rate increases, commodity prices, and the overall strength of the U.S. economy.
Foreign retaliation: If the U.S. tries to implement an inconsistent tax regime, countries could appeal to the WTO and initiate investigations seeking compensation for illegal subsidies received by U.S. exports—ultimately risking a trade war. Opponents point to a risk of retaliation from other countries in response to the change in U.S. policy, potentially drawing $385 billion in tariffs from U.S. trading partners, according to the Peterson Institute for International Economics. The key trigger of this scenario would be a finding that the proposed changes violate existing WTO commitments, something which remains unclear given that the specifics of the proposal have yet to be finalized.
Given the significant effects of the BAT on certain countries (Chart 2), the risk of retaliatory policies is not insignificant should the BAT violate WTO rules. Perhaps unsurprisingly, Deutsche Bank AG economists Robin Winkler and George Saravelos found that Mexico, Canada, and some Asian countries (primarily Thailand and Malaysia) have much to lose should the proposal be implemented, as measured by net trade impact as a percentage of GDP. The fact that Mexico and Canada—two of the U.S.’s largest trading partners—already have the ability to utilize retaliatory tariffs on imports from the U.S. based upon a 2015 settlement by the WTO, makes this threat all the more concerning.
U.S. sectors would be affected at varying levels: Companies are often exposed more heavily on one side of the import/export equation (e.g., technology companies that export in high volumes would benefit from the policy, while retailers that import and sell in high volumes would be at a disadvantage). This imbalance would likely be criticized as prejudicial and is already creating sharp divisions among businesses.
Import-reliant companies might not be able to adjust to such an abrupt change: Opponents of the policy have voiced concerns that domestic businesses dependent on imported goods would be harmed by such an abrupt and drastic change. These companies have long been making strategic decisions and investments assuming a certain set of rules, and may be unable to adjust to the shift. Budget retailers, which lean heavily on imported goods, are particularly vulnerable.
American investors would be disadvantaged: If the plan works as intended, the appreciation of the dollar would hurt Americans who own foreign assets, such as mutual funds holding euro-denominated assets. The loss is estimated at more than $2 trillion.
Border adjustments have historically been popularized and utilized in the context of value-added taxes, a popular taxation system employed across the globe. However, it is a relatively novel concept when applied in the context of corporate income taxation—as is the case with the current U.S. tax reform proposal.
It is important to note that the proposed plan and the VAT are in fact distinct and possess key differences. For one, while the proposed plan is inspired by the consumption-based VAT, consumption taxes typically do not allow for payroll, interest, or depreciation deductions, as they are concerned not with taxable income but with consumption. However, the proposed plan, as mentioned previously, indeed allows for payroll deductions.
In addition, the VAT effectively acts as a sales tax with no competitive impact. According to the EU Taxation and Customs Union, businesses act as VAT collectors, while the end consumer actually carries the entire burden of the VAT. Consequently, consumers under the VAT system are comparable to U.S. consumers paying sales taxes on products. Moreover, as economist Paul Krugman reinforces throughout his widely cited paper, the VAT does not create subsidies or trade barriers.
Consider how imports (from the U.S.) and exports (to the U.S.) would be treated by a UK company under the VAT:
Exports: Under the U.S. sales tax system, American companies do not pay sales taxes on purchases made throughout production. However, the UK company pays VAT along the production process but cannot collect it from buyers of goods sold abroad. This is where a rebate is introduced and plays a crucial role: the system allows the UK company to reclaim the VAT already paid out.
Imports: If the UK company imports American goods and sells them, the consumer has to pay the VAT all the same. The UK company then turns this VAT over to the government. Therefore, the U.S. goods are treated in the same manner as the ones produced in the UK. Ultimately, the VAT is neutral.
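A small worked example makes this neutrality concrete. Assume a hypothetical 20% VAT rate and illustrative prices (the actual UK rate and real invoicing rules differ in detail):

```python
# Illustrative VAT mechanics for a UK company at a hypothetical 20% rate.
vat_rate = 0.20

# Export case: VAT of 20 was paid on 100 of inputs during production,
# but the sale abroad carries no UK VAT, so the rebate refunds it all.
inputs_cost = 100.0
vat_paid_on_inputs = inputs_cost * vat_rate         # 20.0
export_rebate = vat_paid_on_inputs                  # reclaimed in full
net_vat_on_export = vat_paid_on_inputs - export_rebate
print(net_vat_on_export)   # 0.0 -- exports leave the country VAT-free

# Import case: U.S. goods resold in the UK are charged VAT at the till,
# exactly like UK-produced goods; the consumer bears the tax.
sale_price = 200.0
vat_collected_from_consumer = sale_price * vat_rate
print(vat_collected_from_consumer)  # 40.0 -- remitted to the government
```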
Despite a lack of historical examples of border adjustments applied to income taxes, we can learn from past instances of high import taxes and foreign retaliation. As Jeremy Siegel of the University of Pennsylvania warns, “if protectionism does break out globally, it would be disastrous […] if there is a trade war, the market would react extremely negatively […] we’d be down 10% to 15%.”
In the early 2000s, in the biggest case in which the WTO has granted retaliation, the U.S. was found to be unfairly subsidizing exports through certain tax exemptions. As a result, in 2003, the WTO permitted the European Union’s (EU) adoption of $4.04 billion in retaliatory tariffs against the U.S. The EU then instituted tariffs on U.S.-based products, including everything from leather to nuclear reactors. In response, the U.S. eventually repealed the tax exemption, and the tariffs were removed.
In another instance, in 2009, a retaliatory tariff imposed by Mexico on the U.S. over cross-border trucking permits reduced the sales of certain U.S. farm products in Mexico by 22% over the course of 18 months—around $984 million in lost exports. While this number may not seem significant relative to cumulative annual exports, it is indicative of other countries’ willingness to take action against perceived injustices, and of the significant impact such action can have on targeted industries.
On the other hand, it is also worth noting that currency markets can respond quickly to U.S. policy changes, as the Mexican peso's frequent fluctuations during the 2016 presidential election demonstrated. In addition, over 140 countries include a border-adjusted tax as part of their VAT regimes, and there is a vast body of related literature showing why currencies would adjust.
However, the Tax Foundation warns that “even if currencies adjust quickly, some factors may slow the speed at which import prices adjust to those changes, including the fact that many goods are priced in dollars internationally.”
A potential alternative to the border adjustment tax would be a smaller straight tax cut. A lower corporate tax rate coupled with looser regulation could add upwards of 10% to corporate earnings, which could cause a ripple of growth across the larger economy.
Another option would be a partial or reduced border adjustment tax, which would maintain the overarching structure of the DBCFT but allow partial deductions for imports and partial taxation of exports. Tom Barrack, adviser to President Trump, suggested a border adjustment of 10% instead of 20%. However, this option would add complexity to the pure border adjustment model, and might have negative implications for revenue neutrality.
Alternatively, the U.S. could end the ability for companies to defer taxes on their foreign profits, which would remove the incentive for multinational companies to move their profits into offshore tax havens and raise almost $1 trillion in revenue. This could be paired with an effort to close existing tax loopholes in the tax code, such as requiring companies to pool their foreign tax credits and removing distortive tax expenditures such as accelerated depreciation or domestic manufacturing credit.
It is difficult to predict what will happen regarding the House proposal, especially given the President’s unclear stance on the matter. While some organizations are already positioning themselves in anticipation of its implementation, such as hedge funds increasing their exposure to futures and options linked to WTI (domestic crude oil), others, such as large retailers, are publicly voicing their fierce opposition.
Still, with the combination of the proposed tax reform, Brexit, and the European elections, we may see significant currency exchange volatility in the near future as the system absorbs and adjusts to these changes.
This article was originally published on Toptal.
My Wall Street journey started with Bloomberg in 2009. Since then I’ve held positions in equity research at a sell-side shop, as a senior analyst at a hedge fund, and eventually, as the director of research for a startup building research automation tools. During that time, I have produced hundreds of reports, models and recommendations on publicly traded equities.
As a result of this, I’ve been able to develop a profound understanding of the value, as well as the pitfalls of equity research. This article is intended to share with you some of the most important lessons I have learned, and guide you on how to use research reports more effectively.
Whilst the equity research industry worldwide has endured substantial declines since 2007, there are still roughly 10,000 analysts employed by investment banks, brokerages, and boutique research firms. Even more analysts offer their services independently or on a freelance basis, and still more voices contribute to blogs or sites like SeekingAlpha.
The reason for the above is clear: equity research provides a very useful function in our current financial markets. Research analysts share their insights and industry knowledge with investors who may not have them, or may not have the time to develop them. Relationships with equity research teams can also provide valuable perks, such as corporate access, to institutional investors.
Nevertheless, despite its obvious importance, the profession has come under fire in recent years.
Analysts’ margin of error has been studied, and some clear trends have been identified. Some onlookers bemoan the sell-side’s role in stimulating equity market cyclicality.
In some cases, outright conflicts of interest muddle the reliability of research, which has prompted regulation in major financial markets. These circumstances have generated distrust, with many hoping for an overhaul of the business model.
With the above in mind, I’ve divided this article into two sections. The first section outlines what I believe the main value of equity research is for both sophisticated and retail investors. The second looks at the pitfalls of this profession and its causes, and how you should be evaluating research in order to avoid these issues.
As sophisticated or experienced investors, you likely have your own set of highly developed valuation techniques and qualitative criteria for investments. You will almost certainly do your own due diligence before investing, so outside parties’ recommendations may have limited relevance.
Even with your wealth of experience, here are some things to consider.
To make the most of your time as a buy-side professional, focus on the research aspects that complement your internal capabilities. Delegation is vital for every successful business, and asset management is no different. External parties' research can help you:
Enhanced Corporate Access
Regulations prevent corporate management teams from selectively providing material information to investors, which creates limitations for large fund managers, who often need specific information when evaluating a stock.
To circumvent this, fund managers often have the opportunity to meet corporate management teams at events hosted by sell-side firms that have relationships with executives of their research subjects.
Buy-side clients and corporate management teams often attend conferences that include one-on-one meetings and breakout sessions with management, giving institutional investors a chance to ask specific questions.
Language around corporate strategies, such as expansion plans, turnarounds, or restructuring, can be vague in conference calls and filings, so one-on-one meetings provide an opportunity to drill down on these plans to confirm feasibility.
Tactful management teams can confirm the legitimacy and plausibility of strategic plans without violating regulations, and it should quickly become obvious if that plan is ill-conceived.
Institutional clients of sell-side firms also have the opportunity to communicate the most relevant topics that they want to see addressed by company management in quarterly earnings conference calls and reports.
The sell-side analyst’s public role and relationship with corporate management also allow the analyst to strategically probe for deeper insights. Generally, good equity research demonstrates the analyst’s emphasis on teasing out information that is most relevant to institutional clients. This often requires artful posing of incisive questions, which allows management teams to reach an optimal balance of financial outlook disclosure.
Buy-side analysts and investors have a massive volume of sell-side research to comb through, especially during earnings season, so succinct, analytical pieces are always more valuable than reports that simply relay information presented in press releases and financial filings. If these revelations echo the interest or concerns of investors, the value is immediately apparent.
Outsourcing Tedious or Low-ROI Aspects of Research
Smaller buy-side shops may lack the resources to monitor entire sectors for important trends. These asset managers can effectively widen their net by consuming research reports. Sell-side analysts tend to specialize in a specific industry, so they closely track the performance of competitors and external factors that might have sector-wide influence. This provides some context and nuance that might otherwise go unnoticed when concentrating on a smaller handful of positions and candidates.
Research reports can also be useful ways for shareholders to spot subtle red flags that might not be apparent without carefully reading through lengthy financial filings in their entirety. Red flags might include changes to reporting, governance issues, off-balance sheet items, and so forth.
Buy-side investors will of course dig into these issues on their own, but it’s useful to have multiple eyes (and perspectives) on portfolio constituents that may number in the hundreds. Likewise, building a historical financial model can be time consuming without providing the best ROI for shops with limited resources. Sell-side analysts build competent enough models, and investors can maximize their added value by focusing entirely on superior forecasting or a more capable analysis of the prepared financials.
Identifying investment opportunities falls under the umbrella of tedious activities, since effectively screening an entire market or sector can be overwhelming for a smaller team at some buy-side shops. As such, idea generation has become an important element of some sell-side firms’ offerings. This is especially pronounced regarding small and medium cap stocks, which may be unknown or unfamiliar to institutional investors.
It’s simply impractical for most buy-side teams to cover the entire investable universe.
Research teams fill a niche by identifying promising smaller stocks or analyzing unheralded newcomers to the market. They can then bring these to the attention of institutional investors for further scrutiny.
Reports might be most useful for sophisticated investors as an opportunity to develop a meta perspective. Stock prices are heavily influenced by short-term factors, so investors can learn about price movements by monitoring the research landscape as a whole.
Consuming research also allows investors to take the temperature of the industry, so to speak, and compare current circumstances to historical events. History has a way of repeating itself in the market, driven in part by the industry’s tendency to shake out professionals during crashes and pull in new ones during bull runs.
Having a detached perspective can help shed light on cyclical trends, making it easier to identify ominous signals that might be lost on the less acquainted eye. In turn, this drives idea generation for new investing opportunities.
With that being said, investors should not consume research that only confirms their own bias, a powerful force that has clearly contributed to historical booms and busts in the market.
The value of equity research is much more straightforward for retail investors, who are usually less technically proficient than their institutional counterparts.
Retail investors vary substantially in sophistication, but they mostly lack the resources available to institutional investors. Research can supplement deficiencies on some basic levels, helping investors with modeling guidance, framing an investment narrative, identifying relevant issues, or providing buy/sell recommendations.
These are great starting points for retail investors seeking value out of research. Individuals probably can’t consume the sheer volume of reports that asset managers can, so they should lean on the expertise of trustworthy professionals.
Despite the above, there are clear risks to over-relying on equity reports to make trading decisions. To properly assess the dangers of equity research, one must consider the incentives and motivations of research producers.
Regulations, professional integrity, and scrutiny from clients and peers all play significant roles in keeping research honest, but other factors are also at play. Research consumers should be mindful of the producer’s business model. I highlight the five key issues below.
Reports produced by banks and brokers are usually created to drive revenue. Sell-side equity research is usually an add-on business for investment banks that earn significant fees as brokers and/or market makers in traded securities and commodities.
As a result of this, research tends to focus on highlighting opportunities for buying and selling stocks. If research reports only included forecasts of “steady” performance, this would result in less trading activity. The incentive for research analysts is therefore to come out with predictions of a change in performance (whether up or down).
This isn’t to say the content of the research is compromised, but that opinions on directional changes in performance may be exaggerated.
Numerous academic studies and industry white papers are dedicated to estimating the magnitude of analyst error. The findings have varied over time, between studies and among different market samples. But all consistently find that analyst estimates are not particularly accurate.
Andrew Stotz indicates an average EPS (earnings per share) estimate error of 25 percent from 2003 to 2015, with an annual minimum under 10 percent and annual maximum over 50 percent.
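To make concrete what a figure like a "25 percent average estimate error" measures, here is a minimal sketch of one common way such an error is computed: the absolute difference between estimate and actual, as a fraction of the actual. The function name and the sample estimates are hypothetical, and this is only my reading of the metric, not Stotz's exact methodology.

```python
# Hypothetical illustration of an EPS estimate-error metric:
# absolute percentage error of each estimate versus the realized figure.

def eps_estimate_error(estimate, actual):
    """Absolute error of an EPS estimate, as a fraction of the actual EPS."""
    return abs(estimate - actual) / abs(actual)

# Made-up consensus estimates vs. realized EPS for a few years
history = [(2.50, 2.00), (1.80, 2.10), (3.30, 3.00)]

errors = [eps_estimate_error(est, act) for est, act in history]
avg_error = sum(errors) / len(errors)
print(f"average estimate error: {avg_error:.1%}")  # about 16.4% for this sample
```

Note that the first estimate alone is off by 25%, even though two of the three years look only modestly wrong, which is how headline averages in the 25% range can coexist with many individually "reasonable" forecasts.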
Another important point to consider is that analysts generally benefit from having a positive working relationship with the executive management and investor-relations teams of their research subjects.
Analysts rely on corporate management teams to provide more specific and in-depth information on company performance that is not otherwise publicly provided.
Brokers also provide value to investors by providing seminars and one-on-one meetings that are attended by those executive teams, so a strong roster of presenters at conferences can be dependent on the relationships built by the research teams.
I’ve witnessed the negative impact of a souring relationship:
One of the tech analysts at a company I worked for covered a small-cap stock and built up a good relationship with both the corporate management team and the investor relations person who worked with them. For a small company, this visibility was valuable.
Poor results one quarter were downplayed by management as a temporary speed bump, but the analyst had concerns that these were longer-term problems.
The executive team was upset when the subsequent report negatively portrayed the recent events, and soon thereafter withdrew from an upcoming conference our research firm hosted. The analyst’s bonus compensation suffered as a result as well.
Obviously, this all has very serious implications.
Sell-side analysts tend to produce reports that portray their subjects favorably, and they are more likely to set attainable expectations. It also means that analysts may hesitate to downgrade a company’s stock. This is especially true when poor executive management would be the primary culprit causing a downgrade. All of this may help explain why EPS estimates are disproportionately bullish (see chart 2).
Equity research reports are influenced by the means by which analyst quality is measured. Analysts compete with peers at rival banks.
Taking a contrarian stand that results in bad recommendations could genuinely harm an analyst’s compensation and career prospects. These mechanisms create a herd-like mentality.
This can be extremely detrimental if it goes unnoticed or unaddressed by research consumers. These are precisely the sorts of circumstances that fuel bubbles. It’s hard to look especially foolish if everyone else looks foolish too.
Research produced by independent firms, which derive substantially all of their revenues from the sale of research and do not maintain a brokerage business, is not meant to motivate trading.
This eliminates some of the conflicts of interest inherent in the sell-side banks, putting extra emphasis on accuracy. However, independent and boutique analysts are tasked with creating income by selling something that’s been largely commoditized, and they compete with banks that have vast resources.
To survive, they have to offer something specialized or contrarian. Their philosophy must radically depart from the herd. They have to claim special industry knowledge through independent channels, or they must cover stocks that are largely uncovered by the larger powers in the research space.
This creates the incentive for sensationalized research that can attract attention amongst the sea of competing reports.
As we have seen, there are important facets of the equity research profession that often lead to skewed incentive mechanisms, and may ultimately compromise the quality of research being done.
To be fair, the practice of making complex and precise forecasts is necessarily flawed by the requirement to make assertions about future conditions, which by definition are unknowns.
Nevertheless, whilst this might be acceptable if the errors appeared random and fell within a predictable margin, this is not the case.
According to Factset research, consensus 12-month forward EPS estimates for S&P constituents were about 10 percent higher than the actual figures for the years 1997-2011. These numbers are skewed by large misses, and the median error is only 5.5 percent, but the direction of the bias is clear: estimates consistently overshoot.
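The gap between a roughly 10 percent mean error and a 5.5 percent median error is the signature of a right-skewed distribution: a handful of large misses drag the mean well above the typical case. The numbers below are invented purely to illustrate that mechanism, not taken from the Factset data.

```python
import statistics

# Hypothetical forecast errors (as fractions of actuals) for ten stocks:
# most are small, but two large misses pull the mean far above the median,
# mirroring the skew pattern described in the text.
errors = [0.03, 0.04, 0.05, 0.05, 0.06, 0.06, 0.07, 0.08, 0.45, 0.60]

mean_error = statistics.mean(errors)
median_error = statistics.median(errors)
print(f"mean: {mean_error:.1%}, median: {median_error:.1%}")
```

Here the mean comes out near 15% while the median sits at 6%, so a consumer who only sees the average would overstate how wrong the typical forecast is, while one who only sees the median would miss the tail risk of large misses.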
There are several ways that consumers of research reports can judge the validity and quality of such reports, in light of what’s been discussed above.
Analyst credentials are an obvious method for vetting quality. A CFA designation doesn’t necessarily guarantee quality, but it indicates a baseline level of competency.
Research produced by reputable banks ensures that it was created and reviewed by a team of professionals with impressive resumes and highly competitive skills. Likewise, veterans of big banks who leave to start an independent shop have been implicitly endorsed by the HR departments of their former employers.
Producers can also be distinguished by specialized backgrounds, for example former doctors turned healthcare analysts or engineers weighing in on industrial and energy stocks. Recognition in the financial media is another strong signal of quality.
Investors can also look at historical analyst recommendations and forecasts to determine their credibility.
Institutional Investor provides a service that tracks analyst performance, and there are similar resources available, especially for investors with deep pockets.
Fintech startups, such as Estimize, actually focus on attracting and monitoring analyst recommendations to identify the most talented forecasters.
However, while third parties and financial media offer helpful ranking systems based on earnings forecasts or the performance of analyst recommendations, these tend to emphasize short-term accuracy. They may therefore be less useful for consumers with a long-term approach or an emphasis on navigating black swans.
Investors should consider these factors and look for red flags that an analyst is hesitant to turn bearish. These could include shifting base assumptions to maintain growth forecasts and target prices, suddenly shifting emphasis away from the short-term to the long-term outlook, or perhaps an apparent disconnect between the material presented in the article and the analyst’s conclusions.
Investors should also consider context.
Some stocks simply don’t lend themselves to reliable research. They might have volatile financial results, an unproven business model, untrustworthy management, or limited operating history, all of which can lead to wide margins of error for earnings growth and intrinsic value.
The phase of the business cycle is also extremely important. Research shows that forecasts are less reliable in downturns, yet investors are also more likely to lean heavily on research during these times. Failure to recognize these issues can severely limit one’s ability to glean full value from research.
Research consumers should also make sure they know their own investment goals and be mindful about how these differ from those implied by research reports. A temporal mismatch or disconnect in risk aversion could completely alter the applicability of reports.
For those fund managers wishing for a more reliable research product, the most effective move is to vote with your wallets, and buy reports from the most accurate and conflict-free sources.
It might be expensive to source research from multiple sources, but there’s value in diversifying the viewpoints to which you are exposed.
It’s unlikely that you will move on from low-quality research if that’s all you are reading.
Additionally, consuming a large volume of research from different sources helps forge a meta perspective, allowing investors to identify and overcome worrisome trends in research.
The equity research industry has undergone profound changes in recent years, due to regulatory changes, the emergence of independent research shops, as well as more automated methods of analyzing public company performance.
At the same time, smart investors are looking at broader sets of investments and taking a more active approach to research. This is facilitated by the increasing quantity of publicly available information on listed companies.
Nevertheless, good research continues to be extremely valuable.
It lets you manage a wider pipeline of investment opportunities and be more efficient.
An informed and thoughtful approach can enhance the value of research reports for investors, so asset managers can better serve clients (and their own bottom lines) by considering the content above.
Research consumers need to be wary of predictable errors, analyst incentives, conflicts of interest, and the prevalence of herd-like mentality.
If you can adjust your interpretation of research along these lines, then you can focus on the most valuable aspects of research, namely idea generation, corporate access, and delegation of time-consuming activities.