Anastasia Zakolyukina studies corporate governance and incentives, accounting manipulation, linguistic analysis of disclosures, and accounting-based risk assessment. Her most recent work, "Detecting Deceptive Discussions in Conference Calls," examines the prediction of misstatements from the conference-call narratives of CEOs and CFOs. This study has been mentioned in The Economist, NPR, the Wall Street Journal, the New York Times, CBC, CNBC, and Bloomberg.
Zakolyukina earned her Ph.D. in Business Administration from the Stanford Graduate School of Business. Additionally, she holds an M.A. in Economics from the New Economic School. Before pursuing graduate studies, Zakolyukina studied at Udmurt State University, where she earned dual degrees in Information Systems and Law.
Outside of academia, Zakolyukina has worked as an analyst at the Center for Economic and Financial Research in Moscow and as a short-term consultant at the World Bank (International Bank for Reconstruction and Development).
Corporate governance and incentives, accounting manipulation, linguistic analysis of disclosures, accounting-based risk assessment
REVISION: Accounting Fundamentals and Systematic Risk: Corporate Failure over the Business Cycle
In this paper, we use accounting fundamentals to measure the systematic risk associated with corporate failure. Prior literature, despite compelling theoretical arguments, finds little evidence of a distress risk premium associated with a firm’s probability of failure. We use a stylized model to show that a stock’s expected return depends not only on the likelihood of failure but also on the phase of the business cycle in which the firm is more likely to fail. We develop a statistical model that, based on a firm’s accounting fundamentals, predicts whether the firm’s failure will coincide with a recession. We validate the resulting probability estimates out of sample and use them to construct a measure of recessionary failure risk. Return prediction tests suggest that our approach successfully extracts systematic distress risk information from accounting data: for stocks in the top quintile of distress, a median hedge portfolio based on our measure generates 10% per annum.
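The hedge portfolio test described in this abstract can be illustrated in a few lines of code. Everything below is a hypothetical sketch: the stock universe, the distress probabilities, the recessionary-failure-risk scores, and the returns are all simulated, and the sorting scheme is only one plausible reading of such a test, not the paper's actual procedure.

```python
import random

random.seed(0)

# Hypothetical universe: each stock carries a failure probability and a
# recessionary-failure-risk score (in the paper these would come from the
# statistical model); returns are simulated, not real data.
stocks = [
    {"id": i,
     "distress": random.random(),
     "recession_risk": random.random(),
     "ret": random.gauss(0.08, 0.20)}   # simulated annual return
    for i in range(500)
]

# Keep only the top distress quintile, as in the return prediction tests.
stocks.sort(key=lambda s: s["distress"], reverse=True)
top_quintile = stocks[: len(stocks) // 5]

# Within that quintile, go long the stocks with the highest recessionary
# failure risk and short those with the lowest (a hedge portfolio).
top_quintile.sort(key=lambda s: s["recession_risk"], reverse=True)
n = len(top_quintile) // 5
long_leg = top_quintile[:n]
short_leg = top_quintile[-n:]

hedge_return = (sum(s["ret"] for s in long_leg) / n
                - sum(s["ret"] for s in short_leg) / n)
print(f"simulated hedge portfolio return: {hedge_return:.2%}")
```

With simulated inputs the resulting return is arbitrary; the sketch only shows the mechanics of a long–short sort on a risk measure.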
REVISION: How Common Are Intentional GAAP Violations? Estimates from a Dynamic Model
This paper estimates the extent of undetected misstatements that violate GAAP, using data on detected misstatements (earnings restatements) and a dynamic model. The model features a CEO who can manipulate his firm’s stock price by misstating earnings. I find that the CEO’s expected cost of misleading investors is low. The probability of detection over a five-year horizon is 13.91%, and the average misstatement, if detected, results in an 8.53% loss in the CEO’s wealth. This low expected cost implies that a high fraction of CEOs, 60%, misstate earnings at least once; that stock-price inflation is 2.02% across CEOs who misstate earnings; and that it is 0.77% across all CEOs. Wealthier CEOs, with higher equity holdings or higher cash wealth, manipulate less, and the average misstatement is larger in smaller firms.
REVISION: CEO Personality and Firm Policies
Based on two samples of high-quality personality data for chief executive officers (CEOs), we use linguistic features extracted from conference calls and statistical learning techniques to develop a measure of CEO personality in terms of the Big Five traits: agreeableness, conscientiousness, extraversion, neuroticism, and openness to experience. These personality measures have strong out-of-sample predictive performance and are stable over time. Our measures of the Big Five personality traits are associated with financing choices, investment choices, and firm operating performance.
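As a rough illustration of the approach in this abstract, the sketch below fits a linear model from word-category rates to a personality score and checks it out of sample. The word categories, the data-generating process, and the choice of ridge regression fitted by gradient descent are all assumptions for illustration; they stand in for the paper's actual linguistic features and statistical learning techniques.

```python
import random

random.seed(1)

# Hypothetical training data: each CEO's calls are summarized by rates of a
# few word categories (e.g. positive emotion, achievement, tentative words),
# paired with a rated trait score. All numbers here are simulated.
def make_example():
    x = [random.random() for _ in range(3)]              # word-category rates
    y = 2.0 * x[0] - 1.0 * x[2] + random.gauss(0, 0.1)   # simulated trait score
    return x, y

train = [make_example() for _ in range(200)]

# Ridge regression fitted by batch gradient descent (a simple stand-in for
# the paper's statistical learning techniques).
w, lam, lr = [0.0, 0.0, 0.0], 0.01, 0.05
for _ in range(2000):
    grad = [lam * wj for wj in w]                # ridge penalty gradient
    for x, y in train:
        err = sum(wj * xj for wj, xj in zip(w, x)) - y
        for j in range(3):
            grad[j] += err * x[j] / len(train)   # averaged data gradient
    w = [wj - lr * gj for wj, gj in zip(w, grad)]

# Out-of-sample predictions on fresh simulated calls.
test = [make_example() for _ in range(50)]
preds = [sum(wj * xj for wj, xj in zip(w, x)) for x, _ in test]
print("learned weights:", [round(wj, 2) for wj in w])
```

The learned weights recover the simulated relation (a positive loading on the first category, a negative one on the third), which is the sense in which such a measure can have out-of-sample predictive performance.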
REVISION: Detecting Deceptive Discussions in Conference Calls
We estimate classification models of deceptive discussions during quarterly earnings conference calls. Using data on subsequent financial restatements (and a set of criteria to identify especially serious accounting problems), we label each call as “truthful” or “deceptive.” Our models are developed from word categories that prior psychological and linguistic research has shown to be related to deception. Using conservative statistical tests, we find that the out-of-sample ...
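A minimal sketch of this kind of classification exercise is below. It trains a logistic regression on simulated word-category rates labeled by a simulated restatement outcome; the two features, the data-generating process, and logistic regression itself are illustrative assumptions, not the paper's actual word categories or models.

```python
import math
import random

random.seed(2)

# Hypothetical data: each call is a vector of word-category rates (e.g.
# first-person pronouns, negative emotion words), labeled 1 if the quarter
# was later restated ("deceptive") and 0 otherwise. All values simulated.
def make_call(deceptive):
    base = [0.6, 0.3] if deceptive else [0.4, 0.5]
    return [b + random.gauss(0, 0.1) for b in base], deceptive

calls = [make_call(i % 2) for i in range(300)]

# Logistic regression via batch gradient descent -- one simple stand-in for
# the classification models the abstract describes.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(1000):
    gw, gb = [0.0, 0.0], 0.0
    for x, y in calls:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1 / (1 + math.exp(-z))               # predicted P(deceptive)
        for j in range(2):
            gw[j] += (p - y) * x[j] / len(calls)
        gb += (p - y) / len(calls)
    w = [wi - lr * gi for wi, gi in zip(w, gw)]
    b -= lr * gb

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

accuracy = sum(predict(x) == y for x, y in calls) / len(calls)
print(f"in-sample accuracy: {accuracy:.0%}")
```

In-sample accuracy on simulated data says nothing about real calls, of course; the paper's tests are out of sample, which is what the held-out evaluation in a real pipeline would mirror.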