Anderson Monken

Recent Research and Publication Summary

Policy Making for International Agricultural Trade using Association Rules and Ensemble Machine Learning

Keywords: AI, ensemble machine learning, association rules mining

Link: https://www.sciencedirect.com/science/article/pii/S2666827021000232

Abstract: International economics has a long history of improving our understanding of factors causing trade, and the consequences of free flow of goods and services across countries. The recent shocks to the free-trade regime, especially trade disputes among major economies as well as black swan events (such as trade wars and pandemics), raise the need for improved predictions to inform policy decisions. Artificial Intelligence (AI) methods are allowing economists to solve such prediction problems in new ways. In this manuscript, we present novel methods that predict and associate food and agricultural commodities traded internationally. Association Rules (AR) analysis has been deployed successfully for economic scenarios at the consumer or store level (such as for market basket analysis). In our work, however, we present an analysis of import/export associations and their effects on country-commodity trade flows. Moreover, Ensemble Machine Learning (EML) methods are developed to provide improved agricultural trade predictions, outlier events’ implications, and quantitative pointers to policy makers.
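
As an illustration of the association-rules component described above, the following is a minimal sketch using mlxtend's apriori and association_rules on a toy one-hot country-by-commodity export matrix; the commodities, countries, and thresholds are invented for illustration and are not the paper's data or settings.

```python
# A minimal sketch (not the paper's code): mining commodity-pair association
# rules with mlxtend on a toy one-hot country-by-commodity export matrix.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Rows are countries; columns mark whether the country exports the commodity.
trade = pd.DataFrame(
    {
        "wheat":    [1, 1, 0, 1, 0],
        "soybeans": [1, 1, 1, 1, 0],
        "beef":     [0, 1, 1, 1, 1],
        "coffee":   [0, 0, 1, 0, 1],
    },
    index=["A", "B", "C", "D", "E"],
).astype(bool)

# Frequent commodity itemsets, then rules such as {wheat} -> {soybeans}.
itemsets = apriori(trade, min_support=0.4, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```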

Graph Neural Networks for Modeling Causality in International Trade

Keywords: graph neural networks, international trade

Link: https://journals.flvc.org/FLAIRS/article/view/128485

Abstract: Neural network algorithms have proven successful for accurate classifications in many domains such as image recognition and semantic parsing. However, they have long suffered from the lack of ability to measure causality, predict outliers effectively, or provide explainability relevant to the application domain. In this paper we introduce a method that measures causal scenarios during outlier events using neural networks: Artificial Intelligence Network Explanation of Trade (AINET). AINET tailors AI techniques specifically for bilateral trade modeling. Datasets with network-like structures (such as global trade, social networks, or city traffic) can benefit from Graph Neural Networks (GNNs) modeling and structural power. These network-based models (i.e. GNNs) empower policy makers with an understanding of the fast-paced shifts in trade flows around the world due to outlier events such as increased tariffs, natural disasters, embargoes, pandemics, or trade wars. Our work is at the intersection of GNNs’ optimization, causality, and their proper application to trade. AINET results are presented with an overall test mean absolute percentage error (MAPE) of 28%, demonstrating the efficacy and potential of harnessing this method.
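
The abstract does not spell out AINET's architecture, so the sketch below is only a generic graph neural network regressor built with PyTorch Geometric, together with the mean absolute percentage error (MAPE) metric reported above; the toy graph, feature sizes, and layer choices are assumptions for illustration.

```python
# Generic GCN regressor sketch (not AINET itself) plus the MAPE metric the
# abstract reports; graph, features, and sizes below are illustrative only.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class TradeGCN(torch.nn.Module):
    def __init__(self, num_features, hidden=16):
        super().__init__()
        self.conv1 = GCNConv(num_features, hidden)
        self.conv2 = GCNConv(hidden, 1)  # one predicted trade value per node

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index).squeeze(-1)

def mape(y_true, y_pred):
    """Mean absolute percentage error, the metric reported for AINET."""
    return torch.abs((y_true - y_pred) / y_true).mean() * 100

# Toy graph: 4 country nodes, 4 directed trade edges, 3 features per node.
x = torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
y = torch.rand(4) + 0.5

model = TradeGCN(num_features=3)
pred = model(x, edge_index)
print("untrained MAPE (%):", mape(y, pred).item())
```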

Artificial Intelligence Methods for Evaluating Global Trade Flows

Keywords: AI, tree-based methods, international trade, association rules mining

Link: https://www.federalreserve.gov/econres/ifdp/files/ifdp1296.pdf

Abstract: International trade policies remain in the spotlight given the recent rethink on the benefits of globalization by major economies. Since trade critically affects employment, production, prices, and wages, understanding and predicting future patterns of trade is a high priority for decision making within and across countries. While traditional economic models aim to be reliable predictors, we consider the possibility that Artificial Intelligence (AI) techniques allow for better predictions and associations to inform policy decisions. Moreover, we outline contextual AI methods to decipher trade patterns affected by outlier events such as trade wars and pandemics. Open-government data are essential to providing the fuel to the algorithms that can forecast, recommend, and classify policies. Data collected for this study describe international trade transactions and commonly associated economic factors. Models deployed include Association Rules for grouping commodity pairs, and ARIMA, GBoosting, XGBoosting, and LightGBM for predicting future trade patterns. Models and their results are introduced and evaluated for prediction and association quality with example policy implications.
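
As a rough illustration of the forecasting comparison described above, the sketch below pits an ARIMA baseline against a LightGBM model trained on lagged features of a synthetic series; the series, lag construction, and hyperparameters are placeholders rather than the paper's specification.

```python
# Hedged sketch of the forecasting comparison: an ARIMA baseline versus a
# LightGBM model on lagged features of a synthetic series. The data, lags,
# and hyperparameters are placeholders, not the paper's specification.
import numpy as np
import pandas as pd
import lightgbm as lgb
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = pd.Series(np.cumsum(rng.normal(size=120)) + 100)  # toy monthly trade series
train, test = y.iloc[:108], y.iloc[108:]

# ARIMA baseline forecast over the 12-period holdout.
arima_fc = np.asarray(ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=len(test)))

# LightGBM on lagged values (one-step-ahead framing).
def make_lags(series, n_lags=12):
    lags = pd.concat({f"lag_{k}": series.shift(k) for k in range(1, n_lags + 1)}, axis=1)
    lags = lags.dropna()
    return lags, series.loc[lags.index]

X, target = make_lags(y)
X_train, y_train = X.loc[:107], target.loc[:107]
X_test = X.loc[108:]
lgb_fc = lgb.LGBMRegressor(n_estimators=200).fit(X_train, y_train).predict(X_test)

mape = lambda actual, fc: np.mean(np.abs((actual - fc) / actual)) * 100
print("ARIMA MAPE (%):   ", mape(test.to_numpy(), arima_fc))
print("LightGBM MAPE (%):", mape(test.to_numpy(), lgb_fc))
```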

AI Assurance Book – Chapter on AI for Economic Policymaking

Keywords: AI Assurance, AI explainability, LIME, Shapley, PDP, large language models (LLM), government transparency

Details: Forthcoming early 2022

Abstract: The first component of the book chapter is time series forecasting using neural networks. Through effective interpretation methods such as local interpretable model-agnostic explanations (LIME) and Shapley values, policymakers receive both improved forecasting performance and a comprehensive explanation of the prediction. The second area is network effects for international trade. The study of these network dynamics benefits from graph neural networks. Understanding these complex models requires innovative AI assurance methods. Graph-related explainability techniques GraphLIME and TraP2 will be considered. The final section is textual analytics for building economic indices. LIME methods for explaining textual models will be implemented.
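
The chapter's models are not yet public, so the following is only a generic sketch of Shapley-value explanations (via the shap package) applied to a placeholder forecasting model; a LIME workflow would follow the same pattern. The feature names, data, and model are illustrative assumptions.

```python
# Generic sketch of Shapley-value explanations for a placeholder forecasting
# model using the shap package (a LIME workflow would be analogous). The
# feature names, data, and model below are illustrative assumptions.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = pd.DataFrame(
    rng.normal(size=(200, 3)),
    columns=["lag_gdp", "lag_cpi", "lag_trade"],  # hypothetical predictors
)
y = 0.6 * X["lag_gdp"] - 0.3 * X["lag_cpi"] + rng.normal(scale=0.1, size=200)

model = GradientBoostingRegressor().fit(X, y)

# Shapley values attribute each prediction to the input features.
explainer = shap.Explainer(model, X)
explanation = explainer(X.iloc[:5])
print(explanation.values)       # per-feature attributions for five predictions
print(explanation.base_values)  # the model's expected prediction
```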

Harnessing AI Methods to Improve Multi-Country Macroeconomic Forecasting

Keywords: ARIMA, ARMAX, graph neural networks, forecasting

Details: Project in the exploratory and literature review stage; expected to become a working paper or conference paper in Q3/Q4 2021. Collaboration with fellow Georgetown faculty member Purna Gamage.

Abstract: We aim to harness the power of neural networks to explore the interdependency of country-level macroeconomic indicators to improve forecasting. Our recent work on graph neural networks for international trade demonstrates that network-based analysis of economic data can yield strong results. We plan to use a variety of macroeconomic indicators to predict GDP/CPI using a spatiotemporal graph neural network that applies neighborhood effects, so that weak GDP in one country can affect forecasts for close neighbors. This prediction will be performed jointly, so that we optimize a single model that can predict macroeconomic indicators for all countries in the dataset. Comparisons to the performance of dynamic factor models (DFMs) and traditional forecasting techniques will show the benefits of this AI-based approach.
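
As a minimal illustration of the neighborhood-effect idea behind the planned spatiotemporal graph neural network, the sketch below averages neighbors' GDP growth through a row-normalized adjacency matrix, the basic message-passing building block; the countries, links, and numbers are made up.

```python
# Minimal illustration of the neighborhood-effect idea: a row-normalized
# adjacency matrix averages neighbors' GDP growth, the basic building block
# of graph message passing. Countries, links, and numbers are made up.
import numpy as np

countries = ["US", "CA", "MX", "DE", "FR"]
# 1 marks an (illustrative) trade or geographic link between two countries.
A = np.array([
    [0, 1, 1, 1, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
A_norm = A / A.sum(axis=1, keepdims=True)  # each row sums to one

gdp_growth = np.array([2.1, 1.8, 1.5, -0.5, 0.7])  # own-country indicator
neighbor_signal = A_norm @ gdp_growth               # average of neighbors' growth

for country, own, nbr in zip(countries, gdp_growth, neighbor_signal):
    print(f"{country}: own growth {own:+.1f}, neighborhood signal {nbr:+.1f}")
```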

Practical Guide to Effective Textual Analytics at Scale: Using Earnings Calls to Conduct Sentiment Analysis

Keywords: big data, Hadoop, Hive, shiny, dashboard, data infrastructure, text analytics, PySpark

Details: Project in progress with expected draft in Q4 2021

Abstract: This project serves as a roadmap for economic researchers and data scientists to conduct textual analysis on big data. S&P provides 70 million statement-level texts that require effective distributed computing to produce meaningful time-series sentiment indicators at the firm, industry, and aggregate levels. We discuss the benefits of parallel and distributed computing for big data work and outline the most prevalent choices for harnessing computing with PySpark and Hadoop using workstations, on-premises clusters, and the cloud. Several S&P-specific methods will be explained to produce effective topical sentiment indices.
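
The sketch below illustrates one way such a distributed lexicon-based pipeline could look in PySpark, scoring statement-level text and aggregating to a firm-by-quarter index; the input path, schema, columns, and lexicon are hypothetical placeholders, not the project's actual data or methods.

```python
# Hedged sketch of a distributed lexicon-count pipeline in PySpark. The input
# path, column names, and lexicon are hypothetical placeholders, not the
# project's actual data or dictionaries.
from pyspark.sql import SparkSession, functions as F, types as T

spark = SparkSession.builder.appName("earnings-call-sentiment").getOrCreate()

POSITIVE = {"growth", "strong", "improve", "record"}
NEGATIVE = {"decline", "weak", "shortage", "delay"}

@F.udf(returnType=T.IntegerType())
def net_sentiment(text):
    tokens = (text or "").lower().split()
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

# Assumed columns: firm_id, call_date, statement_text.
calls = spark.read.parquet("hdfs:///earnings_calls/statements")  # placeholder path
scored = calls.withColumn("net_sentiment", net_sentiment("statement_text"))

# Roll statement-level scores up into a firm-by-quarter sentiment index.
index = (
    scored.groupBy("firm_id", F.date_trunc("quarter", "call_date").alias("quarter"))
          .agg(F.avg("net_sentiment").alias("sentiment_index"))
)
index.show()
```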

Uncovering Supply Chain Bottlenecks using Textual Analysis

Keywords: inflation, linear regression, textual analytics, big data, PySpark

Details: Project in progress with expected working paper in Q4 2021

Abstract: S&P company earnings calls provide additional information to gauge the sentiment of industries during tumultuous business periods. Supply chain bottlenecks during the COVID-19 pandemic have impacted numerous aspects of global trade, and lexicon-based word-counting methods of textual analytics can help determine how industries are reacting to the rapidly changing environment. Price changes at the business level can also be studied using sentiment indices to better determine whether price increases are transient or permanent.
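
As a small, hedged illustration of the lexicon-based word counting and linear regression named in the keywords, the sketch below counts bottleneck-related terms in toy earnings-call snippets and regresses made-up price changes on those counts; the lexicon, data, and specification are not the paper's.

```python
# Hedged sketch: counting bottleneck-related terms in toy earnings-call text
# and regressing made-up price changes on the counts. The lexicon, data, and
# specification are illustrative, not the paper's.
import pandas as pd
import statsmodels.api as sm

BOTTLENECK_TERMS = ["shortage", "backlog", "congestion", "lead time", "supply chain"]

def bottleneck_count(text):
    text = text.lower()
    return sum(text.count(term) for term in BOTTLENECK_TERMS)

df = pd.DataFrame({
    "call_text": [
        "Supply chain congestion and chip shortage hurt margins.",
        "Demand was strong and deliveries were on time.",
        "Backlog grew as lead times doubled across the quarter.",
        "No major disruptions this quarter.",
    ],
    "price_change_pct": [4.2, 0.5, 3.1, 0.2],  # invented firm-level price changes
})
df["bottleneck_mentions"] = df["call_text"].apply(bottleneck_count)

# Simple OLS of price changes on bottleneck mentions.
X = sm.add_constant(df["bottleneck_mentions"])
print(sm.OLS(df["price_change_pct"], X).fit().summary())
```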

Using Real-time Bill of Lading Data to Analyze the COVID-19 Trade Collapse and Rebound

Keywords: microdata, big data, PySpark, Hadoop, Hive, international trade

Details: Project in progress with expected working paper in Q4 2021

Abstract: We evaluate high-frequency bill of lading data for its suitability for use in international trade research. These data offer many advantages over both other publicly accessible official trade data and confidential datasets. Analyzing these data effectively requires big data infrastructure, including Hadoop, Hive, and PySpark. We provide a comprehensive overview for potential researchers to understand the data's strengths and weaknesses as they become more widely available. Using the strengths of the data, we analyze the COVID-19 episode, in which trade collapsed very quickly and in some cases rebounded higher than its pre-COVID level. We show how the high-frequency data capture the within-month collapse of trade between the U.S. and India. We also demonstrate how U.S. buyers shifted their purchases across suppliers over time during the recovery. Finally, we show how the data can be used to measure vessel delivery bottlenecks in near real time.
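
To illustrate the kind of big data workflow described above, the sketch below aggregates a hypothetical Hive table of bill of lading records into a weekly U.S.-India shipment series with PySpark; the table name and column names are assumptions, not the actual dataset's schema.

```python
# Hedged sketch of aggregating bill of lading microdata from a Hive table into
# a weekly U.S.-India shipment series with PySpark. The table and column names
# are assumptions, not the actual dataset's schema.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("bol-trade").enableHiveSupport().getOrCreate()
)

bol = spark.table("trade.bill_of_lading")  # assumed Hive table name

weekly_us_india = (
    bol.filter((F.col("origin_country") == "IN") & (F.col("dest_country") == "US"))
       .groupBy(F.date_trunc("week", "arrival_date").alias("week"))
       .agg(
           F.count("*").alias("shipments"),
           F.sum("weight_kg").alias("total_weight_kg"),
       )
       .orderBy("week")
)
weekly_us_india.show()
```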