Tuesday, April 23, 2013

Concurrent Session 2F

"Big Data, Systemic Risk, and the US Intelligence Community"

Christine I. Ray of Market Intelligence, Bryan Ware of Digital Sandbox, Dr. Gary Nan Tie (PhD Mathematics) of SVP

Christine:
Started out in finance, but began to consider whether the same ideas and methods could apply to the intelligence community. There are obvious differences: intelligence work is focused on Bayesian models and catastrophic scenarios, and the IC can't rely on past data to predict the future. Structured analysis is used to assess these unique possible events; see slide 12 for an example. She advocates a similar Bayesian approach for ERM: you can integrate risks and employ expert knowledge and opinion where needed.
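A minimal sketch (not from the talk) of the Bayesian idea Christine describes: start from an expert-elicited prior for a rare event, then update it as sparse evidence arrives. The prior numbers and counts here are invented for illustration.

```python
def update_beta(alpha, beta, successes, failures):
    """Conjugate beta-binomial update: prior Beta(alpha, beta)
    plus observed counts gives the posterior parameters."""
    return alpha + successes, beta + failures

# Hypothetical expert prior: event believed to occur ~1 time in 100.
alpha, beta = 1, 99

# Suppose we then observe 2 occurrences in 50 trials.
alpha, beta = update_beta(alpha, beta, successes=2, failures=48)

posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 3))  # 3 / 150 = 0.02
```

The point is the blend she advocates: expert judgment enters through the prior, and data (when it exists) moves the estimate away from it.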

Bryan:
Software seller, not a risk person himself. Big data is a nice buzzword, but it doesn't have anything to do with the size of databases; it has to do with the *ubiquity* of data. Models like Google's rely 100% on correlation, and in that case it works quite well, but in other situations you need to incorporate causation. Security risks are in the latter category. All events are possible, but you can't prepare for all of them, so you need some way to determine which are more likely (and, beyond that, what counts as an acceptable loss). When his company/project started, they had zero data, so they began with human judgment alone and built up to having both data and judgment in a causality model. One of his tasks was to allocate a security budget to major cities based on both their risk and their capabilities: NYC has high risk, but the NYPD is also effectively the 6th largest military in the world.
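The allocation problem Bryan describes can be sketched roughly as follows. This is a hypothetical illustration, not his firm's actual method: each city's share of the budget is weighted by its risk, discounted by its existing response capability. City names and numbers are invented.

```python
cities = {
    # city: (relative_risk, capability_score in [0, 1])
    "City A": (10.0, 0.9),   # high risk, strong existing forces (the NYC pattern)
    "City B": (4.0, 0.3),    # moderate risk, weak capability
    "City C": (2.0, 0.5),
}

BUDGET = 100.0

# Residual need = risk not already covered by local capability.
need = {c: risk * (1.0 - cap) for c, (risk, cap) in cities.items()}
total = sum(need.values())
allocation = {c: BUDGET * n / total for c, n in need.items()}

for city, amount in allocation.items():
    print(f"{city}: {amount:.1f}")
```

Note the effect he points to: the highest-risk city does not automatically get the largest share, because its capability offsets part of its risk.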

Gary:
How do we know the model is right? "A model is nothing more than codified common sense." It's essentially an epistemological question: how do we know we know? We can confirm via a model-independent truth, by context, or by comparison to another model. Provable, probable, and plausible are all viable for decision-making; they are just different paradigms. Provable is based on a model-independent truth. Probable is based on some estimate of probability, drawn from past data. Plausibility can be intuitive. One example: to test a complex model, he developed an economic theory of insurance and used it to predict certain relationships between different parts of the business. He then compared this to the simulations coming out of the model and showed that they were generally in line with the expectations. This isn't proof, and it isn't a probabilistic statement, but it is plausibility and *can be used* to make decisions. Recommended reading: "Theories of Decision Under Uncertainty".
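Gary's plausibility check can be sketched in miniature: derive a qualitative prediction from a simple theory, then see whether the simulation output is directionally consistent with it. This toy example (names and numbers invented, not from his actual model) uses the prediction that higher deductibles should lower expected claim payouts.

```python
import random

random.seed(0)

def simulate_mean_payout(deductible, n=10_000):
    """Monte Carlo estimate of the mean insurer payout for
    exponentially distributed losses under a given deductible."""
    total = 0.0
    for _ in range(n):
        loss = random.expovariate(1 / 1000)  # mean loss of 1000
        total += max(loss - deductible, 0.0)
    return total / n

payouts = [simulate_mean_payout(d) for d in (0, 250, 500, 1000)]

# Plausibility check: simulated payouts should be strictly decreasing,
# matching the theory's qualitative prediction.
assert all(a > b for a, b in zip(payouts, payouts[1:]))
print("simulation consistent with theory:", [round(p) for p in payouts])
```

As he notes, passing such a check proves nothing and assigns no probability; it only shows the model behaves the way independent reasoning says it should.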

Is regulation helping or hurting?
Gary: Regulators need to be careful not to create systemic risk. Regulation can create and steer behavior in predictable ways, but regulators aren't mindful of this.
Bryan: Doesn't particularly apply to his field, but regulation tends to turn things into managing to the regulation instead of managing to get actual results.
Chris: Agreed.

End Concurrent Session 2.
