class: big, middle

# ECE 7420 / ENGI 9823: Security

.title[
.lecture[Lecture 1:]
.title[Adversaries, abstraction and trust]
]

---

# Today

### Security vs risk management
### Adversarial thinking
### Abstraction and its problems
### Trust and TCBs

---

# Risk management

### Computers not the only risky systems!

--

* reliability
* safety
* fraud detection
* epidemiology

--

### Q: what do these have in common?

???

### A: a couple of things

* like security: hidden problems that come to light
* unlike security: quantitative analysis

---

# Stochastic threats

--

**Reliability:** probability of failure / time between failures

--

**Safety:** probability of failures causing safety incident

--

**Epidemiology:** probability of infection after exposure

--

<img src="https://upload.wikimedia.org/wikipedia/commons/c/ca/ISS_impact_risk.jpg" width="400" align="right"/>

### Risk equation:

$$ R = P \times C = T \times V \times C $$

--

### Q: On what do these probabilities depend?

???

Here $R$ is risk, $P$ the probability of a loss event and $C$ its cost (consequence); the probability can be factored into threat likelihood $T$ and vulnerability $V$.

We often assume that different risks are **independent**. This can be quite reasonable in the case of safety engineering, reliability engineering, etc.

If metal can rust, **it will**. How much? **As much as it can.** If a virus can infect you, **it will**.

Although, there is one wrinkle in the case of epidemiology: as we've all seen, it's not just about how the **virus** will behave; how the **population** behaves is also pretty important!

---

# Know your enemy

--

### Classical risk management

* an impersonal force of nature

--

### Computer security (and crime, and geopolitics...)

--

* defending against **people** taking **intentional** actions

???

Crime isn't just a matter of means and opportunity: it's also a question of **motive** (as well as **ethics**, **morals** and **social contracts**).

--

* not just a force, an **adversary**, an **attacker**

???

The presence of an adversary (or adversaries) is what makes security different from mere risk management.

---

# Adversarial thinking

#### The attacker:

--

## a directed, strategic, _adaptive_ adversary

???

**Directed:** _wants_ something

**Strategic:** makes _choices_ and _plans_ to enhance effectiveness

A flood or a virus doesn't choose where or when to strike. Example: lightning and bird strikes.

**Adaptive:** will change attacks as you change defences

---

# Thinking about adversaries

### Adversaries vary in their:

--

* Objectives

???

Objectives: different adversaries want different things! Money, revenge, policy change or just "for the lulz".

--

* Capabilities

???

Capabilities: some adversaries are technically very savvy and capable, others are not. Capabilities can also be non-technical: an adversary who can **break into your house** opens up possibilities that strictly technical adversaries don't have. "Unsophisticated" doesn't mean "**safe**", though!

--

* Methods

???

Methods: not just what they're capable of, but what they like to do and even what they're willing to do. Different adversaries have different approaches that they take, and some are willing to use approaches that others aren't.

--

* Insider access

???

Insider access: we'll talk more about this in a moment, but a disgruntled insider (or someone who can find/cultivate one) is actually a very powerful adversary.

--

* Support

???

Support: some adversaries are on their own, poking at servers in their free time, whereas others are funded to develop campaigns full-time with teams around them to support their activities. Defending against one is very different from defending against the other.
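
One way to make these dimensions concrete is to write them down as data. This is a sketch of my own, not part of the course material, and the instances and field values below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Adversary:
    """An informal adversary model along the five dimensions above."""
    objectives: list[str]     # what they want
    capabilities: list[str]   # technical and non-technical means
    methods: list[str]        # what they are willing to do
    insider_access: bool      # can they find or cultivate an insider?
    support: str              # lone actor, or funded team?

# Two invented instances at opposite ends of the spectrum:
script_kiddie = Adversary(
    objectives=["see what they can do"],
    capabilities=["off-the-shelf exploit tools"],
    methods=["opportunistic scanning"],
    insider_access=False,
    support="alone, in their free time",
)

apt = Adversary(
    objectives=["long-term espionage"],
    capabilities=["custom malware", "zero-day exploits"],
    methods=["patient, multi-stage campaigns"],
    insider_access=True,
    support="funded, full-time team",
)
```

Defending the same system against `script_kiddie` and against `apt` calls for very different controls, which is exactly why it helps to name which model you are designing against.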
---

# Adversary models

### Can do some formal modeling

e.g., the _Dolev-Yao_ attacker (who can read, modify, replay or drop any message on the network) is very important in network security

--

### Informal shorthands often more immediately useful

---

# Informal adversary models

|      |      |
|------|------|
| Accidental | Intelligence service |
| APT | Military |
| Competitor | Lookie-loo |
| Hacktivist | Organized crime |
| Honest-but-curious | Scammer |
| Insider | Script kiddie |

???

|      |      |
|------|------|
| Accidental | Violates security policy without meaning to |
| APT | Well-resourced, operate with impunity |
| Competitor | Industrial espionage |
| Hacktivist | Social or political motivation |
| Honest-but-curious | Executes protocols faithfully but sneaks a peek |
| Insider | Disgruntled employee, whistleblower, etc. |
| Intelligence service | Well-resourced, connected to non-cyber assets |
| Lookie-loo | Motivated by curiosity |
| Military | Connected to physical-world objectives |
| Organized crime | Financial incentive, well-organized markets |
| Scammer | Financial incentive, low effort |
| Script kiddie | Want to see what they can do |

---

# Abstraction

???

You've been thinking in a structured way about abstraction since your **first programming course**, and informally for long before that! Abstraction is useful; in some ways, it's the core of what all engineers do.

--

### What is abstraction?

--

### Why is it helpful?

???

Abstraction is useful, as it allows us to **ignore** some aspects of a problem while we **focus** on others — we can't **think about everything at once**!

For example, it would be much harder to write Python code that translates objects to JSON representations if we had to be concerned with the implementation details of how, say, a hash map is implemented (what Mersenne prime is being used?), or what the virtual address of an object is, or how that virtual address is translated to a physical address, or which L2 cache line it's occupying!

--

### How is it deceptive?

--

.footnote[
["Towards a New Model of Abstraction in the Engineering of Software"](https://embeddedartistry.com/wp-content/uploads/2022/01/Towards-a-New-Model-of-Abstraction-in-Software-Engineering.pdf), G Kiczales, _IMSA'92: Proceedings of the 1992 Workshop on Reflection and Meta-level Architectures_, 1992.

["The Law of Leaky Abstractions"](https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions), J Spolsky, _Joel on Software_, 2002.
]

???

On the other hand, abstractions are **leaky**. A remote method invocation interface may hide all of the details of network configuration and method enumeration, but if the network goes down, it can't hide that problem (or at least not well!).

Complex systems require thinking **across abstraction boundaries**; if you aren't doing that, you can be sure that your attackers are!

---

# Abstraction layers

--

### Common model of a computing system:

<img src="../layers/simplistic.png" align="right"/>

--

* attacker can attack the software

--

* attacker can attack the hardware

---

# More abstraction layers!

<img src="../layers/more-realistic.png" align="right" width="400"/>

### More realistic model of a computer system:

???

The real world is complicated. We have lots of abstractions that go into the making of a computer system, and all of them leak! None of them fully hide the details of the layers below, and none are immune from the influence of the layers that sit on top of them.

Security is **holistic** and **systemic**.

--

* attacks can come at _any_ layer

???

Critically for security, the attacker often gets to meet you on a **field of their choosing**.
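
For instance (an illustration of my own, not from the slides), consider a password check that is perfectly correct at the API layer but leaks through an implementation detail one layer down — a timing side channel:

```python
import hmac

SECRET_TOKEN = "hunter2"  # hypothetical secret, for illustration only

def naive_check(supplied: str) -> bool:
    # Correct at the API layer, but == typically compares byte by byte
    # and stops at the first mismatch, so response time can leak how
    # long a correct prefix the attacker has guessed: a timing side
    # channel hiding below the abstraction.
    return supplied == SECRET_TOKEN

def constant_time_check(supplied: str) -> bool:
    # hmac.compare_digest is designed to take the same time regardless
    # of where (or whether) the inputs differ.
    return hmac.compare_digest(supplied.encode(), SECRET_TOKEN.encode())
```

The attacker didn't beat the check; they stepped below it.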
If one abstraction layer of your system defends effectively against an attacker, they can often come at layers **above** or **below** your work. A bank's smart card can perform a lot of cryptographic operations to help safeguard your information, but those aren't enough by themselves. At a **lower** layer, an adversary can attempt to exploit **electrical side-channel information** from the card itself to learn secret information like cryptographic keys. At a **higher** layer, if the adversary can gather card details including the CVV2 code via a skimmer or by fooling the cardholder, all the side-channel security in the world can't protect you. Thus, your defences are often only as strong as **your weakest layer**.

Example: [Bunker Buster, _The Daily WTF_](https://thedailywtf.com/articles/Bunker_Buster)

--

* defence must happen at _every_ layer

--

* attacks can be as hidden as implementation details

???

Technical people like engineers often don't like to think about the highest-level abstractions on this chart, but they are real! The best cryptography and other technical measures can be easily subverted if you can trick users into misusing systems, or if the economic incentives of a larger sociopolitical system reward bad behaviour.

---
layout: true

# Really? Users?

### Security is a _human_ discipline

---

--

* attacker motivations

---

* attacker motivations
* defender motivations

---

.floatright[
<img src="https://i.ytimg.com/vi/BN3B3fRpcso/maxresdefault.jpg" width="500"/>
.credit[
<a href="https://www.imdb.com/title/tt0151804">Office Space (1999)</a>
]
]

* attacker motivations
* defender motivations
* insider motivations

???

Insiders can **become** malicious

---
layout: false

# Secondary goal

<img src="https://cdn.vox-cdn.com/assets/2407347/alma_whitten_headshot-post-lead.jpg" width="250" align="right"/>

--

> Security is usually a **secondary goal**. People do not generally sit down at their computers wanting to manage their security; rather, they want to send email, browse web pages, or download software, and **they want security in place to protect them** while they do those things. It is easy for people to put off learning about security, or to optimistically assume that their security is working, while they focus on their primary goals. Designers of user interfaces for security should not assume that users will be motivated to read manuals or to go looking for security controls that are designed to be unobtrusive.

.footnote[
_Usability of Security: A Case Study_, Whitten and Tygar, CMU-CS-98-155
]

???

This quote is from Whitten and Tygar's 1998 technical report, whose published version, "Why Johnny Can't Encrypt", came out in 1999 — the same year as [Office Space](https://www.imdb.com/title/tt0151804/)!

Don't make users' lives **harder than they need to be**! You may turn them into **accidental insider adversaries**.

---

# Trust and TCBs

<img src="../layers/more-realistic.png" align="right" height="450"/>

--

<a href="https://xkcd.com/2166">
<img src="https://imgs.xkcd.com/comics/stack.png" alt="XKCD: Stack" height="425" align="right"/>
</a>

--

### What is trust?

???

Trust is typically a word that brings **warm and fuzzy feelings**, but not in this course!

Do you trust your bank? **Only sort of!** You actually trust a combination of your bank teller, double-entry bookkeeping, security cameras, time-locked vaults, police and security guards, but also — much more than most people think about — the [Canada Deposit Insurance Corporation](https://cdic.ca).

Someone that you might really trust is a **venture capitalist**. If you meet with a **VC**, you will explain your clever idea for a **hugely profitable new business**, but they will not **sign a non-disclosure agreement**. You will have no guarantee that they won't just **implement it themselves**... now _that_ is trust. Do you feel **warm and fuzzy** about that?

--

### "Trusted" vs "Trustworthy"

???

We should build systems that are **trustworthy** without assuming that they are **trusted**.

---

# One definition of "trusted"

--

> A trusted system is one whose failure can break the security policy

.credit.centered[
Anderson, _Security Engineering_
]

???

### Or: "one that can get you fired"

### Or: "one that you can't really validate"

--

In this view:

### Something you _have_ to trust, not _want_ to trust

---

# TCB: Trusted Computing Base

<img src="../layers/more-realistic.png" align="right" width="400"/>

--

### Everything you have to trust

???

A _trusted computing base_ is everything in a system that you are trusting, i.e., everything you are depending on in order for your part of a system to work correctly.

Attacks against different layers have different costs and different levels of applicability. A supply-chain attack against a common [Node.js package](https://twitter.com/vxunderground/status/1523982714172547073) can be as cheap as a **domain name** and as easy as a modified **package.json**, introducing vulnerabilities into tens of thousands of other packages. A supply-chain attack against [a motherboard](https://www.bloomberg.com/news/features/2018-10-04/the-big-hack-how-china-used-a-tiny-chip-to-infiltrate-america-s-top-companies), however ([also described here](https://www.bloomberg.com/features/2021-supermicro/)), takes a lot more work, both to implement and then to exploit. However, it is also much more difficult to defend against!

--

### Goal: minimize!

???

Our goal, then, is not to **maximize TCBs** but to **minimize them**. The less we have to depend on, the better.
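
"Everything you have to trust" can be made concrete as transitive dependency: a component's TCB is everything it depends on, directly or indirectly. Here's a toy sketch of my own (the layer names and dependency map are invented for illustration):

```python
# Hypothetical layer-dependency map, loosely following the layer
# diagram: each component lists what it depends on directly.
DEPENDS_ON = {
    "web app": ["language runtime", "OS"],
    "language runtime": ["OS"],
    "OS": ["firmware", "hardware"],
    "firmware": ["hardware"],
    "hardware": [],
}

def tcb(component: str) -> set[str]:
    """Everything `component` transitively depends on: its TCB."""
    trusted: set[str] = set()
    stack = [component]
    while stack:
        for dep in DEPENDS_ON[stack.pop()]:
            if dep not in trusted:
                trusted.add(dep)
                stack.append(dep)
    return trusted

print(tcb("web app"))  # language runtime, OS, firmware, hardware
```

Minimizing the TCB means cutting edges out of this graph: every dependency you remove is one less thing whose failure can break your security policy.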
---

# Today

### Abstraction and its problems
### Trust and TCBs

## Next time:

### Software security

---
class: big, middle

The End.