This book is a call to action. You can participate. This is for you. Artificial intelligence might end the world. More likely, it will crush our ability to make sense of the world—and so will crush our ability to act in it. AI will make critical decisions that we cannot understand. Governments will take radical actions that make no sense to their own leaders. Corporations, guided by artificial intelligence, will find their own strategies incomprehensible. University curricula will turn bizarre and irrelevant. Formerly-respected information sources will publish mysteriously persuasive nonsense. We will feel our loss of understanding as pervasive helplessness and meaninglessness. We may take up pitchforks and revolt against the machines—and in so doing, we may destroy the systems we depend on for survival.
That’s the opening of David Chapman’s Better Without AI, a free book available on the web. This essay is not a review. I’m not interested in evaluating this text as a written artifact. If this essay pulls in arguments that weren’t in the book — well, it’s not like I’m grading his homework or anything. I just want to talk about the ideas.

His in-progress book on meta-rationality is extremely lucid and meaningful. It’s played no small part in developing my intuition on what intelligence actually is. On the other hand, that intuition has led me to be a frequent critic of AI risk. I’m biased in opposite directions, which puts me in an interesting position to evaluate his claims.
No need to keep you in suspense: the reason for this divergence is that the risks that worry Chapman aren’t the ones people typically bring up.
This book considers scenarios that are less bad than human extinction, but which could get worse than run-of-the-mill disasters that kill only a few million people. Previous discussions have mainly neglected such scenarios. Two fields have focused on comparatively small risks and on extreme ones, respectively. AI ethics concerns uses of current AI technology by states and powerful corporations to categorize individuals unfairly, particularly when that reproduces preexisting patterns of oppressive demographic discrimination. AI safety treats extreme scenarios involving hypothetical future technologies which could cause human extinction.

It is easy to dismiss AI ethics concerns as insignificant, and AI safety concerns as improbable. I think both dismissals would be mistaken. We should take seriously both ends of the spectrum. However, I intend to draw attention to a broad middle ground of dangers: more consequential than those considered by AI ethics, and more likely than those considered by AI safety. Current AI is already creating serious, often overlooked harms, and is potentially apocalyptic even without further technological development.
AI ethics is something worth taking seriously — but weirdly enough, it has a lot less to do with AI than you’d think. Recall #2, showing US cancer diagnoses erroneously tied to turning 65 (because that’s the age at which Medicare makes diagnosis more accessible). We mentioned the problems that arise from using this data:
Let’s say that some well-meaning hospital executive reads the same study we did and thinks — wow, okay, we need to fix these diagnosis inequities. Let’s use machine learning to predict which patients are most likely to test positive for cancer and proactively reach out and get them tested. We’ll ignore the insurance gap entirely and look purely at the data!

Well — we know what looking purely at the data gets us, don’t we? We just finished figuring out that it shows a massive spike in cancer diagnoses at age 65, and sniffing out massive spikes is what machine learning does best. As far as a predictive model trained on data from the United States is concerned, there really is an exactly-age-65-specific time bomb in your body that causes a spike in cancer diagnoses.

How do you control for this bias in your model? Well... you don’t, really. You could artificially weight the scores to some target AoA, but what’s the “right” target for AoA, anyway? The fundamental problem is that you want your model to guess who has undetected cancer, but the only data you can feed it are patients with detected cancer. So any correspondence break between cancer in the general population and the patients who actually get diagnosed can’t help but feed that bias into the model, compounding the tragedy of the original problem.

The impact of insurance inequity is a group of 64-year-olds who have undiagnosed cancer because they’re waiting for Medicare. The impact of doing statistical analysis on data generated by inequity and then using it to drive decisions is another group of 64-year-olds who have undiagnosed cancer because they’re 64-year-olds.
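The feedback loop described above is easy to demonstrate concretely. Here is a minimal, hypothetical simulation — all numbers are invented for illustration — in which the true cancer risk rises smoothly with age, but detection probability jumps at 65 because of insurance access. A model fit to the resulting diagnosis data would "learn" a spike that doesn’t exist in the underlying disease:

```python
import random

random.seed(0)

# Hypothetical toy numbers, purely for illustration.
def true_incidence(age):
    """True cancer risk rises smoothly with age: no discontinuity anywhere."""
    return 0.001 * age

def detection_prob(age):
    """Detection probability jumps at 65, when insurance makes testing accessible."""
    return 0.9 if age >= 65 else 0.3

population_per_age = 100_000
diagnoses = {age: 0 for age in range(60, 70)}
for age in diagnoses:
    for _ in range(population_per_age):
        has_cancer = random.random() < true_incidence(age)
        detected = has_cancer and random.random() < detection_prob(age)
        if detected:
            diagnoses[age] += 1

# Observed diagnosis rate per age — the only signal a model trained on
# diagnosis records ever sees. The underlying disease changes smoothly,
# but the observed rate jumps sharply between 64 and 65.
rates = {age: diagnoses[age] / population_per_age for age in diagnoses}
print(f"age 64: {rates[64]:.4f}   age 65: {rates[65]:.4f}")
```

Any model trained on `rates` faithfully reproduces the insurance artifact, because the selection bias lives in the data-generating process, not in the model.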
The core mistake is the conflation of cancer diagnoses with the true incidence of cancer in a population. A non-AI, artisanal, hand-crafted algorithm on US healthcare data would face exactly the same selection biases. Work to make US health insurance more equitable would make all inference on the data (AI or otherwise) more accurate. The vital importance of getting this right is the essential thesis of the AI ethics field.

But the ethical issue is not an apocalypse scenario that’s capable of suddenly snowballing out of control; it’s that AI will help preserve the same old power imbalances we already have. Often the actual capabilities of the AI system aren’t even relevant to the severity of the ethical issue, aside from people being more willing to trust AI that’s superficially more “powerful”. AI ethics issues are specific, contextual battles like ethics issues always are, not explosive tail risks.
Conversely, “AI safety” is all about explosive tail risks. The fear is that someone makes an AI to build a better AI for making better AIs, this iterates repeatedly over a short period of time, and bam: godlike intelligence. This isn’t something I take seriously. Training on historical data only gets a system so far; without the ability to interactively touch the world and develop new ways of seeing, a retrospective look at past data has a lot of unavoidable fragility.
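To make that fragility concrete, here is a toy sketch (entirely hypothetical numbers): a predictor fit purely on retrospective data captures the era that generated the data, then fails badly when the world shifts — and, frozen, it has no interactive channel through which to notice:

```python
import random

random.seed(1)

# Era A: the world that generated the training data. Outcome y tracks
# feature x tightly (y ≈ 2x plus a little noise).
train = [(x, 2 * x + random.gauss(0, 0.1))
         for x in (random.random() for _ in range(500))]

# Least-squares slope through the origin: the "model" distilled from the past.
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

# Era B: the world changed (y ≈ -2x). The frozen model keeps applying
# the old slope; its error explodes, and nothing in its training
# procedure can tell it why.
test = [(x, -2 * x + random.gauss(0, 0.1))
        for x in (random.random() for _ in range(500))]
mse = sum((y - slope * x) ** 2 for x, y in test) / len(test)
print(f"learned slope: {slope:.2f}   test MSE after the shift: {mse:.2f}")
```

The fix isn’t a cleverer fit to era-A data; it’s interaction with era B — which is exactly the capability a "train on everything written so far, then extrapolate" story leaves out.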
The importance of interaction is a very Chapman-esque point, which is why I was surprised to hear he was working on an AI safety book.

The book’s official position on superintelligence is something like “Hey, it could be possible, so it’s a good reason to stop AI! But it’s less likely than other alternatives, and by definition we can’t reason about the unthinkable, so I’m mostly going to focus on other potential risks relating to AI.” Yet once he’s in the “radical progress without Scary AI” section, pages like “What kind of AI might accelerate technological progress” and “limits of experimental induction”, meant to describe a vision of better scientific progress without AI, also just-so-happen to shed light on the exact flaws of the recursive-superintelligence argument. I don’t begrudge Chapman his obliqueness here. If you’re making an anti-AI book for everyone, it’d be pretty stupid to take a pot shot at AI safety before going into your own arguments. But this is my commentary, and I can be direct: I’m not worried about recursive superintelligence, and this isn’t a book that acts worried about it either.
What’s the broad middle ground we should be paying attention to instead? Chapman tries to decouple discussions of agency or human-like minds from evaluating the risk of AI systems. AI started as a branch of cybernetics, a field spawned during war to create automatic weapon-targeting systems and bomb computers. As those tools became more and more refined, exact control of how to act with lethal force was gradually taken away from the humans present in the moment and pushed back to the architects of algorithms, and eventually into the Byzantine folds of a flowchart no one can quite map the full extent of. Some people think that a sufficiently Byzantine flowchart will “wake up” and suddenly make agential decisions in a mind-like way; some people (me!) think that’s a mistaken belief; and Chapman says: the flowchart is in control of lethal force either way, and whether it counts as a mind is beside the point.
Abundance dies twice: in the field, and in memory. One of the greatest struggles facing all restoration initiatives in conservation is the tyranny of low expectations.

An abundance of human discretion dies twice: when we lose the ability to make decisions about the world around us, and when we forget that the world was ever expected to be so responsive. On Christmas Day in 1914, spontaneous ceasefires sprouted along the trenches of World War I. Back then, men had the freedom to choose not to kill. Do you think our new bomb computers have an exception for Christmas programmed in? Is spontaneous peace still possible? Or must peace be made legible to the whole military-algorithmic complex before it can be enacted? How many values in how many databases must be updated for killing to stop for a day? Is that number low or high? Is it getting larger each year? Do you know how you’d go about finding that number?
The scariest section of the book is probably “At war with the machines”, which points out how much control we’ve already ceded, once you learn to look for it. Recommender engines determine what you see. Tracking scripts embed themselves on your machine. Automated phishing scams target your specific vulnerabilities. AIs make spam to try to get promoted by the recommender AIs.