A Taxonomy Of AI Systems

(NOTE: the following is all highly speculative and not researched very well.)

In a blog post on domain-specific programming languages, author Eric Raymond made a distinction between the kinds of problems best solved through raw automation and the kinds best solved by making a human perform better.

This gave me an idea for a 4-quadrant graph that could serve as a taxonomy of various current and future AI systems. Here’s the setup: the horizontal axis runs Expert <–> Crowd and the vertical axis runs Judgment Enhancement <–> Automation.

Quadrant one (Q1) would contain quintessential human judgment amplifiers, like the kinds of programs talked about by Shyam Sankar in his TED talk or the fascinating-but-unproven-as-far-as-I-know “Chernoff faces”.

In Q2 we have mechanisms for improving the judgments of crowds. The only example I could really think of was prediction markets, though I bet you could make a case for market prices working as exactly this sort of mechanism.

In Q3 we have automated experts, the obvious example of which would be an expert system or possibly a strong artificial general intelligence.

And in Q4 we have something like a swarm of genetic algorithms evolved by making random or pseudo-random changes to a seed code and then judging the results against some fitness function.
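
To make that loop concrete, here is a minimal sketch of it in Python. The fitness function (matching a target bit string) and all of the parameter values are stand-ins I have invented for illustration; a real Q4 system would plug in whatever problem it actually cared about.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # an arbitrary goal standing in for a real problem

def fitness(candidate):
    # Higher is better: the number of positions matching the target.
    return sum(1 for c, t in zip(candidate, TARGET) if c == t)

def mutate(candidate, rate=0.1):
    # Flip each bit with a small probability.
    return [1 - bit if random.random() < rate else bit for bit in candidate]

def evolve(generations=100, population_size=20):
    best = [random.randint(0, 1) for _ in TARGET]  # the random "seed"
    for _ in range(generations):
        population = [mutate(best) for _ in range(population_size)]
        best = max(population + [best], key=fitness)  # keep the fittest seen so far
        if fitness(best) == len(TARGET):
            break
    return best

print(evolve())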

Now, how should we match these systems with different problem domains?

It seems to me like Q1 systems would be better at solving problems that either a) have finite amounts of information that can be gathered by a single computer-aided human or b) are problems humans are uniquely suited to solve, like intuiting and interpreting the emotional states of other humans.

Chernoff faces, if we ever get them working right, are an especially interesting Q1 system because what they are supposed to do is take statistical information, which humans are notoriously dreadful at working with, and transform it into a “facial” format, which humans have enormously powerful built-in software for working with.
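
To give a flavor of how that transformation might look in code, here is a rough sketch. The mapping (three made-up variables, each normalized to the 0 to 1 range, driving head width, eye size, and mouth curvature) is entirely my own invention; real Chernoff faces encode many more dimensions per face.

import numpy as np
import matplotlib.pyplot as plt

def chernoff_face(ax, head_width, eye_size, mouth_curve):
    theta = np.linspace(0, 2 * np.pi, 200)
    # Head: an ellipse whose width encodes the first variable.
    ax.plot(0.5 + (0.2 + 0.3 * head_width) * np.cos(theta),
            0.5 + 0.4 * np.sin(theta), color="black")
    # Eyes: circles whose radius encodes the second variable.
    r = 0.02 + 0.04 * eye_size
    for x in (0.38, 0.62):
        ax.plot(x + r * np.cos(theta), 0.62 + r * np.sin(theta), color="black")
    # Mouth: a curve whose bend (frown to smile) encodes the third variable.
    xs = np.linspace(0.35, 0.65, 50)
    ax.plot(xs, 0.3 + (mouth_curve - 0.5) * 0.4 * np.sin(np.pi * (xs - 0.35) / 0.3),
            color="black")
    ax.set_xlim(0, 1)
    ax.set_ylim(0, 1)
    ax.set_aspect("equal")
    ax.axis("off")

fig, axes = plt.subplots(1, 3)
for ax, row in zip(axes, [(0.1, 0.2, 0.9), (0.5, 0.5, 0.5), (0.9, 0.9, 0.1)]):
    chernoff_face(ax, *row)  # each row of "data" becomes one face
plt.show()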

Q2 systems should be used to solve problems that require more information than a human can work with. Prediction markets are meant to use a profit motive to incentivize human experts to incorporate as much information as they can, as honestly as they can, and over time there are enough rounds of updates that the system as a whole produces a price which contains the aggregate wisdom of the individuals who make up the system (at least I think that’s how they work).
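
Here is a toy model of that aggregation story (not a real market mechanism, and every number is invented): each simulated trader holds a noisy private estimate of the true probability and nudges the price partway toward it, and after enough rounds the price lands close to a truth that no individual trader knows.

import random

def simulate_market(true_probability=0.7, traders=200, noise=0.15, step=0.1):
    price = 0.5  # start at a maximally uncertain price
    for _ in range(traders):
        # Each trader's private estimate: the truth plus idiosyncratic noise.
        belief = min(max(random.gauss(true_probability, noise), 0.0), 1.0)
        # Their trading moves the price a fraction of the way toward that belief.
        price += step * (belief - price)
    return price

random.seed(0)
print(round(simulate_market(), 3))  # typically lands near 0.7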

Why can’t we have a prediction market that performs heart surgery? Because a huge amount of the relevant information is “organic”, i.e. muscle memory built up over dozens and eventually hundreds of similar procedures. This information isn’t written down anywhere and thus can’t be aggregated and incorporated into a “bet” by a human non-surgeon.

Based on some cursory research, my example of a Q3 system, the expert system, appears to be subdivided into a knowledge base and an inference engine. I’d venture to guess that expert systems are suitable wherever knowledge can be gathered and encoded in a way that allows computers to perform inferences and logical calculations on it. Wikipedia’s article contains a chart detailing some areas where expert systems have been used, and also points out that one drawback to expert systems is that they are unable to acquire new knowledge.
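
As a sketch of that division of labor, here is a toy knowledge base of if-then rules together with a minimal forward-chaining inference engine; the "medical" rules are invented for illustration. Consistent with the drawback just mentioned, the system can apply its rules but has no way of learning new ones.

# Knowledge base: if-then rules supplied ahead of time by a human expert.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "shortness_of_breath"}, "see_doctor"),
]

def infer(facts, rules):
    # Inference engine: keep firing rules until no new conclusions appear.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "shortness_of_breath"}, RULES))
# includes both 'flu_suspected' and 'see_doctor'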

That’s a pretty serious handicap, and places further limits on what types of problem a Q3 system could solve.

Finally, Q4 systems are probably the strangest entities we’ve discussed so far, and the only examples I’m familiar with are from the field of evolvable hardware. IIRC using evolutionary algorithms to evolve circuits yields workable results which no human engineer would’ve thought of. That has to be useful somewhere, if only when trying to solve an exotic problem that’s stymied every attempt at a solution, right?

What is a Simulation?

While reading Paul Rosenbloom’s outstanding book “On Computing” I came across an interesting question: what is a simulation, and how is it different from an implementation? I posed this question on Facebook and, thanks to the superlative quality of my HiveMind, I had a productive back-and-forth which helped me nail down a tentative answer. Here it is:

‘Simulation’ is a weaker designation than ‘implementation’. Things without moving parts (like rocks) can be simulated but not implemented. Engines, planets, and minds can be either simulated or implemented.

A simulation needs to lie within a certain band of verisimilitude, being minimally convincing at the lower end but not-quite-an-implementation on the other. An implementation amounts to a preservation of the components, their interactions, and the higher-level processes (in other words: the structure), but in a different medium. Further, implementation is neither process- nor medium-agnostic; not every system can rise to the level of an implementation in any arbitrarily-chosen medium.

A few examples will make this clearer.

Mario is neither a simulation nor an implementation of an Italian plumber. If we ran him on the Sunway TaihuLight supercomputer and he could pass a casual version of the Turing test, I’d be prepared to say that he is a simulation of a human, but not an implementation. Were he vastly upgraded, run on a quantum computer, and able to pass as a human indefinitely, I’d say that counts as an implementation, so long as the architecture of his mind was isomorphic to that of an actual human. If it wasn’t, he would be an implementation of a human-level intelligence but not of a human per se.

A digital vehicle counts as a simulation if it behaves like a real vehicle within the approximate physics of the virtual environment. But it can never be an implementation of a vehicle because vehicles must bear a certain kind of relationship to physical reality. There has to be actual rubber on an actual road, metaphorically speaking. But a steam-powered Porsche might count as an implementation of a Porsche if it could be driven like one.

An art auction in the Sims universe would only be a simulation of an art auction, and not a very convincing one. But if the agents involved were at human-level intelligence, that would be an implementation of an art auction. Any replicated art within the virtual world wouldn’t even count as a simulation, and would just be a copy. Original art pieces within a virtual world might count as real art, however, because art doesn’t have the same requirement of being physically instantiated as a vehicle does.

Though we might have simulated rocks in video games, I’m not prepared to say a rock can ever be implemented. There just doesn’t seem to be anything to implement. Building an implementation implies that there is a process which can be transmogrified into a different medium, and, well, rocks just don’t do that much. But you could implement geological processes.

Conway’s Game of Life is only barely a simulation of life; it would probably be more accurate to say it’s the minimum viable system exhibiting life-like properties. But with the addition of a few more rules and more computing power it could become a simulation. It would take vastly more of both for it to ever be an implementation, however.
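
For reference, the complete rule set of the Game of Life fits in a few lines of Python; everything it “does” comes from iterating this single step. The glider at the end is a standard pattern, included only to show the step function in action.

from collections import Counter

def step(live_cells):
    # live_cells is a set of (x, y) coordinates of live cells.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has exactly three live neighbours,
    # or has two live neighbours and is already alive.
    return {
        cell for cell, count in neighbour_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same glider shape, shifted one cell diagonally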

My friends’ answers differed somewhat from the above, and many correctly pointed out that the relevant definitions will depend somewhat on the context involved. But as of March 29th, 2017, I’m happy with the above and will use it while grappling with issues in AI, computing, and philosophy.

Machine Ethics is Still a Hard Problem

I have now read Digital Wisdom’s essay “Yes, Virginia, We *DO* Have a Deep Understanding of Morality and Ethics” twice, and I am unable to find even one place where the authors do justice to the claims they are criticizing.

With respect to selfishness they write:
“Followers of Ayn Rand (as well as most so-called “rationalists”) try to conflate the distinction between the necessary and healthy self-interest and the sociopathic selfish.”
This is simply untrue. The heroes of Atlas Shrugged work together to bring down a corrupt and parasitic system, John Galt refuses to be made an economic dictator even though doing so would allow him limitless power, and in The Fountainhead Howard Roark financially supports his friend, a sculptor, who otherwise would be homeless and starving.

Nothing — nothing — within Objectivism, Libertarianism, or anarcho-capitalism rules out cooperation. A person’s left and right hand may voluntarily work together to wield an axe, people may voluntarily work together to construct a house, and a coalition of multi-national corporations may voluntarily work together to establish a colony on the moon. Individuals uniting in the pursuit of a goal which is too large to be attempted by any of them acting alone is wonderful, so long as no one is being forced to act against their will. The fact that people are still misunderstanding this point must be attributed to outright dishonesty.

Things do not improve from here. AI researcher Steven Omohundro’s claim that without explicit instructions to do otherwise an AI system would behave in ways reminiscent of a human psychopath is rebutted with a simple question: “What happens when everyone behaves this way?” Moreover, the AI alarmists — a demimonde of which I count myself a member — “totally miss that what makes sense in micro-economics frequently does not make sense when scaled up to macro-economics (c.f. independent actions vs. cartels in the tragedy of the commons).”

I simply have no idea what the authors think they’re demonstrating by pointing this out. Are we supposed to assume that recursively self-improving AI systems of the kind described by Omohundro in his seminal “The Basic AI Drives” will only converge on subgoals which would make sense if scaled up to a full macroeconomic system? Evidently anyone who fails to see that an AI will be Kantian is a fear-mongering Luddite.

To make the moral turpitude of the “value-alignment crowd” all the more stark, we are informed that “…speaking of slavery – note that such short-sighted and unsound methods are exactly how AI alarmists are proposing to “solve” the “AI problem”.”

Again, this is just plain false. Coherent Extrapolated Volition and Value Alignment are not about slavery; they’re about trying to write computer code which, after going through billions of rewrites by an increasingly powerful recursive system, still results in a goal architecture which can be safely implemented by a superintelligence.

And therein lies the rub. Given the title of the essay, what exactly does our “deep understanding of morality and ethics” consist of? Prepare yourself, because after you read the next sentence your life will never be the same:

"At essence, morality is trivially simple – make it so that we can live together."
I know, I know. Please feel free to take a moment to regain your sense of balance and clean up the blood loss that inevitably results from having such a railroad spike of thermonuclear insight driven into your brain.

In the name of all the gods Olde, New, and Forgotten, can someone please show me where in the voluminous Less Wrong archives anyone says that there won’t be short natural-language sentences which encapsulate human morality?

Proponents of the thesis that human values are complex and fragile are not saying that morality can’t be summarized in a way that is comprehensible to humans. They’re saying that those summaries prove inadequate when you start trying to parse them into conceptual units which are comprehensible to machines.

To see why, let’s descend from the rarefied terrain of ethics and discuss a more trivial problem: writing code which produces the Fibonacci sequence. Any bright ten-year-old could accomplish this task with a simple set of instructions: “start with the numbers 0 and 1. Each additional number is the sum of the two numbers that precede it. So the sequence goes 0, 1, 1, 2, 3, 5, 8…”

But pull up a command-line interface and try typing in those instructions. Computers, you see, are really rather stupid. Each and every little detail has to be accounted for when telling them which instructions to execute and in which order. Here is one Python script which produces the Fibonacci sequence:


def fib(n):
    # Start from 0 and 1, as in the natural-language description above.
    a, b = 0, 1
    fib_list = []
    for _ in range(n):
        fib_list.append(a)   # record the current number
        a, b = b, a + b      # advance the pair one step along the sequence
    return fib_list

You must explicitly store the initial values in two variables or the program won’t even start. You must set up some kind of loop or the program won’t do anything at all. The values have to be updated and stored one at a time, or earlier numbers will be overwritten before they are ever recorded. And if you mess something up, the program might start throwing errors, or worse, it may output a number sequence that looks correct but isn’t.

And really, this isn’t even that great of an example because the code isn’t that much longer than the natural language version and the Fibonacci sequence is pretty easy to identify. The difficulties become clearer when trying to get a car to navigate city traffic, read facial expressions, or abide by the golden rule. These are all things that can be explained to a human in five minutes because humans filter the instructions through cognitive machinery which would have to be rebuilt in an AI.

Digital Wisdom ends the article by saying that detailed rebuttals of Yudkowsky and Stuart Russell as well as a design specification for ethical agents will be published in the future. Perhaps those will be better. Based on what I’ve seen so far, I’m not particularly hopeful.

Peripatesis: E-Governance; Lighting Up The Dark; Regulating Superintelligences.

Nestled in the cold reaches of Northern Europe, Estonia is doing some very interesting things with the concept of ‘e-governance’. Their small population, short modern history, and smattering of relatively young government officials make experimenting with sovereignty easier than it would be in, say, the United States. The process of starting a business and paying taxes in Estonia has been streamlined, for example, leading to the predictable influx of ‘e-residents’ wanting to run their internet-based businesses from Estonia.

***

There are some truly fascinating advancements happening at the cutting edge of farming and horticulture. Some enterprising researchers have discovered a way to channel natural light into unlit places, and there is talk of using this technology to set up a public garden in the abandoned Williamsburg Bridge Trolley Terminal beneath New York City. It’s not really clear from the linked article whether all of this light is natural or whether it’s a mix of natural and artificial light, but it’s still interesting.

I would love to see a variant of this technology utilized far and wide to foster localized farming and the greening of urban centers. Plenty of buildings have rooftop gardens now, but with a means of gathering and arbitrarily distributing sunlight it would be possible to have, say, one floor in ten of a big skyscraper devoted to a small orchard or garden space. Advanced greenhouses could be both heavily insulated and capable of showering their interior with photons, making farming at high altitudes and in colder climates more straightforward.

***

The BBC has a piece on ‘anti-languages’, slangs developed by insular communities like thieves or prison inmates to make their communication indecipherable to outsiders. They share the grammar of their parent language but use a plethora of new terms in place of old ones to achieve something akin to encryption.

These new terms — such as ‘bawdy basket’, which meant ‘thief’ in the English anti-language used among Elizabethan criminals — are generated through all sorts of techniques, including things like metaphor and reversing the spelling or meaning of terms from the parent language.

***

An essay by Marc McAllister at The Babel Singularity argues that laws enforcing human control over superintelligences are tantamount to slavery, and won’t be of much use anyway because these beings will have moral concepts which we baseline humans simply can’t fathom with our outdated brains.

He seems to be missing the point of the arguments made by groups like MIRI and the Future of Life Institute. To the best of my knowledge no one is advocating that humans remain strictly in control of advanced AIs indefinitely. In fact, the opposite is true: the point of building a superintelligence is to eventually put it in charge of solving really hard problems on behalf of humanity. In other words, ceding control to it.

To that end, the efforts made by people who think about these issues professionally seem to be aimed at understanding human values, intelligence, and recursively improving algorithms well enough to: 1) encode those values into an AI; 2) predict with an acceptably strict level of confidence that this human-compatible goal architecture will remain intact as the software rewrites itself; 3) reason, however dimly, about the resulting superintelligence. These are by no means trivial tasks. Human values are the messy, opaque result of millennia of evolution, and neither intelligence nor recursion is well understood.

But if we succeed in making a “Friendly” AI then control, in a ‘coercive sense’, won’t be necessary because its values will be aligned with our own.

***

Somewhat related: Big Think has published a very brief history of Artificial Intelligence. With the increasing sophistication and visibility of advancements in the field, understanding its roots becomes ever more important.

***

Vector Space Systems is a new player in an arena long dominated by Blue Origin, SpaceX, and Virgin Galactic. Their goal: to be to spaceflight what taxis are to terrestrial modes of transport. According to their website they have been quietly working on a microsatellite launch vehicle designed to carry payloads in the 5 – 50 kg range into orbit.

If they succeed this will allow companies wanting to develop new space technologies to launch more frequently and less expensively, driving faster growth in space commerce, exploration, and tourism.

Is Evolution Stoopid?

In a recent post I made the claim that evolution is a blind, stupid process that does what it does by brute-forcing through adjacent regions of possibility space with a total lack of foresight. When I said this during a talk I gave on superintelligence I met with some resistance along the lines of ‘calling evolution stupid is a mistake because sometimes there are design features in an evolved organism or process which are valuable even if human engineers are not sure why’.

This is true, but doesn’t conflict with the characterization of evolution as stupid because by that I just meant that evolution is incapable of the sort of planning and self-reflection that a human is capable of.

This is very different from saying that it’s trivial for a human engineer to out-think evolution on any arbitrary problem. So far as I know nobody has figured out how to make replicators as good as RNA or how to make things that can heal themselves, both problems evolution has solved.

The difference is not unlike the difference between intelligence, which is something like processing speed, and wisdom, which is something like intelligence applied to experience.

You can be a math prodigy at the age of 7, but you must accrue significant experience before you can be a wisdom prodigy, and that has to happen at the rate of a human life. If one person is much smarter than another they may become wiser faster, but there’s still a hard limit to how fast you can become wise.

I’ve personally found myself in situations where I’ve been out-thought by someone who I’m sure isn’t smarter than me, simply because that other person has seen so many more things than I have.

Evolution is at one limit of the wisdom/intelligence distinction. Even zero intelligence can produce amazing results given a head start of multiple billions of years, and thus we can know ourselves to be smarter than evolution while humbly admitting that its designs are still superior to our own in many ways.

Processes Of Optimization.

In the beginning was the Bang, and for ages thereafter the universe did nought but sample randomly from the same distribution in the form of star and galaxy formation. And though the stars burned bright in the void, they had but a small influence on the speed with which the universe searched possibility space.

For the birth of stars did make the birth of planets more likely, which did make life more likely. And thus did each act as a gatekeeper to new regions of possibility space.

And lo, with the first self-replicators came the possibility of new organisms being created when mistakes occurred in the replication process. Eons later sex allowed existing DNA to be combined into novel configurations, and thus could possibility space be explored more quickly.

For verily is evolution a stupid process and its recursion weak, and it doth wobble after a drunkard’s fashion through possibility space with no insight, foresight, or intelligence.

And then there were brains, and with them the ability to improve upon evolution’s work. For some brains are able to plan for future goals and to imagine counterfactual situations, abilities which evolution possesses not. 

But alas, nervous systems never evolved much introspective depth, and had but the tiniest ability to recursively self-improve.

And then a small set of brains invented Science, which could accumulate many, many more insights than any brain could in the span of its life. It was an age of optimism and plenty, and there was much rejoicing and proliferation of telescopes and gene sequencing and iPhones throughout the land.

But even unto the present day Science has not learned enough to do anything more than weakly turn any optimization process back on itself.

And lo, from the cackling, structured madness of genetics, history, and culture did the universe cough up a series of sages, deep of insight, quick of thought, and usually possessed of tremendous social awkwardness.

After much study the sages warned that there might one day be a strong recursive process that could be a greater source of discontinuity than any that had come before it.

And though Einstein did proclaim compound interest to be the greatest among the forces of heaven and earth, this was only partly true. For surely it is strong recursion which holdeth the greatest promise and the deepest peril.

Thus should ye heed this dire proclamation: work swiftly and work thoroughly, before the AI goeth ‘FOOM’.

Whither Discontinuity?

I’m writing a series of posts clarifying my position on the intelligence explosion hypothesis and today I want to discuss discontinuity.

This partially addresses the ‘explosion’ part of ‘intelligence explosion’. Given the fact that most developments in the history of the universe have not been discontinuous, what reason do we have to suspect that an AI takeoff might be?

Eliezer identifies the following five sources of discontinuity:

Cascades – Cascades occur when ‘one thing leads to another’. A (possibly untrue) example is the evolution of human intelligence.

It is conceivable that other-modeling abilities in higher primates became self-modeling abilities, which allowed the development of complex language, which allowed for the development of politics, which put selection pressure on the human ability to outwit opponents in competition for food and mates, which caused humans to ‘fall up the stairs’ and quickly become much smarter than the next smartest animal.

Cycles – Cycles are like cascades but the output hose is connected to the input hose. It’s possible for businesses or even individual people to capture enormous parts of a market by investing large fractions of their profits into infrastructure and research. Of course this isn’t the sort of extreme discontinuity we’re interested in, but it’s the same basic idea.

Insight – An insight is something like the theory of evolution by natural selection which, once you have it, dissolves lots of other mysteries which before might’ve looked only loosely connected. The resultant gain in knowledge can look like a discontinuity to someone on the outside who doesn’t have access to the insight.

Recursion – Recursion is the turning of a process back on itself. An AI that manages to produce strong, sustained recursive self-improvements could rapidly become discontinuous with humans.

Magic – Magic is a term of art for any blank spaces in our maps. If something smarter than me turns its intelligence to the project of becoming smarter, then there should be results not accounted for in my analysis. I should expect to be surprised.

Any one of these things can produce apparent discontinuities, especially if they occur together. A self-improving AI could produce novel insights, make use of cascades and cycles, and might be more strongly recursive than any other known process.

Takeoff Speed II: Recalcitrance in AI pathways to Superintelligence.

I’m writing a series of posts clarifying my position on the intelligence explosion hypothesis. Last time I took a look at various non-AI pathways to Superintelligence and concluded that the recalcitrance profile for most of them was moderate to high.

This doesn’t mean it isn’t possible to reach Superintelligence via these routes, but it does indicate that doing so will probably be difficult even by the standards of people who think about building Superintelligences all day long.

AI-based pathways to Superintelligence might have lower recalcitrance than these alternatives, because of a variety of advantages a software mind could have over a biological one.

These advantages have been discussed at length elsewhere, but relevant to the present discussion is that software minds could have far greater introspective access to their own algorithms than humans do.

Of course programmers building such a mind might fear an intelligence explosion and endeavor to prevent this sort of deep introspection. But in principle an AI with such capabilities could become smart enough to start directly modifying and improving its own code.

Humans can only do a weak sort of introspection, and therefore can only do a weak sort of optimization to their thinking patterns. So far, anyway.

At a futurist party recently I was discussing these ideas with someone and they asked me what might happen if a recursively self-improving AI hit diminishing returns on each optimization. Might an intelligence explosion just sort of… fizzle out?

The answer is yes, that might happen. But so far as I can tell there isn’t any good reason to assume it will, and thus the safest bet is to act as though the intelligence explosion probably will happen and to start thinking hard about how to steer this runaway process in a direction that leads to a valuable future.

Takeoff Speed, I: Recalcitrance In Non-AI Pathways to Superintelligence

I’m writing a series of posts clarifying my position on the Intelligence Explosion hypothesis. Though I feel that the case for such an event is fairly compelling, it’s far less certain how fast the ‘takeoff’ will be, where ‘takeoff’ is defined as the elapsed time from having a roughly human-level intelligence to a superintelligence.

Once we’ve invented a way for humans to become qualitatively smarter or made machines able to improve themselves, should we expect greater-than-human intelligence in a matter of minutes or hours (a ‘fast takeoff’), over a period of weeks, months or years (a ‘moderate takeoff’), or over decades and centuries (a ‘slow takeoff’)? What sorts of risks might each scenario entail?

Nick Bostrom (2014) provides the following qualitative equation for thinking about the speed with which intelligence might explode:

Rate of Improvement = (optimization power) / (recalcitrance)

‘Recalcitrance’ here refers to how amenable a system might be to improvements, a value which varies enormously for different pathways to superintelligence.
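
To see what the equation implies, here is a toy numerical sketch with entirely arbitrary numbers: if optimization power is a constant supplied from outside (human researchers pushing on the system), intelligence grows linearly, but if the system’s own intelligence feeds back into the optimization power applied to it, the growth compounds.

def grow(steps=50, recalcitrance=10.0, recursive=False):
    intelligence = 1.0
    for _ in range(steps):
        # A recursive system applies its own intelligence as optimization power;
        # otherwise a fixed amount of outside effort is applied each step.
        optimization_power = intelligence if recursive else 1.0
        rate_of_improvement = optimization_power / recalcitrance  # Bostrom's equation
        intelligence += rate_of_improvement
    return intelligence

print(round(grow(recursive=False), 1))  # a slow, linear climb
print(round(grow(recursive=True), 1))   # a compounding takeoff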

A non-exhaustive list of plausible means of creating a superintelligence includes programming a seed AI which begins an improvement cascade, upgrading humans with smart drugs or computer interfaces, emulating a brain in a computer and then improving it or speeding it up, and making human organizations vastly superior.

These can broadly be lumped into ‘non-AI-based’ and ‘AI-based’ pathways, each of which has a different recalcitrance profile.

In the case of improving the human brain through drugs, genetic enhancements, or computers, we can probably expect the initial recalcitrance to be low because each of these areas of research is inchoate and there is bound to be low-hanging fruit waiting to be discovered.

The current generation of nootropics is very crude, so a few years or a decade of concerted, well-funded research might yield classes of drugs able to boost the IQs of even healthy individuals by 20 or 30 points.

But while it may be theoretically possible to find additional improvements in this area, the brain is staggeringly complicated with many subtle differences between individuals, so in practice we are only likely to get so far in trying to enhance it through chemical means.

The same basically holds for upgrading the human brain via digital prosthetics. I don’t know of any reason that working memory can’t be upgraded with the equivalent of additional sticks of RAM, but designing components that the brain tolerates well, figuring out where to put them, and getting them where they need to go is a major undertaking.

Beyond this, the brain and its many parts interact with each other in complex and poorly-understood ways. Even if we had solved all the technical and biological problems, the human motivation system is something that’s only really understood intuitively, and it isn’t obvious that the original motivations would be preserved in a radically-upgraded brain.

Perhaps, then, we can sidestep some of these issues and digitally emulate a brain which we speed up a thousand times.

Though this pathway is very promising, no one is sure what would happen to a virtual brain running much faster than its analog counterpart is supposed to. It could think circles around the brightest humans or plunge into demented lunacy. We simply don’t know.

Finally, there appears to be a very steep recalcitrance gradient in improving human organizations, assuming you can’t also modify the humans involved.

Though people have figured out ways of allowing humans to cooperate more effectively (and I assume the role the internet has played in improving the ability to coordinate on projects large and small is too obvious to need elaboration), it’s difficult to imagine what a large-scale general method for optimizing networks of humans would even look like.

None of the above should be taken to mean that research into Whole Brain Emulation or Human-Computer interaction isn’t well worth doing. It is, but many people make the unwarranted assumption that the safest path to superintelligence is to start with a human brain because at least then we’d have something with recognizably human motivations which, conversely, would also understand us.

But the difficulties adumbrated may make it more likely that some self-improving algorithm crosses the superintelligence finish line first, meaning our research effort should be focused on machine ethics.

Perhaps more troubling still, it isn’t safe to assume that we can manage brain upgrades, digital, chemical, or otherwise, in a precise enough manner to ensure that the resulting superintelligence is benevolent or even sane.

Peripatesis: Controlling A God, Hannibal Leaves Italy.

‘Peripatesis’ is a made-up word related to the word ‘peripatetic’, which is an adjective that means ‘roaming’ or ‘meandering’. I’ve always liked to think of knowledge as a huge structure through which a person could walk, sprint, dive, climb, or fly in as straightforward or peripatetic a fashion as they like.

Here are my recent wanderings and wonderings:

Bostrom, N., Superintelligence, p. 127-144

In the sprawling chapter 9 Bostrom discusses and finds problems with several proposed means of controlling a Superintelligence. These include boxing it, setting up tripwires, and building our preferences into its motivational system.

I plan on touching on these topics substantially in the future, so that’s all I’ll say about them for now.

Goldsworthy, A., The Fall of Carthage, p. 234-244

Hannibal only received reinforcements on one occasion, in 215, but his brother Hasdrubal crossed the Alps via the same path he did in 207, and his brother Mago landed in Genoa in 205.

Unfortunately for Hannibal, neither brother managed to accomplish much before being killed (or, in Mago’s case, dying en route to Carthage of a wound sustained in combat). In 203 Hannibal received orders to evacuate and come to the defense of Carthage, which was being menaced by Roman invaders in North Africa.