It’s a gloomy, rainy, almost wintry day in Paris, the kind of drizzle I don’t always love, and I’m starting to write the next entry in this newsletter, trying to figure out what, if anything, Christian Szegedy had in mind when he predicted, as I reported a few weeks ago, that “Autoformalization could enable the development of a human level mathematical reasoning engine in the next decade.” Is there exactly one “human level”? The expression is common among the knights of the artificial intelligence community, whose grail is something called “human level general AI.”