Since 1857, The Atlantic has been challenging established answers with tough questions. In this video, actor Michael K. Williams—best known as Omar Little from The Wire—wrestles with a question of his own: Is he being typecast?
What are you asking yourself about the world and its conventional wisdom? We want to hear your questions—and your thoughts on where to start finding the answers: hello@theatlantic.com. Each week, we’ll update this thread with a new question and your responses.
That’s the question that reader John Harris has been asking himself lately. He’s not alone: In 1862, one of The Atlantic’s founders, Ralph Waldo Emerson, wondered the same thing about aging. Acknowledging that “the creed of the street is, Old Age is not disgraceful, but immensely disadvantageous,” Emerson set out to explain the upsides of senescence. A common theme is the sense of serenity that comes with age and experience:
Youth suffers not only from ungratified desires, but from powers untried, and from a picture in his mind of a career which has, as yet, no outward reality. He is tormented with the want of correspondence between things and thoughts. … Every faculty new to each man thus goads him and drives him out into doleful deserts, until it finds proper vent. … One by one, day after day, he learns to coin his wishes into facts. He has his calling, homestead, social connection, and personal power, and thus, at the end of fifty years, his soul is appeased by seeing some sort of correspondence between his wish and his possession. This makes the value of age, the satisfaction it slowly offers to every craving. He is serene who does not feel himself pinched and wronged, but whose condition, in particular and in general, allows the utterance of his mind.
By 1928, advances in medicine had made it easier to take a long lifespan for granted. In an Atlantic article titled “The Secret of Longevity” (unavailable online), Cary T. Grayson noted that “probably at no other time in the history of the human race has so much attention been paid to the problem of prolonging the span of life.” He offered a word of warning:
Any programme which has for its object the prolongation of life must also have, accompanying this increased span of life, the ability of the individual to engage actively and with some degree of effectiveness in the affairs of life. Merely to live offers little to the individual if he has lost the ability to think, to grieve, or to hope. There is perhaps no more depressing picture than that of the person who remains on the stage after his act is over.
On the other hand, as Cullen Murphy contended in our January 1993 issue, an eternity spent with no decrease in faculties wouldn’t necessarily be desirable either:
There are a lot of characters in literature who have been endowed with immortality and who do manage to keep their youth. Unfortunately, in many cases nobody else does. Spouses and friends grow old and die. Societies change utterly. The immortals, their only constant companion a pervading loneliness, go on and on. This is the pathetic core of legends like those of the Flying Dutchman and the Wandering Jew. In Natalie Babbitt’s Tuck Everlasting, a fine and haunting novel for children, the Tuck family has inadvertently achieved immortality by drinking the waters of a magic spring. As the years pass, they are burdened emotionally by an unbridgeable remoteness from a world they are in but not of.
Since antiquity, Murphy wrote, literature has had a fairly united stance on immortality: “Tamper with the rhythms of nature and something inevitably goes wrong.” After all, people die to make room for more people, and pushing lifespans beyond their ordinary limits risks straining resources as well as reshaping families.
Charles C. Mann examined some of those potential consequences in his May 2005 Atlantic piece “The Coming Death Shortage,” predicting a social order increasingly stratified between “the very old and very rich on top … a mass of the ordinary old … and the diminishingly influential young.” Presciently, a few years before the collapse of the real-estate bubble that wiped out millions of Americans’ retirement savings, Mann outlined the effects of an increased proportion of older people in the workforce:
When lifespans extend indefinitely, the effects are felt throughout the life cycle, but the biggest social impact may be on the young. According to Joshua Goldstein, a demographer at Princeton, adolescence will in the future evolve into a period of experimentation and education that will last from the teenage years into the mid-thirties. … In the past the transition from youth to adulthood usually followed an orderly sequence: education, entry into the labor force, marriage, and parenthood. For tomorrow’s thirtysomethings, suspended in what Goldstein calls “quasi-adulthood,” these steps may occur in any order.
In other words, Emerson’s period of “ungratified desires and powers untried” would be extended indefinitely. Talk about doleful deserts! On top of such Millennial malaise, Mann also predicted increased marital stress, declining birth rates, a depleted labor force, and a widespread economic slowdown as the world’s most powerful nations entered a “longevity crisis.”
But that’s just one vision. Another came from Gregg Easterbrook, who anticipated “a grayer, quieter, better future” in his October 2014 Atlantic article “What Happens When We All Live to 100?” His argument has some echoes of Emerson’s, but with modern science to back it up:
Neurological studies of healthy aging people show that the parts of the brain associated with reward-seeking light up less as time goes on. Whether it’s hot new fashions or hot-fudge sundaes, older people on the whole don’t desire acquisitions as much as the young and middle-aged do. Denounced for generations by writers and clergy, wretched excess has repelled all assaults. Longer life spans may at last be the counterweight to materialism.
Deeper changes may be in store as well. People in their late teens to late 20s are far more likely to commit crimes than people of other ages; as society grays, the decline of crime should continue. Violence in all guises should continue downward, too. … Research by John Mueller, a political scientist at Ohio State University, suggests that as people age, they become less enthusiastic about war. Perhaps this is because older people tend to be wiser than the young—and couldn’t the world use more wisdom?
It’s a good point. Couldn’t we all use more wisdom, more experience, more opportunities to learn? Wouldn’t we make better use of our lives if our lives went on forever? Not so fast, Olga Khazan wrote last month:
A common fear about life in our brave, new undying world is that it will just be really boring, says S. Matthew Liao, director of the Center for Bioethics at New York University. Life, Liao explained, is like a party—it has a start and end time. … “But imagine there’s a party that doesn’t end,” he continued. “It would be bad, because you’d think, ‘I could go there tomorrow, or a month from now.’ There’s no urgency to go to the party anymore.”
The Epicureans of ancient Greece thought about it similarly, [psychologist Sheldon] Solomon said. They saw life as a feast: “If you were at a meal, you’d be satiated, then stuffed, then repulsed,” he said. “Part of what makes each of us uniquely valuable is the great story. We have a plot, and ultimately it concludes.”
Even so, some futurists believe immortality is within reach.
So, what do you think: Is there a limit to how long people should live? Is it selfish to want eternity for yourself, or would having even a few immortals around make the world better for everyone? Here’s one reader’s take:
This reminds me a bit of the Cylons in the “new” Battlestar Galactica.
With the ability to reincarnate infinitely, and be effectively immortal, they were callous towards humans, and killed humans with impunity. It was only when their ability to reincarnate was ended and they became effectively mortal (and thus subject to basically the same rules of death as humans) that they were driven to behave in a moral way.
But another reader argues:
I for one think the world would be a better place if we collectively took a longer view, and what better way to do that than to give everyone a stake in it?
Because of the internet, I write more and receive feedback from people I know (on Facebook) and online strangers (on TAD and other platforms that use Disqus). I use it as a jumping-off point and a resource for planning science lessons for my high-school students.
However, I don’t practice music as often as I used to.
On a similar note, another reader confesses, “I draw less because I’m always on TAD”:
As a sketch artist, I appreciate my ability to Google things I want to draw for a reference point, but that doesn’t make me more creative. I already had the image in my head and the ability to draw. I honed my skills drawing people the old-fashioned way, looking at pictures in books or live subjects and practicing till my fingers were going to fall off.
In my opinion, the internet also encourages people to copy the work of others that goes “viral” rather than creating something truly original. The fact that you can monetize that viral quality also makes it more likely that people will try to copy rather than create.
That’s the same reason a third reader worries that “the internet has become stifling for creativity”:
Maybe I am not looking in the right place, but most platforms seem to be more about reblogging/retweeting/reposting other people’s creations. Then there is the issue of having work stolen and credits removed.
As another reader notes, “This is the central conflict of fan fiction”:
It’s obviously creative. On the other hand, it is all based on blatant copying of another writer’s work. How much is this a huge expansion of a creative outlet, and how much is this actually people choosing to limit their own creativity by colonizing somebody else’s world rather than creating a new one?
For my part, I tend to think the internet has encouraged and elevated some amazing new forms of creativity based on reaction and re-creation, collaboration and synthesis.
Those creative forms are a big part of my job too: When I go to work, I’m either distilling my colleagues’ articles for our Daily newsletter or piecing together reader emails for Notes, and those curatorial tasks have been exciting and challenging in ways that I never expected. But I’ve also missed writing fiction and poetry and literary criticism, and I worry sometimes that I’m letting those creative muscles atrophy. If you’re a fanfic reader or writer (or videographer, or meme-creator, or content-aggregator) and would like to share your experience, please let us know: hello@theatlantic.com.
This next reader speaks up for creativity as “the product of synthesis”:
It’s not so much a quest for pure “originality,” as it is a quest for original perspectives or original articulations. I’d say that my creativity has been fueled by letting myself fall into occasional rabbit holes. Whether that’s plodding through artists I don’t know well on Spotify or following hyperlinks in a Wiki piece until I have forgotten about what it was that I initially wondered, that access to knowledge in a semi-random form triggers the old noggin like little else.
On the other hand: So much knowledge! So many rabbit holes! Jim is paralyzed:
I find many more ideas and inspirations, but the flow of information and ideas is so vast that I never find time to develop them. I need to get off the internet.
Diane is also exasperated:
The promise of digital technology was: spinning piles of straw into useful pieces of gold.
My reality is: looking for golden needles in a giant haystack of unusable straw.
I spend so much time looking for the few things actually useful to my project, my writing, my daily info needs, and by the end of the day I feel like I’ve wasted so much time and effort sorting through useless crap. And the useless pile keeps getting bigger and bigger, like a bad dream.
This next reader provides some tips for productive discovery:
I am old enough to vaguely recall a time before I began to use the internet on a daily basis. What I would do, back then, when I got stuck and could not find a creative angle on a problem, was to go to some arbitrary corner of the library, take down the first book that caught my interest even though it had nothing to do with the problem at hand, and read a few pages—sometimes, the whole book. More often than not, it would trigger all sorts of analogies, and at least a few of them usually turned out to be fruitful. (Even if nothing turned out to be relevant, I usually still learned something interesting, so it was a win-win strategy.) It was a great way (to borrow Horace Walpole’s definition of serendipity) to make discoveries, by accidents and sagacity, of things one was not in quest of.
I try to use the internet in a somewhat similar fashion: When I’m stuck, I often spend a morning strolling around arbitrary corners of the internet, trying to discover stuff I did not know I was in quest of. Typically, I start in some academic resource like JSTOR. (I almost always start by limiting my search to articles at least 50 years old; it ensures that one does not end up reading fashionable stuff and thus thinking the same thoughts as all the other hamsters in the academic wheel. Also, older articles are usually far better written than the crap that results from the publish-or-perish system.) I am not above using, say, Wikipedia, though, at least as a point of departure.
I also like reading old stuff in online newspaper/magazine archives. Sometimes, a stray remark in one of those wonderful 19th-century magazines written by and for men of letters is all you need to get a fresh angle on a familiar problem.
Gotta love those 19th-century magazines. In some ways, their mission wasn’t so different from that of the Facebook groups and Reddit threads and Disqus forums of today: creating a space for discourse and exchange and reflection, where exciting new ideas could bump up against each other. As James Russell Lowell, The Atlantic’s founding editor, wrote to a friend in 1857, “The magazine is to be free without being fanatical, and we hope to unite in it all available talent of all modes of opinion.” And as Terri, one of the founding members of TAD, reflects today:
TAD itself has been a creative endeavor for me and the other mods. Envisioning the community we wanted. Coming up with ideas to bring it to life. We developed ideas around the mix of politics, open and fun threads that the community has taken on and grown. It really has been a creative experience in collaboration on the internet.
Check out TAD’s whole discussion on creativity, along with many more of the group’s threads. As for the offline benefits of online collaboration, take it from this reader—a “furniture maker and Weimaraner enthusiast”:
I would like to share a story about a project I am working on in which the internet has certainly aided my creativity. Zeus, our 8-month-old Weimaraner, is a couch hog. When my girlfriend and I sit down on the couch to watch TV, he will sit directly in front of us and bark until we make room for him. There are three large dog beds in the house, but Zeus steadfastly refuses to lie on the dog beds.
I am a member of a Weimaraner-owner Facebook group called Weim Crime. Several people in the group have had similar problems. We came up with a solution I tested out last week: build a dog bunk bed with one bed on the bottom and one bed about the same height as our couch.
It has worked out very well. Zeus quietly relaxes on the top dog bunk while we sit on the couch. I am now collecting feedback from that same group before building the more attractive final version. I have received very useful feedback—for example, lowering the top bunk deck to 18 inches or lower to prevent joint injuries. My end goal is to design and build a simple, low-cost dog bunk bed that is more attractive than the prototype and post a YouTube video showing other owners how to build a similar one.
This is just one silly project, but the feedback and interest I have received regarding it have been really inspiring.
What questions about your day-to-day experience of the world have you been pondering? We welcome your feedback and inspirations. Check back Monday for the next discussion question in this series—and in the meantime, enjoy some Weimaraner art.
What the internet does to the mind is something of an eternal question. Here at The Atlantic, in fact, we pondered that question before the internet even existed. Back in 1945, in his prophetic essay “As We May Think,” Vannevar Bush outlined how technology that mimics human logic and memory could transform “the ways in which man produces, stores, and consults the record of the race”:
Presumably man’s spirit should be elevated if he can better review his shady past and analyze more completely and objectively his present problems. He has built a civilization so complex that he needs to mechanize his records more fully if he is to push his experiment to its logical conclusion and not merely become bogged down part way there by overtaxing his limited memory. His excursions may be more enjoyable if he can reacquire the privilege of forgetting the manifold things he does not need to have immediately at hand, with some assurance that he can find them again if they prove important.
Bush didn’t think machines could ever replace human creativity, but he did hope they could make the process of having ideas more efficient. “Whenever logical processes of thought are employed,” he wrote, “there is opportunity for the machine.”
Fast-forward six decades, and search engines had claimed that opportunity, acting as a stand-in for memory and even for association. In his October 2006 piece “Artificial Intelligentsia,” James Fallows confronted the new reality:
If omnipresent retrieval of spot data means there’s less we have to remember, and if categorization systems do some of the first-stage thinking for us, what will happen to our brains?
I’ve chosen to draw an optimistic conclusion, from the analogy of eyeglasses. Before corrective lenses were invented, some 700 years ago, bad eyesight was a profound handicap. In effect it meant being disconnected from the wider world, since it was hard to take in knowledge. With eyeglasses, this aspect of human fitness no longer mattered in most of what people did. More people could compete, contribute, and be fulfilled. …
It could be the same with these new computerized aids to cognition. … Increasingly we all will be able to look up anything, at any time—and, with categorization, get a head start in thinking about connections.
But in Nicholas Carr’s July 2008 piece “Is Google Making Us Stupid?,” he was troubled by search engines’ treatment of information as “a utilitarian resource to be mined and processed with industrial efficiency.” And he questioned the idea that artificial intelligence would make people’s lives better:
It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.
Even as Carr appreciated the ease of online research, he felt the web was “chipping away [his] capacity for concentration and contemplation.” It was as if the rote tasks of research and recall, far from wasting innovators’ time, were actually the building blocks of more creative, complex thought.
On the other hand, “you should be skeptical of my skepticism,” as Carr put it. And from the beginning, one great benefit of the internet was that it brought people in contact not just with information, but with other people’s ideas. In April 2016, Adrienne LaFrance reflected on “How Early Computer Games Influenced Internet Culture”:
In the late 1970s and early 1980s, game makers—like anyone who found themselves tinkering with computers at the time—were inclined to share what they learned, and to build on one another’s designs. … That same culture, and the premium it placed on openness, would eventually carry over to the early web: a platform that anyone could build on, that no one person or company could own. That idea is at the heart of what proponents for net neutrality are trying to protect—that is, the belief that openness is a central value, perhaps even the foundational value, of what is arguably the most important technology of our time.
But as tech culture evolved and pervaded life outside the web, even its problem-solving methods began to seem reductive at times. Ian Bogost outlined that paradox in November 2016 when a new product called ketchup leather was billed as the “solution” to soggy burgers:
The technology critic Evgeny Morozov calls this sort of thinking “solutionism”—the belief that all problems can be solved by a single and simple technological solution. … Morozov is concerned about solutionism because it recasts social conditions that demand deeper philosophical and political consideration as simple hurdles for technology. …
But solutionism has another, subtler downside: It trains us to see everything as a problem in the first place. Not just urban transit or productivity, but even hamburgers. Even ketchup!
So, what’s your personal experience of how the internet affects creativity? Can you point to a digital distraction—Netflix, say, or Flappy Bird—that’s enriched your thinking in other areas of your life? On the flip side of the debate, can you point to a tool like email or Slack that’s sharpened your efficiency but narrowed the scope of your ideas? We’d like to hear your stories; please send us a note: hello@theatlantic.com.