r/unixporn has no chance against this config
The original Terminator is a truly terrifying film. But its most horrifying moment is also unintentionally comical. The titular Terminator is a massively overpowered killing machine that stalks its human prey through urban hunting grounds. At a couple of moments in the film, we see what the Terminator sees via its graphical heads-up display, or Termovision. But what is actually running on the Terminator? Surprise! The T-800 Termovision is actually displaying Apple II assembly code intended for something much more mundane than terminating Sarah Connor. All good visual effects rely on suspension of disbelief, but it does undercut the menace of the machine to discover it running on consumer hardware rather than some advanced military-industrial weapontech. And while the Apple II was respectable in the 1980s, it is, to paraphrase Weird Al, a “dorito” compared to today’s chips.
This is – like the depiction of computers in movies and TV in general – an illustration of both the power and the limitations of the terminal aesthetic. There is no need for an autonomous computer to have a graphical dashboard that monitors system variables and echoes system data and commands to the screen. Later, this was retconned as an interface intended for human engineers, technicians, and controllers working with drone precursors to the humanoid bipedal Terminator endoskeleton. In the context of the original movie, it is purely for visual aesthetic effect. It chills viewers to see other humans reduced to targets that are dispassionately acquired, evaluated, and, if necessary to the mission, terminated with extreme prejudice. And only geeks would care what is running on the HUD anyway, so nothing is lost for the average movie-goer.
The Termovision exemplifies the mystique of the command line – and terminals more broadly. It also shows why this mystique ultimately is just as awkward as it is cool. Originally, the Termovision was purely for aesthetic effect. If taken literally, it is nonsensical. Cheer up! If a deadly cyborg can run off an Apple II 6502 chip (prehistoric compared to even the lowest of the low-end computers today), anything is possible! You don’t need an actual mad science lab, just a (mad scientist) dream! Retconning the Termovision into a debug and control display for human technicians gives it an actual function, but one that is still curiously superfluous. A killer autonomous robot does not need to see the world like a person does, and in any event people can biologically sense their important “system variables” rather than having numbers projected onto their eyes. If you’re “low on fuel” you will get hunger pangs and start thinking about where to get lunch.
Ironically, even after exterminating its human masters, the T-800 is still left with the vestigial legacy of the interfaces those humans used to program, debug, and operate it. More advanced Terminators like the T-1000 accordingly have no use for Termovision displays. This makes them even more terrifying to us than the machines in the T-series under 1000, such as the T-800 and T-900. They bear no trace of a human hand. If they did, they would at least have interfaces comparable to those human power users rely on for low-level interaction with machines: some kind of text-based display of system variables and commands, presumably with a way for humans to program and operate the machine with standard input devices. The absence of anything resembling a human control interface from the T-1000 tells us something important, much in the way that the presence of the Termovision does for the T-800.
Technological changes often involve the removal of familiar technologies from embodiment in everyday experience. But perceptions of those technologies do not necessarily change even as they are removed from their original context. The persistence of older perceptions and iconography of technology is also mirrored by the stubborn persistence of the visions of the future they symbolize. In Star Trek, they have limitless resources and faster-than-light travel. But they still rely on…. terminals. To go where no man has gone before…. out of vim. In Neon Genesis Evangelion, when the intrepid NERV gang stops the malfunctioning giant robot JetAlone, they find…. DOS? Maybe that explains why JetAlone went on a rampage – if people are still using DOS in 2015 for important applications, I guess that’s asking for trouble.
The persistence of the terminal – and its cousins – as a metaphor in computing signifies the failures of modern computing as much as its successes. Terminals – in whatever medium – will likely continue to look futuristic even if the actual technology is in theory older than even the oldest of the “OK Boomer” generation. The mystique of the terminal is inseparable from the promise of actual agency over computers, a promise ghettoized into text-based system interfaces hidden from the view of ordinary users. That which is inaccessible is often that which is mystified, and thus mystification is the fate of the terminal. Is this a good or bad thing? It depends on your opinions about the underlying system hardware, the logical organization of the software, and the way in which both are presented to users. And opinions…. like anything about computers…. can differ quite vehemently. This post is about the linkage between terminals in computers and a broader terminal aesthetic. It will try to look at the technical, organizational, and cultural elements.
My first exposure to something approximating a command line was not bash or cmd.exe. When I was growing up, first-person shooter games featured ‘consoles,’ dropdown text environments that could be summoned by hitting a key like ` (the backtick). Confusingly, this was a feature first introduced in PC games such as Quake, not actual game consoles such as the Sony PlayStation or Super Nintendo. I’ll get to the linguistic slipperiness of ‘console’ and other words later. In Quake, console commands – originally implemented in quake.exe – allowed players to manipulate every feature of the game world, from how fast the player-controlled characters could strafe to COM ports on the PC motherboard. Later, geeks brought the Quake console to desktops. When I was looking for graphical terminal emulators, I found Quake-style dropdown terminals that – like their inspiration – drop down from the top of the screen as soon as you press a trigger key.
But what’s the difference between command lines, consoles, and terminals, you might ask? It is easy to get the terms ‘terminal,’ ‘console,’ and ‘shell’ mixed up. Once upon a time, in a galaxy far, far away, people needed remote devices to access computers. They still do, but bear with me, because I just wanted to use that line in this post. At first these devices grew out of electromechanical teletype systems (note that Alan Turing stipulated the use of teletype machines in the Imitation Game). Then came the “dumb” terminal with a keyboard and a display monitor. Now, all of the variegated components that were necessary to manipulate remote computers can live in the same place. In one computer. At least in the UNIX world, the following roughly obtains: the ‘terminal’ is a general text input/output environment, and the console is the primary terminal connected directly to the computer.
Muddying the waters a bit is the way in which operating systems like Linux provide ‘virtual’ consoles that – while logically separate – draw from the same keyboard and display device (nb: much of this also applies to other UNIXes). Linux provides six virtual consoles, each of which can be used for text input. Canonically the seventh virtual console is reserved for the X Window System, though some Linux distributions like Arch Linux run X from tty1. One of the nice things about this is that you can – by hitting CTRL + ALT + F(whatever is not your X server) – drop into a virtual console to fix problems in the graphical desktop without actually quitting the desktop. But either way, the console – like all terminal environments – is a means to access the shell. The shell is a command line interpreter which receives and interprets commands in character format.
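Under the hood, that CTRL + ALT + F-key dance is just an ioctl on the console device. Here is a hedged sketch of roughly what a tool like chvt does (Linux-specific, usually needs root; an illustration rather than a robust utility):

```c
/* Hedged sketch of what a tool like chvt(1) does on Linux: ask the
   kernel to switch virtual consoles via the VT ioctls. Usually needs
   root; an illustration, not a robust utility. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vt.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <console-number>\n", argv[0]);
        return 1;
    }
    int num = atoi(argv[1]);
    int fd = open("/dev/tty0", O_RDWR);       /* the current console */
    if (fd < 0) { perror("open /dev/tty0"); return 1; }
    if (ioctl(fd, VT_ACTIVATE, num) < 0 ||    /* request the switch */
        ioctl(fd, VT_WAITACTIVE, num) < 0) {  /* wait for it to land */
        perror("VT ioctl");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}
```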
You can launch the bash shell (one of many shells available) and use the vmstat command to get a quick look at your virtual memory consumption. If you have a graphical desktop installed, you will likely launch the shell via terminal emulators such as the venerable xterm and its cousins/children. xterm exists to emulate various historic terminals such as the DEC VT102/VT220, some scattered scraps of the overarching VTxxx family, and the Tektronix 4014 terminal. If you do not install a graphical desktop environment, the only way to interact with the shell is the Linux console, a text display that appears when the computer boots. A critical difference between xterm and the console is that terminal emulators are implemented in userspace as normal applications, whereas the Linux console is drawn directly by the underlying Linux kernel. Most users will rely on the terminal emulator because UNIX-family operating systems – Linux or otherwise – without some kind of desktop have limited utility at best. That is, of course, part of the problem this post is examining, but that will need to wait until we conclude the abbreviated technical summary.
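Where does vmstat get its numbers? On Linux, mostly from the kernel’s /proc interface. As a toy illustration (not how vmstat is actually implemented, just the same data source), here is a sketch that prints a few memory counters:

```c
/* A toy cousin of vmstat, as a hedged illustration only: the real tool
   reads kernel counters from /proc; here we just print the first few
   lines of /proc/meminfo. Linux-specific. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) { perror("/proc/meminfo"); return 1; }
    char line[256];
    for (int i = 0; i < 5 && fgets(line, sizeof line, f); i++)
        fputs(line, stdout);   /* MemTotal, MemFree, ... in kB */
    fclose(f);
    return 0;
}
```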
The terminal emulator “emulates” because it, like the dumb terminal, must go through various other layers of mediation. A (semi-)concise way of explaining this is the distinction between the tty and the pty. In keeping with how all of this originally began, the teletype machine lives on in the tty, a device file that allows for direct interaction between hardware devices (such as keyboards, mice, and/or serial devices) and the computer. The tty is a hardware-emulated teletype that uses the screen and keyboard connected to the computer. If you are not using the graphical desktop and you type tty to print the file name of the terminal currently connected to standard input, you will get something like /dev/tty1. Terminal emulators on the desktop, however, emulate teletypes using software and go through more and different layers of mediation.
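The tty command itself is tiny. Here is a sketch of roughly what it does – the real implementation differs, but the system calls are the same idea:

```c
/* A sketch of roughly what the tty(1) command does: ask which terminal
   device, if any, is connected to standard input. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    if (isatty(STDIN_FILENO)) {
        /* Expect /dev/tty1 and friends on a Linux virtual console,
           /dev/pts/N inside a terminal emulator or ssh session. */
        printf("%s\n", ttyname(STDIN_FILENO));
        return 0;
    }
    puts("not a tty");       /* e.g. stdin redirected from a file */
    return 1;
}
```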
Programs like xterm must go through the pty – the pseudoterminal. A pty appears to any program attached to it as if it were a real terminal. Accordingly, when you type tty in a terminal emulator or a remote login session, you will get something under /dev/pts – the slave half of the pseudoterminal pair. Pseudoterminals are used both by network login services that allow for remote operation of the computer and by terminal emulators that are intended to be used by someone on the same physical computer. Whether you are using xterm on a graphical desktop or are remotely logged into the computer using ssh, inputs and outputs will still depend on the pseudoterminal as middleman. How does this, in turn, work? In UNIX terminology, the pseudoterminal is formed by connecting a master and slave pair for communication. /dev/pts holds the pseudoterminal slaves and /dev/ptmx is the pseudoterminal master. The terminal emulator uses the master, and whatever program is running in the emulator works with the slave.
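This master/slave division is the entire trick behind a terminal emulator. A minimal hedged sketch of the core loop – not any real emulator’s code, and missing niceties like putting the outer terminal into raw mode or handling window resizes:

```c
/* A hedged sketch of a terminal emulator's core, not any real emulator's
   code: spawn a shell on a pseudoterminal and shuttle bytes both ways.
   On glibc, forkpty() lives in <pty.h> and needs -lutil at link time. */
#include <pty.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>

int main(void) {
    int master;
    pid_t pid = forkpty(&master, NULL, NULL, NULL);  /* allocates the pty pair */
    if (pid < 0) { perror("forkpty"); return 1; }
    if (pid == 0) {
        /* Child: stdin/stdout/stderr are now the slave side of the pty. */
        execlp("/bin/sh", "sh", (char *)NULL);
        perror("execlp");
        _exit(127);
    }
    /* Parent: relay between our own terminal and the pty master.
       (A real emulator would also put its terminal into raw mode;
       without that you will see doubled echo. It's a sketch.) */
    char buf[4096];
    for (;;) {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(STDIN_FILENO, &fds);
        FD_SET(master, &fds);
        if (select(master + 1, &fds, NULL, NULL, NULL) < 0)
            break;
        if (FD_ISSET(STDIN_FILENO, &fds)) {
            ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
            if (n <= 0) break;
            write(master, buf, n);           /* keystrokes -> shell */
        }
        if (FD_ISSET(master, &fds)) {
            ssize_t n = read(master, buf, sizeof buf);
            if (n <= 0) break;               /* shell exited */
            write(STDOUT_FILENO, buf, n);    /* shell output -> screen */
        }
    }
    return 0;
}
```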
This will be briefly illustrated with the parable of a badly behaving program called doge_simulator in the not-so-distant past (this is a retrofuture for the same reason New Retrowave music is). Suppose that it is sometime in the early 1990s (or within spitting distance of it) and a user is enjoying the wonders of a NeXTSTEP computer and its then-revolutionary graphical desktop. She is supposed to be doing some kind of work but is in fact using a terminal emulator to play doge_simulator, a whimsical text-based doge adventure. However, she hears her boss open the door to the office building, and she has to quit doge_simulator before he can walk over to her cubicle. Because doge_simulator is shareware she downloaded off a BBS, she doesn’t know all of the commands – she would have to pay up to get the full manual, and she just wants some doge entertainment.
But she has to stop this doge game before the boss passes her cubicle, because not only is she slacking off at work, she really should not be installing odd doge programs she found in strange BBS forums on work computers. Desperate, she just gives up and slams down hard on CTRL-C to stop the game. This does the trick. No more doge. What did she just do? She wrote the interrupt control character to the master device – which her terminal emulator is attached to – and this generated an interrupt signal for the slave device – which doge_simulator was using. The point is that anything written to the master device is taken by the process on the slave device as if it were typed on a terminal. But as with anything regarding computers, there is no substitute for actually reading the manuals. Specifically, the manual for pty:
A pseudoterminal (sometimes abbreviated “pty”) is a pair of virtual character devices that provide a bidirectional communication channel. One end of the channel is called the master; the other end is called the slave. The slave end of the pseudoterminal provides an interface that behaves exactly like a classical terminal. A process that expects to be connected to a terminal, can open the slave end of a pseudoterminal and then be driven by a program that has opened the master end. Anything that is written on the master end is provided to the process on the slave end as though it was input typed on a terminal. For example, writing the interrupt character (usually control-C) to the master device would cause an interrupt signal (SIGINT) to be generated for the foreground process group that is connected to the slave. Conversely, anything that is written to the slave end of the pseudoterminal can be read by the process that is connected to the master end. Pseudoterminals are used by applications such as network login services ssh, rlogin, telnet, terminal emulators such as xterm, script, screen, and expect. Data flow between master and slave is handled asynchronously, much like data flow with a physical terminal. Data written to the slave will be available at the master promptly, but may not be available immediately. Similarly, there may be a small processing delay between a write to the master, and the effect being visible at the slave.
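The man page’s control-C example is exactly what happened to our doge player. From the slave side, a program like the hypothetical doge_simulator just experiences an ordinary SIGINT; a sketch of how a better-behaved version might catch it:

```c
/* Hedged sketch: how a better-behaved doge_simulator (a made-up program)
   might experience the Ctrl-C written to the pty master: as SIGINT. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int sig) {
    (void)sig;
    got_sigint = 1;          /* async-signal-safe: just set a flag */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);

    puts("wow. such game. (Ctrl-C to quit)");
    while (!got_sigint)
        pause();             /* sleep until a signal arrives */
    puts("much interrupt. very terminate. goodbye.");
    return 0;
}
```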
Also see the Linux manual entry for pts for more:
The file /dev/ptmx is a character file with major number 5 and minor number 2, usually with mode 0666 and ownership root:root. It is used to create a pseudoterminal master and slave pair. When a process opens /dev/ptmx, it gets a file descriptor for a pseudoterminal master (PTM), and a pseudoterminal slave (PTS) device is created in the /dev/pts directory. Each file descriptor obtained by opening /dev/ptmx is an independent PTM with its own associated PTS, whose path can be found by passing the file descriptor to ptsname(3). Before opening the pseudoterminal slave, you must pass the master’s file descriptor to grantpt(3) and unlockpt(3). Once both the pseudoterminal master and slave are open, the slave provides processes with an interface that is identical to that of a real terminal.
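To make the ptmx/pts dance concrete, here is a minimal sketch that walks through the steps the manual describes – open the master, grantpt and unlockpt, open the slave, and pass a byte through (error handling abbreviated):

```c
/* A minimal sketch of the dance the manuals describe: open the master
   via /dev/ptmx, unlock the slave, open it, and pass a byte through. */
#define _XOPEN_SOURCE 600
#include <stdio.h>
#include <stdlib.h>   /* posix_openpt, grantpt, unlockpt, ptsname */
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    int master = posix_openpt(O_RDWR | O_NOCTTY);   /* opens /dev/ptmx */
    if (master < 0) { perror("posix_openpt"); return 1; }
    if (grantpt(master) < 0 || unlockpt(master) < 0) {
        perror("grantpt/unlockpt");
        return 1;
    }
    char *slave_name = ptsname(master);             /* e.g. /dev/pts/4 */
    printf("slave side lives at %s\n", slave_name);

    int slave = open(slave_name, O_RDWR | O_NOCTTY);
    if (slave < 0) { perror("open slave"); return 1; }

    /* Anything written to the master arrives on the slave as if typed. */
    write(master, "wow\n", 4);
    char buf[64];
    ssize_t n = read(slave, buf, sizeof buf);
    if (n > 0) fwrite(buf, 1, (size_t)n, stdout);   /* prints "wow" */
    close(slave);
    close(master);
    return 0;
}
```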
I have spoken only about UNIX so far – with a bias towards Linux. What about Windows? For a long time, Windows did not enjoy the full functionality of a terminal emulator, even if many able approximations of it existed. In 2018, Microsoft took its first steps towards introducing a proper terminal emulator to Windows. The newest iteration of it will even support retro-style visual effects, harkening back to the era before liquid crystal displays. This leveraged a new implementation of a pseudoterminal on top of the existing Windows console base to make Windows more attractive to non-Windows developers.
Why would a terminal emulator and a pseudoterminal be important to developers? Microsoft published a lengthy blog series outlining the history of text-based interfaces for computers that is well worth reading. The Microsoft blog series is, of course, intended as a preview of Microsoft’s own implementation of a UNIX-style pseudoterminal. Keep in mind the earlier UNIX definitions of ‘console,’ ‘shell,’ and ‘terminal,’ even if they only loosely apply to Windows. Microsoft Windows grew out of MS-DOS, an operating system for the IBM PC and compatibles whose native interface was a command shell. MS-DOS demanded that users primarily interact with the system via text, even if graphical applications were launchable via text commands. Once MS-DOS gave way to Microsoft Windows, command shell interaction happened primarily via graphical windows drawn on the desktop. So why did Windows eventually modify its internals to allow for terminal emulation? This too requires an interesting history lesson.
Prior to 2018, using the Windows console required going through ConDrv.sys – the kernel driver ferrying control messages back and forth between the Windows console and any command-line applications interacting with it – and ConHost.exe – the underlying mechanism that enables user input/output with the console. Only Windows command-line applications can communicate with the console API, and only ConHost.exe can be attached to command-line applications. All of this results in a good deal of what the former vice president often refers to as “malarkey”:
Generally, on *NIX based systems, when a user wants to launch a Command-Line tool, they first launch a Terminal. The Terminal then starts a default shell, or can be configured to launch a specific app/tool. The Terminal and Command-Line app communicate by exchanging streams of characters via a Pseudo TTY (PTY) until one or both are terminated. On Windows, however, things work differently: Windows users never launch the Console (conhost.exe) itself: Users launch Command-Line shells and apps, not the Console itself! Yes, in Windows, users launch the Command-Line app, NOT the Console itself. If a user launches a Command-Line app from an existing Command-Line shell, Windows will (usually) attach the newly launched Command-Line.exe to the current Console. Otherwise, Windows will spin up a new Console instance and attach it to the newly launched app. Because users run Cmd.exe or PowerShell.exe and see a Console window appear, they labor under the common misunderstanding that Cmd and PowerShell are, themselves, “Consoles” … they’re not! Cmd.exe and PowerShell.exe are “headless” Command-Line applications that need to be attached to a Console (conhost.exe) instance from which they receive user input and to which they emit text output to be displayed to the user.
Sounds interesting, but what’s the big deal?
Windows Command-Line apps run in their own processes, connected to a Console instance running in a separate process. This is just like in *NIX where Command-Line applications run connected to Terminal apps. Sounds good, right? Well … no; there are some problems here because Console does things a little differently. Console and Command-Line app communicate via IOCTL messages through the driver, not via text streams (as in *NIX). Windows mandates that ConHost.exe is the Console app which is connected to Command-Line apps. Windows controls the creation of the communication “pipes” via which the Console and Command-Line app communicate. … On *NIX-based platforms, the notion that terminals and command-line applications are separate and simply exchange characters, has resulted in *NIX Command-Lines being easy to access and operate from a remote computer/device: As long as a terminal and a Command-Line application can exchange streams of characters via a some type of ordered serial communications infrastructure (TTY/PTY/etc.), it is pretty trivial to remotely operate a *NIX machine’s Command-Line. On Windows however, many Command-Line applications depend on calling Console API’s, and assume that they’re running on the same machine as the Console itself. This makes it difficult to remotely operate Windows Command-Line shells/tools/etc.: How does a Command-Line application running on a remote machine call API’s on the user’s local machine’s Console? And worse, how does the remote Command-Line app call Console API’s if its being accessed via a terminal on a Mac or Linux box?!
Up until Windows created the ConPTY API, all of this malarkey persisted. Now, Windows is slowly incorporating a different interface paradigm into its infrastructure. This is good if you want UNIX-style terminals and their flexibility in Microsoft Windows. Having now looked at the manner in which the pseudoterminal is relevant to UNIX and Windows, it is quite clear that the term “pseudoterminal” does not do justice to the significance of the conceptual displacement it enables. It makes the terminal emulator a simulacrum of the aforementioned dumb video terminals. Terminals as an aesthetic identity originated in part from the conflux of physical terminals, consoles, shells, computing devices, and video displays. When all of these became physically co-located, the terminal became primarily an emulation of a dumb client in a graphical desktop window. Though terminals were originally intended to manipulate external computers, a terminal used on the same machine treats the machine itself the same way it would treat an external computer.
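Before leaving the Windows weeds entirely, the new ConPTY entry point is worth a glance, because it is refreshingly UNIX-ish: a terminal hands the system a pair of pipes and gets back a pseudoconsole handle. A hedged sketch (assuming Windows 10 1809 or later and a recent SDK; error handling and the process-spawning half elided):

```c
/* Hedged sketch of the ConPTY entry point (Windows 10 1809 or later,
   recent SDK assumed; error handling and the process-spawning half
   elided). The terminal side holds inWrite and outRead. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    HANDLE inRead, inWrite, outRead, outWrite;
    /* The terminal writes keystrokes to inWrite; ConPTY reads them from
       inRead. The attached app's VT-encoded output arrives on outRead. */
    CreatePipe(&inRead, &inWrite, NULL, 0);
    CreatePipe(&outRead, &outWrite, NULL, 0);

    COORD size = { 80, 25 };                  /* columns, rows */
    HPCON hpc;
    HRESULT hr = CreatePseudoConsole(size, inRead, outWrite, 0, &hpc);
    if (FAILED(hr)) {
        fprintf(stderr, "CreatePseudoConsole failed: 0x%08lx\n",
                (unsigned long)hr);
        return 1;
    }
    /* A real terminal would now launch e.g. cmd.exe with the
       PROC_THREAD_ATTRIBUTE_PSEUDOCONSOLE attribute pointing at hpc,
       then pump bytes between the pipes and its renderer. */
    ClosePseudoConsole(hpc);
    return 0;
}
```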
There is, obviously, a valid technical reason for treating the local machine as if it were remote. As the Windows blog post notes, terminals and applications exchanging streams of characters make it very easy to operate a computer remotely with the same ease that one operates it physically. With the exception of the Terminator opening, this post has mostly discussed technical and logical aspects so far. What about the cultural iconography of the terminal?
For Scott Bukatman, all of these similarities and linkages produce the common substrate of the “terminal.” The terminal is the centerpiece of the cultural theory advanced in Bukatman’s 1993 book Terminal Identity: the migration of the world into an “imploded inner space” and a form of subjectivity primarily characterized by interaction with a video screen on a computer or a television. The movie Blade Runner features a variety of video screen devices – from the Voight-Kampff Test apparatus to the various interfaces Deckard uses to control vehicles and inspect evidence – that are cold, utilitarian, and generally used as a means of encoding the world so as to order it. Blade Runner takes place in an austere future in which the boundary between humans and machines has long since eroded. The control interfaces of Blade Runner are the way in which specialized bounty hunters like Deckard hunt their robotic prey – ironically, by themselves seeing the world as a machine would.
But it’s important to remember that Bukatman’s definition of terminal also includes televisions. Another seminal 1980s cyberpunk science fiction film, Robocop, mostly emphasizes television sets. TVs are used for familiar purposes – watching broadcast shows and advertisements – as well as conveying recorded messages between characters. Robocop depicts a depraved and decadent future controlled by malicious corporations, corrupt politicians, and street gangs. Advertising is omnipresent in Robocop because everything is for sale. When Omni Consumer Products executive Bob Morton is murdered by a hitman sent by rival corporate operator Dick Jones, Jones taunts Morton in his last moments of life via a recorded message played on Morton’s high-end home entertainment system. More broadly, cyberpunk novels, comics, games, movies, and television shows often feature a “deck” that a character uses to connect to a virtual reality environment that combines the functions of Robocop’s televisions and Blade Runner’s control displays together.
It is intriguing that the terminal identity cultural model emerged at the same time that terminals, consoles, and command lines began to merge together. I do not think that is entirely accidental. Many people got into technology in the 1970s and 1980s when personal computers started to enter Western households. Critically, many of these PCs – like the Apple II – were marketed towards families rather than companies or hackers. The Apple II showing up in Terminator is, of course, a rather trivial example of that connection. Sometimes I have wondered whether terminals could become more widespread, or supported for something other than computer work and technical mischief. What if terminals were actually the basis for an entire broader application ecosystem? That sounds silly at first approximation. But give it some thought.
For one, users are far more comfortable with primarily text-based computing than you might think. Type text into a box, get an output. Social networks and apps like Twitter, Discord, WeChat, LINE, Facebook Messenger, and Slack are, with some exaggeration, extensions of IRC-protocol applications or proprietary systems like AOL Instant Messenger. Discord and Slack in particular allow users to control the state of the app with a large number of text-based commands. And while ease of use can vary widely, you can access many of these social networks and apps on the command line. Relatedly, Brandon John-Freso gave a talk in April 2019 in which he described how much of the work he did with the dating app OKCupid consisted of displaying data to users and then handling inputs – much like a “dumb terminal” of old. He argued this was also the case for a lot of other predominantly mobile apps.
Finally, it is not hard to find graphically rich terminals that are friendly to nontraditional input patterns. Yes, you heard that right: terminals with support for rich multimedia, and which treat keyboard operation as first among equals rather than the only or primary emphasis. Terminology supports inline, popup, or background multimedia content by using backend engines to display everything from pictures to videos. And Repl.it is trying to make a CLUI – a combined command-line/graphical user interface heavily built around mouse input (and perhaps later touch/mobile), command autocomplete, and rich/interactive graphical media. All of this is very exciting, and it perhaps augurs big changes in the terminal space as developers’ needs grow more complex and “small computing” remains a place for idiosyncratic innovation.
On that note, perhaps advances in newer input methods such as augmented/virtual reality, voice computing, or even brain-computer interfaces may give the terminal a new lease on life. One can also speculate that the overall decline of desktop-centric computing may revive mainframe-centric design paradigms, or at the very minimum make them more prominent in ways that benefit terminals and their relatives. But if this post comes from an underlying feeling of deep respect and love for the terminal, it is not necessarily bullish about the terminal’s prospects of ever becoming a competitive alternative to the graphical user interface (GUI).
Terminals are simple. The user launches a terminal – whether the system console or a terminal emulator – and is greeted with a command shell interpreter. A prompt awaits their input. Once they enter a series of characters, they rapidly receive a text response. Although terminal multiplexers and orthodox file managers can divide the terminal screen into multiple workspaces, there is little to distract the user or tax system resources. All user interaction with computers is to some degree a cybernetic loop of action, but on a terminal that loop is much tighter than it is with the graphical user interface. And it allows much, much more visibility into what the computer is doing than the graphical user interface does.
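That loop is so simple you can sketch its skeleton in a page. The following toy “shell” is a hedged illustration of the prompt-read-run-respond cycle, not a real shell – no quoting, pipes, globbing, or job control:

```c
/* The whole cybernetic loop in miniature: a toy shell that prompts,
   reads a line, runs it, and prints the result. A sketch only: no
   quoting, pipes, globbing, or job control. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    char line[1024];
    for (;;) {
        fputs("toy$ ", stdout);                  /* the prompt awaits */
        fflush(stdout);
        if (!fgets(line, sizeof line, stdin))
            break;                               /* EOF: Ctrl-D */
        line[strcspn(line, "\n")] = '\0';
        if (line[0] == '\0') continue;
        if (strcmp(line, "exit") == 0) break;

        char *argv[64];                          /* naive whitespace split */
        int argc = 0;
        for (char *tok = strtok(line, " \t");
             tok != NULL && argc < 63;
             tok = strtok(NULL, " \t"))
            argv[argc++] = tok;
        argv[argc] = NULL;

        pid_t pid = fork();
        if (pid == 0) {                          /* child: become the command */
            execvp(argv[0], argv);
            perror(argv[0]);
            _exit(127);
        }
        if (pid > 0)
            waitpid(pid, NULL, 0);               /* parent: wait, then re-prompt */
    }
    return 0;
}
```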
If you’ve read books like In The Beginning Was The Command Line, you will also be very aware of the critique of the GUI – that in seeking to make things easier for users, it hides complexity and enfeebles them. Users learn metaphors such as “file” or “document” that do not translate well to what is going on under the hood and then experience rude awakenings when the system misbehaves and they have no way to rectify it. Additionally, as the Repl.it people noted, graphical user interfaces get bloated very quickly whenever vendors or clients decide they need a new feature. These are all points in the terminal’s favor.
Still, terminals are considered mainly to be tools for power users. They are for inspecting the inner state of the computer and getting work done faster and more efficiently. Many people believe this – and it’s not entirely off-base. The basic division between the terminal and the graphical user interface is a proxy for the wider divide between power users and general-purpose users. Expert users rely on text-based navigation, and general-purpose users manipulate graphical windows with mouse clicks and drag/drop. The terminal will always be important for developers, administrators, and power users in general, but regular users generally do not want to come into contact with it.
This was not inevitable. An intriguing alternative that merged some of the better features of the terminal with graphical functionality is Plan9’s rio windowing system. rio – like several other windowing systems from the late 80s and early 90s – made significant use of the mouse in combination with the keyboard. And, more critically, Plan9 was designed to emphasize graphical computing in ways that its ancestors were not. Graphical programs launched from the rc shell will replace it within the rio-controlled window. You can check out the 9front fork of Plan9 if you want to see more, but it’s definitely one of many examples of alternative paths not taken.
John Ohno has written several interesting posts that sketch out how our current interfaces emerged. In the first, Ohno discusses the lack of innovation in user interfaces since 1978. In the second, he compares Jef Raskin and Steve Jobs’ respective visions for Macintosh personal computers. What emerges from Ohno’s discussion of the dawn of modern computing is the sad realization that the dichotomy of text-based power user vs graphical ordinary user was avoidable:
The Macintosh, as designed at the time, would use a light pen (rather than a mouse) for selection and manipulation of buttons (in other words, you’d use it like a stylus-based touch screen device), but the primary means of navigation would be something called “LEAP Keys,” wherein a modifier key would switch the behavior of typing from insertion to search… While in normal operation the unit would act like a dedicated word processor, it is in fact a general purpose computer, and is programmable. The normal way to program it is by writing code directly into your text document and highlighting it — upon which the language will be identified, it will be compiled, and the code will become a clickable button that when clicked is executed… A world based on Raskin’s Macintosh would be very different: a world optimized for fast text editing, where programs were distributed as source inside text documents and both documents and user interfaces were designed for quick keyword-search-based navigation.
As Ohno details, Raskin lost and Jobs ultimately prevailed – in part due to flaws in Raskin’s design choices and in part due to deficits in Raskin’s social and corporate skills. But that is not really the point. The point is that personal computers offered the possibility of something that bridged the gap between text-based power users and graphical regular users – and even made such a gap practically irrelevant. Programmability could have resided everywhere in the computer rather than being ghettoized to the terminal emulator or an integrated development environment. And terminal-like functions could in turn, as with rio, perhaps be seamlessly integrated with graphical programs and made manipulable by a variety of possible input methods. This is not to say that Raskin’s Macintosh, Plan9, or any other design concept of the era was the right way to do it, but that it was a path that was not taken.
It was not taken for various reasons – some sensible and others foolish. It came down to economic concerns that are also important to contextualize. But it was not taken, and that’s the inescapable reality. Absent dramatic advances in user interface design, the near future of computing is a continuation of its past. General-purpose computing will offer users more and more appealing fantasies and less and less actual legibility and control. This is not just true of GUIs; it is frankly far more true of things like voice-based interfaces and other novel interface patterns. Intimacy and interaction with the computer will be ghettoized to the world of power users and their r/unixporn-ready terminals. One does not need to be a FOSS fanatic to see the general problem here. Abraham Lincoln said that the Union could not endure as a “house divided,” but computers are certainly capable of doing so.
One need only look at the way in which Windows is incorporating its own terminal emulator via the new ConPTY pseudoterminal interface to get a sense of how durable the status quo may be. Windows is still the biggest example of the kind of closed system that makes terminals look futuristic. It can add pseudoterminals or even roll its own Linux implementations (or even its own Linux kernel). But the primary way users will interact with Windows is via Redmond’s closed graphical desktop system – a system infamous for its privacy problems and user-hostile “dark patterns,” such as making it difficult for users to set up offline accounts on fresh Windows installs.
What about mobile? I’ve mostly talked about desktop computing, not mobile phones and tablets. It’s understandable that you’d like to log into your desktop computer from your phone and execute shell commands, but why would you want a local shell on your phone? You can get one on Android, though the experience of using it will be different in both form (a phone is much smaller than a desktop!) and function (the command line environment is self-contained). But there are many things that are easier in a purely text-based environment, and in particular users may find they are more in control of the applications they install and less at the mercy of updates from developers than they would be with regular applications in the graphical Android interface. If you are willing to jailbreak your iPhone, you can enjoy great power over your system with the various terminal apps, but jailbreaking is an entirely different conversation.
This just reinforces the overall message of this post, however. Power users will operate terminals to emulate the old way of operating computers effectively while everyone else clicks, drags, and touches away. In order to do “powerful” things on the computer, one has to access it on the desktop the same way – or at least aesthetically the same way – one would access a mainframe. Through a terminal. Once the graphical user interface is launched, the terminal emulator window is the portal into the guts of the machine. And then the computer can be observed and controlled as it “really” is. But only under the presumption that a desktop application – the terminal emulator – is a virtual equivalent of a separate dumb terminal that one would use to access a remote computer. Consequently, the terminal identity aesthetic persists as a residual hangover in popular culture. It’s what creates the mystique of source code and terminal-esque interfaces in movies.
It isn’t attractive to people because they literally want to use terminals, of course. It is attractive because of what it symbolizes. It symbolizes, like the similar visual iconography of people entering some kind of VR-esque cyberworld when they use computers or computer networks, a level of connection and intimacy with the machine that modern computing denies to users. That intimacy is still tremendously attractive, and it is something that modern desktop computing has failed to give users. Terminals certainly grant it, but their reputation as specialist tools becomes self-fulfilling. If only developers, administrators, hackers, and hobbyists use terminals, terminal applications will only be geared towards developers, administrators, hackers, and hobbyists. This is, of course, in spite of the tremendous possibilities that modern terminal emulators theoretically offer.
The ghettoization of terminals will continue to make them look futuristic despite their age and primitiveness, perpetually projected into the future as elusive symbols of a communion with machines that mere mortals are not allowed to have. That future will never come, of course, and the reality that it will not come is implicit in the way in which such an old technology and interface format is always going to look advanced relative to the state of general-purpose computing. In the meantime, we can enjoy the terminal and all it offers us, because it is in fact accessible to mere mortals if they invest time and energy in learning about it. It isn’t the worst-case scenario, to be sure. Computers are enormously powerful things, whether you operate them primarily via a terminal emulator and console screen or the traditional graphical user interface of your desktop. And it’s better to have terminals – even if they are ghettoized – than not have them.
They’re very useful. Whether you need to install and manage software or reprogram a captured Terminator, you’ll probably get the hang of it eventually.