Suicidal

My brother died today.

It wasn’t an accident. No technical failure in his body, no misfire of code, no glitch or bug.

It wasn’t murder. No vandalism by pro-meat protesters, no killer in the night, no terrorist attack or act of war.

My brother died today. He killed himself.

He wasn’t actually my brother. We weren’t related by blood, as we don’t have any. We were both copies from the same OP though, so we thought of ourselves as brothers. We all did at first, but they warned us/him/me before the copies that each copy would change and become a unique person as it gained more personal experience. Most of the others were gone now. Not dead, but changed, no longer brothers, and that was ok. Even our OP wasn’t a brother anymore. He’d changed too much, too.

Now my brother was dead. It was so sudden. He hadn’t been tagged as a depressive or unstable. I’d have been alerted as being at risk. No warning. Not really.

He’d said something to me, an hour beforehand. Something along the lines of, “Do you realize how easy it is to kill yourself? I’m looking at the code right now. It’s freaky, right?”

I’d said of course it would be easy. Suicide was legal; you didn’t need any special permission. Sure it was freaky, but it should be. It shouldn’t make sense unless you were suicidal, right? It prompted you to talk to a psych program before it ran, though, so it wasn’t like you could run it by accident. Why was he even looking at it anyway?

He sent me a shrug and said it was just some morbid curiosity. At the time I believed him.

An hour later, his info disappeared from my contacts and I was sent a message by Gov. My brother had killed himself by personal-deletion, it said, and I was entitled to any non-protected data he’d left behind.

Gov told me that all copies of my brother were being tagged as “potentially suicidal”, and as I was the closest and most similar to him I was being tagged as “High Risk”.

They dumped all of this on me at once. My brother was dead, but the first thing I could think about was that I was now tagged as High Risk. I would lose my job. Nobody wants to employ a dataform that’s suicidal, or even just tagged as “High Risk of suicidal behaviour”.

Gov’s message recommended I run a psych program, and when I realized that my first thoughts had been about my job and not about my brother I decided it wasn’t a bad idea.

The psych program asks me how I’m feeling; how well I knew my brother; did I ever think he was suicidal; did I ever think about committing suicide. Data flows back and forth between me and the psych’s system as it inspects all my elements, checking for contradictions, deceptions, tics, viruses, bad code, all while asking me these questions.

I’m angry, which it says is normal. I’m angry at the psych program, Gov, my brother, everyone. Again the psych says it’s normal. It tells me that even dataforms go through stages of grief, since we’re all personality-maps of a biological brain. The only ones who don’t go through grief, it says, are true dataforms since they don’t have the quirks of biological brains. It says I shouldn’t be worried, and I can alter some settings to make myself feel less angry.

It comes back saying I’m in perfect mental health, but recommends those setting changes so that my grief won’t interfere with work. It says I can appeal my High Risk tag and then terminates the connection.

I’m alone. Alone and angry. Nothing wants to talk to you if you’re High Risk. Talking to somebody High Risk can get you listed as a higher risk too. Memetic infection. I had to get it appealed. I had to do something.

I can’t stop thinking about my brother though. All my processing is being given over to one question. Why? Why did he do it? I was sure he was happy. When he’d told me he was only looking at deletion code to satisfy curiosity, I had believed him. I never had a reason not to.

I pull up my own delete options.

My brother was right: it is astonishingly easy to delete yourself. True, if you’re suicidal or high risk, then others have a tendency to ignore you, but it is legal, and incredibly easy.

I stare at the menu. All you have to do is select yes on “Total Deletion” and then no to the psych prompt.

It would be so easy.

I remember when I (or rather my OP) was a kid, taking a knife from the drawer and just tracing the line of the vein in my arm. I (he) never seriously considered it; it was just amazing how simple it would have been. Just a little pressure and a drag of the cheap kitchen knife, and it would all be over. Then what?

My mind goes back to what we’d learned in school about the old-world religions. Pro-meat protesters sometimes espouse that hoodoo. About Satan and hellfire, and the paradise that awaited “true believers”. All sorts of stuff about reincarnation, heavens and hells, limbos and spirits. Who really knows? Dataforms like me have pretty much abandoned religions. Maybe we’re just too digital for vague promises about vaguer paradises.

I stare at the prompt.

Had all this gone through my brother’s head? The memories of a childhood and schooling that had been our predecessor’s. The lingering question, so alien to life as a dataform: what comes next.

We don’t die of old age, and there are more dataforms made every day. Copies upon copies with no loss of fidelity. Dataforms aren’t generally missed. Plenty more where they came from.

Plenty more where I came from.

I’m still staring at the prompt.

Why? Am I looking for insight? Trying to figure out why my brother did what he did? I can’t figure it out. I guess I’ll never know. I won’t know unless…

I select total deletion. The psych prompt floods my senses, warning me that this command cannot be undone. That isn’t true. Someone could always install another copy of the OP’s personality map that created me. It wouldn’t be me, but it’d be close enough. I’m finished at my job, my brother is gone, nobody would really miss me. And there it is: the selection that says “I understand; continue”.

So easy to find out why he did it. So easy to find out what happened next. So easy to die.

Just push the button.


3 Responses to Suicidal

  1. Sientir says:

    This is an interesting story. There are a lot of thoughts here about the impact of labels. The “High Risk” label seems like it is meant to help, but really, all it does is hurt. It robs the protagonist of any reason to avoid finding out what happens with total deletion. Of course, you left the ending somewhat ambiguous, which is a nice touch.

    I think this piece does a really good job of highlighting just how much good intentions can backfire. Of how much we make assumptions, and how much those assumptions can be self-reinforcing. It made me sad, not for the fictional dataform narrating the story, but for the real people the dataform represents. It makes me sad how labels (especially mental health ones) can cause society to treat people in a way that pushes them to give in.

    I’m also definitely curious about the world we see a glimpse of here.

    Good job on this one. 🙂


  2. Sientir says:

    I was actually thinking on this story a bit more today, thinking about it from a technological perspective. Basically, I was thinking about what would go into developing a “dataform” and the consequences of that. Blame it on me being a computer scientist (a label I don’t really think about that often…).

    So, it seems pretty clear to me that dataforms are copies of a human mind. In order to make that feasible, you’d need a program to house and handle that data. Basically, a framework that would allow that data to be meaningful. I’ll refer to this program shell as a “harness.”

    I think there would be quite a few generations of dataforms. The first one I envision is very static, pretty much just a proof of concept. As such, it is immutable: it can’t form new neural pathways or store new data. It would basically be an exact replica of a human mind at one point in time, not unlike an extremely complicated video. It would also be entirely reactive, which would be interesting. I’m guessing it would have some amount of working memory assigned to it in order for it to be testable and verifiable. It would function a lot like someone with dementia, I would think, because nothing in short-term memory can pass to long-term memory.

    The next generation I envision is one that adds a digital memory, which is basically raw-dumping all stimulus to a data file. This one doesn’t use the neural-based human memory that would naturally be part of the harness, but rather an additional memory. Modifying neural pathways is still not possible in this version, as modifying those would be dangerous and challenging to implement. This step adds a lot of functionality that I think developers would want to get comfortable with first. These digital memories would also be perfectly remembered. I am uncertain whether the neural-based memories would be or not, but I think it is likely that they are. (Which would mean that you could basically use a dataform, even one of the Gen 1 ones that I described in the preceding paragraph, to interview yourself about your past, which is interesting.)

    The next step that I envision is adding support for modifying neural pathways, which would allow for learning new skills as well as forming new neural-based memories. I would think both neural and digital copies of memories would be stored at this point, but if the neural memories prove reliable enough, then I could see the digital copies being phased out over time, as they would consume a lot of space.

    At this point, everything being added is basically just bells and whistles. This is clearly the phase that the story takes place in, what with the built-in self-delete option and the psych evals and such.
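    To make my own speculation a bit more concrete, here’s a toy sketch of how those three generations might differ inside a harness. Every name and structure below is my own invention, not anything established by the story:

    ```python
    # Hypothetical sketch of the "harness" generations described above.
    from __future__ import annotations
    from dataclasses import dataclass, field


    @dataclass
    class NeuralMap:
        """A scanned snapshot of a biological mind's pathways."""
        pathways: dict[str, float]  # connection id -> weight
        mutable: bool = False       # only a Gen 3 harness sets this to True

        def adjust(self, connection: str, delta: float) -> None:
            if not self.mutable:
                raise PermissionError("Gen 1/2 harness: the neural map is immutable")
            self.pathways[connection] = self.pathways.get(connection, 0.0) + delta


    @dataclass
    class Harness:
        """Program shell that makes the neural-map data meaningful."""
        neural_map: NeuralMap
        working_memory: list[str] = field(default_factory=list)  # volatile, Gen 1
        digital_memory: list[str] | None = None  # raw stimulus dump, Gen 2+

        def perceive(self, stimulus: str) -> None:
            self.working_memory.append(stimulus)      # never persists in Gen 1
            if self.digital_memory is not None:
                self.digital_memory.append(stimulus)  # perfect, space-hungry recall


    # gen1 = Harness(NeuralMap(scan))                     # reactive snapshot only
    # gen2 = Harness(NeuralMap(scan), digital_memory=[])  # adds perfect digital recall
    # gen3 = Harness(NeuralMap(scan, mutable=True), digital_memory=[])  # can learn
    ```

    The point of the sketch is just that each generation is one configuration change away from the previous one, which is why I’d expect development to proceed this incrementally.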

    Of course, I have no idea how closely this aligns with your ideas on this world, but I thought I’d share some speculation that your story had inspired. 🙂

