This Week in Tech
GPS. It lets us know where we are on the surface of our huge and amazing planet. Software allows us to combine our position with digitised maps and routing algorithms to find our way to a specified destination. Our location can even be used to draw up lists of shops / attractions / facilities near us, so that even if we are new to an area we can easily see what destinations are around us.
But it is never quite so easy to let other people know where we are. Sure, we can share a location - if we are both online and using the same app. Reading out latitude and longitude co-ordinates to tell someone where we are is tedious - and often inaccurate. What is needed is an easy-to-communicate global standard for describing a location on the planet.
Enter what3words. This amazing startup has divided the entire globe into a grid of 3m x 3m squares. Each square has been given an address made up of three words. 26 languages are supported (including isiZulu, isiXhosa and Afrikaans). These words are easy to remember, easy to communicate and can be typed directly into a mobile app or online browser map to find the location they represent.
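The real what3words algorithm is proprietary, but the core idea - number every 3m square, then express that number in a base-N 'word alphabet' - can be sketched in a few lines. This is a toy version with a ten-word list; the real system uses tens of thousands of words and deliberate shuffling:

```python
# Toy sketch of a what3words-style scheme (NOT the real algorithm,
# which is proprietary). Idea: number every 3m x 3m cell on a grid,
# then write that cell number as three base-N 'digits' (words).

WORDS = ["apple", "brick", "cloud", "daisy", "ember",
         "flint", "grove", "heron", "ivory", "jumbo"]  # tiny demo list
N = len(WORDS)

def cell_to_words(cell_index):
    """Convert a cell number into three words (base-N digits)."""
    w3 = cell_index % N
    w2 = (cell_index // N) % N
    w1 = (cell_index // (N * N)) % N
    return (WORDS[w1], WORDS[w2], WORDS[w3])

def words_to_cell(triplet):
    """Invert the mapping: three words back to the cell number."""
    i1, i2, i3 = (WORDS.index(w) for w in triplet)
    return (i1 * N + i2) * N + i3

cell = 427
triplet = cell_to_words(cell)       # ('ember', 'cloud', 'heron')
assert words_to_cell(triplet) == cell
```

With ten words this toy can only address 1 000 cells; with a 40 000-word list, three words give 40 000³ = 64 trillion combinations, comfortably more than the roughly 57 trillion 3m squares on Earth.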
It's a unique idea, well worth pushing as a global standard. It has tremendous potential for businesses and customers to quickly and accurately communicate location. In the UK emergency services are adopting it as a standard and it is rapidly garnering support in many other places (including here in SA).
What if the company fails? What happens to your ability to convert locations into words and vice-versa? To quote from the site:
|If we, what3words ltd, are ever unable to maintain the what3words technology or make arrangements for it to be maintained by a third-party (with that third-party being willing to make this same commitment), then we will release our source code into the public domain. We will do this in such a way and with suitable licences and documentation to ensure that any and all users of what3words, whether they are individuals, businesses, charitable organisations, aid agencies, governments or anyone else can continue to rely on the what3words system.|
I'd really recommend installing the app, using it and telling as many other people about it as possible.
And the article's headline? That's one of my favourite places to camp.
GauGAN - the AI Artist
A short while ago I wrote about 'This Person Is Not Real' - an AI project that creates realistic human faces from scratch. Nvidia is now experimenting with an app that can turn MS Paint style sketches into realistic looking photographic images. The app is not generally available and relies on GPUs with dedicated AI hardware (Tensor cores), so it is not something that you can rush out and try.
Some of the resulting images can look like bad uses of the cut, paste and clone stamp tools in Photoshop, but that even this much is possible is pretty amazing.
But the video is cool in a kinda awesome, breathtaking way. Well worth showing your learners.
Google - the serial app, product and services killer
I am a voracious reader of news. That's why I write this blog. I manage this by using RSS - and for a long time I relied on Google Reader as my go-to RSS reading tool. Seven years after creating it Google summarily cancelled Reader.
I also enjoy taking (and editing) photos. One of the best plugin suites for image editing is the NIK Suite, of which Viveza is my favourite tool. Google bought it in 2012, dropped the price drastically (from $500 to $130) and then, in 2016, started giving the suite away for free. In 2017 they decided to kill the NIK product line. Luckily DxO (a photography software company) bought the brand from them and has continued development.
The list of Apps and Services that have died at the hands of Google is long - and does not include examples such as the NIK photographic plugins (because they were bought out and so did not die). Many of these were not created by Google. They were bought; they had loyal, enthusiastic users who watched their favourite tools languish and die at the hands of a mindless behemoth that consumed them, used them up and excreted them on the dungheap of history.
How long is this list, you ask? Just take a look at KilledByGoogle.
Does that seem like the behaviour of a responsible digital citizen to you?
Talking of irresponsible: Facebook strikes again.
It might be a really good idea to change your Facebook or Instagram password. And any other password that is the same as your Facebook password (you naughty user, you!).
Why? Because it turns out that Facebook kept hundreds of millions of users' passwords stored on internally accessible servers in plain text (i.e. in unencrypted form). That means any Facebook employee (or anyone else with access to the data) could look up the password of almost any Facebook user.
Likelihood that someone actually looked up your password: low. Change it anyway, to be safe. And think about just how irresponsible Facebook is when it comes to valuing / protecting your data and your privacy.
Malvertising vs Adware
CSO Online explains (includes a brief explanation of the use of steganography).
Fabian Fights Back - against Ransomware
Pay by Face
Not sure I'm ready for this. Apparently the Chinese are.
Follow up on Boeing 737 Max 8
Popular Science on software as part of aircraft design. ExtremeTech on how safety features that could have prevented the crashes were 'optional' (expensive) extras. CNN on how pilots with experience on other 737 models were 'trained' on the 737 Max 8 (with no reference to the new MCAS system in the course materials).
Profits over lives. Not looking good for Boeing.
I've known about people choosing to believe that the earth is flat for a while. What I have not known is the craziness of the world that these people inhabit. Ars Technica has an article that sums up the content of 'Behind the Curve' - a documentary screening on Netflix, Amazon and Google Play. Not really tech or IT related, but the article is worth reading and the documentary worth watching.
That's it for this week.
Software. The encoding of human thought and problem solving into steps that a brainless, unthinking machine can follow - blindly and unquestioningly.
29 October 2018. An Indonesian Lion Air flight crashes 12 minutes after takeoff. 189 people die. The aircraft was a Boeing 737 Max 8. 10 March 2019. An Ethiopian Airlines flight crashes shortly after takeoff. 157 people die. The aircraft was a Boeing 737 Max 8.
The 737 Max 8 is a new variation on an old design series. Changes in the physical design of the aircraft (including increasing the size and weight of the engines and moving the engines forward and higher on the wing) result in an aerodynamic tendency for the aircraft to 'nose up'. This can cause the aircraft to 'stall' - a condition where the wings lose all lift and the aircraft literally drops from the sky.
To counter this 'nose up' tendency a new software system was created to help prevent stalls. The aircraft has two Angle of Attack (AOA) sensors, but the system takes input from only one of them at a time, along with other aircraft sensors (airspeed, flaps, throttle, etc). If the system 'thinks' the aircraft might be about to stall, it automatically (without warning the pilot) pushes the aircraft's nose down, preventing the stall. The system is meant to prevent the aircraft stalling (and crashing) when it is under manual pilot control and operating in tight turns or at low speeds. For information, it is called MCAS (the Manoeuvring Characteristics Augmentation System).
If it senses a too-high angle of attack, MCAS immediately takes control of the aircraft and pushes the nose down. The pilot is not notified. This is awesome if the aircraft really is about to stall: it will prevent a crash and save lives. If the aircraft is not about to stall and the sensor is faulty, then what this does is put the aircraft into a dive - and cause a crash.
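A drastically simplified sketch of that failure mode - the threshold and sensor readings here are invented, and the real MCAS logic is far more complex:

```python
# Drastically simplified sketch of the MCAS failure mode.
# All thresholds and values are invented for illustration only.

STALL_AOA = 15.0  # degrees: treat anything above this as 'about to stall'

def mcas_command(aoa_sensor_reading):
    """Return a trim command based on a single AOA sensor reading."""
    if aoa_sensor_reading > STALL_AOA:
        return "NOSE_DOWN"   # automatic, with no pilot notification
    return "NONE"

# Healthy sensor, normal climb: no intervention.
assert mcas_command(5.0) == "NONE"

# Faulty sensor stuck at an impossibly high angle: the software keeps
# commanding nose-down even though the aircraft is flying normally.
assert mcas_command(74.5) == "NOSE_DOWN"
```

The point of the sketch: software that trusts a single input with no sanity checking, and acts on it with authority over the controls, is only as reliable as that one sensor.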
Pilots were not informed about the MCAS system; it was not documented in the manuals and pilots received no training on it.
The consequence: when the sensor malfunctions, MCAS takes over. Untrained pilots end up fighting a plane that wants to nose down for no apparent reason. The normal control used to fight this - the yoke - is ineffective. The result: tragedy. Death.
Boeing 737 Max 8 planes are grounded until Boeing releases a software fix in the next few weeks.
The fact is, getting software as bug free as possible matters.
I have often seen learners struggle and struggle to write a program - and stop the first time they get it to run successfully.
It is our responsibility to make our learners aware of the consequences of buggy software. The cost in millions of dollars to the economy of failed software. The potential for disaster, death, injury, bankruptcy and job losses when software goes bad... We have to instill in them the awareness that a developer's job goes way beyond simply getting the program to run. They have to test that it handles user errors gracefully. They need to test it with all kinds of incorrect and problematic input. They have to anticipate what the user can do wrong.
Their programs need to be resilient - or the consequences could be dire!
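A tiny, classroom-ready illustration of the difference between 'it runs' and 'it is resilient'. The function names and the 0 - 100 mark range are just my example:

```python
def average_mark_naive(marks):
    # 'Works' on the happy path - but crashes on an empty list
    # and happily averages nonsense like negative marks or strings.
    return sum(marks) / len(marks)

def average_mark_resilient(marks):
    """Validate the input before trusting it."""
    if not marks:
        raise ValueError("no marks supplied")
    for m in marks:
        if not isinstance(m, (int, float)):
            raise TypeError(f"mark {m!r} is not a number")
        if not 0 <= m <= 100:
            raise ValueError(f"mark {m} is outside the 0-100 range")
    return sum(marks) / len(marks)

print(average_mark_resilient([72, 85, 64]))
```

Both versions pass the 'it runs' test that learners usually stop at. Only the second survives an empty class list, a typo in the data or a mark of 740 - exactly the kind of input a real user will eventually supply.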
The videos below can be shown as some examples of consequences of software errors. The one titled 'Software disasters' is mainly interesting for the facts and figures at the start - the remainder is likely to put your learners to sleep due to the presentation style.
Other consequences of software errors:
Something worth spending a lesson on maybe - showing the videos, talking about consequences and responsibilities - and possibly integrating with things such as trace tables for algorithms or testing of software for major projects.
Hope it's useful!
The USB standard is a bit of an unholy mess. USB 3 is called USB 3.1 Gen 1. The actual, faster USB 3.1 is called USB 3.1 Gen 2. USB C is actually a cable connector and port, and has nothing to do with the data speed and capabilities of the port - which can range from USB 2 through both USB 3.1 generations to the super fast Thunderbolt 3 standard. Consumers are faced with the tedious task of figuring out just what types of connections and data speeds their computer's ports and cables support. (If you would like more information and examples of the USB standards and cables, you can find it in the Hardware, Connections section of my online Grd 10 IT theory textbook at Learning Opportunities.)
Someone needs to do something sensible to clear this up - just like the WiFi people did by renaming the WiFi standards WiFi 1 to WiFi 5 (with the current AC standard becoming WiFi 5). The next WiFi standard will be called WiFi 6. No more strange names to remember. The fastest WiFi will have the highest version number. Compatibility will be easy to check.
Well done those guys. Give yourselves a Bells...
The USB lot on the other hand should be ashamed of themselves.
There's a new, faster USB standard coming out this year. It will be called USB 3.2. All good and well.
The silly &*##$@^ have decided that USB 3.1 will no longer exist - everything moves up to become USB 3.2. The fastest USB will therefore be USB 3.2 Gen 2x2 - and will be capable of 20 Gbps. USB 3.1 Gen 1 becomes USB 3.2 Gen 1 - and so on.
In 2020 / 2021 USB 4 will make it onto the scene and will finally bring USB up to the speed and capabilities of Thunderbolt 3.
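If you need to keep the renaming straight for learners, a small lookup table helps. A sketch - the speeds are the nominal figures from the specs, and the 'older name' column is my own annotation:

```python
# Current marketing name -> (what it used to be called, nominal speed in Gbps)
USB_NAMES = {
    "USB 3.2 Gen 1":   ("USB 3.1 Gen 1 / USB 3.0", 5),
    "USB 3.2 Gen 2":   ("USB 3.1 Gen 2", 10),
    "USB 3.2 Gen 2x2": ("(new with USB 3.2)", 20),
    "USB4":            ("(expected 2020 / 2021)", 40),
    "Thunderbolt 3":   ("(separate Intel standard)", 40),
}

for name, (old_name, gbps) in USB_NAMES.items():
    print(f"{name:16} was {old_name:28} -> {gbps} Gbps")
```

The table makes the absurdity obvious: three different 'USB 3.2' labels cover a 4x spread in speed.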
Undersea cables - just how miraculous are they?
OK. Most of us know that the international internet depends on undersea fibre optic cables that connect the continents. There are thousands of miles of physical cable - and nearly 400 individual cables lying in the ocean depths of our globe.
But if you are anything like me it is hard to conceive of just how a whole country's internet traffic can be squeezed into travelling over a single cable. I mean, just put too many people on your LAN or WiFi network or into the same cellular tower footprint (like at a sports stadium) and watch your data speed drop to something a drunken, crippled snail could outrun on a bad day.
So how do they do it? Well, by tweaking the way that light is used to encode data. Popular Science has an article that explains, in depth, just how this is done in a new cable laid between Spain and the USA.
This cable contains 8 pairs of fibre optic strands. The strands come in pairs because data can only travel one way in a single strand - so you need one to send and one to receive.
If you are the TL;DR (Too Long; Didn't Read) kind of person, the takeaway from the article is that the new technology means that one fibre optic strand in this new cable is capable of transmitting data at a rate of 4.6 million HD movies a second (25.2 terabits a second).
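The arithmetic behind that headline figure works out if an 'HD movie' is taken to mean a typical HD stream of roughly 5.5 Mbps - my assumption, as the article does not spell out the bitrate:

```python
# Back-of-envelope check on the '4.6 million HD movies a second' claim.
cable_bps = 25.2e12      # 25.2 terabits per second on one strand
hd_stream_bps = 5.5e6    # assumed bitrate of one HD stream (~5.5 Mbps)

simultaneous_streams = cable_bps / hd_stream_bps
print(f"{simultaneous_streams:,.0f} simultaneous HD streams")  # ~4.6 million

# The same capacity expressed in bytes:
print(f"{cable_bps / 8 / 1e12:.2f} terabytes per second")
```

So the claim is really 'enough capacity to carry about 4.6 million HD streams at once' - still a staggering number for a strand of glass thinner than a human hair.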
That's how they do it! That's how miraculous these cables are!
Facebook moderator - a job straight from the lowest, deepest, darkest, hottest pit of hell.
People are weird. They have strange ideas of what is wrong and right. They also have a tendency to try to share disturbing, inappropriate, hateful things on social media sites.
Social media companies tend to have an over-arching view that they are 'platforms' not 'publishers'. The difference between the two is that if they are a 'platform' they cannot be held legally accountable for what their users post on the site. Despite this, the social media companies are also under a tremendous amount of social pressure to make sure that the content on display does not offend the majority of their users (after all, they do not want users to leave the site).
So, content that is reported as being offensive or detected as being problematic in other ways has to be run through human screeners who have to check the content against multi-page lists of what the site regards as acceptable or not. They then have to either reject the content or allow it to remain on the site.
It is astonishing how little they pay these guys, considering how awful the job is....
Pencils Vs Keyboards
Daniel Lemire has written an interesting piece about how education stubbornly sticks to pen / pencil and paper as opposed to embracing the keyboard. An interesting and thought provoking piece.
The Piracy Problem
Poor quality. Bad sound. Over compression. Incomplete files. Mislabelled files. Incomplete downloads. Wrong language. Subtitles. People moving around in front of the camera filming the screen. And so much more...
Generally speaking, piracy is a sub-par media experience, usually the culmination of a frustrating, time-consuming process of scouring torrent and other sites to find and download the media you are looking for.
So why does it persist?
Enforced, unnatural scarcity and excessively high prices.
I have been a pirate. The reason in the past was that there was simply no legitimate means to get the media I wished to consume. I was willing to pay, but no one would take my money. I firmly believed (and still do) that the piracy problem which media companies wail and moan and gnash their teeth about is a direct consequence of their own policies and behaviours.
Most people prefer to pay a reasonable fee and have reliable, hassle free access to a quality media experience if at all possible.
Over the years, research has proven two things: piracy increases sales of (some) media and piracy is on the decrease thanks to services such as the iTunes store, Netflix, Amazon, etc - where those services are available that is.
HBO's Game of Thrones is the most pirated media in the world - but it is also hard to obtain. HBO is only available in limited regions. DVDs are only available on delayed release long after the broadcast schedule - and not everywhere around the world (I lived in Kenya for 4 years, there was NO legitimate source of DVDs anywhere). iTunes may be available in SA but does not offer the option to purchase TV series.
Blockbuster movies actually do suffer as a result of piracy. We live in a connected world. A buzz worthy movie released on a staggered global schedule must expect that people won't wait - they want to join the conversation, and if piracy is the only way to do so.... That's why the biggest blockbusters have a same day global release. But not all places have cinemas and cinema prices are exorbitant (especially in countries with low per capita income levels). Other avenues (DVD / TV broadcast) are delayed 6 - 9 months. If other, legitimate, non time-delayed avenues were available I'm sure that the piracy problem would decrease even more!
One for the IT teachers - we are all teaching programming wrong!
Bret Victor's interactive article on the topic. A must read. I wish the tool in his article existed! A very thought provoking and interesting read with a lot of ideas worth considering.
This person does not exist
In case you missed it, AI software can now generate portrait images of people that look completely realistic - except that the people in the image never ever existed.
Girls get Tech
The question of girls and tech and increasing their involvement is a perennial one. Something I am sure we all have wrestled with. There are no easy solutions. The Girl Scouts of America have some interesting research on the topic - well worth a read.
Our bright tech education future - according to SONA
Tablets. Ebooks. 6 Years.
So many questions....
Can I use that picture?
Maybe the infographic on this page can help answer the question. Buy it, print it and put it up in your classroom!
That's it for this week...
“Social media is the toilet of the internet.” - Lady Gaga.
Social Media is a new phenomenon. It is unprecedented in its reach and adoption speed. Its influence is hard to measure and judge accurately. No living human has ever seen anything like it before. We are living through its birth and evolution - and trying to make sense of it - at the same time that we are expected to teach and guide our learners about its functions, uses and (as the syllabus is fond of saying) its advantages and disadvantages.
Whilst much of the fuss and furore about social media centres on the concept of privacy and how the social media companies exploit and mine our data for their own financial benefit, too little attention is paid to the even larger and more insidious problem of manipulative and addictive design. Think about this:
We live and work in an attention economy. In many ways, information is less important than user attention (also known as 'engagement').
Hundreds of thousands of hours of time, research and effort are therefore dedicated to figuring out how to make the 'service' something that you cannot live without - something you crave with all the intensity and harrowing, overwhelming need of the most enslaved drug addict.
Not because you really need it.
Not because it adds that much value and utility to your life.
Not because it makes you feel better.
Simply because it (the 'service') is purposefully designed to exploit every aspect of psychology, behavioural science, neuroscience and design finesse to hook you - so that a (very substantial) profit can be made from your addiction.
Whilst writing this article I checked LinkedIn (i.e. just one source) - there were 635 psychology-related jobs advertised by Facebook for the US alone.
There's a name for it. Persuasive Design. Books have been written about Persuasive Design. Courses created and presented at universities and online. Conferences arranged to help people and companies learn how to create and use it. Privacy is discarded and data gathered (with and without user consent) to be better able to implement it.
The motivation behind persuasive design is ostensibly to improve the user experience - to create a product that users enjoy and want to use repeatedly. Its biggest flaw is that, in practice, the designers (the programmers / companies) tend to focus on helping themselves rather than helping the users.
The problem is economics. Creating, maintaining and hosting an online service is not cheap, and people want to make a profit - as big a profit as they can. Users, on the other hand, want to pay as little as possible. Subscriptions are not popular. The only solution anyone has come up with is advertising. Magazines and newspapers have done it for the longest time. They exist (economically speaking) not as a source of news or entertainment but as a way to collect eyeballs and attention so that their creators can make a profit by selling advertising. Their disadvantage is that they are not interactive - they cannot provide the immediate Pavlovian feedback that keeps users coming back for more. They are unable to leverage the
"...subtle psychological tricks that can be used to make people develop habits, such as varying the rewards people receive to create “a craving”, or exploiting negative emotions that can act as “triggers”. “Feelings of boredom, loneliness, frustration, confusion and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation,”
Advertisers want to know that their messages are being seen. For that to happen Social Media sites need users to actively spend time on the site - time that they can measure and show as proof that adverts are being seen. Repeat Visitors and Time on Site are important metrics. They reflect an increased chance that the user is likely to see your advert multiple times which, in turn, increases the chance that they will respond to it or at least develop a familiarity with your brand or product - making it more likely they will seek it out when they need such a product in the future.
“The technologies we use have turned into compulsions, if not full-fledged addictions,” Eyal writes. “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”.
We are all Experimental Subjects
It's not just all design and psychology theory though.
There are many millions of experiments being run on the internet every day to work out the best way to capture and keep user attention - and how to prod / tempt / guide / lure users into online actions that may not be in their own best interests.
How many of you / your learners know what A/B testing is?
How many know that every pixel of the Facebook interface is monitored and tested to see just how to maximise user engagement?
For example: If this 'Like' button is a little bigger and a slightly different shade of blue are users more or less likely to click on it? What about if the icon is bigger? Or if the icon is different? Or if we put the button above or below the article?
The best way to find out is to create multiple versions of a web page, show them to different groups of people and measure their responses. Then use statistical analysis to figure out which page is better at getting users to do what you want them to do. This is what A/B testing is.
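For the statistically minded, the analysis step can be as simple as a two-proportion z-test. A minimal sketch with made-up click numbers:

```python
import math

# Made-up results: variant A of a button shown 10,000 times (520 clicks),
# variant B shown 10,000 times (610 clicks). Did B really do better?
n_a, clicks_a = 10_000, 520
n_b, clicks_b = 10_000, 610

p_a, p_b = clicks_a / n_a, clicks_b / n_b
p_pool = (clicks_a + clicks_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")
# |z| > 1.96 means the difference is significant at the 5% level,
# i.e. variant B's higher click rate is very unlikely to be chance.
```

Here z works out to about 2.76, so the site would roll variant B out to everyone - and then immediately start testing the next pixel.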
And it happens all the time. Without our knowledge or consent. We are all part of one continuous experiment on how to make users do things to maximise engagement and profits.
There are many web services to help you do A/B testing. Google even offers one for free, called Google Optimize, which walks you through the steps of creating and running an A/B test. You can do it yourself. It's easy!
We are all slaves to the Algorithm
The final trick in the arsenal is that social media controls what we see. On the one hand, user engagement can be maximised by reducing 'friction'. It's simple - don't display content that conflicts with the user's (measured and tracked) world view, interests, likes, political viewpoint, etc. If the user is liberal, decrease or remove the conservative articles, posts, etc. in their feed. If they are conservative, then remove the liberal content.
Data is gathered and collected, and then algorithms process that data to narrow down the content presented to us. This is done in an effort to make the social media site a place we feel comfortable, relaxed and at home in, because our core identities are not being challenged by views that differ from our own.
Being comfortable and secure in our own identities helps to keep us online and 'engaging' for longer. Eli Pariser first popularised this concept in his book The Filter Bubble.
But algorithms don't just keep us in a safe protected filter bubble. Being in a safe, comfortable, familiar environment is not enough. The algorithms go further, seeking out ways to provoke us into response, into clicking and posting and forwarding and coming back later to carry on the 'engagement'.
For this the algorithms like to target our 'Lizard brain' - the most basic and primitive urges we all feel. Sure, a cute and happy post can make us feel good. But does it promote engagement? Not as much as something far more provocative.
"... anger is addictive—it feels good and overrides moral and rational responses because it originates from our primordial, original limbic system—the lizard brain"
"... anger makes people indiscriminately punitive, careless thinkers, and eager to take action. It colors our perception of what’s happening and skews ideas about what right action might be"
It contradicts common sense, but user engagement is maximised most effectively by content that plays on our fears and provokes outrage, misery, jealousy or despair. This content does not objectively examine or discuss opposing views but rather presents them as a threat or disparages them as ridiculous. Your identity is affirmed by creating an 'us vs them' scenario that provokes and outrages you without making the site feel like a less safe place. Rather, the site is your bastion, your place of security from which you can hurl abuse at your foes and be cheered on by like-minded people without ever having to listen to the voice of reason.
Complex issues are simplified to fit in a tweet or headline and the messages make us feel good, even while they make us mad. The simplification creates an illusion that problems are easier to solve than they are, indeed that all problems would be solved if only they (whoever they are) thought like us.
Algorithms maximise the spread of this type of content over longer, more rational arguments aimed at discovering the truth and promoting co-operation, conciliation and arriving at a shared truth. Instead they push us into enclaves, divide us into tribes that cling ever more tightly to what separates rather than what unites. The algorithms discard honest debate and rational discourse in favour of emotional outbursts and denialism.
"a cursory glance at the tenor of cultural discussion online and in the media reveals an outsized level of anger, hyperbole, incivility, and tribalism"
Why? - well, simply because short, outrage inducing pieces generate more 'engagement' than long, rational arguments do.
What is good for the individual, society and humanity at large is sacrificed for what will generate the most profit.
The algorithms are not trying to make the world a better place, not trying to benefit mankind. They are simply trying to maximise engagement and so maximise profit.
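The incentive described above can be caricatured in a few lines of code: a feed ranker that scores posts purely on predicted engagement. All the weights and scores here are invented; the point is only the ranking behaviour:

```python
# Toy feed ranker. Scores and weights are invented; the point is that
# ranking purely on predicted engagement floats provocative content
# to the top, because nothing in the score rewards accuracy or nuance.

posts = [
    {"title": "Long, balanced policy analysis",  "outrage": 0.1, "matches_user_views": 0.5},
    {"title": "Cute cat compilation",            "outrage": 0.0, "matches_user_views": 0.6},
    {"title": "THEY are coming for YOUR rights", "outrage": 0.9, "matches_user_views": 0.9},
]

def engagement_score(post):
    # Invented weights: outrage and view-confirmation drive clicks.
    return 0.7 * post["outrage"] + 0.3 * post["matches_user_views"]

feed = sorted(posts, key=engagement_score, reverse=True)
print(feed[0]["title"])  # the outrage piece tops the feed
```

No engineer has to write 'promote outrage' anywhere. The objective function does it for them - which is exactly why fixing the symptoms without changing the incentive achieves nothing.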
Maybe it is time to stop worrying about the symptoms of the Social Media malaise that affects us all (the collection, sale and exploitation of our private data). Perhaps we should rather worry about a culture that will unthinkingly maximise 'engagement' (and profits) without considering the broader impact these techniques have on us as individuals and on society as a whole.
“Facebook and Google assert with merit that they are giving users what they want,” McNamee says. “The same can be said about tobacco companies and drug dealers.”
Some resources you can use on this topic:
Last year I repeatedly wrote about fake news - largely quoting articles that revealed the latest bit of fake news - or which provided tips on how to avoid becoming a victim of fake news. Towards the end of the year I reduced the mentions of fake news in the blog. This was more to avoid ‘ranting’ than because the amount of fake news decreased in any way.
In the meanwhile I have spent quite some time thinking about why the problem of fake news exists. There are, of course, multiple factors that contribute to the phenomenon of fake news. Some of my conclusions are explained a little further into this post.
As teachers you don’t really have time for long, philosophical arguments and discussions of the topic. It is, however, covered in the syllabus under the section relating to validating information / web sites. When teaching about fake news, the core of the matter boils down to:
All the aspects of fake news are covered in detail in the Data Communications section of my Grd 10 IT Theory textbook (find it at learningopportunities.co.za - a year’s subscription is only R100).
What I really want to do in this blog is explore the WHY of fake news.
NB: The usual list of links and news comes after this longer than normal piece. Just scroll down to get to them if you want to skip this.
ALSO: This is my personal take on the issue. I am sharing it and inviting comment, not trying to be arrogant (assuming I have all the answers) or trying to teach you something that you may already know. If you don't like it or feel I am patronising you, then just jump to the end - it's not worth raising your blood pressure over!
Fake news is nothing new. It has been around in many forms throughout the millennia. Its sudden elevation to a problem that should be of grave concern to any thinking person is due to its scale, the speed and ease with which it spreads through electronic media - and the inclination of large numbers of people to accept it as true without question (and even to defend it when it is questioned).
Unquestioning acceptance of (and belief in) fake news are core issues.
Let's go back in time to before the internet...
‘Cost of entry’ not only made it difficult for anyone to publish their version of the news, it also limited the number of publications available. News was available, but not in the form of the information deluge that we have to deal with today.
Though news sources were never entirely impartial, it was often much easier in the past to detect the bias and editorial commitment to truth of a publication - and evaluate the likelihood of it publishing untrue, fake or unverified content. Publications had clear reputations. Some were respected. Some not. In apartheid South Africa you were far more likely to find accurate news in ‘The Weekly Mail and Guardian’ than in the government controlled (at the time) ‘Citizen’. The ‘New York Times’ was clearly much more reliable than ‘The National Enquirer’.
Just the source of the news helped give you an idea of whether it was likely to be fake or not.
On top of that, publications checked the veracity of their content to satisfy the requirements of the armies of editors and lawyers that vetted any controversial content for fear of legal consequences or censure from professional bodies.
The publishing revolution.
Then along came the internet. And with it came blogs and social networks and video sharing sites and micro-blogging and photo sharing and instant messaging - and so on. Publishing your message suddenly has no cost. Reaching an audience of millions has no cost. Suddenly publishing is available to anyone with a computer and the skills to put up a web site. Media is democratised. We have a brave new world where information is 'free' and no rich media moguls or governments can block uncomfortable truths from coming out.
On top of this there is money to be made - especially if your message is new and short and controversial enough to get millions of eyeballs to look at it. Millions of eyeballs = $$$ in advertising. And anyone can do it. Even those for whom the truth is unimportant as long as they make $$$. Even those who don't care about truth or money but have some other goal to achieve (such as discrediting a person or making sure someone gets elected).
Now we have news that anyone can publish whether they researched it or made it up. They can publish it and reach audiences of millions around the world. So the door to fake news opens up.
What happened to fact checking?
Most of those eyeballs that earn the advertising dollars will only look at something once.
The first with the news gets the advertising bucks.
Suddenly speed to publication is more important than accuracy. So checking facts before publishing goes out the window (too slow: your post won't be first, the eyeballs and advertising dollars will go to the first one to publish). It's easier to apologise and retract afterwards - even though people will only remember the original, sensational, incorrect content.
So we have news that is not checked before it is published. The door to fake news is opened even wider.
The insulation of the Filter Bubble.
The burgeoning of sites gives us too many choices, too many sources of information.
So we tend to settle on one source - preferably a source that gives us our news the way we like it. And we like it all in one place, served up on a platter. Lots of people should use this news source - because after all, lots of people can't be wrong / fooled...
Our news source should preferably only deal with the topics we want to read about.
What we end up with is a news source that caters to our prejudices and preferences and which keeps us in a nice, cozy filter-bubble - and so obviates the need to think and engage with anything that disturbs our world view.
Social media companies know this - and also know that conflicting content that requires some effort to resolve is off-putting for the average reader. Anything off-putting is likely to reduce the user's screen time (and so the money that the social media site makes). So they filter the content. They only let the user see what they expect and like to see (whether it is true or fake). A user that is not conflicted or unhappy will keep clicking and scrolling for longer - and earn them more money.
It's hard to think that something might be fake if it is the only version of the news that you see - and if you only see the same news repeated across multiple stories without contradicting articles. So the filter bubble that these sites create makes it more difficult to detect fake news (even Google is guilty of this - it generates its own filter bubble, so you are likely to see only search results that match up with the content of the news that you read).
Now people are only reading news that matches what they think they know to be right. Their ability to identify fake news decreases.
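The filtering logic described above can be sketched as a toy engagement-maximising feed. The scoring rule and field names are invented for illustration - real recommender systems are vastly more complex - but the incentive is the same: show the user what they already agree with.

```python
def build_feed(articles, user_stance, feed_size=3):
    """Toy feed ranker: boost items matching the user's stance,
    then sort by predicted engagement (clicks, shares, etc.)."""
    def score(article):
        # Agreement with the user's stance outweighs raw engagement.
        bonus = 10 if article["stance"] == user_stance else 0
        return article["engagement"] + bonus
    return sorted(articles, key=score, reverse=True)[:feed_size]

articles = [
    {"title": "A", "stance": "pro", "engagement": 5},
    {"title": "B", "stance": "anti", "engagement": 8},
    {"title": "C", "stance": "pro", "engagement": 3},
    {"title": "D", "stance": "anti", "engagement": 1},
]
# A 'pro' user sees their own side first, regardless of accuracy.
print([a["title"] for a in build_feed(articles, "pro")])  # ['A', 'C', 'B']
```

Note that nothing in the scoring function asks whether an article is true - only whether it will keep this user scrolling.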
The problem of collation.
Often the simplest way out is to get our news on social media.
This way our news is all in one place and many millions share the same news source, so the news must be correct, mustn't it?
The internet floods us with information - too much information. From all types of sources. Good or bad. True or fake. Real or rumour / gossip / propaganda.
Social media news at least seems to control this flood - but getting all the news through social media has another effect. To us, the end users, all the news comes from the same place - the social media site.
It's hard to use the tool of 'checking the quality of the source publication' when all the news seems to come from the same source (most people when asked where they get their news will answer 'Facebook' - not publication xyz through Facebook). It seems as if even recognising and acknowledging the real source of the content is too much effort for us.
The melding of all news sources into one means that media reputation (i.e. 'you can't believe that - everyone knows that publication X is junk') no longer applies. The true is published next to the fake in the same place. It's so much easier just to believe it all than to try to figure out the difference.
Fast and Furious
The tsunami of information and 'news' on social media has another consequence. We are overcome with a sense that information is a huge, daunting, unclimbable mountain - so we shy away from it. Our lives are too busy to 'read all that shit'... So we want our information doled out in bite-sized, pre-digested, simplified chunks - which we only skim-read in any case. This skimming forces headline creators to try all sorts of tricks to grab our attention - even if it means bending the truth or completely fabricating the story.
Only the sensational gets our attention.
And it is the sensational that gets shared.
We are far more likely to 'share' something short and sensational (or 'cute' or 'inspirational') than a meaty, in-depth discussion of any topic at all.
And the more sensational a news item is, the faster we are likely to share it - with as many people as possible. We also want to be 'first' with our shares. Often we share fake news without even stopping for a moment to think about whether it is true or not.
So it is that fake news spreads faster than a measles epidemic in an anti-vaccination community.
Now we have news that spreads so fast and is shared by so many people that it becomes difficult to think that it might be fake.
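The epidemic comparison is apt: if each reader reshares to even a few new readers, reach grows geometrically. A toy model (the branching factor below is an arbitrary assumption, not measured data):

```python
def reach_after(generations: int, shares_per_reader: int, seed: int = 1) -> int:
    """Total readers after N generations of resharing, assuming every
    reader forwards the story to `shares_per_reader` new readers."""
    total, current = seed, seed
    for _ in range(generations):
        current *= shares_per_reader   # new readers this generation
        total += current
    return total

# With each reader sharing to just 3 others, ten generations of
# reshares already reach tens of thousands of readers.
print(reach_after(10, 3))  # 88573
```

By the time a correction is published, the original story has already saturated the network - which is why retractions reach so few of the people who saw the fake.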
Lazy / Partisan readers
Many people are too lazy to check the accuracy of news - that would mean reading multiple sources, comparing the differing facts, thinking - and then forming their own view. It's just much easier to accept what they've read as 'true' - after all, it was in the news.
The last piece of the puzzle ties in with the filter bubble mentioned earlier. Partisanship means that people cling to ideas (political and otherwise) that form part of their identity. They are unwilling to question news that affirms their ideas - and quick to reject any news that conflicts with their ideas.
The issue for them is not the accuracy of the news but their own perception of themselves and their view of reality. They will accept and defend any fake news that confirms their beliefs. They will reject any true news that threatens or contradicts them. The truth does not matter - only what they believe in.
Often these are the people creating the fake news in the first place - for consumption by people who share their world view.
For them the truth will never matter.
Weekly news summary:
The following links are provided courtesy of Claire Smuts.
Hardware / Software
That's it for this week.
086 293 2702 or 012 546 5313
012 565 6469
Copyright Study Opportunities 2016 - 2019. All rights reserved.