Should man-made algorithms replace human judgment in moderating the way we behave on social media platforms, and can artificial intelligence replace human intelligence in warfare?

When does it become the responsibility of manufacturers to make advances in technology safe for the consumer? These are just a few of the questions we’ve got a take on in this week’s Tech5…

GDPR panic in the House of Commons

Over the last month, you’ve probably been on the receiving end of a flurry of emails urging you to ‘opt in’ to keep receiving updates from your favourite online shopping havens. Unless you’ve been living under a sizeable rock, you’ll have heard ‘GDPR’ bandied about in your workplace.

Here at Intercity, we’ve got our very own GDPR guru in Naome Harrison, who’s been priming us all for its introduction on 25 May 2018. The same can’t be said for MPs, who felt that the advice they’d been given, to delete all casework from before the June 2017 general election, was ‘ludicrous’.

Being asked to delete all information about previous constituents and their concerns from before the last general election would make it very difficult for MPs to do their work. You’d think that those forming our Government and weighing the impact of the General Data Protection Regulation would be better informed.

The advice coming from the Information Commissioner’s Office is that it is:

“not looking [for] perfection”, but instead for a commitment to operate within the new framework

Are you GDPR ready?

29 million posts in the first three months of 2018 broke Facebook’s rules on hate speech, graphic violence, terrorism and sex

Computers give each of us the opportunity to assume an online identity. But some people use Facebook as a forum for hate, violence, and sexually explicit content. Facebook, with the help of artificial intelligence and 15,000 human moderators, has for the first time published figures on the number of posts breaching its policies on appropriate online content.

The algorithms are already effective against terrorism: Facebook detected 99.5% of the Islamic State-related propaganda it removed before a single user reported it, with only 0.5% flagged by the public. But there’s work to be done, with the algorithms identifying only 38% of hate speech; the rest of the posts acted upon were raised by other users reporting offensive content.

Around 3-4% of all Facebook accounts are actually fake. As Facebook ramps up its moderation efforts to make the platform more appealing, is that energy wasted when we might be migrating to other social media platforms that better satisfy our 8-second attention spans?

Keyless cars can turn into silent killers

Since 2006, 28 people are reported to have died and 45 to have suffered injuries because drivers failed to turn off their keyless cars. With keyless ignition fitted to over half of the 17 million new vehicles sold in the US every year, we probably need to pay more attention.

With keyless ignition, drivers press a button to start the car while the fob stays in a pocket or bag, which also lets them walk away from the vehicle with the engine still running. Cars left running in garages are the prime culprits, quietly building up levels of carbon monoxide that can turn lethal. Some car manufacturers have built in safety measures that shut the engine off if the fob is no longer inside the vehicle, along the lines of the sketch below.
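Purely as an illustration, here’s roughly how that kind of safeguard might work. This is a minimal sketch under our own assumptions (a fob-presence sensor, an engine-control interface and a 30-second grace period), not any manufacturer’s actual firmware; every name in it is a hypothetical stand-in.

```typescript
// Hypothetical stand-ins for the car's real sensor and engine interfaces,
// stubbed out so the sketch runs on its own.
let engineRunning = true;
const isFobInsideVehicle = (): boolean => Math.random() > 0.2; // stand-in sensor
const soundWarningChime = (): void => console.log('Chime: key fob not detected');
const shutOffEngine = (): void => {
  engineRunning = false;
  console.log('Engine shut off: fob absent beyond grace period');
};

const GRACE_PERIOD_MS = 30_000; // assumed 30-second warning window
let fobMissingSince: number | null = null;

function pollFob(now: number): void {
  if (!engineRunning) return;

  if (isFobInsideVehicle()) {
    fobMissingSince = null; // fob detected again, reset the countdown
  } else if (fobMissingSince === null) {
    fobMissingSince = now; // start the grace period and warn the driver
    soundWarningChime();
  } else if (now - fobMissingSince >= GRACE_PERIOD_MS) {
    shutOffEngine(); // don't leave the car idling unattended in a garage
  }
}

// Poll once a second, like a simple in-vehicle watchdog.
setInterval(() => pollFob(Date.now()), 1000);
```

The grace period matters: cutting the engine the instant the fob moves out of range would strand drivers at petrol stations, so a warning chime comes first.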

Every driver takes on responsibilities for themselves and others when they own a car, key or not…

Google break the web’s apps

Auto-play can be a minor annoyance when you want to quickly load a webpage without an auto-playing video serving as a loud interruption. Chrome’s latest update set out to silence it, but in the process Google inadvertently managed to break a series of web apps, games, and other interactive art, muting audio, stopping some alerts, and prompting complaints from developers.
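Much of the breakage reportedly centred on web audio: Chrome now starts a page’s AudioContext in a suspended state, so nothing plays until the context is resumed from within a user gesture. Here’s a minimal sketch of the kind of fix developers had to ship; the ‘play’ button id is our own invented example.

```typescript
// Chrome's auto-play policy starts the Web Audio API's AudioContext
// suspended; sound only flows after resume() runs inside a user gesture.
const audioCtx = new AudioContext();

function playTone(): void {
  // Play a short 440 Hz beep, purely as an example sound.
  const osc = audioCtx.createOscillator();
  osc.frequency.value = 440;
  osc.connect(audioCtx.destination);
  osc.start();
  osc.stop(audioCtx.currentTime + 0.2);
}

// 'play' is a hypothetical button id; any genuine user gesture will do.
document.getElementById('play')?.addEventListener('click', async () => {
  if (audioCtx.state === 'suspended') {
    await audioCtx.resume(); // the click counts as permission to make noise
  }
  playTone();
});
```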

To give developers a chance to update the code in their apps, Google has temporarily rolled back part of its auto-play policy. With Chrome commanding a 57.4% share of users across desktop and mobile, will its silencing of video and audio have the desired outcome if it alienates developers?

When AI empowers weaponry

We’ve all been primed for the rise of the machines, but Google are being pressured to abandon a project that currently sees them developing artificial intelligence technology for the US military. The controversial Pentagon programme, ‘Project Maven’, has been challenged by over 3,100 of Google’s employees, as well as outside experts who aren’t convinced by the conglomerate’s ethics.

The firm’s ‘Don’t Be Evil’ motto would seem to encourage them to steer clear of developing software that helps drones target objects on the ground without the need for human oversight. Ethically, should Google be in the business of war when it means developing algorithms that can target and kill from a distance?
