
Australia’s privacy watchdog has launched an investigation into two retail giants over their use of facial recognition technology.

Hardware firm Bunnings and department store Kmart collect customers’ “faceprints” in some locations.

Consumer advocacy group Choice says the technology is unethical and invasive, and is being used without proper consent or justification.

Both retailers defended its use as an anti-theft and safety measure.

The Australian Information Commissioner said her office had opened an investigation to determine whether they had breached privacy laws.

Australian retailers can only collect sensitive biometric information if “reasonably necessary” for their operations and they have “clear consent”, Angelene Falk said.

“While deterring theft and creating a safe environment are important goals, using high privacy impact technologies in stores carries significant privacy risks,” Commissioner Falk said last month, after the use of the technology was first revealed.

“Retailers need to be able to demonstrate that it is a proportionate response.”

Last year, she found convenience store chain 7-Eleven had interfered with customers’ privacy by collecting faceprints in a similar case.

The watchdog said it was also “conducting inquiries” about another retail company, The Good Guys, which has paused its use of facial recognition technology.

The Australian Human Rights Commission has called for a ban on the technology until Australia has specific laws to regulate its use. The call followed police in Western Australia using the technology for Covid isolation checks.


Choice said Bunnings and Kmart were only disclosing their use of the technology in small “conditions of entry” notices at the front of stores, and in privacy policies online.

The consumer group surveyed more than 1,000 households and found more than 75% had no idea the technology was in use.

“Using facial recognition technology in this way is similar to Kmart, Bunnings or The Good Guys collecting your fingerprints or DNA every time you shop,” said Choice’s Kate Bower.

Bunnings said its use of the technology had been inaccurately characterised and there were strict controls around its use.

The data collected is not used for marketing purposes, it says, and the only images retained are of people banned from stores or those suspected of illegal or threatening conduct.

“In recent years, we’ve seen an increase in the number of challenging interactions our team have had to handle in our stores and this technology is an important tool in helping us to prevent repeat abuse and threatening behaviour towards our team and customers,” said chief operating officer Simon McDowell.

A spokesperson for Kmart also said the technology was on “trial” to prevent theft and was subject to strict controls.

www.bbc.co.uk/news/world-australia-62145154


Tactical Releases Aussie-Made TPS13-3.5DC Switchmode Power Supply

Tactical Power Products has released the Australian-made TPS13-3.5DC 13.5Vdc 3.5A switchmode power supply, which is specifically suited to access control and alarm installations.

Featuring heavy-duty aluminium construction, which offers excellent heatsinking, the power supply includes:

  • 13.5Vdc 3.5A continuous output
  • Battery charger delivering 13.5Vdc at 300mA
  • AC present signal via green LED
  • AC fail signal via SPDT relay
  • Low battery signal via SPDT relay and red LED indication
  • Reverse battery indication via red LED
  • Fuse failure indication via red LED
  • Current limit and short circuit protection
  • Operating temperature of 0–40°C

This power supply has been tested by Austest Laboratories, a leading Australian NATA-accredited laboratory, and fully complies with AS/NZS 60950.1:2015 (electrical safety) and AS/NZS CISPR 32:2015 (EMC). A certificate of compliance has been issued by SAA Approvals: SAA 203368-EA.

Browse our full range of Tactical power supplies at great prices.


How Amazon’s Ring, the privacy-busting doorbell surveillance tool, is extending its influence with police across the US

The number of police forces joining Ring’s partnership program across America more than doubled in 2020, despite there being little evidence that it is an effective crime-fighting tool. The rise is sparking major privacy concerns.

Newly released figures indicate that nearly 2,000 police departments across the US are partnered with Amazon’s Ring, in the process expanding the reach of the highly controversial civilian surveillance network yet further.

Ring, bought by the e-commerce giant in February 2018 for a fee that could be as much as US$1.8 billion, is best-known for producing a range of ‘smart’ doorbells, which house high-definition cameras, motion sensors, microphones, and speakers.

Not long after its purchase, the partnership program was launched – under its auspices, Ring offers authorities access to video footage recorded by the millions of internet-connected devices its customers have mounted to their homes.

In turn, citizens alerted to suspicious or outright criminal activity outside their residences by Ring’s motion sensors can submit reports directly to law enforcement via an accompanying app, ‘Neighbors’.

The figures show a staggering 1,189 departments nationwide joined the program in 2020 – the total now stands at 2,014, including 305 fire departments.

Only two US states – Montana and Wyoming – aren’t home to forces enrolled in the program, which saw partnered departments collectively request videos related to over 22,335 incidents during 2020 alone.

Ring hails the initiative as a ‘Neighborhood Watch’ revolution, which makes areas safer by helping deter and solve crimes. Its website features numerous clips of apparent criminals caught in the act on doorsteps, and a May 2019 case in which Ring footage played a pivotal role in the identification and capture of an individual who abducted an eight-year-old was well-publicised.

Conversely, several slick promotional videos in which various police departments touted Ring’s crime-fighting capabilities have since been removed from the web.

In one such segment, which focused on the company’s partnership with police in Winter Park, Florida, the local department’s chief spoke effusively about the “value” of Ring cameras “in helping us solve crimes” – as police officers “cannot be everywhere,” the force was said to “rely” on citizens using the ‘Neighbors’ app to report incidents.

The reason for the video’s deletion is unclear, although it may be related to a February 2020 NBC News investigation, which found Winter Park authorities had in fact not made a single arrest facilitated by footage obtained from Ring cameras since it partnered with the company in April 2018.

The story was much the same elsewhere. NBC interviewed 40 law enforcement agencies in eight states that had partnered with Ring for at least three months – three said the ease with which citizens can share Ring videos meant officers wasted time reviewing clips of issues such as raccoons running around and petty disagreements between neighbors, while others noted the deluge of footage rarely led to positive identifications of suspects, let alone arrests.

A total of 13 agencies had made zero arrests as a result of Ring footage, and several others, including those in large US cities, simply didn’t know how many arrests had been made as a result of their Ring partnership, even though they’d been partnered with the company for over a year.

Concerns over the program’s law-keeping efficacy are nonetheless somewhat secondary to its grave privacy implications.

Digital rights group Electronic Frontier Foundation (EFF) has long been a fervent critic of the system, dubbing it “a perfect storm of privacy threats” and contending that Ring and comparable ‘home security’ providers serve to greatly inflate paranoia about crime, transforming every innocent delivery person, charity fundraiser, or election canvasser into a potential – if not likely – criminal with every motion sensor alert beamed to their phone.

“By sending photos and alerts every time the camera detects motion or someone rings the doorbell, the app can create an illusion of a household under siege,” EFF argues. “It turns what seems like a perfectly safe neighborhood into a source of anxiety and fear. This raises the question: do you really need Ring, or have Amazon and the police misled you into thinking that you do?”

Police departments are greatly incentivized by Ring to further this feedback loop. In areas where police are partnered with the company, departments are granted credits for every resident who downloads the Neighbors app, which they can use to buy more cameras to distribute to residents. As such, officers are encouraged to act as unadvertised sales reps for Ring.

https://www.rt.com/op-ed/514289-amazon-ring-privacy-us/


The Future is NOW!

AI Surveillance & Deterrence.

Intelligent detection & built-in deterrence


New VIP Vision™ Pro AI turret with siren & strobe light


Easily add alarm functions to your CCTV systems.

The new deterrence alarm turret dome combines AI perimeter protection & people counting with built-in siren / strobe alarm to deliver a comprehensive security solution.

Stay alerted & deter intruders in real-time. The deterrence alarm responds to recorder alarm events, activating the light and siren to deter & warn the potential intruder, while also notifying the user via mobile app. 

  • Automated Deterrence – Deter intruders during set hours of the day
  • Siren & Strobe – Customisable siren tone or recording plays on trigger
  • Alarm Events – Responds to tripwire, intrusion, motion detection & more
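
As a rough sketch of how an event-driven deterrence chain like this hangs together (the names and armed schedule below are hypothetical, not the VIP Vision or recorder API), a handler reacts to a recorder alarm event by switching on the strobe and siren during the armed hours and pushing a notification to the user’s mobile app:

```python
# Minimal sketch of an event-driven deterrence chain (hypothetical names,
# not the VIP Vision / recorder API).
from dataclasses import dataclass
from datetime import datetime, time

ALARM_EVENTS = {"tripwire", "intrusion", "motion"}   # event types the turret reacts to
ARMED_FROM, ARMED_TO = time(22, 0), time(6, 0)       # deterrence armed overnight (example)

@dataclass
class AlarmEvent:
    kind: str        # e.g. "tripwire", "intrusion", "motion"
    camera: str      # channel name on the recorder
    when: datetime

def within_armed_hours(t: time) -> bool:
    # The armed window wraps past midnight, so handle both segments.
    if ARMED_FROM <= ARMED_TO:
        return ARMED_FROM <= t <= ARMED_TO
    return t >= ARMED_FROM or t <= ARMED_TO

# Stand-ins for whatever the recorder / camera SDK actually exposes.
def activate_strobe(camera: str) -> None:
    print(f"[{camera}] strobe ON")

def play_siren(camera: str, tone: str = "default") -> None:
    print(f"[{camera}] siren playing ({tone})")

def send_push_notification(message: str) -> None:
    print(f"push notification: {message}")

def handle_event(event: AlarmEvent) -> None:
    if event.kind not in ALARM_EVENTS:
        return                                        # ignore unrelated recorder events
    if within_armed_hours(event.when.time()):
        activate_strobe(event.camera)                 # flash the white-light strobe
        play_siren(event.camera)                      # or a custom recorded message
    send_push_notification(f"{event.kind} on {event.camera} at {event.when:%H:%M}")

handle_event(AlarmEvent("intrusion", "Front yard turret", datetime(2022, 7, 20, 23, 15)))
```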


NEW – Pro AI Series 5.0MP turret dome camera with deterrence alarm

  • Active Deterrence – Built-in strobe & siren that react to recorder events
  • AI Surveillance – People and vehicle identification via IVS & SMD+


Avigilon SOFTWARE DEFECT ADVISORY: ACC 7.6.2.2 AND LATER

Avigilon has become aware of a software defect in Avigilon Control Center (ACC) software versions 7.6.2.2 and later, which can result in a restart of the ACC™ Server component and an interruption in viewing and recording video.

The issue can occur when a configured Analytic Event causes a rule to generate an image snapshot or video clip.

Affected customers are those using the “Send email” or “Send notification to Central Monitoring Station” rule actions with any Avigilon analytics cameras, or with other cameras and an Avigilon analytics or AI Appliance.

Disabling email rule actions with attached images or video is a known workaround.


This issue has been resolved in software version ACC 7.10.2.14 for Windows systems and in firmware version 5.2.2.60-ACC.7.10.2.14 for ACC ES, ACC ES Rugged and AI Appliances, both now available for download.


Affected customers should upgrade as soon as possible to avoid unscheduled server restarts, and all other customers are encouraged to upgrade as soon as convenient.


Singapore will be the first country in the world to use facial verification in its national identity scheme.

The biometric check will give Singaporeans secure access to both private and government services.

The government’s technology agency says it will be “fundamental” to the country’s digital economy.

It has been trialled with a bank and is now being rolled out nationwide. It not only identifies a person but ensures they are genuinely present.

“You have to make sure that the person is genuinely present when they authenticate, that you’re not looking at a photograph or a video or a replayed recording or a deepfake,” said Andrew Bud, founder and chief executive of iProov, the UK company that is providing the technology.

The technology will be integrated with the country’s digital identity scheme SingPass and allows access to government services.

“This is the first time that cloud-based face verification has been used to secure the identity of people who are using a national digital identity scheme,” said Mr Bud.

Verification or recognition?

Both facial recognition and facial verification depend on scanning a subject’s face, and matching it with an image in an existing database to establish their identity.

The key difference is that verification requires the explicit consent of the user, and the user gets something in return, such as access to their phone or their bank’s smartphone app.

Facial recognition technology, by contrast, might scan the face of everyone in a train station, and alert the authorities if a wanted criminal walks past a camera.

“Face recognition has all sorts of social implications. Face verification is extremely benign,” said Mr Bud.
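
The difference can be made concrete with a small illustrative sketch (not iProov’s or SingPass’s actual implementation), assuming a generic face-embedding model that turns each face image into a fixed-length vector: verification is a one-to-one comparison of a live capture against the single template enrolled for the identity being claimed, whereas recognition searches the capture against a whole gallery of people.

```python
# Illustrative only: 1:1 verification vs 1:N recognition over face embeddings,
# assuming some model has already turned each face image into a vector.
import numpy as np

THRESHOLD = 0.6  # similarity above which two embeddings are treated as the same face

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live: np.ndarray, enrolled: np.ndarray) -> bool:
    """1:1 - does the live capture match the one identity the user claims?"""
    return cosine_similarity(live, enrolled) >= THRESHOLD

def identify(live: np.ndarray, gallery: dict[str, np.ndarray]) -> str | None:
    """1:N - which identity in a whole gallery, if any, does this face match?"""
    best_name, best_score = None, THRESHOLD
    for name, enrolled in gallery.items():
        score = cosine_similarity(live, enrolled)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

Verification answers a question the user has asked (“am I who I claim to be?”); recognition trawls everyone who passes the camera, which is why the privacy stakes differ so sharply.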
Privacy advocates, however, contend that consent is a low threshold when dealing with sensitive biometric data.

“Consent does not work when there is an imbalance of power between controllers and data subjects, such as the one observed in citizen-state relationships,” said Ioannis Kouvakas, legal officer with London-based Privacy International.
Business or government?

In the US and China, tech companies have jumped on the facial verification bandwagon.

For example, a range of banking apps support Apple Face ID or Google’s Face Unlock for verification, and China’s Alibaba has a Smile to Pay app.

Many governments are already using facial verification too, but few have considered attaching the technology to a national ID.

In some cases that’s because they don’t have a national ID at all. In the US, for example, most people use state-issued drivers’ licences as their main form of identification.

China hasn’t attempted to link facial verification to its national ID, but last year enacted rules forcing customers to have their faces scanned when they buy a new mobile phone, so that they could be checked against the ID provided.

Nevertheless, facial verification is already widespread in airports, and many government departments are using it, including the UK Home Office and National Health Service and the US Department of Homeland Security.
How will it be used?

The technology is already in use at kiosks in branches of Singapore’s tax office, and one major Singapore bank, DBS, allows customers to use it to open an online bank account.

It is also likely to be used for verification at secure areas in ports and to ensure that students take their own tests.

It will be available to any business that wants it and meets the government’s requirements.

“We don’t really restrict how this digital face verification can be used, as long as it complies with our requirements,” said Kwok Quek Sin, senior director of national digital identity at GovTech Singapore.

“And the basic requirement is that it is done with consent and with the awareness of the individual.”

GovTech Singapore thinks the technology will be good for businesses, because they can use it without having to build the infrastructure themselves.

Additionally, Mr Kwok said, it is better for privacy because companies won’t need to collect any biometric data.

In fact, they would only see a score indicating how close the scan is to the image the government has on file.
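
As a purely hypothetical sketch (the endpoint, field names and threshold below are invented for illustration and are not GovTech’s actual API), a relying business’s integration might submit the live capture and the claimed identity and receive back only a score and a pass/fail decision, never the stored biometric template:

```python
# Hypothetical relying-party call to a national face-verification service.
# The URL, fields and response shape are assumptions for illustration only;
# the business never receives the government's stored biometric data.
import requests

VERIFY_URL = "https://verify.example.gov.sg/api/face/verify"  # placeholder URL

def verify_customer(claimed_id: str, live_capture_jpeg: bytes) -> bool:
    response = requests.post(
        VERIFY_URL,
        data={"claimed_id": claimed_id},
        files={"live_capture": ("capture.jpg", live_capture_jpeg, "image/jpeg")},
        timeout=10,
    )
    response.raise_for_status()
    result = response.json()     # e.g. {"score": 0.93, "match": true}
    return result["match"] and result["score"] >= 0.9
```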

https://www.bbc.co.uk/news/business-54266602