Categories
Media News Security

Man convicted of “extensive data breach”

Man convicted of “extensive data breach” in Bergen District Court

Article from Digi / BT / NTB

A 30-year-old man has been sentenced in Bergen District Court to 14 days of suspended imprisonment for a data breach against the Norwegian Public Roads Administration. The man says he wanted to develop an app.

In addition to the suspended prison sentence, the man, a foreign national living in Bergen, must forfeit two hard drives and one SSD, Bergens Tidende writes.

The defendant wanted to develop an app that would allow contact with the owner of a motor vehicle without exchanging personal information, according to the judgment.

The man extracted information about Norwegian car owners from the Roads Administration’s website, but this went beyond the information the Norwegian Public Roads Administration intended to offer through the service. He was therefore convicted of violating section 207 of the Penal Code, which covers breaking into computer systems.

The defendant understood that this was not how the service was meant to be used, the court found.

But the court also notes that the information he obtained could have been acquired legally through a request for access.

The man’s defender, attorney Alexander Gonzalo Sele, says he and the client will go through the verdict and consider whether to appeal.

– We believe the judgment raises fundamental questions about what can be characterized as a data breach. He retrieved information that was publicly available and that one could also find in a regular telephone directory, Sele says, pointing out that his client did not obtain any sensitive information.

© NTB

Source: digi.no (Article in Norwegian)

Improbus’ comments

The verdict (case number TBERG-2019-141281) is available online in Norwegian (Google Translate produces a passable English translation).

According to the indictment (and the verdict), the accused accessed publicly available web resources served by the Norwegian Public Roads Administration.

The accused then opened several browser tabs and modified the individual URLs slightly, to see whether the different HTTP requests yielded distinct, but still relevant, results.

The accused then allegedly proceeded to collect the responses returned by the site, storing them in a local database, one record per HTTP request.
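
For illustration only, here is a minimal sketch of the kind of enumeration and local storage described above. The endpoint URL, query parameter and ID list are hypothetical placeholders, not details from the case.

    # Illustration only: enumerate a public lookup endpoint by varying one
    # URL parameter and store each response locally, one row per HTTP request.
    # The endpoint and parameter are hypothetical, not taken from the case.
    import sqlite3
    import urllib.request

    BASE_URL = "https://example.org/vehicle-lookup?regno={}"  # hypothetical

    def collect(identifiers, db_path="responses.db"):
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS responses (query TEXT, body TEXT)")
        for ident in identifiers:
            url = BASE_URL.format(ident)
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    body = resp.read().decode("utf-8", errors="replace")
            except OSError:
                continue  # skip unreachable or invalid lookups
            con.execute("INSERT INTO responses VALUES (?, ?)", (url, body))
        con.commit()
        con.close()

Nothing in a sketch like this involves bypassing authentication or exploiting a software vulnerability; it simply repeats the same kind of request a browser would make, only many times over.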

Bergen District Court has ruled that even though the information obtained and stored was already publicly available, and even though the activity neither caused any damage nor placed any significant load on the server, the action is still to be regarded as illegal.

Since the information on the Norwegian Public Roads Administration’s website was already publicly available, it is natural to assume that this system behavior was intentional.

To Improbus it seems clear that what has been described as misuse of a minor design flaw was not exploited for malicious purposes at all, but rather used as a means of retrieving public data in an efficient and convenient way.

If the data had been private or sensitive, the situation would have been quite different – perhaps not technically or legally, but certainly ethically and morally.

It is sad to see that neither the courts nor the police are able to keep up with current knowledge about how information systems are commonly used.

If this really is a criminal act, it is nonetheless a victimless one.

Categories
Media Syndicated

FCC Proposes to Fine Wireless Carriers $200M for Selling Customer Location Data

The U.S. Federal Communications Commission (FCC) today proposed fines of more than $200 million against the nation’s four largest wireless carriers for selling access to their customers’ location information without taking adequate precautions to prevent unauthorized access to that data. While the fines would be among the largest the FCC has ever levied, critics say the penalties don’t go far enough to deter wireless carriers from continuing to sell customer location data.

The FCC proposed fining T-Mobile $91 million; AT&T faces more than $57 million in fines; Verizon is looking at more than $48 million in penalties; and the FCC said Sprint should pay more than $12 million.

An FCC statement (PDF) said “the size of the proposed fines for the four wireless carriers differs based on the length of time each carrier apparently continued to sell access to its customer location information without reasonable safeguards and the number of entities to which each carrier continued to sell such access.”

The fines are only “proposed” at this point because the carriers still have an opportunity to respond to the commission and contest the figures. The Wall Street Journal first reported earlier this week that the FCC was considering the fines.

The commission said it took action in response to a May 2018 story broken by The New York Times, which exposed how a company called Securus Technologies had been selling location data on customers of virtually any major mobile provider to law enforcement officials.

That same month, KrebsOnSecurity broke the news that LocationSmart — a data aggregation firm working with the major wireless carriers — had a free, unsecured demo of its service online that anyone could abuse to find the near-exact location of virtually any mobile phone in North America.

In response, the carriers promised to “wind down” location data sharing agreements with third-party companies. But in 2019, Joseph Cox at Vice.com showed that little had changed, detailing how he was able to locate a test phone after paying $300 to a bounty hunter who simply bought the data through a little-known third-party service.

Gigi Sohn is a fellow at the Georgetown Law Institute for Technology Law and Policy and a former senior adviser to former FCC Chair Tom Wheeler in 2015. Sohn said this debacle underscores the importance of having strong consumer privacy protections.

“The importance of having rules that protect consumers before they are harmed cannot be overstated,” Sohn said. “In 2016, the Wheeler FCC adopted rules that would have prevented most mobile phone users from suffering this gross violation of privacy and security. But [FCC] Chairman Pai and his friends in Congress eliminated those rules, because allegedly the burden on mobile wireless providers and their fixed broadband brethren would be too great. Clearly, they did not think for one minute about the harm that could befall consumers in the absence of strong privacy protections.”

Sen. Ron Wyden (D-Ore.), a longtime critic of the FCC’s inaction on wireless location data sharing, likewise called for more stringent consumer privacy laws, calling the proposed punishment “comically inadequate fines that won’t stop phone companies from abusing Americans’ privacy the next time they can make a quick buck.”

“Time and again, from Facebook to Equifax, massive companies take reckless disregard for Americans’ personal information, knowing they can write off comparatively tiny fines as the cost of doing business,” Wyden said in a written statement. “The only way to truly protect Americans’ personal information is to pass strong privacy legislation like my Mind Your Own Business Act [PDF] to put teeth into privacy laws and hold CEOs personally responsible for lying about protecting Americans’ privacy.”

Source: KrebsOnSecurity.

Categories
Media Syndicated

Zyxel 0day Affects its Firewall Products, Too

On Monday, networking hardware maker Zyxel released security updates to plug a critical security hole in its network attached storage (NAS) devices that is being actively exploited by crooks who specialize in deploying ransomware. Today, Zyxel acknowledged the same flaw is present in many of its firewall products.

This week’s story on the Zyxel patch was prompted by the discovery that exploit code for attacking the flaw was being sold in the cybercrime underground for $20,000. Alex Holden, the security expert who first spotted the code for sale, said at the time the vulnerability was so “stupid” and easy to exploit that he wouldn’t be surprised to find other Zyxel products were similarly affected.

Now it appears Holden’s hunch was dead-on.

“We’ve now completed the investigation of all Zyxel products and found that firewall products running specific firmware versions are also vulnerable,” Zyxel wrote in an email to KrebsOnSecurity. “Hotfixes have been released immediately, and the standard firmware patches will be released in March.”

The updated security advisory from Zyxel states the exploit works against its UTM, ATP, and VPN firewalls running firmware version ZLD V4.35 Patch 0 through ZLD V4.35 Patch 2, and that those with firmware versions before ZLD V4.35 Patch 0 are not affected.

Zyxel’s new advisory suggests that some affected firewall products won’t be getting hotfixes or patches for this flaw, noting that the affected products listed in the advisory are only those which are “within their warranty support period.”

Indeed, while the exploit also works against more than a dozen of Zyxel’s NAS product lines, the company only released updates for NAS products that were newer than 2016. Its advice for those still using those unsupported NAS devices? “Do not leave the product directly exposed to the internet. If possible, connect it to a security router or firewall for additional protection.”

Hopefully, your vulnerable, unsupported Zyxel NAS isn’t being protected by a vulnerable, unsupported Zyxel firewall product.

CERT’s advisory on the flaw rates this vulnerability at a “10” — its most severe. My advice? If you can’t patch it, pitch it. The zero-day sales thread first flagged by Holden also hinted at the presence of post-authentication exploits in many Zyxel products, but the company did not address those claims in its security advisories.

Recent activity suggests that attackers known for deploying ransomware have been actively working to test the zero-day for use against targets. Holden said the exploit is now being used by a group of bad guys who are seeking to fold the exploit into Emotet, a powerful malware tool typically disseminated via spam that is frequently used to seed a target with malcode which holds the victim’s files for ransom.

“To me, a 0day exploit in Zyxel is not as scary as who bought it,” he said. “The Emotet guys have been historically targeting PCs, laptops and servers, but their venture now into IoT devices is very disturbing.”

Source: KrebsOnSecurity.

Categories
News

Improbus acquires ICEC

Improbus has acquired ICEC, the International Center for Emergency Communication.

As of today, 01.01.2019, both companies will act as one.

The companies believe that their product portfolios complement each other, especially in the areas of emergency communication, security, training, and education.

Most ICEC products and services will be fully incorporated into the Improbus product portfolio within a month, while specialized courses or custom services will remain under the ICEC brand.

For more information, please contact Improbus via Telegram (chat) or email.

Categories
Media Security

How PhotoDNA for Video is being used to fight online child exploitation

In the past, when someone tipped off the Internet Watch Foundation’s (IWF) criminal content reporting hotline to an online video they thought included child sexual abuse material, an analyst at the U.K. nonprofit often had to watch or fast forward through the entire video to investigate it.

Because people sharing videos of child sexual abuse often embed this illegal content in an otherwise innocuous superhero flick, cartoon or home movie, it could take 30 minutes or several hours to find the content in question and determine whether the video should be taken down and reported to law enforcement.

Last year, IWF, a global watchdog organization, started leveraging PhotoDNA — a tool originally developed by Microsoft in 2009 for still images — to identify videos that have been flagged as child sexual abuse material. Now it often takes only a minute or two for an analyst to find illegal content.

Microsoft Cybercrime Center. Photo: Benjamin Benschneider.

Microsoft is now making PhotoDNA for Video available for free, and any organization worldwide interested in using the technology can visit the Microsoft PhotoDNA website to find out more, or to contact the team.

“It’s made a huge difference for us. Until we had PhotoDNA for Video, we would have to sit there and load a video into a media player and really just watch it until we found something, which is extremely time-consuming,” says Fred Langford, deputy chief executive of IWF, which collaborates with sexual abuse reporting hotlines in 45 countries around the world.

“This means we can identify and disrupt online sexual abuse and help victims much faster,” says Langford.

“We don’t want this illegal content shared on our products and services. And we want to put the PhotoDNA tool in as many hands as possible to help stop re-victimization.”

Courtney Gregoire, Microsoft Digital Crimes Unit

PhotoDNA for Video builds on the same technology employed by PhotoDNA, a tool Microsoft developed with Dartmouth College that is now used by over 200 organizations around the world to curb sexual exploitation of children. Microsoft leverages PhotoDNA to protect its customers from inadvertently being exposed to child exploitation content, helping to provide a safe experience for them online.

PhotoDNA has also enabled content providers to remove millions of illegal photographs from the internet; helped convict child sexual predators; and, in some cases, helped law enforcement rescue potential victims before they were physically harmed.

In the meantime, though, the volume of child sexual exploitation material being shared in videos instead of still images has ballooned. The number of suspected videos reported to the CyberTipline managed by the National Center for Missing and Exploited Children (NCMEC) in the United States increased tenfold from 312,000 in 2015 to 3.5 million in 2017. As required by federal law, Microsoft reports all instances of known child sexual abuse material to NCMEC.

Microsoft has long been committed to protecting its customers from illegal content on its products and services, and applying technology the company already created to combating this growth in illegal videos was a logical next step.

“Child exploitation video content is a crime scene. After exploring the development of new technology and testing other tools, we determined that the existing, widely used PhotoDNA technology could also be used to effectively address video,” says Courtney Gregoire, Assistant General Counsel with Microsoft’s Digital Crimes Unit. “We don’t want this illegal content shared on our products and services. And we want to put the PhotoDNA tool in as many hands as possible to help stop the re-victimization of children that occurs every time a video appears again online.”

A recent survey of survivors of child sexual abuse from the Canadian Centre for Child Protection found that the online sharing of images and videos documenting crimes committed against them intensified feelings of shame, humiliation, vulnerability and powerlessness. As one survivor was quoted in the report: “The abuse stops and at some point also the fear for abuse; the fear for the material never ends.”

The original PhotoDNA helps put a stop to this online recirculation by creating a “hash” or digital signature of an image: converting it into a black-and-white format, dividing it into squares and quantifying that shading. It does not employ facial recognition technology, nor can it identify a person or object in the image. It compares an image’s hash against a database of images that watchdog organizations and companies have already identified as illegal. IWF, which has been compiling a reference database of PhotoDNA signatures, now has 300,000 hashes of known child sexual exploitation materials.
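
As a rough, hedged illustration of the general idea only (a greyscale grid whose shading is quantified into a signature), the sketch below implements a generic average-brightness grid hash using the Pillow library. It is not the actual PhotoDNA algorithm, which is proprietary.

    # Generic perceptual-hash sketch in the spirit described above: convert to
    # black-and-white, divide into squares, quantify the shading of each square.
    # This is NOT the real PhotoDNA algorithm; Pillow (PIL) is assumed.
    from PIL import Image

    GRID = 16  # 16 x 16 squares -> a 256-bit signature

    def grid_hash(img):
        small = img.convert("L").resize((GRID, GRID))   # greyscale grid
        pixels = list(small.getdata())
        avg = sum(pixels) / len(pixels)
        # One bit per square: 1 if the square is brighter than the image average.
        return tuple(1 if p > avg else 0 for p in pixels)

    # Usage: signature = grid_hash(Image.open("photo.jpg"))

As with the real technology, a signature like this identifies the picture itself rather than any person or object in it, and is only meaningful when compared against a list of known signatures.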

PhotoDNA for Video breaks down a video into key frames and essentially creates hashes for those screenshots. In the same way that PhotoDNA can match an image that has been altered to avoid detection, PhotoDNA for Video can find child sexual exploitation content that’s been edited or spliced into a video that might otherwise appear harmless.

“When people embed illegal videos in other videos or try to hide them in other ways, PhotoDNA for Video can still find it. It only takes a hash from a single frame to create a match,” says Katrina Lyon-Smith, senior technical program manager who has implemented the use of PhotoDNA for Video on Microsoft’s own services.
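
A hedged sketch of the frame-sampling idea: pull frames from a video, hash each one with a simple grid signature like the one above, and flag the video if any single frame matches a known hash. OpenCV and NumPy are assumed here, and the hashing is again a stand-in, not the real PhotoDNA for Video.

    # Sketch of the key-frame matching idea: sample frames, hash each sampled
    # frame, and report a match as soon as one frame's hash is in the known set.
    # OpenCV (cv2) and NumPy are assumed; the hash mirrors the sketch above.
    import cv2
    import numpy as np

    GRID = 16

    def frame_hash(frame):
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        small = cv2.resize(grey, (GRID, GRID))
        return tuple(int(b) for b in (small > small.mean()).flatten())

    def video_matches(path, known_hashes, every_n=30):
        cap = cv2.VideoCapture(path)
        index, found = 0, False
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % every_n == 0 and frame_hash(frame) in known_hashes:
                found = True   # a single matching frame is enough
                break
            index += 1
        cap.release()
        return found

Real deployments select representative key frames rather than every Nth frame, but the principle that one matching frame is sufficient stays the same.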

PhotoDNA for Video is one of many technologies used by Microsoft to protect customers online. Photo: Benjamin Benschneider.

Organizations that are already using an on-premise version of PhotoDNA to remove illegal images will be able to seamlessly add the capability to identify videos. Microsoft is also looking for partners to test the video technique on its PhotoDNA Cloud Service.

Automated tools like PhotoDNA have made a huge difference in the fight against online child exploitation, particularly for smaller companies that otherwise wouldn’t have the capacity or know-how to find illegal content on their apps and websites, says Cecelia Gregson, a senior King County prosecutor and attorney for the Washington Internet Crimes Against Children Task Force.

Gregson estimates that 90 percent of the cases she investigates now come from CyberTipline reports submitted by companies using PhotoDNA to keep their platforms clean. Under federal law, all internet and email service providers are required to report knowledge of child pornography to NCMEC.

“It’s made a huge difference…We can identify and disrupt online sexual abuse and help victims much faster.”

Fred Langford, Internet Watch Foundation

“This is not about looking at someone’s online shopping patterns or uploaded family photos. We are seeking files depicting the sexual abuse of children,” says Gregson. “We are concerned with protecting child victims, and about making sure the places you go online and your children go online are not riddled with images of child abuse and exploitation. The technology can also help us identify child sexual predators whose collections of images can cause further psychological, emotional and mental trauma to their victims.”

Since PhotoDNA and other tools became widely available, the number of reports to NCMEC’s CyberTipline has grown from 1 million in 2014 to 10 million in 2017, says John Shehan, vice president for NCMEC’s exploited children division.

“These technologies allow companies, especially the hosting providers, to identify and remove child sexual content more quickly,” says Shehan. “That’s a huge public benefit.”

Learn how to detect, remove and report child sexual abuse materials with PhotoDNA for video, or contact photodnarequests@microsoft.com. Follow @MSFTissues on Twitter.

Source: Microsoft.

Categories
Media Security

PhotoDNA scans images for child abuse

Internet service providers may have better success at scanning their networks to actively seek out illicit images of child abuse, thanks to technology donated by Microsoft and Dartmouth College.

On Wednesday, the software giant and the well-known college announced that they had developed a software program to match modified images to the original by using a form of robust hashing that can ignore certain types of changes, such as resizing, cropping and the inclusion of text. The team donated the program, dubbed PhotoDNA, to the National Center for Missing and Exploited Children.
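
As a hedged illustration of what “robust hashing” means in practice (and not the actual PhotoDNA matcher): two signatures are treated as a match when they differ in at most a small number of positions, which is why minor edits such as resizing or light cropping do not defeat the comparison. The threshold below is an arbitrary placeholder.

    # Illustration of robust matching: rather than requiring identical hashes,
    # accept a candidate whose signature differs from a reference in only a few
    # positions. The threshold is hypothetical, not a real PhotoDNA parameter.
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def is_match(candidate, reference, max_distance=16):
        # A resized or lightly edited copy flips a few bits, not most of them.
        return hamming(candidate, reference) <= max_distance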

The NCMEC will make the program available to ISPs to detect the “worst of the worst” in child pornography — those images that show pre-pubescent children being sexually abused, said Ernie Allen, CEO and president of the NCMEC.

The intent is to “use the technology very narrowly and very specifically,” Allen said.

The agreement follows a number of other successful initiatives in fighting child abuse online. In June 2008, three ISPs signed an agreement with the New York State Attorney General’s office to police their networks for child pornography and donate money to the state and the NCMEC to fund investigations. In 2007, MySpace agreed with the attorneys general of more than 40 states to turn over information regarding sex offenders on its network.

While law enforcement has successfully prosecuted hundreds of cases of possession and distribution of illicit images, a small number of cases have underscored overzealous prosecutions. In one case, a Massachusetts government agency fired and reported one of its workers for having child pornography on his laptop, but a later investigation showed that the lack of functioning antivirus software resulted in his laptop being compromised and subsequently filled with illicit images.

Microsoft has already tested the software on its networks and plans to roll out the tool to scan public sources for images of child pornography, said Brad Smith, senior vice president and general counsel at the software giant.

“It is not enough to catch the perpetrators, we have to stop the images to prevent the subjects from being a victim again,” Smith said.

While Microsoft will scan public sources for matches to a small database of the worst abuse images, the software giant will not scan private data or communications, Smith said. ISPs, the government and privacy advocates should discuss the legal and policy issues of such scanning, he said.

Child pornography is a major priority of law enforcement, and the detection of images of abuse has grown significantly, according to the NCMEC. Since 2003, the organization has viewed and analyzed 30 million images classified as child pornography, the group claims. Allen predicts that the group will deal with another 9 million in 2010.

Much of the increase in child pornography is due to the Internet’s ability to allow communities to form among traders of child pornography, he said.

“They (the criminals) no longer view themselves as aberrant,” Allen said. “We made enormous progress on the commercial side … but it has migrated to the noncommercial side.”

In the latest announcement, a large-scale test of the PhotoDNA tool found that less than one false positive occurred in every billion images scanned, said Hany Farid, a professor of computer science at Dartmouth and co-developer of PhotoDNA. In addition, the software recognizes about 98 percent of images derived from those in its database.

“We tested it over billions and billions of images,” he said. “We tried very hard to make it very efficient … and to minimize the false alarm rate.”

Source: SecurityFocus.