
Monthly Archives: May 2017

Microsoft Inches Toward a World Without Passwords

Microsoft on Tuesday announced the general availability of its phone sign-in for customers with Microsoft accounts — a system that could be the beginning of the end for passwords.

The new system requires that customers add their accounts to the Microsoft Authenticator app, which comes in both iOS and Android versions, noted Alex Simons, director of program management of the Microsoft Identity Division.

After supplying a username, a member will get a mobile phone notification. Tapping “approve” on the app will authenticate the member’s information.

The new phone sign-in process is easier than two-factor authentication, according to Simons. 2FA requires users first to enter passwords, and then to enter a code delivered via text or email.

The new process is safer than password-only systems, which can be forgotten, stolen for use in a phishing scheme, or otherwise compromised, he said.

Microsoft Authenticator

Microsoft Authenticator, introduced last summer, started out as a replacement for earlier authentication apps, both for enterprise use in Azure AD and consumer use in regular Microsoft accounts. The initial version allowed fingerprint authentication in place of passcodes, and offered support for wearables including Apple Watch and Samsung Gear.

Setting up Microsoft’s new phone sign-in system is easy. If customers already have Microsoft Authenticator for their personal accounts, they can select the dropdown button on the account tile and select “enable phone sign-in.”

Android users will be prompted to set up the authenticator. iPhones will set up the authenticator automatically. Users who don’t have a phone available can elect to access their accounts using a password.

Microsoft has not made the phone sign-in system available to Windows Phone users.

Windows Phone makes up less than 5 percent of the active Authenticator Apps user base, Simons noted, so the company has prioritized iOS and Android. When the system achieves success on those two platforms, Microsoft will consider making it ready for Windows Phone.

Password Problems

The idea of moving away from passwords has been around for years, in part due to their vulnerability to hacking.

Microsoft CEO Satya Nadella and Cloud Platform General Manager Julia White discussed the idea of moving away from passwords at the Government Cloud Forum in November 2015.

Microsoft then employed Windows 10 Passport to give customers a smart card level of protection, using the device as the first level of protection, then Windows Hello for confirmation through biometrics, such as facial recognition, iris scanning or fingerprints.

Better Than 2FA?

The new functionality from Microsoft is not groundbreaking, but it represents a true upgrade from traditional password authentication methods, suggested Rik Ferguson, vice president for security research at Trend Micro.

“This technology is definitely an improvement over using authenticator apps to generate one-time passwords, which can still be hijacked through a man-in-the-browser attack,” he told the E-Commerce Times.

The new app represents true two-factor authentication in the same way Apple uses its Trusted Device authentication or Google uses its prompts.

Using interactive prompts or using an out-of-band trusted device like a smartphone rather than one-time passwords from an authenticator app or SMS does away with having data pass through the same browser, Ferguson added.

However, the new system doesn’t necessarily make logins more secure, Trend Micro Cloud Security VP Mark Nunnikhoven told the E-Commerce Times.

Microsoft’s approach substitutes “something you know,” the password, with “something you have,” the phone, he said, but it is not as strong as genuine two-factor authentication.

Securing Your Linux System Bit by Bit

As daunting as securing your Linux system might seem, one thing to remember is that every extra step makes a difference. It’s almost always better to make a modest stride than let uncertainty keep you from starting.

Fortunately, there are a few basic techniques that greatly benefit users at all levels, and knowing how to securely wipe your hard drive in Linux is one of them. Because I adopted Linux primarily with security in mind, this is one of the first things I learned. Once you have absorbed this lesson, you will be able to part with your hard drives safely.

As you might have deduced, the usual way of deleting doesn’t always cut it. The most often-used processes for deleting files — clicking “delete” in the operating system or using the “rm” command — are not secure.

When you use one of these methods, all your hard drive does is mark the area where the deleted file used to be as available for new data to be written there. In other words, the original state of the bits (1s and 0s) of the deleted file is left intact, and forensic tools can recover the files.

This might seem like a bad idea, but it makes sense. Hard drives are designed to optimize hardware integrity, not security. Your hard drive would wear out very quickly if it reset the bits of a deleted file to all 0s every time you deleted a file.

Another process devised with hard drive lifespan in mind is “wear leveling,” a firmware routine (found mainly in solid-state drives) that spreads new writes across the drive instead of saving data sequentially. This prevents your drive from wearing out data cells, as those near the beginning of the drive would suffer the most wear if it saved data sequentially. However, it also means it is unlikely that you ever would naturally overwrite a deleted file just through long-term use of the drive.

So, what does it mean to “securely wipe” a hard drive?

Moving Raw Bits

Secure deletion involves using a program to overwrite the hard drive manually with all 0s (or random data). This useless data overwrites the entire drive, including every bit of every saved and deleted file. It even overwrites the operating system, leaving nothing for a malicious actor to exploit.

Since the command line is usually the simplest way of going about manual operations like this, I will go over this method. The best utility for this is the “dd” command.

The “dd” command can be used for many things besides secure deletion, like making exact backups or installing Linux distributions to USB flash drives. What makes it so versatile is that whereas commands like “mv” and “cp” move files around as file objects, “dd” moves data around as a stream of raw bits. Essentially, while “mv” and “cp” see files, “dd” only sees bits.
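A tiny, safe way to see that byte-level behavior for yourself (the file names here are arbitrary): “dd” can copy an exact number of raw bytes out of a file, something “cp” has no notion of.

```shell
# Write a short string to a file, then copy only its first 4 raw bytes
printf 'linuxbits' > src.txt
dd if=src.txt of=dst.txt bs=1 count=4 2>/dev/null

# The destination holds exactly those 4 bytes
cat dst.txt    # prints: linu
```

The same input/output mechanics apply whether the target is an ordinary file or an entire drive.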

What “dd” does is very simple: It takes an input and sends it to an output. Your Linux system has a stream of 0s it can read located at /dev/zero. This is not a normal file — it’s an endless stream of 0s represented as a file.

This will be our input for a wipe operation, for the purpose of this tutorial. The output will be the device to be overwritten. We will not be overwriting an actual running system, as 1) you probably wouldn’t want to; and 2) it actually wouldn’t work, because your system would overwrite the part of the system responsible for performing the overwrite before the overwrite was complete.
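You can peek at this stream yourself. An illustrative one-liner reads just eight bytes from /dev/zero and prints them in hex:

```shell
# Read 8 bytes from the endless zero stream and display them in hex
head -c 8 /dev/zero | od -An -tx1
```

Every byte that comes out is 00, no matter how many you read — which is exactly what makes it useful as wipe input.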

Securely erasing external storage devices, like USB flash drives and external hard drives, is pretty straightforward, but for wiping your computer’s onboard hard drive, there are some extra steps involved.

The Live-Boot Option

If you can’t use a running system to wipe an onboard drive, how do you perform the operation? The answer is live-booting. Many Linux distributions, including those not explicitly specialized for the purpose, can be loaded and run on a computer from a connected USB drive instead of its onboard drive. When booted this way, the computer’s onboard drive is not accessed at all, since the system’s data is read entirely from the USB drive.

Since you likely installed your system from a bootable USB drive, it is best to use that. To live-boot, we have to change the place where the computer checks to find an operating system to run by entering the BIOS menu.

The BIOS is the firmware code that loads before any part of any OS runs, and hitting the right key at boot time opens its menu. This key differs from computer to computer. It’s usually one of the “F” keys, but it might be something else, so it might take a few tries to figure out; the first screen that displays should indicate where to look.

Once you find it, insert the live-boot USB, reboot the computer directly into the BIOS menu, and select the option to change the boot order. You should then see a list of storage devices, including the inserted USB. Select this and the live system should come up.

Locating the Right Address

Before we do any deleting, we have to figure out which address our system assigns to the drive to be deleted (i.e., the target drive). To do that, we will use the “lsblk” command, for “list block devices.” It returns information about attached block devices, which are essentially hard drive-type devices.

Before running the command, take note of the target drive’s storage size, and detach all devices connected to your computer EXCEPT the drive storing the system you are live-booting from. Then, run “lsblk” with no arguments or options.

$ lsblk

The only devices that should appear are your onboard hard drive and the live-booted USB. You will notice that “lsblk” returns a name (under “NAME”) beginning with “sd” and then a letter, with branching lines to the same name appended with a number. The name the branches originate from is the name of the “file” serving as the address of the drive in the /dev directory, a special directory that represents devices as files so the system can interact with them.

You should see an entry with the size of the USB drive hosting the live-boot system and a path under “MOUNTPOINT”, and (only) one other entry with the size of your target drive with no mount point listed. This second entry gives you the address for the output of “dd”. For instance, if your target drive corresponds to the name “sdb”, then that means /dev/sdb is the address.
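As an illustration only — the names, sizes and mount points will differ on your machine — the output might look something like this, with “sda” as the onboard target drive (no mount point) and “sdb” as the live-boot USB:

```
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk
└─sda1   8:1    0 465.8G  0 part
sdb      8:16   1  14.9G  0 disk
└─sdb1   8:17   1  14.9G  0 part /run/live/medium
```

In this hypothetical listing, /dev/sda would be the address to wipe.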

However, to identify the address of an external drive you want to delete, run “lsblk” once with no device attached, check the (single) entry against your onboard drive’s size and make a note of its address, connect your target drive, run “lsblk” again, and check that its size corresponds to that of one of the entries in the output.

The output of the second “lsblk” command should now return two entries instead of one, and one of them should match the target’s size. If your system is configured to automatically access inserted drives, you should see a path including “/media” under “MOUNTPOINT”, but otherwise the target drive should list nothing in that column.

As these addresses correspond to hard drives, it is important to be EXTREMELY careful to give the right one, because otherwise you will delete the wrong drive. As I noted earlier, if you accidentally give the address of your running system as the output, the command will immediately start writing zeros until you stop it (by hitting “Ctrl-c”) or your system crashes, resulting in irrecoverable data loss either way.

For example, since the letters are assigned alphabetically starting (usually) with the running system, if a single connected external drive is the target, it probably will be addressed as /dev/sdb. But, again, check this carefully, because it may be different for you.

Foiling Identity Thieves

Now we’re ready to delete. All we do is invoke “dd,” give /dev/zero as the input, and give our target (for this example, /dev/sdb) as the output. “dd” is an old command from the time before Linux, so it has a somewhat odd syntax. Instead of options prepended with dashes (“-“), it uses “if=” for “input file” and “of=” for “output file.” Our command, then, looks like this.

$ dd if=/dev/zero of=/dev/sdb
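Before pointing this at a real device, you can rehearse the exact same invocation against an ordinary file — a sketch only, with an arbitrary practice file standing in for /dev/sdb:

```shell
# Create a 1 MiB practice "drive" filled with random data
dd if=/dev/urandom of=practice.img bs=1M count=1 2>/dev/null

# Overwrite it with zeros, exactly as we would a real /dev/sdX target
dd if=/dev/zero of=practice.img bs=1M count=1 2>/dev/null

# Verify: compare the first 1 MiB against /dev/zero; no output means all zeros
cmp -n 1048576 practice.img /dev/zero && echo "wiped: all zeros"
```

On a real drive, GNU dd also accepts “bs=1M” to write in larger chunks and “status=progress” to report how far along the wipe is.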

Depending on how big the target drive is, and how fast it can write data, this could take a while. Wiping a 16-GB flash drive could take as little as 10 minutes, while overwriting a 1-TB drive could take the better part of a day, because the bottleneck is the drive’s sustained write speed. You can do other things with your computer while it runs (though not with that terminal).

Though this is probably not something you’ll do often, knowing how definitely will serve you well in the rare instances when you need to. Identity theft from forensically analyzing discarded drives happens all the time, and this simple procedure will go a long way toward defending against it.

Microsoft Expands Linux Container Support in Windows Server

Microsoft has decided to expand its support for Linux containers in the next release of Windows Server.

Linux containers and workloads will work natively on Windows Server, said Erin Chapple, general manager for the server operating system, in an online post last week.

The company also will extend Windows Server’s Hyper-V isolation capability, which was introduced in the 2016 release of the operating system.

“This means customers will no longer have to deploy two separate container infrastructures to support both their Windows and Linux-based applications,” Chapple wrote.

What’s more, Bash on Windows also is coming to the next edition of Windows Server. That’s good news for developers.

“This unique combination allows developer and application administrators to use the same scripts, tools, procedures and container images they have been using for Linux containers on their Windows Server container host,” Chapple explained.

Slimmer Nano Server

Microsoft also has improvements in store for the container performance of its Nano Server product, Chapple noted.

Nano Server, introduced in 2015, is a purpose-built operating system designed to run born-in-the-cloud applications and containers.

“The idea was to make it tiny, and allow each developer to add only the necessary elements for their specific micro-services to it,” explained Ben Bernstein, CEO of Twistlock.

“It’s more compliant, stable and secure,” he told LinuxInsider. “The image does exactly what the developer adds to it and nothing more — no weird under-the-hood elements.”

The next release of Windows Server will focus on making Nano Server the very best container image possible, Chapple wrote.

Customers will see Nano Server images shrink in size by more than 50 percent, which will decrease startup times and improve container density, she noted.

Targeting Pain Points

Reducing the size of the operating system inside a container is important for reserving resources for the primary application running in it.

“Ideally, you’d want the underlying operating system to be zero, because you want it entirely out of the way,” said Rob Enderle, principal analyst at the Enderle Group.

“This isn’t there yet,” he told LinuxInsider, “but it’s very thin and gets out of the way as much as possible.”

The size of Windows containers is one of three pain points with Microsoft’s implementation of the technology, noted Amir Jerbi, CTO of Aqua Security.

“The size of Windows containers compared to Linux containers is very big — over 1 gigabyte,” he told LinuxInsider. “This will reduce that by 50 percent.”

Running Linux containers natively on Windows Server, and Linux tools on Windows, makes things simpler for shops using both operating systems, Jerbi added.

Linux Dominates Containers

Microsoft’s container strategy aligns the company with current customer demand, Jerbi said.

“Organizations are looking to normalize operation processes and tools,” he noted. “Having a single platform that runs both Windows and Linux containers helps with that.”

Microsoft’s moves reflect its recognition of the state of the container space.

“In reality, 99 percent of container images are Linux images,” observed Twistlock’s Bernstein.

“Since we are talking about containers that act as micro-services and, in turn, engage with each others’ containers, a Windows-containers-only environment is not realistic,” he pointed out. “For Microsoft to bootstrap any usage of Windows containers, it must support usage of existing Linux images.”

Containers have become important for developing software in today’s application environments. They can shorten development cycles. They allow software to be run anywhere — on premises or in the cloud. They also can simplify the development process because of the multitude of ready-made images.

“Studies show that containers boost productivity,” Bernstein said, “which is why software product companies want to adopt them.”

Google Gives Up Scanning Personal Gmail

Google recently announced the end of its policy of scanning user emails for targeted advertising purposes — a controversial practice that riled privacy advocates and spurred legal challenges.

Gmail is the world’s most widely used email provider, with more than 1.2 billion users.

Google attributed its decision to gains it has made in the enterprise. Its G Suite business over the past year has more than doubled in size to 3 million paying corporate customers, who are not subject to the scanning process.

“G Suite’s Gmail is already not used as input for ads personalization, and Google has decided to follow suit later this year in our free consumer email service,” said Diane Greene, senior vice president at Google Cloud. “This decision brings Gmail ads in line with how we personalize ads for other Google products.”

Ads are based on user settings, and users can disable personalization, Greene noted.

G Suite will continue to be ad-free, she said.

Legal Fight

The policy change represents a major step forward for online privacy, said Marc Rotenberg, executive director of the Electronic Privacy Information Center, which has challenged the Google practice in court.

“EPIC opposed Google scanning email from the start and won several significant battles, including the 2014 decision to end scanning of student emails,” he told TechNewsWorld. “Keep in mind also that Google was scanning the email of non-Gmail users, which raised problems under federal wiretap law and was the frequent target of lawsuits.”

One case pending appeal before the Massachusetts Supreme Judicial Court, Marquis v. Google, is a class action, Rotenberg noted. It was launched by a resident who alleged his AOL account had been scanned for advertising purposes.

The suit argues that the practice amounts to wiretapping, because Massachusetts is a two-party state that requires both parties’ consent prior to recording any information.

A settlement was reached late last year in a California class action brought by Daniel Matera and Susan Rashkis, who accused Google of violating federal wiretapping and state privacy laws by scanning non-Gmail accounts for advertising purposes.

As part of that settlement, Google agreed to pay US$2.2 million in legal fees, but a federal judge earlier this year rejected the agreement.

Enterprise Concerns

As Google makes further inroads into the cloud business, it recognizes that customers will be wary of anything that threatens their privacy and security, especially when they weigh its offerings against those of incumbent cloud services providers, noted Jeff Kaplan, managing director of ThinkStrategies.

“Google has always assumed that its users accept the implicit cost of using its free apps: that they will be targets of its ads and other search engine marketing mechanisms,” he told TechNewsWorld.

“However, as it tries to build its enterprise business, Google has recognized it must abandon this tactic to remain competitive with other enterprise and collaboration alternatives, such as Microsoft Office 365,” Kaplan said.

It’s not likely that the new privacy policy will harm Google’s ability to generate revenue, said Jim McGregor, principal analyst at Tirias Research.

“Google gathers tons of information from other sources,” he told TechNewsWorld, “and already has massive amounts of data on just about everything, including individuals.”