I've been playing with this for a few days -- hopefully I've managed to come up with something useful. An example of why I wanted to write a broad, general overview of security as it stands today presented itself earlier in the week, when UnHackMe was featured on GOTD. I'm not going to say it's good or bad, but rather that if they were better informed, a lot of people probably wouldn't have bothered with it.
What you need to do, IMHO anyway, is perform a risk/reward assessment, deciding for yourself how far you want to go -- put another way, what level of risk you personally find acceptable. If you imagine 2 end points, one where you do nothing at all regarding security & privacy, & the other where you do everything short of so-called going off the grid, the further you move toward that 2nd point the harder life becomes. A PC that has never had a network connection & has never seen a CD/DVD or USB stick is the most secure, but it's also the least useful -- what can you do with it without introducing data or software? And from the moment you do, it's less secure.
On encryption... The demands are very different when you're talking about a network vs. your own personal device(s). Encryption prevents someone without the key from accessing whatever's been encrypted. The weak point is that if someone's looking over your shoulder, so to speak, when you decrypt something, the encryption is useless. You can encrypt individual files, or the entire disk, & if someone has or gets access while you're using it, e.g. through a backdoor, encryption won't slow them down in the least. The advantage of a laptop with an encrypted hard drive is that no one has access if you lose it -- the advantage of a PC with an encrypted hard drive is if it's stolen or seized -- the advantage of that encrypted drive is zero while you're actually using either.
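That at-rest vs. in-use distinction can be shown with a toy sketch -- a one-time XOR pad using only Python's standard library. To be clear, this is NOT real disk encryption (use something like BitLocker or LUKS for that); it just demonstrates that ciphertext is opaque to whoever grabs the drive, while the plaintext & key are fully exposed in memory the moment you're using the data:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the matching key byte (toy one-time pad)."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"my private files"                # stand-in for your data
key = secrets.token_bytes(len(plaintext))      # random key, same length as data

ciphertext = xor_bytes(plaintext, key)         # what a thief sees on a lost/stolen drive
recovered = xor_bytes(ciphertext, key)         # what anyone watching you sees in use

assert ciphertext != plaintext                 # opaque without the key
assert recovered == plaintext                  # fully exposed once decrypted
```

The point of the two asserts: the same data is unreadable or readable depending only on whether the key is in play -- which is exactly why a backdoor active while you're logged in defeats even strong disk encryption.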
For malware it's all about survival, & that means avoiding detection, discouraging analysis, resisting removal, & persisting after reboots [or even OS reinstalls]. A rootkit might install part of itself in the first, normally hidden portion of a hard drive, which makes it hard to get rid of without rewriting that portion of the drive -- difficult unless you fully wipe the disk. And because it loads before Windows starts, it can tell Windows not to see some of its files, so they are in effect invisible. Nonetheless rootkits can often be detected, & they can be removed. The evolution of the rootkit is to hide in firmware, in the code embedded in whatever chips. I've seen reports of this re: routers, anything USB, drives, & the BIOS itself.
Alternatives include hiding in the registry, with no files ever written to disk. Since the weakness of writing files to disk is that they might be blacklisted, file names & signatures can change very rapidly, or the malware can even be designed so each time the same file is written it has a different name & signature. The ultimate goal may be to reside solely in memory, where detection etc. is much harder -- which brings up the problem of how to survive a reboot. So innocent-appearing bits may be stored on the drive or in the registry, & assembled into the working malware in memory, with every step hidden & encrypted. As an extra precaution, malware can check its environment, sometimes pretty thoroughly, e.g. it won't attempt to run if it detects certain software, or types of software, remaining dormant or maybe even deleting itself.
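A minimal sketch of why hash/name blacklists struggle with this: if the same payload gets a few random bytes of dead padding & a random file name each time it's written, every copy behaves identically but nothing about it matches a blacklist entry. The payload bytes & ".dll" name here are purely hypothetical stand-ins:

```python
import hashlib
import secrets

payload = b"same malicious logic every time"   # hypothetical payload stand-in

def polymorphic_copy(payload: bytes) -> tuple[str, str]:
    """Return a (filename, sha256-hex) pair that differs on every 'infection'."""
    junk = secrets.token_bytes(16)              # random padding the code never executes
    name = secrets.token_hex(8) + ".dll"        # random file name each time
    return name, hashlib.sha256(payload + junk).hexdigest()

a = polymorphic_copy(payload)
b = polymorphic_copy(payload)

# Identical behavior, yet neither the name nor the hash matches any prior copy:
assert a[0] != b[0] and a[1] != b[1]
```

Changing one padding byte flips the entire hash, so a signature list of known-bad hashes never catches the next copy -- which is why behavior analysis (below) matters so much.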
[Simply trying whatever software in a VM, or using a virtualization app (e.g. Time Freeze), can mean nothing if the malware detects either & so remains dormant. Then of course once it's run in a normal Windows install, the malware feels free to do its thing.]
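The dormancy decision itself can be dead simple -- roughly a set lookup. The process names below are common VM & analysis-tool indicators, but treat this as an illustrative assumption; real malware checks far more (drivers, registry keys, MAC address prefixes, timing skew):

```python
# Hypothetical-but-typical indicator lists; real checks are much longer.
VM_MARKERS = {"vboxservice.exe", "vmtoolsd.exe", "vmwaretray.exe"}
ANALYSIS_TOOLS = {"wireshark.exe", "procmon.exe", "ollydbg.exe"}

def should_stay_dormant(running_processes) -> bool:
    """Return True if the environment looks like a VM or an analyst's machine."""
    procs = {p.lower() for p in running_processes}
    return bool(procs & (VM_MARKERS | ANALYSIS_TOOLS))

# In a test VM: lie low. On a normal install: do its thing.
assert should_stay_dormant(["explorer.exe", "VBoxService.exe"]) is True
assert should_stay_dormant(["explorer.exe", "chrome.exe"]) is False
```

So a clean run in your VM proves very little -- the sample may simply have noticed where it was.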
With any actual files able to morph & change so much & so rapidly, analyzing behavior becomes much more important for security software. In this cat & mouse game malware of course responds to the challenge, hiding the ways it collects data, hiding the collected data, & hiding the export of that data to the C&C [Command & Control] servers.
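To make "analyzing behavior" concrete, here's a toy scoring sketch. The event names & weights are invented for illustration -- real products use far richer models -- but the idea is the same: ignore what the file looks like & score what it does:

```python
# Hypothetical event names & weights, chosen only to illustrate the idea.
SUSPICIOUS_WEIGHTS = {
    "write_autorun_registry_key": 3,    # persistence attempt
    "mass_file_rewrite": 4,             # ransomware-like encryption sweep
    "outbound_to_unknown_host": 2,      # possible C&C traffic
    "read_browser_credential_store": 4, # data theft
}
ALERT_THRESHOLD = 5                     # arbitrary cutoff for this sketch

def threat_score(observed_events) -> int:
    """Sum the weights of suspicious behaviors; benign events score zero."""
    return sum(SUSPICIOUS_WEIGHTS.get(e, 0) for e in observed_events)

events = ["open_window", "mass_file_rewrite", "outbound_to_unknown_host"]
assert threat_score(events) == 6        # 4 + 2; "open_window" contributes 0
assert threat_score(events) >= ALERT_THRESHOLD
```

This is also why malware works to hide those very behaviors -- staged collection, encrypted staging files, exfiltration disguised as normal traffic -- so fewer of the weighted events are ever observed.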
Whatever the malware state of the art is, there's a good chance no one outside of its masters knows about it -- we only know about the stuff that's been caught. The somewhat perverse good news is that in the vast majority of cases the bad guys/gals have little if any need for the latest malware tech, since we generally make it so easy for them. Most of the time, IF they don't want to use typical human weaknesses to trick us, they'll simply wait for companies like Microsoft & Adobe to patch their latest security holes. Typically those companies publish details on what they fixed & why, & just as typically huge numbers of people & companies ignore those fixes -- so the bad guys/gals just have to follow the directions, more or less, to craft new exploits & gobble up more victims.
One of the things Microsoft does with Windows 10 is make it harder for you to ignore updates -- in the Home version it's said you will not be able to delay them or turn them off. And if your 3rd party security software subscription expires, Windows 10 will reactivate its built-in Defender [rebranded Security Essentials].
For more hardened targets there are 0-day security flaws & weaknesses that haven't been patched yet. One would hope they haven't been patched because the companies responsible don't know about them, but sadly that isn't always the case. 0-days are for the most part rare & valuable commodities, so unless you've got loads of money to spare they're not often used -- once one is used, someone will catch on, it'll be patched, & that 0-day becomes worthless. So they're held in reserve for high value targets, where you can expect to make back more than you spent finding & developing an exploit, or buying that 0-day.
There's a 0-day market populated in large part by governments & corporations -- the US Navy was recently caught advertising for 0-days & related exploits. This brings up a lot of discussion about whether it's right for someone to decide that you should remain vulnerable so that they might have an extra trick in their toolbox. If they have it, does someone else? Or will someone else discover [or steal] it & unleash, say, the worst ransomware ever? Is it worth the risk if they can keep you [& your country] safe? Do you trust that the people who bought those 0-days, with your tax dollars, will not use them on you & yours? There have been stories [not always widely reported] of western governments [e.g. the US] going after their own citizens, as well as people in friendly countries that they did not like because of their politics.
FWIW, June's updates for Windows included patches for 0-days revealed when they were used against Kaspersky & governments participating in the Iran negotiations -- there were likely more targets, but the attackers started removing anything related to the exploit as soon as they found out they had been discovered. The malware, labeled Duqu 2.0 by Kaspersky, is related to Duqu, which itself was tied to Stuxnet -- remember Stuxnet, & how the US gov leaked its involvement in developing/deploying it?
Duqu 2.0 brings up another issue that's being debated -- security certificates. The idea is that only highly trusted authorities issue trusted certificates to verified & trusted companies, e.g. so that you know the software you're about to install came from a known good source & can be trusted. Turns out that some of these highly trusted authorities shouldn't be, as they've issued certificates to people who should never have had them. In the US, lawmakers are discussing whether they can & should stop trusting authorities that are subject to their respective governments. Stolen & sometimes duplicated certificates have long been used by criminals, & that's where the folks behind Duqu & Duqu 2.0 come in -- they used new, original, never-before-seen stolen certificates. That they used so many, so freely, feeds the suspicion that they've got plenty more to spare, from multiple sources.
If there are that many legitimate but stolen certificates out there, can & should certificates themselves be trusted? If you're a US citizen do you have faith that any suspected cache of stolen certificates is safe, & won't be used improperly on Americans? If you're not a US citizen, how do you feel about this suspected cache in the hands of the US government?
There's also the question that Stuxnet & Flame brought up -- do we want this sort of malware tech used when its very use means those methods & code will [eventually, if not sooner] be available to anyone anywhere? Is it a good idea to hand bad guys/gals & governments tools that could be used for cyber crime &/or warfare -- tools that they could not reasonably be expected to come up with on their own?
Malware &/or exploits generally seek to use your hardware for the purposes of others, to gather [steal] data, to spy on you, to hurt you, or, as with ransomware, to force you to pay something. Some of that can be accomplished without planting a single line of code on your device(s). Every precaution you take to keep your devices secure may not be worth much at all if cyber criminals achieve those same goals on-line.
VPNs seem to be growing in popularity -- they certainly seem to be growing in number -- with many using them for security on public WiFi [which isn't a bad idea]. If you use a VPN to hide your IP address, OTOH, you might want to read an article by Kaspersky -- while it focuses on Tor, it points out a few ways that web sites can obtain your IP address despite your using some sort of proxy. It also talks about a new way your device might be fingerprinted using Javascript [measureText() and getBoundingClientRect()] -- the results appear to be unique to each device so far. https://securelist.com/analysis/publications/70673/uncovering-tor-users-where-anonymity-ends-in-the-darknet/
There is a public &/or social component to all of this... As the Snowden & other docs & leaks point out, the best & brightest minds working for government(s) are used for cyber intelligence, or in a word, offense. Defending you personally is not their job -- in fact, according to the Snowden docs, when the NSA got into China's networks & found them using bot-nets based in the US, the NSA decided to use those bot-nets themselves. I have not seen any reports of any government taking some of those best & brightest minds & setting them to work protecting you.
What I have seen plenty of, unfortunately, are proposals for rules & regulations that are somehow supposed to make our stuff more secure. In most cases unintended consequences abound [they seem unintended on the surface at least] -- e.g. often there are few or no exemptions for the security research & researchers responsible for many [most?] of the security fixes we see, so they might be just as subject to prosecution as the worst cyber criminals. Most often in the US these proposals are basically money &/or power grabs for government &/or favored corporations or political allies, many [most?] of which cost you in money & freedoms.
That stuff of course makes no difference whatsoever when agencies like the OPM hire people with zero relevant knowledge, training, or expertise to run their IT, & then break all the rules already in place. Personally I find it laughable when the government points to the great skill of the allegedly Chinese hackers who got into the OPM's databases & says that's why we need all this new stuff. Basically the OPM had zero security measures in place, failed to comply with minimal rules & standards, & one of their employees fell for a phishing e-mail. New regulations & money might help, but IMHO only if they meant the people in charge of this mess went to jail if/when it wasn't cleaned up & kept that way. Considering their pay, benefits, & untouchable job security, the carrot approach certainly has not worked, so maybe a big, Big stick would?