About security, whether in the digital or physical realm.
Posted on July 29, 2019 at 20:08
I have been signing outgoing electronic mail, on and off, for many years.
It’s getting harder.
Posted on July 8, 2018 at 15:35
It has been more than a decade since I wrote
Responsible Behaviour,
in which I mused about how many Wikipedia articles
the man on the Clapham omnibus would need to read to understand
a particular cryptography-related joke. I saw this, in part, as a proxy
for whether cryptography was becoming mainstream. I ended with:
Do you agree? More interestingly, what do you think the answer
will be in ten years?
Posted on April 28, 2018 at 14:41
My podcast application of choice is Overcast by Marco Arment. He has
just released Overcast 4.2, and the announcement is notable for its
enlightened approach to user privacy:
Your personal data isn’t my business — it’s a liability. I want as little as
possible. I don’t even log IP addresses anymore.
If I don’t need your email address, I really don’t want it.
…
And the first time you launch 4.2, people with email-based accounts will
be encouraged to migrate them to anonymous accounts.
Of course it’s not possible for all applications to operate anonymously, but the
principle is important: you should collect only as much personal information as
you require and no more. Anything more than this is a GDPR concern and a
data breach waiting to happen.
Posted on February 9, 2018 at 07:45
This site is going all-HTTPS, all the time. Read on for background and details.
[2018-03-11: HSTS implemented with max-age=1800, i.e., 30 minutes.]
[2018-04-16: HSTS implemented with max-age=31536000, i.e., one year.]
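For the record, you can check that the header is being served with curl; a minimal sketch, with example.org standing in for the real hostname:
$ curl -sI https://example.org/ | grep -i strict-transport-security
Strict-Transport-Security: max-age=31536000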
Posted on May 22, 2014 at 15:44
The key ceremony for the REEP service took place on 2014-05-18 after the REFEDS
meeting in Dublin,
Ireland.
I witnessed this ceremony and was convinced that the key
attached to this post as a
self-signed X.509 certificate was generated during the ceremony within the
hardware security module in Sweden that will be used by the REEP service to sign
metadata served by it. To certify this, I have generated a
detached signature file
for reep.pem
using my PGP key.
To the extent that you trust me to have taken care while witnessing the
ceremony, you may find that validating my signature on reep.pem
gives you some
comfort that metadata documents signed by the private key associated with
reep.pem
are, indeed, legitimate outputs of the REEP service.
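Verification itself is a one-liner; a sketch, assuming the detached signature file is named reep.pem.sig and my public key is already on your keyring:
$ gpg --verify reep.pem.sig reep.pem
gpg: Good signature from ...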
As an aside about the ceremony itself, proof that a particular computational
event has occurred in a particular way is almost impossible in a world of
networking and virtual machines. We’ve known this for a long time: the paranoia
goes back at least as far as Ken Thompson’s
Reflections on Trusting Trust.
We’re not quite living in
The Matrix,
but the evidence of one’s senses doesn’t really go very far
towards absolute proof. So what the other witnesses and I did during the
ceremony — all we could do, really — was gain confidence by asking
questions, taking photographs of the steps and trying to think of ways to
validate them. For example, I was later able to verify that the pkcs11-tool
command being used was indeed the one which would be installed on a system
running 64-bit Ubuntu 12.04. Unless, of course, Leif foresaw that trick and
subverted the md5sum
command as well. It’s turtles all the way down.
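One way to perform that kind of check, as a sketch rather than the exact commands used on the day (the opensc package name is an assumption):
$ md5sum /usr/bin/pkcs11-tool                # checksum of the binary in use
$ apt-get download opensc                    # fetch the Ubuntu package without installing it
$ dpkg-deb --fsys-tarfile opensc_*.deb \
    | tar -xO ./usr/bin/pkcs11-tool | md5sum # checksum of the packaged binary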
Posted on January 3, 2011 at 13:15
I run a simple X.509 Certification Authority for internal systems, and certain
external systems used by clients (the majority of external systems use
commercial certificates). From 2011-01-02, this CA will use a new root
certificate:
The SHA1 fingerprint for this certificate is:
34:6E:CB:19:25:15:E7:94:ED:AF:A4:F1:C4:79:BF:92:C5:8B:3C:D5
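If you have a copy of the certificate, you can recompute the fingerprint and compare; a sketch, assuming the certificate is saved as root.pem:
$ openssl x509 -in root.pem -noout -fingerprint -sha1
SHA1 Fingerprint=34:6E:CB:19:25:15:E7:94:ED:AF:A4:F1:C4:79:BF:92:C5:8B:3C:D5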
For reference, the previous root certificate is here:
The last certificate issued under the old root certificate expires on
2011-01-23.
Posted on January 4, 2008 at 21:39
If you’re at all interested in physical security as well as computer security (or, alternatively, if you find it interesting to think about security systems as opposed to just components of those systems), a new TV show called Tiger Team might be worth a look.
The idea is pretty self-explanatory if you’ve heard of the concept of a tiger team elsewhere: this is a “reality” show in which the heroes break real-world security systems using a combination of technology, brass neck and dumpster diving. Rather like Mission: Impossible but without Peter Graves and (so far) without the rubber masks. What’s not to like?
Unfortunately, I can’t see any evidence that this series will be shown anywhere
here in the UK, but you can stream the pilot episode from the cable channel’s
web site, at least for now. It’s interesting to watch the ways in which the
target’s (fairly good) security fails when approached in the right way, and the
presentation isn’t too grating even for my sensitive British ears. Some of what
you see is obviously re-enactment, but I guess that’s “reality” TV for you.
[Updated 2019-05-02: pilot episode no longer available.]
Posted on November 15, 2007 at 14:16
Bruce Schneier reports that one of the pseudo-random number generators in the recently released NIST Special Publication 800-90 (.pdf) appears to include something that looks awfully like an intentional back door:
What Shumow and Ferguson showed is that these numbers have a relationship with
a second, secret set of numbers that can act as a kind of skeleton key. If you
know the secret numbers, you can predict the output of the random-number
generator after collecting just 32 bytes of its output. To put that in real
terms, you only need to monitor one TLS
internet encryption connection in order to crack the security of that protocol.
If you know the secret numbers, you can completely break any instantiation of Dual_EC_DRBG.
It’s possible that this is accidental; if it is deliberate, the prime suspects are the NSA, who have been pushing to get this algorithm adopted for some time. So much for the usual outsider’s paranoia about how the evil TLA might be compromising our cryptography for their own nefarious ends. That’s not the scary part, though; the really scary part is the thought that perhaps that isn’t what is going on:
If this story leaves you confused, join the club. I don’t understand why the
NSA was so insistent about including Dual_EC_DRBG in the standard. It makes no
sense as a trap door: It’s public, and rather obvious. It makes no sense from
an engineering perspective: It’s too slow for anyone to willingly use it. And
it makes no sense from a backwards-compatibility perspective: Swapping one
random-number generator for another is easy.
Shumow and Ferguson’s presentation (.pdf) is short, and although there are some squiggly letters in it you don’t need to understand the mathematics of elliptic curves to follow the argument.
I look forward to seeing how this one plays out.
(Via Schneier on Security.)
Posted on August 7, 2007 at 19:19
In the wake of the Californian voting machine review, Matt Blaze and Jutta Degener invite us to play Security Public Relations Excuse Bingo:
- We read Schneier’s book
- La, la, la we’re not listening
- You’ll be hearing from our lawyers
- No one would ever think of that
- Our proprietary encryption algorithms prevent that
- … and so on ad nauseam
(Via Matt Blaze.)
[2018-07-30: updated to point to Matt Blaze’s new site.]
Posted on August 5, 2007 at 19:20
Marcus Ranum has started podcasting. The second episode in his Rear Guard podcast is a short but nicely put together rant explaining the parlous state of computer security today in terms of a dysfunctional relationship between practitioners and their organisations:
It’s clear that security will be exactly as bad as it can possibly be while
still allowing senior managers to survive. Whenever it gets across that line
— worse than it can possibly be — there will be a brief fire-drill
in order to duct tape things back together again until next time.
Last week a friend remarked, after hearing one of my long rants on an unrelated subject, that I had a very cynical view of the situation. “Thank you”, I replied, quite seriously. Marcus Ranum has a very cynical view of the security landscape: not completely without rays of hope, but nevertheless aware that a lot of bad things happen out of pure unenlightened self-interest.
Posted on July 30, 2007 at 13:34
When your browser connects to a web site protected by transport layer security of some kind (usually by accessing an https:// URL) there’s a negotiation between the two parties. Each party (browser, server) comes to the negotiation with a list of cipher suites that it is prepared to use, and the result is that one of these suites is chosen for the connection.
Recently I ran into a situation where Firefox 2.0 wasn’t connecting to a site which Firefox 1.5 had no problems with. It’s pretty hard to figure out which cipher suites Firefox is prepared to use from its documentation, so I decided to determine the answer directly by snooping on the negotiation part of the protocol.
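If you want to try something similar, ssldump is one tool that will decode the handshake as it goes past; a sketch (not necessarily the method used here, and the interface name is an assumption):
$ ssldump -n -i eth0 port 443   # prints each handshake message, including the ClientHello cipher suite list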
Read on for method and results.
Posted on April 9, 2007 at 18:51
Posted on December 16, 2006 at 17:59
In Real-World Passwords, Bruce Schneier analyses a corpus of passwords retrieved from a phishing attack on the MySpace social networking site.
The good news is that it’s clear that users are slowly becoming more aware of
the security risks of bad password choice. The bad news is that things haven’t
got all that much better, really. Schneier’s punchline:
We used to quip that “password” is the most common password.
Now it’s “password1.” Who said users haven’t learned anything about security?
These days, it’s hard for me to get up much enthusiasm for any security solution that involves a lot of user education. As well as the apathy factor and the dancing pig factor, we’re fast outrunning the ability of even the most well educated user to keep up with the bad guys. I include myself with the mass of the bewildered in this respect, as evidenced by my previous post on remembering secure passwords.
The longer term answer to these problems has to involve a move away from relying solely on inherently weak technologies like passwords and towards technologies like multi-factor authentication and federated identity systems. If we don’t have to rely on the human brain’s limited ability to remember lots of secure (and therefore inherently hard to remember) passwords, we might stand a fighting chance of building secure systems.
Posted on November 30, 2006 at 17:50
Today was the official launch of the UK Federation, or the UK Access Management Federation for Education and Research to give its Sunday name. This is a huge deal for everyone involved, myself included: some people have been working towards this point since around 2000 (I’m a relative newcomer, only having put a couple of years into it so far).
In the longer term, this will be a fairly important system for many more people:
after all, the UK Federation is a federated identity framework for the whole of
the UK education and research sectors, which I’m told involve perhaps
18 million people. If we do our job well over the next few years, though,
the best case is that like all good infrastructure it will just sink down below
the point where people even notice it. That’s a hard job, and we’ve only just
started on it.
Posted on November 22, 2006 at 10:21
The virtual world of Second Life has recently been suffering from a series of attacks from what has been referred to as “grey goo”, a term which is a direct reference to the scenario of uncontrolled exponential growth in nanotech replicators. The result of a grey goo attack is that the world fills up with junk that prevents anyone getting anything else done.
I haven’t covered this before because it is well known to the point of infuriation to most people connected with Second Life. What’s been more interesting recently is that people outside that community have started picking up issues like this from Second Life, particularly people more commonly associated with security in general. For example, Ed Felten wrote a couple of articles recently about the “copybot”, which allows you to make a copy of anything you can see in-world without paying for it (with some limitations, which aren’t relevant to this discussion). Professor Felten is perhaps best known for his work on the SDMI challenge, US v. Microsoft and, more recently, the (in-)security of electronic voting machines.
Directly on point to the grey goo attacks is Eric Rescorla’s Rescorla-goo;
again, this is a bit off what most people would think of as Eric’s
normal beat.
But that’s my point: if you’re involved however peripherally in security systems, you walk into something like Second Life and see a lot of problems waiting to happen; as Ed Felten puts it, these are really issues “from the It-Was-Only-a-Matter-of-Time file”. New systems should be learning from the mistakes of the past, not blundering through a series of unworkable solutions every time until they get to something that works until the next bad guy comes along. Unfortunately, that doesn’t seem to be how the world operates. Ed Felten has another appropriate quote for this: “Given a choice between dancing pigs and security, users will pick dancing pigs every time.”
If you’re interested in a bit more comment about the grey goo problem per se,
I attach the comment I added to Eric Rescorla’s article
below.
Posted on November 21, 2006 at 22:38
I learned the difference between haphazard and random a long time ago, on a
university statistics course. Since then, I’ve been wary of inventing passwords
by just “thinking random” or using an obfuscation algorithm on something
memorable (“replace Es by 3s, replace Ls by 7s”, or whatever). The concern is
that there is really no way to know how much
entropy
there is in such a token (in the information
theoretic sense), and it is probably less than you might think. People tend to
guess high when asked how much entropy there is in something; most are surprised
to hear that English text is down around one bit per letter, depending on the
context.
If you know how much information entropy there is in your password, you have a good idea of how much work it would take for an attacker to guess your password by brute force: N bits of entropy means they have to try 2^N possibilities.
One way to do this that I’ve used for several years is to take a fixed amount of
real randomness and express it in hexadecimal. For example, I might say this to
get a password with 32 bits (4 bytes) of entropy:
$ dd if=/dev/random bs=1 count=4 | od -t x1
...
0000000 14 37 a8 37
A password like 1437a837 is probably at the edge of memorability for most people, but I know that it has 32 bits worth of strength to it. So, what is one to do if there is a need for a stronger password, say one containing 64 bits of entropy? Certainly d4850aca371ce23c isn’t the answer for most of us.
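Generating such a value is the easy part; the same recipe with count=8 gives 64 bits:
$ dd if=/dev/random bs=1 count=8 | od -t x1
...
0000000 d4 85 0a ca 37 1c e2 3c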
When I was faced with a need to generate a higher entropy — but memorable — password recently, I remembered a technique used by some of the one-time password systems and described in RFC 2289. This uses a dictionary of 2048 (2^11) short English words to represent fragments of a 64-bit random number; six such words suffice to represent the whole 64-bit string with two bits left over for a checksum. In this scheme, our unmemorable d4850aca371ce23c becomes:
RUSE MET LORD CURT REEL ION
I couldn’t find any code that allowed me to go from the hexadecimal representation of a random bit string to something based on RFC 2289, so I wrote one myself. You can download SixWord.java if you’d like to see what I ended up with or need something like this yourself.
The code is dominated by an array holding the RFC 2289 dictionary of 2048 short
words, and another array holding the 27 test vectors given in the RFC. When
run, the program runs the test vectors then prompts for a hex string. You can
use spaces in the input if you’re pasting something you got out of od, for example. The result should be a six-word phrase you might have a chance of
remembering. But if you put 64 bits worth of randomness in, you know that
phrase will still have the same strength as a password as the hex gibberish did.
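A sketch of a session (assuming the class lives in the default package; the exact prompt and output wording is illustrative rather than verbatim):
$ javac SixWord.java
$ java SixWord
...                          # output from the 27 test vectors
d4 85 0a ca 37 1c e2 3c
RUSE MET LORD CURT REEL ION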
Posted on November 13, 2006 at 12:47
I generated my first PGP RSA keypair way back in 1993. Some friends and I played around with PGP for e-mail for a while, but at the time few people knew about encryption and even fewer cared: the “no-one would want to read my mail” attitude meant that convincing people they should get their heads round all of this was a pretty hard sell. The fact that the software of the day was about as user-friendly as a cornered wolverine didn’t help either.
The PGP software had moved forward a fair bit both technically and in terms of usability (up to “cornered rat”) by 2002, when I generated my current DSS keypair. By this time, it was pretty common to see things like security advisories signed using PGP, but only the geekiest of the geeks bothered with e-mail encryption.
Here we are in 2006: I still use this technology primarily to check signatures on things like e-mailed security advisories (I use Thunderbird and Enigmail), but I’ve finally found a need to use my own key, and it isn’t for e-mail.
Over the years, PGP (now standardised as OpenPGP) has become the main way of signing open source packages so that downloaders have a cryptographic level of assurance that the package they download was built by someone they trust. Of course, the majority of people still don’t check these signatures but systems like RPM often do so on their behalf behind the scenes.
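Checking by hand is easy enough when you do care; a sketch, with the package name as a placeholder:
$ rpm --checksig some-package.rpm
some-package.rpm: (sha1) dsa sha1 md5 gpg OK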
I’ve agreed to take on some limited package build responsibilities for such a project recently, so I’ve installed the latest versions of everything and updated my about page so that people can get copies of my public keys. Of course, there is no particular reason anyone should trust those keys; this is where the web of trust is supposed to come in, by allowing someone to build a path to my keys through a chain of people they trust (directly or indirectly). Unfortunately, my current public key is completely unadorned by useful third-party signatures. If you think you can help change that (i.e., you already know me, already have an OpenPGP keypair and would be willing to talk about signing my public key), please let me know.
Posted on October 6, 2006 at 11:17
Another short, cogent essay from Bruce Schneier, this time on why it makes sense to be Screening People with Clearances:
Why should we waste time at airport security, screening people with U.S.
government security clearances? …
Poole argued that people with government security clearances, people who are
entrusted with U.S. national security secrets, are trusted enough to be
allowed through airport security with only a cursory screening. …
To someone not steeped in security, it makes perfect sense. But it’s a
terrible idea, and understanding why teaches us some important security
lessons.
This is worth reading just to understand how a U.S. security clearance isn’t
quite the concrete thing you perhaps assumed it was, but I think the comments on
“subjective agenda” are important too. After all, if the people who make the
rules aren’t bound by them, what incentive do they have to make sensible rules?
I think it would be fair to guess, for example, that the average lawmaker hasn’t
spent a lot of time recently standing in an airport in their stockinged feet
with their permitted items in a transparent bag.
(Via Schneier on Security.)
Posted on August 30, 2006 at 10:53
Skinflints of the world rejoice; Ross Anderson’s textbook Security Engineering is now available for free download:
My book on Security Engineering is now available online for free download
here.
I have two main reasons. First, I want to reach the widest possible audience,
especially among poor students. Second, I am a pragmatic libertarian on free
culture and free software issues; …
I’d been discussing this with my publishers for a while. They have been
persuaded by the experience of authors like David MacKay, who found that putting
his excellent book on coding theory
online actually helped its sales. …
(Via Light Blue Touchpaper.)
Posted on August 16, 2006 at 15:24
Everybody loves Eric Raymond is a pretty weird web comic to start with, combining as it often does obscure open-source in-jokes with the premise that Richard Stallman, Eric Raymond and Linus Torvalds all live together in a flat somewhere.
Today’s episode jumps over into the even more obscure realm of crypto in-jokes, with the even weirder premise that Bruce Schneier is actually a cryptographic Chuck Norris.
Clicking through to the interactive Bruce Schneier Facts Database is well worth while. My favourite random fact so far is:
Bruce Schneier doesn’t even trust Trent. Trent has to trust Bruce Schneier.
Obscure enough for you?
Posted on July 19, 2006 at 17:07
Not installing security updates isn’t really a viable strategy these days. Even
waiting a few days to see whether other people have trouble with the update is
problematic when a zero day
exploit might be available.
It’s a bit like playing Russian Roulette in a room full of people who feel their
job is to point their guns at you until you pull the trigger.
Obviously this goes wrong once in a while. The recent Samba 3.0.23 update broke access from Windows and Mac machines on my Fedora Core 4 system, but some people with Fedora Core 5 are reporting that all logins to their systems are disabled.
After a bit of searching around and trying various things, I found that in my
case I could bring my system back to life by “upgrading” to an older version of
the four packages in question.
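rpm will do this if asked nicely; a sketch, with a wildcard standing in for the file names of the actual older packages:
$ rpm -Uvh --oldpackage samba-*.rpm   # --oldpackage permits "upgrading" to an older version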
There is some indication that version 3.0.23a will be out real soon now… but that doesn’t really make me feel completely happy. Nor does the realisation that my FC4 system will officially be “legacy” next week and I’ll need an upgrade to at least FC5 to stay within my “properly supported” comfort zone.
This kind of thing does seem to happen more often with Fedora, and anecdotally seems to be related to their strategy of pulling in new releases rather than back-porting security fixes. Moving to a more “enterprise” style system for the places where I need stability rather than the latest features is probably the right answer for me; once RHEL 5 is out I will probably take a close look at it and the equivalent CentOS release.
[Update 20060729: the 3.0.23a release doesn’t fix the problem, at least for me.]
Posted on November 29, 2005 at 15:10
Of course, the real reason I was in
Windermere was not to photograph ducks but to present some slides on the discovery problem in Shibboleth. You can download a copy of the presentation “WAYFs and Discovery” here (1.4MB PDF).
The abstract (accidentally omitted from the meeting material) was:
The standard model of Identity Provider discovery in Shibboleth deployments
is that of a federation-supplied, central discovery service called a WAYF.
Although an essential backstop, this approach has significant shortcomings.
We present some recent work in the area of multi-federation WAYFs, and review
alternative discovery technologies (both present and future) that allow
deployers to improve the user experience.
My co-author Rod Widdowson can be found here.
Posted on October 13, 2005 at 19:05
Speaking of identity, Dick Hardt of Sxip gave a cracking keynote at this year’s Open Source Conference.
If you’re at all interested in digital identity (and you’re not allergic to Larry Lessig’s presentation style), I highly recommend taking the fifteen minutes required to watch this. It is very light on technical details, but gets across the critical differences between “old style” digital identity and the so-called “Identity 2.0” systems that are starting to emerge. It even manages to be entertaining while it does so. And the pictures of a Vancouver “Cold Beer and Wine” store bring back memories…
Posted on July 15, 2005 at 10:45
This last week, the security people at my wife’s place of work have instituted a new policy of X-raying lunchtime sandwiches purchased outside the building. Yesterday, a security guy I’ve been saying “hi” to regularly for a year asked me to present a credential I’ve never had (and then let me talk him out of it, which didn’t improve my opinion much). And of course, our politicians have gone into emergency “let’s sneak some laws past quick, before people start thinking again” mode.
None of this was very surprising; by now everyone is used to the suffocating results of the knee-jerk “must be seen to do something” reaction after a major incident. Whether the security measures imposed make sense in any way is another question, and I’ve always put a lot of it down to woolly thinking.
A newly published interview with Bruce Schneier at Turnrow reminds me that many of these measures make more sense if you think about them as security decisions being taken by someone else, ostensibly for your benefit, but within the decision-maker’s agenda rather than your own. Cutting it down to the bone, if someone is making a cost/benefit analysis on your behalf, they are likely to make sure that they will benefit while you pay the cost. If you can throw the cost (in money, convenience, or loss of civil liberties) over the wall to someone else you can justify almost anything, no matter how small the benefit.
This is an excellent interview, distilling most of the important points of
Schneier’s book Beyond Fear into a couple of pages. Worth reading, and
worth passing around to people when they ask why something incomprehensible is
being foisted on them in the name of “security”.
[via Schneier on Security, of course]
Posted on April 21, 2005 at 15:35
Since I last wrote about the problem
with hashes, there has been a fair bit of activity and some progress:
- An internet draft is available describing the nature of the attacks on hash functions, and how different internet applications are affected. [2018-03-02: This became RFC 4270.]
- According to the OpenSSL changes file, additional hash algorithms are going to be supported in version 0.9.8. There is no indication of a date for that release, though.
- Don Eastlake’s internet draft on Additional XML Security Uniform Resource Identifiers (URIs) has progressed to its final status as RFC 4051.
I have updated my previous article to reflect this.
[2018-03-02: The Hoffman draft is now RFC 4270.]
Posted on February 21, 2005 at 10:42
People in the know are reporting that the 160-bit Secure Hash Algorithm has been broken by a group in China. When the group’s paper is published we’ll all be able to judge, but the initial reports indicate that SHA-1 has about 11 bits worth (2000 times) less collision resistance than its output length would suggest. This isn’t a huge surprise; there were some indications last year that this might happen eventually although I don’t think anyone expected things to move so quickly.
The break is a big deal for academic cryptographers, but it doesn’t seem to represent an immediate disaster in practice. Existing digital signatures and certificates are probably safe for now, in particular, as the kind of attacks you can mount against a system using collisions mainly apply to new signatures. The revised 69-bit strength of SHA-1 is still good today against all but fairly wealthy adversaries; Bruce Schneier has some estimates of how rich.
Obviously, there will now be a move towards beefier hash algorithms like SHA-256 or SHA-512 (PDF link). In the long run, because these come from the same family they may turn out to give only temporary respite. More immediately, they aren’t implemented by all current cryptographic libraries: for example, they have been in Java since 1.4 but the extremely popular OpenSSL package doesn’t ship with support for them yet.
Further up the stack, many standards already allow for selectable algorithms and for negotiated selection of them at run time. That’s harder to do with digital signature applications, because in this kind of context there isn’t normally a way to negotiate algorithms. This means, for example, that public suppliers of digital certificates probably won’t be able to shift from SHA-1 very soon, as many of the systems that use their certificates (browsers, for example) are based on cryptographic libraries that don’t support anything stronger than SHA-1 today.
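You can see what a particular OpenSSL build supports from the command line; a sketch:
$ openssl list-message-digest-commands   # sha256 is absent from the list in pre-0.9.8 builds
$ echo test | openssl dgst -sha256       # fails outright if SHA-256 isn't compiled in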
One place where there seems to be a complete absence of a “Plan B” at present is the joint IETF/W3C standard for digital signatures in XML (XMLDSIG). The published standard from 2002 (also RFC 3275) only discusses SHA-1. Moving away from this position requires several steps:
- Implementation of additional algorithms in the basic cryptographic libraries such as OpenSSL (already true for some libraries; OpenSSL has code checked in but not released). [20050421: OpenSSL change log indicates this is scheduled for the 0.9.8 release.]
- A specification of URIs naming additional algorithms for XMLDSIG (Don Eastlake has an Internet Draft on this dating from last year.) [20050421: this is now RFC 4051.]
- Access to those algorithms from XMLDSIG implementations, such as Apache XML Security.
- Either:
- Some sort of standard specifying that XMLDSIG implementations should implement additional algorithms, or
- A similar kind of must-implement agreement even higher in the stack, in my area of interest either at the level of SAML or Shibboleth.
- Last but not least: everyone installs new versions of all of the above.
None of this sounds like it is going to happen overnight. It is important that
it all happens some time soon, though, as the general feeling seems to be that
it is likely that further progress will be made against SHA-1; it is just the
timing that is unknown.
Posted on February 18, 2005 at 15:23
I got an interesting phish
in today’s e-mail. Here’s how it looked in
Thunderbird:
Dera Baalcrys Membre,
Tsih eamil was setn by the Braclays svreer
to verify yoru eiaml addrsse.
…and so on. My initial fears that the bad guys have finally lost it and
just given up were allayed when I looked at the actual source of the message:
Content-Type: text/html; charset=iso-8859-1
Content-Transfer-Encoding: 7bit
…
De‮ra‬ Ba‮alcr‬ys
Memb‮re‬,
What is going on here? The message body is an attempt at
Unicode.
Code point 8238 is “right-to-left override”; code point 8236 is “pop directional formatting”. The sections contained within the “&#8238;…&#8236;” groups are therefore supposed to be printed backwards.
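The decimal entity values translate directly to those Unicode code points; printf can do the conversion:
$ printf 'U+%04X U+%04X\n' 8238 8236
U+202E U+202C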
How delightfully creative. Except that the message is marked as being encoded
in ISO-8859-1, which doesn’t contain those code points. All the cleverness
(probably aimed at some mail program that accepts the invalid code points) was
ignored, leaving gibberish. The good news is that even if they fix that, the presence of “&#8238;” in e-mail is going to be a pretty good indicator of something phishy going on.
Posted on December 29, 2004 at 12:57
Netcraft have released an anti-phishing toolbar. This sounded like a great idea right up to the point where I realised I couldn’t use it because I don’t use Internet Explorer.
That’s right, this is at present something for those people who are (a) security conscious enough to read Netcraft’s newsletters but (b) not security conscious enough to have heeded the warnings to stop using Internet Explorer.
Apparently, a Firefox version of the toolbar will be made available. Until then, this idea looks just a little cynical and pointless. They are really pleased with their TV coverage, though.
Posted on December 14, 2004 at 14:28
Bruce Schneier is a well respected professional paranoid (“internationally renowned security technologist” is the way his web site puts it). He recently updated his list of tips for safe personal computing after a gap of a few years. Both old and new lists are full of sensible things you can do to make yourself more secure: if you do these things, you will be more safe. If you don’t do these things, you should at least have a rationale ready.
This year’s list is about 50% longer than the May 2001 version; I guess that doesn’t surprise me, as the environment has taken several steps in the direction of “more evil” since then. For example, phishing for bank account information was relatively unknown “way back then”. In the last year or so, this particular attack has grown by a factor of twelve (or more, depending on who you listen to) to the point where there are so many of these things in my inbox that it is sometimes hard to believe that anyone is taken in any more.
Having said which, the really interesting thing about the new list is that it is mainly the same as the old list. There are a couple of new things (buy a cheap NAT firewall box for home, don’t ever use Internet Explorer) but most of the changes seem to be rewording, clarifications and more detail.
I would personally be very interested to see Bruce’s own take on what he thinks has changed over the period. I’d also like to see him renew this list regularly. The only thing I worry about is that if the environment continues to get more hostile and nothing else improves, we are likely to need a list with just one entry: Trust No One.
Posted on August 31, 2003 at 17:36
A recent article about the SoBig.F virus in the Economist magazine mentioned the idea of a so-called “Warhol Worm”. I’d never heard this term before, so I went looking for the original use. Nicholas Weaver of UCB turns out to have coined this term to denote a worm that could infect every potential host in 15 minutes. This is of course a reference to Andy Warhol’s quip that “In the future, everybody will have 15 minutes of fame”.
If you read Weaver’s article, though, you’ll see that the important thing isn’t how long a worm is famous for. Instead, he postulates (among other mechanisms) an author who quietly scans the internet for a particular vulnerability for some time, perhaps weeks or months, in order to build a list of susceptible machines. When the worm is released, these machines are used as the initial attack set. Combining a “hitlist” of 10,000 to 50,000 machines with other techniques, the result would be very fast infection of all potential machines, certainly far faster than security software vendors could possibly respond.
SoBig.F wasn’t a Warhol Worm, and I don’t know that we’ve seen one yet. The
possibility that someone might use this “hitlist scanning” technique is just
another reason to keep up to date with all those security patches, even for
vulnerabilities for which no exploit is yet known.
Posted on June 1, 2003 at 12:43
Security problems are usually built right into products and called “features”.
Sometimes, though, the vendor provides them free of charge as an after-market
upgrade. This particularly egregious example comes from Palm Europe.