Diversity of signature schemes


Moody, Dustin (Fed)

unread,
Jan 21, 2021, 11:13:03 AM1/21/21
to pqc-forum
All,

NIST notes that in the third round there are 3 signature finalists, and 3 signature alternates.  Recent cryptanalysis has impacted the analysis of both Rainbow and GeMSS.  We are concerned about a lack of diversity of signature schemes for security and application reasons.  As a starting point for discussion, we would like to point out a couple of paragraphs from NISTIR 8309, our 2nd Round report:

 

"NIST sees SPHINCS+ as an extremely conservative choice for standardization. If NIST’s confidence in better performing signature algorithms is shaken by new analysis , SPHINCS+ could provide an immediately available algorithm for standardization at the end of the third round. "

And

 

"NIST is pleased with the progress of the PQC standardization effort but recognizes that current and future research may lead to promising schemes which were not part of the NIST PQC Standardization Project. NIST may adopt a mechanism to accept such proposals at a later date. In particular, NIST would be interested in a general-purpose digital signature scheme which is not based on structured lattices." 

We would like to ask for feedback from the community about this issue.  Thank you.


Dustin Moody

NIST PQC team


Edoardo Persichetti

unread,
Jan 21, 2021, 11:54:32 AM1/21/21
to Moody, Dustin (Fed), pqc-forum
Hi Dustin (and all)

If you are talking about re-opening the call, or in any case starting a second phase dedicated just to signatures, I think it is a good idea. I believe the concerns you voiced below are legitimate.

If I am not mistaken, this possibility was already mentioned in some of the earlier talks by NIST team members. Certainly, standardizing an alternative (say, “backup”) scheme, as you say, does not have the same time priority as standardizing the main scheme (likely one of the lattice schemes, at this point), so doing this at a later stage makes sense to me.

Best,
Edoardo


gard...@gmail.com

unread,
Jan 21, 2021, 2:55:40 PM1/21/21
to pqc-forum, epersi...@fau.edu, pqc-forum, dustin...@nist.gov
Hi All,

For what it's worth, and speaking for myself and not my organization, I'd like to see more than one choice available in order to have some degree of cryptographic agility.

I don't think opening the floor to new schemes is a bad idea, but I also think that so far SPHINCS+ looks like a decent tool and it would be a shame not to include it in the toolbox.  Obviously it would have to be the subset which doesn't include Haraka, as mentioned in NIST.IR.8309.pdf.

Discussions with coworkers have brought up the idea that SPHINCS+ doesn't add to diversity given the existence of LMS/XMSS.  I don't fully agree with that statement: I believe there's a significant chance of misuse/implementation error when LMS/XMSS are used in a distributed system, for example a signing-as-a-service use case.  The need to ensure a leaf isn't used twice will either constrain the ability to scale or impose complicated tree hierarchy/partitioning schemes and force acceptance of the loss of portions of the tree whenever an instance fails.
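
To make the state-management concern concrete, here is a rough Python sketch (purely illustrative; the class, the numbers, and the partitioning policy are hypothetical, not from any standard) of one way a distributed signing service might hand out disjoint leaf ranges of a stateful LMS/XMSS tree:

    # Hypothetical leaf-range reservation for a stateful hash-based signature
    # tree shared by several signer instances.
    class LeafRange:
        def __init__(self, start: int, end: int):
            self.next, self.end = start, end

        def reserve(self) -> int:
            # Each instance owns a disjoint slice of leaf indices; if the
            # instance crashes, its remaining slice is typically abandoned,
            # which is the "loss of portions of the tree" mentioned above.
            if self.next >= self.end:
                raise RuntimeError("leaf range exhausted; request a new slice")
            idx, self.next = self.next, self.next + 1
            return idx

    # e.g. a tree with 2^20 leaves split across 4 instances of the service
    slices = [LeafRange(i * 2**18, (i + 1) * 2**18) for i in range(4)]

SPHINCS+ avoids this bookkeeping entirely because it is stateless.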

And then, as to viability in TLS, I'm not sure this is the right forum to be an arbiter of market viability.  Removing the tool blocks us from having a backup algorithm and from discovering innovative solutions to problems using that tool.

Best Regards,
Michael

Watson Ladd

unread,
Jan 21, 2021, 11:01:53 PM1/21/21
to Moody, Dustin (Fed), pqc-forum
I would welcome SPHINCS+ standardization, even if not all applications
can use it, since some can, especially those concerned with high
security and infrequent signing. I would hope that NIST could
eventually standardize SQI-Sign if future research continues to make it
a bit faster and justifies its security claims further: it's a very
appealing candidate for some settings.




--
Astra mortemque praestare gradatim

D. J. Bernstein

unread,
Jan 23, 2021, 9:37:56 AM1/23/21
to pqc-forum
Is there a reason that NIST didn't react analogously to, e.g.,
https://eprint.iacr.org/2020/707?

This is one of many recent papers improving lattice attacks. (See
Section 9 of https://ntruprime.cr.yp.to/nist/ntruprime-20201007.pdf for
a survey of a dozen more.) This particular paper focuses on enumeration
cost, reducing cost C to cost roughly C^0.68, obtaining what appear to
be the fastest attacks known against various NISTPQC candidates in
various reasonable post-quantum security metrics.

Did NIST highlight this attack? Suggest that the speedup is a matter of
concern? Suggest taking action accordingly? Where are the principles
that anyone can read to understand which attacks NIST is choosing to
highlight? The call for proposals certainly doesn't make this clear.

For comparison, NIST's intermediate and final AES reports tried to
systematically list and quantify the whole history of attacks against
AES candidates, along with all sorts of other comparative information.
NIST's reports for NISTPQC have much less information, even though
NISTPQC is much more complicated and much more risky.

I also don't understand why NIST's reaction to (some) attacks is being
expressed solely as concern about a "lack of diversity". Shouldn't the
top concern here be about the terrifying possibility that what the
community ends up deploying will be breakable? If this occurs then it
will be a security disaster. Saying "surely we'll recognize the disaster
and switch to this backup system" doesn't magically erase the damage.

Shouldn't attacks make us ask, for example, whether NIST's procedures
have been overloading cryptanalysts, and increasing security risks by
applying performance pressures beyond actual user requirements, and
generally failing to recognize how risky NISTPQC is?

The call for proposals said that security was job #1. It didn't claim
that security failures are just fine as long as there's a backup plan.
NIST seems to have started pushing the whole NISTPQC "diversity" idea
only later, after >90% of the NISTPQC submissions had suffered public
losses of security. It's horrifying to imagine that NIST is thinking
something like "It's not a problem if we standardize a lattice system
that turns out to be weak, since we'll standardize a backup too and just
tell people to switch to that"; the policy should instead be "Security
failures are never acceptable, whether or not there is a backup plan".

---Dan

P.S. The call for proposals does mention diversity, but in a completely
different context: "If there are important applications that require
radically different performance tradeoffs, NIST may need to standardize
more than one algorithm to meet these diverse needs."

daniel.apon

unread,
Jan 24, 2021, 3:29:02 PM1/24/21
to pqc-forum, D. J. Bernstein, pqc-...@list.nist.gov
Hi Dan,

Yep-- we're aware of 2020/707. (It's probably an okay assumption to make that if something is posted to ePrint or published in a major conference, we read it carefully. You might even want to assume that those in the public research community that are engaged with post-quantum cryptography and standardization also read it. I wouldn't object to such an assertion!)

Regarding your comments about a "backup plan," this seems to me like a significantly misleading characterization of our position on the topic.
If we standardize a structured lattice signature scheme, we have no intention of picking one that we believe to be weak in any regard. Surely analysis on such schemes should be done seriously and thoroughly throughout the third round, and it seems possible to complete such analyses within that timeframe.

However, maybe it's possible to walk and chew bubble gum at the same time. (For non-native English speakers, this is an idiomatic phrase meaning, "to multi-task," or "to do two different things at once.")

So I guess I would turn a question back to you (in the spirit of fostering more discussion on the topic of this thread): Do you believe there is value in having standardized more than one general-purpose signature scheme, from disjoint computational hardness assumptions? Should we adopt a mechanism for doing so? Particularly, for the case of non-lattice PQ-signature schemes? (If no, outright, then sure-- that would be a valid type of feedback to us. If so, but you think it is too difficult to consider two disjoint types of cryptosystems simultaneously, then sure-- that would be a valid type of feedback to us. But maybe you would think that the community is capable of considering more than one topic at the same time? Looking for your input here.)

Also, I'd point out Edoardo's comment: " If you are talking about re-opening the call, or anyway starting a second phase, dedicated just to signatures, I think it is a good idea."
It is certainly a possibility to consider something like this in the context of signatures, but it definitely depends on whether people, broadly, think it would be useful or not. After all, I don't think anyone expected NIST to just disappear.

Mike Hamburg

unread,
Jan 25, 2021, 2:20:40 PM1/25/21
to Moody, Dustin (Fed), pqc-forum
Hello all,

Here is my feedback on these two paragraphs.

I think that NIST should have made SPHINCS+ a finalist, and promoting it to a finalist now would be appropriate.  Sure, it doesn’t fit in all applications, and it also doesn’t need as much analysis, so it kind of made sense to de-prioritize it in round 3.  But it will be useful as a standard, and promoting it to a finalist will signal that NIST intends to standardize it relatively soon.  NIST has already suggested this in a less official way in the second round report, and now they should consider their conditions to have been met for promoting SPHINCS+.

For near term deployments, PQ crypto is already a fallback.  We don’t know when or if large-scale QCs will be built, and deployments with a long replacement cycle (embedded, industrial systems, etc) are at risk if they don’t support at least one PQ encryption algorithm and at least one PQ signature algorithm, if applicable, to replace their classical algorithms.  Some systems can accept SPHINCS+’s significant performance hit, perhaps not for immediate deployment or as the best case, but at least as a hedge against QCs being built before the system can be completely replaced.  Also, such systems tend to be conservative and to prefer standardized crypto, and they may have long design cycles as well.  So promoting SPHINCS+ to finalist with the intent to standardize it unless there are significant issues would be a useful signal to industry that they should start designing and testing SPHINCS+ in their systems as fallback against QC development.

SPHINCS+ shouldn’t primarily be a fallback against Falcon or Dilithium being broken.  NIST should extend Round 3 until they, and the community, are confident that their choice is very unlikely to be broken, though I’m sure some developers will still prefer SPHINCS+ as an extra-conservative option.  If it looks like getting confidence on Falcon vs Dilithium vs nothing will take a long time, then NIST could extend Round 3 but standardize SPHINCS+ so at least we will have one general-purpose PQ signature.


As for the second paragraph, it makes complete sense that NIST should eventually consider other PQ signature schemes for standardization.  I’m a little concerned if they want this for security reasons, especially because SPHINCS+ can partially fill that role.  But sure, diversity is nice to hedge against a small risk of a catastrophic break.  Application reasons definitely make sense because lattice signatures are somewhat large.  Anyway, researchers should be aware that alternative sigs are an eventual goal of NIST.

But I don’t think NIST should focus attention on standardizing new PQC algorithms until the current process is over, unless there is a breakthrough algorithm that has both great performance / usability properties, and a security proof based on mature assumptions.  Systems like SQI-Sign won’t be ready for years anyway, so we can postpone standardization work until after the current PQC project is over.

Regards,
— Mike



Nicolas Sendrier

unread,
Jan 25, 2021, 2:43:09 PM1/25/21
to pqc-forum

Dear Dustin and NIST,

To us, it appears clear that, at least for code-based signatures (Hamming metric, rank metric, and possibly others), the scientific situation has evolved significantly and positively since the 2017 call. We believe that numerous actors are now in a position to submit new secure and efficient proposals for digital signatures, including general-purpose ones.

Besides variations on the Stern scheme (i.e., non-interactive zero-knowledge schemes a la Fiat-Shamir), which in our opinion are interesting options, there have recently been some further developments. For instance, to cite just our own recent work, there is a hash-and-sign signature (Wave) and a rank-metric variation of the Schnorr-Lyubashevsky approach (Durandal). We cannot speak for others, but we certainly plan to submit some proposals if a call is made.

Best regards,

Thomas Debris-Alazard
Philippe Gaborit
Nicolas Sendrier
Jean-Pierre Tillich

Dan Brown

unread,
Jan 29, 2021, 1:47:55 PM1/29/21
to Moody, Dustin (Fed), pqc-forum

Hi Dustin and PQC Forum,

 

My 2c,

 

Generally speaking, a Round 4 for both signatures and KEMs would be good, because it could increase diversity and thereby increase security in strongest-link multiple-scheme cryptography (and perhaps agility in single-scheme crypto, if downgrade attacks can be thwarted out-of-band) – all this assuming that readying Round 4 does not delay finishing Round 3.


Code-based and isogeny-based signatures would certainly help diversity.  A hash-based signature is quite conservative, as you say, so well worth standardizing as part of Round 3.

 

Best regards,

 

Dan


伊藤忠彦

unread,
Feb 3, 2021, 2:24:24 PM2/3/21
to pqc-forum, dustin...@nist.gov

Hi Dustin and PQC forum

 

I would like to comment on the separation of hashing from the signature algorithm.

I am sorry if this concern has already been discussed, or if my statement or the paper below is not clear enough.

<https://eprint.iacr.org/2020/990>

 

As far as I understand, the current candidates for digital signatures are not required to support separation of hashing.

However, I really think a mechanism similar to hash separation is necessary for interoperability with traditional crypto systems (especially with HSMs).

 

We often separate the public-key cryptography function into two entities, such as a data controller and a key manager, so that the data controller can calculate the hash and the key manager can focus on key management and the asymmetric operation with the HSM.

This feature is also important for signing large content (e.g., 10 GB of data) with an HSM.

 

With the above background, we hope at least one of the finalist signature algorithms supports separation of hashing (or a similar mechanism to handle large files).

 

Regards Tadahiko Ito



伊藤忠彦

unread,
Feb 4, 2021, 6:46:15 AM2/4/21
to pqc-forum, 伊藤忠彦, dustin...@nist.gov

I think I should clarify my statement a bit more.

 

I believe my problem can be solved with one of the following approaches.

1) Split the current PQC signature operation into two parts (the hashing and the private-key operation) and implement them separately

2) Add an external hashing protocol on top of the current PQC signature, and use that composite protocol as the digital signature

 

At the very beginning I thought 2) would be the more tangible choice, but some people said 1) is better.

I am not sure where to discuss choice 2), but this seems to be a suitable place to discuss choice 1).

 

If anyone has an opinion about choice 1), I would love to hear it.

 

Regards Tadahiko Ito



David A. Cooper

unread,
Feb 10, 2021, 3:22:02 PM2/10/21
to 伊藤忠彦, pqc-forum
Hello Tadahiko,

I believe that it is reasonable for the signature standard to support the separation of the hashing from the use of the private key in order to support scenarios in which it is not feasible to send the entire message to the cryptographic module that holds the private key. This is something that can be supported using any digital signature algorithm, using one of the two methods below, so I don't believe it will be a consideration in deciding which schemes to standardize, but it is something that can be considered when writing the final standard specification.

In my personal opinion, option 1 should be used unless there is some reason that it shouldn't be used. In the case of currently standardized algorithms, the Cryptographic Algorithm Validation Program (CAVP) allows for the validation of implementations of RSA and ECDSA that take the hash of a message as input rather than the message itself (see "Component Testing" in https://csrc.nist.gov/Projects/cryptographic-algorithm-validation-program). In the case of EdDSA, the draft version of FIPS 186-5 defines a prehash version.
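
As a rough illustration of option 1, here is a minimal Python sketch of the split; the function names, the stub standing in for the eventual PQC sign() operation, and the digest sizes are hypothetical, not taken from any candidate's API:

    import hashlib

    # Illustrative split between a data controller (which hashes the message)
    # and a key manager / HSM (which holds the private key). scheme_sign() is
    # only a stand-in for whichever PQC signature algorithm is used.
    def scheme_sign(private_key: bytes, data: bytes) -> bytes:
        # Stub: in practice this is the scheme's sign() running inside the HSM.
        return hashlib.sha256(private_key + data).digest()

    def controller_prehash(message: bytes) -> bytes:
        # The data controller computes the digest locally; only 32 bytes travel
        # to the cryptographic module, not the (possibly huge) message.
        return hashlib.sha256(message).digest()

    def hsm_sign_prehashed(private_key: bytes, digest: bytes) -> bytes:
        # Option 1: the key manager signs the externally computed digest, i.e.
        # the digest itself is the "message" seen by the signature scheme (a
        # prehash mode, analogous to the prehash version of EdDSA).
        return scheme_sign(private_key, digest)

    # Usage: a 10 GB document never has to cross the HSM boundary.
    sig = hsm_sign_prehashed(b"sk", controller_prehash(b"very large document ..."))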

In the case of GeMSS and Rainbow, it seems straightforward to just allow the hash of the message to be provided as the input. While I am not a cryptographer, it also seems to me that in the case of Dilithium it would be acceptable to allow the input to the cryptographic module to be μ = H(tr || M) rather than M, since tr is a publicly known value. On the other hand, Falcon and SPHINCS+ both use randomized hashing, and allowing an external entity to choose the randomizer may compromise or weaken the security of these schemes. So, the only option would be to add prehash versions, where the prehashing is performed using a hash function with collision resistance comparable to the security level of the scheme. Similarly, Picnic would not seem to allow for performing the hashing externally, but the specification already mentions the possibility of a prehash version, where the hash of the message is signed rather than the message itself.

While I agree that the signature standard should support the separation of hashing from the use of the private key, I am surprised by the claim that there are use cases that involve signing very large (e.g., 10 GB) messages. The paper that you referenced mentions examples such as document and code signing. However, I was under the impression that both of these use cases used the Cryptographic Message Syntax (CMS) or something comparable. In most cases a CMS signature is created by first creating a set of signed attributes, one of which is the digest of the document to be signed, and then digitally signing the set of signed attributes. So, the message that is actually signed (the set of signed attributes) is relatively short (perhaps 200 to 300 bytes), even if the document to be signed is very large. While there is a version of CMS in which there are no signed attributes and the digital signature is computed over the document itself, I did not think that version was used in scenarios such as document or code signing.

Thanks,

David

Stern, Morgan B

unread,
Feb 11, 2021, 5:00:41 PM2/11/21
to pqc-...@list.nist.gov, Moody, Dustin (Fed)

SPHINCS+ is based on a very mature primitive and is a sound design. That said, some choices should be tweaked before standardization occurs. In particular, in SPHINCS+-SHA-256 there is an issue with the definition of the H_msg function so that the security of the signature presently relies on the multi-target second pre-image resistance of the SHA-256 hash function.

 

As defined in section 7.2.2 of the round 3 specification,

 

H_msg(R, PK.seed, PK.root, M) = MGF1-SHA-256(SHA-256(R||PK.seed||PK.root||M), m).

 

where M is the message, m is a system parameter, R is public (pseudo)randomness, and both PK.seed and PK.root are publicly known.

 

This means that H_msg is MGF1-SHA-256(K, m), where K is a 256-bit string. Each message signed by Alice provides a distinct K and an (R,M) pair that yields that K. If any other nonce/message pair (R',M') yielded the same K, then the FORS and hypertree signature for (R',M') would be the same as those already published for (R,M).

 

For example, after signing two messages M1 and M2, a signer will have produced a K1 and K2. An attacker will now produce a valid signature if they can find an (R,M) pair where SHA-256(R||PK.seed||PK.root||M) is either K1 or K2.
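
A small Python sketch (dummy keys and sizes; not a real SPHINCS+ implementation) may make the target structure concrete:

    import hashlib, os

    PK_SEED, PK_ROOT = os.urandom(16), os.urandom(16)  # public values, dummy sizes

    def compress_to_K(R: bytes, M: bytes) -> bytes:
        # Round-3 SHA-256 instantiation: everything is first compressed to a
        # single 256-bit string K; H_msg is then MGF1-SHA-256(K, m).
        return hashlib.sha256(R + PK_SEED + PK_ROOT + M).digest()

    # Targets harvested from the published signatures: one K per signed (R, M).
    signed = [(os.urandom(16), b"message %d" % i) for i in range(4)]
    targets = {compress_to_K(R, M) for (R, M) in signed}

    def is_forgery(R_prime: bytes, M_prime: bytes) -> bool:
        # The attacker wins by finding ANY fresh (R', M') whose K lands in the
        # target set: the FORS/hypertree part of the matching published
        # signature then also verifies for M', with no private key needed.
        return compress_to_K(R_prime, M_prime) in targets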

 

Morgan Stern

NSA Cybersecurity

 

 


Watson Ladd

unread,
Feb 12, 2021, 2:06:44 AM2/12/21
to Stern, Morgan B, pqc-...@list.nist.gov, Moody, Dustin (Fed)



Is there a particular tweak you would suggest to avoid this attack? Off the top of my head, this sort of multi-target issue seems almost unavoidable.

 


Andreas Hülsing

unread,
Feb 12, 2021, 3:02:33 AM2/12/21
to pqc-...@list.nist.gov

Dear Morgan, dear all,


I am not sure what this message is exactly about.


I totally agree with you. Indirectly, SPHINCS+ needs the multi-target second-preimage resistance of SHA2-256, as it is implied by the properties we require: SPHINCS+ requires that SHA2-256 is an interleaved target subset resilient hash function family, which in turn implies multi-target second-preimage resistance. A definition of this property and an analysis of the complexity of generic classical and quantum attacks can be found in Section 9 of the specification.


To phrase it differently, the "attack approach" that you describe would violate interleaved target subset resilience, and is therefore not possible if SHA2 has the required security properties.


I hope this answers your unstated questions?


Best wishes,


Andreas

Scott Fluhrer (sfluhrer)

unread,
Feb 12, 2021, 4:41:29 PM2/12/21
to Stern, Morgan B, Moody, Dustin (Fed), pqc-forum

Thank you for your kind words about SPHINCS+.

 

As for the attack, I would note that it only applies to NIST Level 5 (256 bit hash) parameter sets; for Level 1 and Level 3 parameter sets, there are other attacks that are easier.

 

The SPHINCS+ design assumes that no more than 2^64 signatures are generated by one private key.  And, because we include the public seed and root in the SHA256, that means that an attacker cannot attack two different public keys at once.

 

So, assume that an attacker has 2^64 signatures signed with a single private key. He then generates various (R, M) values, computes the corresponding SHA256(R||PK.seed||PK.root||M) value, and checks whether he has a valid signature that used that value. That hash is 256 bits (obviously), and if we assume that SHA-256 acts like a random oracle, each attempt has probability 2^{-192} of hitting one of the 2^64 target values; hence this will require an expected 2^192 attempts before succeeding.
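
As a quick arithmetic check of those figures (random-oracle heuristic, counting hash calls only):

    from math import log2

    targets, hash_space = 2**64, 2**256
    per_attempt = targets / hash_space        # chance one (R, M) guess hits a target
    print(log2(per_attempt))                  # -192.0
    print(log2(1 / per_attempt))              # 192.0, i.e. ~2^192 expected attempts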

 

For Level 3, there are a number of other ways to forge by performing an expected 2^192 hashes (which are a lot easier because they don’t involve wading through a large pile of targets), and so this approach doesn’t reduce the security (and it’s even more so for Level 1).

 

On the other hand, your attack does appear to be valid against SHA-256 Level 5 parameter sets.  And, it is a place where we don’t meet our intention of relying only on (second) preimage resistance (as seen by my assumption that SHA-256 acts like a random oracle).

伊藤忠彦

unread,
Feb 13, 2021, 9:56:12 PM2/13/21
to David A. Cooper, pqc-forum

Thank you David


Thank you very much for the comments and information about separation of hashing.

It would be very helpful for us.

I discussed this with our PDF implementer and confirmed that most modern PDF implementations use CMS, as you noted. Combining his comments and yours, we are now thinking that the current best practice for us should be to require (or prioritize) something like CMS for large-file signing services that will support PQC (which should also be better from a cryptographic and security agility point of view).


Regards Tadahiko Ito




Kelsey, John M. (Fed)

unread,
Feb 16, 2021, 1:57:07 PM2/16/21
to Scott Fluhrer (sfluhrer), Stern, Morgan B, Moody, Dustin (Fed), pqc-forum

I believe there’s also a long-message second preimage attack that applies here.  (Ray Perlner pointed this out in a discussion.) 

 

  1. I request a signature on a single very long message (just under 2^{64} bits long), with about 2^{55} intermediate hash values.
  2. I choose an arbitrary R, and construct an expandable message after R || PK.seed || PK.root
  3. I do a 2^{201} search to find a linking message. 
  4. I can now produce a forgery—a new message with a valid signature. 

 

This is a known (published in 2005!) second-preimage attack on SHA256 (or any other MD hash), but it works after a single very long message query. 

 

If we are allowed to query many very long messages,  we can do still better.  With 2^{64} maximum-length messages, we have 2^{119} target intermediate hash values, so we can do the attack for around 2^{137} work, again producing a forgery.  In this case, we’d also get an attack that would lower the classical security even of the 192-bit (n=24) version.
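
For reference, the work factors quoted above can be reproduced with a few lines of Python (counting hash operations only, as in the text):

    from math import log2

    block_bits  = 512                    # SHA-256 message block size in bits
    msg_blocks  = 2**64 // block_bits    # one message just under 2^64 bits -> 2^55 blocks
    single_work = 2**256 / msg_blocks    # search until hitting any of those chaining values
    print(log2(msg_blocks), log2(single_work))    # 55.0 201.0

    many_targets = 2**64 * msg_blocks    # 2^64 maximum-length signed messages
    multi_work   = 2**256 / many_targets
    print(log2(many_targets), log2(multi_work))   # 119.0 137.0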


Disclaimers:

 

  1. This seems like a really implausible attack scenario—we’re not talking about anything that’s actually going to happen!
  2. I haven’t costed out these attacks in detail here—I’m just counting hash operations.   

 

It seems like just doubling the size of this internal seed value, SHA256(R || PK.seed || PK.root || M) for the 256-bit classical security level would be enough to block the attack. 

 

--John

 

Andreas Hülsing

unread,
Mar 17, 2021, 12:03:14 PM3/17/21
to Kelsey, John M. (Fed), Scott Fluhrer (sfluhrer), Stern, Morgan B, Moody, Dustin (Fed), con...@sphincs.org, pqc-forum

Dear all,

The SPHINCS+ team discussed the points raised by Morgan Stern and John Kelsey on list in earlier mails. We agree that the construction of H_msg in case of the SHA2 instantiations can be done better.

We will make the following two changes to the construction of H_msg for SHA2:

1.) For the L5 parameters we change the hash function used to instantiate H_msg and PRF_msg to SHA2-512. The change for H_msg resolves the issue raised by Morgan Stern. The change for PRF_msg is just for consistency.

2.) We change the construction of H_msg to include the string "R || PK.seed" in the MGF1 call, i.e., H_msg := MGF1-SHA-X(R || PK.seed || SHA-X(R || PK.seed || PK.root || M ), m) (where X is 256 for L1 & L3, and 512 for L5) . This prevents the multi-target long-message second preimage attacks pointed out by John that would still reduce the security of the L3 parameters by a small amount.
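
To illustrate the change, here is a small Python sketch of the old and new SHA-2 constructions of H_msg, assuming the standard MGF1 definition; it is an illustration only, not the reference code:

    import hashlib

    def mgf1(hash_name: str, seed: bytes, out_len: int) -> bytes:
        # Standard MGF1: concatenate H(seed || counter) blocks and truncate.
        digest_size = hashlib.new(hash_name).digest_size
        out = b""
        for counter in range(-(-out_len // digest_size)):   # ceiling division
            out += hashlib.new(hash_name, seed + counter.to_bytes(4, "big")).digest()
        return out[:out_len]

    # Round-3 construction (SHA-256 at every security level):
    def h_msg_old(R, pk_seed, pk_root, M, m):
        K = hashlib.sha256(R + pk_seed + pk_root + M).digest()
        return mgf1("sha256", K, m)

    # Updated construction described above: SHA-X is sha256 for L1/L3 and
    # sha512 for L5, and R || PK.seed is additionally fed into the MGF1 call.
    def h_msg_new(R, pk_seed, pk_root, M, m, hash_name="sha256"):
        K = hashlib.new(hash_name, R + pk_seed + pk_root + M).digest()
        return mgf1(hash_name, R + pk_seed + K, m)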

While we do deploy change number two, we do so because it is essentially free. However, we want to note that we consider the long-message attacks entirely impractical. Let me give a small example:

The long message attack (when using the old SHA2-256 construction for H_msg) has a success probability of roughly [hash queries] * [signature queries] * [message length] / [size of the chaining value space], where the message length is expressed in message blocks. Phrased differently, we lose log_2([signature queries] * [message length]) bits of security. Assuming the maximum values, we have 2^64 signature queries with a maximum length of 2^55 blocks, resulting in a loss of 119 bits. This would clearly violate the L3 security level, where the chaining value space has size 2^256, so we would end up far below 2^192 classical attack complexity.

However, signature queries are answered by the honest user. Assume our user was signing messages on a single machine and that one SHA2-256 compression function call could be done in 2^-22 seconds (~200 ns). The total number of compression function calls required to compute the above hashes is 2^64 * 2^55 = 2^119. At that speed this takes 2^97 seconds. Considering that a year has about 2^25 seconds, this means a total of 2^72 years (and an honest user will not speed up / massively parallelize the computation for our adversary).

Sure, a user might use a key on multiple machines. However, even with a million machines that use the same key (which leads to entirely different security issues) we are talking about 2^52 years. Turning this around, if the key is used for a standard lifetime of about 2^2 years, on about a million machines continuously generating signatures, the message blocks of all signed messages will sum up to about 2^69 (you may distribute this between message length and number of messages as you like). And this would end up with an attack complexity of about 2^187 (i.e., just below 2^192). Given all the simplifying assumptions that I made (like the machines continuously signing and not counting signing time), I think we would be safe to consider this unrealistic as an L3 attack.

The numbers above can also be turned around to argue that it seems unlikely that signers will ever use a key pair for 2^64 signatures: Ignoring the signing time of SPHINCS+ and just counting the message compression, it requires one million machines that use the same key pair to continuously sign messages for four years to sign a total of 2^64 messages of 64 bytes. Considering that SPHINCS+ signatures rather take a bit more than a millisecond, this adds a factor 1000, either to the number of machines (one billion) or to the number of years (4000). In even the most demanding use-cases we have seen so far, signers sign substantially below these limits.
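
For convenience, the throughput arithmetic above can be reproduced as follows (orders of magnitude only, under the stated simplifying assumptions):

    from math import log2

    compressions      = 2**64 * 2**55          # 2^64 signatures x 2^55 blocks each
    seconds_per_call  = 2**-22                 # ~200 ns per SHA2-256 compression
    seconds_per_year  = 2**25                  # roughly one year
    years_one_machine = compressions * seconds_per_call / seconds_per_year
    print(log2(years_one_machine))             # 72.0 -> 2^72 years on one machine

    machines = 2**20                           # ~a million machines sharing one key
    print(log2(years_one_machine / machines))  # 52.0 -> still 2^52 years

    # Conversely: 2^2 years on 2^20 machines at 2^22 compressions/second gives
    blocks_signed = 2**2 * seconds_per_year * machines * 2**22
    print(log2(blocks_signed), log2(2**256 / blocks_signed))   # 69.0 187.0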

Best wishes,

Andreas for the SPHINCS+ team



Derek Atkins

unread,
Mar 22, 2021, 11:04:47 AM3/22/21
to john....@nist.gov, ie...@huelsing.net, mbs...@nsa.gov, sflu...@cisco.com, dustin...@nist.gov, con...@sphincs.org, pqc-...@list.nist.gov
Hi,

On Wed, 2021-03-17 at 17:02 +0100, Andreas Hülsing wrote:
[snip]

Sure, a user might use a key on multiple machines. However, even with a million machines that use the same key (which leads to entirely different security issues) we are talking about 2^52 years. Turning this around, if the key is used for a standard lifetime of about 2^2 years, on about a million machines continuously generating signatures, the message blocks of all signed messages will sum up to about 2^69 (you may distribute this between message length and number of messages as you like). And this would end up with an attack complexity of about 2^187 (i.e., just below 2^192). Given all the simplifying assumptions that I made (like the machines continuously signing and not counting signing time), I think we would be safe to consider this unrealistic as an L3 attack.

We're talking signatures here, not key agreement keys.  I think signature keys need a lifetime more like 2^5 or possibly even 2^6 years, not 2^2.  Some might even argue 2^7!   Granted, this doesn't significantly change your analysis (what's the difference between 2^52 vs 2^48 years in the scale of human life??).

-derek

-- 
Derek Atkins
Chief Technology Officer
Veridify Security - Securing the Internet of Things®

Office: 203.227.3151  x1343
Direct: 617.623.3745
Mobile: 617.290.5355
Email: DAt...@Veridify.com


Ruben Niederhagen

unread,
Mar 22, 2021, 12:14:38 PM3/22/21
to con...@sphincs.org, pqc-...@list.nist.gov
Hi Derek!

On 3/22/21 4:04 PM, Derek Atkins wrote:
> We're talking signatures here, not key agreement keys. I think
> signature keys need a lifetime more like 2^5 or possibly even 2^6
> years, not 2^2. Some might even argue 2^7! Granted, this doesn't
> significantly change your analysis (what's the difference between
> 2^52 vs 2^48 years in the scale of human life??).
I believe you are thinking of the public key operation here - so
verifying a signature even many years later?

This is not affected by Andreas' claim; he was considering the private
key usage - the signing of messages and documents.

So - after two to four years, you should indeed expire your private
signing key (you may use it for a last time to sign a new key pair) -
the 'old' signatures of course remain valid and the 'old' public key
can still be used for their verification.

Ruben

Russ Housley

unread,
Mar 22, 2021, 12:19:26 PM3/22/21
to Ruben Niederhagen, con...@sphincs.org, pqc-...@list.nist.gov
Ruben:
I think that Derek is pointing out that CAs very often use a private key for much longer than 4 years.

Russ

Gregor Seiler

unread,
Mar 22, 2021, 12:19:51 PM3/22/21
to pqc-...@list.nist.gov
Hi,

On Wed, 2021-03-17 at 17:02 +0100, Andreas Hülsing wrote:
> The numbers above can also be turned around to argue that it seems
> unlikely that signers will ever use a key pair for 2^64 signatures:
> Ignoring the signing time of SPHINCS+ and just counting the message
> compression, it requires one million machines that use the same key
> pair to continuously sign messages for four years to sign a total of
> 2^64 messages of 64 bytes. Considering that SPHINCS+ signatures rather
> take a bit more than a millisecond, this adds a factor 1000, either to
> the number of machines (one billion) or to the number of years (4000).
> In even the most demanding use-cases we have seen so far, signers sign
> substantially below these limits.

But this is on a single core, no? So maybe there is a factor of 2^6 if
the servers have 64 cores each? And then there is Moore's law. If it
continues for a couple more decades, then in 40 years the 2^64
signatures can maybe be produced on a single server in a year. Also this
is without dedicated hardware accelerators. The Bitcoin network is
currently doing 2^67 hashes per second.

Cheers,
Gregor

Bas Westerbaan

unread,
Mar 22, 2021, 12:23:51 PM3/22/21
to Gregor Seiler, pqc-...@list.nist.gov

> But this is on a single core, no? So maybe there is a factor of 2^6 if the servers have 64 cores each? And then there is Moore's law. If it continues for a couple of more decades, then in 40 years the 2^64 signatures can maybe be produced on a single server in a year. Also this is without dedicated hardware accelerators. The Bitcoin network is currently doing 2^67 hashes per second.

Let’s consider Let’s Encrypt. They issue about 2M certificates per day. Over the past four years this daily rate grew by about 1/2M per year. If this trend continues, which I doubt, then they’ll have issued 2^45.5 certificates in 20 years.

Best,

Bas

Brent Kimberley

unread,
Mar 22, 2021, 12:42:35 PM3/22/21
to Bas Westerbaan, Gregor Seiler, pqc-...@list.nist.gov
The growth seems linear as of late. But when (from a 'shale chart' perspective) will the fin field-effect transistor be depreciated?