
Pitfalls in Ultralightweight Authentication Protocol Designs

Gildas Avoine, Xavier Carpent, and Julio Hernandez-Castro

Abstract—This article introduces prudent engineering practices and offers recommendations to follow, together with typical mistakes to avoid, when designing new ultralightweight authentication protocols. This work can help, as a sanity check, designers of RFID-, NFC-, and sensor-network-based security solutions to improve the security, reliability, and longevity of ultralightweight authentication protocol designs. Additionally, it aims to help reviewers to quickly distinguish what is really new and worthy in a research area that has been flooded lately with proposals of dubious quality.

Index Terms—Ultralightweight protocols, authentication, cryptanalysis, security, privacy, RFID, NFC, sensor networks


1 INTRODUCTION

RFID authentication is a very active and challenging research topic. It has strong requirements such as security of course, but also privacy, and yet very strong constraints such as powerful adversaries and very constrained devices. This last point in particular is the focus of a significant part of the research community, and has spawned hundreds of papers [1]. Indeed, there is a strong incentive to build RFID tags below 5 cents that still provide some level of security. Such tags cannot contain a battery, must make do with very few logic gates, and must be able to complete an authentication in very little time.

In response to that, a number of protocols have been created that rely on extremely simple operations to do basic cryptography. Such protocols have been named "ultralightweight", a term coined by Chien in [2]. Although there is no precise definition adopted by authors of ultralightweight protocols, they can be broadly defined as protocols that are aimed at requiring about 1.5K GE (gate equivalents)¹ or less, relying on only very simple operations on the tag's side, and willing to accept compromises to offer some security level (see Chien's definition [2]). This includes, in particular, most of the authentication protocols aimed at passive and inexpensive RFID tags.

As one might expect, most of these protocols are usually very weak, and are typically broken very quickly. More importantly, there tends to be a certain lack of originality, novelty, and scientific soundness in these proposals, which is counter-productive and contributes to the relative stagnation in the field. A plausible reason why these papers are accepted is that reviewers not familiar with the related literature fail to notice common mistakes in these protocols.

In this paper, we give a mature perspective on the situation in the field of ultralightweight authentication protocols, and provide a description of most of the typical flaws that frequently undermine new proposals. This may be useful for potential reviewers to quickly determine whether a proposal is worthy of publication or could be broken easily. It could also be used as a sanity check by ultralightweight protocol designers. Some authors, notably D'Arco and De Santis [5], highlighted a set of slightly different but related and often complementary observations to the ones we examine in further detail and expand on in this work.

The structure of the article is the following. In Section 2, we give some statistics related to the field and give additional motivations for the article. In Section 3, we give a very quick description of the attacker model in ultralightweight authentication. In Section 4, we present the most typical mistakes found in ultralightweight protocols. In Section 5, we discuss the problem of dubious security proofs and detail why they are usually flawed. In Section 6, we present some recent proposals and propose attacks and comments on them based on the discussions from Sections 4 and 5. In Section 7, we end on a more positive note by presenting what we believe are the more promising approaches, and we finally present our conclusions in Section 8.

2 MOTIVATIONS

This section presents a few facts on the state of the art of ultralightweight protocols.

Table 1 presents a list of the most prominent ultralightweight authentication protocols, and the first attack on each of them, when applicable. It also shows the date of publication of each article and highlights the time in months elapsed between the publication of a protocol and the publication of an attack on it.

1. It is relevant to point out that gate counts differ in the literature, sometimes by a large margin. See a discussion on the subject in [3], [4].

• G. Avoine is with the Département d'ingénierie informatique, Université catholique de Louvain, Brabant Wallon, Belgium. E-mail: gildas.avoine@uclouvain.be.

• X. Carpent is with the Département d'ingénierie informatique, Université catholique de Louvain, Louvain-la-Neuve, Brabant Wallon, Belgium. E-mail: xavier.carpent@uclouvain.be.

• J. Hernandez-Castro is with the School of Computing, University of Kent, Canterbury, Kent, United Kingdom. E-mail: J.C.Hernandez-Castro@kent.ac.uk.

Manuscript received 9 June 2014; revised 11 Sept. 2015; accepted 10 Oct. 2015. Date of publication 19 Oct. 2015; date of current version 2 Aug. 2016. For information on obtaining reprints of this article, please send e-mail to: reprints@ieee.org, and reference the Digital Object Identifier below. Digital Object Identifier no. 10.1109/TMC.2015.2492553

What we consider to be a "first attack" is the chronologically first traceability or disclosure attack that targets the specific construction of a protocol (we discarded desynchronization attacks because, although important, they do not target specific protocol weaknesses but rather general issues in their structure). The dates, both for protocols and attacks, are taken to be the earliest after becoming public (e.g., an eprint or the conference date rather than the proceedings).

Fig. 1 shows the distribution of the time required for the publication of an attack, based on data from Table 1. Although there are many factors influencing the data, it is clear that the time for publication of a serious attack is extremely short. The average number of months required is about 9.5.² Half of the protocols are broken within eight months, and most of them within a year.
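As a quick check of these figures, the statistics can be recomputed from the Months column of Table 1. The sketch below (Python, with the column transcribed by hand, excluding Gossamer and the asterisked entries as explained in footnote 2 and the table footnote) reproduces the 9.5-month average and the 8-month median:

# Months-to-first-attack values from Table 1, excluding Gossamer (no published
# attack) and the asterisked entries (RAPLT, UMAP, Ling-Shen).
from statistics import mean, median

months = [10, 8, 5, 19, 6, 11, 31, 12, 6, 8, 7, 8,
          25, 24, 3, 1, 1, 1, 12, 9, 1, 1]

print(f"average: {mean(months):.1f} months")                                     # 9.5
print(f"median:  {median(months):.1f} months")                                   # 8.0
print(f"broken within a year: {sum(m <= 12 for m in months)} / {len(months)}")   # 18 / 22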

Note that depending on the journal or conference, it takes significant time to publish an attack on a protocol. We estimate that at least three to four months are required from finished research to publication. Generally, it might also take time for the proceedings to become available. Note finally that some of these protocols have been published in relatively low-visibility venues, which has definitely helped increase their life expectancy. It is probably safe to assume that, on average, a typical ultralightweight protocol is broken in under four months following its publication.

In most cases, these protocols repeat time and again the same typical mistakes, and no progress is achieved. Many proposals present striking similarities with existing protocols, and often have the exact same flaws, or even worse ones. Sometimes, hash functions or ciphers are used alongside the usual ultralightweight operations such as XOR, additions, rotations, etc. (we look at an example in Section 6). This makes the protocol essentially no less expensive than a classical challenge-response, so it is no longer ultralightweight, and it is sometimes weaker. Moreover, there are typically one to three papers presenting attacks on each protocol, and again they often exploit the same typical weaknesses.

In the following, we briefly present the attack model, and then go through the typical flaws that commonly appear in these ultralightweight protocols.

3 GENERIC ATTACK MODEL

Ultralightweight authentication protocols must primarily ensure that neither an impersonation attack nor a key-recovery attack (even a partial recovery) is feasible. Protocol analyses usually consider the Dolev-Yao model [48], namely a model where the adversary is powerful but not almighty: the adversary cannot guess random numbers chosen in a sufficiently large space, she cannot decrypt or create valid ciphertexts without the correct secret, and she cannot retrieve private keys from public information.

TABLE 1 List of Ultralightweight Protocols, the First Attack on them, and the Relevant Dates

Protocol                      Paper  Publication Date  First Attack                       First Attack Date  Months
LMAP                          [6]    07-2006           [7]                                05-2007            10
M2AP                          [8]    09-2006           [7]                                05-2007            8
EMAP                          [9]    11-2006           [10]                               04-2007            5
SLMAP                         [11]   10-2007           [12]                               05-2009            19
SASI                          [2]    12-2007           [13]                               06-2008            6
Qingling-Yiju-Yonghua         [14]   08-2008           [15]                               07-2009            11
Gossamer                      [16]   09-2008           N/A                                N/A                N/A
LMAP++                        [17]   09-2008           [18]                               04-2011            31
David-Prasad                  [19]   06-2009           [20]                               06-2010            12
Lee-Hsieh-You-Chen            [21]   08-2009           [22]                               02-2010            6
Yeh-Lo-Winata                 [23]   02-2010           [24]                               10-2010            8
Eghdamian-Samsudin            [25]   11-2011           [26]                               06-2012            7
RPAP                          [27]   10-2011           [26]                               06-2012            8
NRS                           [28]   11-2011           [29]                               12-2013            25
LPP                           [30]   12-2011           [29]                               12-2013            24
PUMAP                         [31]   03-2012           [26]                               06-2012            3
DIDRFID/SIDRFID               [32]   05-2012           [26]                               06-2012            1
RAPP                          [33]   05-2012           [26]                               06-2012            1
Improved LMAP+                [34]   05-2012           [26]                               06-2012            1
RIPTA-DA                      [35]   07-2012           [36]                               07-2013            12
Pang-Li-He-Alramadhan-Wang    [37]   03-2013           [38]                               12-2013            9
DT                            [39]   05-2013           [40]                               06-2013            1
RAPLT                         [41]   10-2013           this article                       11-2013*           1*
UMAP                          [42]   09-2013           this article                       11-2013*           2*
Ling-Shen                     [43]   11-2013           this article, same as [13] on [2]  02-2014*           3*
LPCP                          [44]   11-2013           [38]                               12-2013            1

* Date when the attack was found, not published. Numbers omitted in the discussions below for fairness.

Fig. 1. Histogram of the time for ultralightweight protocols to be broken.

2. We discarded Gossamer in these calculations since no attack has been published on it yet to the best of our knowledge. There are, however, existing desynchronization attacks (see [45], [46], and [47]), but they are related to the structure of Gossamer, not its operations. These papers propose different fixes to the problem.

However, she can easily communicate with both a tag and a reader, eavesdrop communications in both directions, or perform relay attacks (possibly modifying the messages). This last point is often a source of confusion in ultralightweight proposals, which treat active attackers as more powerful than eavesdroppers, even though relay attacks are in fact not harder than eavesdropping (see for instance the libnfc library [49]).

Ultralightweight authentication protocols usually also attempt to ensure privacy. The privacy property is commonly defined in authentication protocols as the inability for an adversary to track a prover. Privacy is consequently also called untraceability and is represented by an experiment where two prover–verifier interactions are provided to the adversary, who must recognize which one was produced by his target (previously queried, possibly with restrictions, during a learning phase). Several untraceability models exist, with different experiments. The most widely used models are due to Juels and Weis [50], Vaudenay [51], Deursen, Mauw, and Radomirovic [52], Hermans, Pashalidis, Vercauteren, and Preneel [53], and Avoine, Coisel, and Martin [54]. It is today strongly advised to use such a well-established model instead of ad-hoc ones. A comparison of existing models was done by Coisel and Martin [55].
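To make the experiment concrete, here is a schematic and deliberately toy version of such an untraceability game in Python. It is not taken from any of the models cited above, and the tag behaviour (leaking the least significant bit of its static identifier) is a hypothetical placeholder chosen only to show how an adversary's advantage is measured:

import random, secrets

L = 96  # bit length commonly used in the literature

class ToyTag:
    """Hypothetical tag whose response leaks the least significant bit of its
    static identifier (schematically, in the spirit of the attack of [13])."""
    def __init__(self):
        self.ID = secrets.randbits(L)
    def respond(self):
        nonce = secrets.randbits(L)
        return (nonce & ~1) | (self.ID & 1)  # every bit masked except the LSB

def untraceability_experiment(adversary):
    tag0, tag1 = ToyTag(), ToyTag()
    learning = {0: tag0.respond(), 1: tag1.respond()}   # learning phase traces
    b = random.randrange(2)                             # challenger secretly picks a tag
    challenge = (tag0, tag1)[b].respond()               # one fresh, unattributed trace
    return adversary(learning, challenge) == b

def lsb_adversary(learning, challenge):
    # Guess the tag whose learning trace shares the challenge's LSB.
    return 0 if (learning[0] & 1) == (challenge & 1) else 1

wins = sum(untraceability_experiment(lsb_adversary) for _ in range(10_000))
print(f"success rate ≈ {wins / 10_000:.2f}")  # ≈ 0.75, a clear advantage over the 0.5 baseline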

Designing a privacy-friendly authentication protocol is a difficult task. While a classical challenge-response protocol (where the identity of the prover is not sent in the clear on the channel) seems privacy-friendly at first sight, it suffers from a linear-complexity search (all the keys stored by the verifier must be checked to identify the correct prover) and does not ensure forward privacy: if the prover is compromised, the privacy of past interactions is no longer ensured. The latter issue can be mitigated by updating the key, but desynchronization attacks must then be prevented. Vaudenay actually demonstrated that only public-key cryptography can ensure the highest attainable privacy level [56]. Ultralightweight authentication protocols do not necessarily target the highest privacy level, though. Avoine, Coisel, and Martin's privacy model [54], for instance, introduced the notions of existential and universal traceability (intuitively, a protocol is existentially traceable if it is possible to identify a tag from two communication traces, unless a legitimate authentication took place between them).

4 COMMON FLAWS

4.1 Linearity and T-Functions

We use here the same definition of T-functions proposed by Klimov and Shamir in [57], which is the following: a T-function is a mapping from n-bit words to n-bit words in which, for each 0 ≤ i < n, bit i of any output word depends only on bits 0, 1, ..., i of any input word. This concept is very useful because all the Boolean operations and most of the numeric operations in modern processors are T-functions. Additionally, the composition of T-functions is also a T-function.

These have many nice properties, but almost by definition also a quite undesirable one: they achieve very poor levels of diffusion. By definition, in a T-function it is not possible for all output bits to depend on all input bits, which is the ideal scenario for security purposes. This is particularly dangerous in cryptographic applications, lightweight or otherwise. The only reasonable way to address this shortcoming is to combine these operations with others that do not exhibit this characteristic. Unfortunately, many designers do not follow this simple combination rule, and have proposed schemes entirely based on T-functions, which are doomed to fail.
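To make the diffusion limitation concrete, the sketch below (Python; a toy composition of T-functions, not taken from any particular protocol) flips a single input bit and records which output bits can ever change. Flipping a high-order input bit never reaches any lower-order output bit:

import secrets

L = 96
MASK = (1 << L) - 1

def t_func(x, k1, k2):
    # A toy composition of T-functions only: modular addition, AND, XOR, OR.
    # By construction, bit i of the output depends only on bits 0..i of the inputs.
    return ((((x + k1) & MASK) ^ (x & k2)) | k1) & MASK

def influenced_bits(bit, trials=200):
    """Output bit positions that ever change when input bit `bit` of x is flipped."""
    changed = 0
    for _ in range(trials):
        x, k1, k2 = (secrets.randbits(L) for _ in range(3))
        changed |= t_func(x, k1, k2) ^ t_func(x ^ (1 << bit), k1, k2)
    return {i for i in range(L) if (changed >> i) & 1}

print(influenced_bits(L - 1))   # at most {95}: the top input bit influences nothing below it
print(min(influenced_bits(0)))  # 0: bit 0 can propagate upwards through carries, never downwards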

We start by showing some common mistakes found in the UMAP family of protocols. These were pioneering proposals in many ways, but they committed what, with the hindsight gained after more than 8 years of research in the area, now look like quite elementary mistakes. We show in the following the scheme used in LMAP, presented in [6] and pictured in Fig. 2, which is still one of the clearest examples of this vulnerability.

As we can see in Fig. 2, this early protocol is composed exclusively of T-functions (modular addition, ⊕, ∨). This seriously limits its security due to their poor diffusion characteristics. Not only is the message generation marred by this, but it also affects the key update phase. This same pitfall is present in all members of the UMAP protocol family. The quick appearance of multiple attacks like those of [7], [58], [59] is, in retrospect, hardly surprising. Despite this, recent proposals continue to rely heavily and carelessly on T-functions.

Not only T-functions but also linearity should be avoided, or at least dealt with carefully, in cryptographic protocols. Formally, an operation f is defined as linear in this context if f(x ⊕ y) = f(x) ⊕ f(y). If a protocol or primitive is linear, it always admits a better (and in most cases much better) attack than exhaustive key search, not to mention linear cryptanalysis.

Fig. 2. The LMAP protocol.

Linearity, of course, flagrantly violates one of the basic principles of cryptographic design, hence the reasoning behind avoiding it at any price: it constitutes a vulnerability in itself.

This may be clear to most members of the cryptography community, but it seems less obvious to some designers of lightweight protocols. Most people know that some of the common bitwise operations are linear, but many seem to forget that permutations in general, and rotations in particular (for more security implications of the rotation operation, please see Section 4.3), are also linear operations, so one should not base the security of a protocol on those operations alone, because the composition of linear operations is also linear.

Nonlinearity needs to be injected at some stage of the design, or otherwise the protocol remains linear overall and, consequently, very insecure. Despite this, we can find in the literature many examples of designers disregarding this basic policy. Particularly notable and common examples are the many proposals whose security seems to be based almost exclusively on the use of Cyclic Redundancy Codes (CRCs), which many authors seem to forget are also linear and, as such, offer very little security, if any at all.

This is excusable to a certain extent in proposals that want to be compliant with the EPC-C1-G2 standard recommendations, but then the security claims of the proposed protocols should be adjusted down accordingly. CRCs are linear and they should under no circumstances be employed in cryptography. For a further example of this, see Section 5.2.
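The linearity of CRCs is easy to check experimentally. The following sketch (Python; the CRC-16/CCITT polynomial 0x1021 is just a common choice for illustration, not one mandated by any of the protocols discussed) verifies that over equal-length messages a CRC is affine, so an attacker can predict the CRC of any chosen modification of a message without knowing any secret:

import secrets

def crc16(data: bytes, poly: int = 0x1021, init: int = 0x0000) -> int:
    # Plain bitwise CRC-16 (no reflection, no final XOR), kept simple on purpose.
    reg = init
    for byte in data:
        reg ^= byte << 8
        for _ in range(8):
            reg = ((reg << 1) ^ poly) & 0xFFFF if reg & 0x8000 else (reg << 1) & 0xFFFF
    return reg

n = 12
a, b = secrets.token_bytes(n), secrets.token_bytes(n)
a_xor_b = bytes(x ^ y for x, y in zip(a, b))
zero = bytes(n)

# Affine over GF(2): crc(a xor b) == crc(a) xor crc(b) xor crc(0).
assert crc16(a_xor_b) == crc16(a) ^ crc16(b) ^ crc16(zero)
print("CRC is affine: crc(a) determines crc(a ^ delta) for any chosen delta")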

As an additional example of how pervasive this mistake is, the papers [44] and [37] are very recent illustrations of this common failure. Not surprisingly, they were both quickly dismantled in [38].

4.2 Biased Output

Another important weakness of many lightweight schemes proposed in the last years is that some of the operations used have a biased output, a feature that in many cases leads to additional security vulnerabilities. This is typical of Boolean functions such as OR (∨) and AND (∧): for unbiased random values of x and y, x ∨ y and x ∧ y have heavily (75 percent) biased outputs, towards 1 and 0 respectively. This can constitute a security weakness because these two Boolean functions leak information about both of their arguments. For example, if x_i ∨ y_i = 0, then x_i = y_i = 0, which discloses both inputs. This happens roughly 25 percent of the time (similarly with AND, of course). This could be more than enough to completely recover all the inputs after observing a few exchanges. An additional undesirable property derived from these highly biased outputs is the possibility of message insertion. For instance, imagine that one of the exchanged messages, used for synchronization or authentication purposes, is the result of one of these biased operators. Then an attacker can easily impersonate it with high probability, so it is a very bad idea to use it for authentication purposes, as proposed in many protocols. Let us exemplify again over the LMAP protocol. We have message B = (IDS ∨ K2) + n1, a typical construction that has been seen in this or similar forms many times since. The issue here is that B is very biased and thus can be impersonated with high probability even by an attacker who does not know any of the secrets K2 or n1. Even more worryingly, as the value of B is public (it is exchanged in the open), and IDS ∨ K2 is heavily biased towards the all-ones word, the attacker can use B + 1 (mod 2^L) as a very good approximation of the secret n1 (on average, 75 percent of the bits are correct), and this approximation can be used later in other parts of the protocol to approximate the secret.
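A short simulation (Python, illustrative only) quantifies this leakage: for uniformly random words, a zero bit of x ∨ y (or a one bit of x ∧ y) discloses both input bits at that position, and this happens at roughly a quarter of the positions:

import secrets

L, trials = 96, 1000
disclosed_or = disclosed_and = 0
for _ in range(trials):
    x, y = secrets.randbits(L), secrets.randbits(L)
    disclosed_or  += bin(~(x | y) & ((1 << L) - 1)).count("1")  # positions where x | y is 0
    disclosed_and += bin(x & y).count("1")                      # positions where x & y is 1

print(f"OR : {100 * disclosed_or  / (trials * L):.1f}% of positions disclose both inputs")
print(f"AND: {100 * disclosed_and / (trials * L):.1f}% of positions disclose both inputs")
# Both figures are close to 25%; the output of x | y itself is about 75% ones.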

4.3 Rotations

Rotations have been used for a long time in cryptography. Modern block ciphers and hash functions still mostly rely on ARX (addition, rotation, XOR) designs. Rotations are extremely cheap to implement, and they bring diffusion, which complements nicely the addition and the XOR (which exhibit poor diffusion properties, as shown in Section 4.1). Fixed-amount rotations are typically used in ARX designs, but data-dependent rotations, as first featured in the RC5 block cipher [60], also exist.

The SASI [2] protocol was the first ultralightweight authentication protocol to feature data-dependent rotations, as far as we know. Since then, most ultralightweight protocols have used them, and in many cases they are the weak spot for ad-hoc attacks. Two types of rotations are typically used in these protocols: modular rotations (as used in RC5) and Hamming weight rotations. In the following, we denote the rotation operation by Rot(x, y), in accordance with the literature. The rotation consists in a circular left shift of x by r(y) positions, where r(y) = y mod L for modular rotations, and r(y) = H(y) for Hamming weight rotations.
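For concreteness, both variants can be written in a few lines (a sketch in Python, following the definitions above and the usual L = 96 convention):

L = 96
MASK = (1 << L) - 1

def hamming_weight(y: int) -> int:
    """H(y): the number of set bits of y."""
    return bin(y & MASK).count("1")

def rot(x: int, y: int, mode: str = "modular") -> int:
    """Rot(x, y): circular left shift of x by r(y) positions, with
    r(y) = y mod L (modular) or r(y) = H(y) (Hamming weight)."""
    r = (y % L) if mode == "modular" else hamming_weight(y)
    return ((x << r) | (x >> (L - r))) & MASK

print(hex(rot(0x1, 5)))                      # modular: rotate by 5, giving 0x20
print(hex(rot(0x1, 0b111, mode="hamming")))  # Hamming weight 3, giving 0x8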

One thing to bear in mind is that a rotation is a permutation and is therefore linear, inheriting all the pitfalls discussed in Section 4.1.

The most important shortcoming of data-dependent rotations is that there are only L possible outputs (where L is the size in bits of the input). Moreover, the distribution of these outputs is known a priori. The modulo operation has a uniform distribution over uniform input, and the Hamming weight has a binomial distribution B(L, 1/2) over uniform input. Fig. 3 shows the probability distribution of r(y) for the two rotations, with y a uniform random variable and with L = 96.³

Modular rotations have maximal entropy (log2 L ≈ 6.58 bits), since each shift is equiprobable. This minimizes the advantage of the adversary in guessing the output. Nevertheless, they are quite likely to behave like the identity operation (probability of 1/L). This has led to many attacks, whether the attacker assumes this blindly (as in the desynchronization attack on PUMAP in [26]) or is able to verify it (as in the traceability attack on modular SASI in [61]).

Fig. 3. Probability distribution of r(y) for the modular and Hamming weight rotations, with y a uniform random variable and L = 96.

3. This is the value that is typically used in the literature for key lengths.

Hamming weight rotations have worse entropy (e.g., 4.34 bits in the case of L = 96) than modular rotations. In particular, the three-sigma rule implies that, for instance in the case of L = 96, the number of bits rotated is between 33 and 63 in 99.7 percent of cases. It is therefore even easier for an attacker to guess the output of a rotation. However, Hamming weight rotations virtually never behave like the identity. Indeed, this would require either r(y) = 0 or r(y) = L, which happens only when y = 0 or y = 2^L − 1, respectively.
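These figures are straightforward to reproduce (a quick numerical check in Python, assuming y uniform and L = 96 as above):

from math import comb, log2, sqrt

L = 96

# Modular rotation: r(y) = y mod L is uniform over L values.
print(f"modular entropy:        {log2(L):.2f} bits")    # about 6.58

# Hamming weight rotation: r(y) = H(y) follows a Binomial(L, 1/2) distribution.
p = [comb(L, k) / 2**L for k in range(L + 1)]
print(f"Hamming weight entropy: {-sum(q * log2(q) for q in p if q):.2f} bits")  # about 4.34

# Three-sigma interval around the mean L/2, with sigma = sqrt(L)/2.
mu, sigma = L / 2, sqrt(L) / 2
print(f"99.7% of rotation amounts fall within [{mu - 3 * sigma:.0f}, {mu + 3 * sigma:.0f}]")  # about [33, 63]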

Mod n cryptanalysis [62] is also a promising tool to attack schemes using rotations and additions, although it has never been applied in the cryptanalysis of an ultralightweight protocol. It has, on the other hand, been used to successfully attack block ciphers such as RC5P and M6, which use the same kind of operations.

4.4 Message Composition

Securely designing the messages exchanged over an ultralightweight protocol is a difficult and open problem. Keeping the secrets exchanged as secure as possible against any leakage is indeed a big challenge, particularly in such constrained environments. Generally speaking, the messages should guarantee good confusion and diffusion properties for the secrets. That is, the secret should be thoroughly involved in the construction of the messages, and a subtle change in the secret should result in completely different messages. In block ciphers, for instance, these two properties are usually achieved by using iterated substitution-permutation networks. However, due to the constraints of ultralightweight protocols, messages are usually built using a handful of operations, and in many cases good confusion and diffusion levels are not achieved.

In LMAP for instance, the key update phase is defined by

IDS^(n+1) = (IDS^(n) + (n2^(n) ⊕ K4^(n))) ⊕ ID,     (1)

where we can see that the ID, a secret that the protocol is designed to protect, is simply XORed with a mixture of public and secret values. This operation clearly exhibits poor confusion and diffusion properties. This quite frequent feature heuristically leads to major leakage of secret bits, as the rest of the message the ID is combined with may be biased, or be partially known by the adversary.

There are many more examples that exhibit transgressions of this rule of thumb for minimizing secret leakage, even among the most recent proposals. Consider for instance this message of the RAPP protocol [33]:

C = Per(n1 ⊕ K1, n1 ⊕ K3) ⊕ ID,     (2)

where we can see again that the ID sits at the outermost position of the message construction. This would have no serious consequences if the rest of the message were perfectly random and unknown to an adversary (as in a one-time pad), but this is not the case here.
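The problem with this kind of composition is easy to see in code. In the sketch below (Python; the biased inner term is a stand-in for whatever the protocol XORs the ID with, not the actual Per function of RAPP), any bias in, or partial knowledge of, that inner value translates bit for bit into leaked ID bits, precisely because the ID sits at the outermost XOR:

import secrets

L = 96
MASK = (1 << L) - 1

def recovered_fraction(message: int, guess_of_inner: int, ID: int) -> float:
    """Fraction of ID bits recovered from message = inner ^ ID when the
    attacker substitutes a guess for the inner value."""
    recovered = message ^ guess_of_inner
    return bin(~(recovered ^ ID) & MASK).count("1") / L

ID = secrets.randbits(L)
inner = secrets.randbits(L) | secrets.randbits(L)   # biased towards 1, like an OR-based term
message = inner ^ ID                                # ID at the outermost position

# Guessing the inner value as the all-ones word already recovers ~75% of the ID bits.
print(f"{recovered_fraction(message, MASK, ID):.0%} of ID bits recovered")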

4.5 Knowledge Accumulation

4.5.1 Knowledge Accumulation of a Static Secret

If partial leakage of a static secret occurs in a round of a protocol, there is an obvious traceability issue. Indeed, it becomes possible for an attacker to correlate two leaked traces of an eavesdropped exchange. A typical example is recovering the least significant bit of the static identifier (see for instance [13], the first traceability attack on SASI).

More importantly, an attacker is sometimes able to recover the full static secret after a few rounds. Indeed, she can combine the different observations using Bayesian inference. An example of such an attack was the full cryptanalysis of SASI [63].
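The principle is easy to simulate. The toy model below (Python) is not the actual attack of [63]; it simply assumes that every eavesdropped round leaks a noisy estimate of each bit of a static secret, correct with probability 0.75, and accumulates per-bit log-likelihood ratios until the whole secret is recovered:

import secrets
from math import log

L, p_correct = 96, 0.75
ID = secrets.randbits(L)                 # the static secret the attacker is after
llr = [0.0] * L                          # per-bit log-likelihood ratio (bit = 1 vs bit = 0)
weight = log(p_correct / (1 - p_correct))

for rounds in range(1, 1001):
    for i in range(L):
        observed = (ID >> i) & 1
        if secrets.randbelow(100) >= int(p_correct * 100):
            observed ^= 1                # each observation is wrong with probability 0.25
        llr[i] += weight if observed else -weight
    guess = sum((llr[i] > 0) << i for i in range(L))
    if guess == ID:
        print(f"full static secret recovered after {rounds} noisy rounds")
        break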

4.5.2 Key Updating

One of the initial goals of synchronized protocols is to provide forward privacy. Forward privacy is a stronger notion than privacy. Simply put, a protocol is said to be forward private if an attacker, having recovered the internal state of a tag, is not able to recognize the tag in past interaction traces. For a more formal definition, see [64]. Forward privacy cannot be achieved in a protocol if the secrets used in the exchange are all static. Indeed, if the attacker knows the secrets of a tag at some point, she also knows them in the past, since the secret does not change during the tag's lifetime. Therefore, she can recompute the messages sent by the tag in previous interactions, and recognize it easily. Note that a changing secret is required for forward privacy, but it does not guarantee it (indeed, there are many synchronized protocols that are not private, and therefore not forward private).

A positive side effect of changing the secrets is that it is conceivably harder to obtain the full secret at any given time, if only a partial leakage is obtained at every authentication round. This seems to be a good feature, as it is intuitively harder to hit a moving target than a static one. However, this does not make the full cryptanalysis impossible, just slightly harder, as has been demonstrated with the Tango attacks described in Section 4.7.3.

4.6 Desynchronization

Desynchronization attacks are active attacks. We study them here despite some ultralightweight protocols not claiming resistance against active attackers, only against passive ones. Nevertheless, as we point out in Section 3, active attacks are particularly relevant in the considered context. In any case, most authors aim to achieve resilience against active attacks, including some defense against desynchronization.

Desynchronization generally occurs when an active attacker is able to prevent the prover and/or the verifier from engaging in further successful executions of the protocol. It is generally achieved by tricking the prover, the verifier, or both, into believing that a successful authentication session has taken place, thus updating internal counters and variables in an asynchronous way. Common implementations of this attack involve sending specially crafted messages to make only one of the involved parties reach an irreversible state from which it will not be capable of communicating in the future with other parties.

Many ultralightweight authentication protocols have been shown to be vulnerable to attacks of this kind. A widespread defense mechanism consists in storing the last state (pseudonym and keys) used, together with the current one, and using the former if the latter fails (as first suggested in LMAP [6], by using a flag that is activated when an authentication session terminates abnormally).

This approach, however, modifies the behavior of the protocol, which opens the door to other attacks such as timing attacks [65]. Moreover, it comes at the cost of storing an extra copy of the state, which is relatively expensive in terms of area. Although not exactly flawed, this solution is not perfect.
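For illustration, the countermeasure just described can be sketched as follows (Python; the class and method names are illustrative and do not correspond to any specific protocol):

class SynchronizedEntry:
    """Verifier-side record for one tag: the current pseudonym/keys plus a
    copy of the previous ones, kept in case an update was lost."""
    def __init__(self, ids, keys):
        self.current = (ids, keys)
        self.previous = (ids, keys)      # the extra storage that the defense costs

    def lookup(self, ids_seen):
        # Try the current pseudonym first, then fall back to the previous one.
        # Note that this observable fallback is exactly the behavioral change
        # that enables timing attacks such as [65].
        if ids_seen == self.current[0]:
            return self.current
        if ids_seen == self.previous[0]:
            return self.previous
        return None                      # unknown pseudonym: reject

    def commit_update(self, new_ids, new_keys):
        # Executed only after a successful authentication.
        self.previous = self.current
        self.current = (new_ids, new_keys)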

Desynchronization protection in ultralightweight authentication protocols is relatively poorly studied on its own, and in any case caution is recommended when analyzing the security against such attacks.

Two classical desynchronization attacks can be found in [66]. Desynchronization can be considered a type of Denial of Service (DoS) attack, and like most DoS attacks, it does not admit an easy, ideal solution.

4.7 Vulnerability to Generalist Attacks
