path: root/inotail.c
Diffstat (limited to 'inotail.c')
1 files changed, 4 insertions, 2 deletions
diff --git a/inotail.c b/inotail.c
index b61e30a..2fa7017 100644
--- a/inotail.c
+++ b/inotail.c
@@ -173,7 +173,7 @@ static off_t lines_to_offset_from_end(struct file_struct *f, unsigned long n_lin
if (buf[i] == '\n') {
if (--n_lines == 0) {
- return offset += i + 1; /* We don't want the first \n */
+ return offset + i + 1; /* We don't want the first \n */
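The hunk above is a one-character semantic fix: `return offset += i + 1` mutates the local `offset` as a side effect, while `return offset + i + 1` merely computes the value returned. The backward newline scan it belongs to can be sketched as a self-contained helper; the name `offset_of_last_lines` and the in-memory buffer are simplifications for illustration, not inotail's actual chunked-read code:

```c
/* Return the offset of the start of the last n_lines lines of buf,
 * or 0 if buf contains fewer lines.  Simplified sketch of the scan
 * in lines_to_offset_from_end(); helper name is hypothetical. */
static long offset_of_last_lines(const char *buf, long len, unsigned long n_lines)
{
    long i;

    n_lines++;  /* also count the newline that terminates the line before the tail */
    for (i = len - 1; i >= 0; i--) {
        if (buf[i] == '\n' && --n_lines == 0)
            return i + 1;  /* we don't want the first '\n' itself */
    }
    return 0;  /* whole buffer */
}
```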
@@ -263,6 +263,7 @@ static int tail_pipe_from_begin(struct file_struct *f, unsigned long n_units, co
while (n_units > 0) {
if ((bytes_read = read(f->fd, buf, BUFSIZ)) <= 0) {
+ /* Interrupted by a signal, retry reading */
if (bytes_read < 0 && (errno == EINTR || errno == EAGAIN))
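The comment added in this hunk documents a standard POSIX idiom: `read()` can return -1 with `errno == EINTR` when a signal arrives, in which case the call should simply be retried. A generic standalone sketch of the pattern (not inotail's exact loop; note that retrying on `EAGAIN` busy-waits if the descriptor is non-blocking):

```c
#include <errno.h>
#include <unistd.h>

/* Read up to count bytes from fd, retrying when the call is
 * interrupted by a signal.  Generic sketch of the retry idiom. */
static ssize_t read_retry(int fd, void *buf, size_t count)
{
    ssize_t n;

    do {
        n = read(fd, buf, count);
    } while (n < 0 && (errno == EINTR || errno == EAGAIN));

    return n;
}
```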
@@ -283,6 +284,7 @@ static int tail_pipe_from_begin(struct file_struct *f, unsigned long n_units, co
+ /* Print remainder of the current block */
if (++i < block_size)
write(STDOUT_FILENO, &buf[i], bytes_read - i);
} else {
@@ -619,7 +621,7 @@ static int handle_inotify_event(struct inotify_event *inev, struct file_struct *
/* Seek to old file size */
- if ((ret = lseek(f->fd, f->size, SEEK_SET)) == (off_t) -1) {
+ if (!IS_PIPELIKE(finfo.st_mode) && (ret = lseek(f->fd, f->size, SEEK_SET)) == (off_t) -1) {
fprintf(stderr, "Error: Could not seek in file '%s' (%s)\n", f->name, strerror(errno));
goto ignore;
Recently, in commit 13aa93c70e71 ("random: add and use memzero_explicit() for clearing data"), we have found that GCC may optimize some memset() cases away when it detects that a stack variable is no longer being used and is going out of scope. This can happen, for example, when we are clearing out sensitive information such as keying material or e.g. intermediate results from crypto computations. With the help of Coccinelle, we can figure out and fix such occurrences in the crypto subsystem as well. Julia Lawall provided the following Coccinelle program:

@@
type T;
identifier x;
@@

T x;
... when exists
    when any
-memset
+memzero_explicit
  (&x,
-0,
  ...)
... when != x
    when strict

@@
type T;
identifier x;
@@

T x[...];
... when exists
    when any
-memset
+memzero_explicit
  (x,
-0,
  ...)
... when != x
    when strict

Therefore, make use of the drop-in replacement memzero_explicit() for exactly such cases instead of using memset().

Signed-off-by: Daniel Borkmann <>
Cc: Julia Lawall <>
Cc: Herbert Xu <>
Cc: Theodore Ts'o <>
Cc: Hannes Frederic Sowa <>
Acked-by: Hannes Frederic Sowa <>
Acked-by: Herbert Xu <>
Signed-off-by: Theodore Ts'o <>

2014-10-02  crypto: sha - Handle unaligned input data in generic sha256 and sha512.  David S. Miller  1-1/+2

Like SHA1, use get_unaligned_be*() on the raw input data.

Reported-by: Bob Picco <>
Signed-off-by: David S. Miller <>
Signed-off-by: Herbert Xu <>

2013-05-28  crypto: sha512_generic - set cra_driver_name  Jussi Kivilinna  1-0/+2

'sha512_generic' should set its driver name now that there is an alternative sha512 provider (sha512_ssse3).

Signed-off-by: Jussi Kivilinna <>
Signed-off-by: Herbert Xu <>

2013-04-25  crypto: sha512 - Expose generic sha512 routine to be callable from other modules  Tim Chen  1-6/+7

Other SHA512 routines may need to use the generic routine when the FPU is not available.

Signed-off-by: Tim Chen <>
Signed-off-by: Herbert Xu <>

2012-08-01  crypto: sha512 - use crypto_[un]register_shashes  Jussi Kivilinna  1-15/+5

Combine all shash algs to be registered and use the new crypto_[un]register_shashes functions. This simplifies the init/exit code.

Signed-off-by: Jussi Kivilinna <>
Signed-off-by: Herbert Xu <>

2012-04-05  crypto: sha512 - Fix byte counter overflow in SHA-512  Kent Yoder  1-1/+1

The current code only increments the upper 64 bits of the SHA-512 byte counter when the number of bytes hashed happens to hit 2^64 exactly. This patch increments the upper 64 bits whenever the lower 64 bits overflow.

Signed-off-by: Kent Yoder <>
Cc:
Signed-off-by: Herbert Xu <>

2012-02-16  crypto: sha512 - use standard ror64()  Alexey Dobriyan  1-9/+4

Use a standard ror64() instead of a hand-written one. There is no standard ror64(), so create it. The difference is the shift value being "unsigned int" instead of uint64_t (for which there is no reason). gcc starts to emit native ROR instructions, which it currently doesn't do for some reason. This should make the code faster. The patch survives the in-tree crypto test and a ping flood with hmac(sha512) on.

Signed-off-by: Alexey Dobriyan <>
Signed-off-by: Herbert Xu <>

2012-02-05  crypto: sha512 - Avoid stack bloat on i386  Herbert Xu  1-36/+32

Unfortunately, in reducing W from 80 to 16 we ended up unrolling the loop twice. As gcc has issues dealing with 64-bit ops on i386, this means that we end up using even more stack space (>1K). This patch solves the W reduction by moving LOAD_OP/BLEND_OP into the loop itself, thus avoiding the need to duplicate it. While the stack usage still isn't great (>0.5K), it is at least in the same ballpark as the amount of stack used by our C sha1 implementation. Note that this patch basically reverts to the original code, so the diff looks bigger than it really is.

Cc:
Signed-off-by: Herbert Xu <>

2012-01-26  crypto: sha512 - Use binary and instead of modulus  Herbert Xu  1-2/+2

The previous patch used the modulus operator over a power of 2 unnecessarily, which may produce suboptimal binary code. This patch changes them to binary ands instead.

Signed-off-by: Herbert Xu <>

2012-01-15  crypto: sha512 - reduce stack usage to safe number  Alexey Dobriyan  1-24/+34

For rounds 16--79, W[i] only depends on W[i - 2], W[i - 7], W[i - 15] and W[i - 16]. Consequently, keeping the whole W[80] array on the stack is unnecessary; only 16 values are really needed. Using W[16] instead of W[80] greatly reduces stack usage (~750 bytes to ~340 bytes on x86_64).

Line-by-line explanation:

* The BLEND_OP array is "circular" now; all indexes have to be taken modulo 16. The round number is positive, so the remainder operation should be without surprises.
* The initial full message scheduling is trimmed to the first 16 values, which come from the data block; the rest is calculated just before it is needed.
* The original loop body is an unrolled version of the new SHA512_0_15 and SHA512_16_79 macros; unrolling was done to avoid explicit variable renaming. Otherwise it is the very same code after preprocessing. See the sha1_transform() code, which does the same trick.

The patch survives the in-tree crypto test and the original bug report test (a ping flood with hmac(sha512)). See FIPS 180-2 for the SHA-512 definition.

Signed-off-by: Alexey Dobriyan <>
Cc:
Signed-off-by: Herbert Xu <>

2012-01-15  crypto: sha512 - make it work, undo percpu message schedule  Alexey Dobriyan  1-5/+1

Commit f9e2bca6c22d75a289a349f869701214d63b5060, aka "crypto: sha512 - Move message schedule W[80] to static percpu area", created a global message schedule area. If sha512_update() is ever entered twice concurrently, the hash will silently be calculated incorrectly.

Probably the easiest way to notice incorrect hashes being calculated is to run two ping floods over AH with hmac(sha512):

#!/usr/sbin/setkey -f
flush;
spdflush;
add IP1 IP2 ah 25 -A hmac-sha512 0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000025;
add IP2 IP1 ah 52 -A hmac-sha512 0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000052;
spdadd IP1 IP2 any -P out ipsec ah/transport//require;
spdadd IP2 IP1 any -P in ipsec ah/transport//require;

XfrmInStateProtoError will start ticking, with -EBADMSG being returned from ah_input(). This never happens with, say, hmac(sha1). With the patch applied (on BOTH sides), XfrmInStateProtoError does not tick with multiple bidirectional ping flood streams, just as it doesn't tick with SHA-1.

After this patch sha512_transform() will use ~750 bytes of stack on x86_64. This is OK for simple loads; for something heavier, stack reduction will be done separately.

Signed-off-by: Alexey Dobriyan <>
Cc:
Signed-off-by: Herbert Xu <>
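The dead-store elimination described in the memzero_explicit() entry is easy to reproduce in userspace: a compiler may legally drop a final memset() on a buffer it can prove is never read again. One portable way to defeat that proof is to call memset() through a volatile function pointer. This is only an illustrative sketch; the kernel's actual memzero_explicit() instead uses memset() followed by a compiler barrier:

```c
#include <stddef.h>
#include <string.h>

/* Calling through a volatile pointer prevents the compiler from
 * knowing which function runs, so the store cannot be proven dead.
 * Userspace sketch, NOT the kernel's implementation. */
static void *(*const volatile memset_fn)(void *, int, size_t) = memset;

static void memzero_explicit_sketch(void *s, size_t n)
{
    memset_fn(s, 0, n);
}
```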
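The 2012-04-05 byte-counter fix comes down to detecting carry-out of an unsigned 64-bit addition: after `lo += len`, the sum wrapped if and only if `lo < len`, whereas testing for `lo == 0` only catches a landing exactly on 2^64. A simplified sketch (two bare 64-bit halves, not the kernel's actual context struct):

```c
#include <stdint.h>

/* 128-bit byte counter kept as two 64-bit halves, as in a SHA-512
 * hash context.  Simplified sketch of the overflow handling. */
struct byte_count { uint64_t lo, hi; };

static void count_add(struct byte_count *c, uint64_t len)
{
    c->lo += len;
    if (c->lo < len)    /* unsigned wrap-around => carry into hi */
        c->hi++;
}
```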
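The ror64() helper introduced by the 2012-02-16 entry is a one-line rotate. The masked form below is a common way to write it without undefined behavior when the shift is 0 (the exact expression in the kernel's header may differ):

```c
#include <stdint.h>

/* Rotate a 64-bit word right by shift bits.  The & 63 masks keep
 * both shift amounts in [0, 63], so shift == 0 is well-defined. */
static inline uint64_t ror64(uint64_t word, unsigned int shift)
{
    return (word >> (shift & 63)) | (word << (-shift & 63));
}
```

Expressed this way, a decent compiler recognizes the pattern and emits a native ROR instruction, which is the point of the patch.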