Searched hist:b (Results 1 - 25 of 3814) sorted by relevance

/src/sys/compat/linux32/arch/aarch64/
linux32_exec_machdep.c 1.2  Sun Apr 09 00:29:26 GMT 2023  riastradh  compat_linux32: KASSERT(A && B) -> KASSERT(A); KASSERT(B)
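
A minimal illustration of the change named in this entry (the variable names are hypothetical, not code from linux32_exec_machdep.c): splitting a conjunction into separate assertions so a failure report identifies the exact condition that was violated.

    /* before: the panic message cannot say which conjunct failed */
    KASSERT(epp != NULL && epp->ep_flags != 0);

    /* after: each condition is asserted, and reported, on its own */
    KASSERT(epp != NULL);
    KASSERT(epp->ep_flags != 0);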

/src/tools/llvm-lib/libclangStaticAnalyzerCore/
Makefile 1.1  Thu Feb 17 18:35:37 GMT 2011  joerg  branches: 1.1.2;
Update LLVM/clang snapshot. This brings in support for
.pushsection/.popsection and fixes clang -B to be incremental.

/src/tools/llvm-lib/libclangStaticAnalyzerFrontend/
Makefile 1.1  Thu Feb 17 18:35:37 GMT 2011  joerg  branches: 1.1.2;
Update LLVM/clang snapshot. This brings in support for
.pushsection/.popsection and fixes clang -B to be incremental.

/src/lib/librt/
sem_destroy.3 1.1  Fri Jan 24 01:52:44 GMT 2003  thorpej  Add librt, which provides POSIX 1003.1b Real-time extensions not
present in libc. Currently includes 1003.1b semaphores.

sem_post.3 1.1  Fri Jan 24 01:52:44 GMT 2003  thorpej  branches: 1.1.54;
Add librt, which provides POSIX 1003.1b Real-time extensions not
present in libc. Currently includes 1003.1b semaphores.

sem_wait.3 1.1  Fri Jan 24 01:52:45 GMT 2003  thorpej  branches: 1.1.54;
Add librt, which provides POSIX 1003.1b Real-time extensions not
present in libc. Currently includes 1003.1b semaphores.

shlib_version 1.1  Fri Jan 24 01:52:45 GMT 2003  thorpej  Add librt, which provides POSIX 1003.1b Real-time extensions not
present in libc. Currently includes 1003.1b semaphores.

sem_getvalue.3 1.1  Fri Jan 24 01:52:44 GMT 2003  thorpej  branches: 1.1.6;
Add librt, which provides POSIX 1003.1b Real-time extensions not
present in libc. Currently includes 1003.1b semaphores.
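
The librt entries above list the POSIX 1003.1b semaphore manual pages (sem_destroy, sem_post, sem_wait, sem_getvalue). A brief usage sketch of that standard API, not code taken from librt itself (link with -lrt where librt is separate from libc):

    #include <semaphore.h>
    #include <stdio.h>

    int
    main(void)
    {
        sem_t sem;
        int value;

        if (sem_init(&sem, 0, 0) == -1)     /* unshared, initial count 0 */
            return 1;
        sem_post(&sem);                     /* increment the count */
        sem_wait(&sem);                     /* decrement; returns at once here */
        sem_getvalue(&sem, &value);         /* value is now 0 */
        printf("semaphore value: %d\n", value);
        sem_destroy(&sem);
        return 0;
    }
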
/src/sys/modules/if_run/
if_run.ioconf 1.1  Wed May 30 14:30:35 GMT 2012  nonaka  branches: 1.1.4;
Add a driver for Ralink Technology RT2700U/RT2800U/RT3000U USB IEEE
802.11a/b/g/n wireless network devices, ported from OpenBSD by FUKAUMI Naoki,
arranged by me.
/src/sys/arch/atari/dev/
clockvar.h 1.1  Tue Jul 07 15:37:02 GMT 2009  tsutsui  branches: 1.1.2;
Explicitly initialize the MFP Timer-B for delay(9) from atari_hwinit()
rather than using if(!atari_realconfig) in clockmatch().
(I doubt the latter one has actually been called..)

/src/sys/arch/sh3/conf/
std.sh3eb 1.1  Sat Feb 02 03:06:04 GMT 2008  uwe  branches: 1.1.2; 1.1.4; 1.1.12;
Add back std.sh3. For now it contains obvious EXEC_* options and
CPU_IN_CKSUM.

Introduce std.sh3e{b,l} that include std.sh3 and just add
corresponding endianness options.

std.sh3el 1.1  Sat Feb 02 03:06:04 GMT 2008  uwe  branches: 1.1.2; 1.1.4; 1.1.12;
Add back std.sh3. For now it contains obvious EXEC_* options and
CPU_IN_CKSUM.

Introduce std.sh3e{b,l} that include std.sh3 and just add
corresponding endianness options.

/src/regress/sys/arch/arm/
Makefile.inc 1.1  Mon Aug 06 22:29:58 GMT 2001  bjh21  On ARM ELF systems, check that the kernel aligns the stack pointer to an
8-byte boundary (as mandated by ATPCS B-01) when delivering a signal.

/src/regress/sys/arch/arm/sigstackalign/
stackptr.S 1.1  Mon Aug 06 22:29:59 GMT 2001  bjh21  On ARM ELF systems, check that the kernel aligns the stack pointer to an
8-byte boundary (as mandated by ATPCS B-01) when delivering a signal.

/src/games/hack/
help 1.2  Sun Dec 15 22:02:52 GMT 2002  pooka  fix confusion between east and west for y,u,b, and n.

from Soren Jacobsen in misc/19397

/src/sys/dev/microcode/zyd/
build.c 1.1  Sat Jun 09 11:20:54 GMT 2007  kiyohara  branches: 1.1.8; 1.1.12; 1.1.54;
Added OpenBSD's zyd(4) driver.
Supports ZyDAS ZD1211/ZD1211B USB IEEE 802.11b/g wireless network device.

microcode.h 1.1  Sat Jun 09 11:20:54 GMT 2007  kiyohara  branches: 1.1.8; 1.1.12; 1.1.54;
Added OpenBSD's zyd(4) driver.
Supports ZyDAS ZD1211/ZD1211B USB IEEE 802.11b/g wireless network device.

zyd-zd1211 1.1  Sat Jun 09 11:20:54 GMT 2007  kiyohara  branches: 1.1.8; 1.1.12; 1.1.54;
Added OpenBSD's zyd(4) driver.
Supports ZyDAS ZD1211/ZD1211B USB IEEE 802.11b/g wireless network device.

zyd-zd1211b 1.1  Sat Jun 09 11:20:55 GMT 2007  kiyohara  branches: 1.1.8; 1.1.12; 1.1.54;
Added OpenBSD's zyd(4) driver.
Supports ZyDAS ZD1211/ZD1211B USB IEEE 802.11b/g wireless network device.

/src/tests/lib/libcurses/check_files/
timeout.chk 1.2  Mon Aug 29 00:41:38 GMT 2011  christos  we never get the b character since we timeout and drain!

/src/sys/fs/nfs/client/
nfs_clvfsops.c 1.3  Mon Sep 03 16:29:34 GMT 2018  riastradh  Rename min/max -> uimin/uimax for better honesty.

These functions are defined on unsigned int. The generic name
min/max should not silently truncate to 32 bits on 64-bit systems.
This is purely a name change -- no functional change intended.

HOWEVER! Some subsystems have

#define min(a, b) ((a) < (b) ? (a) : (b))
#define max(a, b) ((a) > (b) ? (a) : (b))

even though our standard name for that is MIN/MAX. Although these
may invite multiple evaluation bugs, these do _not_ cause integer
truncation.

To avoid `fixing' these cases, I first changed the name in libkern,
and then compile-tested every file where min/max occurred in order to
confirm that it failed -- and thus confirm that nothing shadowed
min/max -- before changing it.

I have left a handful of bootloaders that are too annoying to
compile-test, and some dead code:

cobalt ews4800mips hp300 hppa ia64 luna68k vax
acorn32/if_ie.c (not included in any kernels)
macppc/if_gm.c (superseded by gem(4))

It should be easy to fix the fallout once identified -- this way of
doing things fails safe, and the goal here, after all, is to _avoid_
silent integer truncations, not introduce them.

Maybe one day we can reintroduce min/max as type-generic things that
never silently truncate. But we should avoid doing that for a while,
so that existing code has a chance to be detected by the compiler for
conversion to uimin/uimax without changing the semantics until we can
properly audit it all. (Who knows, maybe in some cases integer
truncation is actually intended!)
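
A small standalone sketch of the hazard described above (illustrative only, not the libkern code): because the function is defined on unsigned int, a 64-bit argument is silently narrowed at the call, and the explicit uimin name at least makes that 32-bit domain visible at the call site. The variable name resid is hypothetical.

    #include <stdint.h>
    #include <stdio.h>

    /* libkern-style: defined on unsigned int */
    static unsigned int
    uimin(unsigned int a, unsigned int b)
    {
        return a < b ? a : b;
    }

    int
    main(void)
    {
        uint64_t resid = (uint64_t)8 << 30;     /* 8 GiB */

        /* resid is converted to unsigned int (becoming 0), so this
         * prints 0 instead of the intended 4096. */
        printf("%u\n", uimin(resid, 4096));
        return 0;
    }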

/src/sys/arch/hppa/dev/
lcd.c 1.2  Mon Sep 03 16:29:24 GMT 2018  riastradh  Rename min/max -> uimin/uimax for better honesty.

These functions are defined on unsigned int. The generic name
min/max should not silently truncate to 32 bits on 64-bit systems.
This is purely a name change -- no functional change intended.

HOWEVER! Some subsystems have

#define min(a, b) ((a) < (b) ? (a) : (b))
#define max(a, b) ((a) > (b) ? (a) : (b))

even though our standard name for that is MIN/MAX. Although these
may invite multiple evaluation bugs, these do _not_ cause integer
truncation.

To avoid `fixing' these cases, I first changed the name in libkern,
and then compile-tested every file where min/max occurred in order to
confirm that it failed -- and thus confirm that nothing shadowed
min/max -- before changing it.

I have left a handful of bootloaders that are too annoying to
compile-test, and some dead code:

cobalt ews4800mips hp300 hppa ia64 luna68k vax
acorn32/if_ie.c (not included in any kernels)
macppc/if_gm.c (superseded by gem(4))

It should be easy to fix the fallout once identified -- this way of
doing things fails safe, and the goal here, after all, is to _avoid_
silent integer truncations, not introduce them.

Maybe one day we can reintroduce min/max as type-generic things that
never silently truncate. But we should avoid doing that for a while,
so that existing code has a chance to be detected by the compiler for
conversion to uimin/uimax without changing the semantics until we can
properly audit it all. (Who knows, maybe in some cases integer
truncation is actually intended!)

/src/usr.bin/vndcompress/
common.h 1.8  Sat Jul 29 21:04:07 GMT 2017  riastradh  Clarify compile-time and run-time arithmetic safety assertions.

This is an experiment with a handful of macros for writing the
checks, most of which are compile-time:

MUL_OK(t, a, b) Does a*b avoid overflow in type t?
ADD_OK(t, a, b) Does a + b avoid overflow in type t?
TOOMANY(t, x, b, m) Are there more than m b-element blocks in x in type t?
(I.e., does ceiling(x/b) > m?)

Addenda that might make sense but are not needed here:

MUL(t, a, b, &p) Set p = a*b and return 0, or return ERANGE if overflow.
ADD(t, a, b, &s) Set s = a+b and return 0, or return ERANGE if overflow.

Example:

uint32_t a = ..., b = ..., y = ..., z = ..., x, w;

/* input validation */
error = MUL(size_t, a, b, &x);
if (error)
    fail;
if (TOOMANY(uint32_t, x, BLKSIZ, MAX_NBLK))
    fail;
y = HOWMANY(x, BLKSIZ);
if (z > Z_MAX)
    fail;
...
/* internal computation */
__CTASSERT(MUL_OK(uint32_t, Z_MAX, MAX_NBLK));
w = z*y;

Obvious shortcomings:

1. Nothing checks your ctassert matches your subsequent arithmetic.
(Maybe we could have BOUNDED_MUL(t, x, xmax, y, ymax) with a
ctassert inside.)

2. Nothing flows the bounds needed by the arithmetic you use back
into candidate definitions of X_MAX/Y_MAX.

But at least the reviewer's job is only to make sure that (a) the
MUL_OK matches the *, and (b) the bounds in the assertion match the
bounds on the inputs -- in particular, the reviewer need not derive
the bounds from the context, only confirm they are supported by the
paths to it.

This is not meant to be a general-purpose proof assistant, or even a
special-purpose one like gfverif <http://gfverif.cryptojedi.org/>.
Rather, it is an experiment in adding a modicum of compile-time
verification with a simple C API change.

This also is not intended to serve as trapping arithmetic on
overflow. The goal here is to enable writing the program with
explicit checks on input and compile-time annotations on computation
to gain confidence that overflow won't happen in the computation.
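
One plausible way to define such predicates for an unsigned type t, usable in __CTASSERT because they are constant expressions when their arguments are constants (a hypothetical sketch, not the actual common.h definitions):

    /* a*b fits in t: either a is zero or b is at most T_MAX / a */
    #define MUL_OK(t, a, b)     ((a) == 0 || (t)(b) <= (t)-1 / (t)(a))

    /* a + b fits in t: a is at most T_MAX - b */
    #define ADD_OK(t, a, b)     ((t)(a) <= (t)-1 - (t)(b))

    /* more than m b-sized blocks in x, i.e. ceiling(x/b) > m */
    #define TOOMANY(t, x, b, m) \
        ((t)(x) / (t)(b) + ((t)(x) % (t)(b) != 0) > (t)(m))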

offtab.c 1.15  Sat Jul 29 21:04:07 GMT 2017  riastradh  Clarify compile-time and run-time arithmetic safety assertions.

This is an experiment with a handful of macros for writing the
checks, most of which are compile-time:

MUL_OK(t, a, b) Does a*b avoid overflow in type t?
ADD_OK(t, a, b) Does a + b avoid overflow in type t?
TOOMANY(t, x, b, m) Are there more than m b-element blocks in x in type t?
(I.e., does ceiling(x/b) > m?)

Addenda that might make sense but are not needed here:

MUL(t, a, b, &p) Set p = a*b and return 0, or return ERANGE if overflow.
ADD(t, a, b, &s) Set s = a+b and return 0, or return ERANGE if overflow.

Example:

uint32_t a = ..., b = ..., y = ..., z = ..., x, w;

/* input validation */
error = MUL(size_t, a, b, &x);
if (error)
    fail;
if (TOOMANY(uint32_t, x, BLKSIZ, MAX_NBLK))
    fail;
y = HOWMANY(x, BLKSIZ);
if (z > Z_MAX)
    fail;
...
/* internal computation */
__CTASSERT(MUL_OK(uint32_t, Z_MAX, MAX_NBLK));
w = z*y;

Obvious shortcomings:

1. Nothing checks your ctassert matches your subsequent arithmetic.
(Maybe we could have BOUNDED_MUL(t, x, xmax, y, ymax) with a
ctassert inside.)

2. Nothing flows the bounds needed by the arithmetic you use back
into candidate definitions of X_MAX/Y_MAX.

But at least the reviewer's job is only to make sure that (a) the
MUL_OK matches the *, and (b) the bounds in the assertion match the
bounds on the inputs -- in particular, the reviewer need not derive
the bounds from the context, only confirm they are supported by the
paths to it.

This is not meant to be a general-purpose proof assistant, or even a
special-purpose one like gfverif <http://gfverif.cryptojedi.org/>.
Rather, it is an experiment in adding a modicum of compile-time
verification with a simple C API change.

This also is not intended to serve as trapping arithmetic on
overflow. The goal here is to enable writing the program with
explicit checks on input and compile-time annotations on computation
to gain confidence that overflow won't happen in the computation.

H A Dvnduncompress.c1.14 Sat Jul 29 21:04:07 GMT 2017 riastradh Clarify compile-time and run-time arithmetic safety assertions.

This is an experiment with a handful of macros for writing the
checks, most of which are compile-time:

MUL_OK(t, a, b) Does a*b avoid overflow in type t?
ADD_OK(t, a, b) Does a + b avoid overflow in type t?
TOOMANY(t, x, b, m) Are there more than m b-element blocks in x in type t?
(I.e., does ceiling(x/b) > m?)
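
The macro definitions themselves live in the changed sources rather
than in this log entry; as a rough illustration only, for unsigned
types they could be written along these lines (a hypothetical sketch
that may differ from the committed code):

	/* Assumes t is an unsigned integer type, so (t)-1 is its maximum. */
	#define MUL_OK(t, a, b)	((t)(b) == 0 || (t)(a) <= (t)-1 / (t)(b))
	#define ADD_OK(t, a, b)	((t)(a) <= (t)-1 - (t)(b))

	/* Does ceiling(x/b) exceed m?  Computed as x/b + (x%b != 0) to
	   avoid the overflow risk of (x + b - 1)/b; assumes b != 0. */
	#define TOOMANY(t, x, b, m)					\
		((t)(x) / (t)(b) + ((t)(x) % (t)(b) != 0) > (t)(m))

Because these expand to plain integer expressions, the same macros can
be used both in __CTASSERT and in ordinary run-time checks.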

Addenda that might make sense but are not needed here:

MUL(t, a, b, &p) Set p = a*b and return 0, or return ERANGE if overflow.
ADD(t, a, b, &s) Set s = a+b and return 0, or return ERANGE if overflow.
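
If those run-time variants were wanted, one plausible shape -- again a
sketch built on the MUL_OK/ADD_OK predicates, not part of this change
-- would be:

	#include <errno.h>

	/* Store a*b (resp. a+b) in *p as type t, or report ERANGE. */
	#define MUL(t, a, b, p)						\
		(MUL_OK(t, a, b) ? (*(p) = (t)(a) * (t)(b), 0) : ERANGE)
	#define ADD(t, a, b, p)						\
		(ADD_OK(t, a, b) ? (*(p) = (t)(a) + (t)(b), 0) : ERANGE)

This matches the way the example below calls MUL(size_t, a, b, &x) and
tests the returned error.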

Example:

	uint32_t a = ..., b = ..., y = ..., z = ..., x, w;

	/* input validation */
	error = MUL(size_t, a, b, &x);
	if (error)
		fail;
	if (TOOMANY(uint32_t, x, BLKSIZ, MAX_NBLK))
		fail;
	y = HOWMANY(x, BLKSIZ);
	if (z > Z_MAX)
		fail;
	...
	/* internal computation */
	__CTASSERT(MUL_OK(uint32_t, Z_MAX, MAX_NBLK));
	w = z*y;

Obvious shortcomings:

1. Nothing checks that your ctassert matches your subsequent arithmetic.
(Maybe we could have BOUNDED_MUL(t, x, xmax, y, ymax) with a
ctassert inside.)

2. Nothing flows the bounds needed by the arithmetic you use back
into candidate definitions of X_MAX/Y_MAX.

But at least the reviewer's job is only to make sure that (a) the
MUL_OK matches the *, and (b) the bounds in the assertion match the
bounds on the inputs -- in particular, the reviewer need not derive
the bounds from the context, only confirm they are supported by the
paths to it.

This is not meant to be a general-purpose proof assistant, or even a
special-purpose one like gfverif <http://gfverif.cryptojedi.org/>.
Rather, it is an experiment in adding a modicum of compile-time
verification with a simple C API change.

This also is not intended to serve as trapping arithmetic on
overflow. The goal here is to enable writing the program with
explicit checks on input and compile-time annotations on computation
to gain confidence that overflow won't happen in the computation.

Completed in 132 milliseconds
