History log of /src/sys/dev/raidframe/rf_dagutils.c
Revision   Date   Author   Comments
 1.58  23-Jul-2021  oster Extensive mechanical changes to the pools used in RAIDframe.

Alloclist remains not per-RAID, so initialize that pool
separately/differently from the rest.

The remainder of pools in RF_Pools_s are now per-RAID pools. Mostly
mechanical changes to functions to allocate/destroy per-RAID pools.
Needed to make raidPtr available in certain cases to be able to find
the per-RAID pools.

Extend rf_pool_init() to now populate a per-RAID wchan value that is
unique to each pool for a given RAID device.

TODO: Complete the analysis of the minimum number of items that are
required for each pool to allow IO to progress (i.e. so that a request
for pool resources can always be satisfied), and dynamically scale
minimum pool sizes based on RAID configuration.
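
As a rough, hedged sketch of the per-RAID wchan idea described in this
entry (the container struct, field names, and init signature below are
assumptions for illustration, not the real RF_Pools_s or rf_pool_init()):

/*
 * Sketch only: each per-RAID pool gets a wchan string unique to that
 * pool and RAID unit, so threads sleeping in pool_get() for different
 * units/pools are distinguishable.  Names and layout are assumed.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/intr.h>
#include <sys/pool.h>

struct example_pools {			/* stand-in for the per-RAID pools */
	struct pool dagnode;
	char dagnode_wchan[16];		/* assumed per-pool wchan storage */
};

static void
example_pool_init(struct example_pools *p, int unit, size_t item_size)
{
	/* build a wchan unique to this RAID unit and this pool */
	snprintf(p->dagnode_wchan, sizeof(p->dagnode_wchan),
	    "raid%d_dagn", unit);
	pool_init(&p->dagnode, item_size, 0, 0, 0,
	    p->dagnode_wchan, NULL, IPL_BIO);
}
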
 1.57  10-Oct-2019  christos branches: 1.57.12;
fix the function pointer and callback mess:
- callback functions return 0 and their result is not checked; make them void.
- there are two types of callbacks and they used to overload their parameters
and the callback structure; separate them into "function" and "value"
callbacks.
- make the wait function signature consistent.
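
For illustration, the "function"/"value" split might look roughly like
the following; the descriptor type and field names are assumptions, not
the actual RAIDframe structures:

/* Both flavours return void, so there is no result for callers to ignore. */
typedef void (*example_func_cb_t)(void *arg);		  /* "function" callback */
typedef void (*example_value_cb_t)(void *arg, int value); /* "value" callback (shape assumed) */

struct example_func_callback {
	example_func_cb_t	callbackFunc;
	void			*callbackArg;
};

struct example_value_callback {
	example_value_cb_t	callbackFunc;
	void			*callbackArg;
	int			value;
};
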
 1.56  10-Feb-2019  christos Introduce PR_ZERO to avoid open-coding memset()s everywhere. OK riastradh@.
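
PR_ZERO is the pool(9) flag this entry introduces; the pool pointer and
item type below are placeholders used only to show the before/after shape:

#include <sys/param.h>
#include <sys/pool.h>

struct example_item { int dummy; };	/* placeholder item type */

static struct example_item *
example_get_item(struct pool *pp)
{
	/* before: item = pool_get(pp, PR_WAITOK); memset(item, 0, sizeof(*item)); */
	/* after:  the pool layer hands back zeroed memory */
	return pool_get(pp, PR_WAITOK | PR_ZERO);
}
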
 1.55  09-Feb-2019  christos - Change the allocation macros to be more like function calls
- Change sizeof(type) -> sizeof(*variable)
- Use macros for the long buffer length allocations
- Remove "bit polishing" memsets() -- do them only once
- Remove unnecessary casts

Thanks to oster@ for finding bugs and testing.
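
A minimal sketch of the two styles; EX_MALLOC stands in for the reworked
allocation macro and is not the real RF_Malloc() definition:

#include <sys/param.h>
#include <sys/kmem.h>

#define EX_MALLOC(size)	kmem_zalloc((size), KM_SLEEP)	/* stand-in macro */

struct example_req { int col, row; };	/* placeholder type */

static struct example_req *
example_make_req(void)
{
	struct example_req *req;

	/* function-call style, sized with sizeof(*variable) rather than sizeof(type) */
	req = EX_MALLOC(sizeof(*req));
	return req;
}
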
 1.54  07-Jan-2016  joerg branches: 1.54.18;
Don't use "for (...);" with an empty statement as the body; use an explicit continue as the body instead.
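
For illustration, the idiom change looks roughly like this (the node
type is a placeholder):

struct example_node { struct example_node *list_next; };

static int
example_count_nodes(struct example_node *head)
{
	struct example_node *np;
	int num = 0;

	/* previously: for (np = head; np != NULL; np = np->list_next, num++); */
	for (np = head; np != NULL; np = np->list_next, num++)
		continue;	/* explicit empty body instead of a bare ';' */
	return num;
}
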
 1.53  11-May-2011  mrg branches: 1.53.14; 1.53.32;
convert the main raidPtr mutex to a kmutex, and add a couple of cv's to
cover the old sleep/wakeup points for adding_hot_spare and waitForReconCond.
convert all remaining simple_lock's to kmutexes (they're not used or compiled
right now... even with all options enabled) and remove the support for them.

this leaves just a pair of tsleep()/wakeup() calls using old scheduling APIs.
 1.52  15-Mar-2009  cegger branches: 1.52.4; 1.52.6;
ansify function definitions
 1.51  04-Mar-2007  christos branches: 1.51.40; 1.51.50; 1.51.56;
Kill caddr_t; there will be some MI fallout, but it will be fixed shortly.
 1.50  16-Nov-2006  christos branches: 1.50.4;
__unused removal on arguments; approved by core.
 1.49  12-Oct-2006  christos - sprinkle __unused on function decls.
- fix a couple of unused bugs
- no more -Wno-unused for i386
 1.48  09-Jan-2006  oster branches: 1.48.18; 1.48.20;
rf_DiskUnlockFunc and rf_DiskUnlockFuncForThreads are never used. Punt them.
rf_DiskUnlockUndoFunc is in the same boat. Punt it too.
 1.47  11-Dec-2005  christos branches: 1.47.2;
merge ktrace-lwp.
 1.46  29-May-2005  christos branches: 1.46.2;
- avoid variable shadowing
- add a lot of const
- remove parameters from function declarations
 1.45  27-Feb-2005  perry branches: 1.45.2;
nuke trailing whitespace
 1.44  09-Apr-2004  oster branches: 1.44.4; 1.44.6;
These changes complete the effective removal of malloc() from all
write paths within RAIDframe. They also resolve the "panics with
RAID 5 sets with more than 3 components" issue which was present
(briefly) in the commits which were previously supposed to address
the malloc() issue.

With this new code the 5-component RAID 5 set panics are now gone.

It is now also possible to swap to RAID 5.

The changes made are:

1) Introduce rf_AllocStripeBuffer() and rf_FreeStripeBuffer() to
allocate/free one stripe's worth of space. rf_AllocStripeBuffer() is
used in rf_MapUnaccessedPortionOfStripe() where it is not sufficient to
allocate memory using just rf_AllocBuffer(). rf_FreeStripeBuffer() is
called from rf_FreeRaidAccDesc(), well after the DAG is finished.

2) Add a set of emergency "stripe buffers" to struct RF_Raid_s.
Arrange for their initialization in rf_Configure(). In low-memory
situations these buffers will be returned by rf_AllocStripeBuffer()
and re-populated by rf_FreeStripeBuffer().

3) Move RF_VoidPointerListElem_t *iobufs from the dagHeader
into struct RF_RaidAccessDesc_s. This is more consistent with the
original code, and will not result in items being freed "too early".

4) Add a RF_RaidAccessDesc_t *desc to RF_DagHeader_s so that we have a
way to find desc->iobufs.

5) Arrange for desc in the DagHeader to be initialized in InitHdrNode().

6) Don't cleanup iobufs in rf_FreeDAG() -- the freeing is now delayed
until rf_FreeRaidAccDesc() (which is how the original code handled the
allocList, and for which there seem to be some subtle, undocumented
assumptions).

7) Rename rf_AllocBuffer2() to be rf_AllocBuffer() and remove the
former rf_AllocBuffer(). Fix all callers of rf_AllocBuffer().
(This was how it was *supposed* to be after the last time these
changes were made, before they were backed out).

8) Remove RF_IOBufHeader and all references to it.

9) Remove desc->cleanupList and all references to it.

Fixes PR#20191
 1.43  23-Mar-2004  oster branches: 1.43.2;
Partially back out some changes that were causing grief with
RAID5 sets with more than 3 drives. Still need to figure out why
the original changes were losing, but need the version in tree reliable
first!

Huge THANKS to Juergen Hannken-Illjes for helping track down
the changes that were causing the lossage.
 1.42  20-Mar-2004  oster Change signature of rf_AllocBuffer() to take a dag_h and buffer size
instead of a PDA and an alloclist. This lets us do the vple dance
inside of rf_AllocBuffer().

Cleanup usage of rf_AllocIOBuffer() and use rf_AllocBuffer() instead.

Fix all uses of rf_AllocBuffer() to conform to the new way of doing
things.
 1.41  20-Mar-2004  oster For each RAID set, pre-allocate a number of "emergency buffers" to be
used in the event that we can't malloc a buffer of the appropriate
size in the traditional way. rf_AllocIOBuffer() and rf_FreeIOBuffer()
deal with allocating/freeing these structures. These buffers are
stored on the 'iobuf' list. iobuf_count keeps track of how
many buffers are available, and numEmergencyBuffers is the effective
"high-water" mark for the freelist. The buffers allocated by
rf_AllocIOBuffer() are stripe-unit sized, which is the maximum
size requested by any of the callers.

Add an iobufs entry to RF_DagHeader_s. Use it for keeping track of
buffers that get allocated from the free-list.

Add a "generic list" pool (VoidPointerListElement Pool) for elements
used to maintain a list of allocated memory. [It is somewhat less
than ideal to add another little pool to handle this...]

Teach rf_AllocBuffer() to use the new rf_AllocIOBuffer(). Modify
other Mallocs to use rf_AllocIOBuffer(), and to update dag_h->iobufs as
appropriate.

Update rf_FreeDAG() to handle cleanup of dag_h->iobufs.

While here, add some missing pool_destroy() calls for a number of pools.

With these changes, it should (in theory) be possible to swap on
RAID 5 sets again. That said, I've not had any success there yet --
but the last issue I saw at least wasn't in RAIDframe. :-}

[There is room for this code to become a bit more concise, but I
wanted to do a checkpoint here with something known to work :) ]
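
A hedged sketch of the emergency-buffer fallback described in this
entry; the list handling, locking, and type names below are simplified
placeholders, not the actual rf_AllocIOBuffer()/rf_FreeIOBuffer() code:

#include <sys/param.h>
#include <sys/kmem.h>
#include <sys/queue.h>

struct example_iobuf {
	SIMPLEQ_ENTRY(example_iobuf) entries;
	/* buffer space would follow in the real structure */
};

struct example_raid {
	SIMPLEQ_HEAD(, example_iobuf) iobufs;	/* emergency buffers */
	int iobuf_count;			/* buffers currently on the list */
	int numEmergencyBuffers;		/* high-water mark for the list */
	size_t stripe_unit_bytes;		/* largest size any caller requests */
};

static struct example_iobuf *
example_alloc_iobuf(struct example_raid *r)
{
	struct example_iobuf *b;

	/* try the traditional allocation first */
	b = kmem_alloc(sizeof(*b) + r->stripe_unit_bytes, KM_NOSLEEP);
	if (b != NULL)
		return b;

	/* low memory: hand out a pre-allocated, stripe-unit sized buffer */
	b = SIMPLEQ_FIRST(&r->iobufs);
	if (b != NULL) {
		SIMPLEQ_REMOVE_HEAD(&r->iobufs, entries);
		r->iobuf_count--;
	}
	return b;
}

static void
example_free_iobuf(struct example_raid *r, struct example_iobuf *b)
{
	if (r->iobuf_count < r->numEmergencyBuffers) {
		/* re-populate the emergency list up to the high-water mark */
		SIMPLEQ_INSERT_HEAD(&r->iobufs, b, entries);
		r->iobuf_count++;
	} else
		kmem_free(b, sizeof(*b) + r->stripe_unit_bytes);
}
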
 1.40  19-Mar-2004  oster Introduce a dual-purpose pool for providing pointer and param "caches"
for RF_DagNode_t's. Scale the structure size based on RF_MAXCOL.
Use the new allocation method in InitNode(). Note that we can't get
rid of the mallocs in there until we can prove that this new
allocation method is a strict upper bound. Unless someone tries
running a RAID set with 40 components, the mallocs here shouldn't
be an issue. (and if someone does make a set with 40 components
they will run into other issues with other constants long before
then)
 1.39  19-Mar-2004  oster Take care of six more mallocs:

- Pull rf_FreePhysDiskAddr() out from under a #ifdef, since we're now
going to use it.

- Add a pda_cleanup_list into the DAG header. Use it in rf_FreeDAG() to
cleanup any PDA's that get allocated but have no "easy" way of being
located and freed when the DAG completes.

- numStripeUnitsAccessed is a per-stripe value, and has a maximum
value equal to the number of columns (thus limited by RF_MAXCOL).
Use this knowledge to set an upper bound on overlappingPDAs, and stuff
it on the stack instead of malloc'ing it all the time! This costs us
a whopping 40 bytes on the stack, but saves a malloc() and a free().
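
For illustration, the stack-array pattern looks like the following;
RF_MAXCOL is real, but the constant, function, and parameter names
below are placeholders:

#include <sys/param.h>
#include <sys/systm.h>

#define EX_MAXCOL 40	/* stands in for RF_MAXCOL */

/*
 * The number of stripe units accessed can never exceed the column count,
 * so a small fixed array on the stack replaces a malloc()/free() pair.
 */
static int
example_count_overlaps(const int *col_of_pda, int npda)
{
	char overlappingPDAs[EX_MAXCOL];
	int i, overlaps = 0;

	memset(overlappingPDAs, 0, sizeof(overlappingPDAs));
	for (i = 0; i < npda; i++) {
		if (overlappingPDAs[col_of_pda[i]])
			overlaps++;
		overlappingPDAs[col_of_pda[i]] = 1;
	}
	return overlaps;
}
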
 1.38  18-Mar-2004  oster - Introduce a 'dagnode' pool. Initialize it and allow for cleanup.
Provide rf_AllocDAGNode() and rf_FreeDAGNode() to handle
allocation/freeing.

- Introduce a "nodes" linked list of RF_DagNode_t's into the DAG header.
Initialize nodes in InitHdrNode(). Arrange for nodes cleanup in rf_FreeDAG().

- Add a "list_next" to RF_DagNode_t to keep track of nodes on the
above "nodes" list. (This is distinct from the "next" field of
RF_DagNode_t, which keeps track of the firing order of nodes.)
"list_next" gets used in the cleanup routines, and in traversing
through a set of nodes that belong to a particular set of nodes
(e.g. those belonging to xorNodes for a given DAG).

- use rf_AllocDAGNode() instead of mallocs of variable-sized arrays of
RF_DagNode_t's. Mostly mechanical changes to convert the DAG construction
from "access nodes via an array index" to "access nodes via a 'nextnode'
pointer".

- rework a couple of tricky spots where assumptions about the node order
were being abused.

- performance remains consistent with performance before these changes.

[Thanks to Simon Burge (simonb at you.know.where) for looking over
the mechanical changes to make sure I didn't biff anything.]
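
A hedged sketch of the "nodes" bookkeeping list described above; the
real RF_DagNode_t and RF_DagHeader_s carry many more fields, and
allocation goes through rf_AllocDAGNode() and the dagnode pool rather
than these placeholders:

#include <sys/pool.h>

struct example_dagnode {
	struct example_dagnode *next;		/* firing order within the DAG */
	struct example_dagnode *list_next;	/* all nodes owned by this header */
};

struct example_dagheader {
	struct example_dagnode *nodes;		/* every node allocated for this DAG */
};

static void
example_attach_node(struct example_dagheader *dag_h, struct example_dagnode *n)
{
	/* thread a freshly allocated node onto the header's cleanup list */
	n->list_next = dag_h->nodes;
	dag_h->nodes = n;
}

static void
example_free_nodes(struct example_dagheader *dag_h, struct pool *dagnode_pool)
{
	struct example_dagnode *n, *next;

	/* walk list_next (not the firing-order "next") to release every node */
	for (n = dag_h->nodes; n != NULL; n = next) {
		next = n->list_next;
		pool_put(dagnode_pool, n);
	}
	dag_h->nodes = NULL;
}
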
 1.37  07-Mar-2004  oster - Introduce rf_pools which contains all of the various global pools used
by RAIDframe. Convert all other RAIDframe global pools to use pools
defined within this new structure.
- Introduce rf_pool_init(), used for initializing a single pool in
RAIDframe. Teach each of the configuration routines to use
rf_pool_init().
- Cleanup a few pool-related comments.
- Cleanup revent initialization and #defines.
- Add a missing pool_destroy() for the reconbuffer pool.

(Saves another 1K off of an i386 GENERIC kernel, and makes
stuff a lot more readable)
 1.36  07-Mar-2004  oster - Introduce rf_pools which contains all of the various global pools used
by RAIDframe. Convert all other RAIDframe global pools to use pools
defined within this new structure.
- Introduce rf_pool_init(), used for initializing a single pool in
RAIDframe. Teach each of the configuration routines to use
rf_pool_init().
- Cleanup a few pool-related comments.
- Cleanup revent initialization and #defines.
- Add a missing pool_destroy() for the reconbuffer pool.

(Saves another 1K off of an i386 GENERIC kernel, and makes
stuff a lot more readable)
 1.35  07-Mar-2004  oster Re-work rf_GenerateFailedAccessASMs() to simplify things a bit.
rf_AllocBuffer() is available, so use it to get buffer space instead
of the previous RF_Malloc() bits. Saves a few bytes, but more
importantly makes the code much more readable.
 1.34  06-Mar-2004  oster Pretty up a bit of unused code.
 1.33  06-Mar-2004  oster rf_AllocBuffer() doesn't do anything with its dag_h parameter. Nuke
it, and adjust callers.
 1.32  06-Mar-2004  oster Minor tabbing cleanup. No functional change.
 1.31  05-Mar-2004  oster Introduce RF_DEBUG_DAG and use it to #if-out rf_dagDebug sections.
(i386 GENERIC kernel shrinks by 1.6K)
 1.30  05-Mar-2004  oster - remove the RF_*_INC's, as necessary. They are not needed any more.
- introduce RF_MIN_*'s, as necessary. These will indicate the
low-water mark for pools as well as the pool_prime() value.
- add pool_setlowat() for the critical pools.
- pool_prime() and pool_setlowat() the raidframe_cbufpool.
- re-order some pool_prime()'s and pool_sethiwat()'s for clarity.
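
pool_prime() and pool_setlowat() are the standard pool(9) calls referred
to here; the pool pointer and count below are placeholders for the cbuf
pool and its RF_MIN_* value:

#include <sys/pool.h>

#define EX_MIN_CBUF 32	/* placeholder; the real RF_MIN_* values differ */

static void
example_prime_cbuf_pool(struct pool *pp)
{
	/* pre-populate the pool and keep at least EX_MIN_CBUF items cached */
	pool_prime(pp, EX_MIN_CBUF);
	pool_setlowat(pp, EX_MIN_CBUF);
}
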
 1.29  29-Feb-2004  oster Adjust _rf_ShutdownCreate() so that it is willing to wait for more
memory. Since we now only ever "return(0)", just return (void)
instead.

Cleanup all uses of rf_ShutdownCreate() to not worry about
it ever failing. Shaves another 600 bytes off of an i386 GENERIC kernel.
 1.28  29-Feb-2004  oster We'd better have gotten a dag header from the pool. In any event, callers
aren't checking what we return anyway. (Cleanup memory allocations.)
 1.27  29-Feb-2004  oster Stripe functions are now handled by a linked-list instead of a
runtime-variable array.

Fix a bug where stripeFuncs was being freed, and then being used after
(in the case of numStripesBailed > 0).
 1.26  27-Feb-2004  oster Add forgotten pool_destroy().
 1.25  27-Feb-2004  oster Use a dynamically allocated linked list of dagLists instead of using a
dynamically allocated variable-sized array (dagArray). Convert code
to use the new linked list stuff instead of the array stuff (the ratio
of one dagList per stripe still applies). The big advantage is in
being able to more efficiently allocate the dagLists on-the-fly, and
not have to know the size(s) of the array beforehand.
 1.24  10-Jan-2004  oster Since the LOCK and UNLOCK flags are never used, there is no need to extract them.
Collapse the related variables down to zero. That means 'flags' is 0
as well. Nuke the extraction macros, a bunch of the variables, and replace
'flags' as well.
 1.23  30-Dec-2003  oster Some days you wonder if some of the function declaration consistency
was just an accident in the first place. Cleanup function decls and
a few comments. [ok.. so I wasn't going to fix this many.. but once
you're on a roll....]
 1.22  29-Dec-2003  oster - first kick at a major reworking of RAIDframe's memory allocation code:
- all freelists converted to pools
- initialization of structure members in certain cases where
code was relying on specific allocation and usage properties
to keep structures in a "known state" (that doesn't work with
pools!).
- make most pool_get() be "PR_WAITOK" until they can be analyzed
further, and/or have proper error handling added.
- all RF_Mallocs zero the space returned, so there is no difference
between RF_Calloc and RF_Malloc. In fact, all the RF_Calloc()'s
tend to do is get things horribly confused.
Make RF_Malloc() the "general memory allocator", with
RF_MallocAndAdd() the "general memory allocator with
allocation list".
- some of these RF_Malloc's et al. are destined to disappear.
- remove rf_rdp_freelist entirely (it's not used anywhere!)
- remove: #include "rf_freelist.h"
- to the files that were relying on the above, add: #include "rf_general.h"
- add: #include "rf_debugMem.h" to rf_shutdown.h to make it happy
about the loss of: #include "rf_freelist.h".

This shrinks an i386 GENERIC kernel by approx 5K. RAIDframe now
weighs in at about 162K on i386.
 1.21  29-Dec-2003  oster [Having received a definite lack of strenuous objection, a small amount
of strenuous agreement, and some general agreement, this commit is
going ahead because it's now starting to block some other changes I
wish to make.]

Remove most of the support for the concept of "rows" from RAIDframe.
While the "row" interface has been exported to the world, RAIDframe
internals have really only supported a single row, even though they
have feigned support of multiple rows.

Nothing changes in configuration land -- config files still need to
specify a single row, etc. All auto-config structures remain fully
forward/backwards compatible.

The only visible difference to the average user should be a
reduction in the size of a GENERIC kernel (i386) by 4.5K. For those
of us trolling through RAIDframe kernel code, a lot of the driver
configuration code has become a LOT easier to read.
 1.20  09-Feb-2003  jdolecek branches: 1.20.2;
constify some
 1.19  22-Nov-2002  oster rf_SelectMirrorDiskPartition() is only needed in a few cases. #if it
out in the rest. Thanks to Krister!
 1.18  23-Sep-2002  oster Remove unneeded variables and lame assignments. Thanks Simon B.!
 1.17  21-Sep-2002  oster rf_MakePropListEntry isn't used anywhere, so nuke it. Thanks Krister!
 1.16  19-Sep-2002  oster Another couple of functions that aren't used unless one is debugging RAIDframe.
 1.15  14-Sep-2002  oster Everyone and their dog was using RF_ERRORMSG3 to print out the same
sort of error message, over and over again, in different files.
Rather than having the same text repeated in multiple .o files,
create a couple of little functions to do the printing, and save a
bundle of space. Also improves readability of code.
 1.14  02-Aug-2002  oster Nuke stuff dealing with the experimental memChunk code. It's unused, and
currently only contributing to bloat.
 1.13  13-Jul-2002  oster Most folks won't need the DAG printing and verification routines.
Introduce a #define to toggle them on/off. Disable calls to
rf_PrintDAGList(). Saves ~6K on GENERIC+DEBUG kernel on i386.
 1.12  13-Jul-2002  oster Minor cleanup.
 1.11  13-Jul-2002  oster rf_compute_workload_shift() is only used by the CHAINDECLUSTER stuff,
so only include it if needed.
 1.10  04-Mar-2002  wiz branches: 1.10.6;
Correct misspellings of "failed".
 1.9  13-Nov-2001  lukem add RCSIDs
 1.8  04-Oct-2001  oster Step 2 of the disentanglement. We now look to <dev/raidframe/*> for
the stuff that used to live in rf_types.h, rf_raidframe.h, rf_layout.h,
rf_netbsd.h, rf_raid.h, rf_decluster,h, and a few other places.
Believe it or not, when this is all done, things will be cleaner.

No functional changes to RAIDframe.
 1.7  18-Jul-2001  thorpej branches: 1.7.2;
bzero -> memset
 1.6  09-Dec-1999  oster branches: 1.6.6; 1.6.8;
Trust only the data disk if the mirror is not known to be up-to-date.
(this should have been committed with a previous fix for the same
problem in another function in this file :( )
 1.5  09-Nov-1999  oster If we have a choice: do not trust the parity disk for read
balancing in a RAID 1 set if we know that the parity might not
be up-to-date. Thanks to Thor for bringing this to my attention.
 1.4  13-Aug-1999  oster branches: 1.4.2; 1.4.4; 1.4.8;
rf_sys.h does not need to be #included in any of these files, and, actually,
is no longer needed at all.
 1.3  05-Feb-1999  oster branches: 1.3.2;
Phase 2 of the RAIDframe cleanup. The source is now closer to KNF
and is much easier to read. No functionality changes.
 1.2  26-Jan-1999  oster RAIDframe cleanup, phase 1. Nuke simulator support, user-land driver,
out-dated comments, and other unneeded stuff. This helps prepare
for cleaning up the rest of the code, and adding new functionality.

No functional changes to the kernel code in this commit.
 1.1  13-Nov-1998  oster RAIDframe, version 1.1, from the Parallel Data Laboratory at
Carnegie Mellon University. Full RAID implementation, including
levels 0, 1, 4, 5, 6, parity logging, and a few other goodies.
Ported to NetBSD by Greg Oster.
 1.3.2.2  16-Dec-1999  he Pull up revision 1.6 (requested by oster):
Trust only the data disk if the mirror is not known to
be up-to-date.
 1.3.2.1  09-Nov-1999  he Pull up revision 1.5 (requested by oster):
Do not trust the parity disk for read balancing in a RAID 1 set
if we know that the parity might not be up-to-date (and if we
have a choice in the matter).
 1.4.8.1  27-Dec-1999  wrstuden Pull up to last week's -current.
 1.4.4.1  15-Nov-1999  fvdl Sync with -current
 1.4.2.1  20-Nov-2000  bouyer Update thorpej_scsipi to -current as of a month ago
An i386 GENERIC kernel compiles without the siop, ahc and bha drivers
(will be updated later). i386 IDE/ATAPI and ncr work, as well as
sparc/esp_sbus. alpha should work as well (untested yet).
siop, ahc and bha will be updated once I've updated the branch to current
-current, as well as machine-dependent code.
 1.6.8.5  10-Oct-2002  jdolecek sync kqueue with -current; this includes merge of gehenna-devsw branch,
merge of i386 MP branch, and part of autoconf rototil work
 1.6.8.4  06-Sep-2002  jdolecek sync kqueue branch with HEAD
 1.6.8.3  16-Mar-2002  jdolecek Catch up with -current.
 1.6.8.2  10-Jan-2002  thorpej Sync kqueue branch with -current.
 1.6.8.1  03-Aug-2001  lukem update to -current
 1.6.6.9  11-Dec-2002  thorpej Sync with HEAD.
 1.6.6.8  18-Oct-2002  nathanw Catch up to -current.
 1.6.6.7  17-Sep-2002  nathanw Catch up to -current.
 1.6.6.6  13-Aug-2002  nathanw Catch up to -current.
 1.6.6.5  01-Aug-2002  nathanw Catch up to -current.
 1.6.6.4  01-Apr-2002  nathanw Catch up to -current.
(CVS: It's not just a program. It's an adventure!)
 1.6.6.3  14-Nov-2001  nathanw Catch up to -current.
 1.6.6.2  22-Oct-2001  nathanw Catch up to -current.
 1.6.6.1  24-Aug-2001  nathanw Catch up with -current.
 1.7.2.1  11-Oct-2001  fvdl Catch up with -current. Fix some bogons in the sparc64 kbd/ms
attach code. cd18xx conversion provided by mrg.
 1.10.6.2  29-Aug-2002  gehenna catch up with -current.
 1.10.6.1  15-Jul-2002  gehenna catch up with -current.
 1.20.2.5  10-Nov-2005  skrll Sync with HEAD. Here we go again...
 1.20.2.4  04-Mar-2005  skrll Sync with HEAD.

Hi Perry!
 1.20.2.3  21-Sep-2004  skrll Fix the sync with head I botched.
 1.20.2.2  18-Sep-2004  skrll Sync with HEAD.
 1.20.2.1  03-Aug-2004  skrll Sync with HEAD
 1.43.2.1  11-Apr-2004  tron Pull up revision 1.44 (requested by oster in ticket #123):
These changes complete the effective removal of malloc() from all
write paths within RAIDframe. They also resolve the "panics with
RAID 5 sets with more than 3 components" issue which was present
(briefly) in the commits which were previously supposed to address
the malloc() issue.
With this new code the 5-component RAID 5 set panics are now gone.
It is now also possible to swap to RAID 5.
The changes made are:
1) Introduce rf_AllocStripeBuffer() and rf_FreeStripeBuffer() to
allocate/free one stripe's worth of space. rf_AllocStripeBuffer() is
used in rf_MapUnaccessedPortionOfStripe() where it is not sufficient to
allocate memory using just rf_AllocBuffer(). rf_FreeStripeBuffer() is
called from rf_FreeRaidAccDesc(), well after the DAG is finished.
2) Add a set of emergency "stripe buffers" to struct RF_Raid_s.
Arrange for their initialization in rf_Configure(). In low-memory
situations these buffers will be returned by rf_AllocStripeBuffer()
and re-populated by rf_FreeStripeBuffer().
3) Move RF_VoidPointerListElem_t *iobufs from the dagHeader
into struct RF_RaidAccessDesc_s. This is more consistent with the
original code, and will not result in items being freed "too early".
4) Add a RF_RaidAccessDesc_t *desc to RF_DagHeader_s so that we have a
way to find desc->iobufs.
5) Arrange for desc in the DagHeader to be initialized in InitHdrNode().
6) Don't cleanup iobufs in rf_FreeDAG() -- the freeing is now delayed
until rf_FreeRaidAccDesc() (which is how the original code handled the
allocList, and for which there seem to be some subtle, undocumented
assumptions).
7) Rename rf_AllocBuffer2() to be rf_AllocBuffer() and remove the
former rf_AllocBuffer(). Fix all callers of rf_AllocBuffer().
(This was how it was *supposed* to be after the last time these
changes were made, before they were backed out).
8) Remove RF_IOBufHeader and all references to it.
9) Remove desc->cleanupList and all references to it.
Fixes PR#20191
 1.44.6.1  19-Mar-2005  yamt sync with head. xen and whitespace. xen part is not finished.
 1.44.4.1  29-Apr-2005  kent sync with -current
 1.45.2.1  17-Jun-2005  tron Pull up revision 1.46 (requested by oster in ticket #472):
- avoid variable shadowing
- add a lot of const
- remove parameters from function declarations
 1.46.2.3  03-Sep-2007  yamt sync with head.
 1.46.2.2  30-Dec-2006  yamt sync with head.
 1.46.2.1  21-Jun-2006  yamt sync with head.
 1.47.2.1  15-Jan-2006  yamt sync with head.
 1.48.20.2  10-Dec-2006  yamt sync with head.
 1.48.20.1  22-Oct-2006  yamt sync with head
 1.48.18.1  18-Nov-2006  ad Sync with head.
 1.50.4.1  12-Mar-2007  rmind Sync with HEAD.
 1.51.56.1  13-May-2009  jym Sync with HEAD.

Commit is split, to avoid a "too many arguments" protocol error.
 1.51.50.1  28-Apr-2009  skrll Sync with HEAD.
 1.51.40.1  04-May-2009  yamt sync with head.
 1.52.6.1  06-Jun-2011  jruoho Sync with HEAD.
 1.52.4.1  31-May-2011  rmind sync with head
 1.53.32.1  19-Mar-2016  skrll Sync with HEAD
 1.53.14.1  03-Dec-2017  jdolecek update from HEAD
 1.54.18.2  13-Apr-2020  martin Mostly merge changes from HEAD upto 20200411
 1.54.18.1  10-Jun-2019  christos Sync with HEAD
 1.57.12.1  01-Aug-2021  thorpej Sync with HEAD.
