; $NetBSD: milli.S,v 1.3 2022/06/13 16:00:05 skrll Exp $
;
; $OpenBSD: milli.S,v 1.5 2001/03/29 04:08:20 mickey Exp $
;
;  (c) Copyright 1986 HEWLETT-PACKARD COMPANY
;
;  To anyone who acknowledges that this file is provided "AS IS"
;  without any express or implied warranty:
;      permission to use, copy, modify, and distribute this file
;  for any purpose is hereby granted without fee, provided that
;  the above copyright notice and this notice appears in all
;  copies, and that the name of Hewlett-Packard Company not be
;  used in advertising or publicity pertaining to distribution
;  of the software without specific, written prior permission.
;  Hewlett-Packard Company makes no representations about the
;  suitability of this software for any purpose.
;

; Standard Hardware Register Definitions for Use with Assembler
; version A.08.06
;	- fr16-31 added at Utah
;~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
; Hardware General Registers
r0:	.equ	0
r1:	.equ	1
r2:	.equ	2
r3:	.equ	3
r4:	.equ	4
r5:	.equ	5
r6:	.equ	6
r7:	.equ	7
r8:	.equ	8
r9:	.equ	9
r10:	.equ	10
r11:	.equ	11
r12:	.equ	12
r13:	.equ	13
r14:	.equ	14
r15:	.equ	15
r16:	.equ	16
r17:	.equ	17
r18:	.equ	18
r19:	.equ	19
r20:	.equ	20
r21:	.equ	21
r22:	.equ	22
r23:	.equ	23
r24:	.equ	24
r25:	.equ	25
r26:	.equ	26
r27:	.equ	27
r28:	.equ	28
r29:	.equ	29
r30:	.equ	30
r31:	.equ	31

; Hardware Space Registers
sr0:	.equ	0
sr1:	.equ	1
sr2:	.equ	2
sr3:	.equ	3
sr4:	.equ	4
sr5:	.equ	5
sr6:	.equ	6
sr7:	.equ	7

; Hardware Floating Point Registers
fr0:	.equ	0
fr1:	.equ	1
fr2:	.equ	2
fr3:	.equ	3
fr4:	.equ	4
fr5:	.equ	5
fr6:	.equ	6
fr7:	.equ	7
fr8:	.equ	8
fr9:	.equ	9
fr10:	.equ	10
fr11:	.equ	11
fr12:	.equ	12
fr13:	.equ	13
fr14:	.equ	14
fr15:	.equ	15
fr16:	.equ	16
fr17:	.equ	17
fr18:	.equ	18
fr19:	.equ	19
fr20:	.equ	20
fr21:	.equ	21
fr22:	.equ	22
fr23:	.equ	23
fr24:	.equ	24
fr25:	.equ	25
fr26:	.equ	26
fr27:	.equ	27
fr28:	.equ	28
fr29:	.equ	29
fr30:	.equ	30
fr31:	.equ	31

; Hardware Control Registers
cr0:	.equ	0
rctr:	.equ	0	; Recovery Counter Register

cr8:	.equ	8	; Protection ID 1
pidr1:	.equ	8

cr9:	.equ	9	; Protection ID 2
pidr2:	.equ	9

cr10:	.equ	10
ccr:	.equ	10	; Coprocessor Configuration Register

cr11:	.equ	11
sar:	.equ	11	; Shift Amount Register

cr12:	.equ	12
pidr3:	.equ	12	; Protection ID 3

cr13:	.equ	13
pidr4:	.equ	13	; Protection ID 4

cr14:	.equ	14
iva:	.equ	14	; Interrupt Vector Address

cr15:	.equ	15
eiem:	.equ	15	; External Interrupt Enable Mask

cr16:	.equ	16
itmr:	.equ	16	; Interval Timer

cr17:	.equ	17
pcsq:	.equ	17	; Program Counter Space queue

cr18:	.equ	18
pcoq:	.equ	18	; Program Counter Offset queue

cr19:	.equ	19
iir:	.equ	19	; Interruption Instruction Register

cr20:	.equ	20
isr:	.equ	20	; Interruption Space Register

cr21:	.equ	21
ior:	.equ	21	; Interruption Offset Register

cr22:	.equ	22
ipsw:	.equ	22	; Interruption Processor Status Word

cr23:	.equ	23
eirr:	.equ	23	; External Interrupt Request

cr24:	.equ	24
ppda:	.equ	24	; Physical Page Directory Address
tr0:	.equ	24	; Temporary register 0

cr25:	.equ	25
hta:	.equ	25	; Hash Table Address
tr1:	.equ	25	; Temporary register 1

cr26:	.equ	26
tr2:	.equ	26	; Temporary register 2

cr27:	.equ	27
tr3:	.equ	27	; Temporary register 3

cr28:	.equ	28
tr4:	.equ	28	; Temporary register 4

cr29:	.equ	29
tr5:	.equ	29	; Temporary register 5

cr30:	.equ	30
tr6:	.equ	30	; Temporary register 6

cr31:	.equ	31
tr7:	.equ	31	; Temporary register 7

;~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
; Procedure Call Convention                                              ~
; Register Definitions for Use with Assembler                            ~
; version A.08.06                                                        ~
;~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
; Software Architecture General Registers
rp:	.equ	r2	; return pointer
mrp:	.equ	r31	; millicode return pointer
ret0:	.equ	r28	; return value
ret1:	.equ	r29	; return value (high part of double)
sl:	.equ	r29	; static link
sp:	.equ	r30	; stack pointer
dp:	.equ	r27	; data pointer
arg0:	.equ	r26	; argument
arg1:	.equ	r25	; argument or high part of double argument
arg2:	.equ	r24	; argument
arg3:	.equ	r23	; argument or high part of double argument
;_____________________________________________________________________________
; Software Architecture Space Registers
;		sr0	; return link from BLE
sret:	.equ	sr1	; return value
sarg:	.equ	sr1	; argument
;		sr4	; PC SPACE tracker
;		sr5	; process private data
;_____________________________________________________________________________
; Software Architecture Pseudo Registers
previous_sp:	.equ	64	; old stack pointer (locates previous frame)

;~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
; Standard space and subspace definitions.  version A.08.06
; These are generally suitable for programs on HP_UX and HPE.
; Statements commented out are used when building such things as operating
; system kernels.
;;;;;;;;;;;;;;;;
; Additional code subspaces should have ALIGN=8 for an interspace BV
; and should have SORT=24.
;
; For an incomplete executable (program bound to shared libraries),
; sort keys $GLOBAL$ -1 and $GLOBAL$ -2 are reserved for the $DLT$
; and $PLT$ subspaces respectively.
;;;;;;;;;;;;;;;

	.text
	.EXPORT	$$remI,millicode
;	.IMPORT	cerror
$$remI:
	.PROC
	.CALLINFO NO_CALLS
	.ENTRY
	addit,=	0,arg1,r0
	add,>=	r0,arg0,ret1
	sub	r0,ret1,ret1
	sub	r0,arg1,r1
	ds	r0,r1,r0
	or	r0,r0,r1
	add	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	movb,>=,n	r1,ret1,remI300
	add,<	arg1,r0,r0
	add,tr	r1,arg1,ret1
	sub	r1,arg1,ret1
remI300: add,>=	arg0,r0,r0

	sub	r0,ret1,ret1
	bv	r0(r31)
	nop
	.EXIT
	.PROCEND

bit1:	.equ	1
bit30:	.equ	30
bit31:	.equ	31

len2:	.equ	2
len4:	.equ	4

#if 0
$$dyncall:
	.proc
	.callinfo NO_CALLS
	.export $$dyncall,MILLICODE

	bb,>=,n	22,bit30,noshlibs

	depi	0,bit31,len2,22
	ldw	4(22),19
	ldw	0(22),22
noshlibs:
	ldsid	(22),r1
	mtsp	r1,sr0
	be	0(sr0,r22)
	stw	rp,-24(sp)
	.procend

$$sh_func_adrs:
	.proc
	.callinfo NO_CALLS
	.export $$sh_func_adrs, millicode
	ldo	0(r26),ret1
	dep	r0,30,1,r26
	probew	(r26),r31,r22
	extru,=	r22,31,1,r22
	bv	r0(r31)
	ldws	0(r26),ret1
	.procend
#endif

temp:	.EQU	r1

retreg:	.EQU	ret1	; r29

	.export	$$divU,millicode
	.import	$$divU_3,millicode
	.import	$$divU_5,millicode
	.import	$$divU_6,millicode
	.import	$$divU_7,millicode
	.import	$$divU_9,millicode
	.import	$$divU_10,millicode
	.import	$$divU_12,millicode
	.import	$$divU_14,millicode
	.import	$$divU_15,millicode
$$divU:
	.proc
	.callinfo NO_CALLS
; The subtract is not nullified since it does no harm and can be used
; by the two cases that branch back to "normal".
	comib,>=	15,arg1,special_divisor
	sub	r0,arg1,temp		; clear carry, negate the divisor
	ds	r0,temp,r0		; set V-bit to 1
normal:
	add	arg0,arg0,retreg	; shift msb bit into carry
	ds	r0,arg1,temp		; 1st divide step, if no carry
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 2nd divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 3rd divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 4th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 5th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 6th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 7th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 8th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 9th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 10th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 11th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 12th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 13th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 14th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 15th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 16th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 17th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 18th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 19th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 20th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 21st divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 22nd divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 23rd divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 24th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 25th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 26th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 27th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 28th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 29th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 30th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 31st divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 32nd divide step,
	bv	0(r31)
	addc	retreg,retreg,retreg	; shift last retreg bit into retreg
;_____________________________________________________________________________
; handle the cases where divisor is a small constant or has high bit on
special_divisor:
	blr	arg1,r0
	comib,>,n	0,arg1,big_divisor	; nullify previous instruction
zero_divisor:	; this label is here to provide external visibility

	addit,=	0,arg1,0		; trap for zero dvr
	nop
	bv	0(r31)			; divisor == 1
	copy	arg0,retreg
	bv	0(r31)			; divisor == 2
	extru	arg0,30,31,retreg
	b,n	$$divU_3		; divisor == 3
	nop
	bv	0(r31)			; divisor == 4
	extru	arg0,29,30,retreg
	b,n	$$divU_5		; divisor == 5
	nop
	b,n	$$divU_6		; divisor == 6
	nop
	b,n	$$divU_7		; divisor == 7
	nop
	bv	0(r31)			; divisor == 8
	extru	arg0,28,29,retreg
	b,n	$$divU_9		; divisor == 9
	nop
	b,n	$$divU_10		; divisor == 10
	nop
	b	normal			; divisor == 11
	ds	r0,temp,r0		; set V-bit to 1
	b,n	$$divU_12		; divisor == 12
	nop
	b	normal			; divisor == 13
	ds	r0,temp,r0		; set V-bit to 1
	b,n	$$divU_14		; divisor == 14
	nop
	b,n	$$divU_15		; divisor == 15
	nop
;_____________________________________________________________________________
; Handle the case where the high bit is on in the divisor.
; Compute:	if( dividend>=divisor) quotient=1; else quotient=0;
; Note: dividend>=divisor iff dividend-divisor does not borrow
; and not borrow iff carry
big_divisor:
	sub	arg0,arg1,r0
	bv	0(r31)
	addc	r0,r0,retreg
	.procend
	.end

t2:	.EQU	r1

; x2	.EQU	arg0	; r26
t1:	.EQU	arg1	; r25

; x1	.EQU	ret1	; r29
;_____________________________________________________________________________

$$divide_by_constant:
	.PROC
	.CALLINFO NO_CALLS
	.export	$$divide_by_constant,millicode
; Provides a "nice" label for the code covered by the unwind descriptor
; for things like gprof.
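;_____________________________________________________________________________
; The power-of-two cases that follow use the standard round-toward-zero
; idiom: add (divisor - 1) to a negative dividend, then shift right
; arithmetically.  A rough C sketch of what $$divI_2/_4/_8/_16 compute
; (illustrative pseudocode only, not part of this millicode; "k" is the
; log2 of the divisor):
;
;	int divI_pow2(int x, int k)
;	{
;		if (x < 0)
;			x += (1 << k) - 1;	; COMCLR,>= / ADDI pair
;		return x >> k;			; arithmetic shift (EXTRS)
;	}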
$$divI_2:
	.EXPORT		$$divI_2,MILLICODE
	COMCLR,>=	arg0,0,0
	ADDI		1,arg0,arg0
	bv		0(r31)
	EXTRS		arg0,30,31,ret1

$$divI_4:
	.EXPORT		$$divI_4,MILLICODE
	COMCLR,>=	arg0,0,0
	ADDI		3,arg0,arg0
	bv		0(r31)
	EXTRS		arg0,29,30,ret1

$$divI_8:
	.EXPORT		$$divI_8,MILLICODE
	COMCLR,>=	arg0,0,0
	ADDI		7,arg0,arg0
	bv		0(r31)
	EXTRS		arg0,28,29,ret1

$$divI_16:
	.EXPORT		$$divI_16,MILLICODE
	COMCLR,>=	arg0,0,0
	ADDI		15,arg0,arg0
	bv		0(r31)
	EXTRS		arg0,27,28,ret1

$$divI_3:
	.EXPORT		$$divI_3,MILLICODE
	COMB,<,N	arg0,0,$neg3

	ADDI		1,arg0,arg0
	EXTRU		arg0,1,2,ret1
	SH2ADD		arg0,arg0,arg0
	B		$pos
	ADDC		ret1,0,ret1

$neg3:
	SUBI		1,arg0,arg0
	EXTRU		arg0,1,2,ret1
	SH2ADD		arg0,arg0,arg0
	B		$neg
	ADDC		ret1,0,ret1

$$divU_3:
	.EXPORT		$$divU_3,MILLICODE
	ADDI		1,arg0,arg0
	ADDC		0,0,ret1
	SHD		ret1,arg0,30,t1
	SH2ADD		arg0,arg0,arg0
	B		$pos
	ADDC		ret1,t1,ret1

$$divI_5:
	.EXPORT		$$divI_5,MILLICODE
	COMB,<,N	arg0,0,$neg5
	ADDI		3,arg0,t1
	SH1ADD		arg0,t1,arg0
	B		$pos
	ADDC		0,0,ret1

$neg5:
	SUB		0,arg0,arg0
	ADDI		1,arg0,arg0
	SHD		0,arg0,31,ret1
	SH1ADD		arg0,arg0,arg0
	B		$neg
	ADDC		ret1,0,ret1

$$divU_5:
	.EXPORT		$$divU_5,MILLICODE
	ADDI		1,arg0,arg0
	ADDC		0,0,ret1
	SHD		ret1,arg0,31,t1
	SH1ADD		arg0,arg0,arg0
	B		$pos
	ADDC		t1,ret1,ret1

$$divI_6:
	.EXPORT		$$divI_6,MILLICODE
	COMB,<,N	arg0,0,$neg6
	EXTRU		arg0,30,31,arg0
	ADDI		5,arg0,t1
	SH2ADD		arg0,t1,arg0
	B		$pos
	ADDC		0,0,ret1

$neg6:
	SUBI		2,arg0,arg0
	EXTRU		arg0,30,31,arg0
	SHD		0,arg0,30,ret1
	SH2ADD		arg0,arg0,arg0
	B		$neg
	ADDC		ret1,0,ret1

$$divU_6:
	.EXPORT		$$divU_6,MILLICODE
	EXTRU		arg0,30,31,arg0
	ADDI		1,arg0,arg0
	SHD		0,arg0,30,ret1
	SH2ADD		arg0,arg0,arg0
	B		$pos
	ADDC		ret1,0,ret1

$$divU_10:
	.EXPORT		$$divU_10,MILLICODE
	EXTRU		arg0,30,31,arg0
	ADDI		3,arg0,t1
	SH1ADD		arg0,t1,arg0
	ADDC		0,0,ret1
$pos:
	SHD		ret1,arg0,28,t1
	SHD		arg0,0,28,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,ret1
$pos_for_17:
	SHD		ret1,arg0,24,t1
	SHD		arg0,0,24,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,ret1

	SHD		ret1,arg0,16,t1
	SHD		arg0,0,16,t2
	ADD		arg0,t2,arg0
	bv		0(r31)
	ADDC		ret1,t1,ret1

$$divI_10:
	.EXPORT		$$divI_10,MILLICODE
	COMB,<		arg0,0,$neg10
	COPY		0,ret1
	EXTRU		arg0,30,31,arg0
	ADDIB,TR	1,arg0,$pos
	SH1ADD		arg0,arg0,arg0

$neg10:
	SUBI		2,arg0,arg0
	EXTRU		arg0,30,31,arg0
	SH1ADD		arg0,arg0,arg0
$neg:
	SHD		ret1,arg0,28,t1
	SHD		arg0,0,28,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,ret1
$neg_for_17:
	SHD		ret1,arg0,24,t1
	SHD		arg0,0,24,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,ret1

	SHD		ret1,arg0,16,t1
	SHD		arg0,0,16,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,ret1
	bv		0(r31)
	SUB		0,ret1,ret1

$$divI_12:
	.EXPORT		$$divI_12,MILLICODE
	COMB,<		arg0,0,$neg12
	COPY		0,ret1
	EXTRU		arg0,29,30,arg0
	ADDIB,TR	1,arg0,$pos
	SH2ADD		arg0,arg0,arg0

$neg12:
	SUBI		4,arg0,arg0
	EXTRU		arg0,29,30,arg0
	B		$neg
	SH2ADD		arg0,arg0,arg0

$$divU_12:
	.EXPORT		$$divU_12,MILLICODE
	EXTRU		arg0,29,30,arg0
	ADDI		5,arg0,t1
	SH2ADD		arg0,t1,arg0
	B		$pos
	ADDC		0,0,ret1

$$divI_15:
	.EXPORT		$$divI_15,MILLICODE
	COMB,<		arg0,0,$neg15
	COPY		0,ret1
	ADDIB,TR	1,arg0,$pos+4
	SHD		ret1,arg0,28,t1

$neg15:
	B		$neg
	SUBI		1,arg0,arg0

$$divU_15:
	.EXPORT		$$divU_15,MILLICODE
	ADDI		1,arg0,arg0
	B		$pos
	ADDC		0,0,ret1

$$divI_17:
	.EXPORT		$$divI_17,MILLICODE
	COMB,<,N	arg0,0,$neg17
	ADDI		1,arg0,arg0
	SHD		0,arg0,28,t1
	SHD		arg0,0,28,t2
	SUB		t2,arg0,arg0
	B		$pos_for_17
	SUBB		t1,0,ret1

$neg17:
	SUBI		1,arg0,arg0
	SHD		0,arg0,28,t1
	SHD		arg0,0,28,t2
	SUB		t2,arg0,arg0
	B		$neg_for_17
	SUBB		t1,0,ret1

$$divU_17:
	.EXPORT		$$divU_17,MILLICODE
	ADDI		1,arg0,arg0
	ADDC		0,0,ret1
	SHD		ret1,arg0,28,t1
$u17:
	SHD		arg0,0,28,t2
	SUB		t2,arg0,arg0
	B		$pos_for_17
	SUBB		t1,ret1,ret1

$$divI_7:
	.EXPORT		$$divI_7,MILLICODE
	COMB,<,N	arg0,0,$neg7
$7:
	ADDI		1,arg0,arg0
	SHD		0,arg0,29,ret1
	SH3ADD		arg0,arg0,arg0
	ADDC		ret1,0,ret1
$pos7:
	SHD		ret1,arg0,26,t1
	SHD		arg0,0,26,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,ret1

	SHD		ret1,arg0,20,t1
	SHD		arg0,0,20,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,t1

	COPY		0,ret1
	SHD,=		t1,arg0,24,t1
$1:
	ADDB,TR		t1,ret1,$2
	EXTRU		arg0,31,24,arg0

	bv,n		0(r31)

$2:
	ADDB,TR		t1,arg0,$1
	EXTRU,=		arg0,7,8,t1

$neg7:
	SUBI		1,arg0,arg0
$8:
	SHD		0,arg0,29,ret1
	SH3ADD		arg0,arg0,arg0
	ADDC		ret1,0,ret1

$neg7_shift:
	SHD		ret1,arg0,26,t1
	SHD		arg0,0,26,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,ret1

	SHD		ret1,arg0,20,t1
	SHD		arg0,0,20,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,t1

	COPY		0,ret1
	SHD,=		t1,arg0,24,t1
$3:
	ADDB,TR		t1,ret1,$4
	EXTRU		arg0,31,24,arg0

	bv		0(r31)
	SUB		0,ret1,ret1

$4:
	ADDB,TR		t1,arg0,$3
	EXTRU,=		arg0,7,8,t1

$$divU_7:
	.EXPORT		$$divU_7,MILLICODE
	ADDI		1,arg0,arg0
	ADDC		0,0,ret1
	SHD		ret1,arg0,29,t1
	SH3ADD		arg0,arg0,arg0
	B		$pos7
	ADDC		t1,ret1,ret1

$$divI_9:
	.EXPORT		$$divI_9,MILLICODE
	COMB,<,N	arg0,0,$neg9
	ADDI		1,arg0,arg0
	SHD		0,arg0,29,t1
	SHD		arg0,0,29,t2
	SUB		t2,arg0,arg0
	B		$pos7
	SUBB		t1,0,ret1

$neg9:
	SUBI		1,arg0,arg0
	SHD		0,arg0,29,t1
	SHD		arg0,0,29,t2
	SUB		t2,arg0,arg0
	B		$neg7_shift
	SUBB		t1,0,ret1

$$divU_9:
	.EXPORT		$$divU_9,MILLICODE
	ADDI		1,arg0,arg0
	ADDC		0,0,ret1
	SHD		ret1,arg0,29,t1
	SHD		arg0,0,29,t2
	SUB		t2,arg0,arg0
	B		$pos7
	SUBB		t1,ret1,ret1
$$divI_14:
	.EXPORT		$$divI_14,MILLICODE
	COMB,<,N	arg0,0,$neg14
$$divU_14:
	.EXPORT		$$divU_14,MILLICODE
	B		$7
	EXTRU		arg0,30,31,arg0

$neg14:
	SUBI		2,arg0,arg0
	B		$8
	EXTRU		arg0,30,31,arg0

	.PROCEND
	.END

rmndr:	.EQU	ret1	; r29

	.export	$$remU,millicode
$$remU:
	.proc
	.callinfo NO_CALLS
	.entry

	comib,>=,n	0,arg1,special_case
	sub	r0,arg1,rmndr		; clear carry, negate the divisor
	ds	r0,rmndr,r0		; set V-bit to 1
	add	arg0,arg0,temp		; shift msb bit into carry
	ds	r0,arg1,rmndr		; 1st divide step, if no carry
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 2nd divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 3rd divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 4th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 5th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 6th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 7th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 8th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 9th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 10th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 11th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 12th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 13th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 14th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 15th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 16th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 17th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 18th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 19th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 20th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 21st divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 22nd divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 23rd divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 24th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 25th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 26th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 27th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 28th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 29th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 30th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 31st divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 32nd divide step,
	comiclr,<=	0,rmndr,r0
	add	rmndr,arg1,rmndr	; correction
;	.exit
	bv,n	0(r31)
	nop
; Putting >= on the last DS and deleting COMICLR does not work!
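;_____________________________________________________________________________
; What the 32 DS steps above compute, as illustrative pseudocode (not part
; of this millicode): a bit-at-a-time unsigned remainder.  Each DS performs
; one nonrestoring divide step, and the final COMICLR/ADD pair adds the
; divisor back when the running remainder ends up negative.
;
;	unsigned remU_sketch(unsigned x, unsigned y)
;	{
;		return x % y;	; y == 0 traps via the addit,= in
;	}			; special_case below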
1037 ;_____________________________________________________________________________ 1038 special_case: 1039 addit,= 0,arg1,r0 ; trap on div by zero 1040 sub,>>= arg0,arg1,rmndr 1041 copy arg0,rmndr 1042 .exit 1043 bv,n 0(r31) 1044 nop 1045 .procend 1046 .end 1047 1048 ; Use bv 0(r31) and bv,n 0(r31) instead. 1049 ; #define return bv 0(%mrp) 1050 ; #define return_n bv,n 0(%mrp) 1051 1052 .align 16 1053 $$mulI: 1054 1055 .proc 1056 .callinfo NO_CALLS 1057 .export $$mulI, millicode 1058 combt,<<= %r25,%r26,l4 ; swap args if unsigned %r25>%r26 1059 copy 0,%r29 ; zero out the result 1060 xor %r26,%r25,%r26 ; swap %r26 & %r25 using the 1061 xor %r26,%r25,%r25 ; old xor trick 1062 xor %r26,%r25,%r26 1063 l4: combt,<= 0,%r26,l3 ; if %r26>=0 then proceed like unsigned 1064 1065 zdep %r25,30,8,%r1 ; %r1 = (%r25&0xff)<<1 ********* 1066 sub,> 0,%r25,%r1 ; otherwise negate both and 1067 combt,<=,n %r26,%r1,l2 ; swap back if |%r26|<|%r25| 1068 sub 0,%r26,%r25 1069 movb,tr,n %r1,%r26,l2 ; 10th inst. 1070 1071 l0: add %r29,%r1,%r29 ; add in this partial product 1072 1073 l1: zdep %r26,23,24,%r26 ; %r26 <<= 8 ****************** 1074 1075 l2: zdep %r25,30,8,%r1 ; %r1 = (%r25&0xff)<<1 ********* 1076 1077 l3: blr %r1,0 ; case on these 8 bits ****** 1078 1079 extru %r25,23,24,%r25 ; %r25 >>= 8 ****************** 1080 1081 ;16 insts before this. 1082 ; %r26 <<= 8 ************************** 1083 x0: comb,<> %r25,0,l2 ! zdep %r26,23,24,%r26 ! bv,n 0(r31) ! nop 1084 1085 x1: comb,<> %r25,0,l1 ! add %r29,%r26,%r29 ! bv,n 0(r31) ! nop 1086 1087 x2: comb,<> %r25,0,l1 ! sh1add %r26,%r29,%r29 ! bv,n 0(r31) ! nop 1088 1089 x3: comb,<> %r25,0,l0 ! sh1add %r26,%r26,%r1 ! bv 0(r31) ! add %r29,%r1,%r29 1090 1091 x4: comb,<> %r25,0,l1 ! sh2add %r26,%r29,%r29 ! bv,n 0(r31) ! nop 1092 1093 x5: comb,<> %r25,0,l0 ! sh2add %r26,%r26,%r1 ! bv 0(r31) ! add %r29,%r1,%r29 1094 1095 x6: sh1add %r26,%r26,%r1 ! comb,<> %r25,0,l1 ! sh1add %r1,%r29,%r29 ! bv,n 0(r31) 1096 1097 x7: sh1add %r26,%r26,%r1 ! 
comb,<> %r25,0,l0 ! sh2add %r26,%r29,%r29 ! b,n ret_t0 1098 1099 x8: comb,<> %r25,0,l1 ! sh3add %r26,%r29,%r29 ! bv,n 0(r31) ! nop 1100 1101 x9: comb,<> %r25,0,l0 ! sh3add %r26,%r26,%r1 ! bv 0(r31) ! add %r29,%r1,%r29 1102 1103 x10: sh2add %r26,%r26,%r1 ! comb,<> %r25,0,l1 ! sh1add %r1,%r29,%r29 ! bv,n 0(r31) 1104 1105 x11: sh1add %r26,%r26,%r1 ! comb,<> %r25,0,l0 ! sh3add %r26,%r29,%r29 ! b,n ret_t0 1106 1107 x12: sh1add %r26,%r26,%r1 ! comb,<> %r25,0,l1 ! sh2add %r1,%r29,%r29 ! bv,n 0(r31) 1108 1109 x13: sh2add %r26,%r26,%r1 ! comb,<> %r25,0,l0 ! sh3add %r26,%r29,%r29 ! b,n ret_t0 1110 1111 x14: sh1add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_shift ! sh1add %r1,%r29,%r29 1112 1113 x15: sh2add %r26,%r26,%r1 ! comb,<> %r25,0,l0 ! sh1add %r1,%r1,%r1 ! b,n ret_t0 1114 1115 x16: zdep %r26,27,28,%r1 ! comb,<> %r25,0,l1 ! add %r29,%r1,%r29 ! bv,n 0(r31) 1116 1117 x17: sh3add %r26,%r26,%r1 ! comb,<> %r25,0,l0 ! sh3add %r26,%r1,%r1 ! b,n ret_t0 1118 1119 x18: sh3add %r26,%r26,%r1 ! comb,<> %r25,0,l1 ! sh1add %r1,%r29,%r29 ! bv,n 0(r31) 1120 1121 x19: sh3add %r26,%r26,%r1 ! comb,<> %r25,0,l0 ! sh1add %r1,%r26,%r1 ! b,n ret_t0 1122 1123 x20: sh2add %r26,%r26,%r1 ! comb,<> %r25,0,l1 ! sh2add %r1,%r29,%r29 ! bv,n 0(r31) 1124 1125 x21: sh2add %r26,%r26,%r1 ! comb,<> %r25,0,l0 ! sh2add %r1,%r26,%r1 ! b,n ret_t0 1126 1127 x22: sh2add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_shift ! sh1add %r1,%r29,%r29 1128 1129 x23: sh2add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_t0 ! sh1add %r1,%r26,%r1 1130 1131 x24: sh1add %r26,%r26,%r1 ! comb,<> %r25,0,l1 ! sh3add %r1,%r29,%r29 ! bv,n 0(r31) 1132 1133 x25: sh2add %r26,%r26,%r1 ! comb,<> %r25,0,l0 ! sh2add %r1,%r1,%r1 ! b,n ret_t0 1134 1135 x26: sh1add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_shift ! sh1add %r1,%r29,%r29 1136 1137 x27: sh1add %r26,%r26,%r1 ! comb,<> %r25,0,l0 ! sh3add %r1,%r1,%r1 ! b,n ret_t0 1138 1139 x28: sh1add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_shift ! sh2add %r1,%r29,%r29 1140 1141 x29: sh1add %r26,%r26,%r1 ! 
sh1add %r1,%r26,%r1 ! b e_t0 ! sh2add %r1,%r26,%r1 1142 1143 x30: sh2add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_shift ! sh1add %r1,%r29,%r29 1144 1145 x31: zdep %r26,26,27,%r1 ! comb,<> %r25,0,l0 ! sub %r1,%r26,%r1 ! b,n ret_t0 1146 1147 x32: zdep %r26,26,27,%r1 ! comb,<> %r25,0,l1 ! add %r29,%r1,%r29 ! bv,n 0(r31) 1148 1149 x33: sh3add %r26,0,%r1 ! comb,<> %r25,0,l0 ! sh2add %r1,%r26,%r1 ! b,n ret_t0 1150 1151 x34: zdep %r26,27,28,%r1 ! add %r1,%r26,%r1 ! b e_shift ! sh1add %r1,%r29,%r29 1152 1153 x35: sh3add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_t0 ! sh3add %r26,%r1,%r1 1154 1155 x36: sh3add %r26,%r26,%r1 ! comb,<> %r25,0,l1 ! sh2add %r1,%r29,%r29 ! bv,n 0(r31) 1156 1157 x37: sh3add %r26,%r26,%r1 ! comb,<> %r25,0,l0 ! sh2add %r1,%r26,%r1 ! b,n ret_t0 1158 1159 x38: sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_shift ! sh1add %r1,%r29,%r29 1160 1161 x39: sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_t0 ! sh1add %r1,%r26,%r1 1162 1163 x40: sh2add %r26,%r26,%r1 ! comb,<> %r25,0,l1 ! sh3add %r1,%r29,%r29 ! bv,n 0(r31) 1164 1165 x41: sh2add %r26,%r26,%r1 ! comb,<> %r25,0,l0 ! sh3add %r1,%r26,%r1 ! b,n ret_t0 1166 1167 x42: sh2add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_shift ! sh1add %r1,%r29,%r29 1168 1169 x43: sh2add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_t0 ! sh1add %r1,%r26,%r1 1170 1171 x44: sh2add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_shift ! sh2add %r1,%r29,%r29 1172 1173 x45: sh3add %r26,%r26,%r1 ! comb,<> %r25,0,l0 ! sh2add %r1,%r1,%r1 ! b,n ret_t0 1174 1175 x46: sh3add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_t0 ! add %r1,%r26,%r1 1176 1177 x47: sh3add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_t0 ! sh1add %r26,%r1,%r1 1178 1179 x48: sh1add %r26,%r26,%r1 ! comb,<> %r25,0,l0 ! zdep %r1,27,28,%r1 ! b,n ret_t0 1180 1181 x49: sh3add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_t0 ! sh2add %r26,%r1,%r1 1182 1183 x50: sh2add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_shift ! sh1add %r1,%r29,%r29 1184 1185 x51: sh3add %r26,%r26,%r1 ! 
sh3add %r26,%r1,%r1 ! b e_t0 ! sh1add %r1,%r1,%r1 1186 1187 x52: sh1add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_shift ! sh2add %r1,%r29,%r29 1188 1189 x53: sh1add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_t0 ! sh2add %r1,%r26,%r1 1190 1191 x54: sh3add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_shift ! sh1add %r1,%r29,%r29 1192 1193 x55: sh3add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_t0 ! sh1add %r1,%r26,%r1 1194 1195 x56: sh1add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_shift ! sh3add %r1,%r29,%r29 1196 1197 x57: sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_t0 ! sh1add %r1,%r1,%r1 1198 1199 x58: sh1add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_2t0 ! sh2add %r1,%r26,%r1 1200 1201 x59: sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_t02a0 ! sh1add %r1,%r1,%r1 1202 1203 x60: sh2add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_shift ! sh2add %r1,%r29,%r29 1204 1205 x61: sh2add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_t0 ! sh2add %r1,%r26,%r1 1206 1207 x62: zdep %r26,26,27,%r1 ! sub %r1,%r26,%r1 ! b e_shift ! sh1add %r1,%r29,%r29 1208 1209 x63: zdep %r26,25,26,%r1 ! comb,<> %r25,0,l0 ! sub %r1,%r26,%r1 ! b,n ret_t0 1210 1211 x64: zdep %r26,25,26,%r1 ! comb,<> %r25,0,l1 ! add %r29,%r1,%r29 ! bv,n 0(r31) 1212 1213 x65: sh3add %r26,0,%r1 ! comb,<> %r25,0,l0 ! sh3add %r1,%r26,%r1 ! b,n ret_t0 1214 1215 x66: zdep %r26,26,27,%r1 ! add %r1,%r26,%r1 ! b e_shift ! sh1add %r1,%r29,%r29 1216 1217 x67: sh3add %r26,0,%r1 ! sh2add %r1,%r26,%r1 ! b e_t0 ! sh1add %r1,%r26,%r1 1218 1219 x68: sh3add %r26,0,%r1 ! sh1add %r1,%r26,%r1 ! b e_shift ! sh2add %r1,%r29,%r29 1220 1221 x69: sh3add %r26,0,%r1 ! sh1add %r1,%r26,%r1 ! b e_t0 ! sh2add %r1,%r26,%r1 1222 1223 x70: zdep %r26,25,26,%r1 ! sh2add %r26,%r1,%r1 ! b e_t0 ! sh1add %r26,%r1,%r1 1224 1225 x71: sh3add %r26,%r26,%r1 ! sh3add %r1,0,%r1 ! b e_t0 ! sub %r1,%r26,%r1 1226 1227 x72: sh3add %r26,%r26,%r1 ! comb,<> %r25,0,l1 ! sh3add %r1,%r29,%r29 ! bv,n 0(r31) 1228 1229 x73: sh3add %r26,%r26,%r1 ! sh3add %r1,%r26,%r1 ! b e_shift ! 
	add %r29,%r1,%r29
x74:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_shift ! sh1add %r1,%r29,%r29
x75:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_t0 ! sh1add %r1,%r26,%r1
x76:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_shift ! sh2add %r1,%r29,%r29
x77:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_t0 ! sh2add %r1,%r26,%r1
x78:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_2t0 ! sh1add %r1,%r26,%r1
x79:	zdep %r26,27,28,%r1 ! sh2add %r1,%r1,%r1 ! b e_t0 ! sub %r1,%r26,%r1
x80:	zdep %r26,27,28,%r1 ! sh2add %r1,%r1,%r1 ! b e_shift ! add %r29,%r1,%r29
x81:	sh3add %r26,%r26,%r1 ! sh3add %r1,%r1,%r1 ! b e_shift ! add %r29,%r1,%r29
x82:	sh2add %r26,%r26,%r1 ! sh3add %r1,%r26,%r1 ! b e_shift ! sh1add %r1,%r29,%r29
x83:	sh2add %r26,%r26,%r1 ! sh3add %r1,%r26,%r1 ! b e_t0 ! sh1add %r1,%r26,%r1
x84:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_shift ! sh2add %r1,%r29,%r29
x85:	sh3add %r26,0,%r1 ! sh1add %r1,%r26,%r1 ! b e_t0 ! sh2add %r1,%r1,%r1
x86:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_2t0 ! sh1add %r1,%r26,%r1
x87:	sh3add %r26,%r26,%r1 ! sh3add %r1,%r1,%r1 ! b e_t02a0 ! sh2add %r26,%r1,%r1
x88:	sh2add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_shift ! sh3add %r1,%r29,%r29
x89:	sh2add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_t0 ! sh3add %r1,%r26,%r1
x90:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_shift ! sh1add %r1,%r29,%r29
x91:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_t0 ! sh1add %r1,%r26,%r1
x92:	sh2add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_4t0 ! sh1add %r1,%r26,%r1
x93:	zdep %r26,26,27,%r1 ! sub %r1,%r26,%r1 ! b e_t0 ! sh1add %r1,%r1,%r1
x94:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_2t0 ! sh1add %r26,%r1,%r1
x95:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_t0 !
	sh2add %r1,%r1,%r1
x96:	sh3add %r26,0,%r1 ! sh1add %r1,%r1,%r1 ! b e_shift ! sh2add %r1,%r29,%r29
x97:	sh3add %r26,0,%r1 ! sh1add %r1,%r1,%r1 ! b e_t0 ! sh2add %r1,%r26,%r1
x98:	zdep %r26,26,27,%r1 ! sh1add %r1,%r1,%r1 ! b e_t0 ! sh1add %r26,%r1,%r1
x99:	sh3add %r26,0,%r1 ! sh2add %r1,%r26,%r1 ! b e_t0 ! sh1add %r1,%r1,%r1
x100:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_shift ! sh2add %r1,%r29,%r29
x101:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_t0 ! sh2add %r1,%r26,%r1
x102:	zdep %r26,26,27,%r1 ! sh1add %r26,%r1,%r1 ! b e_t0 ! sh1add %r1,%r1,%r1
x103:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_t02a0 ! sh2add %r1,%r26,%r1
x104:	sh1add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_shift ! sh3add %r1,%r29,%r29
x105:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_t0 ! sh2add %r1,%r1,%r1
x106:	sh1add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_2t0 ! sh2add %r1,%r26,%r1
x107:	sh3add %r26,%r26,%r1 ! sh2add %r26,%r1,%r1 ! b e_t02a0 ! sh3add %r1,%r26,%r1
x108:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_shift ! sh2add %r1,%r29,%r29
x109:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_t0 ! sh2add %r1,%r26,%r1
x110:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_2t0 ! sh1add %r1,%r26,%r1
x111:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_t0 ! sh1add %r1,%r1,%r1
x112:	sh1add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_t0 ! zdep %r1,27,28,%r1
x113:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_t02a0 ! sh1add %r1,%r1,%r1
x114:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_2t0 ! sh1add %r1,%r1,%r1
x115:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_2t0a0 ! sh1add %r1,%r1,%r1
x116:	sh1add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_4t0 ! sh2add %r1,%r26,%r1
x117:	sh1add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_t0 !
	sh3add %r1,%r1,%r1
x118:	sh1add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_t0a0 ! sh3add %r1,%r1,%r1
x119:	sh1add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_t02a0 ! sh3add %r1,%r1,%r1
x120:	sh2add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_shift ! sh3add %r1,%r29,%r29
x121:	sh2add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_t0 ! sh3add %r1,%r26,%r1
x122:	sh2add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_2t0 ! sh2add %r1,%r26,%r1
x123:	sh2add %r26,%r26,%r1 ! sh3add %r1,%r26,%r1 ! b e_t0 ! sh1add %r1,%r1,%r1
x124:	zdep %r26,26,27,%r1 ! sub %r1,%r26,%r1 ! b e_shift ! sh2add %r1,%r29,%r29
x125:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_t0 ! sh2add %r1,%r1,%r1
x126:	zdep %r26,25,26,%r1 ! sub %r1,%r26,%r1 ! b e_shift ! sh1add %r1,%r29,%r29
x127:	zdep %r26,24,25,%r1 ! comb,<> %r25,0,l0 ! sub %r1,%r26,%r1 ! b,n ret_t0
x128:	zdep %r26,24,25,%r1 ! comb,<> %r25,0,l1 ! add %r29,%r1,%r29 ! bv,n 0(r31)
x129:	zdep %r26,24,25,%r1 ! comb,<> %r25,0,l0 ! add %r1,%r26,%r1 ! b,n ret_t0
x130:	zdep %r26,25,26,%r1 ! add %r1,%r26,%r1 ! b e_shift ! sh1add %r1,%r29,%r29
x131:	sh3add %r26,0,%r1 ! sh3add %r1,%r26,%r1 ! b e_t0 ! sh1add %r1,%r26,%r1
x132:	sh3add %r26,0,%r1 ! sh2add %r1,%r26,%r1 ! b e_shift ! sh2add %r1,%r29,%r29
x133:	sh3add %r26,0,%r1 ! sh2add %r1,%r26,%r1 ! b e_t0 ! sh2add %r1,%r26,%r1
x134:	sh3add %r26,0,%r1 ! sh2add %r1,%r26,%r1 ! b e_2t0 ! sh1add %r1,%r26,%r1
x135:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_t0 ! sh1add %r1,%r1,%r1
x136:	sh3add %r26,0,%r1 ! sh1add %r1,%r26,%r1 ! b e_shift ! sh3add %r1,%r29,%r29
x137:	sh3add %r26,0,%r1 ! sh1add %r1,%r26,%r1 ! b e_t0 ! sh3add %r1,%r26,%r1
x138:	sh3add %r26,0,%r1 ! sh1add %r1,%r26,%r1 ! b e_2t0 ! sh2add %r1,%r26,%r1
x139:	sh3add %r26,0,%r1 ! sh1add %r1,%r26,%r1 ! b e_2t0a0 !
	sh2add %r1,%r26,%r1
x140:	sh1add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_4t0 ! sh2add %r1,%r1,%r1
x141:	sh3add %r26,0,%r1 ! sh1add %r1,%r26,%r1 ! b e_4t0a0 ! sh1add %r1,%r26,%r1
x142:	sh3add %r26,%r26,%r1 ! sh3add %r1,0,%r1 ! b e_2t0 ! sub %r1,%r26,%r1
x143:	zdep %r26,27,28,%r1 ! sh3add %r1,%r1,%r1 ! b e_t0 ! sub %r1,%r26,%r1
x144:	sh3add %r26,%r26,%r1 ! sh3add %r1,0,%r1 ! b e_shift ! sh1add %r1,%r29,%r29
x145:	sh3add %r26,%r26,%r1 ! sh3add %r1,0,%r1 ! b e_t0 ! sh1add %r1,%r26,%r1
x146:	sh3add %r26,%r26,%r1 ! sh3add %r1,%r26,%r1 ! b e_shift ! sh1add %r1,%r29,%r29
x147:	sh3add %r26,%r26,%r1 ! sh3add %r1,%r26,%r1 ! b e_t0 ! sh1add %r1,%r26,%r1
x148:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_shift ! sh2add %r1,%r29,%r29
x149:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_t0 ! sh2add %r1,%r26,%r1
x150:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_2t0 ! sh1add %r1,%r26,%r1
x151:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_2t0a0 ! sh1add %r1,%r26,%r1
x152:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_shift ! sh3add %r1,%r29,%r29
x153:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_t0 ! sh3add %r1,%r26,%r1
x154:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_2t0 ! sh2add %r1,%r26,%r1
x155:	zdep %r26,26,27,%r1 ! sub %r1,%r26,%r1 ! b e_t0 ! sh2add %r1,%r1,%r1
x156:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_4t0 ! sh1add %r1,%r26,%r1
x157:	zdep %r26,26,27,%r1 ! sub %r1,%r26,%r1 ! b e_t02a0 ! sh2add %r1,%r1,%r1
x158:	zdep %r26,27,28,%r1 ! sh2add %r1,%r1,%r1 ! b e_2t0 ! sub %r1,%r26,%r1
x159:	zdep %r26,26,27,%r1 ! sh2add %r1,%r1,%r1 ! b e_t0 ! sub %r1,%r26,%r1
x160:	sh2add %r26,%r26,%r1 ! sh2add %r1,0,%r1 ! b e_shift ! sh3add %r1,%r29,%r29
x161:	sh3add %r26,0,%r1 ! sh2add %r1,%r1,%r1 ! b e_t0 !
	sh2add %r1,%r26,%r1
x162:	sh3add %r26,%r26,%r1 ! sh3add %r1,%r1,%r1 ! b e_shift ! sh1add %r1,%r29,%r29
x163:	sh3add %r26,%r26,%r1 ! sh3add %r1,%r1,%r1 ! b e_t0 ! sh1add %r1,%r26,%r1
x164:	sh2add %r26,%r26,%r1 ! sh3add %r1,%r26,%r1 ! b e_shift ! sh2add %r1,%r29,%r29
x165:	sh3add %r26,0,%r1 ! sh2add %r1,%r26,%r1 ! b e_t0 ! sh2add %r1,%r1,%r1
x166:	sh2add %r26,%r26,%r1 ! sh3add %r1,%r26,%r1 ! b e_2t0 ! sh1add %r1,%r26,%r1
x167:	sh2add %r26,%r26,%r1 ! sh3add %r1,%r26,%r1 ! b e_2t0a0 ! sh1add %r1,%r26,%r1
x168:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_shift ! sh3add %r1,%r29,%r29
x169:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_t0 ! sh3add %r1,%r26,%r1
x170:	zdep %r26,26,27,%r1 ! sh1add %r26,%r1,%r1 ! b e_t0 ! sh2add %r1,%r1,%r1
x171:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_t0 ! sh3add %r1,%r1,%r1
x172:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_4t0 ! sh1add %r1,%r26,%r1
x173:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_t02a0 ! sh3add %r1,%r1,%r1
x174:	zdep %r26,26,27,%r1 ! sh1add %r26,%r1,%r1 ! b e_t04a0 ! sh2add %r1,%r1,%r1
x175:	sh3add %r26,0,%r1 ! sh1add %r1,%r26,%r1 ! b e_5t0 ! sh1add %r1,%r26,%r1
x176:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_8t0 ! add %r1,%r26,%r1
x177:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_8t0a0 ! add %r1,%r26,%r1
x178:	sh2add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_2t0 ! sh3add %r1,%r26,%r1
x179:	sh2add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_2t0a0 ! sh3add %r1,%r26,%r1
x180:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_shift ! sh2add %r1,%r29,%r29
x181:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_t0 ! sh2add %r1,%r26,%r1
x182:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_2t0 ! sh1add %r1,%r26,%r1
x183:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_2t0a0 !
	sh1add %r1,%r26,%r1
x184:	sh2add %r26,%r26,%r1 ! sh3add %r1,%r1,%r1 ! b e_4t0 ! add %r1,%r26,%r1
x185:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_t0 ! sh2add %r1,%r1,%r1
x186:	zdep %r26,26,27,%r1 ! sub %r1,%r26,%r1 ! b e_2t0 ! sh1add %r1,%r1,%r1
x187:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_t02a0 ! sh2add %r1,%r1,%r1
x188:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_4t0 ! sh1add %r26,%r1,%r1
x189:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_t0 ! sh3add %r1,%r1,%r1
x190:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_2t0 ! sh2add %r1,%r1,%r1
x191:	zdep %r26,25,26,%r1 ! sh1add %r1,%r1,%r1 ! b e_t0 ! sub %r1,%r26,%r1
x192:	sh3add %r26,0,%r1 ! sh1add %r1,%r1,%r1 ! b e_shift ! sh3add %r1,%r29,%r29
x193:	sh3add %r26,0,%r1 ! sh1add %r1,%r1,%r1 ! b e_t0 ! sh3add %r1,%r26,%r1
x194:	sh3add %r26,0,%r1 ! sh1add %r1,%r1,%r1 ! b e_2t0 ! sh2add %r1,%r26,%r1
x195:	sh3add %r26,0,%r1 ! sh3add %r1,%r26,%r1 ! b e_t0 ! sh1add %r1,%r1,%r1
x196:	sh3add %r26,0,%r1 ! sh1add %r1,%r1,%r1 ! b e_4t0 ! sh1add %r1,%r26,%r1
x197:	sh3add %r26,0,%r1 ! sh1add %r1,%r1,%r1 ! b e_4t0a0 ! sh1add %r1,%r26,%r1
x198:	zdep %r26,25,26,%r1 ! sh1add %r26,%r1,%r1 ! b e_t0 ! sh1add %r1,%r1,%r1
x199:	sh3add %r26,0,%r1 ! sh2add %r1,%r26,%r1 ! b e_2t0a0 ! sh1add %r1,%r1,%r1
x200:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_shift ! sh3add %r1,%r29,%r29
x201:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_t0 ! sh3add %r1,%r26,%r1
x202:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_2t0 ! sh2add %r1,%r26,%r1
x203:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_2t0a0 ! sh2add %r1,%r26,%r1
x204:	sh3add %r26,0,%r1 ! sh1add %r1,%r26,%r1 ! b e_4t0 ! sh1add %r1,%r1,%r1
x205:	sh2add %r26,%r26,%r1 ! sh3add %r1,%r26,%r1 ! b e_t0 !
	sh2add %r1,%r1,%r1
x206:	zdep %r26,25,26,%r1 ! sh2add %r26,%r1,%r1 ! b e_t02a0 ! sh1add %r1,%r1,%r1
x207:	sh3add %r26,0,%r1 ! sh1add %r1,%r26,%r1 ! b e_3t0 ! sh2add %r1,%r26,%r1
x208:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_8t0 ! add %r1,%r26,%r1
x209:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_8t0a0 ! add %r1,%r26,%r1
x210:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_2t0 ! sh2add %r1,%r1,%r1
x211:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_2t0a0 ! sh2add %r1,%r1,%r1
x212:	sh1add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_4t0 ! sh2add %r1,%r26,%r1
x213:	sh1add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_4t0a0 ! sh2add %r1,%r26,%r1
x214:	sh3add %r26,%r26,%r1 ! sh2add %r26,%r1,%r1 ! b e2t04a0 ! sh3add %r1,%r26,%r1
x215:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_5t0 ! sh1add %r1,%r26,%r1
x216:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_shift ! sh3add %r1,%r29,%r29
x217:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_t0 ! sh3add %r1,%r26,%r1
x218:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_2t0 ! sh2add %r1,%r26,%r1
x219:	sh3add %r26,%r26,%r1 ! sh3add %r1,%r26,%r1 ! b e_t0 ! sh1add %r1,%r1,%r1
x220:	sh1add %r26,%r26,%r1 ! sh3add %r1,%r1,%r1 ! b e_4t0 ! sh1add %r1,%r26,%r1
x221:	sh1add %r26,%r26,%r1 ! sh3add %r1,%r1,%r1 ! b e_4t0a0 ! sh1add %r1,%r26,%r1
x222:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_2t0 ! sh1add %r1,%r1,%r1
x223:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_2t0a0 ! sh1add %r1,%r1,%r1
x224:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_8t0 ! add %r1,%r26,%r1
x225:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_t0 ! sh2add %r1,%r1,%r1
x226:	sh1add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_t02a0 ! zdep %r1,26,27,%r1
x227:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_t02a0 !
	sh2add %r1,%r1,%r1
x228:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_4t0 ! sh1add %r1,%r1,%r1
x229:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_4t0a0 ! sh1add %r1,%r1,%r1
x230:	sh3add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_5t0 ! add %r1,%r26,%r1
x231:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_3t0 ! sh2add %r1,%r26,%r1
x232:	sh1add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_8t0 ! sh2add %r1,%r26,%r1
x233:	sh1add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e_8t0a0 ! sh2add %r1,%r26,%r1
x234:	sh1add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_2t0 ! sh3add %r1,%r1,%r1
x235:	sh1add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e_2t0a0 ! sh3add %r1,%r1,%r1
x236:	sh3add %r26,%r26,%r1 ! sh1add %r1,%r26,%r1 ! b e4t08a0 ! sh1add %r1,%r1,%r1
x237:	zdep %r26,27,28,%r1 ! sh2add %r1,%r1,%r1 ! b e_3t0 ! sub %r1,%r26,%r1
x238:	sh1add %r26,%r26,%r1 ! sh2add %r1,%r26,%r1 ! b e2t04a0 ! sh3add %r1,%r1,%r1
x239:	zdep %r26,27,28,%r1 ! sh2add %r1,%r1,%r1 ! b e_t0ma0 ! sh1add %r1,%r1,%r1
x240:	sh3add %r26,%r26,%r1 ! add %r1,%r26,%r1 ! b e_8t0 ! sh1add %r1,%r1,%r1
x241:	sh3add %r26,%r26,%r1 ! add %r1,%r26,%r1 ! b e_8t0a0 ! sh1add %r1,%r1,%r1
x242:	sh2add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_2t0 ! sh3add %r1,%r26,%r1
x243:	sh3add %r26,%r26,%r1 ! sh3add %r1,%r1,%r1 ! b e_t0 ! sh1add %r1,%r1,%r1
x244:	sh2add %r26,%r26,%r1 ! sh1add %r1,%r1,%r1 ! b e_4t0 ! sh2add %r1,%r26,%r1
x245:	sh3add %r26,0,%r1 ! sh1add %r1,%r1,%r1 ! b e_5t0 ! sh1add %r1,%r26,%r1
x246:	sh2add %r26,%r26,%r1 ! sh3add %r1,%r26,%r1 ! b e_2t0 ! sh1add %r1,%r1,%r1
x247:	sh2add %r26,%r26,%r1 ! sh3add %r1,%r26,%r1 ! b e_2t0a0 ! sh1add %r1,%r1,%r1
x248:	zdep %r26,26,27,%r1 ! sub %r1,%r26,%r1 ! b e_shift ! sh3add %r1,%r29,%r29
x249:	zdep %r26,26,27,%r1 ! sub %r1,%r26,%r1 ! b e_t0 !
	sh3add %r1,%r26,%r1
x250:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_2t0 ! sh2add %r1,%r1,%r1
x251:	sh2add %r26,%r26,%r1 ! sh2add %r1,%r1,%r1 ! b e_2t0a0 ! sh2add %r1,%r1,%r1
x252:	zdep %r26,25,26,%r1 ! sub %r1,%r26,%r1 ! b e_shift ! sh2add %r1,%r29,%r29
x253:	zdep %r26,25,26,%r1 ! sub %r1,%r26,%r1 ! b e_t0 ! sh2add %r1,%r26,%r1
x254:	zdep %r26,24,25,%r1 ! sub %r1,%r26,%r1 ! b e_shift ! sh1add %r1,%r29,%r29
x255:	zdep %r26,23,24,%r1 ! comb,<> %r25,0,l0 ! sub %r1,%r26,%r1 ! b,n ret_t0
;1040 insts before this.
ret_t0:	bv	0(r31)
e_t0:	add	%r29,%r1,%r29
e_shift: comb,<>	%r25,0,l2
	zdep	%r26,23,24,%r26	; %r26 <<= 8 ***********
	bv,n	0(r31)
e_t0ma0: comb,<>	%r25,0,l0
	sub	%r1,%r26,%r1
	bv	0(r31)
	add	%r29,%r1,%r29
e_t0a0:	comb,<>	%r25,0,l0
	add	%r1,%r26,%r1
	bv	0(r31)
	add	%r29,%r1,%r29
e_t02a0: comb,<>	%r25,0,l0
	sh1add	%r26,%r1,%r1
	bv	0(r31)
	add	%r29,%r1,%r29
e_t04a0: comb,<>	%r25,0,l0
	sh2add	%r26,%r1,%r1
	bv	0(r31)
	add	%r29,%r1,%r29
e_2t0:	comb,<>	%r25,0,l1
	sh1add	%r1,%r29,%r29
	bv,n	0(r31)
e_2t0a0: comb,<>	%r25,0,l0
	sh1add	%r1,%r26,%r1
	bv	0(r31)
	add	%r29,%r1,%r29
e2t04a0: sh1add	%r26,%r1,%r1
	comb,<>	%r25,0,l1
	sh1add	%r1,%r29,%r29
	bv,n	0(r31)
e_3t0:	comb,<>	%r25,0,l0
	sh1add	%r1,%r1,%r1
	bv	0(r31)
	add	%r29,%r1,%r29
e_4t0:	comb,<>	%r25,0,l1
	sh2add	%r1,%r29,%r29
	bv,n	0(r31)
e_4t0a0: comb,<>	%r25,0,l0
	sh2add	%r1,%r26,%r1
	bv	0(r31)
	add	%r29,%r1,%r29
e4t08a0: sh1add	%r26,%r1,%r1
	comb,<>	%r25,0,l1
	sh2add	%r1,%r29,%r29
	bv,n	0(r31)
e_5t0:	comb,<>	%r25,0,l0
	sh2add	%r1,%r1,%r1
	bv	0(r31)
	add	%r29,%r1,%r29
e_8t0:	comb,<>	%r25,0,l1
	sh3add	%r1,%r29,%r29
	bv,n	0(r31)
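; A reading of the exit-stub naming convention, inferred from the code
; itself (the suffixes are not documented elsewhere in this file): each
; e_* stub folds the partial product t0 (in %r1) into the running sum in
; %r29 and returns, or continues at l0/l1/l2 when more multiplier bytes
; remain.  The digit names the multiple of t0 accumulated (e_2t0 adds
; 2*t0 via sh1add, e_4t0 adds 4*t0, e_8t0 adds 8*t0); a trailing "a0"
; means one extra copy of the multiplicand %r26 is folded in first
; (e.g. e_2t0a0 accumulates 2*t0 + arg0), and "ma0" means %r26 is
; subtracted (e_t0ma0 accumulates t0 - arg0).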
e_8t0a0: comb,<>	%r25,0,l0
	sh3add	%r1,%r26,%r1
	bv	0(r31)
	add	%r29,%r1,%r29

	.procend
	.end

	.import	$$divI_2,millicode
	.import	$$divI_3,millicode
	.import	$$divI_4,millicode
	.import	$$divI_5,millicode
	.import	$$divI_6,millicode
	.import	$$divI_7,millicode
	.import	$$divI_8,millicode
	.import	$$divI_9,millicode
	.import	$$divI_10,millicode
	.import	$$divI_12,millicode
	.import	$$divI_14,millicode
	.import	$$divI_15,millicode
	.export	$$divI,millicode
	.export	$$divoI,millicode
$$divoI:
	.proc
	.callinfo	NO_CALLS
	comib,=,n	-1,arg1,negative1	; when divisor == -1
$$divI:
	comib,>>=,n	15,arg1,small_divisor
	add,>=	0,arg0,retreg		; move dividend, if retreg < 0,
normal1:
	sub	0,retreg,retreg		; make it positive
	sub	0,arg1,temp		; clear carry,
					; negate the divisor
	ds	0,temp,0		; set V-bit to the comple-
					; ment of the divisor sign
	add	retreg,retreg,retreg	; shift msb bit into carry
	ds	r0,arg1,temp		; 1st divide step, if no carry
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 2nd divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 3rd divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 4th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 5th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 6th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 7th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 8th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 9th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 10th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 11th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 12th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 13th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 14th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 15th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 16th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 17th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 18th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 19th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 20th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 21st divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 22nd divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 23rd divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 24th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 25th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 26th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 27th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 28th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 29th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 30th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 31st divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 32nd divide step,
	addc	retreg,retreg,retreg	; shift last retreg bit into retreg
	xor,>=	arg0,arg1,0		; get correct sign of quotient
	sub	0,retreg,retreg		; based on operand signs
	bv,n	0(r31)
	nop
;______________________________________________________________________
small_divisor:
	blr,n	arg1,r0
	nop
; table for divisor == 0,1, ... ,15
	addit,=	0,arg1,r0	; trap if divisor == 0
	nop
	bv	0(r31)		; divisor == 1
	copy	arg0,retreg
	b,n	$$divI_2	; divisor == 2
	nop
	b,n	$$divI_3	; divisor == 3
	nop
	b,n	$$divI_4	; divisor == 4
	nop
	b,n	$$divI_5	; divisor == 5
	nop
	b,n	$$divI_6	; divisor == 6
	nop
	b,n	$$divI_7	; divisor == 7
	nop
	b,n	$$divI_8	; divisor == 8
	nop
	b,n	$$divI_9	; divisor == 9
	nop
	b,n	$$divI_10	; divisor == 10
	nop
	b	normal1		; divisor == 11
	add,>=	0,arg0,retreg
	b,n	$$divI_12	; divisor == 12
	nop
	b	normal1		; divisor == 13
	add,>=	0,arg0,retreg
	b,n	$$divI_14	; divisor == 14
	nop
	b,n	$$divI_15	; divisor == 15
	nop
;______________________________________________________________________
negative1:
	sub	0,arg0,retreg	; result is negation of dividend
	bv	0(r31)
	addo	arg0,arg1,r0	; trap iff dividend==0x80000000 && divisor==-1
	.procend
	.end
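; How the $$divI loop works, as a sketch in C-like pseudocode (this
; simplifies the ds semantics; the pseudonames rem/quo are illustrative
; only and do not appear in the code above): each ds/addc pair produces
; one quotient bit, so 32 pairs yield the full 32-bit quotient, and the
; trailing xor,>=/sub restores the sign from the operand signs.
;
;	quo = 0;
;	for (i = 0; i < 32; i++) {
;		/* ds: one conditional add/subtract divide step against
;		   the (negated) divisor held in temp */
;		/* addc: shift the quotient bit from the carry into
;		   retreg */
;		quo = (quo << 1) | quotient_bit_from_carry;
;	}
;	if ((dividend ^ divisor) < 0)
;		quo = -quo;	/* xor,>= ... / sub 0,retreg,retreg */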