SAPHanaSR-ScaleOut-doc 0.185.2-150000.42.1

Name        : SAPHanaSR-ScaleOut-doc
Version     : 0.185.2
Release     : 150000.42.1
Architecture: noarch
Group       : Productivity/Clustering/HA
License     : GPL-2.0
Vendor      : SUSE LLC
Distribution: SUSE Linux Enterprise 15
URL         : https://www.suse.com/
See also    : http://scn.sap.com/community/hana-in-memory/blog/2014/04/04/fail-safe-operation-of-sap-hana-suse-extends-its-high-availability-solution
Source RPM  : SAPHanaSR-ScaleOut-0.185.2-150000.42.1.src.rpm
Build Host  : eh01-ch2d
Conflicts   : SAPHanaSR-doc
Summary     : Setup-Guide for SAPHanaSR

This sub package points to the Setup-Guide for getting SAP HANA system
replication under cluster control.

Changelog:

* abriel@suse.com
- Version bump to 0.185.2
  * fix cluster state in SAPHanaSR-manageAttr by supporting the different
    behaviour of 'crmadmin -q -S' in different pacemaker versions
    (bsc#1217830)
  * inside SAPHanaSR-hookHelper use the full path for the cibadmin command
    to support non-root users in special user environments (bsc#1216484)
  * if the SAPHanaSrMultiTarget hook has successfully reported an SR event
    to the cluster, a still existing fall-back state file will be removed
    to prevent an override of an already reported SR state (bsc#1215693)
  * update man pages: SAPHanaSR-ScaleOut.7,
    SAPHanaSR-ScaleOut_basic_cluster.7, SAPHanaSR_maintenance_examples.7,
    ocf_suse_SAPHanaTopology.7, ocf_suse_SAPHanaController.7,
    SAPHanaSR-showAttr.8

* abriel@suse.com
- Version bump to 0.185.1
  * improve supportability by providing the current process ID of the RA,
    which is logged in the RA outputs, to the HANA tracefiles too. This
    allows a mapping of the SAP related command invocations from the RA to
    the HANA executions, which might have a delay in between. (bsc#1214613)
  * modify the fix for a full /tmp filesystem (bsc#1210728)
  * update man pages: SAPHanaSR-ScaleOut.7
  * add improvements from SAP to the RA scripts, part II
    (jsc#PED-1739, jsc#PED-2608)

* abriel@suse.com
- Version bump to 0.185.0
  * avoid explicit and implicit usage of the /tmp filesystem to keep the
    SAPHanaSR resource agents working even in situations where the /tmp
    filesystem is full (bsc#1210728)
  * fix the path for the HA/DR provider hook in the global.ini examples
    to be useful (bsc#1210573)
  * update man pages: SAPHanaSR_maintenance_examples.7,
    SAPHanaSR-ScaleOut_basic_cluster.7, SAPHanaSrMultiTarget.py.7,
    SAPHanaSR.py.7, SAPHanaSR-monitor.8, SAPHanaSR-showAttr.8,
    SAPHanaSR-manageAttr.8
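To illustrate the kind of HA/DR provider section in global.ini that the
hook-path fix (bsc#1210573) refers to, here is a minimal Python sketch in
the spirit of what SAPHanaSR-manageProvider automates. The section name,
hook path and file location below are assumptions for illustration only;
the authoritative values are in the shipped man pages and examples.

    #!/usr/bin/env python3
    # Minimal sketch: write an HA/DR provider section into a HANA
    # global.ini. All concrete values below are illustrative assumptions,
    # not values taken from the package.
    import configparser

    GLOBAL_INI = "global.ini"  # stand-in for the SID's real global.ini

    config = configparser.ConfigParser()
    config.read(GLOBAL_INI)

    section = "ha_dr_provider_SAPHanaSrMultiTarget"
    if not config.has_section(section):
        config.add_section(section)

    # 'path' must point at the directory that actually contains the hook
    # script; a wrong path here is what bsc#1210573 corrected in the
    # shipped examples.
    config[section]["provider"] = "SAPHanaSrMultiTarget"
    config[section]["path"] = "/usr/share/SAPHanaSR-ScaleOut"
    config[section]["execution_order"] = "1"

    with open(GLOBAL_INI, "w") as fh:
        config.write(fh)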
* abriel@suse.com
- Version bump to 0.184.1
  * add new HA/DR provider hook susChkSrv, supporting a fast-dying
    indexserver (jsc#PED-1241, jsc#PED-1240)
  * add new HA/DR provider hook susTkOver for blocking manual takeovers
    (jsc#SLE-16347)
  * add improvements from SAP to the RA scripts regarding the handling of
    the SAP tools 'HDB version', 'HDBSettings.sh' and 'pycd', and the
    SAPHana log filter handling (jsc#PED-1739, jsc#PED-2608)
  * fix SAPHanaSR-replay-archive to handle hb_report archives again
    (bsc#1198897)
  * fix SAPHanaSR-monitor reporting "LPA status of one node is missing"
    (bsc#1192963, bsc#1203973)
  * add lost-nameserver-slave handling to SAPHanaTopology, to avoid
    toggling the SAPHanaController resource if all nameserver masters got
    lost. SAPHanaController keeps started if a WAITING4NODES event needs
    to be processed. This only keeps the resource 'started'; the SAP HANA
    instance will only be started if enough nodes are available to fulfill
    the needs of the SAP HANA landscape.
  * add new tool SAPHanaSR-manageProvider to show, add and delete HA/DR
    provider sections in the global.ini of SAP HANA
  * changes to the demote_clone function of the resource agent: if the
    role is '*:shtdown:shtdown:shtdown' (the topology agent ran into
    timeouts), the function fails with rc=1 to get the managed resource
    stopped
  * changes to the stop_clone function of the topology agent: call
    landscapeHostConfiguration.py and set the roles as they were reported.
    If the command timed out, set the role to '*:shtdown:shtdown:shtdown'
    and return 1 to get the node fenced. The timeout used for the
    landscapeHostConfiguration.py call can be configured via the cluster
    action timeout, if needed: it will be 50% of the action timeout, with
    a minimum of 300s. (bsc#1198127)
  * SAPHanaSRTools.pm: show the terminate node attribute too
  * change SAPHanaSR-manageAttr to support the different behaviour of
    'crmadmin -qD' in different pacemaker versions (bsc#1200969)
  * fix the HANA_CALL function to support MCOS environments again
    (bsc#1198780)
  * correct the order constraint in man page ocf_suse_SAPHanaTopology.7
    (bsc#1197239)

* abriel@suse.com
- change version to 0.181.0
- Add systemd support for the resource agents to interact with the new SAP
  unit files for sapstartsrv. As the new version of the SAP Startup
  Framework uses systemd unit files to control the sapstartsrv process
  instead of the previously used SysV init script, the handling of
  sapstartsrv inside the resource agents had to be adapted to support both
  ways. (bsc#1189532, bsc#1189533)
- add dedicated logging of HANA_CALL problems. It is now possible to
  identify whether the called HANA command or the needed su command throws
  the error, and for further hints the stderr output is logged.
  Additionally, regular log messages for the used commands, their return
  code and their stderr output can be obtained by enabling the 'debug'
  mode of the resource agents. (bsc#1182774)
- add switch 'cib_access' to the SAPHanaSrMultiTarget hook to give control
  over the hook runtime. The default is 'all-on', which means three cib
  calls are performed inside the hook script. Changing the value of
  'cib_access' in the global.ini file to 'site-on' makes the hook perform
  the absolute minimum of cib calls (only one). (bsc#1189540)
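As an illustration of the 'cib_access' switch just described, the sketch
below shows how a hook script could read such a switch from global.ini.
The file path and section name are assumptions for illustration; the
actual parameter handling lives in SAPHanaSrMultiTarget.py and its man
page.

    #!/usr/bin/env python3
    # Minimal sketch, not the shipped hook code: read the 'cib_access'
    # switch from the hook's provider section in global.ini.
    import configparser

    config = configparser.ConfigParser()
    config.read("global.ini")  # stand-in for the SID's real global.ini

    # Section name and the 'all-on' default follow the changelog wording;
    # treat both as assumptions rather than the real implementation.
    cib_access = config.get("ha_dr_provider_SAPHanaSrMultiTarget",
                            "cib_access", fallback="all-on")

    if cib_access == "site-on":
        print("perform only the single, site-scoped cib call")
    else:
        print("'all-on': perform all three cib calls")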
* abriel@suse.com
- change version to 0.180.1
- Multi-target SAP HANA scale-out replication support: extend the SAP HANA
  resource agents from single replication automation to multi replication
  automation, which means adding multi-target support (srHook -
  SAPHanaSrMultiTarget.py)
- /usr/sbin/SAPHanaSR-manageAttr - add a helper tool to support the
  customer during the update from single replication automation to multi
  replication automation (jsc#SLE-17452, jsc#SLE-20081)
- The resource start and stop timeout is now configurable by increasing
  the timeout for the action 'start' and/or 'stop' in the cluster. 95% of
  these action timeouts is used to calculate the new resource start and
  stop timeout for the 'WaitforStarted' and 'WaitforStopped' functions. If
  the new, calculated timeout value is less than '3600', it will be set to
  '3600', so that this timeout is not decreased by accident in currently
  running clusters. (bsc#1182545)
- improve handling of return codes in the saphana_stopSystem and
  saphana_stop functions (bsc#1182115)
- If the cluster is down during an SR change event, these events need
  special care: they need to be picked up separately so they do not get
  lost.
- integrate the man pages back into the base package SAPHanaSR-ScaleOut.
  The doc package SAPHanaSR-ScaleOut-doc still exists, but only contains
  the file 'SAPHanaSR-Setup-Guide.pdf'.
- During an SR takeover there may be a small time frame where the srHook
  gets a state change from the API with an empty site name. This results
  in a mis-configuration of the cluster. So log the occurrence of an empty
  site name, but do not generate a badly formatted cluster attribute name.
  (analogous to scale-up - bsc#1173581)
- add SAPHanaSR-call-monitor
- handle 'leftover instances': if HANA is configured to have only one
  master name server and no additional master name server candidates,
  there may be the situation where the master name server died and the
  landscape has no active name server anymore, while other instances are
  still running with active communication channels. To prevent data
  corruption or operations on outdated data, these 'leftover' instances
  are identified and stopped.
- add a workaround for 'sapcontrol -function WaitforStopped' terminating
  too early. If the master name server dies and does not have a failover
  candidate, but there are still running worker nodes, the command
  'sapcontrol -function WaitforStopped' terminates too early and does not
  wait until all the remaining worker nodes are down. So the resource
  agent needs to wait during its 'stopSystem' function until the last
  'partial' node is down before returning a result to the cluster.
  (bsc#1196650)
- manual page updates: SAPHanaSR-ScaleOut.7 (bsc#1144442),
  SAPHanaSR-showAttr.8 (bsc#1144312) and others

* abriel@suse.com
- change version to 0.164.2
- The resource start and stop timeout is now configurable by increasing
  the timeout for the action 'start' and/or 'stop'. 95% of these action
  timeouts is used to calculate the new resource start and stop timeout
  for the 'WaitforStarted' and 'WaitforStopped' functions. If the new,
  calculated timeout value is less than '3600', it will be set to '3600',
  so that this timeout is not decreased by accident. (bsc#1182545)
- add return codes for saphana_stop and saphana_StopSystem (bsc#1182115)
- man page SAPHanaSR-ScaleOut: fix minor mistakes (bsc#1144442)
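The 95%-with-a-3600s-floor rule described in the 0.180.1 and 0.164.2
entries above can be summarized in a short sketch. This is a reading of
the changelog wording, not the resource agent's actual code; the function
and variable names are made up for illustration.

    def waitfor_timeout(action_timeout_secs: int) -> int:
        """Derive the 'WaitforStarted'/'WaitforStopped' timeout from the
        configured 'start'/'stop' action timeout: use 95% of the action
        timeout, but never less than 3600s, so existing clusters do not
        end up with a shorter timeout by accident."""
        return max(int(action_timeout_secs * 0.95), 3600)

    # A 7200s stop action timeout yields a 6840s wait timeout; anything
    # at or below ~3789s falls back to the 3600s floor.
    assert waitfor_timeout(7200) == 6840
    assert waitfor_timeout(3600) == 3600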
* abriel@suse.com
- change version to 0.164.1
- adapt man page SAPHanaSR-showAttr(8) and the README (bsc#1144729)

* abriel@suse.com
- PROMOTED/PROMOTED fix: the PROMOTED/PROMOTED values appeared after the
  maintenance procedure, and a refresh of the resource did not fix that
  (bsc#1176330)
- Improved scoring and logging: the score of the secondary in the takeover
  phase was increased from 122 to 145 to avoid promotion of former primary
  master nameserver candidates (bsc#1174610)
- Fixed typos and improved descriptions in comments
- Change default timeouts and intervals to match the official
  recommendations

* abriel@suse.com
- let the SAPHanaSR-ScaleOut-doc package conflict with the SAPHanaSR-doc
  package (bsc#1157685)

* abriel@suse.com
- change version to 0.164.0
- restart the sapstartsrv service on the master nameserver node
  (bsc#1156150)
- Use a fall-back scoring for the master nameserver nodes if the current
  roles of the node(s) got lost (bsc#1156067)
- clean up the package, add checks, correct typos

* clanig@suse.de
- Version 0.163.2
- Fix bsc#1098979: SAPHanaSR-ScaleOut SAPHanaTopology and
  SAPHanaController now allow virtual host names

* imanyugin@suse.com
- Version 0.163.1

* imanyugin@suse.com
- Fix bsc#1092331: SAPHanaSR: SAPHanaSR-showAttr fails to open an archived
  cib file
- Fix bsc#1091988: SAPHanaSR-ScaleOut SAPHanaSR-monitor depends on a
  package not existing in SLES
- SAPHanaSR-showAttr and SAPHanaSR-monitor moved to /usr/sbin to match the
  file layout in SAPHanaSR-ScaleUp

* imanyugin@suse.com
- Version 0.163.0
- Fix bsc#1045603: Update man pages
- Fix bsc#1045536: SAPHanaSR-ScaleOut missing update to use SLES python
- Fix bsc#1086545: minor typos in package description and man pages

* imanyugin@suse.com
- Version 0.161.1
- Added a package conflict for SAPHanaSR (bsc#989162)
- Restructure the package and the .spec file

* imanyugin@suse.com
- Initial technical preview release (fate#318793)

Files:
/usr/share/doc/packages/SAPHanaSR-ScaleOut/SAPHanaSR-Setup-Guide.pdf