./configure CFLAGS='-arch x86_64' APXSLDFLAGS='-arch x86_64' --with-apxs=/usr/sbin/apxs
Then you must pass these additional flags to the apxs command in order to generate a Universal Binary shared module.
-Wl,-dynamic -Wl,'-arch ppc' -Wl,'-arch ppc64' -Wl,'-arch i386' -Wl,'-arch x86_64' -Wc,-dynamic -Wc,'-arch ppc' -Wc,'-arch ppc64' -Wc,'-arch i386' -Wc,'-arch x86_64'
If you then run the file command on the shared module it should return:
$ file mod_recital2.2.so
mod_recital2.2.so: Mach-O universal binary with 4 architectures
mod_recital2.2.so (for architecture ppc7400): Mach-O bundle ppc
mod_recital2.2.so (for architecture ppc64): Mach-O 64-bit bundle ppc64
mod_recital2.2.so (for architecture i386): Mach-O bundle i386
mod_recital2.2.so (for architecture x86_64): Mach-O 64-bit bundle x86_64
The Apache module files are stored in the /usr/libexec/apache2/ directory on a default Apache install on the Mac, and the configuration file is /private/etc/apache2/httpd.conf.
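For illustration, a complete apxs build command with those flags might look like the following; the source file name here is hypothetical, since in practice the Recital build invokes apxs for you:
$ /usr/sbin/apxs -c \
    -Wl,-dynamic -Wl,'-arch ppc' -Wl,'-arch ppc64' -Wl,'-arch i386' -Wl,'-arch x86_64' \
    -Wc,-dynamic -Wc,'-arch ppc' -Wc,'-arch ppc64' -Wc,'-arch i386' -Wc,'-arch x86_64' \
    mod_recital2.2.c    # hypothetical source file name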
If a build fails with the error:
collect2: cannot find ld
it means gcc cannot locate the linker in its install directories. You can list the directories gcc searches with:
gcc -print-search-dirs
and then copy the system linker into the gcc install directory under the name real-ld, which collect2 looks for:
cp /usr/bin/ld /usr/libexec/gcc/i386-redhat-linux/4.1.2/real-ld
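As a quick check, you can ask gcc which ld it resolves; -print-prog-name= is a standard gcc option, and a bare "ld" in the output means gcc is still falling back to whatever ld is on the PATH:
$ gcc -print-prog-name=ld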
It would appear that gigabit LAN often isn't gigabit at all! In fact it frequently runs at the same speed as 100Mbps LAN. Let's look at exactly why.
After configuring your network you can use the ifconfig command to see what speed the LAN is connected at. Even though the card reports 1000Mbps, the reality is that the overall throughput may well be ~100Mbps. You can try copying a large file using scp to demonstrate this.
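As a quick sanity check (assuming a Linux system where the interface is eth0 and ethtool is installed; the host and file names are placeholders), you can look at the negotiated link speed and then time a large copy:
# ethtool eth0 | grep Speed      # typically reports e.g. Speed: 1000Mb/s
$ scp /tmp/bigfile.iso user@server:/tmp/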
As it turns out, in order to use a gigabit LAN you need to use CAT6 cables; CAT5 and CAT5E are not good enough. The end result is that the ethernet cards throttle back the speed to reduce dropped packets and errors.
You can find a good article here titled Squeeze Your Gigabit NIC for Top Performance. After tuning the TCP parameters I found that it made no difference. The principal reasons behind low gigabit ethernet performance can be summed up as follows.
- Need to use CAT6 cables
- Slow Disk speed
- Limitations of the PCI bus which the gigabit ethernet cards use
You can get an idea about the disk speed using the hdparm command:
Display the disk partitions and choose the main Linux partition which has the / filesystem.
# fdisk -l
Then get disk cache and disk read statistics:
# hdparm -Tt /dev/sda1
On my desktop system the SATA disk performance is a limiting factor. These were the results:
/dev/sda1:
Timing cached reads: 9984 MB in 2.00 seconds = 4996.41 MB/sec
Timing buffered disk reads: 84 MB in 3.13 seconds = 58.49 MB/sec
Well, that equates to a raw disk read speed of 58.49 * 8 = 467Mbps which is half the speed of a gigabit LAN.
So.. NAS storage with lots of memory looks to be the way to go... If you use the right cables!
In this article Barry Mavin explains step by step how to set up a Linux HA (High Availability) cluster for running Recital applications on Redhat/Centos 5.3, although the general configuration should work for other Linux versions with a few minor changes.
APPEND FROM <table-name>
Previously, when appending into a shared Recital table, each new row was locked along with the table header and then unlocked after it was inserted. This operation has now been enhanced to lock the table once, insert all the rows from the source table, and then unlock the table, which increases the performance of the operation. All the database and table constraints are still enforced.
After split brain has been detected, one node will always have the resource in a StandAlone connection state. The other might either also be in the StandAlone state (if both nodes detected the split brain simultaneously), or in WFConnection (if the peer tore down the connection before the other node had a chance to detect split brain).
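To see the current connection state of the resource on each node you can use the standard drbdadm cstate command or look at /proc/drbd (here, as in the commands below, resource stands for the name of your DRBD resource):
# drbdadm cstate resource
# cat /proc/drbd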
At this point, unless you configured DRBD to automatically recover from split brain, you must manually intervene by selecting one node whose modifications will be discarded (this node is referred to as the split brain victim). This intervention is made with the following commands:
# drbdadm secondary resource
# drbdadm disconnect resource
# drbdadm -- --discard-my-data connect resource
On the other node (the split brain survivor), if its connection state is also StandAlone, you would enter:
# drbdadm connect resource
You may omit this step if the node is already in the WFConnection state; it will then reconnect automatically.
If all else fails and the machines are still in a split-brain condition then on the secondary (backup) machine issue:
# drbdadm invalidate resource
$ hdiutil create /tmp/tmp.dmg -ov -volname "RecitalInstall" -fs HFS+ -srcfolder "/tmp/macosxdist/"
$ hdiutil convert /tmp/tmp.dmg -format UDZO -o RecitalInstall.dmg
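To check the finished image you can verify its checksums and mount it using the standard hdiutil verify and attach verbs:
$ hdiutil verify RecitalInstall.dmg
$ hdiutil attach RecitalInstall.dmg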