Recital

Recital 10 enhances Recital by enabling it to be used in bash shell scripts and in shell commands that use pipes and/or redirect stdin and stdout. If stdin is not redirected, Recital starts up and operates as normal in a terminal window. Additionally, you can use a heredoc to denote a block of Recital commands that should be executed. Note that when used in this manner, no UI commands can be executed and no user interaction is allowed.
# recital < myprog.prg 
# recital < myprog.prg > myoutput.txt
# recital > myoutput.txt <<END
use customers
list structure
END
# echo "select * from sales!customers where overdue" | recital | wc -l
Individual commands can be executed in shell scripts.
# recital -c "create database sales"
# recital -c "create table sales!invoices (id int, name char(25), due date)"
Expressions can be evaluated and used in shell scripts.
# VER=`recital -e "version(1)"`
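Putting these pieces together, here is a minimal sketch of a reporting script built only from the examples above (the sales!customers table and its overdue column are assumed to exist; adjust the names to your own schema):

#!/bin/sh
# Hypothetical report script: query the server version, then count overdue
# customers by piping a SQL statement through a non-interactive recital session.
VER=`recital -e "version(1)"`
echo "Recital version: $VER"
OVERDUE=`echo "select * from sales!customers where overdue" | recital | wc -l`
echo "Approximate number of overdue customer rows: $OVERDUE"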
You can view what command line options are available by typing:
# recital --help
Published in Blogs
Read more...
Recital Web: cookies, sessions, 64-bit Apache module: documentation update:

Recital Web Getting Started
Published in Blogs
Read more...

DRBD:
DRBD (Distributed Replicated Block Device) forms the storage redundancy portion of an HA cluster setup. Explained in basic terms, DRBD provides a means of achieving RAID 1 behavior over a network, where whole block devices are mirrored across the network.

To start off you will need two identically sized raw drives or partitions. Many how-tos on the internet assume the use of whole drives; of course this gives better performance, but if you are simply getting familiar with the technology you can repartition existing drives to allow for two equally sized raw partitions, one on each of the systems you will be using.

There are 3 DRBD replication modes:
• Protocol A: Write I/O is reported as completed as soon as it has reached the local disk and the local TCP send buffer.
• Protocol B: Write I/O is reported as completed as soon as it has reached the local disk and the remote TCP buffer cache.
• Protocol C: Write I/O is reported as completed as soon as it has reached both the local and remote disks.

If we were installing the HA cluster on a slow LAN, or if the geographical separation of the systems involved was great, then I recommend you opt for asynchronous mirroring (Protocol A), where the notification of a completed write operation occurs as soon as the local disk write is performed. This will greatly improve performance.

As we are setting up our HA cluster connected via a fast LAN, we will be using DRBD in fully synchronous mode, Protocol C.
With Protocol C, the file system on the active node is only notified that the write operation has finished when the block has been written to both disks of the cluster. Protocol C is the most commonly used mode of DRBD.

/etc/drbd.conf

global { usage-count yes; }
common { syncer { rate 10M; } }
resource r0 {
  protocol C;
  net {
    max-buffers 2048;
    ko-count 4;
  }
  on bailey {
    device    /dev/drbd0;
    disk      /dev/sda4;
    address   192.168.1.125:7789;
    meta-disk internal;
  }
  on giskard {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   192.168.1.127:7789;
    meta-disk internal;
  }
}

drbd.conf explained:

Global section, usage-count: the DRBD project keeps statistics about the usage of DRBD versions. It does this by contacting an HTTP server each time a new DRBD version is installed on a system. This can be disabled by setting usage-count no;.

The common section contains configuration inherited by all defined resources.
The synchronisation rate is set in the syncer section by assigning a value to the rate setting. The synchronisation rate refers to the rate at which data is mirrored in the background. The best setting for the synchronisation rate depends on the speed of the network over which the DRBD systems communicate: 100Mbps Ethernet supports around 12MBps, Gigabit Ethernet somewhere around 125MBps.

In the configuration above we have a resource defined as r0; the nodes are configured in the "on" host subsections.
"Device" configures the path of the logical block device that will be created by DRBD.
"Disk" configures the block device that will be used to store the data.
"Address" configures the IP address and port number of the host that will hold this DRBD device.
"Meta-disk" configures the location where the metadata about the DRBD device will be stored.
You can set this to internal and DRBD will use the physical block device to store the information, by recording the metadata within the last section of the disk.
Once you have created your configuration file, you must carry out the following steps on both nodes.

Create device metadata.

$ drbdadm create-md r0
v08 Magic number not found
Writing meta data...
initialising activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success

Attach the backing device.
$ drbdadm attach r0

Set the syncronisation parameters.
$ drbdadm syncer r0

Connect it to the peer.
$ drbdadm connect r0

Run the service.
$ service drbd start
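Both nodes are now attached and connected, but neither is primary yet and /dev/drbd0 has no filesystem. As a rough sketch of the remaining first-time steps, run on the node that will start as primary (bailey in this example); note that the exact drbdadm flag varies between DRBD releases, with newer versions using drbdadm primary --force instead:

$ drbdadm -- --overwrite-data-of-peer primary r0
$ cat /proc/drbd                  # watch the initial synchronisation progress
$ mkfs.ext3 /dev/drbd0            # filesystem type matches the haresources entry below
$ mkdir /drbdData                 # mount point used later by heartbeat (create on both nodes)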

Heartbeat:

Heartbeat provides the IP redundancy and the service HA functionality.
On failure of the primary node, the VIP is assigned to the secondary node and the services configured to be HA are started on the secondary node.

Heartbeat configuration:

/etc/ha.d/ha.cf

## /etc/ha.d/ha.cf on node1
## This configuration is to be the same on both machines
## This example is made for version 2, comment out crm if using version 1
// Replace the node names below with the names of your own nodes.

crm no
keepalive 1
deadtime 5
warntime 3
initdead 20
bcast eth0
auto_failback yes
node bailey
node giskard

/etc/ha.d/authkeys
// The configuration below effectively turns authentication and encryption off for node packets (crc is only a checksum).
// Note: make sure the authkeys file has the correct permissions (chmod 600).

## /etc/ha.d/authkeys
auth 1
1 crc
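crc offers no real protection, so if the nodes do not communicate over a dedicated, trusted link you may prefer sha1 authentication. A hypothetical way to generate such an authkeys file on one node (copy the same file to the other node afterwards):

cat > /etc/ha.d/authkeys <<EOF
auth 1
1 sha1 `dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | cut -d' ' -f1`
EOF
chmod 600 /etc/ha.d/authkeys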

/etc/ha.d/haresources
// 192.168.1.40 is the VIP (Virtual IP) assigned to the cluster.
// The "smb" entry in the configuration line represents the service we wish to make HA.
// /dev/drbd0 is the DRBD device belonging to the r0 resource you configured in drbd.conf.

## /etc/ha.d/haresources
## This configuration is to be the same on both nodes

bailey 192.168.1.40 drbddisk Filesystem::/dev/drbd0::/drbdData::ext3 smb
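With the three files in place on both nodes, heartbeat can be started; as a quick check (interface and VIP as configured above), the primary node should then hold the virtual address:

$ service heartbeat start
$ ip addr show eth0 | grep 192.168.1.40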

Published in Blogs
Read more...

In this article Barry Mavin, CEO and Chief Software Architect for Recital, details how to work with Triggers in the Recital Database Server.

Overview

A trigger is a special kind of stored procedure that runs when you modify data in a specified table using one or more of the data modification operations: UPDATE, INSERT, or DELETE.

Triggers can query other tables and can include complex SQL statements. They are primarily useful for enforcing complex business rules or requirements. For example, you can control whether to allow a new order to be inserted based on a customer's current account status.

Triggers are also useful for enforcing referential and data integrity.

Triggers can be used with any data source that is handled natively by the Recital Database Engine. This includes Recital, FoxPro, FoxBASE, Clipper, dBase, CISAM, and RMS data.

Creating and Editing Triggers

To create a new Trigger, right-click the Procedures node in the Databases tree of the Project Explorer and choose Create. To modify an existing Trigger, select the Trigger in the Databases tree of the Project Explorer by double-clicking on it, or select Modify from the context menu. By convention we recommend that you name your Stored Procedures beginning with "sp_xxx_", user-defined functions with "f_xxx_", and Triggers with "dt_xxx_", where xxx is the name of the table that they are associated with.

Associating Triggers with a Table

Once you have written your Triggers as detailed above, you can associate them with the operations performed on a Table by selecting the Tables tab.

The Tables tab allows you to select a Trigger procedure by clicking on the small button at the right of the Text field.

Types of Triggers

As can be seen from the Tables tab detailed below, the Recital Database Server handles six distinct types of Triggers.

Open Trigger

The Open Trigger is called after a table is opened but before any operations are performed on it. You can use this trigger to record a log of table usage or provide a programmable means of checking security. If the Trigger procedure returns .F. (false), then the table is not opened. You can use a TRY...CATCH block around the associated command to inform the user.

Close Trigger

The Close Trigger is called just prior to a table being closed. In this trigger you may find it useful to get transaction counts by using the IOSTATS() built-in 4GL function, and record these values in a transaction log.

Update Trigger

The Update Trigger is called prior to a record update operation being performed. You can use this trigger to perform complex application or data specific validation. If the Trigger procedure returns .F. (false), then the record is not updated. You can inform the user from within the Trigger procedure of the reason that the data cannot be updated.

Delete Trigger

The Delete Trigger is called prior to a record delete operation being performed. You can use this trigger to perform complex application or data specific validation such as cross-table lookups, e.g. attempting to delete a customer record when there are still open orders for that specific customer. If the Trigger procedure returns .F. (false), then the record is not deleted.
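As a minimal sketch of that cross-table check (the customers and orders tables, the custid column and the procedure name are all hypothetical, and the exact 4GL commands available depend on your Recital version), a Delete Trigger might look like this:

* dt_customers_del.prg
* Hypothetical Delete Trigger for a customers table: refuse the delete
* while rows for this customer still exist in an orders table.
procedure dt_customers_del
    private lnArea, lnCount
    lnArea = select()                          && remember the current workarea
    select 0
    use orders
    count to lnCount for custid = customers.custid
    use
    select (lnArea)                            && restore the original workarea
    return (lnCount = 0)                       && .F. blocks the delete when open orders exist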

Insert Trigger

The Insert Trigger is called prior to a record insert (append) operation being performed. You can use this trigger to perform such tasks as setting up default values of columns within the record. If the Trigger procedure returns .F. (false), then the record is not inserted.

Rollback Trigger

The Rollback Trigger is called prior to a rollback operation being performed from within a form. If the Trigger procedure returns .F. (false), then the record is not rolled back to its original state.

Testing the Trigger

To test run the Trigger, select the Trigger in the Databases Tree in the Project Explorer by double-clicking on it. Once the Database Administrator is displayed, click the Run button to run the Trigger.

Published in Blogs
Read more...
Recital 10 enhances the APPEND FROM command. The enhancement applies to the following syntax:
APPEND FROM  <table-name> 
Previously, when appending into a shared Recital table, each new row was locked along with the table header and then unlocked after it had been inserted. This operation has now been enhanced to lock the table once, insert all the rows from the source table, and then unlock the table, which improves the performance of the operation. All database and table constraints are still enforced.
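For example, a bulk load can be scripted using the non-interactive invocation described earlier (the table names here are hypothetical):

recital <<END
use sales!invoices shared
append from newinvoices
END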
Published in Blogs
Read more...
TIP
To access the menu bar in Recital, press the / key.

Full details on Recital Function Keys can be found in the Key Assist section of the Help menu, or in our documentation wiki here.
Published in Blogs
Read more...
By default Recital uses PAM to authenticate users.  It is also possible to tell PAM to use Kerberos.  Simply replace the existing entries in the /etc/pam.d/recital file with the ones below:

auth       sufficient   pam_krb5.so try_first_pass
auth       sufficient   pam_unix.so shadow nullok try_first_pass
account    required     pam_unix.so broken_shadow
account    [default=bad success=ok user_unknown=ignore] pam_krb5.so
Published in Blogs
Read more...

If you have 4GB or more of RAM, use the Linux kernel compiled for PAE-capable machines; without it your machine may not show the full 4GB of RAM. All you have to do is install the PAE kernel package.

This package includes a version of the Linux kernel with support for up to 64GB of high memory. It requires a CPU with Physical Address Extensions (PAE).

The non-PAE kernel can only address up to 4GB of memory. Install the kernel-PAE package if your machine has 4GB of memory or more.

# yum install kernel-PAE

If you want to know how much memory CentOS can see, type this in a terminal:

# cat /proc/meminfo
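After rebooting into the new kernel you can confirm that the PAE kernel is the one actually running and pick out just the total memory (standard CentOS kernel naming and /proc/meminfo field names assumed):

# uname -r
# grep MemTotal /proc/meminfo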
Published in Blogs
Read more...

Here is a simple shell script to copy your ssh authorization key to a remote machine so that you can run ssh and scp without having to repeatedly login.

#!/bin/sh
# save in file ssh_copykeyto.sh then chmod +x ssh_copykeyto.sh
KEY="$HOME/.ssh/id_rsa.pub"
if [ ! -f "$KEY" ]; then
    echo "public key not found at $KEY"
    echo "create it with 'ssh-keygen -t rsa' before running this script"
    exit 1
fi
if [ -z "$1" ]; then
    echo "Bad args: specify user@host as the first argument to this script"
    exit 1
fi
echo "Copying ssh authorization key to $1..."
KEYCODE=`cat "$KEY"`
ssh -q "$1" "mkdir -p ~/.ssh; chmod 700 ~/.ssh; echo '$KEYCODE' >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys"
echo "done!"
Published in Blogs
Read more...
 
System Requirements:
  • Minimum memory: 4MB
  • Minimum disk space: ~20MB
The Recital Runtime System (RTS) executes the object code generated by the Recital compiler. Object files are read from disk and loaded dynamically into shared memory segments. The advantage of this is that when an application has been loaded and is being run by one user, further users share the same object code in memory. This results in performance gains, reduced memory consumption and also provides a high degree of scalability for Recital applications.
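On Linux you can list the shared memory segments currently allocated on the system, which will include those used by the RTS (exactly which segments belong to Recital depends on your installation):

# ipcs -m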
Published in Blogs
Read more...

Copyright © 2025 Recital Software Inc.
