There are a couple of ways you could achieve this -- software RAID level 0 (striping), or LVM (Logical Volume Management).
 
I think LVM is probably more flexible in this particular case, so let's look at that.
 
A word of warning -- when you combine multiple disks together to form one great big volume like this, you multiply the likelihood of losing data to a drive failure by the number of drives involved. If any one drive fails, it brings down the whole ship. 
You definitely need a backup of the whole volume if you're not willing to lose everything on it.
 
And, as with any operation involving formatting and repartitioning disks and such, you should have a backup before trying any of this anyway!
 
Ubuntu's Alternate installer offers a way to create LVM at install time, but by the sounds of it, you're already installed.
 
If you're already installed, you can still set it up, although a fair bit of command line/terminal action will be required. This is a more advanced setup than most people's Linux installs.
 
I created a little virtual machine test environment to have a play with LVM, and I managed to create one big volume from two virtual disks.
 
(A quick introduction to the way Linux talks about disks and partitioning.
 
Notice that Linux labels your physical disks like this:
 
sda -- first disk
sdb -- second disk
sdc -- third disk
 
and so on.
 
Partitions on individual disks are labelled with a number after the letter of the disk:
 
sda1 -- first primary partition on first disk
sda2 -- second partition on first disk
and so on...
 
However, if you have an extended partition with any number of logical drives in it, the first logical drive is labelled '5', regardless of whether there is or isn't a '4'!
 
sda5 -- first logical disk inside extended partition on first disk
sda6 -- second logical disk inside extended partition on first disk
 
These disk identifiers, when combined with the prefix /dev/, are the 'device nodes'.)
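If you're curious which of these device nodes exist on your own machine, listing them is harmless -- this assumes your drives show up as 'sd' devices, which SATA and USB disks normally do:
 
Code:
ls -l /dev/sd*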
 
So, I have three disks in my test VM -- sda is the disk where Ubuntu is installed, and I don't want to touch that at all. sdb and sdc are the two disks I want to combine into one big LVM volume.
 
These aren't instructions specifically for your setup, so don't follow them just yet. You'll likely have bigger disks and other differences, and we need to understand more about your specific environment. But it should give you a taste of how it's done, so here is what I did:
 
1.
 
Created a partition on sdb, and a partition on sdc, each of which filled the whole disk. You should be able to use the GParted Live CD for this. I now have sdb1 and sdc1, where I want the volume group to be created.
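(If you'd rather stay in the terminal instead of booting GParted, fdisk can create the partition as well. This is only a rough sketch -- the exact prompts differ a little between fdisk versions -- and accepting the default first and last sector makes the partition fill the whole disk:)
 
Code:
fdisk /dev/sdb
Command (m for help): n
(accept the defaults for partition type, number, first and last sector)
Command (m for help): w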
 
I used Linux fdisk to set the partition type of each of these to '8e', which is the partition type code for Linux LVM. This is so the LVM tools detect them properly.
 
Code:
fdisk /dev/sdb
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Command (m for help): w
Calling ioctl() to re-read partition table
Code:
fdisk /dev/sdc
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Command (m for help): w
Calling ioctl() to re-read partition table
Now, this is what the disks on my system look like:
 
Code:
$ sudo fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000b44e6
  Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    39987199    19992576   83  Linux
/dev/sda2        39989246    41940991      975873    5  Extended
/dev/sda5        39989248    41940991      975872   82  Linux swap / Solaris
Disk /dev/sdb: 12.9 GB, 12884901888 bytes
128 heads, 33 sectors/track, 5957 cylinders, total 25165824 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa93a8c3d
  Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    25165823    12581888   8e  Linux LVM
Disk /dev/sdc: 40.8 GB, 40802189312 bytes
149 heads, 52 sectors/track, 10285 cylinders, total 79691776 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x04ff00e1
  Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    79691775    39844864   8e  Linux LVM
2.
 
Back in Ubuntu, I installed LVM2:
 
Code:
sudo apt-get install lvm2
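Depending on the version of the LVM tools, you may need to mark each partition as an LVM 'physical volume' before it can join a volume group -- newer versions of vgcreate do this for you, but if the next command complains that the devices aren't initialised, run this first:
 
Code:
sudo pvcreate /dev/sdb1 /dev/sdc1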
Now, we create a new Volume Group from the two partitions on the two physical disks we want to use. The Volume Group is just a collection of disks that we want to use with LVM -- creating it doesn't do anything except collect the disks together, ready for the next stage.
 
Code:
sudo vgcreate vg1 /dev/sdb1 /dev/sdc1
We ask 'vgcreate' to create a volume group named 'vg1', using /dev/sdb1 and /dev/sdc1.
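If you want to convince yourself the group really spans both disks, vgdisplay should report a total size of roughly the two disks added together (swap in your own group name if you chose something other than vg1):
 
Code:
sudo vgdisplay vg1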
 
3.
 
With the Volume Group created from our two disks, we now need to create a volume inside it:
 
Code:
sudo lvcreate -L 45G -n data vg1
We ask 'lvcreate' to make a 45 GB logical volume, named data, on volume group vg1.
 
That ends up as 45 GB or so. I probably could have made it slightly larger, but as you can see, it is already larger than the 40 GB sdc disk on its own, so we must be using the combined space of both disks for the volume!
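(As an aside, rather than working out a safe size by hand, reasonably recent versions of lvcreate will happily use every last bit of free space in the group -- this is an alternative to the command above, not an extra step:)
 
Code:
sudo lvcreate -l 100%FREE -n data vg1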
 
4.
 
We now need to format the new logical volume with a filesystem, so that we can actually write files to it.
 
Code:
sudo mke2fs -j /dev/vg1/data
As you can see, the 'device node' for the new LVM logical volume that we made is accessed through /dev/, followed by vg1 (the Volume Group), followed by data (the Logical Volume name itself).
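(A small aside: mke2fs -j gives you an ext3 filesystem. If your Ubuntu release supports ext4, formatting that way should work just as well -- same device node, different tool. LVM also makes the same volume reachable as /dev/mapper/vg1-data, in case you see that form elsewhere.)
 
Code:
sudo mkfs.ext4 /dev/vg1/data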
 
5.
 
Finally, we need to create a mount point, which is where the files will actually appear and be accessible from. We must also mount the volume into that mount point.
 
A mount point is just a directory somewhere. Let's say /data.
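Creating that directory is just a one-liner (adjust the path if you'd rather put it somewhere else):
 
Code:
sudo mkdir /data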
 
 
Now, we will edit the file /etc/fstab and add our new logical volume to the list of filesystems that we want mounted and ready when the computer starts up.
 
Code:
sudo gedit /etc/fstab
At the bottom, add a new line with the device node, mount point and a few other options (which aren't too important right now).
 
Code:
/dev/vg1/data       /data      auto     rw      0     0
Save the file and quit. Finally, let's mount that now so we don't have to reboot to have access to it:
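Code:
sudo mount /data
mount only needs the mount point here, because it finds the device and options in the /etc/fstab entry we just added.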
 
 
Now, you can access /data, drop files in there and they will be saved onto these disks. How exciting!
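If you want to double-check that you really got the combined space, df should show the size of the new volume (about 45 GB in my little test setup):
 
Code:
df -h /data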
 
-----
 
If you do want to proceed further with this, it would be helpful to know exactly which version of Ubuntu you're running, and also to see the output of:
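Code:
sudo fdisk -l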
 
 
in a terminal, when you have your extra disks connected. This will show all the information about your disks.
 
Let us know if you want to go further! This is more advanced stuff, but it is quite possible to do. Just make sure you have your stuff backed up. :)