WARNING: Rants ahead.
A simple request: migrate a schema from one database to another, right?
Create the new database, then perform a schema export and import. This only works if the objects are self-contained.
The following objects, to name a few, are missing from a schema-level export.
Here's what was done; hopefully nothing was missed. The tablespaces (TBS) were pre-created.
$ cat impdp_full_public.par
userid="/ as sysdba"
$ cat (more...)
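The excerpt above shows only the userid line; for context, a fuller full-import parameter file might look something like this (a sketch only — the directory, dump file, and log file names are assumptions, not taken from the post):

```
userid="/ as sysdba"
directory=DATA_PUMP_DIR
dumpfile=full_public%U.dmp
logfile=impdp_full_public.log
full=y
table_exists_action=skip
```

It would then be run with `impdp parfile=impdp_full_public.par`.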
We live in exciting times: Oracle Database 12.2 for Exadata was released earlier today.
The 12.2 database had already been available on the Exadata Express Cloud Service and Database as a Service for a few months.
Today, it has been released for Exadata on-premises, five days earlier than the initial Oracle announcement of 15th Feb.
The documentation suggests that to run 12.2 database you need to run at least 12.1.2. (more...)
This is just a quick blog entry to showcase a few of the publications from IT vendors showcasing SLOB. SLOB allows performance engineers to speak in short sentences. As I’ve pointed out before, SLOB is not used to test how well Oracle handles transactions. If you are worried that Oracle cannot handle transactions then you have bigger problems than what can be tested with SLOB. SLOB is how you test whether–or how well–a platform can (more...)
I recently created a 4-node, container-based (Proxmox LXC) Hortonworks Data Platform 2.5 Hadoop cluster, and all went well apart from the charts on the Ambari homepage, which were all blank or showing “N/A”, like this:
An outline of the environment:
- 4 node cluster of LXC containers on Proxmox host
- Centos 7 Linux OS
- Nodes are called bishdp0[1-4], all created from the same template with identical configuration
- All containers are on 192.168.1.0/24 (more...)
There’s a thread running on OTN at present about deleting huge volumes of duplicated data from a table (to reduce it from 1.1 billion to about 22 million rows). The thread isn’t what I’m going to talk about, though, other than quoting some numbers from it to explain what this post is about.
An overview of the requirement suggests that a file of about 2.2 million rows is loaded into the table every (more...)
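The underlying task, keeping one copy of each duplicated row and removing the rest, can be sketched outside the database in plain shell (the data here is made up, not the table from the thread):

```shell
# Build a small sample file where the first field acts as the key;
# repeated keys stand in for the duplicated rows.
cat > /tmp/rows.txt <<'EOF'
1 first-load
2 first-load
1 second-load
3 second-load
2 third-load
EOF

# Keep only the first row seen for each key; every later duplicate
# is dropped. In SQL terms this is the "delete every row whose rowid
# is not the minimum rowid for its key" pattern.
awk '!seen[$1]++' /tmp/rows.txt
```

At 1.1 billion rows the interesting part is of course not the logic but how to do it without blowing up undo and redo, which is what the thread goes on to discuss.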
In my Oracle’s Cloud Licensing Change : Be Warned! post I said, “It’s getting really hard to remain an Oracle fanboy these days!”
Since then I’ve heard a number of stories of customers being contacted and told they need to double their licensing for systems on the cloud. This is for existing systems that were “fully licensed” before 23rd January 2017. I’ve also heard of a number of big companies that have now made policy changes (more...)
$ echo $ORACLE_HOME
$ $ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME
Oracle Interim Patch Installer version 184.108.40.206.3
Copyright (c) 2017, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/oracle/product/12.1.0/db_1
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/product/12.1.0/db_1/oraInst.loc
OPatch version : 220.127.116.11.3
OUI version : 18.104.22.168.0
Log file location : /u01/app/oracle/product/12.1.0/db_1/cfgtoollogs/opatch/opatch2017-02-08_15-56-03PM_1.log
List of Homes on this (more...)
It’s been a while since the Israeli user group (iloug) had a technology meetup (SIG meeting). The last time that happened was over two years ago – and since then, we only had the bigger conferences with guests from all over the world. Yesterday we revived that long-standing tradition and held such a meetup.
Although I am not part of the OUG board (and not for the lack of trying, just no elections for (more...)
12c (22.214.171.124.0) RAC Oracle Linux Server release 7.3
/u01/software/grid/runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
Starting Clock synchronization checks using Network Time Protocol(NTP)...
Checking existence of NTP configuration file "/etc/ntp.conf" across nodes
Node Name File exists?
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP configuration file "/etc/ntp.conf" existence check passed
Checking daemon liveness...
Check: Liveness for "ntpd"
During the Upgrade of Oracle Linux 7.2 to 7.3 with “yum update”, when updating the microcode_ctl package, the system crashed and firmware reported an uncorrectable CPU error after reboot.
It turned out that the update of the package in combination with specific Intel CPUs causes the issue. In the meantime, there is a Red Hat bug report and a solution available.
- MOS: The system could (more...)
Common vs Local Users
A 12c Multitenant Database is designed to make consolidation easier. The PDBs have not only their application data in separate tablespaces, they also have their application metadata in separate SYSTEM tablespaces. That’s what makes it so easy and fast to unplug a PDB.
The SYSTEM tablespace of the CDB contains the internal metadata that is shared by all PDBs. Internal metadata (the dictionary tables) and internal objects (like the DBMS* packages) (more...)
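To make the common/local distinction concrete, this is the kind of DDL involved (illustrative only; the user and PDB names are made up, and C## is the default common-user prefix):

```sql
-- A common user exists in the root and in every PDB (note the C## prefix).
CREATE USER c##dba_admin IDENTIFIED BY secret CONTAINER=ALL;

-- A local user exists only in the PDB in which it is created.
ALTER SESSION SET CONTAINER = pdb1;
CREATE USER app_owner IDENTIFIED BY secret;
```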
In the last part of this installment I'll have a brief look at the network performance measured in the Oracle DBaaS environment, in particular the network interface that gets used as private interconnect in case of RAC configuration. The network performance could also be relevant when evaluating how to transfer data to the cloud database.
I've used the freely available "iperf" tool to measure the network bandwidth and got the following results:
[root@test12102rac2 ~]# iperf3 (more...)
I was playing around with the Exadata X2-2 in the Enkitec lab this weekend, and hit an interesting issue when patching the storage servers. We were taking the system up to version 126.96.36.199.3 for testing purposes. I fired off the patchmgr script, and one of the storage servers failed when beginning the first phase of the patching cycle:
[root@enkdb03 patch_188.8.131.52.3.161208]# ./patchmgr -cells cell_group -patch -ignore_alerts
This is new as of 184.108.40.206.
$ srvctl config database -d hawk
Database unique name: hawk
Database name: hawk
Oracle home: /u01/app/oracle/product/12.1.0/db_1
Oracle user: oracle
Password file: +DATA/hawk/orapwhawk
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA
Mount point paths:
OSDBA group: dba
Database instances: hawk1,hawk2
Configured nodes: hawk01,hawk02
To connect to PostgreSQL, the following parameters are required:
1. Host or Host Address
2. Port
3. Database Name
4. Username
As mentioned in my earlier post, PostgreSQL has “psql”, much like sqlplus in Oracle.
The default port number for PostgreSQL is 5432.
To connect to any PostgreSQL database, you can use:
i) psql -h hostname -p port -d dbname -U username
bash-3.2$ psql -h localhost (more...)
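The same four parameters can also be packed into a single libpq connection URI, which psql accepts in place of the individual flags (the host, database, and user names below are placeholders):

```shell
# Hypothetical connection parameters, not taken from the post above.
PGHOST=localhost
PGPORT=5432
PGDATABASE=mydb
PGUSER=postgres

# Assemble a libpq-style connection URI from the four parameters.
CONN_URI="postgresql://${PGUSER}@${PGHOST}:${PGPORT}/${PGDATABASE}"
echo "$CONN_URI"

# psql accepts the URI form as well:
#   psql "$CONN_URI"
```

This prints `postgresql://postgres@localhost:5432/mydb`, and `psql` will parse the host, port, database, and user back out of it.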