How to find the cause of a kernel panic on Solaris

root@dm1dbsp02b:/var/crash# mdb 9
cannot open compressed dump; decompress using savecore -f vmdump.9
root@dm1dbsp02b:/var/crash# savecore -f vmdump.9
savecore: System dump time: Mon Jul 27 02:48:06 2015
savecore: saving system crash dump in /var/crash/{unix,vmcore}.9
Constructing namelist /var/crash/unix.9
Constructing corefile /var/crash/vmcore.9
root@dm1dbsp02b:/var/crash# mdb 9
Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc apix scsi_vhci zfs mr_sas 
sd ip hook neti arp usba kssl stmf stmf_sbd random sockfs md fctl lofs idm crypto nfs sata
fcp cpc fcip smbsrv logindmux ptm nsmb ufs sppp ipc ]
> ::panicinfo
 cpu 29
 thread ffffc1c0ae7ab0a0
 message forced crash dump initiated at user request
 rdi fffffffffbc46e58
 rsi fffffffc846d9e00
 rdx ffffc1c0ae7ab0a0
 rcx fffffffc846d9e40
 r8 fffffffffc0dad30
 r9 ffffc1005b960000
 rax fffffffc846d9d50
 rbx 0
 rbp fffffffc846d9e30
 r10 2b46000
 r10 2b46000
 r11 2b46000
 r12 1
 r13 5
 r14 0
 r15 0
 fsbase ffff80ffbf792a40
 gsbase ffffc1c068083540
 ds 4b
 es 4b
 fs 0
 gs 0
 trapno 0
 err 0
 rip fffffffffb861d10
 cs 30
 rflags 246
 rsp fffffffc846d9d48
 ss 38
 gdt_hi 0
 gdt_lo d00001ef
 idt_hi 0
 idt_lo 40000fff
 ldt 0
 task 70
 cr0 80050033
 cr2 ffff80ffb23f5670
 cr3 1fcba40000
 cr4 406f8
> ::cpuinfo
 ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC
 0 fffffffffc0598b0 1f 0 0 59 no no t-30115 ffffc1c0ae8ac800 gipcd.bin
 1 ffffc1c066c5c000 1f 0 0 -1 no no t-30115 fffffffc80605c20 (idle)
 2 ffffc1c067e2c540 1f 0 0 -1 no no t-30115 fffffffc805ffc20 (idle)
 3 ffffc1c067e2bac0 1f 0 0 -1 no no t-30116 fffffffc80aafc20 (idle)
 4 ffffc1c067e1a500 1f 0 0 -1 no no t-30117 fffffffc80b94c20 (idle)
 5 ffffc1c067e89000 1f 0 0 -1 no no t-30115 fffffffc80bffc20 (idle)
 6 ffffc1c067ea2580 1f 0 0 -1 no no t-32244 fffffffc80c6ac20 (idle)
 7 ffffc1c067eb4000 1f 0 0 -1 no no t-30115 fffffffc80cedc20 (idle)
 8 ffffc1c067ed0580 1f 0 0 -1 no no t-30116 fffffffc80d58c20 (idle)
 9 ffffc1c067edba80 1f 0 0 -1 no no t-30116 fffffffc80dc9c20 (idle)
 10 ffffc1c067ef2040 1f 0 0 60 no no t-30115 fffffffc84a1bc20 sched
 11 ffffc1c067f3c540 1f 0 0 -1 no no t-30116 fffffffc80ee3c20 (idle)
 12 ffffc1c067f5bb00 1f 0 0 -1 no no t-30117 fffffffc80f75c20 (idle)
 13 ffffc1c067f6d000 1f 0 0 -1 no no t-30120 fffffffc80fe0c20 (idle)
 14 ffffc1c067f8b580 1f 0 0 -1 no no t-30117 fffffffc8104bc20 (idle)
 15 ffffc1c067f98a80 1f 0 0 -1 no no t-30115 fffffffc810b6c20 (idle)
 16 ffffc1c067fad040 1f 0 0 -1 no no t-30115 fffffffc81121c20 (idle)
 17 ffffc1c067fb2080 1f 0 0 -1 no no t-30116 fffffffc8118cc20 (idle)
 18 ffffc1c067fc4000 1f 0 0 -1 no no t-30116 fffffffc811f7c20 (idle)
 19 ffffc1c067fdd040 1f 0 0 -1 no no t-30116 fffffffc8127ac20 (idle)
 20 ffffc1c067ff0080 1f 0 0 -1 no no t-30115 fffffffc812e5c20 (idle)
 21 ffffc1c068005000 1f 0 0 -1 no no t-30116 fffffffc81350c20 (idle)
 22 ffffc1c06801b040 1f 0 0 -1 no no t-30118 fffffffc813bbc20 (idle)
 23 ffffc1c068056500 1f 0 0 -1 no no t-30120 fffffffc814a3c20 (idle)
 24 ffffc1c068074a80 1f 0 0 -1 no no t-30119 fffffffc8150ec20 (idle)
 25 ffffc1c068069500 1f 0 0 -1 no no t-30116 fffffffc81579c20 (idle)
 26 ffffc1c0680b9540 1f 0 0 -1 no no t-30116 fffffffc815e4c20 (idle)
 27 ffffc1c0680cd000 1f 1 0 -1 no no t-30116 fffffffc8164fc20 (idle)
 28 ffffc1c0680d3040 1f 0 0 -1 no no t-30116 fffffffc816bac20 (idle)
 29 fffffffffc0631b0 1b 0 0 110 no no t-30180 ffffc1c0ae7ab0a0 cssdagent
 30 ffffc1c06812dac0 1f 0 0 -1 no no t-30115 fffffffc81790c20 (idle)
 31 ffffc1c06806cac0 1f 0 0 -1 no no t-30118 fffffffc817fbc20 (idle)
>

Taken from: http://www.cuddletech.com/blog/pivot/entry.php?id=965

Problem booting from LVM on a degraded MD RAID 1

This is bug https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1351528

Workaround:
- add break=premount to the grub kernel line entry
- for continued visibility of the text boot output, also remove quiet and splash, and possibly set gfxmode=640x480

Now at the initramfs prompt:
- mdadm --detail /dev/md0 should indicate a state of "clean, degraded"; the array is started, so this part is OK
- lvm lvs output attributes are -wi-d---- (instead of the expected -wi-a----)
- per the lvs manpage this means the device (mapper) tables are missing

FIX: simply run lvm vgchange -ay and exit the initramfs. This will lead to a booting system.


The bug is caused by /dev/mapper not being populated when the RAID is in a degraded state.
The workaround is:

  * Add new file /usr/share/initramfs-tools/scripts/init-premount/10hack-raid-udev

#!/bin/sh
sleep 5
udevadm trigger --action=add
exit 0

chmod a+x /usr/share/initramfs-tools/scripts/init-premount/10hack-raid-udev

* update-initramfs -u


				

alter session directly in a query, valid only until that query finishes

Since Oracle 11g there is the OPT_PARAM SQL hint, which can be used to hint a single SQL statement. It works like alter session, except that it stays in effect only until the query finishes.

 

/*+ opt_param(<parameter_name> [,] <parameter_value>) */

where parameter_name is the name of a parameter and parameter_value is its value.

For example:

 

select /*+ opt_param('hash_join_enabled','false') */ 
dept_no,
emp_name,
empno 
from 
emp e, dept d 
where e.ename=d.dname;


Or, if we want to use several at once:

 

/*+ OPT_PARAM('_always_semi_join' 'off')
 OPT_PARAM('_b_tree_bitmap_plans' 'false')
 OPT_PARAM('query_rewrite_enabled' 'false')
 OPT_PARAM('_new_initial_join_orders' 'false')
 OPT_PARAM('optimizer_dynamic_sampling' 1)
 OPT_PARAM('optimizer_index_cost_adj' 1) */

How to find out a Mac firmware password

sudo nvram security-password

security-password %fa%cb%d9%d9%dd%c5%d8%ce

In the password string, count the number of percent symbols, which are separators for the hex codes that represent a character of your password, where two hex code characters together represent one ASCII text character. Since the Calculator can only handle words up to 8 characters (16 hex characters), if there are more than 8 symbols, then you will have to split the password up and convert in sections.

Therefore, copy the security password output from the Terminal to a text editor and delete the percent symbols in it, followed by splitting the password string at every 16th character. After this, perform the following steps on each 16-character section:

Open the Calculator and set it to Programmer mode in the View menu or by pressing Command-3.
Copy one 16-character section of your password and paste it into the calculator. You should see its binary equivalent shown below the yellow-green display, and also see its ASCII-text representation at the bottom-left of the display (you may have to click the “ASCII” button to reveal this).
Starting with the first bit in the binary output (the one furthest from the blue zero at the right), reverse every other bit by clicking its corresponding 1 or 0. For example, if you see "1010 0101" then change it to "0000 1111."

Each ASCII character of the password is a group of eight bits (a "byte"). Each of the two hex values that represents one of these characters is a group of four bits (a "nibble"), giving 16 possible combinations for a nibble. Hexadecimal numbering goes from 0 through 9 and then continues with A through F, giving 16 possible values to represent the combinations of a nibble.
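Flipping "every other bit" as described above is equivalent to XOR-ing each byte with 0xAA (binary 1010 1010). Assuming that is the whole obfuscation scheme (it matches the sample output above), the conversion can be scripted instead of clicking through the Calculator; the function name here is ours:

```python
# Decode an old-style Mac firmware password as printed by
# `sudo nvram security-password`. Each %XX token is one obfuscated byte;
# "flipping every other bit" is a XOR with 0xAA.
# Assumption: XOR-with-0xAA is the entire obfuscation scheme.
def decode_firmware_password(nvram_value: str) -> str:
    hex_bytes = nvram_value.strip().strip("%").split("%")
    return "".join(chr(int(h, 16) ^ 0xAA) for h in hex_bytes)

# The sample string from the Terminal output above:
print(decode_firmware_password("%fa%cb%d9%d9%dd%c5%d8%ce"))  # -> Password
```

Passwords longer than 8 characters need no special splitting here; the script handles any number of %XX tokens.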



http://reviews.cnet.com/8301-13727_7-57521667-263/use-the-calculator-to-reveal-a-macs-firmware-password/

Enabling and disabling options directly on the Oracle binaries

Enable
------
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk part_on ioracle

Disable
-------
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk part_off ioracle

 

The available options are:

Product/Component              Enable Switch  Disable Switch

Automated Storage Management   asm_on         asm_off
Oracle Data Mining             dm_on          dm_off
Database Vault                 dv_on          dv_off
Oracle OLAP                    olap_on        olap_off
Oracle Label Security          lbac_on        lbac_off
Oracle Partitioning            part_on        part_off
Real Application Cluster       rac_on         rac_off
Real Application Testing       rat_on         rat_off

 

Likewise, since Oracle 11.2 it is possible to use

chopt <enable|disable> <option>

 

 

Extracting encryption keys using sound

Here, we describe a new acoustic cryptanalysis key extraction attack, applicable to GnuPG’s current implementation of RSA. The attack can extract full 4096-bit RSA decryption keys from laptop computers (of various models), within an hour, using the sound generated by the computer during the decryption of some chosen ciphertexts. We experimentally demonstrate that such attacks can be carried out, using either a plain mobile phone placed next to the computer, or a more sensitive microphone placed 4 meters away.

 

http://www.cs.tau.ac.il/~tromer/acoustic/

Yep, this is a proper tablespace script...

column dummy noprint
column pct_used format 999.9 heading "%|Used"
column name format a19 heading "Tablespace Name"
column Kbytes format 999,999,999 heading "KBytes"
column used format 999,999,999 heading "Used"
column free format 999,999,999 heading "Free"
column largest format 999,999,999 heading "Largest"
column max_size format 999,999,999 heading "MaxPoss|Kbytes"
column pct_max_used format 999.9 heading "%|Max|Used"
break on report
compute sum of kbytes on report
compute sum of free on report
compute sum of used on report
 
select (select decode(extent_management,'LOCAL','*',' ')
 from dba_tablespaces where tablespace_name = b.tablespace_name) ||
nvl(b.tablespace_name,
 nvl(a.tablespace_name,'UNKNOWN')) name,
 kbytes_alloc kbytes,
 kbytes_alloc-nvl(kbytes_free,0) used,
 nvl(kbytes_free,0) free,
 ((kbytes_alloc-nvl(kbytes_free,0))/
 kbytes_alloc)*100 pct_used,
 nvl(largest,0) largest,
 nvl(kbytes_max,kbytes_alloc) Max_Size,
 decode( kbytes_max, 0, 0, (kbytes_alloc/kbytes_max)*100) pct_max_used
from ( select sum(bytes)/1024 Kbytes_free,
 max(bytes)/1024 largest,
 tablespace_name
 from sys.dba_free_space
 group by tablespace_name ) a,
 ( select sum(bytes)/1024 Kbytes_alloc,
 sum(maxbytes)/1024 Kbytes_max,
 tablespace_name
 from sys.dba_data_files
 group by tablespace_name
 union all
 select sum(bytes)/1024 Kbytes_alloc,
 sum(maxbytes)/1024 Kbytes_max,
 tablespace_name
 from sys.dba_temp_files
 group by tablespace_name )b
where a.tablespace_name (+) = b.tablespace_name
order by 1
/




Exadata in a nutshell

HealthCheck
HealthCheck scripts run from /usr/oracle/healthcheck. The healthcheck binaries can be copied from frame to frame by tar'ing all the files up and scp-ing them to the new location.

Change all cell/DB node passwords at once
PASSWORD=<value>
dcli -l root -g ~/all_group "echo ${PASSWORD} | passwd --stdin root"

Status of the cluster
/usr/oracle/grid/product/ora11gR2/bin/crsctl stat res -t

Hardware sensor status / detailed sensor info
ipmitool sdr | grep -v ok
ipmitool sensor

Gather all diagnostics information
/opt/oracle.SupportTools/onecommand/diagget.sh

Hardware profile
/opt/oracle.SupportTools/CheckHWnFWProfile -S > tmp/CheckHWnFWProfile_dm10db01.cbp.dhs.gov.txt

Infiniband/RDS status
ibstatus
rds-info -n

Status of network connections
ip n s
sar -n DEV 1 3
ip -s link show

Confirm storage cell status
dcli -l root -g ~/cell_group 'cellcli -e list cell'

Additional Reading (MOS Notes)
- 888828.1 – Database Machine and Exadata Storage Server 11g Release 2 (11.2) Supported Versions
- 1078889.1 – Exadata calibrate reports substandard IOPS on SAS drives
- 1093890.1 – Steps To Shutdown/Startup The Exadata & RDBMS Services and Cell/Compute Nodes
- 757552.1 – Oracle Exadata Best Practices
- 1120955.1 – Exadata V2: How To Startup or Shutdown An Exadata Or Compute Node Server Thru ILOM?
- 735323.1 – Exadata Storage Server Diagnostic Collection Guide
- 1053498.1 – Network Diagnostics information for Oracle Database Machine Environments
- 1072676.1 – Exadata General FAQ
- 1070954.1 – Oracle Exadata Database Machine exachk or HealthCheck
- 1071221.1 – Oracle Sun Database Machine Backup and Recovery Best Practices
- 359395.1 – Remote Diagnostic Agent (RDA) 4 – RAC Cluster Guide
- Yum repo – http://www.oracle.com/technetwork/topics/linux/yum-repository-setup-085606.html
- 1317159.1 – Changing IP addresses on Exadata Database Machine
- 361468.1 – HugePages on Oracle Linux 64-bit
- Best practices: http://www.oracle.com/technetwork/database/features/availability/exadata-maa-best-practices-155385.html

 

DB NODES

Automatic Diagnostic Repository (ADR) report
adrci> show homes
adrci> set homepath <diag path for the instance that generated the incident>
adrci> ips create package incident <incident number>
adrci> ips generate package 1 in /tmp

Reset BMC
dcli -l root -g ~/dbs_group ipmitool bmc reset cold

Display disk status
dcli -l root -g ~/cell_group 'cellcli -e list griddisk attributes name,size,status,asmmodestatus'
dcli -l root -g ~/cell_group 'cellcli -e list celldisk'

Display sensor status
ipmitool sensor

Temperature
dcli -l root -g /root/all_group ipmitool sensor get T_AMB | grep -i Reading
dcli -l root -g /root/cell_group 'cellcli -e list cell detail' | grep -i Reading

Restart OS Watcher
/opt/oracle.oswatcher/osw/stopOSW.sh
/opt/oracle.cellos/vldrun -script oswatcher

 

INFINIBAND

Show IB ports and status on the IB switch
listlinkup

IB/RDS status
ibstatus
rds-info -n

Status of network connections
ip n s
sar -n DEV 1 3
ip -s link show

 

STORAGE CELLS

Capture all alerts from cell history based on a time stamp
dcli -l root -g ~/cell_group cellcli -e "list alerthistory where begintime \> \'2011-05-25T00:00:00-05:00\'"

Check/change the default disk timeout for cells (cell failure)
The default value for the repair timer is 3.6 hours; the repair_timer column is in seconds.
SQL> SELECT name, repair_timer FROM v$asm_disk;
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'DISK_REPAIR_TIME'='8.5H';

Get cell disk info
cellcli -e list griddisk attributes name,size,offset,status

Help with cellcli
cellcli -e help alter cell

Reset BMC
dcli -l root -g ~/cell_group cellcli -e "alter cell restart bmc"

Restart cell services
cellcli -e 'alter cell restart services all'

Flash cache disk resurrection procedure
Disable alerts & monitoring, cell shutdown, init 6, cell startup, then run the command below until all disks are "active online yes":
cellcli -e list griddisk attributes name,status,asmmodestatus,asmdeactivationoutcome

Flash cache status/creation
dcli -l root -g ~/cell_group cellcli -e 'list flashcache'
cellcli -e 'drop celldisk all flashdisk force'
cellcli -e 'create celldisk all flashdisk'
cellcli -e 'create flashcache all'

List total I/O error counts (can indicate a pending proactive/predictive failure)
dcli -l root -g ~/cell_group cellcli -e "list griddisk where errorCount > 0 detail"

Exadata related….

Background Processes in the Exadata Cell Environment on database server:
The background processes for the database and Oracle ASM instance for an Exadata
Cell environment are the same as other environments, except for the following background process:

– diskmon Process – The diskmon process is a fundamental component of Exadata Cell, and is responsible for implementing I/O fencing.

– XDMG Process (Exadata Automation Manager)
Its primary task is to watch for inaccessible disks and cells, and to detect when the disks and cells become accessible.

– XDWK Process (Exadata Automation Worker)
The XDWK process begins when asynchronous actions, such as ONLINE, DROP or ADD for an Oracle ASM disk are requested by the XDMG process.
The XDWK process will stop after 5 minutes of inactivity.

Output:
> ps -ef | egrep "diskmon|xdmg|xdwk"
oracle 4684 4206 0 06:42 pts/1 00:00:00 egrep diskmon|xdmg|xdwk
oracle 10321 1 0 2010 ? 00:38:15 /u01/app/11.2.0/grid/bin/diskmon.bin -d -f
oracle 10858 1 0 2010 ? 00:00:18 asm_xdmg_+ASM1

As a departure from ASM storage technology that uses a process architecture borrowed from database instances,
the storage servers have a brand new set of processes to manage disk I/O. They are:

– RS, the restart service. Performing a similar role to SMON, RS monitors other processes, and automatically restarts them if they fail unexpectedly.
RS also handles planned restarts in conjunction with software updates.
The main cellrssrm process spawns several helper processes, including cellrsbmt, cellrsbkm, cellrsomt, and cellrsssmt.

– MS, the management service. MS is the back-end process that processes configuration and monitoring commands. It communicates with cellcli, described in the next section.
MS is written in Java, unlike the other background processes which are distributed in binary form and are likely written in C.

– CELLSRV, the cell service. CELLSRV handles the actual I/O processing of the storage server.
It is not uncommon to see heavy usage from CELLSRV process threads during periods of heavy load.
Among other things, CELLSRV provides:
. Communication with database nodes using the iDB/RDS protocols over the InfiniBand network
. Disk I/O with the underlying cell disks
. Offload of SQL processing from database nodes
. I/O resource management, prioritizing I/O requests based on a defined policy

– I/O Resource Manager (IORM). Enables storage grid by prioritizing I/Os to ensure predictable performance

Cell node Management Overview:
DBAs log in as OS user "celladmin" to manage cell nodes.
Each cell node internally runs an ASM instance to manage the cell node disks. This means you can't see the ASM pmon process on the cell node.

Cell admin tools: cellcli and dcli.
Cell monitoring tools: OSWatcher, ORION (an I/O performance benchmarking tool) and ADRCI.

Cell Nodes Logs and Traces:
$ADR_BASE/diag/asm/cell/`hostname`/trace/alert.log
$ADR_BASE/diag/asm/cell/`hostname`/trace/ms-odl.*
$ADR_BASE/diag/asm/cell/`hostname`/trace/svtrc__0.trc -- ps -ef | grep "cellsrv 100"
$ADR_BASE/diag/asm/cell/`hostname`/incident/*

/var/log/messages*, dmesg
/var/log/sa/*
/var/log/cellos/*

cellcli -e list alerthistory

$OSSCONF/cellinit.ora — #CELL Initialization Parameters
$OSSCONF/cell_disk_config.xml
$OSSCONF/griddisk.owners.dat
$OSSCONF/cell_bootstrap.ora

/opt/oracle/cell/cellsrv/deploy/log/cellcli.lst*

$OSSCONF/alerts.xml
$OSSCONF/metrics/*
oswatcher data

df -h -> check whether the /opt/oracle file system is full; /opt/oracle is only 2 GB in size on a cell node!

Where:
$OSSCONF is: /opt/oracle/cell11.2.1.3.1_LINUX.X64_100818.1/cellsrv/deploy/config
$ADR_BASE is: /opt/oracle/cell11.2.1.3.1_LINUX.X64_100818.1/log

Cell Check and shutdown/startup commands:
Note: For full list of commands use: cellcli -e help

cellcli -e alter cell shutdown services all
cellcli -e alter cell startup services all
cellcli -e alter cell shutdown services cellsrv
cellcli -e alter cell restart services cellsrv
cellcli -e list lun detail
cellcli -e list griddisk detail
cellcli -e list celldisk detail
cellcli -e list physicaldisk detail
cellcli -e list flashcache detail
cellcli -e list physicaldisk attributes name, diskType, luns, status
cellcli -e list physicaldisk where disktype=harddisk attributes physicalfirmware
cellcli -e list lun attributes name, diskType, isSystemLun, status

imagehistory (root/sudo)
imageinfo (root/sudo)
service celld status (root/sudo)
lsscsi | grep MARVELL

Smart scan layers:
Smart scan involves multiple layers of code
KDS/KTR/KCBL – data layers in rdbms
KCFIS – smart scan layer in rdbms
Predicate Disk – smart scan layer in cellsrv
Storage index – IO avoidance optimization in cellsrv
Flash IO – IO layer in cellsrv to fetch data from flash cache
Block IO – IO layer in cellsrv to fetch data from hard-disks
FPLIB – filtering library in cellsrv

Is it a smart scan issue?
Set cell_offload_processing=false (default true).
If the problem no longer occurs, it is a smart scan issue.

Is it an FPLIB issue?
Set _kcfis_cell_passthru_enabled=true (default false).
If the problem no longer occurs, it is an FPLIB issue.

Is it a storage index issue?
Set _kcfis_storageidx_disabled=true (default false).
If the problem still occurs, it is not a storage index issue.

Is it a flash cache issue?
For 11.2.0.2, set _kcfis_keep_in_cellfc_enabled=false (default true) to avoid the flash cache.
For 11.2.0.1, set _kcfis_control1=1 (default 0).
If the problem still occurs, it is not a flash cache problem.
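The four elimination tests above boil down to a simple mapping from parameter change to implicated layer. A tiny sketch of that logic (the diagnose() helper is purely illustrative, not an Oracle tool):

```python
# Map each diagnostic parameter change (quoted from the notes above) to
# the layer it disables. If the problem disappears after applying a
# change, that layer is implicated; if it persists, the layer is cleared.
LAYERS = {
    "cell_offload_processing=false":       "smart scan",
    "_kcfis_cell_passthru_enabled=true":   "FPLIB",
    "_kcfis_storageidx_disabled=true":     "storage index",
    "_kcfis_keep_in_cellfc_enabled=false": "flash cache",
}

def diagnose(problem_persists):
    """problem_persists: dict mapping parameter change -> bool
    (True = the problem still occurs with that change applied).
    Returns the list of implicated layers."""
    return [layer for change, layer in LAYERS.items()
            if not problem_persists[change]]
```

Flip one parameter at a time, re-run the workload, and record whether the problem still occurs; the layers whose test made the problem disappear are the suspects.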

Cell-related database views:
select * from sys.GV_$CELL_STATE;
select * from sys.GV_$CELL;
select * from sys.GV_$CELL_THREAD_HISTORY;
select * from sys.GV_$CELL_REQUEST_TOTALS;
select * from sys.GV_$CELL_CONFIG;

Bloom filter in Exadata:
The concept of bloom filtering was introduced in Oracle 10g.
When two tables are joined via a hash join, the first table (typically the smaller table) is scanned and the rows that satisfy the ‘where’ clause predicates (for that table) are used to create a hash table.
During the hash table creation a bit vector or bloom filter is also created based on the join column.
The bit vector is then sent as an additional predicate to the second table scan.
After the ‘where’ clause predicates have been applied to the second table scan, the resulting rows will have their join column hashed and it will be compared to values in the bit vector.
If a match is found in the bit vector that row will be sent to the hash join. If no match is found then the row will be disregarded.
On Exadata the bloom filter or bit vector is passed as an additional predicate so it will be overloaded to the storage cells making bloom filtering very efficient.

How to Identify a Bloom Filter in an Execution plan:
You can identify a bloom filter in a plan when you see :BF0000 in the Name column of the execution plan.

To disable the feature, the initialization parameter _bloom_pruning_enabled must be set to FALSE.
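The mechanism described above can be sketched in a few lines of Python. This is a toy illustration with a single hash function and a 64-bit vector, not Oracle's implementation:

```python
# Build side: scan the small table, create the hash table for the join
# and a bit vector (bloom filter) over the join column.
def build_filter(small_rows, key, nbits=64):
    bits = 0
    hash_table = {}
    for row in small_rows:
        bits |= 1 << (hash(row[key]) % nbits)   # set bit for this join key
        hash_table.setdefault(row[key], []).append(row)
    return bits, hash_table

# Probe side: the bit vector is applied as an extra predicate on the
# second table scan. Rows whose bit is unset cannot possibly join and
# are discarded early; rows that pass may still be false positives and
# are verified exactly by the hash join itself.
def probe(big_rows, key, bits, nbits=64):
    return [r for r in big_rows if (bits >> (hash(r[key]) % nbits)) & 1]
```

On Exadata the same bit vector is shipped to the storage cells as an additional predicate, so the early discard happens before rows ever reach the database node.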

12.10.2013, Vysoka

JaT- 151