<!--{{{-->
<link rel='alternate' type='application/rss+xml' title='RSS' href='index.xml' />
<!--}}}-->
Background: #fff
Foreground: #000
PrimaryPale: #8cf
PrimaryLight: #18f
PrimaryMid: #04b
PrimaryDark: #014
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
/*{{{*/
body {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}

a {color:[[ColorPalette::PrimaryMid]];}
a:hover {background-color:[[ColorPalette::PrimaryMid]]; color:[[ColorPalette::Background]];}
a img {border:0;}

h1,h2,h3,h4,h5,h6 {color:[[ColorPalette::SecondaryDark]]; background:transparent;}
h1 {border-bottom:2px solid [[ColorPalette::TertiaryLight]];}
h2,h3 {border-bottom:1px solid [[ColorPalette::TertiaryLight]];}

.button {color:[[ColorPalette::PrimaryDark]]; border:1px solid [[ColorPalette::Background]];}
.button:hover {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::SecondaryLight]]; border-color:[[ColorPalette::SecondaryMid]];}
.button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::SecondaryDark]];}

.header {background:[[ColorPalette::PrimaryMid]];}
.headerShadow {color:[[ColorPalette::Foreground]];}
.headerShadow a {font-weight:normal; color:[[ColorPalette::Foreground]];}
.headerForeground {color:[[ColorPalette::Background]];}
.headerForeground a {font-weight:normal; color:[[ColorPalette::PrimaryPale]];}

.tabSelected{color:[[ColorPalette::PrimaryDark]];
	background:[[ColorPalette::TertiaryPale]];
	border-left:1px solid [[ColorPalette::TertiaryLight]];
	border-top:1px solid [[ColorPalette::TertiaryLight]];
	border-right:1px solid [[ColorPalette::TertiaryLight]];
}
.tabUnselected {color:[[ColorPalette::Background]]; background:[[ColorPalette::TertiaryMid]];}
.tabContents {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::TertiaryPale]]; border:1px solid [[ColorPalette::TertiaryLight]];}
.tabContents .button {border:0;}

#sidebar {}
#sidebarOptions input {border:1px solid [[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel {background:[[ColorPalette::PrimaryPale]];}
#sidebarOptions .sliderPanel a {border:none;color:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:hover {color:[[ColorPalette::Background]]; background:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:active {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::Background]];}

.wizard {background:[[ColorPalette::PrimaryPale]]; border:1px solid [[ColorPalette::PrimaryMid]];}
.wizard h1 {color:[[ColorPalette::PrimaryDark]]; border:none;}
.wizard h2 {color:[[ColorPalette::Foreground]]; border:none;}
.wizardStep {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];
	border:1px solid [[ColorPalette::PrimaryMid]];}
.wizardStep.wizardStepDone {background:[[ColorPalette::TertiaryLight]];}
.wizardFooter {background:[[ColorPalette::PrimaryPale]];}
.wizardFooter .status {background:[[ColorPalette::PrimaryDark]]; color:[[ColorPalette::Background]];}
.wizard .button {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryLight]]; border: 1px solid;
	border-color:[[ColorPalette::SecondaryPale]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryPale]];}
.wizard .button:hover {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Background]];}
.wizard .button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::Foreground]]; border: 1px solid;
	border-color:[[ColorPalette::PrimaryDark]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryDark]];}

.wizard .notChanged {background:transparent;}
.wizard .changedLocally {background:#80ff80;}
.wizard .changedServer {background:#8080ff;}
.wizard .changedBoth {background:#ff8080;}
.wizard .notFound {background:#ffff80;}
.wizard .putToServer {background:#ff80ff;}
.wizard .gotFromServer {background:#80ffff;}

#messageArea {border:1px solid [[ColorPalette::SecondaryMid]]; background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]];}
#messageArea .button {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::SecondaryPale]]; border:none;}

.popupTiddler {background:[[ColorPalette::TertiaryPale]]; border:2px solid [[ColorPalette::TertiaryMid]];}

.popup {background:[[ColorPalette::TertiaryPale]]; color:[[ColorPalette::TertiaryDark]]; border-left:1px solid [[ColorPalette::TertiaryMid]]; border-top:1px solid [[ColorPalette::TertiaryMid]]; border-right:2px solid [[ColorPalette::TertiaryDark]]; border-bottom:2px solid [[ColorPalette::TertiaryDark]];}
.popup hr {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::PrimaryDark]]; border-bottom:1px;}
.popup li.disabled {color:[[ColorPalette::TertiaryMid]];}
.popup li a, .popup li a:visited {color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:active {background:[[ColorPalette::SecondaryPale]]; color:[[ColorPalette::Foreground]]; border: none;}
.popupHighlight {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
.listBreak div {border-bottom:1px solid [[ColorPalette::TertiaryDark]];}

.tiddler .defaultCommand {font-weight:bold;}

.shadow .title {color:[[ColorPalette::TertiaryDark]];}

.title {color:[[ColorPalette::SecondaryDark]];}
.subtitle {color:[[ColorPalette::TertiaryDark]];}

.toolbar {color:[[ColorPalette::PrimaryMid]];}
.toolbar a {color:[[ColorPalette::TertiaryLight]];}
.selected .toolbar a {color:[[ColorPalette::TertiaryMid]];}
.selected .toolbar a:hover {color:[[ColorPalette::Foreground]];}

.tagging, .tagged {border:1px solid [[ColorPalette::TertiaryPale]]; background-color:[[ColorPalette::TertiaryPale]];}
.selected .tagging, .selected .tagged {background-color:[[ColorPalette::TertiaryLight]]; border:1px solid [[ColorPalette::TertiaryMid]];}
.tagging .listTitle, .tagged .listTitle {color:[[ColorPalette::PrimaryDark]];}
.tagging .button, .tagged .button {border:none;}

.footer {color:[[ColorPalette::TertiaryLight]];}
.selected .footer {color:[[ColorPalette::TertiaryMid]];}

.sparkline {background:[[ColorPalette::PrimaryPale]]; border:0;}
.sparktick {background:[[ColorPalette::PrimaryDark]];}

.error, .errorButton {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Error]];}
.warning {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryPale]];}
.lowlight {background:[[ColorPalette::TertiaryLight]];}

.zoomer {background:none; color:[[ColorPalette::TertiaryMid]]; border:3px solid [[ColorPalette::TertiaryMid]];}

.imageLink, #displayArea .imageLink {background:transparent;}

.annotation {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border:2px solid [[ColorPalette::SecondaryMid]];}

.viewer .listTitle {list-style-type:none; margin-left:-2em;}
.viewer .button {border:1px solid [[ColorPalette::SecondaryMid]];}
.viewer blockquote {border-left:3px solid [[ColorPalette::TertiaryDark]];}

.viewer table, table.twtable {border:2px solid [[ColorPalette::TertiaryDark]];}
.viewer th, .viewer thead td, .twtable th, .twtable thead td {background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::Background]];}
.viewer td, .viewer tr, .twtable td, .twtable tr {border:1px solid [[ColorPalette::TertiaryDark]];}

.viewer pre {border:1px solid [[ColorPalette::SecondaryLight]]; background:[[ColorPalette::SecondaryPale]];}
.viewer code {color:[[ColorPalette::SecondaryDark]];}
.viewer hr {border:0; border-top:dashed 1px [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::TertiaryDark]];}

.highlight, .marked {background:[[ColorPalette::SecondaryLight]];}

.editor input {border:1px solid [[ColorPalette::PrimaryMid]];}
.editor textarea {border:1px solid [[ColorPalette::PrimaryMid]]; width:100%;}
.editorFooter {color:[[ColorPalette::TertiaryMid]];}

#backstageArea {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::TertiaryMid]];}
#backstageArea a {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstageArea a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; }
#backstageArea a.backstageSelTab {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
#backstageButton a {background:none; color:[[ColorPalette::Background]]; border:none;}
#backstageButton a:hover {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstagePanel {background:[[ColorPalette::Background]]; border-color: [[ColorPalette::Background]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]];}
.backstagePanelFooter .button {border:none; color:[[ColorPalette::Background]];}
.backstagePanelFooter .button:hover {color:[[ColorPalette::Foreground]];}
#backstageCloak {background:[[ColorPalette::Foreground]]; opacity:0.6; filter:'alpha(opacity:60)';}
/*}}}*/
/*{{{*/
* html .tiddler {height:1%;}

body {font-size:.75em; font-family:arial,helvetica; margin:0; padding:0;}

h1,h2,h3,h4,h5,h6 {font-weight:bold; text-decoration:none;}
h1,h2,h3 {padding-bottom:1px; margin-top:1.2em;margin-bottom:0.3em;}
h4,h5,h6 {margin-top:1em;}
h1 {font-size:1.35em;}
h2 {font-size:1.25em;}
h3 {font-size:1.1em;}
h4 {font-size:1em;}
h5 {font-size:.9em;}

hr {height:1px;}

a {text-decoration:none;}

dt {font-weight:bold;}

ol {list-style-type:decimal;}
ol ol {list-style-type:lower-alpha;}
ol ol ol {list-style-type:lower-roman;}
ol ol ol ol {list-style-type:decimal;}
ol ol ol ol ol {list-style-type:lower-alpha;}
ol ol ol ol ol ol {list-style-type:lower-roman;}
ol ol ol ol ol ol ol {list-style-type:decimal;}

.txtOptionInput {width:11em;}

#contentWrapper .chkOptionInput {border:0;}

.externalLink {text-decoration:underline;}

.indent {margin-left:3em;}
.outdent {margin-left:3em; text-indent:-3em;}
code.escaped {white-space:nowrap;}

.tiddlyLinkExisting {font-weight:bold;}
.tiddlyLinkNonExisting {font-style:italic;}

/* the 'a' is required for IE, otherwise it renders the whole tiddler in bold */
a.tiddlyLinkNonExisting.shadow {font-weight:bold;}

#mainMenu .tiddlyLinkExisting,
	#mainMenu .tiddlyLinkNonExisting,
	#sidebarTabs .tiddlyLinkNonExisting {font-weight:normal; font-style:normal;}
#sidebarTabs .tiddlyLinkExisting {font-weight:bold; font-style:normal;}

.header {position:relative;}
.header a:hover {background:transparent;}
.headerShadow {position:relative; padding:4.5em 0em 1em 1em; left:-1px; top:-1px;}
.headerForeground {position:absolute; padding:4.5em 0em 1em 1em; left:0px; top:0px;}

.siteTitle {font-size:3em;}
.siteSubtitle {font-size:1.2em;}

#mainMenu {position:absolute; left:0; width:10em; text-align:right; line-height:1.6em; padding:1.5em 0.5em 0.5em 0.5em; font-size:1.1em;}

#sidebar {position:absolute; right:3px; width:16em; font-size:.9em;}
#sidebarOptions {padding-top:0.3em;}
#sidebarOptions a {margin:0em 0.2em; padding:0.2em 0.3em; display:block;}
#sidebarOptions input {margin:0.4em 0.5em;}
#sidebarOptions .sliderPanel {margin-left:1em; padding:0.5em; font-size:.85em;}
#sidebarOptions .sliderPanel a {font-weight:bold; display:inline; padding:0;}
#sidebarOptions .sliderPanel input {margin:0 0 .3em 0;}
#sidebarTabs .tabContents {width:15em; overflow:hidden;}

.wizard {padding:0.1em 1em 0em 2em;}
.wizard h1 {font-size:2em; font-weight:bold; background:none; padding:0em 0em 0em 0em; margin:0.4em 0em 0.2em 0em;}
.wizard h2 {font-size:1.2em; font-weight:bold; background:none; padding:0em 0em 0em 0em; margin:0.4em 0em 0.2em 0em;}
.wizardStep {padding:1em 1em 1em 1em;}
.wizard .button {margin:0.5em 0em 0em 0em; font-size:1.2em;}
.wizardFooter {padding:0.8em 0.4em 0.8em 0em;}
.wizardFooter .status {padding:0em 0.4em 0em 0.4em; margin-left:1em;}
.wizard .button {padding:0.1em 0.2em 0.1em 0.2em;}

#messageArea {position:fixed; top:2em; right:0em; margin:0.5em; padding:0.5em; z-index:2000; _position:absolute;}
.messageToolbar {display:block; text-align:right; padding:0.2em 0.2em 0.2em 0.2em;}
#messageArea a {text-decoration:underline;}

.tiddlerPopupButton {padding:0.2em 0.2em 0.2em 0.2em;}
.popupTiddler {position: absolute; z-index:300; padding:1em 1em 1em 1em; margin:0;}

.popup {position:absolute; z-index:300; font-size:.9em; padding:0; list-style:none; margin:0;}
.popup .popupMessage {padding:0.4em;}
.popup hr {display:block; height:1px; width:auto; padding:0; margin:0.2em 0em;}
.popup li.disabled {padding:0.4em;}
.popup li a {display:block; padding:0.4em; font-weight:normal; cursor:pointer;}
.listBreak {font-size:1px; line-height:1px;}
.listBreak div {margin:2px 0;}

.tabset {padding:1em 0em 0em 0.5em;}
.tab {margin:0em 0em 0em 0.25em; padding:2px;}
.tabContents {padding:0.5em;}
.tabContents ul, .tabContents ol {margin:0; padding:0;}
.txtMainTab .tabContents li {list-style:none;}
.tabContents li.listLink { margin-left:.75em;}

#contentWrapper {display:block;}
#splashScreen {display:none;}

#displayArea {margin:1em 17em 0em 14em;}

.toolbar {text-align:right; font-size:.9em;}

.tiddler {padding:1em 1em 0em 1em;}

.missing .viewer,.missing .title {font-style:italic;}

.title {font-size:1.6em; font-weight:bold;}

.missing .subtitle {display:none;}
.subtitle {font-size:1.1em;}

.tiddler .button {padding:0.2em 0.4em;}

.tagging {margin:0.5em 0.5em 0.5em 0; float:left; display:none;}
.isTag .tagging {display:block;}
.tagged {margin:0.5em; float:right;}
.tagging, .tagged {font-size:0.9em; padding:0.25em;}
.tagging ul, .tagged ul {list-style:none; margin:0.25em; padding:0;}
.tagClear {clear:both;}

.footer {font-size:.9em;}
.footer li {display:inline;}

.annotation {padding:0.5em; margin:0.5em;}

* html .viewer pre {width:99%; padding:0 0 1em 0;}
.viewer {line-height:1.4em; padding-top:0.5em;}
.viewer .button {margin:0em 0.25em; padding:0em 0.25em;}
.viewer blockquote {line-height:1.5em; padding-left:0.8em;margin-left:2.5em;}
.viewer ul, .viewer ol {margin-left:0.5em; padding-left:1.5em;}

.viewer table, table.twtable {border-collapse:collapse; margin:0.8em 1.0em;}
.viewer th, .viewer td, .viewer tr,.viewer caption,.twtable th, .twtable td, .twtable tr,.twtable caption {padding:3px;}
table.listView {font-size:0.85em; margin:0.8em 1.0em;}
table.listView th, table.listView td, table.listView tr {padding:0px 3px 0px 3px;}

.viewer pre {padding:0.5em; margin-left:0.5em; font-size:1.2em; line-height:1.4em; overflow:auto;}
.viewer code {font-size:1.2em; line-height:1.4em;}

.editor {font-size:1.1em;}
.editor input, .editor textarea {display:block; width:100%; font:inherit;}
.editorFooter {padding:0.25em 0em; font-size:.9em;}
.editorFooter .button {padding-top:0px; padding-bottom:0px;}

.fieldsetFix {border:0; padding:0; margin:1px 0px 1px 0px;}

.sparkline {line-height:1em;}
.sparktick {outline:0;}

.zoomer {font-size:1.1em; position:absolute; overflow:hidden;}
.zoomer div {padding:1em;}

* html #backstage {width:99%;}
* html #backstageArea {width:99%;}
#backstageArea {display:none; position:relative; overflow: hidden; z-index:150; padding:0.3em 0.5em 0.3em 0.5em;}
#backstageToolbar {position:relative;}
#backstageArea a {font-weight:bold; margin-left:0.5em; padding:0.3em 0.5em 0.3em 0.5em;}
#backstageButton {display:none; position:absolute; z-index:175; top:0em; right:0em;}
#backstageButton a {padding:0.1em 0.4em 0.1em 0.4em; margin:0.1em 0.1em 0.1em 0.1em;}
#backstage {position:relative; width:100%; z-index:50;}
#backstagePanel {display:none; z-index:100; position:absolute; width:90%; margin:0em 3em 0em 3em; padding:1em 1em 1em 1em;}
.backstagePanelFooter {padding-top:0.2em; float:right;}
.backstagePanelFooter a {padding:0.2em 0.4em 0.2em 0.4em;}
#backstageCloak {display:none; z-index:20; position:absolute; width:100%; height:100px;}

.whenBackstage {display:none;}
.backstageVisible .whenBackstage {display:block;}
/*}}}*/
/***
StyleSheet for use when a translation requires any css style changes.
This StyleSheet can be used directly by languages such as Chinese, Japanese and Korean which need larger font sizes.
***/
/*{{{*/
body {font-size:0.8em;}
#sidebarOptions {font-size:1.05em;}
#sidebarOptions a {font-style:normal;}
#sidebarOptions .sliderPanel {font-size:0.95em;}
.subtitle {font-size:0.8em;}
.viewer table.listView {font-size:0.95em;}
/*}}}*/
/*{{{*/
@media print {
#mainMenu, #sidebar, #messageArea, .toolbar, #backstageButton, #backstageArea {display: none ! important;}
#displayArea {margin: 1em 1em 0em 1em;}
/* Fixes a feature in Firefox 1.5.0.2 where print preview displays the noscript content */
noscript {display:none;}
}
/*}}}*/
<!--{{{-->
<div class='header' macro='gradient vert [[ColorPalette::PrimaryLight]] [[ColorPalette::PrimaryMid]]'>
<div class='headerShadow'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
<div class='headerForeground'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
</div>
<div id='mainMenu' refresh='content' tiddler='MainMenu'></div>
<div id='sidebar'>
<div id='sidebarOptions' refresh='content' tiddler='SideBarOptions'></div>
<div id='sidebarTabs' refresh='content' force='true' tiddler='SideBarTabs'></div>
</div>
<div id='displayArea'>
<div id='messageArea'></div>
<div id='tiddlerDisplay'></div>
</div>
<!--}}}-->
<!--{{{-->
<div class='toolbar' macro='toolbar [[ToolbarCommands::ViewToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='subtitle'><span macro='view modifier link'></span>, <span macro='view modified date'></span> (<span macro='message views.wikified.createdPrompt'></span> <span macro='view created date'></span>)</div>
<div class='tagging' macro='tagging'></div>
<div class='tagged' macro='tags'></div>
<div class='viewer' macro='view text wikified'></div>
<div class='tagClear'></div>
<!--}}}-->
<!--{{{-->
<div class='toolbar' macro='toolbar [[ToolbarCommands::EditToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='editor' macro='edit title'></div>
<div macro='annotations'></div>
<div class='editor' macro='edit text'></div>
<div class='editor' macro='edit tags'></div><div class='editorFooter'><span macro='message views.editor.tagPrompt'></span><span macro='tagChooser excludeLists'></span></div>
<!--}}}-->
To get started with this blank TiddlyWiki, you'll need to modify the following tiddlers:
* SiteTitle & SiteSubtitle: The title and subtitle of the site, as shown above (after saving, they will also appear in the browser title bar)
* MainMenu: The menu (usually on the left)
* DefaultTiddlers: Contains the names of the tiddlers that you want to appear when the TiddlyWiki is opened
You'll also need to enter your username for signing your edits: <<option txtUserName>>
These InterfaceOptions for customising TiddlyWiki are saved in your browser

Your username for signing your edits. Write it as a WikiWord (eg JoeBloggs)

<<option txtUserName>>
<<option chkSaveBackups>> SaveBackups
<<option chkAutoSave>> AutoSave
<<option chkRegExpSearch>> RegExpSearch
<<option chkCaseSensitiveSearch>> CaseSensitiveSearch
<<option chkAnimate>> EnableAnimations

----
Also see [[AdvancedOptions]]
<<importTiddlers>>
http://www.evernote.com/shard/s48/sh/780e7534-cee4-4278-87f7-449e81d2acfc/23124af97669edefaca70c4f6e719426
http://www.evernote.com/shard/s48/sh/c5653239-108b-4515-a19e-d69fa3ad92c1/2476cb2c13785e02a391d05a7daf7507
http://jacobian.org/writing/web-scale/
http://thebuild.com/blog/2010/10/27/things-i-do-not-understand-web-scale/
<<showtoc>>

In the following notes I've discussed "scale out vs scale up" and "speed vs bandwidth":
** T3 CPUs thread:core ratio http://bit.ly/2g5bdPA
** Mainframe (MIPS) to Sparc sizing http://bit.ly/2g5dwSB
** DB2 Mainframe to Oracle Sizing http://bit.ly/2g53JMm
You can follow the URLs for more detail, but here are the essential parts of the discussions:

! The "scale out (x86) vs scale up (T5)" decision boils down to two factors
!! Factor 1) bandwidth vs speed

<<<
	[img[ http://i.imgur.com/oP70UP8.png ]]

	The T5 offers more aggregate bandwidth capacity but slower CPU speed than the x86 CPUs of Exadata; conversely, with Exadata you need more compute nodes to match the bandwidth capacity of the T5.
	If you compare the LIO/sec performance of
	SPARC T5, 8 threads pinned to a core, with a SPECint_rate of 29
	vs
	Xeon E5 (X4-2 in this example), 2 threads pinned to a core, with a SPECint_rate of 39
	you'll see the following curve: the 2 threads on Xeon give you a higher LIO/sec value than the first 2 threads of SPARC, simply because of the speed difference.
	
	[img[ http://i.imgur.com/MjMn10y.png ]]
	> Y-axis: **Logical IOs/sec** X-axis: **CPU thread**
	
	But when you saturate the entire platform, the SPARC, having many "slower" threads, can in effect consolidate more LIO workload, at the price of per-LIO speed. In other words, an OLTP SQL executing in 0.2 ms per execute on Xeon will be much slower on SPARC. That's why I prefer to scale out with Xeon machines (with faster CPUs) rather than scale up with SPARC. But it also depends on whether the application can take advantage of RAC (a RAC-aware app).
	
	[img[ http://i.imgur.com/wDoXUjp.png ]]
	> Y-axis: **Logical IOs/sec** X-axis: **CPU thread**
	
	But the X4-8 is pretty promising if they need a scale-up kind of solution: it is faster than the T5, M5, and M6, with a per-core speed of 38 vs 30 (see the comparison here: http://bit.ly/2fOzM06).

	And the X4-8 (https://twitter.com/karlarao/status/435882623500423168) is pretty much the same speed as the compute nodes of the X4-2, so you also get the linear scaling they claim for the T5, M5, and M6, but with a much faster CPU.
<<<
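The trade-off above can be sketched numerically. Below is a toy model, not a benchmark: per-thread throughput is proxied by the per-core SPECint_rate figures quoted above (29 for T5, 39 for the X4-2 Xeon), and the diminishing-returns factor for each extra thread on a core is an assumption for illustration.

```python
# Toy model of the "bandwidth vs speed" trade-off discussed above.
# Assumptions: per-thread throughput scales with per-core SPECint_rate,
# and each additional thread on a core contributes diminishing returns.

def core_lio_per_sec(base_speed, threads, decay=0.55):
    """Aggregate LIO/sec proxy for one core as threads saturate.

    base_speed -- per-core SPECint_rate (proxy for single-thread speed)
    threads    -- number of hardware threads driven on the core
    decay      -- assumed marginal gain of each extra thread
    """
    total = 0.0
    gain = 1.0
    for _ in range(threads):
        total += base_speed * gain
        gain *= decay          # each extra thread contributes less
    return total

# SPARC T5: 8 threads/core, per-core SPECint_rate ~29
# Xeon E5:  2 threads/core, per-core SPECint_rate ~39
print(f"T5 core, first 2 threads    : {core_lio_per_sec(29, 2):.1f}")
print(f"Xeon core, both threads busy: {core_lio_per_sec(39, 2):.1f}")
print(f"T5 core, all 8 threads busy : {core_lio_per_sec(29, 8):.1f}")
```

With these assumed numbers the Xeon core is clearly ahead at 2 threads, while the fully saturated T5 core edges past it in aggregate, which is the shape of the curves in the images above.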

!! Factor 2) the compatibility of the apps with either T5 (scale-up) or x86 (scale-out)
<<<
	If the app is a very old-school, mainframe-specific program, then they might want to stick with the zEC12.
<<<


! I like this reply by Alex Fatkulin about Xeon vs SPARC
https://twitter.com/alexfatkulin
<<<
I think Xeon is a much more versatile platform when it comes to the types of workloads it can handle.

A very strong point about Xeon is that it is a lot more forgiving about the type of workload you want to run. It can be a race car or it can be a heavy-duty truck. A SPARC is a bulldozer; that's the only thing it is and the only thing it can be. You might find yourself in the wrong vehicle in the middle of the highway ;-)

SPARC is like a bulldozer. It can move with a constant speed no matter what's in front of it but it doesn't change the fact that it moves slow. 

Some years ago I was involved with a company doing aircraft maintenance software which bought a bunch of SPARC servers, thinking that a lot of threads is cool (otherwise why would Sun call these CoolThreads?). The problem was that they had a lot of very complex logic that was sensitive to single-threaded performance and wasn't designed to run in parallel. The end result was that the SPARC servers could not complete a maintenance cycle within the window. For those unfamiliar with the subject, the only thing worth mentioning is that it kept planes grounded. So it was a case where the software couldn't take advantage of the high throughput offered by the SPARC platform, while SPARC couldn't offer high single-threaded performance. Guess what those SPARC servers got replaced with. Granted, this was all before T5, but the fact of the matter is that T5 continues to lag significantly behind the latest Intel generations in per-core IPC.
<<<



! Discussions on thread:core ratio with Frits Hoogland
https://twitter.com/fritshoogland

AFAIK, these are the heavily threaded ones (thread:core ratio of 8:1). When I calculate the CPU time per second from AWR, any CPU time above the number of threads roughly means queue time, right?
<<<
<- Yes, and that should show up as CPU Wait on the Oracle side of things.
<<<
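The back-of-the-envelope in this exchange (any AWR "CPU per second" above the thread count is roughly queue time) can be written out as a tiny helper; the workload figure used here is hypothetical:

```python
# Sketch of the AWR arithmetic discussed above: on a host with N CPU
# threads, any "DB CPU per second" above N is roughly queue time
# (threads runnable but waiting for CPU). The numbers are hypothetical.

def cpu_queue_time(cpu_time_per_sec, n_threads):
    """Split AWR 'CPU per second' into true run time and queue time."""
    run = min(cpu_time_per_sec, n_threads)
    queue = max(0.0, cpu_time_per_sec - n_threads)
    return run, queue

# e.g. a 64-thread host showing 80 seconds of DB CPU per second
run, queue = cpu_queue_time(80.0, 64)
print(f"running on CPU: {run}/s, queued (CPU Wait): {queue}/s")
```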

With the calculated CPU time (that is, with queue time subtracted), I get the number of active threads. However, in my book only one of the eight threads can truly execute at a time; the others are visible as running on CPU, but in reality they are waiting for their turn to execute on the core.
<<<
<- This will manifest as diminishing returns on workload-level LIO/sec performance (think of the LIO load profile section in AWR) as you saturate more threads; imagine a curve like this: https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=diminishing%20returns%20curve
<<<

This should mean that when CPU threads are busier than the core can handle, you see increased time on CPU which is in reality only waiting time: a CPU thread stalling, waiting to run.
<<<
<- You'll first see the diminishing returns on LIO/sec performance, and then the CPU wait afterwards, once the curve reaches its plateau and things get worse and worse.
<<<

Can you confirm this is how that works? Or otherwise please correct me where I am making a mistake. 
<<<
<- From my tests investigating thread performance, I've noticed there are two types of LIO workload: the short ones and the long ones. Kevin showed this at Oaktable World before, and even at RMOUG way back; I think he calls them big and small. The idea is that the short ones tend to share pretty well with other threads on, say, the same core: the core gets busy and the threads time-slice so quickly that the net effect is pretty good LIO/sec performance. This also assumes all threads are being utilized evenly, and diminishing returns still apply as you saturate more and more threads. The long ones, on the other hand, tend to hog their time slice, which results in overall lower LIO/sec performance. This behavior is better explained in the wiki entry [[cpu centric benchmark comparisons]] or here: http://bit.ly/1xOJrEu

And the two types can mix in a given workload.
<<<

This also means that moving to a (recent) Xeon will boost performance dramatically, because CPU time will decrease significantly given the much lower thread:core ratio (2:1).
So it's not only the SPECint ratio difference that makes the system faster, but also the removal of the excess stalling on CPU.
<<<

<- Yes, it's possible that it's a contribution of all those factors. But I think the boost is mainly driven by the speed (newer CPU); you could also say that each thread is much faster on Xeon than on SPARC.
If you compare the LIO/sec performance of

SPARC T5, 8 threads pinned to a core, with a SPECint_rate of 29
vs
Xeon E5, 2 threads pinned to a core, with a SPECint_rate of 44

You'll see the following curve: the 2 threads on Xeon give you a higher LIO/sec value than the first 2 threads of SPARC, simply because of the speed difference.

[img[ http://i.imgur.com/MjMn10y.png ]]
Y-axis: **Logical IOs/sec** X-axis: **CPU thread** 

But when you saturate the entire platform, the SPARC, having many "slower" threads, can in effect consolidate more LIO workload, at the price of per-LIO speed. In other words, an OLTP SQL executing in 0.2 ms per execute on Xeon will be much slower on SPARC. That's why I prefer to scale out with Xeon machines (with faster CPUs) rather than scale up with SPARC. But it also depends on whether the application can take advantage of RAC (a RAC-aware app).

[img[ http://i.imgur.com/wDoXUjp.png ]]
Y-axis: **Logical IOs/sec** X-axis: **CPU thread**

But the X4-8 is pretty promising if they need a scale-up kind of solution: it is faster than the T5, M5, and M6 (see wiki [[M6, M5, T5]]), with a per-core speed of 38 vs 30.

X4-8: https://twitter.com/karlarao/status/435882623500423168
The X4-8 is pretty much the same speed as the compute nodes of the X4-2, so you also get the linear scaling they claim for the T5, M5, and M6, but with a much faster CPU.

 

SPECint_rate2006 reference

Column header for the raw listings below:
Result/# Cores, # Cores, # Chips, # Cores Per Chip, # Threads Per Core, Baseline, Result, Hardware Vendor, System, Published
$ less spec.txt | sort -rnk1 | grep -i sparc | grep -i oracle
30.5625, 16, 1, 16, 8, 441, 489, Oracle Corporation, SPARC T5-1B, Oct-13
@@29.2969, 128, 8, 16, 8, 3490, 3750, Oracle Corporation, SPARC T5-8, Apr-13@@
29.1875, 16, 1, 16, 8, 436, 467, Oracle Corporation, SPARC T5-1B, Apr-13
18.6, 2, 1, 2, 2, 33.7, 37.2, Oracle Corporation, SPARC Enterprise M3000, Apr-11
14.05, 4, 1, 4, 2, 50.3, 56.2, Oracle Corporation, SPARC Enterprise M3000, Apr-11
13.7812, 64, 16, 4, 2, 806, 882, Oracle Corporation, SPARC Enterprise M8000, Dec-10
13.4375, 128, 32, 4, 2, 1570, 1720, Oracle Corporation, SPARC Enterprise M9000, Dec-10
12.3047, 256, 64, 4, 2, 2850, 3150, Oracle Corporation, SPARC Enterprise M9000, Dec-10
11.1875, 16, 4, 4, 2, 158, 179, Oracle Corporation, SPARC Enterprise M4000, Dec-10
11, 32, 8, 4, 2, 313, 352, Oracle Corporation, SPARC Enterprise M5000, Dec-10
@@10.4688, 32, 2, 16, 8, 309, 335, Oracle Corporation, SPARC T3-2, Feb-11
10.4062, 64, 4, 16, 8, 614, 666, Oracle Corporation, SPARC T3-4, Feb-11
10.375, 16, 1, 16, 8, 153, 166, Oracle Corporation, SPARC T3-1, Jan-11@@

X3-2 spec
$ cat spec.txt | grep -i intel | grep -i "E5-26" | grep -i sun | sort -rnk1
@@44.0625, 16, 2, 8, 2, 632, 705, Oracle Corporation, Sun Blade X6270 M3 (Intel Xeon E5-2690 2.9GHz)@@
44.0625, 16, 2, 8, 2, 632, 705, Oracle Corporation, Sun Blade X3-2B (Intel Xeon E5-2690 2.9GHz)
44.0625, 16, 2, 8, 2, 630, 705, Oracle Corporation, Sun Server X3-2L (Intel Xeon E5-2690 2.9GHz)
44.0625, 16, 2, 8, 2, 630, 705, Oracle Corporation, Sun Fire X4270 M3 (Intel Xeon E5-2690 2.9GHz)
43.875, 16, 2, 8, 2, 628, 702, Oracle Corporation, Sun Server X3-2 (Intel Xeon E5-2690 2.9GHz)
43.875, 16, 2, 8, 2, 628, 702, Oracle Corporation, Sun Fire X4170 M3 (Intel Xeon E5-2690 2.9GHz)
<<<
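The sort key in the listings above is simply the published Result divided by the core count. As a sanity check, here is that ratio recomputed for two of the highlighted rows (taken from the data above):

```python
# Recompute the "Result / # Cores" ratio used to sort the SPEC listings
# above, for two rows from the data: SPARC T5-8 and the Xeon E5-2690 blade.

rows = [
    # (result, cores, system)
    (3750, 128, "SPARC T5-8"),
    (705, 16, "Sun Blade X6270 M3 (Intel Xeon E5-2690)"),
]

for result, cores, system in rows:
    print(f"{system}: {result / cores:.4f} per core")
```

This reproduces the 29.2969 and 44.0625 figures in the first column of the listings.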









[img(70%,70%)[https://i.imgur.com/XsBOAey.jpg]]

[img(70%,70%)[https://i.imgur.com/xMCK0Ug.png]]


<<<
Introduction 06:20
Welcome! Thank you for learning the Data Warehouse concepts with me! 
06:20
–
Brief about the Data warehouse
21:13
Is Data Warehouse still relevant in the age of Big Data? 
04:25
Why do we need a Data Warehouse? 
05:26
What is a Data Warehouse? 
05:42
Characteristics of a Data Warehouse 
05:40
–
Business Intelligence
23:37
What is Business Intelligence? 
05:37
Business Intelligence -Extended Explanation 
03:34
Uses of Business Intelligence 
08:02
Tools used for (in) Business Intelligence 
06:24
–
Data Warehouse Architectures
32:12
Enterprise Architecture or Centralized Architecture 
04:46
Federated Architecture 
03:05
Multi-Tiered Architecture 
03:13
Components of a Data Warehouse 
03:57
Purpose of a Staging Area in Data Warehouse Architecture - Part 1 
04:49
Purpose of a Staging Area in Data Warehouse Architecture - Part 2 
03:41
Advantages of Traditional warehouse 
02:33
Limitations of Traditional Data Warehouses 
06:08
–
ODS - Operational Data Store
14:13
What is ODS? 
02:26
Define ODS 
07:40
Differences between ODS,DWH, OLTP, OLAP, DSS 
04:07
–
OLAP
28:15
OLAP Overview 
05:17
OLTP Vs OLAP - Part 1 
04:05
OLTP Vs OLAP - Part 2 
05:31
OLAP Architecture - MOLAP 
05:56
ROLAP 
03:35
HOLAP 
02:20
DOLAP 
01:31
–
Data Mart
13:52
What is a Data Mart? 
01:40
Fundamental Difference between DWH and DM 
00:40
Advantages of a Data Mart 
02:46
Characteristics of a Data Mart 
03:37
Disadvantages of a Data Mart 
03:01
Mistakes and Misconceptions of a Data Mart 
02:08
–
Metadata
19:30
Overview of Metadata 
01:50
Benefits of Metadata 
01:47
Types of Metadata
05:38
Projects on Metadata 
05:28
Best Practices for Metadata Setup 
01:36
Summary 
03:11
–
Data Modeling
05:53
What is Data Modeling? 
02:11
Data Modeling Techniques 
03:42
–
Entity Relational Data Model
35:46
ER - (Entity Relation) Data Model 
03:37
ER Data Model - What is Entity? 
02:01
ER Data Model - Types of Entities - Part 1 
03:57
ER Data Model - Types of Entities - Part 2 
01:49
ER Data Model - Attributes 
01:54
ER Data Model - Types of Attributes 
03:59
ER Data Model - Entity-Set and Keys 
02:42
ER Data Model - Identifier 
01:53
ER Data Model - Relationship 
01:15
ER Data Model - Notation 
02:34
ER Data Model - Logical Data Model 
01:30
ER Data Model - Moving from Logical Data Model to Physical Data Model
02:14
ER Data Model - Differences between CDM, LDM and PDM 
03:06
ER Data Model - Disadvantages 
03:15
–
Dimensional Model
01:24:32
What is Dimension Modelling? 
04:38
Benefits of Dimensional Modelling 
01:52
What is a Dimension? 
02:36
What is a Fact? 
02:00
Additive Facts 
01:45
Semi Additive Facts 
02:23
Non-Additive Facts 
01:26
Factless Facts 
02:26
What is a Surrogate key? 
03:45
Star Schema 
04:54
Snowflake Schema 
03:22
Galaxy Schema or Fact Constellation Schema 
02:25
Differences between Star Schema and Snowflake Schema 
04:55
Conformed Dimension 
06:17
Junk Dimension 
03:12
Degenerate Dimension 
03:36
Slowly Changing Dimensions - Intro and Example Creation 
05:35
Slowly Changing Dimensions - Type 1, 2 and 3 
12:14
Slowly Changing Dimensions - Summary 
03:05
Step by Step approach to set up the Dimensional Model using a retail case study
06:44
ER Model Vs Dimensional Model 
05:22
–
DWH Indexes
10:59
What is an Index? 
02:04
Bitmap Index 
03:46
B-Tree index 
01:49
Bitmap Index Vs B Tree Index 
03:20
–
Data Integration and ETL
13:20
What is Data Integration? 
06:49
What is ETL? 
03:49
Common Questions and Summary 
02:42
–
ETL Vs ELT
13:45
ETL - Explained 
06:03
ELT - Explained 
05:24
ETL Vs ELT
02:18
–
ETL - Extraction Transformation & Loading
12:48
Build Vs Buy 
05:10
ETL Tools for Data Warehouses 
01:56
Extraction Methods in Data Warehouses 
05:42
–
Typical Roles In DWH Project
44:18
Project Sponsor 
03:24
Project Manager 
01:46
Functional Analyst or Business Analyst 
02:53
SME - Subject Matter Expert 
04:17
DW BI Architect 
03:07
Data Modeler 
08:59
DWH Tech Admin 
01:20
ETL Developers 
01:56
BI OLAP Developers 
01:29
ETL Testers/QA Group 
01:58
DB UNIX Network Admins 
00:56
Data Architect, Data Warehouse Architect, BI Architect and Solution Architect 
09:57
Final Note about the Roles 
02:16
–
DW/BI/ETL Implementation Approach
39:48
Different phases in DW/BI/ETL Implementation Approach 
01:51
Knowledge Capture Sessions 
03:34
Requirements 
07:21
Architecture phases 
04:48
Data Model/Database 
01:35
ETL Phase 
02:43
Data Access Phase 
02:10
Data Access Types - Selection 
01:37
Data Access Types - Drilling Down 
00:58
Data Access Types - Exception Reporting 
00:36
Data Access Types - Calculations 
01:26
Data Access Types - Graphics and Visualization 
00:58
Data Access Types - Data Entry Options 
02:04
Data Access Types - Customization 
01:00
Data Access Types - Web-Based Reporting 
00:56
Data Access Types - Broadcasting 
01:04
Deploy 
01:42
Iterative Approach 
03:25
–
Retired Lectures
02:23
ETL Vs ELT 
02:23
–
Bonus Section
01:37
Links to other courses 
01:37
<<<
Good compilation of Oracle hints: http://www.hellodba.com/Download/OracleSQLHints.pdf


12c new SQL hints
http://www.hellodba.com/reader.php?ID=220&lang=EN

search for "hints"
http://www.hellodba.com/index.php?class=DOC&lang=EN


.

<<showtoc>> 


! Greg Wooledge wiki
http://mywiki.wooledge.org/FullBashGuide
http://mywiki.wooledge.org/BashGuide/Practices
https://mywiki.wooledge.org/BashPitfalls
http://mywiki.wooledge.org/BashWeaknesses
http://mywiki.wooledge.org/BashFAQ
http://www.tldp.org/LDP/abs/html/abs-guide.html	


! essential documentation 
http://www.tldp.org/LDP/abs/html/abs-guide.html
https://github.com/DingGuodong/bashstyle   <- some style guide
http://superuser.com/questions/414965/when-to-use-bash-and-when-to-use-perl-python-ruby/415134
https://www.shellcheck.net/
http://www.gnu.org/software/bash/manual/bash.html
https://wiki.bash-hackers.org/
https://www.in-ulm.de/~mascheck/
http://www.grymoire.com/Unix/Quote.html
http://www.shelldorado.com/


! video courses
https://www.pluralsight.com/courses/bash-shell-scripting
https://www.pluralsight.com/courses/red-hat-enterprise-linux-shell-scripting-fundamentals

https://www.safaribooksonline.com/library/view/bash-scripting-fundamentals/9780134541730/
https://www.safaribooksonline.com/library/view/advanced-bash-scripting/9780134586229/




! /usr/bin/env or /bin/env
<<<
it's better to use 
#!/usr/bin/env bash

In most cases, using /usr/bin/env bash will be better than /bin/bash;
If you are running in a multi-user environment and security is a big concern, forget about /usr/bin/env (or anything that uses the $PATH, actually);
If you need an extra argument to your interpreter and you care about portability, /usr/bin/env may also give you some headaches.
<<<
https://www.google.com/search?q=%2Fusr%2Fbin%2Fenv+or+%2Fbin%2Fenv&oq=usr%2Fbin+or+%2Fbin&aqs=chrome.4.69i57j69i58j0l4.11969j1j1&sourceid=chrome&ie=UTF-8
https://stackoverflow.com/questions/5549044/whats-the-difference-of-using-usr-bin-env-or-bin-env-in-shebang
https://unix.stackexchange.com/questions/29608/why-is-it-better-to-use-usr-bin-env-name-instead-of-path-to-name-as-my
https://www.brianstorti.com/rethinking-your-shebang/



! batch 
http://steve-jansen.github.io/guides/windows-batch-scripting/part-10-advanced-tricks.html
''batch file a-z'' http://ss64.com/nt/
''batch file categorized'' http://ss64.com/nt/commands.html

! loop
http://ss64.com/nt/for.html
http://ss64.com/nt/for_cmd.html
http://stackoverflow.com/questions/1355791/how-do-you-loop-in-a-windows-batch-file
http://stackoverflow.com/questions/1103994/how-to-run-multiple-bat-files-within-a-bat-file












.








<<showtoc>>


! info

!! getting started
!! sample code

! data types and variables

!! data type
!! specific data types/values
!! variable assignment/scope
!! comparison operators

! data structures

!! data containers/structures
!! vector
!! matrix
!! data frame or pandas
!! list
!! sets

! control structures

!! control workflow
!! if else
!! error handling
!! unit testing / TDD

! loops

!! loops workflow
!! for loop
!! while loop

! advanced concepts

!! functions
!! OOP

! other functions methods procedures
! language specific operations

!! data workflow
!! directory operations
!! package management
!! importing data
!! cleaning data
!! data manipulation
!! visualization

! scripting

!! scripting workflow
!! run a script
!! print multiple var
!! input data









! xxxxxxxxxxxxxxxxxxxxxxxx
! xxx Data Engineering
! xxxxxxxxxxxxxxxxxxxxxxxx



! workflow 
! installation  and upgrade
! commands
! performance and troubleshooting
!! sizing and capacity planning
!! benchmark
! high availability 
! security

! xxxxxxxxxxxxxxxxxxxxxxxx




.
! 2021

<<<
Kumaran's courses are the best out there to get you up to speed w/ design patterns and technology components for modern data architecture. Love the format, short, sweet, practical, and direct to the point.


https://www.linkedin.com/learning/architecting-big-data-applications-real-time-application-engineering/sm-analyze-the-problem
https://www.linkedin.com/learning/architecting-big-data-applications-batch-mode-application-engineering/welcome

https://www.linkedin.com/learning/stream-processing-design-patterns-with-kafka-streams/stream-processing-with-kafka
https://www.linkedin.com/learning/stream-processing-design-patterns-with-spark/streaming-with-spark

https://www.linkedin.com/learning/applied-ai-for-it-operations/artificial-intelligence-and-its-many-uses
https://www.linkedin.com/learning/applied-ai-for-human-resources/artificial-intelligence-and-human-resources


https://www.linkedin.com/in/kumaran-ponnambalam-961a344/?trk=lil_instructor
<<<


<<<
design and architecture

https://www.pluralsight.com/courses/google-dataflow-architecting-serverless-big-data-solutions
https://www.pluralsight.com/courses/google-cloud-platform-leveraging-architectural-design-patterns
https://www.pluralsight.com/courses/google-cloud-functions-architecting-event-driven-serverless-solutions
https://www.pluralsight.com/courses/google-dataproc-architecting-big-data-solutions
https://www.pluralsight.com/courses/google-machine-learning-apis-designing-implementing-solutions
https://www.pluralsight.com/courses/google-bigquery-architecting-data-warehousing-solutions
https://www.pluralsight.com/courses/google-cloud-automl-designing-implementing-solutions

https://www.linkedin.com/learning/search?keywords=apache%20beam
https://www.linkedin.com/learning/data-science-on-google-cloud-platform-building-data-pipelines/what-goes-into-a-data-pipeline <- good summary
https://www.linkedin.com/learning/google-cloud-platform-for-enterprise-essential-training/enterprise-ready-gcp

https://www.linkedin.com/learning/architecting-big-data-applications-batch-mode-application-engineering/dw-lay-out-the-architecture <- good 5 use cases
https://www.linkedin.com/learning/data-science-on-google-cloud-platform-architecting-solutions/architecting-data-science <- good 4 use cases
https://www.linkedin.com/learning/data-science-on-google-cloud-platform-designing-data-warehouses/why-data-warehouses-are-important
https://www.linkedin.com/learning/architecting-big-data-applications-real-time-application-engineering/sm-analyze-the-problem <- good 4 use cases
<<<





!! Distributed systems in one lesson 
https://learning.oreilly.com/videos/distributed-systems-in/9781491924914?autoplay=false



!! ML system
ML system design https://us.teamblind.com/s/5HGmH4Wd
Machine Learning Systems: Designs that scale https://learning.oreilly.com/library/view/machine-learning-systems/9781617293337/kindle_split_024.html



!! kafka 
https://www.youtube.com/results?search_query=kafka+system+design



!! messaging service 
https://www.datanami.com/2019/05/28/assessing-your-options-for-real-time-message-buses/
https://www.udemy.com/courses/search/?src=ukw&q=message+queueing+
https://www.udemy.com/rabbitmq-messaging-with-java-spring-boot-and-spring-mvc/
https://bytes.com/topic/python/answers/437385-queueing-python-ala-jms


!! instagram
http://highscalability.com/blog/2011/12/6/instagram-architecture-14-million-users-terabytes-of-photos.html


https://github.com/CodeThat/Algorithm-Implementations



https://www.algoexpert.io/purchase    coupon "devon"
https://www.algoexpert.io/questions/Nth%20Fibonacci

    
<<showtoc>>

! practice courses and resources 
<<<

https://www.udemy.com/aws-emr-and-spark-2-using-scala/learn/v4/t/lecture/9366830?start=0
https://www.udemy.com/python-and-spark-setup-development-environment/
https://www.udemy.com/linux-fundamentals-for-it-professionals/learn/v4/overview
https://www.udemy.com/fundamentals-of-programming-using-python-3/learn/v4/overview
https://www.udemy.com/python-for-data-science-and-machine-learning-bootcamp/learn/v4/t/lecture/5774370?start=0
https://www.udemy.com/data-science-and-machine-learning-bootcamp-with-r/learn/v4/content
https://www.udemy.com/machinelearning/learn/v4/t/lecture/5935024?start=0
https://www.udemy.com/data-science-and-machine-learning-with-python-hands-on/learn/v4/t/lecture/4020676?start=0
Installing TensorFlow and H2O in R https://learning.oreilly.com/learning-paths/learning-path-r/9781789340839/9781788838771-video1_4
Using the H2O Deep Learning Framework https://learning.oreilly.com/videos/learning-path-r/9781788298742/9781788298742-video1_29
Interpretable AI - Not just for regulators - Patrick Hall (H2O.ai | George Washington University), Sri Satish (H2O.ai) h2o.ai https://learning.oreilly.com/videos/strata-data-conference/9781491976326/9781491976326-video316338
https://weidongzhou.wordpress.com/tag/big-data/


Practical Machine Learning with H2O https://learning.oreilly.com/library/view/practical-machine-learning/9781491964590/
https://www.udemy.com/complete-deep-learning-in-r-with-keras-others/

<<<
<<showtoc>>



! info

!! getting started
!! sample code

! data types and variables

!! data type
!! specific data types/values
!! variable assignment/scope

!!! delete variables 
https://stackoverflow.com/questions/26545051/is-there-a-way-to-delete-created-variables-functions-etc-from-the-memory-of-th?rq=1
https://stackoverflow.com/questions/3543833/how-do-i-clear-all-variables-in-the-middle-of-a-python-script


!! comparison operators

! data structures


!! data containers/structures
!! R vector / python tuple or list
<<<
https://stackoverflow.com/questions/252703/difference-between-append-vs-extend-list-methods-in-python/28119966
<<<
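A minimal sketch of the append vs extend distinction covered in the link above:

```python
# append() adds its argument as a single element;
# extend() iterates over its argument and adds each item
a = [1, 2]
a.append([3, 4])   # a is now [1, 2, [3, 4]]

b = [1, 2]
b.extend([3, 4])   # b is now [1, 2, 3, 4]
```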

!! R matrix / python numpy
!! R data frame / python pandas


!!! save dataframe to parquet file 
https://stackoverflow.com/questions/41066582/python-save-pandas-data-frame-to-parquet-file
{{{
# shell: install the parquet engine first
pip install fastparquet

# python: write the dataframe out as a parquet file
df.to_parquet('myfile.parquet', engine='fastparquet', compression='UNCOMPRESSED')

# shell: load into BigQuery (note: this created the table columns as BYTES)
bq load --location=US --source_format=PARQUET tink.enc_parquet2 myfile.parquet
}}}





!! R list / python dictionary
<<<
https://stackoverflow.com/questions/1024847/add-new-keys-to-a-dictionary

https://stackoverflow.com/questions/1867861/dictionaries-how-to-keep-keys-values-in-same-order-as-declared
<<<
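Both linked questions boil down to this sketch: plain assignment adds keys, and modern dicts keep declaration order.

```python
d = {'a': 1}
d['b'] = 2        # plain assignment adds a new key
d.update(c=3)     # update() adds (or overwrites) several keys at once

# since Python 3.7 dicts officially preserve insertion order,
# which answers the "keys/values in same order as declared" question
keys = list(d)    # ['a', 'b', 'c']
```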


!! python sets
!! python list comprehension, nested list/dictionary



! control structures

!! control workflow
!! if else
<<<
https://stackoverflow.com/questions/2493404/complex-if-statement-in-python
<<<
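Two idioms that keep complex conditions readable, as a quick sketch:

```python
x = 7

# chained comparisons read like math and avoid "x > 0 and x < 10"
in_range = 0 < x < 10

# any()/all() keep long condition lists readable
conditions = [x > 0, x % 7 == 0, x < 10]
all_hold = all(conditions)
one_holds = any(conditions)
```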


!! error handling
!! unit testing / TDD

! loops

!! loops workflow
!! for loop
!! while loop

! advanced concepts

!! functions

!!! functional programming
Reactive Programming in Python https://learning.oreilly.com/videos/reactive-programming-in/9781786460332/9781786460332-video1_3?autoplay=false


!! OOP

!!! static vs class method 
https://stackoverflow.com/questions/136097/what-is-the-difference-between-staticmethod-and-classmethod
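The short version of that answer, as a sketch (class names here are made up for illustration):

```python
class Shape:
    kind = "generic"

    @classmethod
    def describe(cls):
        # receives the class (or subclass) it was called on
        return cls.kind

    @staticmethod
    def area(w, h):
        # receives no implicit first argument; just a namespaced function
        return w * h


class Square(Shape):
    kind = "square"
```

`Square.describe()` returns `"square"` because `cls` is the subclass; a staticmethod would have no way to see that.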
!!! what is pass 
https://stackoverflow.com/questions/13886168/how-to-use-the-pass-statement
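A tiny sketch of where pass fits:

```python
def todo():
    pass          # placeholder where a statement is syntactically required


class NotYetImplemented:
    pass          # an empty class body also needs one

result = todo()   # a function body that only passes returns None
```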
!!! iterating through instance object attributes 
https://www.saltycrane.com/blog/2008/09/how-iterate-over-instance-objects-data-attributes-python/
https://stackoverflow.com/questions/739882/iterating-over-object-instances-of-a-given-class-in-python
https://stackoverflow.com/questions/44196243/iterate-over-list-of-class-objects-pythonic-way
https://stackoverflow.com/questions/42581286/iterate-over-an-instance-objects-attributes-in-python
https://stackoverflow.com/questions/21598872/how-to-create-multiple-class-objects-with-a-loop-in-python
https://stackoverflow.com/questions/25150955/python-iterating-through-object-attributes
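The common answer across those threads is vars()/__dict__; a minimal sketch:

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y


p = Point(1, 2)
# vars(p) (equivalently p.__dict__) maps instance attribute names to values
attrs = vars(p)
for name, value in attrs.items():
    print(name, value)
```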

!! args kwargs 
<<<
https://stackoverflow.com/questions/8977594/in-python-what-determines-the-order-while-iterating-through-kwargs/41634018
https://stackoverflow.com/questions/26748097/using-an-ordereddict-in-kwargs

<<<
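The ordering question in those links is settled in modern Python; a sketch:

```python
def f(**kwargs):
    # since Python 3.6/3.7, **kwargs preserves the order
    # in which the keyword arguments were passed
    return list(kwargs)

order = f(b=1, a=2, c=3)   # ['b', 'a', 'c']
```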



! other functions methods procedures
! language specific operations

!! data workflow
!! directory operations
!! package management
!!! use-import-module-or-from-module-import
https://stackoverflow.com/questions/710551/use-import-module-or-from-module-import
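The trade-off in that question, sketched with a stdlib module:

```python
import json                # names stay qualified: json.dumps
from json import dumps     # binds one name directly into this namespace

s1 = json.dumps({"a": 1})
s2 = dumps({"a": 1})       # same function; shorter call, less explicit origin
```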

!! importing data
!! cleaning data
!! data manipulation
!! visualization

!!! python and highcharts
<<<
https://www.highcharts.com/blog/products/highmaps/226-get-your-data-ready-for-charts-with-python/
https://github.com/kyper-data/python-highcharts
Flask Web Development in Python - 6 - js Plugin - Highcharts example https://www.youtube.com/watch?v=9Ic79kOBj_M


<<<



! scripting

!! scripting workflow
!! run a script
!! print multiple var
!! input data

!! check if list exist 
<<<
https://stackoverflow.com/questions/11556234/how-to-check-if-a-list-exists-in-python
<<<
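The idiom from the linked answer: a name that was never assigned raises NameError, so probe with try/except.

```python
try:
    my_list
except NameError:
    my_list = []      # define it only if it didn't already exist

my_list.append(1)
```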


!! command line args parser 
<<<
Building cmd line using click https://www.youtube.com/watch?v=6OY1xFYJVxQ
https://medium.com/@collectiveacuity/argparse-vs-click-227f53f023dc
https://realpython.com/comparing-python-command-line-parsing-libraries-argparse-docopt-click/
https://stackoverflow.com/questions/3217673/why-use-argparse-rather-than-optparse
https://ttboj.wordpress.com/2010/02/03/getopt-vs-optparse-vs-argparse/
https://pymotw.com/2/optparse/
https://docs.python.org/2/howto/argparse.html
https://leancrew.com/all-this/2015/06/better-option-parsing-in-python-maybe/
https://www.quora.com/What-are-the-advantages-of-using-argparse-over-optparse-or-vice-versa
<<<

<<<
http://www.annasyme.com/docs/python_structure.html   <- GOOD
good basics https://medium.com/code-85/how-to-pass-command-line-values-to-a-python-script-1e3e7b244c89 <- GOOD
https://towardsdatascience.com/a-simple-guide-to-command-line-arguments-with-argparse-6824c30ab1c3
https://martin-thoma.com/how-to-parse-command-line-arguments-in-python/


logging https://gist.github.com/olooney/8155400
https://pymotw.com/2/argparse/
https://gist.github.com/BurkovBA/947ae7406a3b22b32c81904da9d9797e
https://zetcode.com/python/argparse/
https://gist.github.com/abalter/605773b34a68bb370bf84007ee55a130
https://github.com/nhoffman/argparse-bash
https://python.plainenglish.io/parse-args-in-bash-scripts-d50669be6a61
https://stackoverflow.com/questions/14340822/pass-bash-argument-to-python-script
{{{
#!/bin/sh
# forward all of this shell script's arguments to the python script
python script.py "$@"
}}}
https://stackoverflow.com/questions/4256107/running-bash-commands-in-python
{{{
import subprocess

bashCommand = "cwm --rdf test.rdf --ntriples > test.nt"
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
output, error = process.communicate()

# caveat: split() passes ">" and "test.nt" as literal arguments -- the shell
# redirection is NOT performed; use shell=True (or stdout=open(...)) for that
}}}
https://stackabuse.com/executing-shell-commands-with-python/
https://stackoverflow.com/questions/34836382/python-3-subprocessing-a-python-script-that-uses-argparse
https://medium.com/code-85/how-to-pass-command-line-values-to-a-python-script-1e3e7b244c89


<<<

{{{
Every option has some values like:

    dest: You will access the value of option with this variable
    help: This text gets displayed when someone uses --help.
    default: If the command line argument was not specified, it will get this default value.
    action: Actions tell optparse what to do when it encounters an option on the command line. action defaults to store. These actions are available:
        store: take the next argument (or the remainder of the current argument), ensure that it is of the correct type, and store it to your chosen destination dest.
        store_true: store True in dest if this flag was set.
        store_false: store False in dest if this flag was set.
        store_const: store a constant value
        append: append this option’s argument to a list
        count: increment a counter by one
        callback: call a specified function
    nargs: ArgumentParser objects usually associate a single command-line argument with a single action to be taken. The nargs keyword argument associates a different number of command-line arguments with a single action.
    required: Mark a command line argument as non-optional (required).
    choices: Some command-line arguments should be selected from a restricted set of values. These can be handled by passing a container object as the choices keyword argument to add_argument(). When the command line is parsed, argument values will be checked, and an error message will be displayed if the argument was not one of the acceptable values.
    type: Use this command, if the argument is of another type (e.g. int or float).

argparse automatically generates a help text. So if you call python myScript.py --help you will get something like this:

usage: ikjMultiplication.py [-h] [-i FILE]

ikjMatrix multiplication

optional arguments:
  -h, --help  show this help message and exit
  -i FILE     input file with two matrices

}}}
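The usage text quoted above maps back to a parser roughly like this (the script description and the -i FILE option are taken from that output; everything else is a sketch):

```python
import argparse

parser = argparse.ArgumentParser(description="ikjMatrix multiplication")
parser.add_argument("-i", dest="filename", metavar="FILE",
                    help="input file with two matrices")

# parse an explicit argv list instead of sys.argv, for demonstration
args = parser.parse_args(["-i", "matrices.txt"])
```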





! xx


! forecasting
!! time series 
timeseries techniques https://www.safaribooksonline.com/library/view/practical-data-analysis/9781783551668/ch07.html
http://www.johnwittenauer.net/a-simple-time-series-analysis-of-the-sp-500-index/
time series python statsmodels http://conference.scipy.org/scipy2011/slides/mckinney_time_series.pdf
Do not smooth times series, you hockey puck http://wmbriggs.com/post/195/
practical Data Analysis Cookbook https://github.com/drabastomek/practicalDataAnalysisCookbook

! underscore in python
watch the two videos below:
{{{
What's the meaning of underscores (_ & __) in Python variable names
Python Tutorial: if __name__ == '__main__' 
}}}
https://www.youtube.com/watch?v=ALZmCy2u0jQ
https://www.youtube.com/watch?v=sugvnHA7ElY
{{{
Difference between _, __ and __xx__ in Python
http://igorsobreira.com/2010/09/16/difference-between-one-underline-and-two-underlines-in-python.html
http://stackoverflow.com/questions/8689964/why-do-some-functions-have-underscores-before-and-after-the-function-name
http://programmers.stackexchange.com/questions/229804/usage-of-while-declaring-any-variables-or-class-member-in-python
}}}
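The three naming conventions from the videos and links above, in one sketch:

```python
class C:
    def __init__(self):
        self._internal = 1     # one underscore: "private" by convention only
        self.__mangled = 2     # two underscores: name-mangled to _C__mangled

c = C()
has_mangled = "_C__mangled" in vars(c)

# dunder names (__init__, __name__, ...) are reserved for the language
# protocol; the `if __name__ == '__main__':` guard relies on one of them
```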


! sqldf / pandasql 
http://blog.yhat.com/posts/pandasql-intro.html
pandasql: Make python speak SQL https://community.alteryx.com/t5/Data-Science-Blog/pandasql-Make-python-speak-SQL/ba-p/138435
https://statcompute.wordpress.com/2016/10/17/flavors-of-sql-on-pandas-dataframe/
https://www.r-bloggers.com/turning-data-into-awesome-with-sqldf-and-pandasql/
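pandasql works by pushing the DataFrame through SQLite; the same "speak SQL to your data" idea with only the standard library (no pandas required) looks roughly like:

```python
import sqlite3

rows = [("a", 1), ("b", 2), ("b", 3)]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE df (k TEXT, v INTEGER)")
con.executemany("INSERT INTO df VALUES (?, ?)", rows)

# aggregate with plain SQL, the way sqldf/pandasql would
result = con.execute(
    "SELECT k, SUM(v) FROM df GROUP BY k ORDER BY k"
).fetchall()
# result == [('a', 1), ('b', 5)]
```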


! gui ide 
https://www.yhat.com/products/rodeo



! for loops 
https://data36.com/python-for-loops-explained-data-science-basics-5/


! PYTHONPATH
https://stackoverflow.com/questions/19917492/how-to-use-pythonpath
<<<
You're confusing PATH and PYTHONPATH. You need to do this:

export PYTHONPATH=$PYTHONPATH:/home/randy/lib/python

PYTHONPATH is used by the python interpreter to determine which modules to load.

PATH is used by the shell to determine which executables to run.
<<<
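From inside the interpreter, PYTHONPATH entries show up in sys.path; a sketch (the directory is the one from the quoted answer):

```python
import sys

# PYTHONPATH entries are merged into sys.path, the list of directories
# the interpreter searches when resolving `import` statements
extra = "/home/randy/lib/python"
if extra not in sys.path:
    sys.path.insert(0, extra)

found = extra in sys.path
```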


! python compatibility 

!! pycon talk - start here
Brett Cannon - How to make your code Python 2/3 compatible - PyCon 2015 https://www.youtube.com/watch?v=KPzDX5TX5HE
https://www.youtube.com/results?search_query=python-modernize


!! performance between 2 and 3 
https://chairnerd.seatgeek.com/migrating-to-python-3/

!! coding differences between 2 and 3 
[[..python 2 to 3]]
https://wiki.python.org/moin/Python2orPython3

!! 2to3 - tool to automatically convert code 
https://docs.python.org/2/library/2to3.html
Python 2to3 - Convert your Python 2 to Python 3 automatically https://www.youtube.com/watch?v=8qxKYnAsNuU
Make Python 2 Programs Compatible with Python 3 Automatically https://www.youtube.com/watch?v=M6wkCIdfI8U
https://stackoverflow.com/questions/40020178/what-python-linter-can-i-use-to-spot-python-2-3-compatibility-issues

!! futurize and modernize 
{{{
# this will work in python 2
from __future__ import print_function
print('hello world')	
}}}
https://python-future.org/faq.html
https://www.youtube.com/results?search_query=python+futurize
python-future vs 2to3 https://www.google.com/search?q=python-future+vs+2to3&oq=python-future+vs+2to3&aqs=chrome..69i57.3384j0j4&sourceid=chrome&ie=UTF-8
Moving from Python 2 to Python 3 http://ptgmedia.pearsoncmg.com/imprint_downloads/informit/promotions/python/python2python3.pdf    <-- good stuff
Python How to use from __future__ import print_function https://www.youtube.com/watch?v=lLpp2cbUWX0  <-- good stuff
http://python-future.org/quickstart.html#to-convert-existing-python-2-code   <- futurize
https://www.youtube.com/results?search_query=future__+import
http://python3porting.com/noconv.html
https://www.reddit.com/r/Python/comments/45vok2/why_did_python_3_change_the_print_syntax/


!! six 
https://pypi.org/project/six/


! tricks 

!! count frequency of words
http://stackoverflow.com/questions/30202011/how-can-i-count-comma-separated-values-in-one-column-of-my-panda-table
https://www.google.com/search?q=R+word+count&oq=R+word+count&aqs=chrome..69i57j0l5.2673j0j1&sourceid=chrome&ie=UTF-8#q=r+count+frequency+of+numbers&*
http://stackoverflow.com/questions/8920145/count-the-number-of-words-in-a-string-in-r
http://r.789695.n4.nabble.com/How-to-count-the-number-of-occurence-td1661733.html
http://stackoverflow.com/questions/1923273/counting-the-number-of-elements-with-the-values-of-x-in-a-vector
https://www.quora.com/How-do-I-generate-frequency-counts-of-categorical-variables-eg-total-number-of-0s-and-total-number-of-1s-from-each-column-within-a-dataset-in-RStudio
http://stackoverflow.com/questions/1296646/how-to-sort-a-dataframe-by-columns
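In Python this whole cluster of questions collapses to collections.Counter; a sketch of counting comma-separated values as in the first link:

```python
from collections import Counter

words = "a,b,a,c,a,b".split(",")
freq = Counter(words)       # counts each distinct value

top = freq.most_common(1)   # [('a', 3)]
```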



! data structures / containers 

!! pickle 
https://stackoverflow.com/questions/11641493/how-to-cpickle-dump-and-load-separate-dictionaries-to-the-same-file
Serializing Data Using the pickle and cPickle Modules https://learning.oreilly.com/library/view/python-cookbook/0596001673/ch08s03.html
Reading a pickle file (PANDAS Python Data Frame) in R https://stackoverflow.com/questions/35121192/reading-a-pickle-file-pandas-python-data-frame-in-r
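Dumping and loading separate dictionaries to the same file, as in the first link, works by calling dump() repeatedly on one stream (an in-memory buffer stands in for a file here):

```python
import io
import pickle

d1, d2 = {"a": 1}, {"b": 2}

buf = io.BytesIO()           # stands in for open('file.pkl', 'wb')
pickle.dump(d1, buf)         # successive dump() calls append to the stream
pickle.dump(d2, buf)

buf.seek(0)
out1 = pickle.load(buf)      # load() reads the objects back one at a time
out2 = pickle.load(buf)
```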


! scheduler - celery 
https://www.youtube.com/results?search_query=python+scheduler+async+every+minute+background
https://www.udemy.com/using-python-with-oracle-db/learn/lecture/5330818#overview
https://stackoverflow.com/questions/22715086/scheduling-python-script-to-run-every-hour-accurately
https://stackoverflow.com/questions/2223157/how-to-execute-a-function-asynchronously-every-60-seconds-in-python
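Before reaching for celery, a recurring background task can be sketched with the standard library alone (interval shortened to 0.01s for the demo; a real every-minute job would use 60):

```python
import threading

results = []
done = threading.Event()

def tick():
    results.append("tick")
    if len(results) < 3:
        # re-arm the timer: each run schedules the next one
        threading.Timer(0.01, tick).start()
    else:
        done.set()

tick()
done.wait(timeout=5)   # block until the three runs complete (demo only)
```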






! learning materials 
https://linuxacademy.com/linux/training/learningpath/name/scripting-automation-for-sysadmins
https://acloud.guru/learn/automating-aws-with-python






<<showtoc>>

! Upgrading R
* first, fix the permissions of the R folder by making it "full control" http://stackoverflow.com/questions/5059692/unable-to-update-r-packages-in-default-library-on-windows-7
* download the new rstudio https://www.rstudio.com/products/rstudio/download/
* follow the steps mentioned here http://stackoverflow.com/questions/13656699/update-r-using-rstudio and here http://www.r-statistics.com/2013/03/updating-r-from-r-on-windows-using-the-installr-package/ basically you'll have to execute the following:
{{{
# installing/loading the package:
if (!require(installr)) {
  install.packages("installr")
  require(installr)  # load / install+load installr
}

# using the package:
updateR()  # starts the updating process of your R installation; it checks for
           # newer versions and, if one is available, guides you through the steps
}}}

! Clone R 
https://github.com/MangoTheCat/pkgsnap

! rstudio 
preview version https://www.rstudio.com/products/rstudio/download/preview/

! documentation 
{{{
> library(RDocumentation)
Do you want to automatically load RDocumentation when you start R? [y|n] y
Congratulations!
R will now use RDocumentation to display your help files.
If you're offline, R will just display your local documentation.
To avoid automatically loading the RDocumentation package, use disable_autoload().
If you don't want the ? and help functionality to show RDocumentation pages, use disable_override().

Attaching package: ‘RDocumentation’

The following objects are masked from ‘package:utils’:

    ?, help, help.search
}}}

! favorite packages
!! summary 

!!! visualization
* ggfortify (autoplot) - easy plotting of data; just execute autoplot()
<<<
http://www.sthda.com/english/wiki/ggfortify-extension-to-ggplot2-to-handle-some-popular-packages-r-software-and-data-visualization
http://rpubs.com/sinhrks/basics
http://rpubs.com/sinhrks/plot_lm
<<<

!!! time series 
* quantstart time series - https://www.quantstart.com/articles#time-series-analysis
* xts - convert to time series object
** as.xts()

!!! quant 

* quantmod http://www.quantmod.com/gallery/
* quantstrat and blotter
http://masterr.org/r/how-to-install-quantstrat/
http://www.r-bloggers.com/nuts-and-bolts-of-quantstrat-part-i/
http://www.programmingr.com/content/installing-quantstrat-r-forge-and-source/
using quantstrat to evaluate intraday trading strategies http://www.rinfinance.com/agenda/2013/workshop/Humme+Peterson.pdf
* highfrequency package 
https://cran.r-project.org/web/packages/highfrequency/highfrequency.pdf , http://feb.kuleuven.be/public/n09022/research.htm
http://highfrequency.herokuapp.com/
* quantlib http://quantlib.org/index.shtml

!!!! quant topics

* quant data
https://www.onetick.com/
interactivebrokers api http://www.r-bloggers.com/how-to-save-high-frequency-data-in-mongodb/ , http://www.r-bloggers.com/i-see-high-frequency-data/ 

* quant portals 
quantstart - learning materials (books, scripts) https://www.quantstart.com/faq
http://www.rfortraders.com/
http://www.quantlego.com/welcome/
https://www.quantstart.com/articles/Quantitative-Finance-Reading-List
http://datalab.lu/
http://carlofan.wix.com/data-science-chews

* quant books 
https://www.amazon.com/Quantitative-Trading-Understanding-Mathematical-Computational/dp/1137354070?ie=UTF8&camp=1789&creative=9325&creativeASIN=1137354070&linkCode=as2&linkId=KJAPF3TMVPQHWD4H&redirect=true&ref_=as_li_qf_sp_asin_il_tl&tag=boucom-20
https://www.quantstart.com/successful-algorithmic-trading-ebook
https://www.quantstart.com/advanced-algorithmic-trading-ebook
https://www.quantstart.com/cpp-for-quantitative-finance-ebook

* quant career 
https://www.quantstart.com/articles/Can-You-Still-Become-a-Quant-in-Your-Thirties
http://www.dlsu.edu.ph/academics/graduate-studies/cob/master-sci-fin-eng.asp

* quant strategies 
trend following strategy http://www.followingthetrend.com/2014/03/improving-the-free-trend-following-trading-rules/
connorsRSI http://www.qmatix.com/ConnorsRSI-Pullbacks-Guidebook.pdf

* quant options trade
http://www.businessinsider.com/the-story-of-the-first-ever-options-trade-in-recorded-history-2012-3

* quant portfolio optimization 
http://www.rinfinance.com/RinFinance2009/presentations/yollin_slides.pdf
http://zoonek.free.fr/blosxom/R/2012-06-01_Optimization.html

* quant time series databases
https://kx.com/benchmarks.php
http://www.paradigm4.com/

* PerformanceAnalytics-package 
http://braverock.com/brian/R/PerformanceAnalytics/html/PerformanceAnalytics-package.html

!!! TDD, testing	
http://www.agiledata.org/essays/tdd.html
http://r-pkgs.had.co.nz/tests.html
* testthat

!!! performance
* rtools 
https://github.com/stan-dev/rstan/wiki/Install-Rtools-for-Windows
* Rcpp
* RInside 

* speed up loop in R 
http://stackoverflow.com/questions/2908822/speed-up-the-loop-operation-in-r
http://www.r-bloggers.com/faster-for-loops-in-r/
http://biostat.mc.vanderbilt.edu/wiki/pub/Main/SvetlanaEdenRFiles/handouts.pdf
http://www.r-bloggers.com/faster-higher-stonger-a-guide-to-speeding-up-r-code-for-busy-people/

!!! reporting 
* knitr

!!! database programming
http://blog.aguskurniawan.net/



! favorite functions 
* cut 
** turns continuous variables into factors http://www.r-bloggers.com/r-function-of-the-day-cut/


! .Rprofile
http://www.r-bloggers.com/fun-with-rprofile-and-customizing-r-startup/
http://stackoverflow.com/questions/13633876/getting-rprofile-to-load-at-startup
http://www.dummies.com/how-to/content/how-to-install-and-configure-rstudio.html


! require vs library
http://stackoverflow.com/questions/5595512/what-is-the-difference-between-require-and-library
http://yihui.name/en/2014/07/library-vs-require/
https://github.com/rstudio/shiny#installation

! R java issue fix 
{{{

 check the environment 
> Sys.getenv()
ALLUSERSPROFILE          C:\ProgramData
APPDATA                  C:\Users\karl\AppData\Roaming
CommonProgramFiles       C:\Program Files\Common Files
CommonProgramFiles(x86)
                         C:\Program Files (x86)\Common Files
CommonProgramW6432       C:\Program Files\Common Files
COMPUTERNAME             KARL-REMOTE
ComSpec                  C:\Windows\system32\cmd.exe
DISPLAY                  :0
FP_NO_HOST_CHECK         NO
GFORTRAN_STDERR_UNIT     -1
GFORTRAN_STDOUT_UNIT     -1
HADOOP_HOME              C:\tmp\hadoop
HOME                     C:/Users/karl/Documents
HOMEDRIVE                C:
HOMEPATH                 \Users\karl
JAVA_HOME                C:/Program Files/Java/jdk1.8.0_25/bin
LOCALAPPDATA             C:\Users\karl\AppData\Local
LOGONSERVER              \\KARL-REMOTE
NUMBER_OF_PROCESSORS     4
OS                       Windows_NT
PATH                     C:\Program
                         Files\R\R-3.3.1\bin\x64;C:\ProgramData\Oracle\Java\javapath;C:\oracle\product\11.1.0\db_1\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program
                         Files (x86)\Common
                         Files\SYSTEM\MSMAPI\1033;C:\Python33;C:\Python33\Scripts;C:\Program
                         Files (x86)\QuickTime\QTSystem\;C:\Program Files
                         (x86)\nodejs\;C:\Users\karl\AppData\Roaming\npm;C:\Users\karl\AppData\Local\atom\bin;C:\Users\karl\AppData\Local\Pandoc\;C:\Program
                         Files\Java\jdk1.8.0_25\jre\bin\server;C:\Program
                         Files\Java\jdk1.8.0_25\bin
PATHEXT                  .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
PROCESSOR_ARCHITECTURE   AMD64
PROCESSOR_IDENTIFIER     Intel64 Family 6 Model 58 Stepping 9, GenuineIntel
PROCESSOR_LEVEL          6
PROCESSOR_REVISION       3a09
ProgramData              C:\ProgramData
ProgramFiles             C:\Program Files
ProgramFiles(x86)        C:\Program Files (x86)
ProgramW6432             C:\Program Files
PSModulePath             C:\Windows\system32\WindowsPowerShell\v1.0\Modules\
PUBLIC                   C:\Users\Public
R_ARCH                   /x64
R_COMPILED_BY            gcc 4.9.3
R_DOC_DIR                C:/PROGRA~1/R/R-33~1.1/doc
R_HOME                   C:/PROGRA~1/R/R-33~1.1
R_LIBS_USER              C:/Users/karl/Documents/R/win-library/3.3
R_USER                   C:/Users/karl/Documents
RMARKDOWN_MATHJAX_PATH   C:/Program Files/RStudio/resources/mathjax-23
RS_LOCAL_PEER            \\.\pipe\33860-rsession
RS_RPOSTBACK_PATH        C:/Program Files/RStudio/bin/rpostback
RS_SHARED_SECRET         63341846741
RSTUDIO                  1
RSTUDIO_MSYS_SSH         C:/Program Files/RStudio/bin/msys-ssh-1000-18
RSTUDIO_PANDOC           C:/Program Files/RStudio/bin/pandoc
RSTUDIO_SESSION_PORT     33860
RSTUDIO_USER_IDENTITY    karl
RSTUDIO_WINUTILS         C:/Program Files/RStudio/bin/winutils
SESSIONNAME              Console
SystemDrive              C:
SystemRoot               C:\Windows
TEMP                     C:\Users\karl\AppData\Local\Temp
TMP                      C:\Users\karl\AppData\Local\Temp
USERDOMAIN               karl-remote
USERNAME                 karl
USERPROFILE              C:\Users\karl
windir                   C:\Windows

# check java version
> system("java -version")
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) Client VM (build 25.91-b14, mixed mode)


# add the Java directories to PATH; the critical entry is the directory containing jvm.dll:
C:\Program Files\Java\jdk1.8.0_25\jre\bin\server;C:\Program Files\Java\jdk1.8.0_25\bin

# set JAVA_HOME, then load the Java-dependent packages
Sys.setenv(JAVA_HOME="C:/Program Files/Java/jdk1.8.0_25/bin")
library(rJava)
library(XLConnect)

}}}

! remove duplicate records 
http://www.cookbook-r.com/Manipulating_data/Finding_and_removing_duplicate_records/
http://www.dummies.com/how-to/content/how-to-remove-duplicate-data-in-r.html

! get R memory usage 
http://stackoverflow.com/questions/1358003/tricks-to-manage-the-available-memory-in-an-r-session
{{{
# improved list of objects
.ls.objects <- function (pos = 1, pattern, order.by,
                        decreasing=FALSE, head=FALSE, n=5) {
    napply <- function(names, fn) sapply(names, function(x)
                                         fn(get(x, pos = pos)))
    names <- ls(pos = pos, pattern = pattern)
    obj.class <- napply(names, function(x) as.character(class(x))[1])
    obj.mode <- napply(names, mode)
    obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
    obj.prettysize <- napply(names, function(x) {
                           capture.output(format(utils::object.size(x), units = "auto")) })
    obj.size <- napply(names, object.size)
    obj.dim <- t(napply(names, function(x)
                        as.numeric(dim(x))[1:2]))
    vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
    obj.dim[vec, 1] <- napply(names, length)[vec]
    out <- data.frame(obj.type, obj.size, obj.prettysize, obj.dim)
    names(out) <- c("Type", "Size", "PrettySize", "Rows", "Columns")
    if (!missing(order.by))
        out <- out[order(out[[order.by]], decreasing=decreasing), ]
    if (head)
        out <- head(out, n)
    out
}

# shorthand
lsos <- function(..., n=10) {
    .ls.objects(..., order.by="Size", decreasing=TRUE, head=TRUE, n=n)
}

lsos()

}}}

! dplyr join functions cheat sheet
https://stat545-ubc.github.io/bit001_dplyr-cheatsheet.html


! loess
http://flowingdata.com/2010/03/29/how-to-make-a-scatterplot-with-a-smooth-fitted-line/


! gather vs melt
http://stackoverflow.com/questions/26536251/comparing-gather-tidyr-to-melt-reshape2

! tidyr vs reshape2
http://rpubs.com/paul4forest/reshape2tidyrdplyr


! bootstrapping 
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=bootstrapping%20in%20r

! forecasting 
Forecasting time series using R by Prof Rob J Hyndman at Melbourne R Users https://www.youtube.com/watch?v=1Lh1HlBUf8k
forecasting principles and practice http://robjhyndman.com/uwafiles/fpp-notes.pdf
http://www.statistics.com/forecasting-analytics#fees
!! melbourne talk
http://robjhyndman.com/seminars/melbournerug/
http://robjhyndman.com/talks/MelbourneRUG.pdf
http://robjhyndman.com/talks/MelbourneRUGexamples.R
!! time series data
https://forecasters.org/resources/time-series-data/m3-competition/
https://forecasters.org/resources/time-series-data/
http://www.forecastingprinciples.com/index.php?option=com_content&view=article&id=8&Itemid=18
https://datamarket.com/data/list/?q=provider%3atsdl
!! prediction competitions
http://robjhyndman.com/hyndsight/prediction-competitions/
!! forecasting books
Forecasting: principles and practice https://www.otexts.org/book/fpp
!! automated forecasting examples
http://www.dxbydt.com/munge-automate-forecast/ , https://github.com/djshahbydt/Munge-Automate-Forecast.../blob/master/Munge%2C%20Automate%20%26%20Forecast...
http://www.dxbydt.com/wp-content/uploads/2015/11/data.csv
https://github.com/pmaier1971/AutomatedForecastingWithShiny/blob/master/server.R
!! forecasting UI examples
https://pmaier1971.shinyapps.io/AutomatedForecastingWithShiny/  <- check the overview and economic forecasting tabs
http://www.ae.be/blog-en/combining-the-power-of-r-and-d3-js/ , http://vanhumbeecka.github.io/R-and-D3/plotly.html  R and D3 binding
https://nxsheet.com/sheets/56d0a87264e47ee60a95f652

!! forecasting and shiny 
https://aneesha.shinyapps.io/ShinyTimeseriesForecasting/
https://medium.com/@aneesha/timeseries-forecasting-with-the-forecast-r-package-and-shiny-6fa04c64196#.r9nllan82
http://www.datasciencecentral.com/profiles/blogs/time-series-forecasting-and-internet-of-things-iot-in-grain

!! forecasting time series reading materials
http://a-little-book-of-r-for-time-series.readthedocs.io/en/latest/index.html
http://a-little-book-of-r-for-time-series.readthedocs.io/en/latest/src/timeseries.html
understanding time series data https://www.safaribooksonline.com/library/view/practical-data-analysis/9781783551668/ch07s03.html
https://www.quantstart.com/articles#time-series-analysis

!! acf pacf, arima arma
http://www.forecastingbook.com/resources/online-tutorials/acf-and-random-walk-in-xlminer
autocorrelation in bearing performance https://www.youtube.com/watch?v=oVQCS9Om_w4
autocorrelation function in time series analysis https://www.youtube.com/watch?v=pax02Q0aJO8
Detecting AR & MA using ACF and PACF plots https://www.youtube.com/watch?v=-vSzKfqcTDg
time series theory https://www.youtube.com/playlist?list=PLUgZaFoyJafhfcggaNzmZt_OdJq32-iFW
R Programming LiveLessons (Video Training): Fundamentals to Advanced https://www.safaribooksonline.com/library/view/r-programming-livelessons/9780133578867/
understanding time series data https://www.safaribooksonline.com/library/view/practical-data-analysis/9781783551668/ch07s03.html

ARMA (no differencing), ARIMA (with differencing) https://www.quora.com/Whats-the-difference-between-ARMA-ARIMA-and-ARIMAX-in-laymans-terms
https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average
https://en.wikipedia.org/wiki/Autoregressive%E2%80%93moving-average_model
ARIMA models https://www.otexts.org/fpp/8
Stationarity and differencing https://www.otexts.org/fpp/8/1
https://www.quora.com/What-are-the-differences-between-econometrics-quantitative-finance-mathematical-finance-computational-finance-and-financial-engineering

!! time series forecasting model compare 
http://stats.stackexchange.com/questions/140163/timeseries-analysis-procedure-and-methods-using-r

!! cross validation 
http://stats.stackexchange.com/questions/140163/timeseries-analysis-procedure-and-methods-using-r
http://robjhyndman.com/hyndsight/crossvalidation/
http://robjhyndman.com/hyndsight/tscvexample/
Evaluating forecast accuracy https://www.otexts.org/fpp/2/5/
http://moderntoolmaking.blogspot.com/2011/11/functional-and-parallel-time-series.html
http://moderntoolmaking.blogspot.com/search/label/cross-validation
http://moderntoolmaking.blogspot.com/search/label/forecasting
cross validation and train/test split - Selecting the best model in scikit-learn using cross-validation https://www.youtube.com/watch?v=6dbrR-WymjI



! dplyr vs data.table
http://stackoverflow.com/questions/21435339/data-table-vs-dplyr-can-one-do-something-well-the-other-cant-or-does-poorly/27840349#27840349
http://www.r-bloggers.com/working-with-large-datasets-with-dplyr-and-data-table/
http://www.r-statistics.com/2013/09/a-speed-test-comparison-of-plyr-data-table-and-dplyr/


! shiny 
reproducible research with R and shiny https://www.safaribooksonline.com/library/view/strata-hadoop/9781491927960/part24.html
http://rmarkdown.rstudio.com/
https://rstudio.github.io/packrat/
https://www.shinyapps.io
https://gist.github.com/SachaEpskamp/5796467 A general shiny app to import and export data to R. Note that this can be used as a starting point for any app that requires data to be loaded into Shiny.
https://www.youtube.com/watch?v=HPZSunrSo5M R Shiny app tutorial # 15 - how to use fileInput to upload CSV or Text file

!! shiny time series 
http://markedmondson.me/my-google-analytics-time-series-shiny-app-alpha
https://gist.github.com/MarkEdmondson1234/3190fb967f3cbc2eeae2
http://blog.rstudio.org/2015/04/14/interactive-time-series-with-dygraphs/
http://stackoverflow.com/questions/28049248/create-time-series-graph-in-shiny-from-user-inputs


!! courses/tutorials
http://shiny.rstudio.com/
http://shiny.rstudio.com/tutorial/
http://shiny.rstudio.com/articles/
http://shiny.rstudio.com/gallery/
http://shiny.rstudio.com/articles/shinyapps.html
http://shiny.rstudio.com/reference/shiny/latest/ <- function references
https://www.safaribooksonline.com/library/view/introduction-to-shiny/9781491959558/
https://www.safaribooksonline.com/library/view/web-application-development/9781782174349/
http://deanattali.com/blog/building-shiny-apps-tutorial/
https://github.com/rstudio/IntroToShiny


!! showcase/gallery/examples
https://www.rstudio.com/products/shiny/shiny-user-showcase/
https://github.com/rstudio/shiny-examples


!! persistent data/storage in shiny
http://deanattali.com/blog/shiny-persistent-data-storage/  
http://daattali.com/shiny/persistent-data-storage/
https://github.com/daattali/shiny-server/tree/master/persistent-data-storage


!! google form with shiny app
http://deanattali.com/2015/06/14/mimicking-google-form-shiny/


!! real time monitoring of R package downloads
https://gallery.shinyapps.io/087-crandash/
https://github.com/Athospd/semantix_closeness_centrality


!! R pivot table
http://www.magesblog.com/2015/03/pivot-tables-with-r.html
http://www.joyofdata.de/blog/pivoting-data-r-excel-style/
http://stackoverflow.com/questions/33214397/download-rpivottable-ouput-in-shiny
https://www.rforexcelusers.com/make-pivottable-in-r/
https://github.com/smartinsightsfromdata/rpivotTable/blob/master/R/rpivotTable.R
https://github.com/joyofdata/r-big-pivot


!! setup shiny server 
https://www.digitalocean.com/community/tutorials/how-to-set-up-shiny-server-on-ubuntu-14-04
http://deanattali.com/2015/05/09/setup-rstudio-shiny-server-digital-ocean/
http://www.r-bloggers.com/how-to-get-your-very-own-rstudio-server-and-shiny-server-with-digitalocean/
http://johndharrison.blogspot.com/2014/03/rstudioshiny-server-on-digital-ocean.html
http://www.r-bloggers.com/deploying-your-very-own-shiny-server/
http://matthewlincoln.net/2015/08/31/setup-rstudio-and-shiny-servers-on-digital-ocean.html


!! nearPoints, brushedPoints
http://shiny.rstudio.com/articles/selecting-rows-of-data.html
http://shiny.rstudio.com/reference/shiny/latest/brushedPoints.html
http://stackoverflow.com/questions/31445367/r-shiny-datatableoutput-not-displaying-brushed-points
http://stackoverflow.com/questions/34642851/shiny-ggplot-with-interactive-x-and-y-does-not-pass-information-to-brush
http://stackoverflow.com/questions/29965979/data-object-not-found-when-deploying-shiny-app
https://github.com/BillPetti/Scheduling-Shiny-App


!! deploy app 
library(rsconnect)
rsconnect::deployApp('E:/GitHub/code_ninja/r/shiny/karlshiny')


!! shiny and d3
http://stackoverflow.com/questions/26650561/binding-javascript-d3-js-to-shiny
http://www.r-bloggers.com/d3-and-r-interacting-through-shiny/
https://github.com/timelyportfolio/shiny-d3-plot
https://github.com/vega/vega/wiki/Vega-and-D3
http://vega.github.io/


! data frame vs data table 
http://stackoverflow.com/questions/13618488/what-you-can-do-with-data-frame-that-you-cant-in-data-table
http://stackoverflow.com/questions/18001120/what-is-the-practical-difference-between-data-frame-and-data-table-in-r

! stat functions
stat_summary dot plot - ggplot2 dot plot : Quick start guide - R software and data visualization http://www.sthda.com/english/wiki/print.php?id=180

! ggplot2
ggplot2 essentials http://www.sthda.com/english/wiki/ggplot2-essentials
Be Awesome in ggplot2: A Practical Guide to be Highly Effective - R software and data visualization http://www.sthda.com/english/wiki/be-awesome-in-ggplot2-a-practical-guide-to-be-highly-effective-r-software-and-data-visualization
Beautiful plotting in R: A ggplot2 cheatsheet http://zevross.com/blog/2014/08/04/beautiful-plotting-in-r-a-ggplot2-cheatsheet-3/

!! real time viz
http://stackoverflow.com/questions/11365857/real-time-auto-updating-incremental-plot-in-r
http://stackoverflow.com/questions/27205610/real-time-auto-incrementing-ggplot-in-r


! ggvis
ggvis vs ggplot2 http://ggvis.rstudio.com/ggplot2.html
ggvis basics http://ggvis.rstudio.com/ggvis-basics.html#layers
Properties and scales http://ggvis.rstudio.com/properties-scales.html
ggvis cookbook http://ggvis.rstudio.com/cookbook.html
https://www.cheatography.com/shanly3011/cheat-sheets/data-visualization-in-r-ggvis-continued/
http://stats.stackexchange.com/questions/117078/for-plotting-with-r-should-i-learn-ggplot2-or-ggvis

! Execute R inside Oracle 
https://blogs.oracle.com/R/entry/invoking_r_scripts_via_oracle
https://blogs.oracle.com/R/entry/oraah_enabling_high_performance_r
https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1
http://sheepsqueezers.com/media/documentation/oracle/ore-trng4-embeddedrscripts-1501638.pdf
Oracle R Enterprise Hands-on Lab http://static1.1.sqspcdn.com/static/f/552253/24257177/1390505576063/BIWA_14_Presentation_3.pdf?token=LqmhB3tJhuDeN0eYOXaGlm04BlI%3D
http://www.peakindicators.com/blog/the-advantages-of-ore-over-traditional-r
COUPLING DATABASES AND ADVANCED ANALYTICAL TOOLS (R) http://it4bi.univ-tours.fr/it4bi/medias/pdfs/2014_Master_Thesis/IT4BI_2014_submission_4.pdf
R Interface for Embedded R Execution http://docs.oracle.com/cd/E67822_01/OREUG/GUID-3227A0D4-C5FE-49C9-A28C-8448705ADBCF.htm#OREUG495
automated trading strategies with R http://www.oracle.com/assets/media/automatedtradingstrategies-2188856.pdf?ssSourceSiteId=otnen
Is it possible to run a SAS or R script from PL/SQL? http://stackoverflow.com/questions/4043629/is-it-possible-to-run-a-sas-or-r-script-from-pl-sql
statistical analysis with oracle http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/40cfa537-62b8-2f10-a78d-d320a2ab7205?overridelayout=true

! Turn your R code into a web API 
https://github.com/trestletech/plumber


! errors

!! dplyr “Select” - Error: found duplicated column name
http://stackoverflow.com/questions/28549045/dplyr-select-error-found-duplicated-column-name

! spark 
[[sparklyr]]

! R cookbook - Winston C. 
http://www.cookbook-r.com/

! references 
http://www.amazon.com/The-Art-Programming-Statistical-Software/dp/1593273843/ref=tmm_pap_title_0?ie=UTF8&qid=1392504776&sr=8-1
http://www.amazon.com/R-Graphics-Cookbook-Winston-Chang/dp/1449316956/ref=tmm_pap_title_0?ie=UTF8&qid=1392504949&sr=8-2
http://had.co.nz/
http://adv-r.had.co.nz/ <- advanced guide
http://adv-r.had.co.nz/Style.html <- style guide 

https://www.youtube.com/user/rdpeng/playlists

https://cran.r-project.org/doc/contrib/Short-refcard.pdf
discovering statistics using R http://library.mpib-berlin.mpg.de/toc/z2012_1351.pdf

! rpubs favorites
http://rpubs.com/karlarao
Interpreting coefficients from interaction (Part 1) http://rpubs.com/hughes/15353


! tricks 


!! count frequency of words
{{{

numbers <- c(33, 30, 14, 1 , 6, 19, 34, 17, 14, 15, 24 , 21, 24, 34, 6, 24, 34, 6, 29, 5, 19 , 4, 3, 19, 4, 14, 20, 34)

library(dplyr); arrange(as.data.frame(table(numbers)), Freq)

numbers Freq
      1    1
      3    1
      5    1
     15    1
     17    1
     20    1
     21    1
     29    1
     30    1
     33    1
      4    2
      6    3
     14    3
     19    3
     24    3
     34    4
}}}
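For comparison, the same tally works in Python with the standard library's `Counter` (a hedged aside; the list literal below just mirrors the R vector above, and the sort reproduces the ascending-by-frequency output):

```python
from collections import Counter

numbers = [33, 30, 14, 1, 6, 19, 34, 17, 14, 15, 24, 21, 24, 34,
           6, 24, 34, 6, 29, 5, 19, 4, 3, 19, 4, 14, 20, 34]

# tally each value, then sort by frequency ascending (like arrange(..., Freq))
freq = sorted(Counter(numbers).items(), key=lambda kv: kv[1])
for value, count in freq:
    print(value, count)
```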

!! How to print text and variables in a single line in r 
https://stackoverflow.com/questions/32241806/how-to-print-text-and-variables-in-a-single-line-in-r/32242334


! data structures 

!! see R DATA FORMAT 
[[R data format]]



! XLConnect

!! XLConnect strftime
https://stackoverflow.com/questions/21312173/how-can-i-retrive-the-time-only-with-xlconnect

!! 1899-Dec-31
http://www.cpearson.com/excel/datetime.htm

!! R import big xlsx 
https://stackoverflow.com/questions/19147884/importing-a-big-xlsx-file-into-r/31029292#31029292

!! R write CSV 
http://rprogramming.net/write-csv-in-r/

!! R java memory 
https://stackoverflow.com/questions/34624002/r-error-java-lang-outofmemoryerror-java-heap-space
https://stackoverflow.com/questions/11766981/xlconnect-r-use-of-jvm-memory

!! R commandargs 
https://www.rdocumentation.org/packages/R.utils/versions/2.8.0/topics/commandArgs



! scraping HTML (XML package)
http://bradleyboehmke.github.io/2015/12/scraping-html-tables.html















<<<
Before h2o, there was a GUI data mining tool called rattle
 
quick intro http://r4stats.com/articles/software-reviews/rattle/
detailed course https://www.udemy.com/data-mining-with-rattle/
https://www.kdnuggets.com/2017/02/top-r-packages-machine-learning.html
 
I’d definitely try h2o with the same data set https://www.h2o.ai/try-driverless-ai/
 
it also helps to have Tableau for easy validation of the raw data, and sqldf https://www.r-bloggers.com/make-r-speak-sql-with-sqldf/ (also available in Python: from pandasql import sqldf)
and of course RStudio, PyCharm, and SQL Developer
 
another GUI tool is Exploratory, made by an ex-Oracle guy (from the Oracle Visual Analyzer team)
https://exploratory.io/features
<<<
<<showtoc>>

http://insightdataengineering.com/blog/The-Data-Engineering-Ecosystem-An-Interactive-Map.html
https://blog.insightdatascience.com/the-new-data-engineering-ecosystem-trends-and-rising-stars-414a1609d4a0#.c03g5b1nc
https://github.com/InsightDataScience/data-engineering-ecosystem/wiki/Data-Engineering-Ecosystem
https://github.com/InsightDataScience/data-engineering-ecosystem


! the ecosystem

!! v3
http://xyz.insightdataengineering.com/blog/pipeline_map/
[img(50%,50%)[ https://i.imgur.com/xeo0SP4.png ]]

!! v2
http://xyz.insightdataengineering.com/blog/pipeline_map_v2.html
[img(90%,90%)[ http://i.imgur.com/gn9E7Jf.png ]]

!! v1
http://xyz.insightdataengineering.com/blog/pipeline_map_v1.html
[img(90%,90%)[ https://lh3.googleusercontent.com/-iD9v8Iho_7g/VZVvf0mK1PI/AAAAAAAACmU/VlovJ-JP2cI/s2048/20150702_DataEngineeringEcosystem.png ]]


! hadoop architecture use case
[img[ https://lh3.googleusercontent.com/-QsRM3czDMkg/Vhfg7pTmFrI/AAAAAAAACzU/4BEa8SfK_KU/s800-Ic42/IMG_8542.JPG ]]



! others
https://trello.com/b/rbpEfMld/data-science





! also check 
!! microservices patterns 
[img(100%,100%)[ https://i.imgur.com/7p8kBwI.png]]
https://microservices.io/patterns/index.html






<<showtoc>> 

! The players: 

!! Cloud computing 
!!! AWS
!!! Azure
!!! Google Cloud 
!!! Digital Ocean

!! Infrastructure as code 
!!! Chef http://www.getchef.com/chef/ , http://puppetlabs.com/puppet/puppet-enterprise
!!! Puppet
!!! Ansible
!!! Saltstack
!!! terraform
!!! cfengine

!! Build and Test using continuous integration 
!!!  jenkins https://jenkins-ci.org/

!! Containerization 
!!!  docker, kubernetes


! ''reviews'' 
http://www.infoworld.com/d/data-center/puppet-or-chef-the-configuration-management-dilemma-215279

! Comparison of open-source configuration management software 
http://en.wikipedia.org/wiki/Comparison_of_open_source_configuration_management_software

! List of build automation software 
http://en.wikipedia.org/wiki/List_of_build_automation_software



! nice viz from heroku website
[img[ http://i.imgur.com/4kTs7TE.png  ]]
[img[ http://i.imgur.com/PHCK74x.png  ]]

! from hashicorp 
[img(70%,70%)[ http://i.imgur.com/yj0bKNF.png ]]
[img(70%,70%)[ http://i.imgur.com/zrSw8ge.png ]]

http://thenewstack.io/devops-landscape-2015-the-race-to-the-management-layer/
https://gist.github.com/diegopacheco/8f3a03a0869578221ecf






https://becominghuman.ai/cheat-sheets-for-ai-neural-networks-machine-learning-deep-learning-big-data-678c51b4b463
https://medium.com/machine-learning-in-practice/cheat-sheet-of-machine-learning-and-python-and-math-cheat-sheets-a4afe4e791b6
https://ml-cheatsheet.readthedocs.io/en/latest/
https://technology.amis.nl/2017/05/06/the-hello-world-of-machine-learning-with-python-pandas-jupyter-doing-iris-classification-based-on-quintessential-set-of-flower-data/
<<showtoc>>


! DVC - data version control 


MLOps Data Versioning and DataOps with Dmitry Petrov of DVC.org
https://www.meetup.com/pl-PL/bristech/events/271251921/

https://www.eventbrite.com/e/dc-thurs-dvc-w-dmitry-petrov-tickets-120036389071?ref=enivtefor001&invite=MjAwNjgyNjMva2FybGFyYW9AZ21haWwuY29tLzA%3D%0A&utm_source=eb_email&utm_medium=email&utm_campaign=inviteformalv2&utm_term=eventpage
Using Python With Oracle Database 11g
http://www.oracle.com/technetwork/articles/dsl/python-091105.html

http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/OOW11/python_db/python_db.htm
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/oow10/python_db/python_db.htm
http://cx-oracle.sourceforge.net/
http://www.python.org/dev/peps/pep-0249/

http://www.amazon.com/gp/product/1887902996
http://wiki.python.org/moin/BeginnersGuide
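The cx_Oracle driver linked above implements the PEP 249 DB-API, so the connect/cursor/execute/fetch shape is the same across drivers. A minimal sketch of that interface, using the stdlib sqlite3 module as a stand-in (no Oracle instance needed here; the `emp` table and values are made up, and note cx_Oracle uses `:1`-style bind variables rather than `?`):

```python
import sqlite3  # stand-in PEP 249 driver; cx_Oracle exposes the same core interface

# connect() -> cursor() -> execute()/executemany() -> fetchall() is the DB-API 2.0 flow
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emp (empno INTEGER, ename TEXT)")
cur.executemany("INSERT INTO emp VALUES (?, ?)", [(7369, "SMITH"), (7499, "ALLEN")])
conn.commit()
cur.execute("SELECT ename FROM emp ORDER BY empno")
rows = cur.fetchall()
conn.close()
print(rows)
```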

''python for ipad'' http://www.tuaw.com/2012/11/19/python-3-2-lets-you-write-python-on-the-iphone/

''python environment''
http://blog.andrewhays.net/love-your-terminal
http://ozkatz.github.com/improving-your-python-productivity.html

http://showmedo.com/videotutorials/python
The Ultimate Python Programming Course http://goo.gl/vvpWE, https://www.udemy.com/the-ultimate-python-programming-course/
Python 3 Essential Training http://www.lynda.com/Python-3-tutorials/essential-training/62226-2.html






{{{
parameters: 
    p_owner       
    p_tabname     
    p_partname    
    p_granularity 
    p_est_percent 
    p_method_opt
    p_degree
}}}

{{{

CREATE OR REPLACE PROCEDURE alloc_app_perf.table_stats 
( 
    p_owner IN varchar2,
    p_tabname IN varchar2, 
    p_partname IN varchar2 default NULL,  
    p_granularity IN varchar2 default 'GLOBAL AND PARTITION',
    p_est_percent IN varchar2 default 'DBMS_STATS.AUTO_SAMPLE_SIZE',
    p_method_opt IN varchar2 default 'FOR ALL COLUMNS SIZE AUTO',
    p_degree IN varchar2 default 8
) 
IS
    action varchar2(128);
    v_mode varchar2(30);
    cmd varchar2(2000);
BEGIN
    action := 'Analyzing the table ' || p_tabname; 
    IF p_partname IS NOT NULL THEN
        action := action||', partition '||p_partname;
        v_mode := p_granularity;

        cmd := '
        BEGIN 
            DBMS_STATS.GATHER_TABLE_STATS('||
            'ownname=>'''||p_owner||''',tabname=>'''||p_tabname||
            ''',partname=>'''||p_partname||''',granularity=>'''||v_mode||
            ''',estimate_percent=>'||p_est_percent||',method_opt=>'''||p_method_opt||''',cascade=>TRUE,degree=>'||p_degree||');
        END;';

        execute immediate cmd;
    ELSE 
        v_mode := 'DEFAULT';

        cmd := '
        BEGIN 
            DBMS_STATS.GATHER_TABLE_STATS('||
            'ownname=>'''||p_owner||''',tabname=>'''||p_tabname||
            ''',estimate_percent=>'||p_est_percent||',method_opt=>'''||p_method_opt||''',cascade=>TRUE,degree=>'||p_degree||');
        END;';

        execute immediate cmd;
    END IF; 
END;
/

}}}


{{{

grant analyze any to alloc_app_perf;
exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'CLASS_SALES')
exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'CLASS_SALES',p_est_percent=>'dbms_stats.auto_sample_size')
exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'CLASS_SALES',p_est_percent=>'1')

exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'DBA_OBJECTS')
exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'DBA_OBJECTS',p_est_percent=>'1')
exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'DBA_OBJECTS',p_est_percent=>'100')
exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'DBA_OBJECTS',p_est_percent=>'dbms_stats.auto_sample_size')

exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'DEMO_SKEW')
exec alloc_app_perf.table_stats(p_owner=>'BAS',p_tabname=>'DEMO_SKEW',p_method_opt=>'for all columns size skewonly')


}}}


! generate stats commands 
{{{

set lines 500
set pages 0
select 'exec alloc_app_perf.table_stats(p_owner=>'''||owner||''',p_tabname=>'''||table_name||''',p_degree=>''16'');'
from dba_tables where table_name in 
('ALGO_INPUT_FOR_REVIEW'        
,'ALLOCATED_NOTSHIPPED_INVS'     
,'ALLOC_BATCH_LINE_ITEMS'        
,'ALLOC_BATCH_VOLUMEGRADE_CHANGE'
,'ALLOC_SKU0_MASTER'             
,'ALLOC_SKU_MASTER'              
,'ALLOC_STORES'                  
,'EOM_NEED_UNITS'                
,'EOM_UNIT_TARGETS'              
,'INVENTORY_CLASSES'             
,'SIM_CLASSES'                   
,'STAGING_STORES'                
,'STORE_ON_ORDERS'               
,'STORE_WAREHOUSE_DETAILS'       
,'VOLUME_GRADE_CONSTRAINTS')
/


}}}


https://github.com/DingGuodong/LinuxBashShellScriptForOps
http://www.bashoneliners.com/
https://github.com/learnbyexample/scripting_course
https://github.com/learnbyexample/Linux_command_line/blob/master/Shell_Scripting.md
https://github.com/learnbyexample/Linux_command_line/blob/master/Text_Processing.md
https://medium.com/capital-one-developers/bashing-the-bash-replacing-shell-scripts-with-python-d8d201bc0989
https://www.linuxjournal.com/content/python-scripts-replacement-bash-utility-scripts
https://www.educba.com/bash-shell-programming-with-python/
https://www.linuxquestions.org/questions/linux-software-2/need-help-converting-bash-script-to-python-4175605267/
https://stackoverflow.com/questions/2839810/converting-a-bash-script-to-python-small-script
https://tails.boum.org/blueprint/Port_shell_scripts_to_Python/
https://www.dreamincode.net/forums/topic/399713-convert-a-shell-script-to-python/
https://grasswiki.osgeo.org/wiki/Converting_Bash_scripts_to_Python

https://medium.com/capital-one-tech/bashing-the-bash-replacing-shell-scripts-with-python-d8d201bc0989
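A tiny illustration of the pattern these posts describe: a shell one-liner like `grep -c ERROR app.log` rewritten with only the standard library (the log contents and file name here are invented for the example):

```python
import os
import tempfile
from pathlib import Path

# fabricate a small log file so the example is self-contained
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as f:
    f.write("INFO start\nERROR disk full\nINFO retry\nERROR disk full\n")
    log_path = f.name

# shell equivalent: grep -c ERROR app.log
count = sum(1 for line in Path(log_path).read_text().splitlines() if "ERROR" in line)
print(count)

os.unlink(log_path)
```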



! tools 
https://zwischenzugs.com/2016/08/29/bash-to-python-converter/
https://github.com/tomerfiliba/plumbum
https://hub.docker.com/r/imiell/bash2py/
https://github.com/ianmiell/bash2py
http://www.swag.uwaterloo.ca/bash2py/index.html  , https://ieeexplore.ieee.org/document/7081866/?reload=true

<<showtoc>>


! gcp security 
* https://www.udemy.com/course/introduction-to-google-cloud-security-features/learn/lecture/14562410#overview


! bigquery 
* Practical Google BigQuery for those who already know SQL https://www.udemy.com/course/practical-google-bigquery-for-those-who-already-know-sql/
* https://www.udemy.com/course/google-bigquery-for-marketers-and-agencies/


! cloud composer (airflow)
* playlist - Apache Airflow Tutorials - https://www.youtube.com/watch?v=AHMm1wfGuHE&list=PLYizQ5FvN6pvIOcOd6dFZu3lQqc6zBGp2
* Apache Airflow using Google Cloud Composer https://www.udemy.com/course/apache-airflow-using-google-cloud-composer-introduction/


! dataflow (apache beam)
* https://www.udemy.com/course/streaming-analytics-on-google-cloud-platform/learn/lecture/7996614#announcements
* https://www.udemy.com/course/apache-beam-a-hands-on-course-to-build-big-data-pipelines/learn/lecture/16220774#announcements


! end to end 
* https://www.udemy.com/course/data-engineering-on-google-cloud-platform/
* https://www.udemy.com/course/talend-open-studio-for-big-data-using-gcp-bigquery/


! SQL 
https://www.udemy.com/course/oracle-analytic-functions-in-depth/
https://www.udemy.com/course/oracle-plsql-is-my-game-exam-1z0-144/


! python 
https://www.udemy.com/course/python-oops-beginners/learn/lecture/7359360#overview
https://www.udemy.com/course/python-object-oriented-programming-oop/learn/lecture/16917860#overview
https://www.udemy.com/course/python-sql-tableau-integrating-python-sql-and-tableau/learn/lecture/13205790#overview
https://www.youtube.com/c/Coreyms/videos
https://www.youtube.com/c/realpython/videos



! java 
https://www.udemy.com/course/java-for-absolute-beginners/learn/lecture/14217184#overview
<<showtoc>>

! PL/SQL User's Guide and Reference Release - Sample PL/SQL Programs
https://docs.oracle.com/cd/A97630_01/appdev.920/a96624/a_samps.htm

! steve's videos - the plsql channel
http://tutorials.plsqlchannel.com/public/index.php 
https://learning.oreilly.com/search/?query=Steven%20Feuerstein&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_orioles=true&include_playlists=true&is_academic_institution_account=false&sort=relevance&page=0
https://www.youtube.com/channel/UCpJpLMRm452kVcie3RpINPw/playlists
http://stevenfeuersteinonplsql.blogspot.com/2015/03/27-hours-of-free-plsql-video-training.html
practically perfect plsql playlist https://apexapps.oracle.com/pls/apex/f?p=44785:141:0::NO::P141_PAGE_ID,P141_SECTION_ID:168,1208
https://www.oracle.com/database/technologies/appdev/plsql.html


! style guide 
http://oracle.readthedocs.org/en/latest/sql/basics/style-guide.html
http://www.williamrobertson.net/documents/plsqlcodingstandards.html



! plsql the good parts 
https://github.com/mortenbra/plsql-the-good-parts
http://mortenbra.github.io/plsql-the-good-parts/



! bulk collect and forall 
https://venzi.wordpress.com/2007/09/27/bulk-collect-forall-vs-cursor-for-loop/


! mvc pl/sql
http://www.dba-oracle.com/oracle_news/2004_10_27_MVC_development_using_plsql.htm
https://github.com/osalvador/dbax
http://it.toolbox.com/blogs/jjflash-oracle-journal/mvc-for-plsql-and-the-apex-listener-42688
http://jj-blogger.blogspot.com/2006/05/plsql-and-faces.html
https://www.rittmanmead.com/blog/2004/09/john-flack-on-mvc-development-using-plsql/
http://www.liberidu.com/blog/2016/11/02/how-you-should-or-shouldnt-design-program-for-a-performing-database-environment/



! references/books
http://stevenfeuersteinonplsql.blogspot.com/2014/05/resources-for-new-plsql-developers.html
https://www.safaribooksonline.com/library/view/beginning-plsql-from/9781590598825/
https://www.safaribooksonline.com/library/view/beginning-oracle-plsql/9781484207376/
https://www.safaribooksonline.com/library/view/oracle-and-plsql/9781430232070/
https://www.safaribooksonline.com/library/view/oracle-plsql-for/9780764599576/
https://www.safaribooksonline.com/library/view/oracle-plsql-for/0596005873/

! wiki
http://www.java2s.com/Tutorials/Database/Oracle_PL_SQL_Tutorial/index.htm
https://gerardnico.com/wiki/plsql/plsql


! implicit cursor attributes
https://www.ibm.com/support/knowledgecenter/en/SS6NHC/com.ibm.swg.im.dashdb.apdv.plsql.doc/doc/c0053881.html
https://www.ibm.com/support/knowledgecenter/SS6NHC/com.ibm.swg.im.dashdb.apdv.plsql.doc/doc/c0053879.html
https://www.ibm.com/support/knowledgecenter/SS6NHC/com.ibm.swg.im.dashdb.apdv.plsql.doc/doc/c0053878.html
https://www.ibm.com/support/knowledgecenter/SS6NHC/com.ibm.swg.im.dashdb.apdv.plsql.doc/doc/c0053607.html
PL/SQL Language Elements https://docs.oracle.com/cd/B28359_01/appdev.111/b28370/langelems.htm#LNPLS013 
Cursor Attribute https://docs.oracle.com/cd/B28359_01/appdev.111/b28370/cursor_attribute.htm#LNPLS01311
https://markhoxey.wordpress.com/2012/12/11/referencing-implicit-cursor-attributes-sql/
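Quick refresher on what the links cover: the implicit cursor attributes (SQL%FOUND, SQL%NOTFOUND, SQL%ROWCOUNT) report on the most recent implicit SQL statement. Minimal sketch, table t(id NUMBER) is made up:

{{{
-- set serveroutput on first
BEGIN
  UPDATE t SET id = id WHERE id = -1;      -- assume this matches no rows
  IF SQL%NOTFOUND THEN
    DBMS_OUTPUT.PUT_LINE('no rows updated');
  END IF;
  DBMS_OUTPUT.PUT_LINE('rows affected: ' || SQL%ROWCOUNT);
  ROLLBACK;
END;
/
}}}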


! plsql profiling 
https://www.thatjeffsmith.com/archive/2019/02/sql-developer-the-pl-sql-hierarchical-profiler/

{{{
Script to produce HTML report with top consumers out of PL/SQL Profiler DBMS_PROFILER data (Doc ID 243755.1)
PURPOSE
To use the PL/SQL Profiler please refer to DBMS_PROFILER documentation as per Oracle® Database PL/SQL Packages and Types Reference for your specific release and platform.

Once you have executed the PL/SQL Profiler for a piece of your application, you can use script profiler.sql provided in this document. This profiler.sql script produces a nice HTML report with the top time consumers as per your execution of the PL/SQL Profiler.

TROUBLESHOOTING STEPS
Familiarize yourself with the PL/SQL Profiler documented in the "Oracle® Database PL/SQL Packages and Types Reference" under DBMS_PROFILER.
If needed, create the PL/SQL Profiler Tables under your application schema: @?/rdbms/admin/proftab.sql
If needed, install the DBMS_PROFILER API, connected as SYS: @?/rdbms/admin/profload.sql
Start PL/SQL Profiler in your application: EXEC DBMS_PROFILER.START_PROFILER('optional comment');
Execute your transaction to be profiled. Calls to PL/SQL Libraries are expected.
Stop PL/SQL Profiler: EXEC DBMS_PROFILER.STOP_PROFILER;
Connect as your application user, execute script profiler.sql provided in this document: @profiler.sql
Provide to profiler.sql the "runid" out of a displayed list.
Review HTML report generated by profiler.sql.
}}}
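The steps above, collapsed into one session sketch (my_proc is a placeholder for whatever code is being profiled; assumes proftab.sql/profload.sql are already installed):

{{{
EXEC DBMS_PROFILER.START_PROFILER('tuning run 1');
EXEC my_proc;                     -- placeholder: the PL/SQL to be profiled
EXEC DBMS_PROFILER.STOP_PROFILER;

-- then, connected as the application user:
-- @profiler.sql    (supply the runid from the displayed list)
}}}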




! plsql collections 
Collections in Oracle PLSQL https://www.youtube.com/watch?v=DvA-amyao7s

!! accessing varray 
Accessing elements in a VARRAY column which is in a type https://community.oracle.com/thread/3961996
https://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_10002.htm#i2071643
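Short version of the thread above: you can't subscript a VARRAY column directly in SQL; un-nest it with TABLE() instead. Sketch with made-up type/table names:

{{{
CREATE TYPE phone_list_t AS VARRAY(5) OF VARCHAR2(20);
/
CREATE TABLE contacts (name VARCHAR2(30), phones phone_list_t);

-- each VARRAY element comes back as a row; scalar elements
-- are exposed through the COLUMN_VALUE pseudocolumn
SELECT c.name, p.COLUMN_VALUE AS phone
FROM   contacts c,
       TABLE(c.phones) p;
}}}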





! pl/sql design patterns 
<<<
https://technology.amis.nl/2006/03/10/design-patterns-in-plsql-the-template-pattern/
https://technology.amis.nl/2006/03/11/design-patterns-in-plsql-interface-injection-for-even-looser-coupling/
https://technology.amis.nl/2006/04/02/design-patterns-in-plsql-implementing-the-observer-pattern/

https://blog.serpland.com/tag/design-patterns
https://blog.serpland.com/oracle/design-patterns-in-plsql-oracle


https://peterhrasko.wordpress.com/2017/09/16/oop-design-patterns-in-plsql/
<<<




! plsql dynamic sql 
!! EXECUTE IMMEDIATE with multiple lines of columns to insert 
https://stackoverflow.com/questions/14401631/execute-immediate-with-multiple-lines-of-columns-to-insert
https://stackoverflow.com/questions/9090072/insert-a-multiline-string-in-oracle-with-sqlplus
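The usual fix from those threads: use the q'[]' alternative quoting for the multiline statement text and bind variables instead of concatenated literals. Sketch (audit_log is a made-up table):

{{{
DECLARE
  l_sql VARCHAR2(4000);
BEGIN
  -- q'[]' quoting means embedded single quotes need no escaping
  l_sql := q'[
    INSERT INTO audit_log (id, msg, created)
    VALUES (:1, :2, SYSDATE)
  ]';
  EXECUTE IMMEDIATE l_sql USING 42, 'it''s logged';
  COMMIT;
END;
/
}}}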


! plsql cursor within the cursor 
https://www.techonthenet.com/oracle/questions/cursor2.php
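Minimal sketch of the pattern the link describes: a parameterized inner cursor driven by each row of the outer cursor (classic dept/emp names assumed). Often a single join is the better answer, but this shows the mechanics:

{{{
DECLARE
  CURSOR c_dept IS SELECT deptno, dname FROM dept;
  CURSOR c_emp (p_deptno NUMBER) IS
    SELECT ename FROM emp WHERE deptno = p_deptno;
BEGIN
  FOR d IN c_dept LOOP
    DBMS_OUTPUT.PUT_LINE('Dept: ' || d.dname);
    FOR e IN c_emp(d.deptno) LOOP        -- inner cursor opened per outer row
      DBMS_OUTPUT.PUT_LINE('  ' || e.ename);
    END LOOP;
  END LOOP;
END;
/
}}}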



! end










https://startupsventurecapital.com/essential-cheat-sheets-for-machine-learning-and-deep-learning-researchers-efb6a8ebd2e5
Data Mining from a process perspective 
(from the book Data Mining for Business Analytics - Concepts, Techniques, and Applications)
[img(80%,80%)[http://i.imgur.com/tJ4TVCX.png]]
[img(80%,80%)[http://i.imgur.com/rmLhSnV.png]]

Machine Learning Summarized in One Picture
http://www.datasciencecentral.com/profiles/blogs/machine-learning-summarized-in-one-picture
[img(80%,80%)[http://i.imgur.com/oA0LjyF.png]]

Data Science Summarized in One Picture
http://www.datasciencecentral.com/profiles/blogs/data-science-summarized-in-one-picture
https://www.linkedin.com/pulse/business-intelligence-data-science-fuzzy-borders-rubens-zimbres
[img(80%,80%)[http://i.imgur.com/1SnVfqV.png]]


Python for Big Data in One Picture
http://www.datasciencecentral.com/profiles/blogs/python-for-big-data-in-one-picture
https://www.r-bloggers.com/python-r-vs-spss-sas/
[img(80%,80%)[http://i.imgur.com/5kPV76P.jpg]]

R for Big Data in One Picture
http://www.datasciencecentral.com/profiles/blogs/r-for-big-data-in-one-picture
[img(80%,80%)[http://i.imgur.com/abDq0ow.jpg]]


! top data science packages
[img(100%,100%)[ https://i.imgur.com/tj3ryoK.png]]
https://www.coriers.com/comparison-of-top-data-science-libraries-for-python-r-and-scala-infographic/






! ML modelling in R cheat sheet
[img(100%,100%)[ https://i.imgur.com/GPiepGw.jpg]]
https://github.com/rstudio/cheatsheets/raw/master/Machine%20Learning%20Modelling%20in%20R.pdf
https://www.r-bloggers.com/machine-learning-modelling-in-r-cheat-sheet/


! ML workflow 
[img(100%,100%)[ https://i.imgur.com/TuhIB7T.png ]]







.


also see [[database/data movement methods]]


<<showtoc>>

! RMAN 

[img(50%,50%)[ http://i.imgur.com/eLK7RRk.png ]]

!! backup and restore 
backup and restore from physical standby http://gavinsoorma.com/2012/04/performing-a-database-clone-using-a-data-guard-physical-standby-database/
Using RMAN Incremental Backups to Refresh Standby Database http://oracleinaction.com/using-rman-incremental-backups-refresh-standby-database/
https://jarneil.wordpress.com/2008/06/03/applying-an-incremental-backup-to-a-physical-standby/

!! active duplication 
create standby database using rman active duplicate https://www.pythian.com/blog/creating-a-physical-standby/
https://oracle-base.com/articles/11g/duplicate-database-using-rman-11gr2#active_database_duplication
https://oracle-base.com/articles/12c/recovery-manager-rman-database-duplication-enhancements-12cr1
!! backup-based duplication
duplicate database without connecting to target http://oracleinaction.com/duplicate-db-no-db-conn/
https://www.safaribooksonline.com/library/view/rman-recipes-for/9781430248361/9781430248361_Ch15.xhtml
https://www.safaribooksonline.com/library/view/oracle-database-12c/9780071847445/ch10.html#ch10lev15
http://oracledbasagar.blogspot.com/2011/11/cloning-on-different-server-using-rman.html
!! restartable duplicate 
11gr2 DataGuard: Restarting DUPLICATE After a Failure https://blogs.oracle.com/XPSONHA/entry/11gr2_dataguard_restarting_dup

! dNFS + CloneDB
Uses the backup piece as the backing storage.
Clone your dNFS Production Database for Testing (Doc ID 1210656.1)
How to Accelerate Test and Development Through Rapid Cloning of Production Databases and Operating Environments http://www.oracle.com/technetwork/server-storage/hardware-solutions/o13-022-rapid-cloning-db-1919816.pdf
https://oracle-base.com/articles/11g/clonedb-11gr2
http://datavirtualizer.com/database-thin-cloning-clonedb-oracle/
Clonedb: The quick and easy cloning solution you never knew you had https://www.youtube.com/watch?v=YBVj1DkUG54

! oem12c snapclone, snap clone
http://datavirtualizer.com/em-12c-snap-clone/
snap clone https://www.safaribooksonline.com/library/view/building-database-clouds/9780134309781/ch08.html#ch08
https://dbakevlar.com/2013/09/em-12c-snap-clone/
DB Snap Clone on Exadata https://www.youtube.com/watch?v=nvEmP6Z65Bg



! Thin provisioning of PDBs using “Snapshot Copy” (using ACFS snapshot or ZFS)

!! ACFS snapshot
https://www.youtube.com/watch?v=jwgD2sg8cyM
https://www.youtube.com/results?search_query=acfs+snapshot
How To Manually Create An ACFS Snapshot (Doc ID 1347365.1)
12.2 Oracle ACFS Snapshot Enhancements (Doc ID 2200299.1)



! exadata sparse clones 
https://www.doag.org/formes/pubfiles/10819226/2018-Infra-Peter_Brink-Exadata_Snapshot_Clones-Praesentation.pdf
https://docs.oracle.com/en/engineered-systems/exadata-database-machine/sagug/exadata-storage-server-snapshots.html#GUID-78F67DD0-93C8-4944-A8F0-900D910A06A0
https://learning.oreilly.com/library/view/Oracle+Database+Exadata+Cloud+Service:+A+Beginner's+Guide/9781260120882/ch3.xhtml#page_83
How to Calculate the Physical Size and Virtual Size for Sparse GridDisks in Exadata Sparse Diskgroups (ORA-15041) (Doc ID 2473412.1)


! summary matrix 
https://www.oracle.com/technetwork/database/exadata/learnmore/exadata-database-copy-twp-2543083.pdf
https://blogs.oracle.com/exadata/exadata-snapshots-part1

[img(100%,100%)[ https://i.imgur.com/C9dQNwr.png]]




! flexclone (netapp)
snap best practices http://www.netapp.com/us/media/tr-3761.pdf

! delphix 
Instant Cloning: Boosting Application Development http://www.nocoug.org/download/2014-02/NoCOUG_201402_delphix.pdf






! references
https://www.safaribooksonline.com/search/?query=RMAN%20duplicate&highlight=true&is_academic_institution_account=false&extended_publisher_data=true&include_orioles=true&source=user&include_courses=true&sort=relevance&page=2
Oracle Database 12c Oracle RMAN Backup & Recovery https://www.safaribooksonline.com/library/view/oracle-database-12c/9780071847445/
{{{
10 Duplication: Cloning the Target Database
RMAN Duplication: A Primer
Why Use RMAN Duplication?
Different Types of RMAN Duplication
The Duplication Architecture
Duplication: Location Considerations
Duplication to the Same Server: An Overview
Duplication to the Same Server, Different ORACLE_HOME
Duplication to a Remote Server: An Overview
Duplication and the Network
RMAN Workshop: Build a Password File
Duplication to the Same Server
RMAN Workshop: Duplication to the Same Server Using Disk Backups
Using Tape Backups
Duplication to a Remote Server
RMAN Workshop: Duplication to a Remote Server Using Disk Backups
Using Tape Backups for Remote Server Duplication
Targetless Duplication in 12c
Incomplete Duplication: Using the DBNEWID Utility
New RMAN Cloning Features for 12c
Using Compression
Duplicating Large Tablespaces
Summary

Duplication to a Single-Node System
RMAN Workshop: Duplicating a RAC Database to a Single-Node Database

Case #9: Completing a Failed Duplication Manually
Case #10: Using RMAN Duplication to Create a Historical Subset of the Target Database
}}}

Building Database Clouds in Oracle 12c https://www.safaribooksonline.com/library/view/building-database-clouds/9780134309781/ch08.html#ch08
{{{
Chapter 8. Cloning Databases in Enterprise Manager 12c
Full Clones
Snap Clones
Summary
}}}

Oracle Database 11g—Underground Advice for Database Administrators https://www.safaribooksonline.com/library/view/oracle-database-11gunderground/9781849680004/ch06s09.html
{{{
RMAN cloning and standbys—physical, snapshot, or logical
}}}

Oracle Database Problem Solving and Troubleshooting Handbook https://www.safaribooksonline.com/library/view/oracle-database-problem/9780134429267/ch14.html
{{{
14. Strategies for Migrating Data Quickly between Databases
}}}

Oracle RMAN Database Duplication https://www.safaribooksonline.com/library/view/oracle-rman-database/9781484211120/9781484211137_Ch01.xhtml
{{{
CHAPTER 1 Introduction
}}}

RMAN Recipes for Oracle Database 12c: A Problem-Solution Approach, Second Edition https://www.safaribooksonline.com/library/view/rman-recipes-for/9781430248361/9781430248361_Ch15.xhtml
{{{
15-1. Renaming Database Files in a Duplicate Database
15-2. Specifying Alternative Names for OMF or ASM File Systems
15-3. Creating a Duplicate Database from RMAN Backups
15-4. Duplicating a Database Without Using RMAN Backups
15-5. Specifying Options for Network-based Active Database Duplication
15-6. Duplicating a Database with Several Directories
15-7. Duplicating a Database to a Past Point in Time
15-8. Skipping Tablespaces During Database Duplication
15-9. Duplicating a Database with a Specific Backup Tag
15-10. Resynchronizing a Duplicate Database
15-11. Duplicating Pluggable Databases and Container Databases
15-12. Transporting Tablespaces on the Same Operating System Platform
15-13. Performing a Cross-Platform Tablespace Transport by Converting Files on the Source Host
15-14. Performing a Cross-Platform Tablespace Transport by Converting Files on the Destination Host
15-15. Transporting a Database by Converting Files on the Source Database Platform
15-16. Transporting Tablespaces to a Different Platform Using RMAN Backup Sets
15-17. Transporting a Database to a Different Platform Using RMAN Backup Sets

}}}








https://15445.courses.cs.cmu.edu/fall2019/
also see [[database cloning methods , rman duplicate]]


<<showtoc>>

! Transactional Migration Methods
[img(30%,30%)[http://i.imgur.com/IXfxJlZ.png]]


! Nontransactional Migration Methods
[img(30%,30%)[http://i.imgur.com/cB7is6q.png]]
[img(30%,30%)[http://i.imgur.com/09X0J6j.png]]


! Piecemeal Migration Methods / Manual Migration Methods
[img(30%,30%)[http://i.imgur.com/zPZlA2V.png]]
[img(30%,30%)[http://i.imgur.com/cnyrkW4.png]]


! Replication techniques
[img(30%,30%)[http://i.imgur.com/oi93qRg.png]]
[img(30%,30%)[http://i.imgur.com/ka9Tm52.png]]


! references 
Oracle Database Problem Solving and Troubleshooting Handbook https://www.safaribooksonline.com/library/view/oracle-database-problem/9780134429267/ch14.html
{{{
14. Strategies for Migrating Data Quickly between Databases
}}}
Oracle RMAN Database Duplication https://www.safaribooksonline.com/library/view/oracle-rman-database/9781484211120/9781484211137_Ch01.xhtml
{{{
CHAPTER 1 Introduction
}}}




! yahoo oath 
https://www.google.com/search?q=oath+hadoop+platform&oq=oath+hadoop+platform&aqs=chrome..69i57.5613j1j1&sourceid=chrome&ie=UTF-8
https://www.google.com/search?biw=1194&bih=747&ei=4ux9W7m3Nurs_QbNupXQCg&q=yahoo+hadoop+platform+oath&oq=yahoo+hadoop+platform+oath&gs_l=psy-ab.3...12449.17634.0.20887.26.26.0.0.0.0.108.2024.24j2.26.0..2..0...1.1.64.psy-ab..0.23.1791...0j0i131k1j0i67k1j0i131i67k1j0i3k1j0i22i30k1j0i22i10i30k1j33i21k1j33i160k1j33i22i29i30k1.0.rnoY_HHMiAw
also see [[JL five-hints]] for examples of using hints to manipulate the priority of a query block/table












.
based on http://ptgmedia.pearsoncmg.com/imprint_downloads/informit/promotions/python/python2python3.pdf

! .
[img(80%,80%)[https://i.imgur.com/VFFUDki.png]]
! .
[img(80%,80%)[https://i.imgur.com/aMNeCYv.png]]
! .
[img(80%,80%)[https://i.imgur.com/21KlHuY.png]]
! .
[img(80%,80%)[https://i.imgur.com/11GXxTf.png]]
https://community.oracle.com/docs/DOC-1005069  <- arup, good stuff
https://blogs.oracle.com/developers/updates-to-python-php-and-c-drivers-for-oracle-database


https://blog.dbi-services.com/oracle-locks-identifiying-blocking-sessions/

{{{
when w.wait_event_text like 'enq: TM%' then
    ' mode '||decode(w.p1 ,1414332418,'Row-S' ,1414332419,'Row-X' ,1414332420,'Share' ,1414332421,'Share RX' ,1414332422,'eXclusive')
     ||( select ' on '||object_type||' "'||owner||'"."'||object_name||'" ' from all_objects where object_id=w.p2 )
}}}


https://jonathanlewis.wordpress.com/2010/06/21/locks/
{{{
This list is specifically about the lock modes for a TM lock:

Value   Name(s)                    Table method (TM lock)
    0   No lock                    n/a
 
    1   Null lock (NL)             Used during some parallel DML operations (e.g. update) by
                                   the pX slaves while the QC is holding an exclusive lock.
 
    2   Sub-share (SS)             Until 9.2.0.5/6 "select for update"
        Row-share (RS)             Since 9.2.0.1/2 used at opposite end of RI during DML until 11.1
                                   Lock table in row share mode
                                   Lock table in share update mode
 
    3   Sub-exclusive(SX)          Update (also "select for update" from 9.2.0.5/6)
        Row-exclusive(RX)          Lock table in row exclusive mode
                                   Since 11.1 used at opposite end of RI during DML
 
    4   Share (S)                  Lock table in share mode
                                   Can appear during parallel DML with id2 = 1, in the PX slave sessions
                                   Common symptom of "foreign key locking" (missing index) problem
                                   Note that bitmap indexes on the child DON'T address the locking problem
 
    5   share sub exclusive (SSX)  Lock table in share row exclusive mode
        share row exclusive (SRX)  Less common symptom of "foreign key locking" but likely to be more
                                   frequent if the FK constraint is defined with "on delete cascade."
 
    6   Exclusive (X)              Lock table in exclusive mode
                                   create index    -- duration and timing depend on options used
                                   insert /*+ append */ 
}}}
{{{

select * from dba_tables where owner = 'KARLARAO' order by last_analyzed desc;

BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS('KARLARAO', 
  options=>'GATHER',
  estimate_percent=>dbms_stats.auto_sample_size,
  degree=>dbms_stats.auto_degree,
  cascade=>TRUE,
  no_invalidate=> FALSE);
END;
/



SELECT DBMS_STATS.GET_PREFS('AUTOSTATS_TARGET') AS autostats_target,
       DBMS_STATS.GET_PREFS('CASCADE') AS cascade,
       DBMS_STATS.GET_PREFS('DEGREE') AS degree,
       DBMS_STATS.GET_PREFS('ESTIMATE_PERCENT') AS estimate_percent,
       DBMS_STATS.GET_PREFS('METHOD_OPT') AS method_opt,
       DBMS_STATS.GET_PREFS('NO_INVALIDATE') AS no_invalidate,
       DBMS_STATS.GET_PREFS('GRANULARITY') AS granularity,
       DBMS_STATS.GET_PREFS('PUBLISH') AS publish,
       DBMS_STATS.GET_PREFS('INCREMENTAL') AS incremental,
       DBMS_STATS.GET_PREFS('STALE_PERCENT') AS stale_percent
FROM   dual;
}}}
<<showtoc>>


Here are some profile/baseline steps that can be done
 
! To create a profile from a good plan, do either of the two options below:

{{{
Take note that if the predicate has literals you need to specify force_matching=TRUE so that the literals will be treated as binds
 
 
Create a profile by copying the plan_hash_value from a different SQL_ID (let’s say you rewrote the SQL and you want to inject that new plan to the old SQL_ID) 
https://raw.githubusercontent.com/karlarao/scripts/master/performance/create_sql_profile-goodbad.sql
 
dwbs001s1(sys): @create_sql_profile-goodbad.sql
Enter value for goodsql_id: 22s34g2djar10
Enter value for goodchild_no (0):  <HIT ENTER>
Enter value for badsql_id: 00fnpu38hz98x
Enter value for badchild_no (0): <HIT ENTER>
Enter value for profile_name (PROF_sqlid_planhash):  <HIT ENTER>
Enter value for category (DEFAULT):  <HIT ENTER>
Enter value for force_matching (FALSE): <HIT ENTER>
Enter value for plan_hash_value: <HIT ENTER>
SQL Profile PROF_00fnpu38hz98x_ created.
 
 
Create a profile by copying the plan_hash_value from the same SQL (let’s say the previous good plan_hash_value exist, and you want the SQL_ID to use that)
https://raw.githubusercontent.com/karlarao/scripts/master/performance/copy_plan_hash_value.sql
 
HCMPRD1> @copy_plan_hash_value.sql
Enter value for plan_hash_value to generate profile from (X0X0X0X0): 3609883731  <-- this is the good plan
Enter value for sql_id to attach profile to (X0X0X0X0): c7tadymffd34z
Enter value for child_no to attach profile to (0):
Enter value for category (DEFAULT):
Enter value for force_matching (false):
 
PL/SQL procedure successfully completed.
}}}



! After stabilizing the SQL to an acceptable response time, you can create a SQL baseline on the SQL_ID with one or more good plan_hash_values
Example below

{{{
SQL with multiple Execution Plans
 
- The following SQLs, especially SQL_ID 93c0q2r788x6c (bad PHV 369685592) and 8txzdvns1jzxm (bad PHV 866924405), would benefit from using a SQL Plan Baseline to exclude the
bad PHVs from executing and just use the good ones
 
3d.297. SQL with multiple Execution Plans (DBA_HIST_SQLSTAT)
 
This can be done by following the example below:
 
- In the example, SQL_ID 93c0q2r788x6c adds Plan Hash Values 1948592153 and 2849155601 to its SQL Plan Baseline so that the optimizer would just choose between the two plans
 
 
-- create the baseline
DECLARE
my_plans pls_integer;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '93c0q2r788x6c',plan_hash_value=>'1948592153', fixed =>'YES', enabled=>'YES');
END;
/
      
-- add the other plan
DECLARE
my_plans pls_integer;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '93c0q2r788x6c',plan_hash_value=>'2849155601', fixed =>'YES', enabled=>'YES');
END;
/
-- verify
set lines 200
set verify off
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(sql_handle=>'&sql_handle', format=>'basic'));
 
 
 
--######################################################################
if PHV 2849155601 is not in the cursor cache then DBMS_SPM.LOAD_PLANS_FROM_SQLSET has to be used
--########################################################################
 
exec dbms_sqltune.create_sqlset(sqlset_name => '93c0q2r788x6c_sqlset_test',description => 'sqlset descriptions');
 
declare
baseline_ref_cur DBMS_SQLTUNE.SQLSET_CURSOR;
begin
open baseline_ref_cur for
select VALUE(p) from table(
DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(&begin_snap_id, &end_snap_id,'sql_id='||CHR(39)||'&sql_id'||CHR(39)||' and plan_hash_value=2849155601',NULL,NULL,NULL,NULL,NULL,NULL,'ALL')) p;
DBMS_SQLTUNE.LOAD_SQLSET('93c0q2r788x6c_sqlset_test', baseline_ref_cur);
end;
/
 
SELECT NAME,OWNER,CREATED,STATEMENT_COUNT FROM DBA_SQLSET where name='93c0q2r788x6c_sqlset_test';
 
select * from table(dbms_xplan.display_sqlset('93c0q2r788x6c_sqlset_test','&sql_id'));
 
select sql_handle, plan_name, origin, enabled, accepted, fixed, module from dba_sql_plan_baselines;
 
set serveroutput on
declare
my_int pls_integer;
begin
my_int := dbms_spm.load_plans_from_sqlset (
sqlset_name => '93c0q2r788x6c_sqlset_test',
basic_filter => 'sql_id=''93c0q2r788x6c''',
sqlset_owner => 'SYS',
fixed => 'YES',
enabled => 'YES');
DBMS_OUTPUT.PUT_line(my_int);
end;
/
 
select sql_handle, plan_name, origin, enabled, accepted, fixed, module from dba_sql_plan_baselines;
 
 
-- make sure the additional PHV is ACCEPTED and FIXED
 
SET SERVEROUTPUT ON
DECLARE
  l_plans_altered  PLS_INTEGER;
BEGIN
  l_plans_altered := DBMS_SPM.alter_sql_plan_baseline(
    sql_handle      => 'SQL_c244ec33ef56024a',
    plan_name       => 'SQL_PLAN_c4j7c6grpc0kaf8003e90',
    attribute_name  => 'ACCEPTED',
    attribute_value => 'YES');
  DBMS_OUTPUT.put_line('Plans Altered: ' || l_plans_altered);
END;
/
 
set serveroutput on                                          
DECLARE                                                      
  l_plans_altered  PLS_INTEGER;                              
BEGIN                                                         
  l_plans_altered := DBMS_SPM.alter_sql_plan_baseline(       
    sql_handle      => 'SQL_c244ec33ef56024a',           
    plan_name       => 'SQL_PLAN_c4j7c6grpc0kaf8003e90',      
    attribute_name  => 'FIXED',                              
    attribute_value => 'YES');                                                                               
  DBMS_OUTPUT.put_line('Plans Altered: ' || l_plans_altered);
END;                                                          
/  
}}}



! You can verify the SQL_ID picking up the good plan by using dplan or dplanx
<<<
https://raw.githubusercontent.com/karlarao/scripts/master/performance/dplan.sql
rac-aware https://raw.githubusercontent.com/karlarao/scripts/master/performance/dplanx.sql
<<<



! Also read up on “how to migrate SQL baselines” across databases, because you need those baselines propagated to all your environments
<<<
There’s also a tool inside SQLTXPLAIN (search for it in MOS) called coe_xfr_sql_profile https://raw.githubusercontent.com/karlarao/scripts/master/performance/coe_xfr_sql_profile_12c.sql
You run it against a SQL_ID and PLAN_HASH_VALUE and it creates a sql file. When you run that file in another environment, it creates a sql profile for that SQL_ID and PLAN_HASH_VALUE combination.
So it serves as a backup of that SQL's performance, or another way of migrating or backing up profiles across environments.
 
In summary, if you have full control over the code, rewriting it or adding hints (but not too many) so it behaves optimally is what I recommend. This way the change gets pushed to your code base, is tracked in your git/version control repo, and is propagated across environments.
You can also baseline on top of the rewrite or hints, but make sure this is maintained across environments.
<<<
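Rough sketch of the coe_xfr_sql_profile workflow (the generated script name shown here is indicative only; the actual name is printed when the script runs):

{{{
-- on the source database, against a known good SQL_ID + PLAN_HASH_VALUE:
SQL> @coe_xfr_sql_profile_12c.sql
--   it prompts for SQL_ID and PLAN_HASH_VALUE, then generates a script,
--   e.g. coe_xfr_sql_profile_<sql_id>_<plan_hash_value>.sql

-- on the target database, run that generated script to recreate the
-- profile for the same SQL_ID/PHV combination:
SQL> @coe_xfr_sql_profile_<sql_id>_<plan_hash_value>.sql
}}}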
















.
CS_RESOURCE_MANAGER.LIST_CURRENT_RULES
https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/cs-resource-manager.html#GUID-4707D03A-868D-43AE-BB2F-A9BF9F738604
https://docs.oracle.com/en-us/iaas/autonomous-database-serverless/doc/service--concurrency-limit-change-ocpu.html
https://docs.oracle.com/en-us/iaas/Content/General/Concepts/regions.htm
http://kerryosborne.oracle-guy.com/2009/07/how-to-attach-a-sql-profile-to-a-different-statement/

HOWTO: bad plan to good plan switch http://www.evernote.com/shard/s48/sh/308af73e-47bc-4598-ab31-77ab74cbbed9/7acc32b91ebb64639116d3931a4e9935

{{{
15:07:41 HCMPRD1> @copy_plan_hash_value.sql
Enter value for plan_hash_value to generate profile from (X0X0X0X0): 3609883731  <-- this is the good plan
Enter value for sql_id to attach profile to (X0X0X0X0): c7tadymffd34z
Enter value for child_no to attach profile to (0):
Enter value for category (DEFAULT):
Enter value for force_matching (false):

PL/SQL procedure successfully completed.

}}}
Database Development guide -> 2 Connection Strategies for Database Applications
https://docs.oracle.com/en/database/oracle/oracle-database/19/adfns/connection_strategies.html#GUID-90D1249D-38B8-47BF-9829-BA0146BD814A


https://docs.oracle.com/database/122/ADFNS/connection_strategies.htm#ADFNS-GUID-90D1249D-38B8-47BF-9829-BA0146BD814A
<<showtoc>>


! redo apply

!! Without Real Time Apply (RTA) on standby database
{{{
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
}}}

!! With Real Time Apply (RTA)
If you configured your standby redo logs, you can start real-time apply using the following command:
{{{
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
}}}

!! Stopping Redo Apply on standby database
To stop Redo Apply in the foreground, issue the following SQL statement.
{{{
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
}}}



! monitor redo apply 

!! Last sequence received and applied
You can use this (important) SQL to check whether your physical standby is in sync with the primary:
{{{
SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM
(SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
(SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE
ARCH.THREAD# = APPL.THREAD#;
}}}
{{{
-- redo transport services 
ALTER SESSION SET NLS_DATE_FORMAT ='DD-MON-RR HH24:MI:SS';
SELECT INST_ID, SEQUENCE#, APPLIED, FIRST_TIME, NEXT_TIME FROM GV$ARCHIVED_LOG ORDER BY 2,1,4;

ALTER SYSTEM SWITCH LOGFILE;
 
SELECT INST_ID, SEQUENCE#, APPLIED, FIRST_TIME, NEXT_TIME FROM GV$ARCHIVED_LOG ORDER BY 2,1,4;
}}}

!! on standby, get data guard stats 
{{{
set linesize 120
col START_TIME format a20
col ITEM format a20
SELECT TO_CHAR(START_TIME, 'DD-MON-RR HH24:MI:SS') START_TIME, ITEM , SOFAR, UNITS
FROM V$RECOVERY_PROGRESS
WHERE ITEM IN ('Active Apply Rate', 'Average Apply Rate', 'Redo Applied');
}}}


!! on standby, retrieve the transport lag and the apply lag
{{{
-- Transport lag represents the data that will be lost in case of disaster
col NAME for a13
col VALUE for a13
col UNIT for a30
set LINES 132
SELECT NAME, VALUE, UNIT, TIME_COMPUTED
FROM V$DATAGUARD_STATS WHERE NAME IN ('transport lag', 'apply lag');
}}}


!! Standby database process status
{{{
select distinct process, status, thread#, sequence#, block#, blocks from v$managed_standby ;
}}}


!! If using real-time apply
{{{
select TYPE, ITEM, to_char(TIMESTAMP, 'DD-MON-YYYY HH24:MI:SS') from v$recovery_progress where ITEM='Last Applied Redo';
or 
select recovery_mode from v$archive_dest_status where dest_id=1;

}}}



! others 
{{{
select * from v$managed_standby;
select * from v$log;
select * from v$standby_log;
select * from DBA_REGISTERED_ARCHIVED_LOG
select * from V$ARCHIVE
select * from V$PROXY_ARCHIVEDLOG
select * from V$ARCHIVED_LOG
select * from V$ARCHIVE_GAP
select * from V$ARCHIVE_PROCESSES;
select * from V$ARCHIVE_DEST;
select * from V$ARCHIVE_DEST_STATUS
select * from V$PROXY_ARCHIVELOG_DETAILS
select * from V$BACKUP_ARCHIVELOG_DETAILS
select * from V$BACKUP_ARCHIVELOG_SUMMARY
select * from V$PROXY_ARCHIVELOG_SUMMARY

----------------------------
-- MONITOR RECOVERY
----------------------------

-- Monitoring the Process Activities
     -- The V$MANAGED_STANDBY view on the standby database site shows you the activities performed by both redo transport and Redo Apply processes in a Data Guard environment. The CLIENT_P column in the output of the following query identifies the corresponding primary database process.
SELECT PROCESS, CLIENT_PROCESS, SEQUENCE#, STATUS FROM V$MANAGED_STANDBY;

-- Determining the Progress of Redo Apply
     -- The V$ARCHIVE_DEST_STATUS view on either a primary or standby database site provides you information such as the online redo log files that were archived, the archived redo log files that are applied, and the log sequence numbers of each. The following query output shows the standby database is two archived redo log files behind in applying the redo data received from the primary database. To determine if real-time apply is enabled, query the RECOVERY_MODE column of the V$ARCHIVE_DEST_STATUS view. It will contain the value MANAGED REAL TIME APPLY when real-time apply is enabled
SELECT ARCHIVED_THREAD#, ARCHIVED_SEQ#, APPLIED_THREAD#, APPLIED_SEQ#, RECOVERY_MODE FROM V$ARCHIVE_DEST_STATUS;

-- Determining the Location and Creator of the Archived Redo Log Files
     -- the location of the archived redo log, which process created the archived redo log, redo log sequence number of each archived redo log file, when each log file was archived, and whether or not the archived redo log file was applied
set lines 300
col name format a80
alter session set NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
SELECT NAME, CREATOR, SEQUENCE#, APPLIED, COMPLETION_TIME FROM V$ARCHIVED_LOG where applied = 'NO' order by 3;

-- Viewing Database Incarnations Before and After OPEN RESETLOGS
SELECT RESETLOGS_ID,THREAD#,SEQUENCE#,STATUS,ARCHIVED FROM V$ARCHIVED_LOG ORDER BY RESETLOGS_ID,SEQUENCE# ;
SELECT INCARNATION#, RESETLOGS_ID, STATUS FROM V$DATABASE_INCARNATION;

-- Viewing the Archived Redo Log History
     -- The V$LOG_HISTORY on the standby site shows you a complete history of the archived redo log, including information such as the time of the first entry, the lowest SCN in the log, the highest SCN in the log, and the sequence numbers for the archived redo log files.
SELECT FIRST_TIME, FIRST_CHANGE#, NEXT_CHANGE#, SEQUENCE# FROM V$LOG_HISTORY;

-- Determining Which Log Files Were Applied to the Standby Database
select max(sequence#), applied, thread# from v$archived_log group by applied, thread# order by 1;

-- Determining Which Log Files Were Not Received by the Standby Site
-- run this query on the primary database; DEST_ID=1 is the local archive destination, DEST_ID=2 the standby destination
SELECT LOCAL.THREAD#, LOCAL.SEQUENCE#, LOCAL.APPLIED FROM 
(SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG WHERE DEST_ID=1) LOCAL 
WHERE LOCAL.SEQUENCE# NOT IN 
(SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND 
THREAD# = LOCAL.THREAD#); 


------------------------------------------------------------------------------------
-- Monitoring Log Apply Services on Physical Standby Databases
------------------------------------------------------------------------------------

-- Accessing the V$DATABASE View
     -- Issue the following query to show information about the protection mode, the protection level, the role of the database, and switchover status:
SELECT DATABASE_ROLE, DB_UNIQUE_NAME INSTANCE, OPEN_MODE, PROTECTION_MODE, PROTECTION_LEVEL, SWITCHOVER_STATUS FROM V$DATABASE;
     -- Issue the following query to show information about fast-start failover:
SELECT FS_FAILOVER_STATUS FSFO_STATUS, FS_FAILOVER_CURRENT_TARGET TARGET_STANDBY, FS_FAILOVER_THRESHOLD THRESHOLD, FS_FAILOVER_OBSERVER_PRESENT OBS_PRES FROM V$DATABASE;

-- Accessing the V$MANAGED_STANDBY Fixed View
     -- Query the physical standby database to monitor Redo Apply and redo transport services activity at the standby site. For example, the query output might show that an RFS process completed archiving a redo log file with sequence number 947, while Redo Apply is actively applying an archived redo log file with sequence number 946, currently recovering block number 10 of the 72-block archived redo log file.
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
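     -- A sketch narrowing the same query to the receive (RFS) and apply (MRP0)
     -- processes, which are usually the two of interest when checking progress.
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS
FROM V$MANAGED_STANDBY WHERE PROCESS IN ('RFS','MRP0');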

-- Accessing the V$ARCHIVE_DEST_STATUS Fixed View
     -- To determine if real-time apply is enabled, query the RECOVERY_MODE column of the V$ARCHIVE_DEST_STATUS view. It contains the value MANAGED REAL TIME APPLY when real-time apply is enabled.
SELECT ARCHIVED_THREAD#, ARCHIVED_SEQ#, APPLIED_THREAD#, APPLIED_SEQ#, RECOVERY_MODE FROM V$ARCHIVE_DEST_STATUS;

-- Accessing the V$ARCHIVED_LOG Fixed View
     -- The V$ARCHIVED_LOG fixed view on the physical standby database shows all the archived redo log files received from the primary database. This view is only useful after the standby site starts receiving redo data; before that time, the view is populated by old archived redo log records generated from the primary control file.
SELECT REGISTRAR, CREATOR, THREAD#, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE# FROM V$ARCHIVED_LOG;

-- Accessing the V$LOG_HISTORY Fixed View
     -- Query the V$LOG_HISTORY fixed view on the physical standby database to show all the archived redo log files that were applied
SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE# FROM V$LOG_HISTORY;

-- Accessing the V$DATAGUARD_STATUS Fixed View
     -- The V$DATAGUARD_STATUS fixed view displays events that would typically be triggered by any message to the alert log or server process trace files.
SELECT MESSAGE FROM V$DATAGUARD_STATUS;
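     -- A sketch: include the timestamp and order by it so the most recent
     -- events appear last, as they would in the alert log.
SELECT TIMESTAMP, MESSAGE FROM V$DATAGUARD_STATUS ORDER BY TIMESTAMP;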


}}}








! references
https://www.oracle-scripts.net/dataguard-management/
[[Coderepo]]

[[Awk]], [[grep]], [[sed]], [[sort, uniq]]
[[BashShell]]
[[PowerShell]]
[[Perl]]
[[Python]]

[[PL/SQL]]
[[x R - Datacamp]] [[R maxym]]

[[HTML5]]
[[Javascript]] [[node.js]]
[[GoLang]]

[[Java]]
[[Machine Learning]]

viz and reporting in [[Tableau]]

[[noSQL]]




<<showtoc>> 

! learning path 
https://training.looker.com/looker-development-foundations  enroll here first in "Getting Started with LookML"; you will be redirected to -> https://learn.looker.com/projects/learn_intro/documents/home.md
https://training.looker.com/looker-development-foundations/334816


! watch videos
!! Lynda
https://www.linkedin.com/learning/looker-first-look/welcome

!! Business User Video Tutorials
https://docs.looker.com/video-library/exploring-data

!! Developer Video Tutorials
https://docs.looker.com/video-library/data-modeling


! official doc 
https://docs.looker.com/

!! release notes 
https://docs.looker.com/relnotes/intro

!! development 
!!! what is LookML 
https://docs.looker.com/data-modeling/learning-lookml/what-is-lookml
!!! Steps to Learning LookML
https://docs.looker.com/data-modeling/learning-lookml

!!! Retrieve and Chart Data
https://docs.looker.com/exploring-data/retrieve-chart-intro


!! admin
!!! Clustering 
https://docs.looker.com/setup-and-management/tutorials/clustering




! comparison vs Tableau 
https://looker.com/compare/looker-vs-tableau   "a trusted data model" 
https://webanalyticshub.com/tableau-looker-domo/
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/90553425-5bb73780-e162-11ea-9b74-038d47f3341f.png]]
https://www.itbusinessedge.com/business-intelligence/looker-vs.-tableau.html
https://www.quora.com/To-anyone-that-has-used-Looker-how-would-you-compare-it-to-Tableau-in-terms-of-price-capabilities



! x



! LOD vs Subtotals, row totals, and Table Calculations
https://discourse.looker.com/t/tableau-lod-equivalent-custom-dimension/18641
https://help.looker.com/hc/en-us/articles/360023635234-Subtotals-with-Table-Calculations
https://docs.looker.com/exploring-data/visualizing-query-results/table-next-options
https://help.tableau.com/current/pro/desktop/en-us/calculations_calculatedfields_lod_fixed.htm







<<showtoc>>

! h2o 
https://www.h2o.ai/products/h2o-driverless-ai/

! metaverse 
http://www.matelabs.in/#home
http://docs.mateverse.com/user-guide/getting-started/

HOWTO: create a manual SQL Profile https://www.evernote.com/shard/s48/sh/f1bda7e9-2ced-4794-8c5e-32b1beac567b/96cd95cebb8f3cad0329833d7aa4a328


http://kerryosborne.oracle-guy.com/2010/07/sqlt-coe_xfr_sql_profilesql/
http://kerryosborne.oracle-guy.com/2010/11/how-to-lock-sql-profiles-generated-by-sql-tuning-advisor/
http://bryangrenn.blogspot.com/2010/12/sql-profiles.html
Oracle Analytics Cloud: Augmented Analytics at Scale https://www.oracle.com/business-analytics/comparison-chart.html
https://docs.oracle.com/en/middleware/bi/analytics-server/whats-different-oas/index.html#OASWD-GUID-C907A4B0-FAFD-4F54-905C-D6FCA519C262
https://www.linkedin.com/pulse/should-i-run-performance-test-part-ci-pipeline-aniket-gadre/  <- ANSWER IS YES!
<<<
I like Wilson's answer 

Aniket, the assumptions underlying your statements makes sense ... if this was still 1995. This is exactly what I cover in my PerfGuild talk April 8th https://automationguild.com/performance 1) Perf tests just to determine "Pass or Fail" is out of fashion now because performance testers should be more valuable than "traffic cops writing tickets", and provide tools and advice to developers. Why? So that performance issues are identified early, while that code is still fresh in the mind of developers rather than in production when changes are too expensive to change. 2) monitoring systems can be brought up automatically in the pipeline. Ask your APM vendor to show you how. 3) in my experience, many performance issues are evident within 10 minutes if you can ramp up quickly enough. 4) same. 5) the point of CI/CD is to provide coverage of potential risks. Companies pay us the big bucks for us to predict issues, not to be reactive chumps. 6) Please get back in your time machine and join us in the 21st century. There are cloud environments now which spin up servers for a short time. 6 again) memory leaks are not the only reason for perf tests. Perf tests are now done to tune configurations since companies are now paying for every cycle used rather than having a fixed number of machines. It's time to upgrade your assumptions.
<<<

! references 
https://alexanderpodelko.com/docs/Continuous_Performance_Testing_CMG17.pdf
https://www.qlik.com/us/products/qlikview/personal-edition
https://community.qlik.com/thread/36516
https://www.qlik.com/us/solutions/developers
http://branch.qlik.com/#!/project

https://app.pluralsight.com/library/courses/qlikview-analyzing-data/table-of-contents
{{{
    forecast step by step: 
        eyeball the data
            raw data    
            data exploration
            periodicity
            ndiff (how much we should difference)
            decomposition - determine the series components (trend, seasonality etc.)
                x = decompose(AirPassengers, "additive")
                mymodel = x$trend + x$seasonal; plot(mymodel)           # just the trend and seasonal data
                mymodel2 = AirPassengers - x$seasonal ; plot(mymodel2)  # orig data minus the seasonal data
            seasonplot 
        process data
            create xts object
            create a ts object from xts (coredata, index, frequency/periodicity)
            partition data train,validation sets        
        graph it 
            tsoutliers (outlier detection) , anomaly detection (AnomalyDetection package)
            log scale data
            add trend line (moving average (centered - ma and trailing - rollmean) and simple exponential smoothing (ets))
        performance evaluation
            Type of seasonality assessed graphically (decompose - additive,etc.)
            detrend and seasonal adjustment (smoothing/deseasonalizing)
            lag-1 diff graph
            forecast residual graph
            forecast error graph
            acf/pacf (Acf, tsdisplay)
                raw data
                forecast residual
                lag-1 diff
            autocorrelation 
                fUnitRoots::adfTest() - time series data is non-stationary (p value above 0.05)
                tsdisplay(diff(data_ts, lag=1)) - ACF displays there's no autocorrelation going on (no significant lags out of the 95% confidence interval, the blue line) 
            accuracy
            cross validation https://github.com/karlarao/forecast_examples/tree/master/cross_validation/cvts_tscvexample_investigation
            forecast of training
            forecast of training + validation + future (steps ahead)       
        forecast result
            display prediction intervals (forecast quantile)
            display the actual and forecasted series
            displaying the forecast errors
            distribution of forecast errors
}}}
https://saplumira.com/
https://www.quora.com/Which-data-visualization-tool-is-better-SAP-Lumira-or-Tableau
https://www.prokarma.com/blog/2014/08/20/look-sap-lumira-and-lumira-cloud
https://blogs.sap.com/2014/09/12/a-lumira-extension-to-acquire-twitter-data/
https://www.sap.com/developer/tutorials/lumira-initial-data-acquisition.html
http://visualbi.com/blogs/sap-lumira-discovery/connect-sap-hana-bw-universe-sap-lumira-discovery/
<<showtoc>>

! standards for business communications
[img(40%,40%)[ https://i.imgur.com/Y6Ekegn.png]]

! download and documentation
Alternate download site across versions https://licensing.tableausoftware.com/esdalt/
Release notes across versions http://www.tableausoftware.com/support/releases?signin=650fb8c2841d145bc3236999b96fd7ab
Official doc http://www.tableausoftware.com/community/support/documentation-old
knowledgebase http://kb.tableausoftware.com/
manuals http://www.tableausoftware.com/support/manuals
http://www.tableausoftware.com/new-features/6.0
http://www.tableausoftware.com/new-features/7.0
http://www.tableausoftware.com/new-features/8.0
http://www.tableausoftware.com/fast-pace-innovation  <-- timeline across versions

''Tableau - Think Data Thursday Video Library'' http://community.tableausoftware.com/community/groups/tdt-video-library
''Tableau Style Guide'' https://github.com/davidski/dataviz/blob/master/Tableau%20Style%20Guide.md
''Software Development Lifecycle With Tableau'' https://github.com/russch/tableau-sdlc-sample
''How to share data with a statistician'' https://github.com/davidski/datasharing


! license 
https://customer-portal.tableau.com/s/
upgrading tableau desktop http://kb.tableausoftware.com/articles/knowledgebase/upgrading-tableau-desktop
offline activation http://kb.tableausoftware.com/articles/knowledgebase/offline-activation
renewal cost for desktop and personal http://www.triadtechpartners.com/wp-content/uploads/Tableau-GSA-Price-List-April-2013.pdf
renewal FAQ http://www.tableausoftware.com/support/customer-success
eula http://mkt.tableausoftware.com/files/eula.pdf


! viz types
* treemap http://www.tableausoftware.com/new-features/new-view-types
* bubble chart
* word cloud


! connectors
''Oracle Driver''
there’s an Oracle Driver so you can connect directly to a database http://downloads.tableausoftware.com/drivers/oracle/desktop/tableau7.0-oracle-driver.msi
http://www.tableausoftware.com/support/drivers
http://kb.tableausoftware.com/articles/knowledgebase/oracle-connection-errors


! HOWTOs
http://www.tableausoftware.com/learn/training  <-- LOTS OF GOOD STUFF!!!
http://community.tableausoftware.com/message/242749#242749 <-- Johan's Ideas Collections

''parameters'' http://www.youtube.com/watch?v=wvF7gAV82_c

''calculated fields'' http://www.youtube.com/watch?v=FpppiLBdtGc, http://www.tableausoftware.com/table-calculations, http://kb.tableausoftware.com/articles/knowledgebase/combining-date-and-time-single-field

''scatter plots'' http://www.youtube.com/watch?v=RYMlIY4nT9k, http://downloads.tableausoftware.com/quickstart/feature-guides/trend_lines.pdf

''getting the r2'',''trendlines'' http://kb.tableausoftware.com/articles/knowledgebase/statistics-finding-correlation, http://onlinehelp.tableausoftware.com/v7.0/pro/online/en-us/trendlines_model.html

''forecasting'' http://tombrownonbi.blogspot.com/2010/07/simple-forecasting-using-tableau.html, resolving forecast errors http://onlinehelp.tableausoftware.com/current/pro/online/en-us/forecast_resolve_errors.html

tableau forecast model - Holt-Winters exponential smoothing
http://onlinehelp.tableausoftware.com/v8.1/pro/online/en-us/help.html#forecast_describe.html

Method for Creating Multipass Aggregations Using Tableau Server  <-- doing various statistical methods in tableau
http://community.tableausoftware.com/message/181143#181143

Monte Carlo in Tableau
http://drawingwithnumbers.artisart.org/basic-monte-carlo-simulations-in-tableau/

''dashboards'' http://community.tableausoftware.com/thread/109753?start=0&tstart=0, http://tableaulove.tumblr.com/post/27627548817/another-method-to-update-data-from-inside-tableau, http://ryrobes.com/tableau/tableau-phpgrid-an-almost-instant-gratification-data-entry-tool/

''dashboard size'' http://kb.tableausoftware.com/articles/knowledgebase/fixed-size-dashboard

''dashboard multiple sources'' http://kb.tableausoftware.com/articles/knowledgebase/multiple-sources-one-worksheet

''reference line weekend highlight , reference line weekend in tableau'' https://community.tableau.com/thread/123456 (shading in weekends), http://www.evolytics.com/blog/tableau-hack-how-to-highlight-a-dimension/ , https://discussions.apple.com/thread/1919024?tstart=0 , https://3danim8.wordpress.com/2013/11/18/using-tableau-buckets-to-compare-weekday-to-weekend-data/ , 	http://onlinehelp.tableau.com/current/pro/desktop/en-us/actions_highlight_advanced.html , https://community.tableau.com/thread/120260 (How can I add weekend reference lines) 	

''reference line'', ''reference band'' http://onlinehelp.tableausoftware.com/v7.0/pro/online/en-us/reflines_addlines.html, http://vizwiz.blogspot.com/2012/09/tableau-tip-adding-moving-reference.html, http://onlinehelp.tableausoftware.com/v6.1/public/online/en-us/i1000860.html, http://kb.tableausoftware.com/articles/knowledgebase/independent-field-reference-line,  http://community.tableausoftware.com/thread/127009?start=0&tstart=0, http://community.tableausoftware.com/thread/121369

''custom reference line - based on a measure field''
http://community.tableausoftware.com/message/275150 <-- drag calculated field on the marks area

''dynamic reference line''
http://community.tableausoftware.com/thread/124998, http://community.tableausoftware.com/thread/105433, http://www.interworks.com/blogs/iwbiteam/2012/04/09/adding-different-reference-lines-tableau

''percentile on reference line''
https://community.tableau.com/thread/108974

''dynamic parameter''
http://drawingwithnumbers.artisart.org/creating-a-dynamic-parameter-with-a-tableau-data-blend/

''thresholds'' Multiple thresholds for different cells on one worksheet http://community.tableausoftware.com/thread/122285

''email and alerting'' http://www.metricinsights.com/data-driven-alerting-and-email-notifications-for-tableau/, http://community.tableausoftware.com/thread/124411

''templates'' http://kb.tableausoftware.com/articles/knowledgebase/replacing-data-source, http://www.tableausoftware.com/public/templates/schools, http://wannabedatarockstar.blogspot.com/2013/06/create-default-tableau-template.html, http://wannabedatarockstar.blogspot.co.uk/2013/04/colour-me-right.html

''click to filter'' http://kb.tableausoftware.com/articles/knowledgebase/combining-sheet-links-and-dashboards

''tableau worksheet actions'' http://community.tableausoftware.com/thread/138785

''date functions and calculations'' http://onlinehelp.tableausoftware.com/current/pro/online/en-us/functions_functions_date.html, http://pharma-bi.com/2011/04/fiscal-period-calculations-in-tableau-2/

''date dimension'' http://blog.inspari.dk/2013/08/27/making-the-date-dimension-ready-for-tableau/

''Date Range filter and Default date filter''
google search https://www.google.com/search?q=tableau+date+range+filter&oq=tableau+date+range+&aqs=chrome.2.69i57j0l5.9028j0j7&sourceid=chrome&es_sm=119&ie=UTF-8
Creating a Filter for Start and End Dates Using Parameters http://kb.tableausoftware.com/articles/howto/creating-a-filter-for-start-and-end-dates-parameters
Tableau Tip: Showing all dates on a date filter after a Server refresh http://vizwiz.blogspot.com/2014/01/tableau-tip-showing-all-dates-on-date.html
Tableau Tip: Default a date filter to the last N days http://vizwiz.blogspot.com/2013/09/tableau-tip-default-date-filter-to-last.html

''hide NULL values'' http://reports4u.co.uk/tableau-hide-null-values/, http://reports4u.co.uk/tableau-hide-values-quick-filter/, http://kb.tableausoftware.com/articles/knowledgebase/replacing-null-literalsclass, http://kb.tableausoftware.com/articles/knowledgebase/null-values <-- good stuff

''logical functions - if then else, case when then'' http://onlinehelp.tableausoftware.com/v7.0/pro/online/en-us/functions_functions_logical.html, http://kb.tableausoftware.com/articles/knowledgebase/understanding-logical-calculations, http://onlinehelp.tableausoftware.com/v6.1/public/online/en-us/id2611b7e2-acb6-467e-9f69-402bba5f9617.html

''tableau working with sets''
https://www.tableausoftware.com/public/blog/2013/03/powerful-new-tools
http://onlinehelp.tableausoftware.com/v6.1/public/online/en-us/i1201140.html
http://community.tableausoftware.com/thread/136845 <-- good example on filters
https://www.tableausoftware.com/learn/tutorials/on-demand/sets?signin=a8f73d84a4b046aec26bc955854a381b <-- GOOD STUFF video tutorial 
IOPS SIORS - Combining several measures in one dimension - http://tableau-ext.hosted.jivesoftware.com/thread/137680

''tableau groups''
http://vizwiz.blogspot.com/2013/05/tableau-tip-creating-primary-group-from.html
http://www.tableausoftware.com/learn/tutorials/on-demand/grouping?signin=f98f9fd64dcac0e7f2dc574bca03b68c  <-- VIDEO tutorial 

''Random Number generation in tableau'' 
http://community.tableausoftware.com/docs/DOC-1474

''Calendar view viz''
http://thevizioneer.blogspot.com/2014/04/day-1-how-to-make-calendar-in-tableau.html
http://vizwiz.blogspot.com/2012/05/creating-interactive-monthly-calendar.html
http://vizwiz.blogspot.com/2012/05/how-common-is-your-birthday-find-out.html

''Custom SQL''
http://kb.tableausoftware.com/articles/knowledgebase/customizing-odbc-connections
http://tableaulove.tumblr.com/post/20781994395/tableau-performance-multiple-tables-or-custom-sql
http://bensullins.com/leveraging-your-tableau-server-to-create-large-data-extracts/
http://tableaulove.tumblr.com/post/18945358848/how-to-publish-an-unpopulated-tableau-extract
http://onlinehelp.tableausoftware.com/v8.1/pro/online/en-us/customsql.html
http://onlinehelp.tableausoftware.com/v7.0/pro/online/en-us/customsql.html
Using Raw SQL Functions http://kb.tableausoftware.com/articles/knowledgebase/raw-sql
http://community.tableausoftware.com/thread/131017

''Geolocation'' 
http://tableaulove.tumblr.com/post/82299898419/ip-based-geo-location-in-tableau-new-now-with-more
http://dataremixed.com/2014/08/from-gps-to-viz-hiking-washingtons-trails/
https://public.tableausoftware.com/profile/timothyvermeiren#!/vizhome/TimothyAllRuns/Dashboard

''tableau - import custom geocoding data - world map''
https://community.tableau.com/thread/200454
https://www.youtube.com/watch?v=nVrCH-PWM10
https://www.youtube.com/watch?v=IDyMMPiNVGw
https://onlinehelp.tableau.com/current/pro/online/mac/en-us/custom_geocoding.html
https://onlinehelp.tableau.com/current/pro/online/mac/en-us/maps_customgeocode_importing.html

''tableau perf analyzer''
http://www.interworks.com/services/business-intelligence/tableau-performance-analyzer

''tableau and python''
http://bensullins.com/bit-ly-data-to-csv-for-import-to-tableau/

''Visualize and Understand Tableau Functions''
https://public.tableausoftware.com/profile/tyler3281#!/vizhome/EVERYONEWILLUSEME/MainScreen

''tableau workbook on github''
http://blog.pluralsight.com/how-to-store-your-tableau-server-workbooks-on-github

''tableau radar chart / spider graph''
https://wikis.utexas.edu/display/tableau/How+to+create+a+Radar+Chart

''maps animation''
http://www.tableausoftware.com/public/blog/2014/08/capturing-animation-tableau-maps-2574?elq=d12cbf266b1342e68ea20105369371cf


''if in list'' http://community.tableausoftware.com/ideas/1870, http://community.tableausoftware.com/ideas/1500
<<<
{{{
IF 
trim([ENV])='x07d' OR 
trim([ENV])='x07p'  
THEN 'AML' 
ELSE 'OTHER' END


IF 
TRIM([ENV]) = 'x07d' THEN 'AML' ELSEIF 
TRIM([ENV]) = 'x07p' THEN 'AML' 
ELSE 'OTHER' END


IF [Processor AMD] THEN 'AMD'
ELSEIF [Processor Intel] THEN 'INTEL'
ELSEIF [Processor IBM Power] THEN 'IBM Power'
ELSEIF [Processor SPARC] THEN 'SPARC'
ELSE 'Other' END


IF contains('x11p,x08p,x28p',trim([ENV]))=true THEN 'PROD' 
ELSEIF contains('x29u,x10u,x01u',trim([ENV]))=true THEN 'UAT' 
ELSEIF contains('x06d,x07d,x12d',trim([ENV]))=true THEN 'DEV' 
ELSEIF contains('x06t,x14t,x19t',trim([ENV]))=true THEN 'TEST' 
ELSE 'OTHER' END

[Snap Id] = (150106) or
[Snap Id] = (150107) or
[Snap Id] = (150440) or
[Snap Id] = (150441)
}}}
<<<

''calculated field filter'' http://stackoverflow.com/questions/30753330/tableau-using-calculated-fields-for-filtering-dimensions, http://breaking-bi.blogspot.com/2013/03/creating-table-calculations-on-values.html
<<<
{{{
DRW
SUM(IF contains('CD_IO_RQ_R_LG_SEC-CD,CD_IO_RQ_R_SM_SEC-CD,CD_IO_RQ_W_LG_SEC-CD,CD_IO_RQ_W_SM_SEC-CD',trim([Metric]))=true THEN 1 END) > 0

CD_IO_RQ_R_LG_SEC-CD,0.21
CD_IO_RQ_R_SM_SEC-CD,0.62
CD_IO_RQ_W_LG_SEC-CD,2.14
CD_IO_RQ_W_SM_SEC-CD,5.69
}}}
<<<

''What is the difference between Tableau Server and Tableau Server Worker?'' http://community.tableausoftware.com/thread/109121

''tableau vs spotfire vs qlikview'' http://community.tableausoftware.com/thread/116055, https://apandre.wordpress.com/2013/09/13/tableau-8-1-vs-qlikview-11-2-vs-spotfire-5-5/ , http://butleranalytics.com/spotfire-tableau-and-qlikview-in-a-nutshell/ , https://www.trustradius.com/compare-products/tableau-desktop-vs-tibco-spotfire

''twbx for sending workbooks'' http://kb.tableausoftware.com/articles/knowledgebase/sending-packaged-workbook

''YOY moving average'' http://daveandrade.com/2015/01/25/tableau-table-calcs-how-to-calculate-a-year-over-year-4-week-moving-average/

''json'' http://community.tableau.com/ideas/1276

''tableau reverse engineering'' http://www.theinformationlab.co.uk/2015/01/22/learning-tableau-reverse-engineering/

''filter partial highlight'' https://community.tableau.com/thread/143761 , http://breaking-bi.blogspot.com/2014/03/partial-highlighting-on-charts-in.html

''Window functions'' https://community.tableau.com/thread/144402, http://kb.tableau.com/articles/knowledgebase/functional-differences-olap-relational, http://www.lunametrics.com/blog/2015/09/17/yoy-bar-charts-in-tableau/, http://breaking-bi.blogspot.com/2013/03/working-with-window-calculations-and.html, https://www.interworks.com/blog/tmccullough/2014/09/29/5-tableau-table-calculation-functions-you-need-know, http://breaking-bi.blogspot.com/2013/04/using-lookup-function-in-tableau.html
{{{
LOOKUP(sum([Net]),-1)
}}}

''Count only the numbers that are positive, and get the percentage'' 
{{{
(COUNT(IF [Diff] >= 0 THEN [Diff] END) / COUNT([Diff]))*100
}}}

''Add category for stock Position Type'' 
{{{
IF contains('Buy,Sell',trim([Type 1]))=true THEN 'Long' 
ELSEIF contains('Buy to Cover,Sell Short',trim([Type 1]))=true THEN 'Short' 
ELSE 'OTHER' END
}}}

''updated processor group filter''
{{{
IF contains(lower(trim([Processor])),'amd')=true THEN 'AMD' 
ELSEIF contains(lower(trim([Processor])),'intel')=true THEN 'INTEL' 
ELSEIF contains(lower(trim([Processor])),'power')=true THEN 'IBM' 
ELSEIF contains(lower(trim([Processor])),'sparc')=true THEN 'SPARC' 
ELSE 'OTHER' END
}}}

''storage cell dimension for x2 and x3 cells on the same diskgroup - useful for destage IOs''
{{{
IF 
trim([Cellname])='192.168.10.9' OR 
trim([Cellname])='192.168.10.10' OR
trim([Cellname])='192.168.10.11' OR
trim([Cellname])='192.168.10.12' OR
trim([Cellname])='192.168.10.13' OR
trim([Cellname])='192.168.10.14' OR
trim([Cellname])='192.168.10.15' OR
trim([Cellname])='192.168.10.16' OR
trim([Cellname])='192.168.10.17' OR
trim([Cellname])='192.168.10.18' OR
trim([Cellname])='192.168.10.19' OR
trim([Cellname])='192.168.10.20' OR
trim([Cellname])='192.168.10.21' OR
trim([Cellname])='192.168.10.22' 
THEN 'x2' 
elseif 
trim([Cellname])='192.168.10.38' OR 
trim([Cellname])='192.168.10.39' OR
trim([Cellname])='192.168.10.40' OR
trim([Cellname])='192.168.10.41' OR
trim([Cellname])='192.168.10.42' OR
trim([Cellname])='192.168.10.43' OR
trim([Cellname])='192.168.10.44' OR
trim([Cellname])='192.168.10.45' OR
trim([Cellname])='192.168.10.46' OR
trim([Cellname])='192.168.10.47' OR
trim([Cellname])='192.168.10.48' OR
trim([Cellname])='192.168.10.49' OR
trim([Cellname])='192.168.10.50' OR
trim([Cellname])='192.168.10.51' 
THEN 'x3'
else 
'other'
end
}}}

! highlight SQL 

{{{
IF contains(lower(trim([Sql Id])),'069k4ppu1n1nc')=true THEN [Sql Id]
ELSE 'OTHER' END
}}}

{{{
IF contains(lower(trim([Sql Id])),'069k4ppu1n1nc')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'0vzyv2wsr2apz')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'0xsz99mn2nuvc')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'0zwcr39tvssxj')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'1d9qrkfvh78bt')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'1zbk54du40dnu')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'2xjwy1jvu31xu')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'2ywv61bm22pw7')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'3fn33utt2ptns')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'3knptw3bxf1c9')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'3qvc497pz6hvp')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'3tpznswf2f7ak')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'4v775zu1p3b3f')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'51u31qah6z8d9')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'59wrat188thgf')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'5f2t4rq7xkfav')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'61r81qmqpt1bs')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'6cwh5bz0d0jkv')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'6fyy4v8c85cmk')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'6k6g6725pwjpw')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'7x0psn00ac54g')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'7xmjvrazhyntv')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'82psz0nhm68wf')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'885mt394synz4')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'af4vzj7jyv5mz')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'azkbmbyxahmh2')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'cws1kfprz7u8f')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'d6h43fh3d9p7g')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'dwgtzzmc509zf')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'f3frkf6tvkjwn')=true THEN [Sql Id]
ELSEIF contains(lower(trim([Sql Id])),'gd55mjru6w8x')=true THEN [Sql Id]
ELSE 'OTHER' END
}}}



!! complex calculated field logic 
IF statement with multiple value condition https://community.tableau.com/thread/119254
Specific rows to column using calculated fields https://community.tableau.com/thread/266616
Tableau Tip: Return the value of a single dimension member https://www.thedataschool.co.uk/naledi-hollbruegge/tableau-tip-tuesday-getting-just-one-variable-field/
https://www.google.com/search?q=tableau+calculated+field+nested+if&oq=tableau+calculated+field+nested+&aqs=chrome.0.0j69i57j0.6974j0j1&sourceid=chrome&ie=UTF-8
Nesting IF/CASE question https://community.tableau.com/thread/254997 , https://community.tableau.com/thread/254997?start=15&tstart=0

{{{

IF [Metric Name] = 'RSRC_MGR_CPU_WAIT_TIME_PCT' THEN 
    IF [Value] > 100 THEN [Value]*.2 END
END


IF [Metric Name] = 'RSRC_MGR_CPU_WAIT_TIME_PCT' AND [Value] > 100 THEN 
    IF [Value] > 100 THEN 120
    ELSE [Value] END
ELSEIF [Metric Name] = 'Host CPU Utilization (%)' THEN [Value]
END


IF [Metric Name] = 'RSRC_MGR_CPU_WAIT_TIME_PCT' AND [Value] > 100 THEN [Value]*.2 
END

}}}



! ASH elapsed time , DATEDIFF
{{{
DATEDIFF('second',MIN([TMS]),max([TM]))
}}}
https://www.google.com/search?ei=Yr8LXdCgGu2N_Qau-6_QCQ&q=tableau+lod+time+calculation+max+first+column+minus+value+second+column&oq=tableau+lod+time+calculation+max+first+column+minus+value+second+column&gs_l=psy-ab.3...40963.51043..51405...0.0..0.311.4393.27j1j7j1......0....1..gws-wiz.......0i71j35i304i39j33i10j33i160j33i299.cvq9wrnC6uY
LOD expression for 'difference from overall average' https://community.tableau.com/thread/227662
https://www.theinformationlab.co.uk/2017/01/27/calculate-datediff-one-column-tableau/
https://onlinehelp.tableau.com/current/pro/desktop/en-us/functions_functions_date.htm


!! DATEADD , add hours 
{{{
DATEADD('hour', 3, #2004-04-15#)  
}}}
https://community.tableau.com/thread/157714



! LOD expression 
LOD expression to calculate average of a sum of values per user https://community.tableau.com/thread/224293



! OBIEE workload separation 
{{{

IF contains(lower(trim([Module])),'BIP')=true THEN 'BIP'
ELSEIF contains(lower(trim([Module])),'ODI')=true THEN 'ODI'
ELSEIF contains(lower(trim([Module])),'nqs')=true THEN 'nqsserver'
ELSE 'OTHER' END

}}}






! get underlying SQL 
https://community.tableau.com/thread/170370
http://kb.tableau.com/articles/howto/viewing-underlying-sql-queries-desktop
{{{
C:\Users\karl\Documents\My Tableau Repository\Logs
}}}

! automating tableau reports 
Tableau-generated PDF imports to Inkscape with text missing https://community.tableau.com/thread/118822
Automate pdf generation using Tableau Desktop https://community.tableau.com/thread/137724
Tableau Scripting Engine https://community.tableau.com/ideas/1694
tableau community https://community.tableau.com/message/199249#199249
http://powertoolsfortableau.com/tools/portals-for-tableau
https://dwuconsulting.com/tools/software-videos/automating-tableau-pdf <- GOOD STUFF
https://www.autoitscript.com/site/autoit/
http://stackoverflow.com/questions/17212676/vba-automation-of-tableau-workbooks-using-cmd
http://www.graphgiraffe.net/blog/tableau-tutorial-automated-pdf-report-creation-with-tableau-desktop


! remove last 2 characters
https://community.tableau.com/docs/DOC-1391
https://community.tableau.com/message/325328
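
A common formula for this (a sketch; ''[Field]'' is a hypothetical string field name, substitute your own):
{{{
LEFT([Field], LEN([Field]) - 2)
}}}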

! measure names, measure values 
Combining several measures in one dimension https://community.tableau.com/thread/137680
Tableau Tips and Tricks: Measure Names and Measure Values https://www.youtube.com/watch?v=m0DGW_WYKtA
http://kb.tableau.com/articles/knowledgebase/measure-names-and-measure-values-explained



! initial SQL filter
When the user opens the workbook, parameter1 and parameter2 prompt for a date range, and that date range is in effect for all the sheets in the workbook.
https://onlinehelp.tableau.com/current/pro/desktop/en-us/connect_basic_initialsql.html
https://www.tableau.com/about/blog/2016/2/introducing-initial-sql-parameters-tableau-93-50213
https://tableauandbehold.com/2016/03/09/using-initial-sql-for/



! add total to stacked bar chart 
https://www.credera.com/blog/business-intelligence/tableau-workaround-part-3-add-total-labels-to-stacked-bar-chart/


! tableau roles 

tableau roles 
https://onlinehelp.tableau.com/current/server/en-us/users_site_roles.htm
https://www.google.com/search?q=tableau+publisher+vs+interactor&oq=tableau+publisher+vs+inter&aqs=chrome.0.0j69i57.5424j0j1&sourceid=chrome&ie=UTF-8



! How to verify the version of Tableau workbook
https://community.tableau.com/thread/257592
https://community.powertoolsfortableau.com/t/how-to-find-the-tableau-version-of-a-workbook/176
<<<
Add “.zip” to the end of the TWBX file. For example, “my workbook.twbx” would become “my workbook.twbx.zip”
<<<
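Since a .twbx is just a zip, the check can also be scripted. A sketch, assuming the `.twb` XML root is a `<workbook>` element carrying a single-quoted `version` attribute (the synthetic workbook built here is only for illustration; real files carry much more metadata):

```python
import io
import re
import zipfile

def twbx_version(twbx_bytes):
    # a .twbx is a zip archive; pull out the first .twb and read
    # the version attribute off its <workbook> root element
    with zipfile.ZipFile(io.BytesIO(twbx_bytes)) as z:
        twb_name = next(n for n in z.namelist() if n.endswith('.twb'))
        twb = z.read(twb_name).decode('utf-8')
    m = re.search(r"<workbook[^>]*\bversion='([^']+)'", twb)
    return m.group(1) if m else None

# synthetic example standing in for "my workbook.twbx"
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as z:
    z.writestr('my workbook.twb', "<?xml version='1.0'?><workbook version='18.1'/>")
print(twbx_version(buf.getvalue()))  # -> 18.1
```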


! Embedded database credentials in Tableau

https://www.google.com/search?q=tableau+data+source+embed+credentials&oq=tableau+data+source+embed+credentials&aqs=chrome..69i57.5805j0j1&sourceid=chrome&ie=UTF-8
https://onlinehelp.tableau.com/current/pro/desktop/en-us/publishing_sharing_authentication.htm
https://onlinehelp.tableau.com/current/server/en-us/impers_SQL.htm
http://help.metricinsights.com/m/61790/l/278359-embed-database-credentials-in-tableau
https://howto.mt.gov/Portals/19/Documents/Publishing%20in%20Tableau.pdf
https://howto.mt.gov/Portals/19/Documents/Tableau%20Server%20Authentication.pdf
https://howto.mt.gov/tableau#610017167-tableau-desktop




! tableau dynamic reference line 
https://kb.tableau.com/articles/howto/adding-separate-dynamic-reference-lines-for-each-dimension-member
Add a reference line based on a calculated field https://community.tableau.com/thread/216051


! Add Separate Dynamic Reference Lines For Each Dimension Member in Tableau
How to Add Separate Dynamic Reference Lines For Each Dimension Member in Tableau https://www.youtube.com/watch?v=_3ASdjKFsAM



! dynamic axis range 
Dynamic Axis Range - Fixing One End (or both, or have it dynamic) https://community.tableau.com/docs/DOC-6215
http://drawingwithnumbers.artisart.org/creating-a-dynamic-range-parameter-in-tableau/
https://www.reddit.com/r/tableau/comments/77dx2v/hello_heroes_of_tableau_is_it_possible_to/
Set Axis Range Based On Calculated Field https://community.tableau.com/thread/243009
https://community.tableau.com/docs/DOC-6215  <- good stuff
https://public.tableau.com/profile/simon.r5129#!/vizhome/DynamicAxisRange-onlyPlotwithin1SD/Usethistorestrictoutliers


! window_stdev
https://playfairdata.com/how-to-do-anomaly-detection-in-tableau/
https://www.linkedin.com/pulse/standard-deviation-tableau-sumeet-bedekar/


! visualizing survey data 
https://www.datarevelations.com/visualizing-survey-data




! export tableau to powerpoint 
https://onlinehelp.tableau.com/current/pro/desktop/en-us/save_export_image.htm
https://www.clearlyandsimply.com/clearly_and_simply/2012/05/embed-tableau-visualizations-in-powerpoint.html
https://www.google.com/search?q=tableau+on+powerpoint&oq=tableau+on+powe&aqs=chrome.0.0j69i57j0l4.3757j0j1&sourceid=chrome&ie=UTF-8


! T test 
https://dabblingwithdata.wordpress.com/2015/09/18/kruskal-wallis-significance-testing-with-tableau-and-r/
http://breaking-bi.blogspot.com/2013/03/conducting-2-sample-z-test-in-tableau.html
T-test using Tableau for proportion & means https://community.tableau.com/thread/258064
Calculating T-Test (or any other statistical tests) parameters in Tableau https://community.tableau.com/thread/251371
T test https://www.google.com/search?q=t+test+columns+in+tableau&oq=t+test+columns+in+tableau&aqs=chrome..69i57j69i64l3.7841j0j1&sourceid=chrome&ie=UTF-8
t-test of two independent means https://community.tableau.com/docs/DOC-1428
https://www.google.com/search?q=t+test+in+tableau&oq=t+test+in+tableau&aqs=chrome..69i57j0l3j69i64l2.2239j0j1&sourceid=chrome&ie=UTF-8



! pivot data source 
Tableau in Two Minutes - How to Pivot Data in the Data Source https://www.youtube.com/watch?v=fvRVJ7d7NFI
Combine 3 Date Fields in the same Axis https://community.tableau.com/thread/206580
tableau combine 3 dates in one axis https://www.google.com/search?q=tableau+combine+3+dates+in+one+axis&oq=tableau+combine+3+dates+in+one+axis&aqs=chrome..69i57.22106j0j4&sourceid=chrome&ie=UTF-8


! people 
http://www.penguinanalytics.co , http://www.penguinanalytics.co/Datasets/ , https://public.tableau.com/profile/john.alexander.cook#!/



! Pareto Chart Reference line 20pct
Pareto Chart Reference line 20pct https://community.tableau.com/thread/228448
add y axis reference line pareto chart tableau
https://www.google.com/search?q=add+y+axis+reference+line+pareto+chart+tableau&oq=add+y+axis+reference+line+pareto+chart+tableau&aqs=chrome..69i57.6862j0j1&sourceid=chrome&ie=UTF-8
create pareto chart https://www.youtube.com/watch?v=pptICtCPSVg



! math based bin 
Count Number of Occurances of a Value https://community.tableau.com/message/303878#303878   <- good stuff 
http://vizdiff.blogspot.com/2015/07/create-bins-via-math-formula.html
To create custom bins or buckets for Sales https://community.tableau.com/thread/229116
tableau manual create bucket of 1 https://www.google.com/search?q=tableau+manual+create+bucket+of+1&oq=tableau+manual+create+bucket+of+1&aqs=chrome..69i57.7065j1j1&sourceid=chrome&ie=UTF-8
Calculated Field - Number Generator https://community.tableau.com/thread/205361
tableau create a sequence of numbers calculater field https://www.google.com/search?q=tableau+create+a+sequence+of+numbers+calculater+field&oq=tableau+create+a+sequence+of+numbers+calculater+field&aqs=chrome..69i57.11948j0j1&sourceid=chrome&ie=UTF-8
tableau calculated field create sequence https://www.google.com/search?q=tableau+calculated+field+create+sequence&oq=tableau+calculated+field+create+sequence&aqs=chrome..69i57.7054j0j1&sourceid=chrome&ie=UTF-8
Grouping bins greater than 'x' https://community.tableau.com/thread/220806
creating bins https://www.youtube.com/watch?v=ZFdqXVNST24
creating bins2 https://www.youtube.com/watch?v=VwDPBWuHu3Q
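The math-formula approach from the links above boils down to `FLOOR([Sales] / size) * size`, which maps each value to the lower edge of its bucket. A small Python sketch (sample values are arbitrary):

```python
import math

def to_bin(value, size):
    # FLOOR(value / size) * size -> lower edge of the bin the value falls in
    return math.floor(value / size) * size

print(to_bin(437, 100))   # -> 400
print(to_bin(-50, 100))   # -> -100 (floor, not truncation)
```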



! constant reference line on continuous data tableau
constant reference line on continuous data tableau https://www.google.com/search?ei=6rSmXPysDaWt_Qb5nrlI&q=constant+reference+line+on+continuous+data+tableau&oq=constant+reference+line+on+continuous+data+tableau&gs_l=psy-ab.3...14576.15058..15169...0.0..0.83.382.5......0....1..gws-wiz.......0i71j35i304i39.3OhD28SQVL0 



! Add reference line in an axis made by Dimension
Add reference line in an axis made by Dimension https://community.tableau.com/thread/223274
Placing the Reference Line https://community.tableau.com/thread/260253
Highlight bin in Histogram https://community.tableau.com/thread/287638
tableau highlight bin https://www.google.com/search?q=tableau+highligh+bin&oq=tableau+highligh+bin&aqs=chrome..69i57.3251j0j1&sourceid=chrome&ie=UTF-8


! slope graph
https://www.tableau.com/about/blog/2016/9/how-add-vertical-lines-slope-graphs-multiple-measures-59632





! coding CASE statement easily 
http://vizdiff.blogspot.com/2015/07/coding-case-statement-made-easy.html



! Floor and Ceiling Functions
Floor and Ceiling Functions https://community.tableau.com/docs/DOC-1354
https://www.google.com/search?q=tableau+floor+function&oq=tableau+floor+f&aqs=chrome.0.0j69i57.4668j0j1&sourceid=chrome&ie=UTF-8



! reference band based on calculation 
reference band based on calculation https://community.tableau.com/thread/258490
tableau reference band on dimension in title https://www.google.com/search?ei=1rWmXInnLe61ggeNzI_wBg&q=tableau+reference+band+on+dimension+in+title&oq=tableau+reference+band+on+dimension&gs_l=psy-ab.1.0.33i22i29i30.16534.18647..20459...0.0..0.154.931.6j3......0....1..gws-wiz.......0i71j0i22i30j0i22i10i30.BFQ4RmLGzpA


! reference band, highlight weekends (check screenshot)
{{{
IF DATEPART('weekday',[Date])=6 or DATEPART('weekday',[Date])=7 THEN 0 END
}}}
https://www.evolytics.com/blog/tableau-hack-how-to-highlight-a-dimension/
adding reference line to discrete variable https://community.tableau.com/thread/193986
tableau Reference Line Discrete Headers https://www.google.com/search?q=tableau+Reference+Line+Discrete+Headers&oq=tableau+Reference+Line+Discrete+Headers&aqs=chrome..69i57j69i60.7445j0j1&sourceid=chrome&ie=UTF-8
https://kb.tableau.com/articles/issue/add-reference-line-to-discrete-field
Shading in weekends https://community.tableau.com/thread/123456
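The weekend flag above can be sketched in Python. Caveat: Tableau's `DATEPART('weekday')` numbering depends on the data source's start-of-week setting; the calc treats 6 and 7 as the weekend, which matches a Monday-start week, so `isoweekday()` (Monday=1 .. Sunday=7) is the closest mirror:

```python
from datetime import date

def weekend_flag(d):
    # mirrors IF DATEPART('weekday',[Date])=6 OR ...=7 THEN 0 END
    # (returns None, like Tableau's implicit NULL, on weekdays)
    return 0 if d.isoweekday() in (6, 7) else None

print(weekend_flag(date(2019, 4, 6)))  # a Saturday -> 0
print(weekend_flag(date(2019, 4, 8)))  # a Monday   -> None
```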



! Dashboard actions - Highlight bin from different source (see screenshot)
Highlight bin from different source https://community.tableau.com/thread/157710


! conditional format individual rows 
Tableau: Advanced Conditional Formatting - format text columns differently https://www.youtube.com/watch?v=w2nlT_TBUzU , https://www.youtube.com/watch?v=7H7Dy0G0y04
https://www.evolytics.com/blog/tableau-hack-conditionally-format-individual-rows-columns/
http://www.vizwiz.com/2016/06/tableau-tip-tuesday-how-to.html
is there a way to highlight or bold certain rows ( the entire row) in tableau the way you can in excel https://community.tableau.com/thread/122382
Color Coding by column instead of the entire row https://community.tableau.com/thread/115822


! tableau can't compare numerical bin and integer values
https://www.google.com/search?ei=8p-mXJuVFNCs5wKU0KDgAg&q=tableau+can%27t+compare+numerical+bin+and+integer+values&oq=tableau+can%27t+compare+integer+and+numeric+values&gs_l=psy-ab.1.0.0i8i7i30.12190.18434..20595...0.0..0.110.1319.13j2......0....1..gws-wiz.......0i71j33i10.xCVj6fYB9lU



! A secondary axis chart: How to add a secondary axis in Tableau
A secondary axis chart: How to add a secondary axis in Tableau https://www.youtube.com/watch?v=8yNPCgL7OtI




! tableau server 
https://www.udemy.com/administering-tableau-server-10-with-real-time-scenarios/ 


! tableau performance tuning 
Enhancing Tableau Data Queries https://www.youtube.com/watch?v=STfTQ55QE9s&index=19&list=LLmp7QJNLQvBQcvdltLTkiYQ&t=0s 


! perf tool - tableau log viewer
https://github.com/tableau/tableau-log-viewer https://github.com/tableau/tableau-log-viewer/releases



! tableau dashboard actions
Tableau Actions Give Your Dashboards Superpowers https://www.youtube.com/watch?v=r8SNKmzsW6c 


! tabpy/R 
Data science applications with TabPy/R https://www.youtube.com/watch?v=nRtOMTnBz_Y&feature=youtu.be




! time dimension example 
(with example workbook) US Holiday Date Flags 2010-2020, to share https://community.tableau.com/thread/246992
http://radacad.com/do-you-need-a-date-dimension
http://radacad.com/custom-functions-made-easy-in-power-bi-desktop
oracle generate date dimension https://sonra.io/2009/02/24/lets-create-a-date-dimension-with-pure-oracle-sql/


! prophet forecasting 
(with example workbook) using prophet to forecast https://community.tableau.com/thread/285800
example usage https://community.tableau.com/servlet/JiveServlet/download/855640-292880/Data%20Science%20Applications.twbx


! tableau database writeback 
Writeback to reporting database in Tableau 8 - hack or feature https://community.tableau.com/thread/122102
https://www.tableau.com/ja-jp/about/blog/2016/10/tableau-getdata-api-60539
Tableau writeback https://www.reddit.com/r/tableau/comments/6quhg4/tableau_writeback/
Updating Data in Your Database with Tableau https://www.youtube.com/watch?v=UWI_ub1Xuwg
K4 Analytics: How to extend Tableau with write-back and leverage your Excel models https://www.youtube.com/watch?v=5PlzdA19TUw
https://www.clearlyandsimply.com/clearly_and_simply/2016/06/writing-and-reading-tableau-views-to-and-from-databases-and-text-files-part-2.html
Tableau Write Back to Database https://community.tableau.com/thread/284428
Can we write-back to the database https://community.tableau.com/thread/279806
https://www.computerweekly.com/blog/CW-Developer-Network/Tableau-widens-developer-play-what-is-a-writeback-extension
Tableau's Extension API - Write Back https://www.youtube.com/watch?v=Jiazp_zQ0jY	
https://tableaufans.com/extension-api/tableau-extension-api-write-back-updated-source-code-for-tableau-2018-2/
Another method to update data from inside tableau http://tableaulove.tumblr.com/post/27627548817/another-method-to-update-data-from-inside-tableau
https://biztory.com/2017/10/09/interactive-commenting-solution-tableau-server/
adding information to charts for data which is not in the data source file https://community.tableau.com/thread/139439


! Commenting On Data Points In Dashboard
https://interworks.com/blog/jlyons/2018/10/01/portals-for-tableau-101-inline-commenting-on-dashboards/
https://www.theinformationlab.co.uk/2016/04/13/dashboards-reports-dynamic-comments-tableau/
Commenting On Data Points In Dashboard https://community.tableau.com/docs/DOC-8867
https://community.tableau.com/thread/157149?start=15&tstart=0


! tableau display database data as annotation 
Populate Annotation with Calculated Field https://community.tableau.com/thread/156259
Annotations / Comments / data writeback https://community.tableau.com/ideas/1261
https://stackoverflow.com/questions/40533735/generating-dynamic-displayed-annotations-in-tableau-dashboard



! order of operations 
https://www.theinformationlab.co.uk/2013/01/28/5-things-i-wish-i-knew-about-tableau-when-i-started/


! adding timestamp on charts 
https://www.thedataschool.co.uk/robbin-vernooij/time-stamping-your-data-in-tableau-and-tableau-prep/


! tableau threshold data alerts 
data driven alerts https://www.youtube.com/watch?v=vp3u4D7ao8w
https://www.google.com/search?q=tableau+threshold+alerts&oq=tableau+threshold+alerts&aqs=chrome..69i57.5575j0j1&sourceid=chrome&ie=UTF-8

! tableau pdf automation
<<<
Automation of creating PDF workbooks and delivery via email https://community.tableau.com/thread/120031
Tabcmd and Batch Scripts to automate PDF generation https://www.youtube.com/watch?v=ajB7CDcoyDU
https://www.thedataschool.co.uk/philip-mannering/idiots-guide-controlling-tableau-command-line-using-tabcmd/
Print to PDF using pages shelf in Tableau Desktop https://community.tableau.com/thread/238654
https://www.quora.com/How-do-I-automate-reports-using-the-Tableau-software
https://onlinehelp.tableau.com/current/pro/desktop/en-us/printing.htm

Can TabCMD be used to automatically schedule reports to a file share https://community.tableau.com/thread/176497
how to use tabcmd in tableau desktop,and what are the commands for downloading pdf file and txbx file https://community.tableau.com/thread/154051

We can achieve batch export to PDF using tabcmd, which can also take input parameters ( https://www.thedataschool.co.uk/philip-mannering/idiots-guide-controlling-tableau-command-line-using-tabcmd/ ). A batch file can be scheduled on your laptop or on the server itself; the PDF files related to EXP1 will then be spooled to a directory and can be merged (cpu, io, mem, etc.) into one file using another tool, all handled in the batch script.
example workflow of automating pdf https://www.youtube.com/watch?v=ajB7CDcoyDU
<<<


! tableau open source 
https://tableau.github.io/


! tableau javascript api (embedded analytics - js api, REST API, SSO, and mobile)
Tableau JavaScript API | The most delicious ingredient for your custom applications https://www.youtube.com/watch?v=Oda_T5PMwt0
official doc https://onlinehelp.tableau.com/current/api/js_api/en-us/JavaScriptAPI/js_api.htm
Tableau JavaScript API: Getting Started https://www.youtube.com/watch?v=pCstUYalMEU


! tableau hyper api (hyper files as data frame - enables CRUD on files)
Hyper API: Automating Data Connectivity to Solve Real Business Problems https://www.youtube.com/watch?v=-FrMCmknI0Y
https://help.tableau.com/current/api/hyper_api/en-us/docs/hyper_api_whatsnew.html

* https://github.com/tableau/hyper-api-samples
* https://github.com/Bartman0/tableau-incremental-refresh/blob/main/tableau-incremental-refresh.py
* https://github.com/manish-aspiring-DS/Tableau-Hyper-Files


! tableau other developer tools 
https://www.tableau.com/developer/tools
https://www.tableau.com/support/help
<<<

    Tableau Connector SDK
    Tableau Embedded Analytics Playbook
    Tableau Extensions API
    Tableau Hyper API
    Tableau JavaScript API
    Tableau Metadata API
    Tableau Python Server (TabPY)
    Tableau REST API
    Tableau Webhooks
    Web Data Connector SDK

<<<



! alexa tableau integration 
https://github.com/jinuik?tab=repositories
http://bihappyblog.com/2016/06/11/voice-controlled-tableau-dashboard/
https://www.talater.com/annyang/
Tableau and Google Assistant / Siri https://community.tableau.com/thread/267634
Tableau Assistant - Alexa https://www.youtube.com/watch?v=V8TJBj0msIQ
https://www.tableau.com/about/blog/2017/3/hacking-alexa-and-other-tableau-api-tricks-67108
Alexa as a Tableau Assistant https://www.youtube.com/watch?v=zqGK2LYtx-U
Tableau 16 Hackathon - Voice Assisted Analytics https://www.youtube.com/watch?v=5Uul3Qy8YVE
alexa with tableau https://community.tableau.com/thread/256681
Integrating Tableau with Alexa https://community.tableau.com/thread/264965
https://twitter.com/tableau/status/967885701164527621



! tableau data source refresh schedule 
https://www.youtube.com/results?search_query=tableau+data+source
https://www.youtube.com/results?search_query=tableau+data+source+refresh
https://www.youtube.com/watch?v=FuDX1u9QSb8 Tableau - Do it Yourself Tutorial - Refresh Extracts using Command line - DIY -33-of-50 

!! tableau sync client, tableau bridge
Tableau - Do it Yourself Tutorial - Refresh Extracts using Command line - DIY -33-of-50 https://www.youtube.com/watch?v=FuDX1u9QSb8&list=PLklSCDzsQHdkjiTHqqCaU8tdA70AlSnPs&index=24&t=0s
https://onlinehelp.tableau.com/current/pro/desktop/en-gb/extracting_push.htm
https://www.google.com/search?q=tableay+sync+client&oq=tableay+sync+client&aqs=chrome..69i57j0l5.2789j0j0&sourceid=chrome&ie=UTF-8
https://www.tableau.com/about/blog/2015/5/online-sync-client-38549
https://onlinehelp.tableau.com/current/online/en-us/qs_refresh_local_data.htm
https://kb.tableau.com/articles/issue/error-this-file-was-created-by-a-newer-version-of-tableau-using-online-sync-client



! tableau outlier detection, standard deviation 
https://public.tableau.com/views/HandlingDataOutliers/OutlierHandling?%3Aembed=y&%3AshowVizHome=no&%3Adisplay_count=y&%3Adisplay_static_image=y
Outliers based on Standard Deviation https://community.tableau.com/thread/195904
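The standard-deviation approach in the thread above amounts to flagging points more than k sigma from the mean. A minimal Python sketch with made-up data:

```python
from statistics import mean, stdev

def outliers(xs, k=2):
    # flag values more than k sample standard deviations from the mean
    m, s = mean(xs), stdev(xs)
    return [x for x in xs if abs(x - m) > k * s]

data = [10, 11, 9, 10, 12, 11, 10, 50]
print(outliers(data))  # -> [50]
```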



! tableau awk split delimiter 
{{{
SPLIT([Name],':',2 )
}}}
https://community.tableau.com/thread/177520
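A Python mirror of `SPLIT([Name], ':', 2)` (sample string is made up; note Tableau's SPLIT also accepts a negative token number to count from the end, which this sketch doesn't handle):

```python
def split_token(s, delimiter, n):
    # mirrors SPLIT(string, delimiter, n): n-th token, 1-based,
    # empty string when there are fewer tokens
    parts = s.split(delimiter)
    return parts[n - 1] if 0 < n <= len(parts) else ''

print(split_token('SYS:sqlplus', ':', 2))  # -> sqlplus
```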



! custom color palette 
How to use same color palette for different visualizations https://community.tableau.com/thread/248482
https://onlinehelp.tableau.com/current/pro/desktop/en-us/formatting_worksheet.htm
https://www.tableauexpert.co.in/2015/11/how-to-create-custom-color-palette-in.html



! Labels overlapping
Labels overlapping https://community.tableau.com/thread/208870
mark labels https://community.tableau.com/thread/212775
How to avoid overlapping of labels in dual axis charts https://community.tableau.com/thread/236099


! real time graphs, kafka streaming
streaming data https://community.tableau.com/thread/125081
https://rockset.com/blog/using-tableau-for-live-dashboards-on-event-data/
https://rockset.com/blog/tableau-kafka-real-time-sql-dashboard-on-streaming-data/
Real Time streaming data from Kafka https://community.tableau.com/ideas/8913
<<<
Since Kafka is a streaming data source, it would not make sense to connect Kafka directly to Tableau. But you can use Tableau's Rockset JDBC connector to build live Tableau dashboards on streaming event data with:

1. Low data latency (new data shows up in seconds)
2. Fast SQL queries (including JOINs with other data sources)
3. Support for high QPS, interactive queries & drill downs
<<<


! tableau data source - custom SQL pivot 
https://help.tableau.com/current/pro/desktop/en-us/pivot.htm
Combine multiple dimensions / pivot multiple columns https://community.tableau.com/thread/189601
https://www.google.com/search?q=tableau+pivot+dimension+columns&oq=tableau+pivot+dimension+columns&aqs=chrome..69i57j33.11202j0j1&sourceid=chrome&ie=UTF-8
https://www.google.com/search?q=tableau+pivot+column+not+working&oq=tableau+pivot+column+not+&aqs=chrome.2.69i57j33l6.7094j1j1&sourceid=chrome&ie=UTF-8


! circle graph jitter - spacing scatter plot, overlapping circles
Overlapping marks on scatter plot https://community.tableau.com/thread/283671
https://www.google.com/search?q=tableau+jitter+on+circle+chart&oq=tableau+jitter+on+circle+chart&aqs=chrome..69i57j33.6029j0j1&sourceid=chrome&ie=UTF-8


! tableau timeline graph 
https://playfairdata.com/how-to-make-a-tableau-timeline-when-events-overlap/
https://playfairdata.com/how-to-make-a-timeline-in-tableau/
https://www.google.com/search?q=tableau+visualize+start+end+times+time+series&oq=tableau+visualize+start+end+times+time+series&aqs=chrome..69i57.8797j1j1&sourceid=chrome&ie=UTF-8
https://www.google.com/search?q=tableau+time+series+start+and+end+timestamp&oq=tableau+time+series+start+and+end+timestamp&aqs=chrome..69i57.12272j0j1&sourceid=chrome&ie=UTF-8


! Changing the File Path for Extracts
https://kb.tableau.com/articles/howto/changing-the-file-path-for-extracts
{{{
If you have a .twbx file, convert the .twbx file to a .twb file using one of the following methods:
In Tableau Desktop, open the packaged workbook (.twbx file), and then select File > Save As. Under Save as type, select Tableau Workbook (*.twb).
In Windows Explorer, right-click the .twbx file, and then select Unpackage.
In Tableau Desktop, open the .twb file. Click the sheet tab and  then select Data > <data source name> > Extract > Remove.
Select Just remove the extract, and then click OK.
Select  Data > <data source name> > Extract Data, and then click Extract.
Select the desired location, and then click Save.
}}}


! Difference Between Extract Filters and Data Source Filters
https://community.tableau.com/docs/DOC-8721
{{{
Extract Filter:

As the name implies, extract filters are used to filter out data while creating the extract.

Example: say we have a database with data for different countries, as shown below:

USA       – 5000 rows
Canada    – 2000 rows
India     – 10000 rows
Australia – 1500 rows

If we apply an extract filter to bring in data only for USA (Country=USA), Tableau creates the extract (.tde) just for the country USA and ignores the data for all other countries.

The size of the extract is always proportional to the extract filters.

# of rows in the extract: 5000 rows, for country USA

Data Source Filters:

In Tableau 9.0.4, applying data source filters does not change the volume of data or the size of the extract. Instead, data source filters are applied to the background query when we use any of the dimensions or measures in a visualization.

Example:

If we apply a data source filter to bring in data only for USA (Country=USA), Tableau creates the extract (.tde) with the full volume of data for all countries (not only USA), and there is no relationship between the data source filters and the size of the extract.

# of rows in the extract: 18,500 (for all countries)

However, there is no difference in the way we use the dimensions and measures with either extract in a visualization. Both work as expected and show the data only for USA.
}}}


! Trim a string up to a special character
https://community.tableau.com/thread/134857
{{{
RIGHT([String], LEN([String]) - FIND([String], ":"))
}}}
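The same logic in Python (sample strings are made up). Tableau's FIND is 1-based and returns 0 when the marker is absent, in which case `RIGHT` with the full length returns the whole string:

```python
def text_after(s, marker=':'):
    # mirrors RIGHT([String], LEN([String]) - FIND([String], ":"))
    pos = s.find(marker) + 1   # Python find() is 0-based; +1 lines up with FIND
    return s[pos:] if pos else s

print(text_after('host:1521'))   # -> 1521
print(text_after('nomarker'))    # -> nomarker
```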


! startswith 
Creating two filters using the first letter (Starts with) https://community.tableau.com/thread/235823


! tableau topn within category - INDEX

How to find the top N within a category in Tableau
https://www.youtube.com/watch?v=z0R9OsDl-10

https://kb.tableau.com/articles/howto/finding-the-top-n-within-a-category








! Videos
Tableau TCC12 Session: Facebook http://www.ustream.tv/recorded/26807227
''Tableau Server/Desktop videos''
http://www.lynda.com/Tableau-tutorials/Up-Running-Tableau/165439-2.html
http://beta.pluralsight.com/search/?searchTerm=tableau
http://pluralsight.com/training/Authors/Details/ben-sullins
http://www.livefyre.com/profile/21361843/ ben sullins comments/questions



! people 
tableauing dangerously https://cmtoomey.github.io/blog/page5/


! TC conference papers and materials (2016 to 2018)
https://www.dropbox.com/sh/lztdogubf20498e/AAAPptLIxaAPLdBGmwUtMVJba?dl=0


.








! fork this
https://github.com/karlarao/karlaraowiki


! how to run two versions of mozilla (need to create a new profile)
{{{
"C:\Program Files (x86)\MozillaFirefox4RC2\firefox.exe" -P "karlarao" -no-remote
}}}
ALERT: Action Required for Autonomous Databases (Doc ID 2911553.1)
https://blogs.oracle.com/datawarehousing/getting-started-with-autonomous-data-warehouse-part-1-oracle-moviestream

DML without limits, now in BigQuery https://cloud.google.com/blog/products/data-analytics/dml-without-limits-now-in-bigquery
Dremel: Interactive Analysis of Web-Scale Datasets (with cost based optimizer)
https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36632.pdf
[img(50%,50%)[https://user-images.githubusercontent.com/3683046/91346837-69e30480-e7af-11ea-9e18-452a3c1c8a28.png]]
[img(50%,50%)[https://user-images.githubusercontent.com/3683046/91346835-68b1d780-e7af-11ea-9186-7a12eb70ddf1.png]]
https://github.com/googleapis/python-bigquery/tree/master/benchmark
https://github.com/googleapis/python-bigquery/tree/master/samples

https://cloud.google.com/bigquery/docs/release-notes
<<showtoc>>

! one SQL 

vi test.py 
{{{
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

query = """
with cte as (
SELECT /* cte_query */ b.* 
FROM `example-prod-284123.dataset01.table01` a
inner join `example-dev-284123.dataset01.table01` b
on a.col1 = b.col1
)
SELECT /* now3 main_query */ b.* 
FROM `example-prod-284123.dataset01.table01` a
inner join `example-dev-284123.dataset01.table01` b
on a.col1 = b.col1
inner join cte
on a.col1 = cte.col1
inner join (SELECT /* subquery */ b.* 
FROM `example-prod-284123.dataset01.table01` a
inner join `example-dev-284123.dataset01.table01` b
on a.col1 = b.col1) sq
on a.col1 = sq.col1
"""
query_job = client.query(query)  # Make an API request; call query_job.result() to wait for completion.

print("done")
#for row in query_job:
#    print(','.join(row))

}}}


! two SQLs - SELECT and DDL
{{{
cat test3-createtbl.py 

from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

query = """
with cte as (
SELECT /* cte_query */ b.* 
FROM `example-prod-284123.dataset01.table01` a
inner join `example-dev-284123.dataset01.table01` b
on a.col1 = b.col1
)
SELECT /* now3 main_query */ b.* 
FROM `example-prod-284123.dataset01.table01` a
inner join `example-dev-284123.dataset01.table01` b
on a.col1 = b.col1
inner join cte
on a.col1 = cte.col1
inner join (SELECT /* subquery */ b.* 
FROM `example-prod-284123.dataset01.table01` a
inner join `example-dev-284123.dataset01.table01` b
on a.col1 = b.col1) sq
on a.col1 = sq.col1
;

create table `example-dev-284123.dataset01.table03`
as select * from `example-dev-284123.dataset01.table01`;
"""
query_job = client.query(query)  # Make an API request.

}}}





.



https://cloud.google.com/products/calculator/
https://cloudpricingcalculator.appspot.com/static/data/pricelist.json
https://cloud.google.com/storage/pricing
http://calculator.s3.amazonaws.com/index.html


<<showtoc>>

! node and pl/sql
http://www.slideshare.net/lucasjellema/oracle-databasecentric-apis-on-the-cloud-using-plsql-and-nodejs <-- GOOD STUFF
https://technology.amis.nl/2016/04/01/running-node-oracledb-the-oracle-database-driver-for-node-js-in-the-pre-built-vm-for-database-development/ <- GOOD STUFF
https://github.com/lucasjellema/sig-nodejs-amis-2016  <- GOOD STUFF
https://www.npmjs.com/search?q=plsql
https://www.npmjs.com/package/node_plsql
https://www.npmjs.com/package/oracledb
http://www.slideshare.net/lucenerevolution/sir-en-final
https://blogs.oracle.com/opal/entry/introducing_node_oracledb_a_node
https://github.com/oracle/node-oracledb
https://github.com/oracle/node-oracledb/blob/master/doc/api.md#plsqlexecution
http://stackoverflow.com/questions/36009085/how-to-execute-stored-procedure-query-in-node-oracledb-if-we-are-not-aware-of
http://www.slideshare.net/lucasjellema/oracle-databasecentric-apis-on-the-cloud-using-plsql-and-nodejs
http://dgielis.blogspot.com/2015/01/setting-up-node-and-oracle-database.html
http://pythonhackers.com/p/doberkofler/node_plsql
http://lauren.pomogi-mne.com/how-to-run-oracle-user-defined-functions-using-nodejs-stack-overflow-1061567616/



! node websocket , socket.io
websocket https://www.youtube.com/watch?v=9L-_cNQaizM
Real-time Data with Node.js, Socket.IO, and Oracle Database https://www.youtube.com/watch?v=-mMIxikhi6M
http://krisrice.blogspot.com/2014/06/publish-data-over-rest-with-nodejs.html
An Intro to JavaScript Web Apps on Oracle Database http://nyoug.org/wp-content/uploads/2015/04/McGhan_JavaScript.pdf

! dan mcghan's relational to json
I'm looking up books/references for "end to end app from data model to nodejs" and I came across that video - VTS: Relational to JSON with Node.js https://www.youtube.com/watch?v=hFoeVZ4UpBs
https://blogs.oracle.com/newgendbaccess/entry/in_praise_of_dan_mcghan
https://jsao.io/2015/07/relational-to-json-in-oracle-database/ , http://www.slideshare.net/CJavaPeru/relational-to-json-with-node-dan-mc-ghanls
http://www.garysguide.com/events/lxnry6t/NodeJS-Microservices-NW-js-Mapping-Relational-to-JSON	
https://blog.risingstack.com/nodejs-at-scale-npm-best-practices/
An Intro to JavaScript Web Apps on Oracle Database http://nyoug.org/wp-content/uploads/2015/04/McGhan_JavaScript.pdf


! path to nodejs 
https://github.com/gilcrest/OracleMacOSXElCapitanSetup4Node	
http://drumtechy.blogspot.com/2015/03/my-path-to-nodejs-and-oracle-glory.html
http://drumtechy.blogspot.com/2015/03/my-path-to-nodejs-and-oracle-glory_14.html
http://drumtechy.blogspot.com/2015/03/my-path-to-nodejs-and-oracle-glory_16.html
<<showtoc>>

! ksun-oracle
!! book - oracle database performance tuning - studies practices research 
https://drive.google.com/file/d/1VijpHBG1I7Wi2mMPj91kSmHH_J-CZKr_/edit

!! Cache Buffer Chains Latch Contention Case Study-2: Reverse Primary Key Index
http://ksun-oracle.blogspot.com/2020/02/cache-buffer-chains-latch-contention_25.html
{{{
wget https://raw.githubusercontent.com/karlarao/scripts/master/performance/create_hint_sqlprofile.sql

profile fix: offload initial max SQL, from hours to 3 secs
We have the following SQL that ran long in DWTST and that we fixed through a SQL profile (from 50 mins to 3 secs). We expect it to run even longer in PROD due to the larger table size.

SELECT MIN("LOAD_DATE") FROM "DIM"."ENS_CSM_SUMMARY_DT_GLT" 

I've attached the script to implement the fix. Please follow the steps below: 

1)	On prod host 
            cd /db_backup_denx3/p1/gluent/karl
2)	Connect / as sysdba 
3)	Execute the script as follows 

@create_hint_sqlprofile
Enter value for sql_id: dg7zj0q9qa2gf
Enter value for profile_name (PROFILE_sqlid_MANUAL): <just hit ENTER here>
Enter value for category (DEFAULT): <just hit ENTER here>
Enter value for force_matching (false): <just hit ENTER here>
Enter value for hint_text: NO_PARALLEL
Profile PROFILE_dg7zj0q9qa2gf_MANUAL created.

This will make the query run serially through a profile hint, which is the fix for this issue.


After the profile is created, cancel the job and restart it.

}}}
Field Guide to Hadoop https://www.safaribooksonline.com/library/view/field-guide-to/9781491947920/
<<<
* scd
* cdc 
* streaming
<<<



Data lake ingestion strategies - Practical Enterprise Data Lake Insights: Handle Data-Driven Challenges in an Enterprise Big Data Lake https://learning.oreilly.com/library/view/practical-enterprise-data/9781484235225/html/454145_1_En_2_Chapter.xhtml

Information Integration and Exchange - Enterprise Information Management in Practice: Managing Data and Leveraging Profits in Today’s Complex Business Environment https://learning.oreilly.com/library/view/enterprise-information-management/9781484212189/9781484212196_Ch05.xhtml

Data Warehouse Patterns - SQL Server Integration Services Design Patterns, Second Edition https://learning.oreilly.com/library/view/sql-server-integration/9781484200827/9781484200834_Ch11.xhtml

Change Data Capture techniques - SAP Data Services 4.x Cookbook https://learning.oreilly.com/library/view/sap-data-services/9781782176565/ch09s02.html

https://www.youtube.com/results?search_query=scd+vs+cdc
https://communities.sas.com/t5/SAS-Data-Management/SCD-Type-2-Loader-vs-Change-Data-Capture/td-p/136421
https://www.google.com/search?q=why+CDC+vs+SCD&ei=0C9wXNDrM-Wmgge64pbYCg&start=10&sa=N&ved=0ahUKEwjQk_Ks7c_gAhVlk-AKHTqxBasQ8tMDCLQB&biw=1439&bih=798
https://network.informatica.com/message/75171#75171
https://it.toolbox.com/question/slowly-changing-dimension-vs-change-data-capture-053110
https://network.informatica.com/thread/40299
https://archive.sap.com/discussions/thread/2140880
<<<
CDC is Change Data Capture.

CDC methods let you extract and load only the new or changed records from the source, rather than reloading all records. This is also called a delta or incremental load.

SCD Type 2 (Slowly Changing Dimension Type 2)

This lets you store/preserve the history of changed records for selected dimensions. The transaction/source table mostly holds only the current value; SCD Type 2 is used where the history of a dimension is required for analysis purposes.
<<<
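The difference can be sketched in a few lines of Python (purely illustrative; the row shapes and the column names nk, attr, modified_ts are made up, not from any of the tools linked above):

```python
from datetime import date

# CDC-style incremental load: pick up only rows changed since the last load,
# instead of re-reading the whole source table.
def cdc_delta(source_rows, last_load_ts):
    return [r for r in source_rows if r["modified_ts"] > last_load_ts]

# SCD Type 2 merge: when a tracked attribute changes, expire the current
# dimension row and insert a new row, preserving history.
def scd2_merge(dim_rows, change, today=date(2019, 1, 1)):
    for row in dim_rows:
        if row["nk"] == change["nk"] and row["current"]:
            if row["attr"] != change["attr"]:
                row["current"] = False
                row["end_date"] = today
                dim_rows.append({"nk": change["nk"], "attr": change["attr"],
                                 "start_date": today, "end_date": None,
                                 "current": True})
            break  # only one current row per natural key
    return dim_rows

dim = [{"nk": 10, "attr": "Marketing", "start_date": date(2018, 1, 1),
        "end_date": None, "current": True}]
dim = scd2_merge(dim, {"nk": 10, "attr": "Digital Marketing"})
# dim now holds two rows: the expired original and the new current version
```

So CDC answers "which rows do I need to move?", while SCD Type 2 answers "how do I store a change once it arrives?"; the two are complementary, not alternatives.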
https://books.google.com/books?id=83pgjociDWsC&pg=RA5-PT9&lpg=RA5-PT9&dq=scd+and+cdc&source=bl&ots=Ipp7HAYCFX&sig=ACfU3U2C8CiaSxa_urF19q5IhQ8DOXLbIQ&hl=en&sa=X&ved=2ahUKEwiXt9WC7M_gAhXqRt8KHdWuBnsQ6AEwCXoECAoQAQ#v=onepage&q=scd%20and%20cdc&f=false
https://community.talend.com/t5/Design-and-Development/Difference-between-CDC-and-SCD/td-p/111312


! Ideas for Event Sourcing in Oracle 
https://medium.com/@FranckPachot/ideas-for-event-sourcing-in-oracle-d4e016e90af6



<<<
With more and more organizations moving to the cloud, there is a growing demand to feed data from on-premise Oracle/DB2/SQL Server databases to various platforms on the cloud. CDC can capture changes as they happen, in real time, and push them to target platforms such as Kafka, Event Hub, and the data lake. There are many ways to perform CDC, and many CDC products are available in the market. In this session, we will discuss what CDC options are available and introduce a few key CDC products, such as Oracle GoldenGate, Attunity, and Striim.
<<<







..
<<<
QUESTION:
Is there a way to ignore hints AND profiles through one single parameter?
Or is the only way to do this to set _optimizer_ignore_hints and disable/drop all profiles?

ANSWER: 
For profiles: ALTER SESSION SET SQLTUNE_CATEGORY = 'IGNOREMENOW';
For baselines: ALTER SESSION SET OPTIMIZER_USE_SQL_PLAN_BASELINES=false

Just turn everything off; these are the knobs :) Gluent doesn't like having the USE_NL hint on offloaded tables; it errors with KUP-04108: unable to reread file.
In case the developers have to deal with 1000+ SQLs, we know how to attack this with these knobs.




OTHER WAYS OF DISABLING: 
IGNORE_OPTIM_EMBEDDED_HINTS <- a hint that disables the other embedded hints in that statement
{{{
select /*+ index(DEPT) ignore_optim_embedded_hints */ * from SCOTT.DEPT;
}}}
optimizer_ignore_hints <- can be set database-wide, or at session level (e.g., through a logon trigger)
{{{
alter session set optimizer_ignore_hints=true;
alter session set optimizer_ignore_parallel_hints=true;
}}}
<<<



<<<

To retroactively troubleshoot/analyze a SQL that's already in SPM,
you can use the following commands in your session to deep-dive and reproduce the bad plan.
Once the fix is identified, the baseline/profile can be dropped.

For profiles: ALTER SESSION SET SQLTUNE_CATEGORY = 'IGNOREMENOW';
For baselines: ALTER SESSION SET OPTIMIZER_USE_SQL_PLAN_BASELINES=false;
<<<
<<showtoc>>

! the queries
{{{
-- sqls using profiles 
select distinct(s.sql_id), s.sql_profile
from dba_sql_profiles p,dba_hist_sqlstat s
where p.name=s.sql_profile
union
select distinct(s.sql_id), s.sql_profile
from dba_sql_profiles p,v$sql s
where p.name=s.sql_profile
;

--sqls using baselines (you have to do the reverse, but exact_matching_signature doesn't seem to be available in the hist views)
set verify off
col parsing_schema format a8
col created format a10
select parsing_schema_name parsing_schema, created, plan_name, sql_handle, sql_text, optimizer_cost, enabled, accepted, fixed, origin
from dba_sql_plan_baselines
where signature in 
(select exact_matching_signature from v$sql
)
/
}}}


It would be great if they exposed force_matching_signature on the DBA_SQL_PLAN_BASELINES view
{{{
select * from karlarao.skew where skew=3;   --6fvyp18cvnzwa 375614277642158684 -- exact matching 4404474968209701751  -- force matching 1949605896 PHV
select  *   from karlarao.SKEW Where skew=3;   --1myj38m1m3g2u 375614277642158684 -- exact matching 4404474968209701751  -- force matching 1949605896 PHV 
}}}
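The two statements above map to the same exact- and force-matching signatures because signature computation normalizes case and whitespace (and, for force matching, also replaces literals). A rough Python sketch of the normalization idea; this is only an illustration, not Oracle's actual hashing:

```python
import re

def normalized(sql_text, force_matching=False):
    # Collapse whitespace runs and lowercase the text: roughly the
    # normalization applied before the signature hash is computed.
    text = re.sub(r"\s+", " ", sql_text).strip().lower()
    if force_matching:
        # Force matching additionally treats literals as binds.
        text = re.sub(r"\b\d+\b", ":b", text)
    return text

a = "select * from karlarao.skew where skew=3"
b = "select  *   from karlarao.SKEW Where skew=3"
assert normalized(a) == normalized(b)            # same exact-matching bucket
assert normalized(a, True).endswith("skew=:b")   # literal folded into a bind
```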


Also, dba_hist_sql_plan doesn’t seem to flush/store the sql profile
{{{
-- this, no rows even after creating awr snapshot
col sql_profile format a30
SELECT 
              sql_id,
              plan_hash_value,
              Replace(Extractvalue(Xmltype(other_xml), '/*/info[@type = "sql_profile"]'),'"','') AS sql_profile
       FROM   dba_hist_sql_plan p
       WHERE  p.other_xml IS NOT NULL
       AND    p.other_xml LIKE '%sql_profile%';


-- this shows output 
col sql_profile format a30
SELECT     
                  sql_id,
                  plan_hash_value,
                  Replace(Extractvalue(Xmltype(other_xml), '/*/info[@type = "sql_profile"]'),'"','') AS sql_profile
       FROM       gv$sql_plan p
       WHERE      p.other_xml IS NOT NULL
       AND        p.other_xml LIKE '%sql_profile%' ;
SQL_ID        PLAN_HASH_VALUE SQL_PROFILE
------------- --------------- ------------------------------
g3264pc5fj84m      1787655467 coe_g3264pc5fj84m_1787655467
}}}




This is an interesting query by Eduardo C.:

https://clarodba.wordpress.com/2022/03/16/how-to-get-the-last-registered-use-of-sql-profiles-from-memory-awr-and-ash-alltogether/



{{{

WITH sqlstat AS
(
          SELECT    sql_profile,
                    Cast(Max(end_interval_time) AS DATE) max_time,
                    Max(sql_id
                              || ' / '
                              || plan_hash_value) sqlid_plan
          FROM      dba_hist_sqlstat s
          left join dba_hist_snapshot n
          USING     (snap_id, dbid, instance_number)
          WHERE     sql_profile IS NOT NULL
          GROUP BY  sql_profile ), gvsql AS
(
         SELECT   sql_profile,
                  Max(last_active_time) max_time,
                  Max(sql_id
                           || ' / '
                           || plan_hash_value) sqlid_plan
         FROM     gv$sql
         WHERE    sql_profile IS NOT NULL
         GROUP BY sql_profile ), sqlplan AS
(
       SELECT dbid,
              sql_id,
              plan_hash_value,
              Replace(Extractvalue(Xmltype(other_xml), '/*/info[@type = "sql_profile"]'),'"','') AS sql_profile
       FROM   dba_hist_sql_plan p
       WHERE  p.other_xml IS NOT NULL
       AND    p.id = 1
       AND    p.other_xml LIKE '%info  type="sql_profile" note="y"%'
       UNION
       SELECT     dbid,
                  sql_id,
                  plan_hash_value,
                  Replace(Extractvalue(Xmltype(other_xml), '/*/info[@type = "sql_profile"]'),'"','') AS sql_profile
       FROM       gv$sql_plan p
       cross join v$database d
       WHERE      p.other_xml IS NOT NULL
       AND        p.id = 1
       AND        p.other_xml LIKE '%info type="sql_profile" note="y"%' ), ash AS
(
         SELECT   sql_profile,
                  Max(max_time)   AS max_time,
                  Max(sqlid_plan) AS sqlid_plan
         FROM     (
                           SELECT   sql_profile,
                                    Cast(Max(sample_time) AS DATE) max_time,
                                    Max(sql_id
                                             || ' / '
                                             || plan_hash_value) sqlid_plan
                           FROM     dba_hist_active_sess_history a
                           join     sqlplan
                           USING    (dbid, sql_id)
                           GROUP BY sql_profile
                           UNION
                           SELECT     sql_profile,
                                      Cast(Max(sample_time) AS DATE) max_time,
                                      Max(sql_id
                                                 || ' / '
                                                 || plan_hash_value) sqlid_plan
                           FROM       gv$active_session_history a
                           cross join v$database d
                           join       sqlplan
                           USING      (dbid, sql_id)
                           GROUP BY   sql_profile )
         GROUP BY sql_profile )
SELECT    p.name AS sql_profile,
          p.category,
          CASE
                    WHEN Coalesce(st.max_time, sq.max_time, ash.max_time ) IS NULL THEN 'NOT FOUND'
                    ELSE To_char(Greatest(Nvl(st.max_time,SYSDATE-1000),Nvl(sq.max_time,SYSDATE-1000),Nvl(ash.max_time,SYSDATE-1000)),'yyyy-mm-dd hh24:mi')
          END                                                        last_registered_use,
          Coalesce(st.sqlid_plan, sq.sqlid_plan, ash.sqlid_plan )    sqlid_plan,
          st.max_time                                                sqlstats_max_time,
          sq.max_time                                                gvsql_max_time,
          ash.max_time                                               ash_max_time,
          Cast(p.created AS       DATE)                                 AS created,
          Cast(p.last_modified AS DATE)                                 AS last_mod,
          p.description,
          p.TYPE,
          p.status,
          p.force_matching,
          p.signature
FROM      dba_sql_profiles p
left join sqlstat st
ON        st.sql_profile = p.name
left join gvsql sq
ON        sq.sql_profile = p.name
left join ash
ON        ash.sql_profile = p.name
ORDER BY  last_registered_use 
/

}}}
https://onlinexperiences.com/scripts/Server.nxp?LASCmd=AI:4;F:QS!10100&ShowUUID=958AB2AD-BBE8-4F30-82C9-338C87B7D6C6&ShowKey=73520&AffiliateData=DSCGR#xsid=a62e_5IW
https://www.youtube.com/results?search_query=How+to+Use+Time+Series+Data+to+Forecast+at+Scale
Mahan Hosseinzadeh- Prophet at scale to tune & forecast time series at Spotify  https://www.youtube.com/watch?v=fegS34ItKcI
Joe Jevnik - A Worked Example of Using Neural Networks for Time Series Prediction https://www.youtube.com/watch?v=hAlGqT3Xpus
Real-time anomaly detection system for time series at scale https://www.youtube.com/watch?v=oVXySPH7MjQ
Two Effective Algorithms for Time Series Forecasting https://www.youtube.com/watch?v=VYpAodcdFfA
Nathaniel Cook - Forecasting Time Series Data at scale with the TICK stack https://www.youtube.com/watch?v=raEyZEryC0k 
How to Use Time Series Data to Forecast at Scale| DZone.com Webinar https://www.youtube.com/watch?v=KoLR7baZYec
Forecasting at Scale: How and Why We Developed Prophet for Forecasting at Facebook https://www.youtube.com/watch?v=pOYAXv15r3A
https://cloud.google.com/blog/products/databases/alloydb-for-postgresql-columnar-engine





.
https://community.hortonworks.com/articles/58458/installing-docker-version-of-sandbox-on-mac.html  <-- follow this @@docker@@ howto!
https://hortonworks.com/tutorial/learning-the-ropes-of-the-hortonworks-sandbox/
HORTONWORKS SANDBOX DEPLOYMENT AND INSTALL GUIDE: Deploying Hortonworks Sandbox on @@Docker@@ https://hortonworks.com/tutorial/sandbox-deployment-and-install-guide/section/3/#for-mac
https://community.hortonworks.com/questions/57757/hdp-25-sandbox-not-starting.html <-- issue on sandbox not starting

https://www.quora.com/To-start-learning-and-playing-with-Hadoop-which-one-should-I-prefer-Cloudera-QuickStart-VM-Hortonworks-Sandbox-or-MapR-Sandbox









.





<<showtoc>>


! cat files 
https://stackoverflow.com/questions/19778137/why-is-there-no-hadoop-fs-head-shell-command
{{{
hadoop fs -cat /path/to/file | head
hadoop fs -cat /path/to/file | tail
}}}


! create home directory 
{{{
[root@node1 ~]# su - hdfs
[hdfs@node1 ~]$ hadoop fs -mkdir /user/vagrant

[hdfs@node1 ~]$ hadoop fs -chown vagrant:vagrant /user/vagrant

[hdfs@node1 ~]$ hadoop fs -ls /user
Found 5 items
drwxr-xr-x   - admin     hdfs             0 2019-01-06 02:11 /user/admin
drwxrwx---   - ambari-qa hdfs             0 2019-01-06 01:31 /user/ambari-qa
drwxr-xr-x   - hcat      hdfs             0 2019-01-06 01:44 /user/hcat
drwxr-xr-x   - hive      hdfs             0 2019-01-06 02:06 /user/hive
drwxr-xr-x   - vagrant   vagrant          0 2019-01-08 06:26 /user/vagrant
}}}


! copy file 
{{{
[vagrant@node1 data]$ du -sm salaries.csv 
16	salaries.csv

[vagrant@node1 data]$ hadoop fs -put salaries.csv 

[vagrant@node1 data]$ hadoop fs -ls
Found 1 items
-rw-r--r--   3 vagrant vagrant   16257213 2019-01-08 06:27 salaries.csv

}}}


! copy file with different block size
* this splits the 16MB file into 1MB blocks spread across the data nodes, replicated 3x
{{{
[vagrant@node1 data]$ hadoop fs -D dfs.blocksize=1m -put salaries.csv salaries2.csv 
}}}
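The block arithmetic can be sanity-checked against the fsck output in this section (file size from the du/fsck listings, dfs.blocksize=1m):

```python
import math

file_size = 16_257_213          # bytes, from the du/fsck output
block_size = 1 * 1024 * 1024    # dfs.blocksize=1m

blocks = math.ceil(file_size / block_size)
avg_block = file_size // blocks
last_block = file_size - (blocks - 1) * block_size

print(blocks)      # 16 blocks, matching "Total blocks (validated): 16"
print(avg_block)   # 1016075 B, the "avg. block size" fsck reports
print(last_block)  # 528573 B, the short final block (len of blk_..._1072)
```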


! check file status 
{{{
[vagrant@node1 data]$ hdfs fsck /user/vagrant/salaries.csv
Connecting to namenode via http://node1.example.com:50070/fsck?ugi=vagrant&path=%2Fuser%2Fvagrant%2Fsalaries.csv
FSCK started by vagrant (auth:SIMPLE) from /192.168.199.2 for path /user/vagrant/salaries.csv at Tue Jan 08 06:29:06 UTC 2019
.
/user/vagrant/salaries.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741876_1056. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Status: HEALTHY
 Total size:	16257213 B
 Total dirs:	0
 Total files:	1
 Total symlinks:		0
 Total blocks (validated):	1 (avg. block size 16257213 B)
 Minimally replicated blocks:	1 (100.0 %)
 Over-replicated blocks:	0 (0.0 %)
 Under-replicated blocks:	1 (100.0 %)
 Mis-replicated blocks:		0 (0.0 %)
 Default replication factor:	3
 Average block replication:	2.0
 Corrupt blocks:		0
 Missing replicas:		1 (33.333332 %)
 Number of data-nodes:		2
 Number of racks:		1
FSCK ended at Tue Jan 08 06:29:06 UTC 2019 in 4 milliseconds


The filesystem under path '/user/vagrant/salaries.csv' is HEALTHY



-- FILE WITH DIFFERENT BLOCK SIZE 
[vagrant@node1 data]$ hdfs fsck /user/vagrant/salaries2.csv
Connecting to namenode via http://node1.example.com:50070/fsck?ugi=vagrant&path=%2Fuser%2Fvagrant%2Fsalaries2.csv
FSCK started by vagrant (auth:SIMPLE) from /192.168.199.2 for path /user/vagrant/salaries2.csv at Tue Jan 08 06:31:11 UTC 2019
.
/user/vagrant/salaries2.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741877_1057. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/vagrant/salaries2.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741878_1058. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/vagrant/salaries2.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741879_1059. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/vagrant/salaries2.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741880_1060. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/vagrant/salaries2.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741881_1061. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/vagrant/salaries2.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741882_1062. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/vagrant/salaries2.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741883_1063. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/vagrant/salaries2.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741884_1064. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/vagrant/salaries2.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741885_1065. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/vagrant/salaries2.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741886_1066. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/vagrant/salaries2.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741887_1067. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/vagrant/salaries2.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741888_1068. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/vagrant/salaries2.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741889_1069. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/vagrant/salaries2.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741890_1070. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/vagrant/salaries2.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741891_1071. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/vagrant/salaries2.csv:  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741892_1072. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Status: HEALTHY
 Total size:	16257213 B
 Total dirs:	0
 Total files:	1
 Total symlinks:		0
 Total blocks (validated):	16 (avg. block size 1016075 B)
 Minimally replicated blocks:	16 (100.0 %)
 Over-replicated blocks:	0 (0.0 %)
 Under-replicated blocks:	16 (100.0 %)
 Mis-replicated blocks:		0 (0.0 %)
 Default replication factor:	3
 Average block replication:	2.0
 Corrupt blocks:		0
 Missing replicas:		16 (33.333332 %)
 Number of data-nodes:		2
 Number of racks:		1
FSCK ended at Tue Jan 08 06:31:11 UTC 2019 in 1 milliseconds


The filesystem under path '/user/vagrant/salaries2.csv' is HEALTHY

}}}
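The "Missing replicas ... (33.333332 %)" line in the fsck output above follows directly from the replication targets: every block wants 3 replicas but this sandbox only has 2 data nodes. A quick check of the arithmetic for salaries2.csv:

```python
target_replicas = 3
live_replicas = 2      # only 2 data nodes in this sandbox cluster
blocks = 16            # salaries2.csv with 1 MiB blocks

missing_per_block = target_replicas - live_replicas
missing_total = blocks * missing_per_block
missing_pct = 100.0 * missing_per_block / target_replicas

print(missing_total)          # 16 missing replicas, as fsck reports
print(round(missing_pct, 4))  # ~33.3333 % of the expected replica count
```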



! get file locations and blocks
{{{
[vagrant@node1 data]$ hdfs fsck /user/vagrant/salaries.csv -files -locations -blocks
Connecting to namenode via http://node1.example.com:50070/fsck?ugi=vagrant&files=1&locations=1&blocks=1&path=%2Fuser%2Fvagrant%2Fsalaries.csv
FSCK started by vagrant (auth:SIMPLE) from /192.168.199.2 for path /user/vagrant/salaries.csv at Tue Jan 08 06:33:18 UTC 2019
/user/vagrant/salaries.csv 16257213 bytes, 1 block(s):  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741876_1056. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
0. BP-534825236-192.168.199.2-1546738263299:blk_1073741876_1056 len=16257213 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]

Status: HEALTHY
 Total size:	16257213 B
 Total dirs:	0
 Total files:	1
 Total symlinks:		0
 Total blocks (validated):	1 (avg. block size 16257213 B)
 Minimally replicated blocks:	1 (100.0 %)
 Over-replicated blocks:	0 (0.0 %)
 Under-replicated blocks:	1 (100.0 %)
 Mis-replicated blocks:		0 (0.0 %)
 Default replication factor:	3
 Average block replication:	2.0
 Corrupt blocks:		0
 Missing replicas:		1 (33.333332 %)
 Number of data-nodes:		2
 Number of racks:		1
FSCK ended at Tue Jan 08 06:33:18 UTC 2019 in 1 milliseconds


The filesystem under path '/user/vagrant/salaries.csv' is HEALTHY
[vagrant@node1 data]$ 
[vagrant@node1 data]$ 
[vagrant@node1 data]$ 
[vagrant@node1 data]$ 
[vagrant@node1 data]$ 
[vagrant@node1 data]$ 
[vagrant@node1 data]$ hdfs fsck /user/vagrant/salaries2.csv -files -locations -blocks
Connecting to namenode via http://node1.example.com:50070/fsck?ugi=vagrant&files=1&locations=1&blocks=1&path=%2Fuser%2Fvagrant%2Fsalaries2.csv
FSCK started by vagrant (auth:SIMPLE) from /192.168.199.2 for path /user/vagrant/salaries2.csv at Tue Jan 08 06:36:04 UTC 2019
/user/vagrant/salaries2.csv 16257213 bytes, 16 block(s):  Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741877_1057. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
 Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741878_1058. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
 Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741879_1059. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
 Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741880_1060. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
 Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741881_1061. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
 Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741882_1062. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
 Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741883_1063. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
 Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741884_1064. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
 Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741885_1065. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
 Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741886_1066. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
 Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741887_1067. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
 Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741888_1068. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
 Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741889_1069. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
 Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741890_1070. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
 Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741891_1071. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
 Under replicated BP-534825236-192.168.199.2-1546738263299:blk_1073741892_1072. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
0. BP-534825236-192.168.199.2-1546738263299:blk_1073741877_1057 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
1. BP-534825236-192.168.199.2-1546738263299:blk_1073741878_1058 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
2. BP-534825236-192.168.199.2-1546738263299:blk_1073741879_1059 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
3. BP-534825236-192.168.199.2-1546738263299:blk_1073741880_1060 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK], DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK]]
4. BP-534825236-192.168.199.2-1546738263299:blk_1073741881_1061 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
5. BP-534825236-192.168.199.2-1546738263299:blk_1073741882_1062 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK], DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK]]
6. BP-534825236-192.168.199.2-1546738263299:blk_1073741883_1063 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
7. BP-534825236-192.168.199.2-1546738263299:blk_1073741884_1064 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
8. BP-534825236-192.168.199.2-1546738263299:blk_1073741885_1065 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
9. BP-534825236-192.168.199.2-1546738263299:blk_1073741886_1066 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
10. BP-534825236-192.168.199.2-1546738263299:blk_1073741887_1067 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
11. BP-534825236-192.168.199.2-1546738263299:blk_1073741888_1068 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
12. BP-534825236-192.168.199.2-1546738263299:blk_1073741889_1069 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
13. BP-534825236-192.168.199.2-1546738263299:blk_1073741890_1070 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
14. BP-534825236-192.168.199.2-1546738263299:blk_1073741891_1071 len=1048576 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]
15. BP-534825236-192.168.199.2-1546738263299:blk_1073741892_1072 len=528573 repl=2 [DatanodeInfoWithStorage[192.168.199.2:50010,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.3:50010,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK]]

Status: HEALTHY
 Total size:	16257213 B
 Total dirs:	0
 Total files:	1
 Total symlinks:		0
 Total blocks (validated):	16 (avg. block size 1016075 B)
 Minimally replicated blocks:	16 (100.0 %)
 Over-replicated blocks:	0 (0.0 %)
 Under-replicated blocks:	16 (100.0 %)
 Mis-replicated blocks:		0 (0.0 %)
 Default replication factor:	3
 Average block replication:	2.0
 Corrupt blocks:		0
 Missing replicas:		16 (33.333332 %)
 Number of data-nodes:		2
 Number of racks:		1
FSCK ended at Tue Jan 08 06:36:04 UTC 2019 in 1 milliseconds


The filesystem under path '/user/vagrant/salaries2.csv' is HEALTHY
}}}


! read raw file in data node filesystem 
* check for the blk_<id>
{{{
[root@node1 ~]# find /hadoop/hdfs/ -name "blk_1073741876" -print
/hadoop/hdfs/data/current/BP-534825236-192.168.199.2-1546738263299/current/finalized/subdir0/subdir0/blk_1073741876

[root@node1 ~]# find /hadoop/hdfs/ -name "blk_1073741878" -print
/hadoop/hdfs/data/current/BP-534825236-192.168.199.2-1546738263299/current/finalized/subdir0/subdir0/blk_1073741878

[root@node1 ~]# less /hadoop/hdfs/data/current/BP-534825236-192.168.199.2-1546738263299/current/finalized/subdir0/subdir0/blk_1073741878

}}}


! explore files using ambari files view and NameNode UI
Ambari files view http://127.0.0.1:8080/#/main/view/FILES/auto_files_instance
Quicklinks NameNode UI http://192.168.199.2:50070/explorer.html#/























.
http://hortonworks.com/wp-content/uploads/2016/05/Hortonworks.CheatSheet.SQLtoHive.pdf

! show all config parameters
{{{
set;
}}}

! connect on beeline
{{{
beeline> !connect jdbc:hive2://sandbox.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Connecting to jdbc:hive2://sandbox.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Enter username for jdbc:hive2://sandbox.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
Enter password for jdbc:hive2://sandbox.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2:
Connected to: Apache Hive (version 1.2.1000.2.5.0.0-1245)
Driver: Hive JDBC (version 1.2.1000.2.5.0.0-1245)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://sandbox.hortonworks.com:2181/>
0: jdbc:hive2://sandbox.hortonworks.com:2181/> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
| foodmart       |
| hr             |
| xademo         |
+----------------+--+
4 rows selected (0.14 seconds)


}}}
https://cwiki.apache.org/confluence/display/Hive/LanguageManual
https://cwiki.apache.org/confluence/display/Hive/Home#Home-HiveDocumentation
http://hive.apache.org/

<<showtoc>>


! WITH AS 
https://cwiki.apache.org/confluence/display/Hive/Common+Table+Expression

! UNION ALL 
https://stackoverflow.com/questions/16181684/combine-many-tables-in-hive-using-union-all

! CASE function
 http://www.folkstalk.com/2011/11/conditional-functions-in-hive.html , https://stackoverflow.com/questions/41023835/case-statements-in-hive , https://community.modeanalytics.com/sql/tutorial/sql-case/

! hive JOINS
 https://www.tutorialspoint.com/hive/hiveql_joins.htm

! DDL 
{{{
show create table <table_name>;
}}}

! spool to CSV
{{{
hive -e 'set hive.cli.print.header=true; select * from table' > file.csv

hive -e "use default;set hive.cli.print.header=true;select * from test1;" | sed 's/[\t]/,/g' >/temp/test.csv
INSERT OVERWRITE LOCAL DIRECTORY '/path/to/hive/csv' ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' SELECT * FROM hivetablename;	
}}}


! spool to pipe delimited 
https://stackoverflow.com/questions/44333450/hive-e-with-delimiter?rq=1
https://stackoverflow.com/questions/30224875/exporting-hive-table-to-csv-in-hdfs

{{{
[raj_ops@sandbox ~]$ hive -e "use default;set hive.cli.print.header=true;select * from hr.departments;" | sed 's/[\t]/|/g' > testdata.csv

Logging initialized using configuration in file:/etc/hive/2.5.0.0-1245/0/hive-log4j.properties
OK
Time taken: 3.496 seconds
OK
Time taken: 1.311 seconds, Fetched: 30 row(s)
[raj_ops@sandbox ~]$
[raj_ops@sandbox ~]$
[raj_ops@sandbox ~]$ cat testdata.csv
departments.department_id|departments.department_name|departments.manager_id|departments.location_id
10|Administration|200|1700
20|Marketing|201|1800
110|Accounting|205|1700
120|Treasury|NULL|1700
130|Corporate Tax|NULL|1700
140|Control And Credit|NULL|1700
150|Shareholder Services|NULL|1700
160|Benefits|NULL|1700
170|Manufacturing|NULL|1700
180|Construction|NULL|1700
190|Contracting|NULL|1700
200|Operations|NULL|1700
30|Purchasing|114|1700
210|IT Support|NULL|1700
220|NOC|NULL|1700
230|IT Helpdesk|NULL|1700
240|Government Sales|NULL|1700
250|Retail Sales|NULL|1700
260|Recruiting|NULL|1700
270|Payroll|NULL|1700
10|Administration|200|1700
50|Shipping|121|1500
50|Shipping|121|1500
40|Human Resources|203|2400
50|Shipping|121|1500
60|IT|103|1400
70|Public Relations|204|2700
80|Sales|145|2500
90|Executive|100|1700
100|Finance|108|1700

}}}
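The tab-to-pipe substitution can be checked without a cluster by feeding sed some tab-separated sample rows (the rows here are made up; note that \t inside a bracket expression is a GNU sed extension, so on BSD sed you may need a literal tab instead):

```shell
# stand-in for hive's tab-separated stdout; sample rows are hypothetical
printf '10\tAdministration\t200\t1700\n20\tMarketing\t201\t1800\n' \
  | sed 's/[\t]/|/g'
```

This prints 10|Administration|200|1700 and 20|Marketing|201|1800, the same shape as the testdata.csv output above.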


! alter table CSV header property skip header
{{{
hadoop fs -copyFromLocal mts_main_v1.csv /sdxx/derived/restou/dc_master_target_summary_v1

alter table dc_master_target_summary_v1 set TBLPROPERTIES ("skip.header.line.count"="1");
}}}
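What the table property asks Hive to do can be pictured locally: with skip.header.line.count set to 1, the first line of each file is ignored at read time, which is the effect tail -n +2 has here (file path and rows are made up for the sketch):

```shell
# hypothetical CSV whose first line is a header row
printf 'department_id,department_name\n10,Administration\n20,Marketing\n' > /tmp/skip_demo.csv
# Hive with skip.header.line.count=1 reads only the lines tail emits here
tail -n +2 /tmp/skip_demo.csv
```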












! run query
{{{
[raj_ops@sandbox ~]$ hive -e 'select * from foodmart.customer limit 2'
}}}

! run script
{{{
[raj_ops@sandbox ~]$ hive -f test.sql 
[raj_ops@sandbox ~]$ cat test.sql 
select * from foodmart.customer limit 2;
}}}
<<showtoc>>



! certification matrix
https://supportmatrix.hortonworks.com/

! ambari and hdp versions 
ambari_and_hdp_versions.md https://gist.github.com/karlarao/ba6bbc1c0049de1fc1404b5d8dc56c4d
<<<
* ambari 2.4.3.0
* hdp 2.2 to 2.5.3.0 

----------

* ambari 2.5.2.0
* hdp 2.3 to 2.6.3.0 

----------

* ambari 2.6.2.2
* hdp 2.4 to 2.6.5 
<<<


! ways to install 
!! own machine 
!!! manual install 
* using apt-get and yum for ambari-server/ambari-agent
* and then manual provisioning of the cluster through ambari UI
!!! unattended install using vagrant 
* using automation tools to install ambari-server/ambari-agent
* using blueprints to push a setup and configuration to the cluster
!! on cloud 
!!! manual or unattended install 
!!! using hortonworks cloudbreak (uses docker to provision to cloud) 
https://hortonworks.com/open-source/cloudbreak/#section_1
https://cwiki.apache.org/confluence/display/AMBARI/Blueprints


! installation docs 
https://docs.hortonworks.com/
https://hortonworks.com/products/data-platforms/hdp/

!! ambari doc 
https://docs.hortonworks.com/HDPDocuments/Ambari/Ambari-2.7.3.0/index.html

!! cluster planning 
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/cluster-planning/content/partitioning.html


! cluster resources 
!!! ambari-server
!!! ambari-agent (node 2-3)
!!! NameNode
!!! ResourceManager 
!!! Zookeeper (node 2-3)
!!! DataNode (node 2-3)
!!! NodeManager (node 2-3)




! gluent installation 
https://docs.gluent.com/goe/install_and_upgrade.html













.
http://hadooptutorial.info/impala-commands-cheat-sheet/


<<showtoc>>

! ember.js 
!! list of apps written in ember
http://iheanyi.com/journal/2015/03/24/a-list-of-open-source-emberjs-projects/
http://stackoverflow.com/questions/10830072/recommended-example-applications-written-in-ember-js

!! a very cool restaurant app 
http://www.pluralsight.com/courses/fire-up-emberjs

!! analytics tuts app 
https://code.tutsplus.com/courses/end-to-end-analytics/lessons/getting-started

<<<
very nice app that shows the restaurant tables and the items ordered on each table
shows the table, items, and item details w/ total
nice howto of ember in action
<<<

! backbone.js
!! simple app backbone rectangle app
http://www.pluralsight.com/courses/backbone-fundamentals
<<<
very nice explanation so far! 
<<<

!! blogroll app (MBEN stack)
https://www.youtube.com/watch?v=a-ijUKVIJSw&list=PLX2HoWE32I8OCnumQmc9lcjnHIjAamIy6&index=4
another server example https://www.youtube.com/watch?v=uykzCfu1RiQ
https://www.youtube.com/watch?v=kHV7gOHvNdk&list=PLX2HoWE32I8Nkzw2TqcifObuhgJZz8a0U
!!! git repo 
https://github.com/michaelcheng429/backbone_tutorial_blogroll_app/tree/part1-clientside-code
https://github.com/michaelcheng429/backbone_tutorial_blogroll_app


!! db administration app
Application Building Patterns with Backbone.js - http://www.pluralsight.com/courses/playing-with-backbonejs
<<<
a full db administration app
uses node!
<<<

!! backbone todo
https://app.pluralsight.com/library/courses/choosing-javascript-framework/exercise-files
<<<
frontend masters - a todo mvc example
<<<

!! another app todo list 
Backbone.JS In-Depth and Intro to Testing with Mocha and Sinon - https://app.pluralsight.com/library/courses/backbone-js-in-depth-testing-mocha-sinon/table-of-contents
<<<
frontend masters class
another app todo list 
<<<

!! music player
http://www.pluralsight.com/courses/backbonejs



! angular 
!! video publishing site w/ login (MEAN stack)
http://www.pluralsight.com/courses/building-angularjs-nodejs-apps-mean  
<<<
video publishing site built on mean stack 
great example of authentication and authorization 
<<<




! handlebars.js
!! just a simple demo page about employee address book details
http://www.lynda.com/sdk/Web-Interaction-Design-tutorials/JavaScript-Templating/156166-2.html
<<<
clear explanations, maybe because it's a simple app; I like this one!
very nice simple howto on different templating engines  
the guy used vivaldi browser and aptana for minimal setup 
<<<
<<<
jquery, mustache.js, handlebars.js, dust
<<<
!! dog or not app 
http://www.pluralsight.com/courses/handlebars-javascript-templating
<<<
this is a handlebars centric course 
cute webapp that shows a photo where you would identify if it is a dog or not
this app shows how filtering, pagination, and scoring is done 
<<<
<<<
bower, handlebars.js, gulp
<<<

! nodejs 
!! oracle to_json and node oracle driver voting on hr schema
http://www.slideshare.net/lucasjellema/oracle-databasecentric-apis-on-the-cloud-using-plsql-and-nodejs
https://github.com/pavadeli/oowsession2016-app
https://github.com/pavadeli/oowsession2016
!! node-oracledb at amis lucas
https://github.com/lucasjellema/sig-nodejs-amis-2016
!! dino-date - showcase Oracle DB features on multiple programming languages (node, python, ruby, etc.)
<<<
DinoDate is "a fun site for finding your perfect Dino partner". It is a learning platform to showcase Oracle Database features using examples in multiple programming languages. https://community.oracle.com/docs/DOC-998357
Blaine Carter https://www.youtube.com/channel/UCnyo1hKeJ4GOsppGVRX6Y4A
http://learncodeshare.net/2016/04/08/dinodate-a-demonstration-platform-for-oracle-database/
way back 2009 http://feuerstein28.rssing.com/browser.php?indx=30498827&item=34
<<<










https://sakthismysqlblog.wordpress.com/2019/08/02/mysql-8-internal-architecture/



.
{{{
There are MySQL functions you can use, like this one that resolves the current user:

SELECT USER();
This will return something like root@localhost so you get the host and the user.

To get the current database run this statement:

SELECT DATABASE();
}}}



! create new superuser 
{{{
create USER 'karlarao'@'%' IDENTIFIED BY 'karlarao';
GRANT ALL PRIVILEGES ON *.* TO 'karlarao'@'%' WITH GRANT OPTION;

SELECT CURRENT_USER();
SELECT DATABASE();
status;
}}}


https://community.oracle.com/tech/apps-infra/categories/database-ideas-ideas

<<<
0)	It would be ideal if you can create a separate database for your tests

1)	If you really want a quick IO test, then run calibrate_io, then iperf2/netperf
http://docs.oracle.com/cd/E11882_01/appdev.112/e40758/d_resmgr.htm#ARPLS67598
                                           
2)	If you have a bit of time, then do Orion
You can quickly do a -run dss or -run oltp
But you can also explore the attached oriontoolkit.zip

3)	If you have a lot of time, then do SLOB and test the large IOs 
For OLTP test
http://kevinclosson.net/2012/02/06/introducing-slob-the-silly-little-oracle-benchmark/
If it’s DW, then you need to test the large IOs. See attached IOsaturationtoolkit-v2.tar.bz2
http://karlarao.tiddlyspot.com/#[[cpu%20-%20SillyLittleBenchmark%20-%20SLOB]]

Kyle also has some good reference on making use of FIO https://github.com/khailey/fio_scripts/blob/master/README.md

<<<
https://wikibon.com/oracle-mysql-database-service-heatwave-vaporizes-aws-redshift-aqua-snowflake-azure-synapse-gcp-bq/
https://www.oracle.com/mysql/heatwave/



! competitors	
a similar product is TiDB 
https://pingcap.com/blog/how-we-build-an-htap-database-that-simplifies-your-data-platform
https://medium.com/swlh/making-an-htap-database-a-reality-what-i-learned-from-pingcaps-vldb-paper-6d249c930a11
! name of current db 
https://dba.stackexchange.com/questions/58312/how-to-get-the-name-of-the-current-database-from-within-postgresql
{{{
SELECT current_database();
}}}


! list current user
https://www.postgresql.org/message-id/52C315B8.2040006@gmail.com
{{{
select current_user;
}}}















.
<<showtoc>>



! 202008
INTRO 
A Tour of PostgreSQL https://www.pluralsight.com/courses/tekpub-postgres 
PostgreSQL Playbook for Developer DBAs https://www.pluralsight.com/courses/postgresql-playbook
https://www.linkedin.com/learning/postgresql-essential-training/using-the-exercise-files
https://app.pluralsight.com/library/courses/meet-postgresql/table-of-contents


PERFORMANCE
Play by Play: Database Tuning https://www.pluralsight.com/courses/play-by-play-rob-sullivan

PGPLSQL
https://www.pluralsight.com/courses/postgresql-advanced-server-programming
https://www.pluralsight.com/courses/posgresql-functions-playbook
https://www.pluralsight.com/courses/capturing-logic-custom-functions-postgresql
https://www.pluralsight.com/courses/programming-postgresql

JSON 
https://www.pluralsight.com/courses/postgresql-document-database



! courses 

https://www.pluralsight.com/courses/tekpub-postgres
https://www.udemy.com/beginners-guide-to-postgresql/learn/lecture/82719#overview
https://www.udemy.com/learn-database-design-using-postgresql/learn/lecture/1594438#overview
https://www.udemy.com/learn-partitioning-in-postgresql-from-scratch/learn/lecture/5639644#overview


pl/pgsql
https://app.pluralsight.com/profile/author/pinal-dave
https://app.pluralsight.com/library/courses/postgresql-advanced-server-programming/table-of-contents





https://www.udemy.com/course/learn-partitioning-in-postgresql-from-scratch/
https://www.udemy.com/course/the-complete-python-postgresql-developer-course/
https://www.udemy.com/course/learn-database-design-using-postgresql/
https://www.udemy.com/course/postgresql-permissionsprivilegesadvanced-review/
https://www.udemy.com/course/ordbms-with-postgresql-essential-administration-training/
https://www.udemy.com/course/ultimate-expert-guide-mastering-postgresql-administration/
https://www.udemy.com/course/beginners-guide-to-postgresql/
https://www.udemy.com/course/postgresql-encryptiondata-at-rest-ssl-security/
https://www.udemy.com/course/postgresql-backupreplication-restore/


https://www.youtube.com/results?search_query=postgresql+replication+step+by+step
https://www.youtube.com/results?search_query=postgresql+performance+tuning


! books
https://learning.oreilly.com/library/view/postgresql-up-and/9781491963401/
https://learning.oreilly.com/library/view/postgresql-for-data/9781783288601/    <- for data architects

https://learning.oreilly.com/library/view/postgresql-high-availability/9781787125537/cover.xhtml
https://learning.oreilly.com/library/view/postgresql-replication-/9781783550609/
https://learning.oreilly.com/library/view/postgresql-10-high/9781788474481/
https://learning.oreilly.com/library/view/postgresql-high-performance/9781785284335/
https://learning.oreilly.com/library/view/postgresql-96-high/9781784392970/
https://learning.oreilly.com/library/view/postgresql-90-high/9781849510301/
https://learning.oreilly.com/library/view/postgresql-high-availability/9781787125537/
https://learning.oreilly.com/library/view/postgresql-administration-cookbook/9781785883187/
https://learning.oreilly.com/library/view/mastering-postgresql-96/9781783555352/
https://learning.oreilly.com/library/view/postgresql-11-server/9781789342222/
https://learning.oreilly.com/library/view/postgresql-development-essentials/9781783989003/
https://learning.oreilly.com/library/view/beginning-postgresql-on/9781484234471/
https://learning.oreilly.com/library/view/postgresql-9-administration/9781849519069/
https://learning.oreilly.com/library/view/troubleshooting-postgresql/9781783555314/
https://learning.oreilly.com/library/view/postgresql-server-programming/9781783980581/
https://learning.oreilly.com/library/view/postgresql-developers-guide/9781783989027/
https://learning.oreilly.com/library/view/practical-postgresql/9781449309770/
https://learning.oreilly.com/library/view/professional-website-performance/9781118551721/





https://learning.oreilly.com/search/?query=postgresql%20performance&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_orioles=true&include_playlists=true&include_collections=true&include_notebooks=true&is_academic_institution_account=false&source=user&sort=relevance&facet_json=true&page=10
















.

<<showtoc>> 

! download 
https://postgresapp.com/



! configure 
{{{

sudo mkdir -p /etc/paths.d &&
echo /Applications/Postgres.app/Contents/Versions/latest/bin | sudo tee /etc/paths.d/postgresapp


$ pwd
/Applications/Postgres.app/Contents/Versions/latest/bin

$ ls
clusterdb		gdalbuildvrt		invgeod			pg_dump			pg_waldump
createdb		gdaldem			invproj			pg_dumpall		pgbench
createuser		gdalenhance		nad2bin			pg_isready		pgsql2shp
cs2cs			gdalinfo		nearblack		pg_receivewal		postgres
dropdb			gdallocationinfo	ogr2ogr			pg_recvlogical		postmaster
dropuser		gdalmanage		ogrinfo			pg_resetwal		proj
ecpg			gdalserver		ogrtindex		pg_restore		psql
gdal-config		gdalsrsinfo		oid2name		pg_rewind		raster2pgsql
gdal_contour		gdaltindex		pg_archivecleanup	pg_standby		reindexdb
gdal_grid		gdaltransform		pg_basebackup		pg_test_fsync		shp2pgsql
gdal_rasterize		gdalwarp		pg_config		pg_test_timing		testepsg
gdal_translate		geod			pg_controldata		pg_upgrade		vacuumdb
gdaladdo		initdb			pg_ctl			pg_verify_checksums	vacuumlo


# data directory
/Users/kristofferson.a.arao/Library/Application Support/Postgres/var-11

# postgresql.conf
find . | grep postgresql.conf
./Library/Application Support/Postgres/var-11/postgresql.conf
}}}

[img(80%,80%)[https://i.imgur.com/q4haN6t.png]]

it is also possible to create databases of other PostgreSQL versions
[img(80%,80%)[https://i.imgur.com/IA6y1mh.png]]

also the same as [[get tuning advisor hints]]

https://blog.dbi-services.com/oracle-sql-profiles-check-what-they-do-before-accepting-them-blindly/
{{{
set serveroutput on echo off
declare
  -- input variables
  input_task_owner dba_advisor_tasks.owner%type:='SYS';
  input_task_name dba_advisor_tasks.task_name%type:='dbiInSite';
  input_show_outline boolean:=false;
  -- local variables
  task_id  dba_advisor_tasks.task_id%type;
  outline_data xmltype;
  benefit number;
begin
  for o in ( select * from dba_advisor_objects where owner=input_task_owner and task_name=input_task_name and type='SQL')
  loop
          -- get the profile hints (opt_estimate)
          dbms_output.put_line('--- PROFILE HINTS from '||o.task_name||' ('||o.object_id||') statement '||o.attr1||':');
          dbms_output.put_line('/*+');
          for r in (
            select hint,benefit from (
             select case when attr5 like 'OPT_ESTIMATE%' then cast(attr5 as varchar2(4000)) when attr1 like 'OPT_ESTIMATE%' then attr1 end hint,benefit
             from dba_advisor_recommendations t join dba_advisor_rationale r using (task_id,rec_id)
             where t.owner=o.owner and t.task_name = o.task_name and r.object_id=o.object_id and t.type='SQL PROFILE'
             --and r.message='This attribute adjusts optimizer estimates.'
            ) order by to_number(regexp_replace(hint,'^.*=([0-9.]+)[^0-9].*$','\1'))
          ) loop
           dbms_output.put_line('   '||r.hint); benefit:=to_number(r.benefit)/100;
          end loop;
          dbms_output.put_line('*/');
          -- get the outline hints
          begin
          select outline_data into outline_data from (
              select case when other_xml is not null then extract(xmltype(other_xml),'/*/outline_data/hint') end outline_data
              from dba_advisor_tasks t join dba_sqltune_plans p using (task_id)
              where t.owner=o.owner and t.task_name = o.task_name and p.object_id=o.object_id  and t.advisor_name='SQL Tuning Advisor' --11gonly-- and execution_type='TUNE SQL'
              and p.attribute='Using SQL profile'
          ) where outline_data is not null;
          exception when no_data_found then null;
          end;
          exit when not input_show_outline;
          dbms_output.put_line('--- OUTLINE HINTS from '||o.task_name||' ('||o.object_id||') statement '||o.attr1||':');
          dbms_output.put_line('/*+');
          for r in (
              select (extractvalue(value(d), '/hint')) hint from table(xmlsequence(extract( outline_data , '/'))) d
          ) loop
           dbms_output.put_line('   '||r.hint);
          end loop;
          dbms_output.put_line('*/');
          dbms_output.put_line('--- Benefit: '||to_char(to_number(benefit),'FM99.99')||'%');
  end loop;
  dbms_output.put_line('');
end;
/
}}}
What is the benefit of using google cloud pub/sub service in a streaming pipeline https://stackoverflow.com/questions/60919717/what-is-the-benefit-of-using-google-cloud-pub-sub-service-in-a-streaming-pipelin/60920217#60920217
<<<


Dataflow will need a source to get the data from. If you are using a streaming pipeline you can use different options as a source and each of them will have its own characteristics that may fit your scenario.

With Pub/Sub you can easily publish events using a client library or directly the API to a topic, and it will guarantee at least once delivery of that message.

When you connect it with a Dataflow streaming pipeline, you can have a resilient architecture (Pub/Sub will keep sending the message until Dataflow acknowledges that it has processed it) and near real-time processing. In addition, Dataflow can use Pub/Sub metrics to scale up or down depending on the number of messages in the backlog.

Finally, Dataflow runner uses an optimized version of the PubSubIO connector which provides additional features. I suggest checking this documentation that describes some of these features.
<<<



* https://raw.githubusercontent.com/karlarao/scripts/master/security/sechealthcheck.sql
* esec360
* DBSAT  https://blogs.oracle.com/cloudsecurity/announcing-oracle-database-security-assessment-tool-dbsat-22
** https://www.oracle.com/a/ocom/docs/corporate/cyber-resilience-ds.pdf
** https://go.oracle.com/LP=38340
** concepts https://docs.oracle.com/en/database/oracle/security-assessment-tool/2.2/satug/index.html#UGSAT-GUID-C7E917BB-EDAC-4123-900A-D4F2E561BFE9
** https://www.oracle.com/technetwork/database/security/dbsat/dbsat-ds-jan2018-4219315.pdf
** https://www.oracle.com/technetwork/database/security/dbsat/dbsat-public-faq-4219329.pdf
** https://www.oracle.com/technetwork/database/security/dbsat/dbsec-dbsat-public-4219331.pdf




https://status.snowflake.com/


https://community.snowflake.com/s/topic/0TO0Z000000Unu5WAC/releases


https://docs.snowflake.com/en/release-notes/2021-01.html?_ga=2.227732125.1483243957.1613593318-1423095178.1586365212
https://www.snowflake.com/blog/new-snowflake-features-released-in-january-2021/

https://spark.apache.org/news/index.html
https://spark.apache.org/releases/spark-release-3-0-0.html
https://spark.apache.org/releases/spark-release-3-0-2.html


.
https://www.udemy.com/course/oracle-12c-sql-tuning/
https://www.udemy.com/course/sql-performance-tuning-masterclass/
https://www.udemy.com/course/sql-tuning/




https://www.udemy.com/course/oracle-12c-sql-tuning/
{{{
Course content
19 sections • 86 lectures • 19h 46m total length

    Preview06:58

    Preview18:24
    Practice 1- Preparing Practice Environment - Part 2 of 2
    27:39

    Introduction to SQL Tuning - Part 1 of 2
    15:51
    Introduction to SQL Tuning - Part 2 of 2
    06:14

    Query Optimizer Fundamentals
    13:30
    Query Optimizer Fundamentals

    5 questions

    Reading Query Execution Plans
    14:46
    Displaying Query Execution Plans
    21:04
    Practice 2 - Displaying Execution Plans
    16:36
    Introduction to SQL Operators
    07:02
    Table and B-Tree Index SQL Operators
    20:50
    SQL Joins - Nested Loop Joins
    10:59
    SQL Joins - Hash Joins
    05:40
    SQL Joins - Sort Merge Joins
    11:15
    SQL Operators

    9 questions
    Practice 3 - Exploring SQL Operators and Joins
    08:36

    Influencing the Optimizer with Hints - Part 1 of 2
    13:34
    Influencing the Optimizer with Hints - Part 2 of 2
    15:38
    Practice 4 - Influencing the Optimizer with Hints - Part 1 of 2
    09:10
    Practice 4 - Influencing the Optimizer with Hints - Part 2 of 2
    11:17

    Optimizer Statistics Concepts
    11:33
    Practice 5 - Exploring Optimizer Statistics
    09:44
    Gathering Optimizer Statistics
    16:43
    Practice 6 - Gathering Optimizer Statistics
    15:08
    Setting Optimizer Statistics Preferences
    13:10
    Practice 7 - Setting Optimizer Statistics Preferences
    08:23
    Managing Histograms - Part 1 of 2
    16:43
    Managing Histograms - Part 2 of 2
    13:59
    Practice 8 - Managing Histograms - Part 1 of 2
    13:22
    Practice 8 - Managing Histograms - Part 2 of 2
    10:35
    Managing Extended Statistics
    16:11
    Practice 9 - Managing Extended Statistics
    12:58
    Managing Optimizer Statistics
    14:53
    Practice 10 - Managing Optimizer Statistics
    14:20
    Managing Historical Optimizer Statistics
    07:43
    Practice 11 - Managing Historical Optimizer Statistics
    11:47
    Using Optimizer Statistics Advisor
    08:21
    Practice 12 - Using Optimizer Statistics Advisor
    06:48

    Adaptive Query Optimization
    07:17
    Adaptive Plans
    17:01
    Practice 13 - Demonstrating Adaptive Plans
    17:22
    Statistics Feedback and Dynamic Statistics
    12:43
    Practice 14 - Statistics Feedback and Dynamic Statistics
    11:12
    SQL Plan Directives
    12:41
    Practice 15 - SQL Plan Directives
    12:34

    Improving Performance Through Cursor Sharing
    14:11
    Practice 16 - Improving Performance Through Cursor Sharing
    16:42

    Monitoring Database Operations in Real-time using DBMS_MONITOR
    12:45
    Practice 17 - Monitoring Database Operations in Real-time using DBMS_MONITOR -P1
    12:59
    Practice 17 - Monitoring Database Operations in Real-time using DBMS_MONITOR -P2
    09:01
    Tracing SQL Statements using DBMS_MONITOR
    12:33
    Using tkprof Utility
    19:01
    Practice 18 - Tracing SQL Statements using DBMS_MONITOR
    20:05
    More SQL Tracing Methods
    09:34
    Practice 19 - More SQL Tracing Methods
    06:10

    Managing SQL Tuning Sets
    12:41
    Practice 20 - Managing SQL Tuning Sets
    08:29

    Using SQL Tuning Advisor - Automatic Mode
    13:36
    Using SQL Tuning Advisor - Manual Mode
    05:22
    Practice 21 - Using SQL Tuning Advisor
    17:45
    Managing SQL Profiles
    21:15
    Practice 22 - Managing SQL Profiles
    22:33

    Managing SQL Plan Baselines - Part I
    23:43
    Managing SQL Plan Baselines - Part II
    15:42
    Practice 23 - Managing SQL Plan Baselines - Part 1 of 2
    23:07
    Practice 23 - Managing SQL Plan Baselines - Part 2 of 2
    20:15
    Using Stored Outlines and Migrating them to SQL Plan Baselines
    09:31
    Practice 24 - Migrating Stored Outlines to SQL Plan Baselines
    13:57
    Managing SQL Management Base (SMB)
    06:16

    Using SQL Access Advisor
    15:53
    Practice 25 - Using SQL Access Advisor
    18:17

    Performance Considerations when Working with Tables
    25:53
    Practice 26 - Performance Tips on Using Tables
    10:01

    Using Indexes - Part I
    20:02
    Practice 27 - Using Indexes - Part I
    21:43
    Using Indexes - Part II
    21:21
    Practice 28 - Using Indexes - Part II
    18:15
    Using Indexes - Part III
    28:21
    Practice 29 - Using Indexes - Part III
    11:42
    Star Transformation
    18:00
    Practice 30 - Star Transformation
    15:17

    Using Server Result Cache
    08:45
    Practice 31 Using Server Result Cache
    08:49

    Using SQL Performance Analyzer
    11:09
    Practice 32 - Using SQL Performance Analyzer
    12:09
    Practice 33 - Using SQL Tuning Health-Check Script (SQLHC)
    05:49

    Download Course Presentation and Practice Files
    00:02
}}}


https://www.udemy.com/course/sql-performance-tuning-masterclass/
{{{
Course content
14 sections • 226 lectures • 20h 2m total length

Preview04:52
UDEMY 101: How to Use Udemy? +Some Useful Tips (Do not Skip)
05:01
Welcome Gift! + Course Document

    00:36

Preview05:55

    Preview06:20

    Do You Have a Running Database in Your PC?
    00:30
    Why to know the Oracle Database Architecture and how much to know?

02:31
Preview09:17
Oracle Database Architecture Overview (Part 2)
06:04
Database Data Blocks in Detail
07:57
Preview05:32
What is Shared Pool?
06:31
What is Buffer Cache?
05:24
What is Redo Log Buffer?
04:18
What is Undo?
03:49
How a DML is processed and committed
04:28
Automatic Memory Management
02:06
Oracle Database Storage Architecture
03:57
Logical and Physical Database Structure
06:13
Quiz - Database Architecture

    8 questions

    When to Tune?
    07:40
    What is a Bad SQL?
    05:08
    Effective Schema Design
    08:42
    Table Partitioning
    07:15
    How an SQL Statement is Processed?
    09:32
    Why do we need the Optimizer?
    05:41
    Optimizer Overview
    03:25
    Query Transformer
    08:44
    Selectivity & Cardinality
    08:02
    What is "cost" in detail?
    04:51
    Plan Generator

03:56
Row Source Generator
03:38
SQL Tuning Principles and Strategies
08:10
Query Analysis Strategy
12:58
SQL Tuning Basics Assessment Test

    12 questions

    Execution Plan and Explain Plan in Details
    07:24
    Generating Statistics (Part 1)
    06:16
    Generating Statistics (Part 2)
    07:15
    Generating Statistics (Part 3)
    08:50
    Generating Statistics (Code Samples)
    00:20
    Generating Execution Plan
    12:06
    Generating Execution Plan (Code Samples)
    00:09

Preview12:38
Autotrace (Code Samples)
00:09
V$SQL_PLAN (Code Samples)
00:23
Reading the Execution Plans (Part 1)
13:12
Reading the Execution Plans (Part 2)
10:29
Reading the Execution Plans (Code Samples)
00:07
Analyzing the Execution Plans
08:18
Analyzing the Execution Plans (Code Samples)
00:14
Execution Plans & Statistics

    10 questions

    What are Indexes and How They work in details?

10:51
Types of Table and Index Access Paths
11:59
Table Access Full
08:35
Table Access Full (Code Samples)
00:11
Table Access by ROWID
06:18
Table Access by ROWID (Code Samples)
00:07
Index Unique Scan
04:48
Index Range Scan
10:40
Index Range Scan (Code Samples)
00:30
Index Full Scan (Code Samples)
01:00
Index Fast Full Scan
06:37
Index Fast Full Scan (Code Samples)
00:29
Index Skip Scan
14:14
Index Skip Scan (Code Samples)
00:25
Index Join Scan
05:37
Index Join Scan (Code Samples)
00:12
Table & Index Access Paths

    10 questions

    What are Hints and Why to Use Them?
    04:04
    How to use Hints

15:21
How to use Hints (Code Samples)
01:21
List of Some Useful Hints
00:08
Using Hints

    5 questions

    Join Methods Overview

05:10
Nested Loop Joins
12:09
Nested Loop Join (Code Samples)
00:31
Sort Merge Joins
10:19
Sort Merge Join (Code Samples)
00:24
Hash Joins
11:08
CODE: Hash Joins
00:09
Cartesian Joins
06:28
CODE: Cartesian Joins
00:03
Join Types Overview
03:00
Equijoins & Nonequijoins
03:36
CODE: Equijoins & Nonequijoins
00:06
Outer Joins
11:16
CODE: Outer Joins
00:31
Semijoins
06:00
CODE: Semijoins
00:10
Antijoins
03:26
CODE: Antijoins
00:19
Join Operations

    7 questions

    Result Cache Operator

11:23
CODE: Result Cache Operator
00:18
View Operator
07:29
CODE: View Operator
00:37
Clusters
14:31
CODE: Clusters
00:31
Sort Operators
07:07
CODE: Sort Operators
00:10
INLIST Operator
04:25
CODE: INLIST Operator
00:17
Count Stopkey Operator
03:34
CODE: Count Stopkey Operator
00:06
First Row Operator
05:15
CODE: First Row Operator
00:08
Filter Operator
01:41
CODE: Filter Operator
00:02
Concatenation Operator
03:06
CODE: Concatenation Operator
00:04
UNION Operators
02:54
CODE: Union Operators
00:07
Intersect Operator
05:20
CODE: Intersect Operator
00:18
Minus Operator
01:49
CODE: Minus Operator
00:11
Other Optimizer Operators

    5 questions

    How to find a performance problem and its tuning solution?

15:05
Ways of Getting the Execution Plan and the Statistics
16:27
Using the Real-Time SQL Monitoring Tool Part 1
10:35
Using the Real-Time SQL Monitoring Tool Part 2
13:55
Using the Real-Time SQL Monitoring Tool Part 3
12:00
CODE: Using the Real-Time SQL Monitoring Tool
00:17
Using the Trace Files & TKPROF Utility - Part 1
15:56
Using the Trace Files & TKPROF Utility - Part 2
20:21
Using the Trace Files & TKPROF Utility - Part 3
10:22
CODE: Using the Trace Files & TKPROF Utility
00:15
Get What You Need Only
07:13
CODE: Get What You Need Only
00:04
Index Usage
16:24
CODE: Index Usage
00:37
Using Concatenation Operator
03:35
CODE: Using Concatenation Operator
00:05
Using Arithmetic Operators
03:15
CODE: Using Arithmetic Operators
00:06
Using Like Conditions
06:43
CODE: Using Like Conditions
00:13
Using Functions on the Indexed Columns
04:52
CODE: Using Functions on the Indexed Columns
00:11
Handling NULL-Based Performance Problems
08:08
CODE: Handling NULL-Based Performance Problems
00:26
Using EXISTS instead of IN Clause
05:02
Using TRUNCATE instead of DELETE command
04:37
CODE: Using TRUNCATE instead of DELETE command
00:06
Data Type Mismatch
05:35
CODE: Data Type Mismatch
00:13
Tuning Ordered Queries
07:09
CODE: Tuning Ordered Queries
00:14
Retrieving the MIN & MAX Values
10:52
CODE: Retrieving the MIN & MAX Values
00:15
UNION and UNION ALL Operators (Which one is faster?)
03:17
UNION and UNION ALL Operators (Which one is faster?)
00:07
Avoid Using the HAVING Clause!
05:22
CODE: Avoid Using the HAVING Clause!
00:11
Preview10:22
CODE: Be Careful on Views!
00:38
Create Materialized Views
07:26
CODE: Create Materialized Views
00:28
Avoid Commit Too Much or Too Less!
04:36
Partition Pruning
06:10
CODE: Partition Pruning
00:07
Using BULK COLLECT
10:01
CODE: Using BULK COLLECT
01:06
Tuning the Join Order
06:49
CODE: Tuning the Join Order
00:19
Multitable DML Operations
07:19
CODE: Multitable DML Operations
00:51
Using Temporary Tables
07:18
CODE: Using Temporary Tables
00:35
Combining SQL Statements
04:55
CODE: Combining SQL Statements
00:19
Using "WITH" Clause
08:12
CODE: Using WITH Clause
00:59
Using Analytical Functions
04:49
CODE: Using Analytical Functions
00:15
SQL Tuning Techniques

    5 questions

Preview05:27
Index Selectivity & Cardinality
05:31
B-Tree Indexes in Details
12:13
CODE: B-Tree Indexes in Details
00:27
Bitmap Indexes in Details
19:28
CODE: Bitmap Indexes in Details
00:34
Bitmap Operations
07:21
Composite Indexes and Order of Indexed Columns
10:07
CODE: Composite Indexes and Order of Indexed Columns
00:17
Covering Indexes
08:00
CODE: Covering Indexes
00:12
Reverse Key Indexes
03:37
Bitmap Join Indexes
09:53
CODE: Bitmap Join Indexes
00:29
Combining Bitmap Indexes
08:31
CODE: Combining Bitmap Indexes
00:25
Function-Based Indexes
09:19
CODE: Function-Based Indexes
00:20
Index-Organized Tables
16:47
CODE: Index-Organized Tables
00:29
Cluster Indexes
08:30
CODE: Cluster Indexes
00:31
Invisible Indexes
07:31
CODE: Invisible Indexes
00:09
Index Key Compression - Part 1
05:23
Index Key Compression - Part 2
11:13
CODE: Index Key Compression
00:26
Full-Text Searches
20:20
CODE: Full-Text Search Indexes
00:34
Tuning Star Queries
07:17
CODE: Tuning Star Queries
00:35
Using Bind Variables
08:16
CODE: Using Bind Variables
00:40
Beware of Bind Variable Peeking
05:53
CODE: Beware of Bind Variable Peeking
00:16
Cursor Sharing
14:14
CODE: Cursor Sharing
00:42
Adaptive Cursor Sharing
16:31
CODE: Adaptive Cursor Sharing
00:27
Adaptive Plans
12:43
CODE: Adaptive Plans
00:12
Dynamic Statistics (Dynamic Sampling)
16:07
CODE: Dynamic Statistics (Dynamic Sampling)
00:22
About the Database Installation
03:11
Option 1: Having the Database with the Oracle VirtualBox Software
16:29
Option 1: How to Install the Virtual Box on Mac OS X?
01:51
Option 2: What is Pluggable Database?
03:13
Option 2: Downloading and Installing the Oracle Database
18:18
Option 2: Unlocking the HR Schema
07:34
Option 2: Configuring and Using Oracle SQL Developer
22:14
Option 2: Installing Sample Schemas in Oracle Database
07:44
Extra: 12c Installation
00:55
Option 2: How to Unlock the HR Schema in the Oracle Database 12c?
01:36
Option 2: Oracle Database 12c Installation into Your Computer
09:20
Option 2: Configuring and Using Oracle SQL Developer for Oracle Database 12c
10:12
Bonus Lecture
00:29
}}}


https://www.udemy.com/course/sql-tuning/
{{{
Course content
12 sections • 61 lectures • 3h 5m total length

Preview03:13
Preview02:22
Preview04:07
Preview01:44
Parsing

    2 questions

    Cost based Optimization
    02:08
    Gathering Statistics
    03:13
    Execution Plan
    02:11
    SQL Tuning Tools
    02:22
    Running Explain Plan
    02:57
    Optimizer statistics
    3 questions

    What is my Address?
    01:40
    Types of Table Accesses
    01:44
    Table Access FULL
    02:19
    Table Access by ROWID
    02:23
    Index Unique Scan
    03:03
    Index Range Scan
    03:17
    Choosing between FULL and INDEX scan
    01:46
    Access Paths
    3 questions

    Execution Plan
    01:12
    What should you look for?
    03:24
    What is COST?
    01:40
    Rules of Execution Plan Tree
    02:40
    Traversing through the Tree
    03:40
    Reading Execution Plan
    02:59
    Execution Plan Example #1
    02:15
    Execution Plan Example #2
    03:09
    Execution Plan Example #3
    04:34
    Execution Plan Example #4
    06:33

    SELECT consideration
    02:01
    Using Table Aliases
    02:16
    Using WHERE rather than HAVING
    02:57
    Simple Rules
    3 questions

    Index Suppression reasons
    01:48
    Use of <> operator
    03:55
    Use of SUBSTR function
    02:37
    Use of Arithmetic operators
    02:13
    Use of TRUNC function on Date columns
    02:16
    Use of || operator
    02:29
    Comparing a character column to a numeric value
    02:12
    Use of IS NULL and IS NOT NULL
    03:00
    Function based Indexes
    03:03
    Index Suppression SQL
    4 questions

    Use UNION ALL instead of UNION
    02:34
    Minimize Table lookups in a Query
    02:49
    EXISTS vs IN
    02:28
    Use EXISTS instead of DISTINCT
    02:54
    Reading same table multiple times?
    05:00
    Use TRUNCATE instead of DELETE
    03:32

    Reduce the number of Trips to the database
    01:52
    Issue frequent COMMIT statements
    01:33
    Using BULK COLLECT
    02:38

    Join Methods
    01:13
    Nested Loop Join
    05:02
    Hash Join
    02:28
    Sort Merge Join
    03:35

    Why HINTS?
    04:26
    Forcing a specific Join Method
    03:30
    HINTS list
    3 pages

    Invalid Optimizer Statistics
    05:10
    Checking SQL statements which are performing BAD
    04:58

    Effective Schema Design
    04:22
    Separate Tablespace for Data and Index
    03:25
    Index Organized Tables
    04:30
    Partitioned Tables
    05:32
    Bitmap Indexes
    05:59
}}}
<<showtoc>>

! manual way (this is recommended)

{{{

11:11:09 KARLARAO@cdb1> @spm_demo_query.sql

ALL_DISTINCT       SKEW
------------ ----------
           3          3


P_SQLID
-------------
a5jq5khm9w64n

Enter value for p_sqlid: a5jq5khm9w64n

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  a5jq5khm9w64n, child number 0
-------------------------------------
select * from skew where skew=3

Plan hash value: 246648590

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     8 (100)|          |
|*  1 |  TABLE ACCESS FULL| SKEW |   909 |  6363 |     8  (13)| 00:00:01 |

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("SKEW"=3)


18 rows selected.






11:11:43 KARLARAO@cdb1> @spm_baselines
Enter value for sql_text:
Enter value for exact_matching_signature:

no rows selected
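
-- (the @spm_baselines helper script is not shown; a minimal stand-in, assuming the
-- standard DBA_SQL_PLAN_BASELINES dictionary view, would be something like:)
-- select parsing_schema_name, created, plan_name, sql_handle, sql_text,
--        optimizer_cost, enabled, accepted, fixed, reproduced, origin
-- from   dba_sql_plan_baselines
-- where  sql_text like nvl('&sql_text', sql_text)
-- order  by created;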






-- create the baseline
DECLARE
my_plans pls_integer;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'a5jq5khm9w64n',plan_hash_value=>'246648590', fixed =>'YES', enabled=>'YES');
END;
/





11:12:52 KARLARAO@cdb1> DECLARE
my_plans pls_integer;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'a5jq5khm9w64n',plan_hash_value=>'246648590', fixed =>'YES', enabled=>'YES');
END;
/11:18:16   2  11:18:16   3  11:18:16   4  11:18:16   5  11:18:16   6

PL/SQL procedure successfully completed.



11:18:20 KARLARAO@cdb1> @spm_baselines
Enter value for sql_text:
Enter value for exact_matching_signature:

PARSING_ CREATED              PLAN_NAME                                SQL_HANDLE                SQL_TEXT                            OPTIMIZER_COST ENA ACC FIX REP ORIGIN
-------- -------------------- ---------------------------------------- ------------------------- ----------------------------------- -------------- --- --- --- --- --------
KARLARAO 03/23/20 11:18:18    SQL_PLAN_fahs3brrwbxcm950a48a8           SQL_e543035defc5f593      select * from skew where skew=3                  8 YES YES YES YES MANUAL-L
                                                                                                                                                                    OAD






--##############################################################################################################################
-- an index is created below, but the existing SQL_ID won't use it: the FIXED full-scan baseline pins the old plan
-- the fix: run the statement as a new SQL_ID to get the index plan (a new plan hash value), then add that PHV to the old SQL_ID's baseline
--##############################################################################################################################


11:20:38 KARLARAO@cdb1> @spm_demo_createindex.sql

Index created.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

11:21:27 KARLARAO@cdb1> @spm_demo_fudgestats.sql

PL/SQL procedure successfully completed.

11:21:36 KARLARAO@cdb1> @spm_demo_query.sql

ALL_DISTINCT       SKEW
------------ ----------
           3          3


P_SQLID
-------------
a5jq5khm9w64n

Enter value for p_sqlid: a5jq5khm9w64n

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  a5jq5khm9w64n, child number 0
-------------------------------------
select * from skew where skew=3

Plan hash value: 246648590

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| SKEW |     1 |     1 |     2   (0)| 00:00:01 |

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("SKEW"=3)

Note
-----
   - SQL plan baseline SQL_PLAN_fahs3brrwbxcm950a48a8 used for this statement


22 rows selected.

11:21:46 KARLARAO@cdb1> @spm_demo_query.sql

ALL_DISTINCT       SKEW
------------ ----------
           3          3


P_SQLID
-------------
a5jq5khm9w64n

Enter value for p_sqlid: a5jq5khm9w64n

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  a5jq5khm9w64n, child number 0
-------------------------------------
select * from skew where skew=3

Plan hash value: 246648590

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| SKEW |     1 |     1 |     2   (0)| 00:00:01 |

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("SKEW"=3)

Note
-----
   - SQL plan baseline SQL_PLAN_fahs3brrwbxcm950a48a8 used for this statement


22 rows selected.

11:21:59 KARLARAO@cdb1> @spm_baselines
Enter value for sql_text:
Enter value for exact_matching_signature:

PARSING_ CREATED              PLAN_NAME                                SQL_HANDLE                SQL_TEXT                            OPTIMIZER_COST ENA ACC FIX REP ORIGIN
-------- -------------------- ---------------------------------------- ------------------------- ----------------------------------- -------------- --- --- --- --- --------
KARLARAO 03/23/20 11:18:18    SQL_PLAN_fahs3brrwbxcm950a48a8           SQL_e543035defc5f593      select * from skew where skew=3                  8 YES YES YES YES MANUAL-L
                                                                                                                                                                    OAD








--## regather stats 
exec dbms_stats.gather_index_stats(user,'SKEW_IDX', no_invalidate => false); 
exec dbms_stats.gather_table_stats(user,'SKEW', no_invalidate => false);





--## index was picked up 

11:28:22 KARLARAO@cdb1> @spm_demo_query2.sql

ALL_DISTINCT       SKEW
------------ ----------
           3          3


P_SQLID
-------------
693ccxff9a8ku

Enter value for p_sqlid: 693ccxff9a8ku

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  693ccxff9a8ku, child number 0
-------------------------------------
select /* new */ * from skew where skew=3

Plan hash value: 1949605896

------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| SKEW     |     1 |     7 |     2   (0)| 00:00:01 |

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|*  2 |   INDEX RANGE SCAN                  | SKEW_IDX |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("SKEW"=3)


19 rows selected.










--## use coe.sql to generate a SQL profile from the NEW SQL_ID's index plan and force it onto the OLD SQL_ID
-- edit the generated sql file so its SQL text matches the OLD SQL_ID's text exactly

SQL>set lines 300
SQL>set serveroutput off
select * from skew where skew=3;
select * from table(dbms_xplan.display_cursor);SQL>
ALL_DISTINCT       SKEW
------------ ----------
           3          3

SQL>

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  a5jq5khm9w64n, child number 0
-------------------------------------
select * from skew where skew=3

Plan hash value: 1949605896

------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| SKEW     |     1 |     7 |     2   (0)| 00:00:01 |

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|*  2 |   INDEX RANGE SCAN                  | SKEW_IDX |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("SKEW"=3)

Note
-----
   - SQL profile coe_693ccxff9a8ku_1949605896 used for this statement

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


23 rows selected.





SQL>@spm_baselines
Enter value for sql_text:
Enter value for exact_matching_signature:

PARSING_ CREATED              PLAN_NAME                                SQL_HANDLE                SQL_TEXT                            OPTIMIZER_COST ENA ACC FIX REP ORIGIN
-------- -------------------- ---------------------------------------- ------------------------- ----------------------------------- -------------- --- --- --- --- --------
KARLARAO 03/23/20 11:18:18    SQL_PLAN_fahs3brrwbxcm950a48a8           SQL_e543035defc5f593      select * from skew where skew=3                  8 YES YES YES YES MANUAL-L
                                                                                                                                                                    OAD







      
-- add the other plan to the same baseline
-- you can even load it from a different SQL_ID: all that matters is that the SQL text hashes to the same EXACT_MATCHING_SIGNATURE, so the plan is tied to the existing SQL_HANDLE as a new PLAN_NAME
DECLARE
my_plans pls_integer;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'a5jq5khm9w64n',plan_hash_value=>'1949605896', fixed =>'YES', enabled=>'YES');
END;
/
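
-- (optional sanity check, assuming the standard V$SQL and DBA_SQL_PLAN_BASELINES views)
-- the cursor's EXACT_MATCHING_SIGNATURE must equal the baseline's SIGNATURE for the
-- loaded plan to land under the same SQL_HANDLE:
select sql_id, plan_hash_value, exact_matching_signature
from   v$sql
where  sql_id = 'a5jq5khm9w64n';

select signature, sql_handle, plan_name
from   dba_sql_plan_baselines
where  sql_handle = 'SQL_e543035defc5f593';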





-- the SQL_HANDLE is the same, but there's a new PLAN_NAME


SQL>@spm_baselines
Enter value for sql_text:
Enter value for exact_matching_signature:

PARSING_ CREATED              PLAN_NAME                                SQL_HANDLE                SQL_TEXT                            OPTIMIZER_COST ENA ACC FIX REP ORIGIN
-------- -------------------- ---------------------------------------- ------------------------- ----------------------------------- -------------- --- --- --- --- --------
KARLARAO 03/23/20 11:18:18    SQL_PLAN_fahs3brrwbxcm950a48a8           SQL_e543035defc5f593      select * from skew where skew=3                  8 YES YES YES YES MANUAL-L
                                                                                                                                                                    OAD

KARLARAO 03/23/20 11:41:32    SQL_PLAN_fahs3brrwbxcm08e93fe4           SQL_e543035defc5f593      select * from skew where skew=3                  2 YES YES YES YES MANUAL-L
                                                                                                                                                                    OAD







-- verify
set lines 300
set serveroutput off
select * from skew where skew=3;
select * from table(dbms_xplan.display_cursor);

 
 



Connected.
11:42:26 KARLARAO@cdb1>
set lines 300
set serveroutput off
select * from skew where skew=3;
select * from table(dbms_xplan.display_cursor);11:42:27 KARLARAO@cdb1> 11:42:27 KARLARAO@cdb1> 11:42:27 KARLARAO@cdb1>
ALL_DISTINCT       SKEW
------------ ----------
           3          3

11:42:27 KARLARAO@cdb1>

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  a5jq5khm9w64n, child number 0
-------------------------------------
select * from skew where skew=3

Plan hash value: 1949605896

------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| SKEW     |     1 |     7 |     2   (0)| 00:00:01 |

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|*  2 |   INDEX RANGE SCAN                  | SKEW_IDX |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("SKEW"=3)

Note
-----
   - SQL profile coe_693ccxff9a8ku_1949605896 used for this statement

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
   - SQL plan baseline SQL_PLAN_fahs3brrwbxcm08e93fe4 used for this statement


24 rows selected.







--## drop the sql profile and verify the baseline is used on its own 
exec dbms_sqltune.drop_sql_profile(name => '&profile_name');


11:46:10 KARLARAO@cdb1> set lines 300
set serveroutput off
select * from skew where skew=3;
select * from table(dbms_xplan.display_cursor);11:46:11 KARLARAO@cdb1> 11:46:11 KARLARAO@cdb1>
ALL_DISTINCT       SKEW
------------ ----------
           3          3

11:46:11 KARLARAO@cdb1>

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  a5jq5khm9w64n, child number 0
-------------------------------------
select * from skew where skew=3

Plan hash value: 1949605896

------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| SKEW     |     1 |     7 |     2   (0)| 00:00:01 |

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|*  2 |   INDEX RANGE SCAN                  | SKEW_IDX |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("SKEW"=3)

Note
-----
   - SQL plan baseline SQL_PLAN_fahs3brrwbxcm08e93fe4 used for this statement

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


23 rows selected.






--## here you'll see two plans 


11:46:34 KARLARAO@cdb1> @spm_baselines
Enter value for sql_text:
Enter value for exact_matching_signature:

PARSING_ CREATED              PLAN_NAME                                SQL_HANDLE                SQL_TEXT                            OPTIMIZER_COST ENA ACC FIX REP ORIGIN
-------- -------------------- ---------------------------------------- ------------------------- ----------------------------------- -------------- --- --- --- --- --------
KARLARAO 03/23/20 11:18:18    SQL_PLAN_fahs3brrwbxcm950a48a8           SQL_e543035defc5f593      select * from skew where skew=3                  8 YES YES YES YES MANUAL-L
                                                                                                                                                                    OAD

KARLARAO 03/23/20 11:41:32    SQL_PLAN_fahs3brrwbxcm08e93fe4           SQL_e543035defc5f593      select * from skew where skew=3                  2 YES YES YES YES MANUAL-L
                                                                                                                                                                    OAD


11:46:38 KARLARAO@cdb1>
11:46:38 KARLARAO@cdb1> @spm_plans
Enter value for sql_handle: SQL_e543035defc5f593

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL handle: SQL_e543035defc5f593
SQL text: select * from skew where skew=3
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_fahs3brrwbxcm08e93fe4         Plan id: 149503972
Enabled: YES     Fixed: YES     Accepted: YES     Origin: MANUAL-LOAD
Plan rows: From dictionary
--------------------------------------------------------------------------------

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Plan hash value: 1949605896

--------------------------------------------------------
| Id  | Operation                           | Name     |
--------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| SKEW     |
|   2 |   INDEX RANGE SCAN                  | SKEW_IDX |
--------------------------------------------------------


PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_fahs3brrwbxcm950a48a8         Plan id: 2500479144
Enabled: YES     Fixed: YES     Accepted: YES     Origin: MANUAL-LOAD
Plan rows: From dictionary
--------------------------------------------------------------------------------

Plan hash value: 246648590

----------------------------------
| Id  | Operation         | Name |
----------------------------------

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |
|   1 |  TABLE ACCESS FULL| SKEW |
----------------------------------

36 rows selected.
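
-- (the @spm_plans helper script is not shown; a minimal stand-in, assuming the
-- standard DBMS_XPLAN package, would be:)
-- select * from table(dbms_xplan.display_sql_plan_baseline(
--        sql_handle => '&sql_handle', format => 'BASIC'));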






--## let's try to disable that baseline 

set verify off
declare
myplan pls_integer;
begin
myplan:=DBMS_SPM.ALTER_SQL_PLAN_BASELINE (sql_handle => '&sql_handle',plan_name  => '&plan_name',attribute_name => 'ENABLED',   attribute_value => '&YES_OR_NO');
end;
/



set verify off
declare
myplan pls_integer;
begin
myplan:=DBMS_SPM.ALTER_SQL_PLAN_BASELINE (sql_handle => '&sql_handle',plan_name  => '&plan_name',attribute_name => 'ENABLED',   attribute_value => '&YES_OR_NO');
end;
/11:49:35 KARLARAO@cdb1> 11:49:35 KARLARAO@cdb1> 11:49:35   2  11:49:35   3  11:49:35   4  11:49:35   5  11:49:35   6
Enter value for sql_handle: SQL_e543035defc5f593
Enter value for plan_name: SQL_PLAN_fahs3brrwbxcm08e93fe4
Enter value for yes_or_no: no

PL/SQL procedure successfully completed.


 @spm_baselines
Enter value for sql_text:
Enter value for exact_matching_signature:

PARSING_ CREATED              PLAN_NAME                                SQL_HANDLE                SQL_TEXT                            OPTIMIZER_COST ENA ACC FIX REP ORIGIN
-------- -------------------- ---------------------------------------- ------------------------- ----------------------------------- -------------- --- --- --- --- --------
KARLARAO 03/23/20 11:18:18    SQL_PLAN_fahs3brrwbxcm950a48a8           SQL_e543035defc5f593      select * from skew where skew=3                  8 YES YES YES YES MANUAL-L
                                                                                                                                                                    OAD

KARLARAO 03/23/20 11:41:32    SQL_PLAN_fahs3brrwbxcm08e93fe4           SQL_e543035defc5f593      select * from skew where skew=3                  2 NO  YES YES YES MANUAL-L
                                                                                                                                                                    OAD





--## after disabling the index-plan baseline, the full-scan baseline was used 



11:50:04 KARLARAO@cdb1> set lines 300
set serveroutput off
select * from skew where skew=3;
select * from table(dbms_xplan.display_cursor);11:50:31 KARLARAO@cdb1> 11:50:31 KARLARAO@cdb1>
ALL_DISTINCT       SKEW
------------ ----------
           3          3

11:50:31 KARLARAO@cdb1>

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  a5jq5khm9w64n, child number 1
-------------------------------------
select * from skew where skew=3

Plan hash value: 246648590

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     8 (100)|          |
|*  1 |  TABLE ACCESS FULL| SKEW |     1 |     7 |     8  (13)| 00:00:01 |

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("SKEW"=3)

Note
-----
   - SQL plan baseline SQL_PLAN_fahs3brrwbxcm950a48a8 used for this statement


22 rows selected.






--## let's disable the remaining (full-scan) baseline as well 

--## with no enabled baseline left, the optimizer picks up the index on its own 

set lines 300
set serveroutput off
select * from skew where skew=3;
select * from table(dbms_xplan.display_cursor);



11:52:49 KARLARAO@cdb1> set lines 300
set serveroutput off
select * from skew where skew=3;
select * from table(dbms_xplan.display_cursor);11:53:17 KARLARAO@cdb1> 11:53:17 KARLARAO@cdb1>
ALL_DISTINCT       SKEW
------------ ----------
           3          3

11:53:17 KARLARAO@cdb1>

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  a5jq5khm9w64n, child number 0
-------------------------------------
select * from skew where skew=3

Plan hash value: 1949605896

------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| SKEW     |     1 |     7 |     2   (0)| 00:00:01 |

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|*  2 |   INDEX RANGE SCAN                  | SKEW_IDX |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("SKEW"=3)


19 rows selected.








-- code to drop the individual baselines 


set verify off
DECLARE
  plans_dropped    PLS_INTEGER;
BEGIN
  plans_dropped := DBMS_SPM.drop_sql_plan_baseline (
sql_handle => '&sql_handle',
plan_name  => '&plan_name');
DBMS_OUTPUT.put_line(plans_dropped);
END;
/




}}}
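
The block above drops a baseline outright; the earlier step talked about disabling one instead. A minimal sketch of the disable variant, assuming the same `&sql_handle`/`&plan_name` substitution variables, using `DBMS_SPM.ALTER_SQL_PLAN_BASELINE` (which flips attributes rather than deleting the plan):
{{{
-- sketch: disable (rather than drop) a baseline by setting its ENABLED attribute
set verify off
DECLARE
  plans_altered PLS_INTEGER;
BEGIN
  plans_altered := DBMS_SPM.alter_sql_plan_baseline (
    sql_handle      => '&sql_handle',
    plan_name       => '&plan_name',
    attribute_name  => 'enabled',
    attribute_value => 'NO');
  -- number of plans altered
  DBMS_OUTPUT.put_line(plans_altered);
END;
/
}}}
A disabled baseline stays in the SQL management base and can be re-enabled later with `attribute_value => 'YES'`, which makes it safer than dropping during testing.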




! automatic pickup of plans using evolve 
{{{

20:02:22 KARLARAO@cdb1> @spm_demo_query.sql

ALL_DISTINCT       SKEW
------------ ----------
           3          3


PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  a5jq5khm9w64n, child number 1
-------------------------------------
select * from skew where skew=3

Plan hash value: 246648590

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     8 (100)|          |
|*  1 |  TABLE ACCESS FULL| SKEW |     1 |     7 |     8  (13)| 00:00:01 |

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("SKEW"=3)

Note
-----
   - SQL plan baseline SQL_PLAN_fahs3brrwbxcm950a48a8 used for this statement


22 rows selected.

20:02:31 KARLARAO@cdb1> @spm_baselines.sql
Enter value for sql_text:
Enter value for exact_matching_signature:

PARSING_ CREATED              PLAN_NAME                                SQL_HANDLE                SQL_TEXT                            OPTIMIZER_COST ENA ACC FIX REP ORIGIN
-------- -------------------- ---------------------------------------- ------------------------- ----------------------------------- -------------- --- --- --- --- --------

KARLARAO 03/22/20 19:58:58    SQL_PLAN_fahs3brrwbxcm950a48a8           SQL_e543035defc5f593      select * from skew where skew=3                  2 YES YES NO  YES AUTO-CAP
                                                                                                                                                                    TURE

KARLARAO 03/22/20 20:01:58    SQL_PLAN_fahs3brrwbxcm08e93fe4           SQL_e543035defc5f593      select * from skew where skew=3                  2 YES NO  NO  YES AUTO-CAP
                                                                                                                                                                    TURE



20:02:42 KARLARAO@cdb1> @spm_plans
Enter value for sql_handle: SQL_e543035defc5f593

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL handle: SQL_e543035defc5f593
SQL text: select * from skew where skew=3
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_fahs3brrwbxcm08e93fe4         Plan id: 149503972
Enabled: YES     Fixed: NO      Accepted: NO      Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Plan hash value: 1949605896

--------------------------------------------------------
| Id  | Operation                           | Name     |
--------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| SKEW     |
|   2 |   INDEX RANGE SCAN                  | SKEW_IDX |
--------------------------------------------------------


PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_fahs3brrwbxcm950a48a8         Plan id: 2500479144
Enabled: YES     Fixed: NO      Accepted: YES     Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------

Plan hash value: 246648590

----------------------------------
| Id  | Operation         | Name |
----------------------------------

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |
|   1 |  TABLE ACCESS FULL| SKEW |
----------------------------------

36 rows selected.









10:48:47 KARLARAO@cdb1>  @spm_evolve.sql
Enter value for sql_handle: SQL_e543035defc5f593
Enter value for verify: yes
Enter value for commit: yes
GENERAL INFORMATION SECTION
---------------------------------------------------------------------------------------------

 Task Information:
 ---------------------------------------------
 Task Name            : TASK_33661
 Task Owner           : KARLARAO
 Execution Name       : EXEC_36257
 Execution Type       : SPM EVOLVE
 Scope                : COMPREHENSIVE
 Status               : COMPLETED
 Started              : 03/23/2020 10:48:59
 Finished             : 03/23/2020 10:48:59
 Last Updated         : 03/23/2020 10:48:59
 Global Time Limit    : 2147483646
 Per-Plan Time Limit  : UNUSED
 Number of Errors     : 0
---------------------------------------------------------------------------------------------

SUMMARY SECTION
---------------------------------------------------------------------------------------------
  Number of plans processed  : 1
  Number of findings         : 2
  Number of recommendations  : 1
  Number of errors           : 0
---------------------------------------------------------------------------------------------

DETAILS SECTION
---------------------------------------------------------------------------------------------
 Object ID          : 2
 Test Plan Name     : SQL_PLAN_fahs3brrwbxcm08e93fe4
 Base Plan Name     : SQL_PLAN_fahs3brrwbxcm950a48a8
 SQL Handle         : SQL_e543035defc5f593
 Parsing Schema     : KARLARAO
 Test Plan Creator  : KARLARAO

 SQL Text           : select * from skew where skew=3

Execution Statistics:
-----------------------------
                    Base Plan                     Test Plan
                    ----------------------------  ----------------------------
 Elapsed Time (s):  .000044                       .000003
 CPU Time (s):      .000019                       0
 Buffer Gets:       2                             0
 Optimizer Cost:    8                             2
 Disk Reads:        0                             0
 Direct Writes:     0                             0
 Rows Processed:    0                             0
 Executions:        10                            10


FINDINGS SECTION
---------------------------------------------------------------------------------------------

Findings (2):
-----------------------------
 1. The plan was verified in 0.11000 seconds. It passed the benefit criterion
    because its verified performance was 6.67303 times better than that of the
    baseline plan.
 2. The plan was automatically accepted.

Recommendation:
-----------------------------
 Consider accepting the plan.


EXPLAIN PLANS SECTION
---------------------------------------------------------------------------------------------

Baseline Plan
-----------------------------
 Plan Id          : 42237
 Plan Hash Value  : 2500479144


---------------------------------------------------------------------
| Id  | Operation           | Name | Rows | Bytes | Cost | Time     |
---------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |    1 |     7 |    8 | 00:00:01 |
| * 1 |   TABLE ACCESS FULL | SKEW |    1 |     7 |    8 | 00:00:01 |
---------------------------------------------------------------------

Predicate Information (identified by operation id):
------------------------------------------
* 1 - filter("SKEW"=3)


Test Plan
-----------------------------
 Plan Id          : 42238
 Plan Hash Value  : 149503972

-------------------------------------------------------------------------------------------
| Id  | Operation                             | Name     | Rows | Bytes | Cost | Time
|
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |          |    1 |     7 |    2 | 00:00:01 |
|   1 |   TABLE ACCESS BY INDEX ROWID BATCHED | SKEW     |    1 |     7 |    2 | 00:00:01 |
| * 2 |    INDEX RANGE SCAN                   | SKEW_IDX |    1 |       |    1 | 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
------------------------------------------
* 2 - access("SKEW"=3)

---------------------------------------------------------------------------------------------

PL/SQL procedure successfully completed.











10:52:32 KARLARAO@cdb1> @spm_plans
Enter value for sql_handle: SQL_e543035defc5f593

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL handle: SQL_e543035defc5f593
SQL text: select * from skew where skew=3
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_fahs3brrwbxcm08e93fe4         Plan id: 149503972
Enabled: YES     Fixed: NO      Accepted: YES     Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Plan hash value: 1949605896

--------------------------------------------------------
| Id  | Operation                           | Name     |
--------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| SKEW     |
|   2 |   INDEX RANGE SCAN                  | SKEW_IDX |
--------------------------------------------------------


PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_fahs3brrwbxcm950a48a8         Plan id: 2500479144
Enabled: YES     Fixed: NO      Accepted: YES     Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------

Plan hash value: 246648590

----------------------------------
| Id  | Operation         | Name |
----------------------------------

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |
|   1 |  TABLE ACCESS FULL| SKEW |
----------------------------------

36 rows selected.











10:53:09 KARLARAO@cdb1> @spm_baselines
Enter value for sql_text:
Enter value for exact_matching_signature:

PARSING_ CREATED              PLAN_NAME                                SQL_HANDLE                SQL_TEXT                            OPTIMIZER_COST ENA ACC FIX REP ORIGIN
-------- -------------------- ---------------------------------------- ------------------------- ----------------------------------- -------------- --- --- --- --- --------

KARLARAO 03/22/20 19:58:58    SQL_PLAN_fahs3brrwbxcm950a48a8           SQL_e543035defc5f593      select * from skew where skew=3                  2 YES YES NO  YES AUTO-CAP
                                                                                                                                                                    TURE


KARLARAO 03/22/20 20:01:58    SQL_PLAN_fahs3brrwbxcm08e93fe4           SQL_e543035defc5f593      select * from skew where skew=3                  2 YES YES NO  YES AUTO-CAP
                                                                                                                                                                    TURE








10:56:04 KARLARAO@cdb1> set serveroutput off
10:56:14 KARLARAO@cdb1> select * from skew where skew=3;
select * from table(dbms_xplan.display_cursor);
ALL_DISTINCT       SKEW
------------ ----------
           3          3

10:56:16 KARLARAO@cdb1>

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  a5jq5khm9w64n, child number 0
-------------------------------------
select * from skew where skew=3

Plan hash value: 1949605896

------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| SKEW     |     1 |     7 |     2   (0)| 00:00:01 |

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|*  2 |   INDEX RANGE SCAN                  | SKEW_IDX |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("SKEW"=3)

Note
-----
   - SQL plan baseline SQL_PLAN_fahs3brrwbxcm08e93fe4 used for this statement

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


23 rows selected.






}}}
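
The `@spm_evolve.sql` script above presumably wraps the 12c task-based evolve API. A hand-run equivalent sketch (task and execution names are generated by the database and will differ from the `TASK_33661`/`EXEC_36257` seen above):
{{{
-- sketch: manual SPM evolve using the task-based DBMS_SPM API (12c+)
set serveroutput on
DECLARE
  tname VARCHAR2(128);
  ename VARCHAR2(128);
  rep   CLOB;
BEGIN
  tname := DBMS_SPM.create_evolve_task(sql_handle => 'SQL_e543035defc5f593');
  ename := DBMS_SPM.execute_evolve_task(task_name => tname);
  rep   := DBMS_SPM.report_evolve_task(task_name => tname, execution_name => ename);
  DBMS_OUTPUT.put_line(rep);
  -- accept the plans that passed verification; returns the count accepted
  DBMS_OUTPUT.put_line(DBMS_SPM.implement_evolve_task(task_name => tname));
END;
/
}}}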

<<showtoc>>

! from this guy https://medium.com/@benstr/meteorjs-vs-angularjs-aint-a-thing-3559b74d52cc
<<<
So bro, answer me… what should I learn?
First - Javascript, jQuery & maybe Node.

Second, It depends on your end goals…

… Wanna work for Facebook? Learn React, Flux, PHP, etc.
… Wanna work for Google? Learn Angular, Dart, Polymer, Python, etc.
… Wanna work for a 2 to 4 year old startup? Learn M-E-A-N
… Wanna work for a 5 to 10 year old startup? Learn Angular & Ruby on Rails
… Wanna create a new startup and impress everyone with how fast you add new features? Learn Meteor (& add whatever UI framework you want)

One last note for beginners. When building a web app you are going to deal with a lot of components (servers, databases, frameworks, pre-processors, packages, testing, …). To manage all this we created automated builders like Grunt & Gulp. After all, making web apps is serious and complicated business… or did we just make it complicated so it seems serious??

If you rather not bother with all that complicated build stuff then choose Meteor, it does it all auto-magically.
<<<

! angular-meteor 
http://angular-meteor.com/manifest


! bootstrap
http://stackoverflow.com/questions/14546709/what-is-bootstrap
http://getbootstrap.com/getting-started/
a good example is this http://joshcanfindit.com/



! discussion forums
http://www.quora.com/JavaScript-Frameworks/AngularJS-Meteor-Backbone-Express-or-plain-NodeJs-When-to-use-each-one
http://www.quora.com/Should-I-learn-Angular-js-or-Meteor



! ember.js 
!! https://www.emberscreencasts.com/
!! http://www.letscodejavascript.com/

!! vs backbone.js 
http://smus.com/backbone-and-ember/
backbone-ember-back-and-forth-transcript.txt https://gist.github.com/jashkenas/1732351

!! vs rails
https://www.airpair.com/ember.js/posts/top-mistakes-ember-rails
http://aokolish.me/blog/2014/11/16/8-reasons-i-won't-be-choosing-ember.js-for-my-next-app/

!! transactions, CRUD
http://bigbinary.com/videos/learn-ember-js/crud-application-in-ember-js  <-- video!
http://blog.trackets.com/2013/02/02/using-transactions-in-ember-data.html
http://blog.trackets.com/2013/01/27/ember-data-in-depth.html
http://embersherpa.com/articles/crud-example-app-without-ember-data/
http://discuss.emberjs.com/t/beginner-guidance-on-building-crud/4990
http://stackoverflow.com/questions/18691644/crud-operations-using-ember-model

!! sample real time web app 
http://www.codeproject.com/Articles/511031/A-sample-real-time-web-application-using-Ember-js

!! HTMLBars
http://www.lynda.com/Emberjs-tutorials/About-HTMLBars/178116/191855-4.html

!! ember and d3.js
https://corner.squareup.com/2012/04/building-analytics.html


! backbone.js 

http://stackoverflow.com/questions/16284724/what-does-var-app-app-do
<<<
If app is already defined, then it does nothing. If app is not defined, then it's equivalent to var app = {};
<<<
https://www.quora.com/What-are-the-pros-of-using-Handlebars-template-over-Underscore-js
https://engineering.linkedin.com/frontend/client-side-templating-throwdown-mustache-handlebars-dustjs-and-more
http://www.pluralsight.com/courses/choosing-javascript-framework
http://www.pluralsight.com/search/?searchTerm=backbone.js

I didn't really like backbone at all. It was a pain. https://news.ycombinator.com/item?id=4427556

!! backbone.js and d3.js 
Sam Selikoff - Using D3 with Backbone, Angular and Ember https://www.youtube.com/watch?v=ca3pQWc2-Xs <-- good stuff
https://github.com/samselikoff/talks/tree/master/4-apr2014-using-d3-backbone-angular-ember <-- good stuff
Backbone and D3 in a large, complex app https://groups.google.com/forum/#!topic/d3-js/3gmyzPOXNBM
D3 with Backbone / D3 with Angular / D3 with Ember http://stackoverflow.com/questions/17050921/d3-with-backbone-d3-with-angular-d3-with-ember

!! react.js as a view - Integrating React With Backbone
http://www.slideshare.net/RyanRoemer/backbonejs-with-react-views-server-rendering-virtual-dom-and-more <-- good stuff
http://timecounts.github.io/backbone-react-redux/#61
http://www.thomasboyt.com/2013/12/17/using-reactjs-as-a-backbone-view.html
https://blog.engineyard.com/2015/integrating-react-with-backbone
http://joelburget.com/backbone-to-react/
https://blog.mayflower.de/3937-Backbone-React.html
http://clayallsopp.com/posts/from-backbone-to-react/
http://leoasis.github.io/posts/2014/03/22/from_backbone_views_to_react/



! react.js 
http://www.pluralsight.com/search/?searchTerm=react.js

!! react as a view in ember 
http://discuss.emberjs.com/t/can-reactjs-be-used-as-a-view-within-emberjs/3470

!! react and d3.js 
http://nicolashery.com/integrating-d3js-visualizations-in-a-react-app/
https://www.codementor.io/reactjs/tutorial/3-steps-scalable-data-visualization-react-js-d3-js
http://10consulting.com/2014/02/19/d3-plus-reactjs-for-charting/

!! react vs ember 
Choosing Ember over React in 2016 https://blog.instant2fa.com/choosing-ember-over-react-in-2016-41a2e7fd341#.1712iqvw8
https://grantnorwood.com/why-i-chose-ember-over-react/
Check this React vs. Ember presentation by Alex Matchneer, a lot of good points on uni-directional flow. http://bit.ly/2fk0Ybe
http://www.creativebloq.com/web-design/react-goes-head-head-emberjs-31514361
http://www.slideshare.net/mraible/comparing-hot-javascript-frameworks-angularjs-emberjs-and-reactjs-springone-2gx-2015



! RoR
https://www.quora.com/Which-is-superior-between-Node-js-vs-RoR-vs-Go
http://www.hostingadvice.com/blog/nodejs-vs-golang/
https://www.codementor.io/learn-programming/ruby-on-rails-vs-node-js-backend-language-for-beginners
https://hackhands.com/use-ruby-rails-node-js-next-projectstartup/
https://www.quora.com/Which-server-side-programming-language-is-the-best-for-a-starting-programmer-Perl-PHP-Python-Ruby-JavaScript-Node-Scala-Java-Go-ASP-NET-or-ColdFusion
https://www.quora.com/Which-is-the-best-option-for-a-Ruby-on-Rails-developer-AngularJS-or-Ember-js






! references
https://en.wikipedia.org/wiki/Comparison_of_JavaScript_frameworks








http://www.cyberciti.biz/tips/what-is-devshm-and-its-practical-usage.html
http://superuser.com/questions/45342/when-should-i-use-dev-shm-and-when-should-i-use-tmp
http://download.oracle.com/docs/cd/B28359_01/server.111/b32009/appi_vlm.htm

tanel mentioned he used it as persistent storage during a migration on one database: it needed fast writes, so he put the redo logs on /dev/shm. This is dangerous because data in /dev/shm does not survive an OS reboot, so if the server crashes you have to do a restore/recover.
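
A quick way to confirm /dev/shm is a RAM-backed tmpfs on a Linux box (sizes vary per host; by default tmpfs is capped at roughly half of RAM):
{{{
# show size/usage of the shared-memory mount
df -h /dev/shm
# confirm the filesystem type is tmpfs
mount | grep /dev/shm
}}}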
<<showtoc>>


! install software on node2
{{{
[root@node2 scripts]# ./install_krb5.sh
}}}

! test login 
{{{
[root@node2 scripts]# kinit admin/admin
Password for admin/admin@EXAMPLE.COM: 
[root@node2 scripts]# 
[root@node2 scripts]# klist
Ticket cache: KEYRING:persistent:0:0
Default principal: admin/admin@EXAMPLE.COM

Valid starting       Expires              Service principal
01/08/2019 17:47:41  01/09/2019 17:47:41  krbtgt/EXAMPLE.COM@EXAMPLE.COM
[root@node2 scripts]# 
}}}

! configure kerberos in ambari 
[img(90%,90%)[https://i.imgur.com/GTkXrtL.png]]

[img(90%,90%)[https://i.imgur.com/u4opvr0.png]]

[img(90%,90%)[https://i.imgur.com/VbQmRUQ.png]]

[img(90%,90%)[https://i.imgur.com/oHHdXC1.png]]

[img(90%,90%)[https://i.imgur.com/MlbhMgw.png]]

[img(90%,90%)[https://i.imgur.com/7i7JRTT.png]]

[img(90%,90%)[https://i.imgur.com/o7RhsLh.png]]

[img(90%,90%)[https://i.imgur.com/yx6ORX2.png]]

* restart ambari server
[img(90%,90%)[https://i.imgur.com/dBtX7my.png]]

* manually restart other services 
[img(90%,90%)[https://i.imgur.com/GiWRr6d.png]]


! test kerberos from hdfs 
* it errors because only the admin user has been configured with credentials
{{{

[vagrant@node1 data]$ hadoop fs -ls
19/01/08 18:15:59 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "node1.example.com/192.168.199.2"; destination host is: "node1.example.com":8020; 

}}}

! from KDC host, add principal on other users 
* kerberos can be linked to an existing Active Directory through a trust (which needs to be configured), so AD users are automatically recognized 
* here we are adding the vagrant user as a principal 
{{{

-- summary commands 
sudo su - 
klist
kinit admin/admin
kadmin.local -q "addprinc vagrant"


-- detail 
[root@node2 scripts]# klist
klist: No credentials cache found (filename: /tmp/krb5cc_0)
[root@node2 scripts]# 
[root@node2 scripts]# 
[root@node2 scripts]# kinit admin/admin
Password for admin/admin@EXAMPLE.COM: 
[root@node2 scripts]# 
[root@node2 scripts]# 
[root@node2 scripts]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin/admin@EXAMPLE.COM

Valid starting       Expires              Service principal
01/08/2019 18:34:19  01/09/2019 18:34:19  krbtgt/EXAMPLE.COM@EXAMPLE.COM
[root@node2 scripts]# 
[root@node2 scripts]# 
[root@node2 scripts]# kadmin.local -q "addprinc vagrant"
Authenticating as principal admin/admin@EXAMPLE.COM with password.
WARNING: no policy specified for vagrant@EXAMPLE.COM; defaulting to no policy
Enter password for principal "vagrant@EXAMPLE.COM": <USE THE VAGRANT USER PASSWORD>
Re-enter password for principal "vagrant@EXAMPLE.COM": <USE THE VAGRANT USER PASSWORD>
Principal "vagrant@EXAMPLE.COM" created.

}}}
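
Password-based kinit works for interactive users; services (and scripted jobs) usually authenticate with a keytab instead. A sketch of exporting the new principal to a keytab (the path is illustrative) — note that ktadd randomizes the principal's password unless -norandkey is given, which kadmin.local supports:
{{{
# sketch: export the vagrant principal to a keytab without changing its password
kadmin.local -q "ktadd -norandkey -k /etc/security/keytabs/vagrant.keytab vagrant"

# authenticate from the keytab, no password prompt
kinit -kt /etc/security/keytabs/vagrant.keytab vagrant
klist
}}}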


! test new user principal 
{{{
[vagrant@node1 ~]$ kinit
Password for vagrant@EXAMPLE.COM: 
[vagrant@node1 ~]$ 
[vagrant@node1 ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: vagrant@EXAMPLE.COM

Valid starting       Expires              Service principal
01/08/2019 18:35:52  01/09/2019 18:35:52  krbtgt/EXAMPLE.COM@EXAMPLE.COM
[vagrant@node1 ~]$ 
[vagrant@node1 ~]$ hadoop fs -ls
Found 3 items
-rw-r--r--   3 vagrant vagrant   16257213 2019-01-08 06:27 salaries.csv
-rw-r--r--   3 vagrant vagrant   16257213 2019-01-08 06:31 salaries2.csv
drwxr-xr-x   - vagrant vagrant          0 2019-01-08 06:58 test

}}}


! list principals 
{{{
kadmin.local -q "list_principals"

[root@node2 scripts]# kadmin.local -q "list_principals"
Authenticating as principal admin/admin@EXAMPLE.COM with password.
HTTP/node1.example.com@EXAMPLE.COM
HTTP/node2.example.com@EXAMPLE.COM
HTTP/node3.example.com@EXAMPLE.COM
K/M@EXAMPLE.COM
activity_analyzer/node1.example.com@EXAMPLE.COM
activity_explorer/node1.example.com@EXAMPLE.COM
admin/admin@EXAMPLE.COM
ambari-qa-hadoop@EXAMPLE.COM
ambari-server-hadoop@EXAMPLE.COM
amshbase/node1.example.com@EXAMPLE.COM
amszk/node1.example.com@EXAMPLE.COM
dn/node1.example.com@EXAMPLE.COM
dn/node2.example.com@EXAMPLE.COM
dn/node3.example.com@EXAMPLE.COM
hdfs-hadoop@EXAMPLE.COM
hive/node1.example.com@EXAMPLE.COM
hive/node2.example.com@EXAMPLE.COM
hive/node3.example.com@EXAMPLE.COM
jhs/node2.example.com@EXAMPLE.COM
kadmin/admin@EXAMPLE.COM
kadmin/changepw@EXAMPLE.COM
kadmin/node2.example.com@EXAMPLE.COM
keyadmin@EXAMPLE.COM
kiprop/node2.example.com@EXAMPLE.COM
krbtgt/EXAMPLE.COM@EXAMPLE.COM
nm/node1.example.com@EXAMPLE.COM
nm/node2.example.com@EXAMPLE.COM
nm/node3.example.com@EXAMPLE.COM
nn/node1.example.com@EXAMPLE.COM
nn/node2.example.com@EXAMPLE.COM
nn@EXAMPLE.COM
ranger@EXAMPLE.COM
rangeradmin/node2.example.com@EXAMPLE.COM
rangerlookup/node2.example.com@EXAMPLE.COM
rangertagsync/node1.example.com@EXAMPLE.COM
rangertagsync/node2.example.com@EXAMPLE.COM
rangerusersync/node2.example.com@EXAMPLE.COM
rm/node2.example.com@EXAMPLE.COM
vagrant@EXAMPLE.COM
yarn/node2.example.com@EXAMPLE.COM
zookeeper/node1.example.com@EXAMPLE.COM
zookeeper/node2.example.com@EXAMPLE.COM
zookeeper/node3.example.com@EXAMPLE.COM
}}}



! other references 
https://community.pivotal.io/s/article/Kerberos-Cheat-Sheet




















.








<<showtoc>>

! SPNEGO

!! search for "auth" in hdfs advanced config 
* make sure all settings are configured as follows
[img(90%,90%)[ https://i.imgur.com/wAtMrPd.png ]]


!! test using curl 
{{{


[root@node2 scripts]# curl -u : --negotiate http://node1.example.com:50070/webhdfs/v1/?op=LISTSTATUS
{"FileStatuses":{"FileStatus":[
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16392,"group":"hadoop","length":0,"modificationTime":1546970946566,"owner":"yarn","pathSuffix":"app-logs","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16418,"group":"hdfs","length":0,"modificationTime":1546739119560,"owner":"hdfs","pathSuffix":"apps","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16389,"group":"hadoop","length":0,"modificationTime":1546738288975,"owner":"yarn","pathSuffix":"ats","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16399,"group":"hdfs","length":0,"modificationTime":1546738301288,"owner":"hdfs","pathSuffix":"hdp","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16395,"group":"hdfs","length":0,"modificationTime":1546738294255,"owner":"mapred","pathSuffix":"mapred","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16397,"group":"hadoop","length":0,"modificationTime":1546738323395,"owner":"mapred","pathSuffix":"mr-history","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":6,"fileId":16386,"group":"hdfs","length":0,"modificationTime":1546971003969,"owner":"hdfs","pathSuffix":"tmp","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":5,"fileId":16387,"group":"hdfs","length":0,"modificationTime":1546928769061,"owner":"hdfs","pathSuffix":"user","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
]}}
[root@node2 scripts]# 
[root@node2 scripts]# 
[root@node2 scripts]# kdestroy
[root@node2 scripts]# 
[root@node2 scripts]# curl -u : --negotiate http://node1.example.com:50070/webhdfs/v1/?op=LISTSTATUS
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 401 Authentication required</title>
</head>
<body><h2>HTTP ERROR 401</h2>
<p>Problem accessing /webhdfs/v1/. Reason:
<pre>    Authentication required</pre></p><hr /><i><small>Powered by Jetty://</small></i><br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                

</body>
</html>


[root@node2 scripts]# curl -u : --negotiate http://node1.example.com:50070/webhdfs/v1/?op=LISTSTATUS -vvvvv
* About to connect() to node1.example.com port 50070 (#0)
*   Trying 192.168.199.2...
* Connected to node1.example.com (192.168.199.2) port 50070 (#0)
> GET /webhdfs/v1/?op=LISTSTATUS HTTP/1.1
> User-Agent: curl/7.29.0
> Host: node1.example.com:50070
> Accept: */*
> 
< HTTP/1.1 401 Authentication required
< Cache-Control: must-revalidate,no-cache,no-store
< Date: Tue, 08 Jan 2019 19:49:42 GMT
< Pragma: no-cache
< Date: Tue, 08 Jan 2019 19:49:42 GMT
< Pragma: no-cache
< Content-Type: text/html; charset=iso-8859-1
< X-FRAME-OPTIONS: SAMEORIGIN
* gss_init_sec_context() failed: : No Kerberos credentials available (default cache: /tmp/krb5cc_0)
< WWW-Authenticate: Negotiate
< Set-Cookie: hadoop.auth=; Path=/; HttpOnly
< Content-Length: 1404
< Server: Jetty(6.1.26.hwx)
< 
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 401 Authentication required</title>
</head>
<body><h2>HTTP ERROR 401</h2>
<p>Problem accessing /webhdfs/v1/. Reason:
<pre>    Authentication required</pre></p><hr /><i><small>Powered by Jetty://</small></i><br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                
<br/>                                                

</body>
</html>
* Connection #0 to host node1.example.com left intact



[root@node2 scripts]# curl -u : --negotiate http://node1.example.com:50070/webhdfs/v1/?op=LISTSTATUS -vvvvv
* About to connect() to node1.example.com port 50070 (#0)
*   Trying 192.168.199.2...
* Connected to node1.example.com (192.168.199.2) port 50070 (#0)
> GET /webhdfs/v1/?op=LISTSTATUS HTTP/1.1
> User-Agent: curl/7.29.0
> Host: node1.example.com:50070
> Accept: */*
> 
< HTTP/1.1 401 Authentication required
< Cache-Control: must-revalidate,no-cache,no-store
< Date: Tue, 08 Jan 2019 19:50:46 GMT
< Pragma: no-cache
< Date: Tue, 08 Jan 2019 19:50:46 GMT
< Pragma: no-cache
< Content-Type: text/html; charset=iso-8859-1
< X-FRAME-OPTIONS: SAMEORIGIN
< WWW-Authenticate: Negotiate
< Set-Cookie: hadoop.auth=; Path=/; HttpOnly
< Content-Length: 1404
< Server: Jetty(6.1.26.hwx)
< 
* Ignoring the response-body
* Connection #0 to host node1.example.com left intact
* Issue another request to this URL: 'http://node1.example.com:50070/webhdfs/v1/?op=LISTSTATUS'
* Found bundle for host node1.example.com: 0x1762e90
* Re-using existing connection! (#0) with host node1.example.com
* Connected to node1.example.com (192.168.199.2) port 50070 (#0)
* Server auth using GSS-Negotiate with user ''
> GET /webhdfs/v1/?op=LISTSTATUS HTTP/1.1
> Authorization: Negotiate YIICZQYJKoZIhvcSAQICAQBuggJUMIICUKADAgEFoQMCAQ6iBwMFACAAAACjggFhYYIBXTCCAVmgAwIBBaENGwtFWEFNUExFLkNPTaIkMCKgAwIBA6EbMBkbBEhUVFAbEW5vZGUxLmV4YW1wbGUuY29to4IBGzCCARegAwIBEqEDAgEBooIBCQSCAQXLZTgGMbj4xkzKM2CMLYH5zCAciK7lFaCnUvhul79oo/Id5YP2e8lW96h69TZHjp227eHfO1oKgyX1NJqvzDp6QJ5cOGo6QXKNfmx3dEkKPJgsg09w6FcvDaWflhclfH/pN4OKCBoo23IkcR8uv+FmAwKlhT0eA5a0yV9zeoGstRSAPrBA+t63xdBf8hZB9RtAI6ISLDI329OZkblKnTbwBesh7naY8hJtNNqPiLS2n5dd+KsG+cSnSD1EwOytBsnsVN0gRVg6718N95M70Da7DV64bPhaEfWimIfjOX+zaNOJCbpiIzwe34Oeo8MAimZvhahdIWFM/wUFy19FeTIZBtGE/lykgdUwgdKgAwIBEqKBygSBx5uFXt9DLbTQn8FDDz007/VG0EDw7J4o+erYUSejz6ylv4ueEFXo83xGK0I5Nag4DD3RtHXB44jdLmiRmW+Vx0zAck+M/0MqNg3X5xD4p0RKFicVklJw17FLMprpLHeWg1jcsKpCyHdNt8KQeB4modt2DY8okBCyJSMS3snCPt2mDLM0Erfd/MiHYOW2038mUSIPxv8vuEJYUv9zchJ6XAjMWCGA7UqvS5mU49jAsWyXhfTi4sIFWbNm4ftmS4o7d6eCPIvuqcQ=
> User-Agent: curl/7.29.0
> Host: node1.example.com:50070
> Accept: */*
> 
< HTTP/1.1 200 OK
< Cache-Control: no-cache
< Expires: Tue, 08 Jan 2019 19:50:46 GMT
< Date: Tue, 08 Jan 2019 19:50:46 GMT
< Pragma: no-cache
< Expires: Tue, 08 Jan 2019 19:50:46 GMT
< Date: Tue, 08 Jan 2019 19:50:46 GMT
< Pragma: no-cache
< Content-Type: application/json
< X-FRAME-OPTIONS: SAMEORIGIN
< WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARChwZbpr515XQ6+c68a4ZMAPjEGIHhnQJjRn8yt4jQ9qe3DHOozQIWOkQyj6nexCoqhKPWKbc4YG0cMZ/ZcCOnA4g5
< Set-Cookie: hadoop.auth="u=admin&p=admin/admin@EXAMPLE.COM&t=kerberos&e=1547013046868&s=nx4sCU8jegk52hkosxLZaWgouLk="; Path=/; HttpOnly
< Transfer-Encoding: chunked
< Server: Jetty(6.1.26.hwx)
< 
{"FileStatuses":{"FileStatus":[
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16392,"group":"hadoop","length":0,"modificationTime":1546970946566,"owner":"yarn","pathSuffix":"app-logs","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16418,"group":"hdfs","length":0,"modificationTime":1546739119560,"owner":"hdfs","pathSuffix":"apps","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16389,"group":"hadoop","length":0,"modificationTime":1546738288975,"owner":"yarn","pathSuffix":"ats","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16399,"group":"hdfs","length":0,"modificationTime":1546738301288,"owner":"hdfs","pathSuffix":"hdp","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16395,"group":"hdfs","length":0,"modificationTime":1546738294255,"owner":"mapred","pathSuffix":"mapred","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16397,"group":"hadoop","length":0,"modificationTime":1546738323395,"owner":"mapred","pathSuffix":"mr-history","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":6,"fileId":16386,"group":"hdfs","length":0,"modificationTime":1546971003969,"owner":"hdfs","pathSuffix":"tmp","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":5,"fileId":16387,"group":"hdfs","length":0,"modificationTime":1546928769061,"owner":"hdfs","pathSuffix":"user","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
]}}
* Closing connection 0

}}}
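* The LISTSTATUS reply above is plain JSON, so it's easy to post-process once fetched. A minimal stdlib sketch (the `summarize` helper is hypothetical, and the payload below is trimmed to two entries from the transcript; a real call would fetch it over HTTP with SPNEGO/Kerberos, e.g. via curl --negotiate):

```python
import json

# Sample WebHDFS LISTSTATUS payload, trimmed to two entries from the
# transcript above. A real run would obtain this body over HTTP.
payload = '''
{"FileStatuses":{"FileStatus":[
{"accessTime":0,"blockSize":0,"childrenNum":6,"fileId":16386,"group":"hdfs",
 "length":0,"modificationTime":1546971003969,"owner":"hdfs","pathSuffix":"tmp",
 "permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":5,"fileId":16387,"group":"hdfs",
 "length":0,"modificationTime":1546928769061,"owner":"hdfs","pathSuffix":"user",
 "permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
]}}
'''

def summarize(liststatus_json):
    """Return (pathSuffix, owner, permission) tuples from a LISTSTATUS reply."""
    statuses = json.loads(liststatus_json)["FileStatuses"]["FileStatus"]
    return [(s["pathSuffix"], s["owner"], s["permission"]) for s in statuses]

print(summarize(payload))
```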



! Knox 
* Another way to secure authentication is to put Knox in front of the cluster as a gateway, see [[apache sentry vs ranger vs knox]]
[img(90%,90%)[https://i.imgur.com/5TdfGUh.png]]
[img(90%,90%)[https://i.imgur.com/BPUJVlB.jpg]]










<<showtoc>>

! architecture 
[img(90%,90%)[https://i.imgur.com/W50LYmu.png]]
[img(90%,90%)[https://i.imgur.com/vmGmoYO.png]]


! installation 

* On node1 (the Ambari server), install the MySQL JDBC connector and register it with Ambari
{{{
yum -y install mysql-connector-java
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
}}}

* In Ambari, add the Ranger service

[img(90%,90%)[https://i.imgur.com/VAyNRPj.png]]

* On node2, secure the MySQL root account with /usr/bin/mysql_secure_installation
* Keep in mind the "Disallow root login remotely?" prompt (the answer should be n)
{{{
[root@node2 scripts]# /usr/bin/mysql_secure_installation 

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] n
 ... skipping.

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n
 ... skipping.

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] n
 ... skipping.

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
}}}


* Configure the ranger user and create the ranger database
{{{
mysql -u root -proot
CREATE USER 'ranger'@'localhost' IDENTIFIED BY 'ranger';
GRANT ALL PRIVILEGES ON *.* TO 'ranger'@'localhost' IDENTIFIED BY 'ranger' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'ranger'@'node2.example.com' IDENTIFIED BY 'ranger' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'ranger'@'%' IDENTIFIED BY 'ranger' WITH GRANT OPTION;
FLUSH PRIVILEGES;

-- reconnect as ranger to verify the account, then create the database
system mysql -u ranger -pranger
SELECT CURRENT_USER();
CREATE DATABASE ranger;
}}}


* Check users and passwords
{{{
[root@node2 admin]# mysql -u root -proot
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 976
Server version: 5.5.60-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SELECT User, Host, Password FROM mysql.user;
+---------------+-------------------+-------------------------------------------+
| User          | Host              | Password                                  |
+---------------+-------------------+-------------------------------------------+
| root          | localhost         | *81F5E21E35407D884A6CD4A731AEBFB6AF209E1B |
| root          | node2.example.com | *81F5E21E35407D884A6CD4A731AEBFB6AF209E1B |
| root          | 127.0.0.1         | *81F5E21E35407D884A6CD4A731AEBFB6AF209E1B |
| root          | ::1               | *81F5E21E35407D884A6CD4A731AEBFB6AF209E1B |
| rangerinstall | %                 | *BA6F33B6015522D04D1B2CD0774983FEE64526DD |
| hive          | %                 | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| rangeradmin   | %                 | *93E5B68E67576EF3867192792A3FA17A35376774 |
| rangeradmin   | localhost         | *93E5B68E67576EF3867192792A3FA17A35376774 |
| rangeradmin   | node2.example.com | *93E5B68E67576EF3867192792A3FA17A35376774 |
| ranger        | %                 | *84BB87F6BF7F61703B24CE1C9AA9C0E3F2286900 |
| ranger        | localhost         | *84BB87F6BF7F61703B24CE1C9AA9C0E3F2286900 |
| ranger        | node2.example.com | *84BB87F6BF7F61703B24CE1C9AA9C0E3F2286900 |
+---------------+-------------------+-------------------------------------------+
12 rows in set (0.06 sec)
}}}


* I don't think this is necessary, but I also added a ranger principal to the KDC
{{{
[vagrant@node2 ~]$ sudo su -
Last login: Tue Jan  8 17:45:59 UTC 2019 on pts/0
[root@node2 ~]# 
[root@node2 ~]# 
[root@node2 ~]# kinit admin/admin
Password for admin/admin@EXAMPLE.COM: 
[root@node2 ~]# 
[root@node2 ~]# 
[root@node2 ~]# 
[root@node2 ~]# kadmin.local -q "addprinc ranger"
Authenticating as principal admin/admin@EXAMPLE.COM with password.
WARNING: no policy specified for ranger@EXAMPLE.COM; defaulting to no policy
Enter password for principal "ranger@EXAMPLE.COM": 
Re-enter password for principal "ranger@EXAMPLE.COM": 
Principal "ranger@EXAMPLE.COM" created.
}}}


[img(50%,50%)[https://i.imgur.com/KOWWdq0.png]]
* All Ranger components, the KDC host, and MySQL are on node2; the Ambari server is on node1
[img(90%,90%)[https://i.imgur.com/K4q5JGG.png]]
[img(90%,90%)[https://i.imgur.com/ZS98uJV.png]]
[img(90%,90%)[https://i.imgur.com/1vaHE8X.png]]
[img(90%,90%)[https://i.imgur.com/Ft6otqX.png]]
* Plugins will be installed after Ranger installation
[img(90%,90%)[https://i.imgur.com/qYZBkZX.png]]
* Uncheck the previously configured properties, then click OK
[img(90%,90%)[https://i.imgur.com/9WX7NtI.png]]
[img(90%,90%)[https://i.imgur.com/94XHb8O.png]]
[img(90%,90%)[https://i.imgur.com/gvlLrPg.png]]
[img(90%,90%)[https://i.imgur.com/TEL3NTo.png]]
[img(90%,90%)[https://i.imgur.com/anjL7Pb.png]]
[img(90%,90%)[https://i.imgur.com/eW8REAC.png]]
* Go to http://192.168.199.3:6080 , then log in as admin/admin
[img(50%,50%)[https://i.imgur.com/5dp5nUb.png]]
[img(90%,90%)[https://i.imgur.com/ycJQYPL.png]]



! install plugins 
* Go to Ranger -> Configs -> Ranger Plugin, select the HDFS and Hive plugins, and click Save
[img(90%,90%)[https://i.imgur.com/8n22vUY.png]]
* Click OK
[img(90%,90%)[https://i.imgur.com/DoR5lOF.png]]
[img(90%,90%)[https://i.imgur.com/spaodiJ.png]]
* Stop and Start all services
[img(90%,90%)[https://i.imgur.com/ccTY5GM.png]]
[img(40%,40%)[https://i.imgur.com/cY2AKHn.png]]
[img(90%,90%)[https://i.imgur.com/WINcwWk.png]]




! errors 
!! Ranger Admin process fails with "Connection failed" 
{{{

        Connection failed to http://node2.example.com:6080/login.jsp 
        (Execution of 'curl --location-trusted -k --negotiate -u : -b /var/lib/ambari-agent/tmp/cookies/70a1480a-b71c-4152-815d-8a171bd0b85e 
        -c /var/lib/ambari-agent/tmp/cookies/70a1480a-b71c-4152-815d-8a171bd0b85e -w '%{http_code}' http://node2.example.com:6080/login.jsp 
        --connect-timeout 5 --max-time 7 -o /dev/null 1>/tmp/tmp5YAC3n 2>/tmp/tmpSeQnlb' returned 28.   
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0
curl: (28) Operation timed out after 7022 milliseconds with 0 out of -1 bytes received
000)
      
}}}
!!! troubleshooting and fix
* Check the log directory /var/log/ranger/admin
* Read the catalina.out file 
{{{
[root@node2 admin]# cat catalina.out 
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000eab00000, 357564416, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 357564416 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /usr/hdp/2.6.5.1050-37/ranger-admin/ews/hs_err_pid1286.log
}}}
* The fix: add a 4 GB swap file 
{{{
dd if=/dev/zero of=/opt/swapfile bs=1024k count=4096
mkswap /opt/swapfile
chmod 0600 /opt/swapfile

# add this line to /etc/fstab so the swap file persists across reboots
/opt/swapfile               swap                    swap    defaults        0 0

swapon -a

[root@node2 scripts]# free -h
              total        used        free      shared  buff/cache   available
Mem:           3.7G        3.3G        222M        9.9M        215M        190M
Swap:          5.0G        2.0G        3.0G
}}}
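* The sizing arithmetic behind the dd command above: bs=1024k means each block is 1 MiB, so count=4096 blocks yields a 4 GiB swap file. A quick check:

```python
# dd if=/dev/zero of=/opt/swapfile bs=1024k count=4096
bs_bytes = 1024 * 1024            # bs=1024k -> 1 MiB per block
count = 4096                      # count=4096 blocks
swapfile_bytes = bs_bytes * count
print(swapfile_bytes, swapfile_bytes / 2**30)  # 4294967296 bytes = 4.0 GiB
```

(The `free -h` output shows 5 GB of swap in total because the box already had swap configured before the file was added.)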





! other references 
Hadoop Certification - HDPCA - Install and Configure Ranger https://www.youtube.com/watch?v=2zeVvnw_bZs&t=1s 

















.


<<showtoc>>


[img(40%,40%)[ https://i.imgur.com/zuT36IY.png ]]


! background info - Ranger KMS (Key Management Service) 
[img(50%,50%)[https://i.imgur.com/khqEFuj.png]]
[img(50%,50%)[https://i.imgur.com/EQJ97Vc.png]]
[img(50%,50%)[https://i.imgur.com/g5fUv8w.png]]
[img(50%,50%)[https://i.imgur.com/xfCR6dm.png]]

! install and configure



01
[img(90%,90%)[https://i.imgur.com/gKjnTZf.png]]
02
[img(90%,90%)[https://i.imgur.com/vGsbzJH.png]]
03
[img(90%,90%)[https://i.imgur.com/tvts1Up.png]]
04
[img(90%,90%)[https://i.imgur.com/HLf1SZk.png]]
05
* Under Advanced config -> "Custom kms-site", add the keyadmin proxy user settings (the last three lines) for Kerberos authentication 
[img(90%,90%)[https://i.imgur.com/tfCKduW.png]]
06
[img(90%,90%)[https://i.imgur.com/qa1a6JQ.png]]
07
[img(90%,90%)[https://i.imgur.com/nC83kAh.png]]
08
[img(90%,90%)[https://i.imgur.com/77MW6NV.png]]
09
[img(90%,90%)[https://i.imgur.com/j15oGwA.png]]
10
[img(90%,90%)[https://i.imgur.com/dZUEb8p.png]]
11
[img(90%,90%)[https://i.imgur.com/kgo36fo.png]]
12
[img(90%,90%)[https://i.imgur.com/IGfqToA.png]]
13
[img(90%,90%)[https://i.imgur.com/oFBvaKY.png]]
14
* Edit the hadoop_kms setting and add EXAMPLE.COM
[img(90%,90%)[https://i.imgur.com/oKyKOw9.png]]
15
[img(90%,90%)[https://i.imgur.com/iUXg2v6.png]]
16
[img(90%,90%)[https://i.imgur.com/FoJSoOT.png]]
17
* Go to Key Manager and add a new key, mykey01, which will be used to create an encryption zone 
[img(90%,90%)[https://i.imgur.com/Z5JbTjh.png]]
18
[img(90%,90%)[https://i.imgur.com/DiGvYeD.png]]
19
[img(90%,90%)[https://i.imgur.com/8Kl7KVf.png]]
20
[img(90%,90%)[https://i.imgur.com/jISoHhM.png]]
21
* Go back to Access Manager, create a new policy on the created key mykey01, and grant users access to it 
[img(90%,90%)[https://i.imgur.com/JBnnxVs.png]]
22
[img(90%,90%)[https://i.imgur.com/0iBlArv.png]]
23
[img(90%,90%)[https://i.imgur.com/Vq5bAWV.png]]
24
[img(90%,90%)[https://i.imgur.com/6znPj2p.png]]


! create hdfs encryption zone (the /encrypted folder, accessible only by the user vagrant)


!! Keytabs are stored in /etc/security/keytabs/ 
* These are binary files that can be used for Kerberos authentication without typing a password
{{{
 ls /etc/security/keytabs/
dn.service.keytab              jhs.service.keytab             rangeradmin.service.keytab     rangertagsync.service.keytab   smokeuser.headless.keytab      zk.service.keytab
hdfs.headless.keytab           nm.service.keytab              rangerkms.service.keytab       rangerusersync.service.keytab  spnego.service.keytab          
hive.service.keytab            nn.service.keytab              rangerlookup.service.keytab    rm.service.keytab              yarn.service.keytab            

less /etc/security/keytabs/hdfs.headless.keytab 
"/etc/security/keytabs/hdfs.headless.keytab" may be a binary file.  See it anyway? 
}}}

!! To list the principals in the keytab 
{{{
[root@node2 ~]# klist -kt /etc/security/keytabs/hdfs.headless.keytab 
Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   1 01/08/2019 18:00:32 hdfs-hadoop@EXAMPLE.COM
   1 01/08/2019 18:00:32 hdfs-hadoop@EXAMPLE.COM
   1 01/08/2019 18:00:32 hdfs-hadoop@EXAMPLE.COM
   1 01/08/2019 18:00:32 hdfs-hadoop@EXAMPLE.COM
   1 01/08/2019 18:00:32 hdfs-hadoop@EXAMPLE.COM
}}}
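* The repeated rows in the `klist -kt` output are typically one keytab entry per encryption type for the same principal. A small stdlib sketch (the `principals` helper is hypothetical) to pull out the distinct principals:

```python
# Sample `klist -kt` output, copied from the transcript above and trimmed.
klist_output = """\
Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   1 01/08/2019 18:00:32 hdfs-hadoop@EXAMPLE.COM
   1 01/08/2019 18:00:32 hdfs-hadoop@EXAMPLE.COM
"""

def principals(text):
    """Return the sorted distinct principals from klist -kt output."""
    names = []
    for line in text.splitlines():
        parts = line.split()
        # Data rows start with a numeric KVNO column
        if parts and parts[0].isdigit():
            names.append(parts[-1])
    return sorted(set(names))

print(principals(klist_output))  # ['hdfs-hadoop@EXAMPLE.COM']
```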

!! Switching principals, from admin/admin to hdfs-hadoop
{{{
[root@node2 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin/admin@EXAMPLE.COM

Valid starting       Expires              Service principal
01/09/2019 01:53:15  01/10/2019 01:53:15  krbtgt/EXAMPLE.COM@EXAMPLE.COM
[root@node2 ~]# 

[root@node2 ~]# kdestroy
[root@node2 ~]# klist
klist: No credentials cache found (filename: /tmp/krb5cc_0)

[root@node2 ~]# kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-hadoop
[root@node2 ~]# 
[root@node2 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs-hadoop@EXAMPLE.COM

Valid starting       Expires              Service principal
01/10/2019 00:53:19  01/11/2019 00:53:19  krbtgt/EXAMPLE.COM@EXAMPLE.COM
}}}


!! listing the encryption keys 
{{{
# "kinit -kt" works like plain kinit, but there is no password prompt because the keytab file supplies the credentials
# the user hdfs-hadoop fails with "not allowed to do 'GET_KEYS'"

[root@node2 ~]# kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-hadoop
[root@node2 ~]# hadoop key list -metadata
Cannot list keys for KeyProvider: KMSClientProvider[http://node2.example.com:9292/kms/v1/]: org.apache.hadoop.security.authorize.AuthorizationException: User:hdfs not allowed to do 'GET_KEYS'


# keyadmin is the most powerful user: it has access to all keys and can view encrypted data, so protect this account
# even kinit admin/admin will not get access to the keys 

[root@node2 ~]# kinit keyadmin
Password for keyadmin@EXAMPLE.COM: 

[root@node2 ~]# hadoop key list -metadata
Listing keys for KeyProvider: KMSClientProvider[http://node2.example.com:9292/kms/v1/]
mykey01 : cipher: AES/CTR/NoPadding, length: 128, description: , created: Thu Jan 10 00:43:26 UTC 2019, version: 1, attributes: [key.acl.name=mykey01] 
}}}



!! create the new directory "encrypted" using the hdfs-hadoop principal
{{{
[root@node2 ~]# kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-hadoop

[root@node2 ~]# hadoop fs -ls
Found 1 items
drwxr-xr-x   - hdfs hdfs          0 2019-01-09 17:14 .hiveJars

[root@node2 ~]# hadoop fs -ls /
Found 9 items
drwxrwxrwx   - yarn   hadoop          0 2019-01-09 17:48 /app-logs
drwxr-xr-x   - hdfs   hdfs            0 2019-01-06 01:45 /apps
drwxr-xr-x   - yarn   hadoop          0 2019-01-06 01:31 /ats
drwxr-xr-x   - hdfs   hdfs            0 2019-01-06 01:31 /hdp
drwxr-xr-x   - mapred hdfs            0 2019-01-06 01:31 /mapred
drwxrwxrwx   - mapred hadoop          0 2019-01-06 01:32 /mr-history
drwxr-xr-x   - hdfs   hdfs            0 2019-01-09 08:49 /ranger
drwxrwxrwx   - hdfs   hdfs            0 2019-01-08 18:10 /tmp
drwxr-xr-x   - hdfs   hdfs            0 2019-01-09 17:14 /user

[root@node2 ~]# hadoop fs -mkdir /encrypted

[root@node2 ~]# hadoop fs -ls /
Found 10 items
drwxrwxrwx   - yarn   hadoop          0 2019-01-09 17:48 /app-logs
drwxr-xr-x   - hdfs   hdfs            0 2019-01-06 01:45 /apps
drwxr-xr-x   - yarn   hadoop          0 2019-01-06 01:31 /ats
drwxr-xr-x   - hdfs   hdfs            0 2019-01-10 00:59 /encrypted
drwxr-xr-x   - hdfs   hdfs            0 2019-01-06 01:31 /hdp
drwxr-xr-x   - mapred hdfs            0 2019-01-06 01:31 /mapred
drwxrwxrwx   - mapred hadoop          0 2019-01-06 01:32 /mr-history
drwxr-xr-x   - hdfs   hdfs            0 2019-01-09 08:49 /ranger
drwxrwxrwx   - hdfs   hdfs            0 2019-01-08 18:10 /tmp
drwxr-xr-x   - hdfs   hdfs            0 2019-01-09 17:14 /user

[root@node2 ~]# hadoop fs -chown vagrant:vagrant /encrypted

[root@node2 ~]# hadoop fs -ls /
Found 10 items
drwxrwxrwx   - yarn    hadoop           0 2019-01-09 17:48 /app-logs
drwxr-xr-x   - hdfs    hdfs             0 2019-01-06 01:45 /apps
drwxr-xr-x   - yarn    hadoop           0 2019-01-06 01:31 /ats
drwxr-xr-x   - vagrant vagrant          0 2019-01-10 00:59 /encrypted
drwxr-xr-x   - hdfs    hdfs             0 2019-01-06 01:31 /hdp
drwxr-xr-x   - mapred  hdfs             0 2019-01-06 01:31 /mapred
drwxrwxrwx   - mapred  hadoop           0 2019-01-06 01:32 /mr-history
drwxr-xr-x   - hdfs    hdfs             0 2019-01-09 08:49 /ranger
drwxrwxrwx   - hdfs    hdfs             0 2019-01-08 18:10 /tmp
drwxr-xr-x   - hdfs    hdfs             0 2019-01-09 17:14 /user
}}}
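* Note that `hadoop fs -ls` renders permissions symbolically (e.g. rwxr-xr-x) while the WebHDFS JSON earlier reported them in octal ("755"). A sketch of the mapping (the `octal_to_symbolic` helper is hypothetical):

```python
def octal_to_symbolic(octal: str) -> str:
    """Convert an octal permission string like '755' to 'rwxr-xr-x'."""
    bits = ("r", "w", "x")
    out = []
    for digit in octal:
        d = int(digit, 8)
        # r=4, w=2, x=1; emit the letter if the bit is set, '-' otherwise
        out.append("".join(b if d & m else "-" for b, m in zip(bits, (4, 2, 1))))
    return "".join(out)

print(octal_to_symbolic("755"))  # rwxr-xr-x
print(octal_to_symbolic("777"))  # rwxrwxrwx
```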


!! create the encryption zone on the "encrypted" folder using mykey01
{{{
[root@node2 ~]# hdfs crypto -createZone -keyName mykey01 -path /encrypted 
Added encryption zone /encrypted

[root@node2 ~]# hadoop fs -ls /
Found 10 items
drwxrwxrwx   - yarn    hadoop           0 2019-01-09 17:48 /app-logs
drwxr-xr-x   - hdfs    hdfs             0 2019-01-06 01:45 /apps
drwxr-xr-x   - yarn    hadoop           0 2019-01-06 01:31 /ats
drwxr-xr-x   - vagrant vagrant          0 2019-01-10 01:01 /encrypted
drwxr-xr-x   - hdfs    hdfs             0 2019-01-06 01:31 /hdp
drwxr-xr-x   - mapred  hdfs             0 2019-01-06 01:31 /mapred
drwxrwxrwx   - mapred  hadoop           0 2019-01-06 01:32 /mr-history
drwxr-xr-x   - hdfs    hdfs             0 2019-01-09 08:49 /ranger
drwxrwxrwx   - hdfs    hdfs             0 2019-01-08 18:10 /tmp
drwxr-xr-x   - hdfs    hdfs             0 2019-01-09 17:14 /user
}}}


!! put a file in the encryption zone and read it back 
{{{
[vagrant@node2 ~]$ kinit
Password for vagrant@EXAMPLE.COM: 

[vagrant@node2 ~]$ hadoop fs -ls /encrypted
Found 1 items
drwxrwxrwt   - hdfs vagrant          0 2019-01-10 01:01 /encrypted/.Trash

[vagrant@node2 ~]$ hadoop fs -put /vagrant/data/constitution.txt /encrypted/constitution.txt

[vagrant@node2 ~]$ hadoop fs -cat /encrypted/constitution.txt | head
We the People of the United States, in Order to form a more perfect Union,
establish Justice, insure domestic Tranquility, provide for the common
defence, promote the general Welfare, and secure the Blessings of Liberty to
ourselves and our Posterity, do ordain and establish this Constitution for the
United States of America.
}}}


!! get the block locations of the encrypted file in the encryption zone 
{{{
[vagrant@node2 ~]$ hadoop fsck /encrypted/constitution.txt -files -blocks -locations
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Connecting to namenode via http://node1.example.com:50070/fsck?ugi=vagrant&files=1&blocks=1&locations=1&path=%2Fencrypted%2Fconstitution.txt
FSCK started by vagrant (auth:KERBEROS_SSL) from /192.168.199.3 for path /encrypted/constitution.txt at Thu Jan 10 01:12:38 UTC 2019
/encrypted/constitution.txt 44841 bytes, 1 block(s):  OK
0. BP-534825236-192.168.199.2-1546738263299:blk_1073741963_1146 len=44841 repl=3 [DatanodeInfoWithStorage[192.168.199.3:1019,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK], DatanodeInfoWithStorage[192.168.199.4:1019,DS-a66628de-4daa-433f-9aa2-d3a8c400d5c5,DISK], DatanodeInfoWithStorage[192.168.199.2:1019,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK]]

Status: HEALTHY
 Total size:	44841 B
 Total dirs:	0
 Total files:	1
 Total symlinks:		0
 Total blocks (validated):	1 (avg. block size 44841 B)
 Minimally replicated blocks:	1 (100.0 %)
 Over-replicated blocks:	0 (0.0 %)
 Under-replicated blocks:	0 (0.0 %)
 Mis-replicated blocks:		0 (0.0 %)
 Default replication factor:	3
 Average block replication:	3.0
 Corrupt blocks:		0
 Missing replicas:		0 (0.0 %)
 Number of data-nodes:		3
 Number of racks:		1
FSCK ended at Thu Jan 10 01:12:38 UTC 2019 in 12 milliseconds


The filesystem under path '/encrypted/constitution.txt' is HEALTHY
}}}


!! check that the file inside the encryption zone is really encrypted on disk
{{{
[vagrant@node2 ~]$ sudo su -
Last login: Thu Jan 10 01:03:59 UTC 2019 on pts/2
[root@node2 ~]# 
[root@node2 ~]# find /hadoop/hdfs/data/ -iname "blk_1073741963"
/hadoop/hdfs/data/current/BP-534825236-192.168.199.2-1546738263299/current/finalized/subdir0/subdir0/blk_1073741963
[root@node2 ~]# 
[root@node2 ~]# 
[root@node2 ~]# head /hadoop/hdfs/data/current/BP-534825236-192.168.199.2-1546738263299/current/finalized/subdir0/subdir0/blk_1073741963
t?ʌF??7h?0?Ɇ??Y???+5e????{??j?,=?(?>?>4e)??l?0?cfC۟??V???<5s?T??Y?Z?.?n9??,
}}}
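* Instead of eyeballing garbage characters, a rougher but more objective check is byte entropy: ciphertext from a cipher like AES/CTR looks uniformly random, so its entropy approaches 8 bits/byte, while English text sits around 4-5. A stdlib sketch (the `shannon_entropy` helper and the `os.urandom` stand-in for an encrypted block are illustrative, not taken from the cluster):

```python
import collections
import math
import os

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte."""
    counts = collections.Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

plaintext = b"We the People of the United States, in Order to form a more perfect Union," * 20
randomish = os.urandom(len(plaintext))  # stand-in for an encrypted block file

print(round(shannon_entropy(plaintext), 2))  # low (English text)
print(round(shannon_entropy(randomish), 2))  # high, close to 8
```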



!! copy the file from the encryption zone to an outside folder 
{{{
[vagrant@node2 ~]$ hadoop fs -cp /encrypted/constitution.txt /tmp/constitution_copied.txt


# now log in as a regular hdfs user: the file can be read even without keys, meaning data copied out of an encryption zone is stored unencrypted 

[hdfs@node2 ~]$ hadoop fs -cat /tmp/constitution_copied.txt | head
We the People of the United States, in Order to form a more perfect Union,
establish Justice, insure domestic Tranquility, provide for the common
defence, promote the general Welfare, and secure the Blessings of Liberty to
ourselves and our Posterity, do ordain and establish this Constitution for the
United States of America.


[hdfs@node2 ~]$ hadoop fsck /tmp/constitution_copied.txt -files -blocks -locations
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Connecting to namenode via http://node1.example.com:50070/fsck?ugi=hdfs&files=1&blocks=1&locations=1&path=%2Ftmp%2Fconstitution_copied.txt
FSCK started by hdfs (auth:KERBEROS_SSL) from /192.168.199.3 for path /tmp/constitution_copied.txt at Thu Jan 10 01:18:08 UTC 2019
/tmp/constitution_copied.txt 44841 bytes, 1 block(s):  OK
0. BP-534825236-192.168.199.2-1546738263299:blk_1073741964_1147 len=44841 repl=3 [DatanodeInfoWithStorage[192.168.199.3:1019,DS-3390c406-9c65-467c-88b7-d2bdc6b7330b,DISK], DatanodeInfoWithStorage[192.168.199.2:1019,DS-f7935053-711f-4558-9c30-57a0fe071bde,DISK], DatanodeInfoWithStorage[192.168.199.4:1019,DS-a66628de-4daa-433f-9aa2-d3a8c400d5c5,DISK]]

Status: HEALTHY
 Total size:	44841 B
 Total dirs:	0
 Total files:	1
 Total symlinks:		0
 Total blocks (validated):	1 (avg. block size 44841 B)
 Minimally replicated blocks:	1 (100.0 %)
 Over-replicated blocks:	0 (0.0 %)
 Under-replicated blocks:	0 (0.0 %)
 Mis-replicated blocks:		0 (0.0 %)
 Default replication factor:	3
 Average block replication:	3.0
 Corrupt blocks:		0
 Missing replicas:		0 (0.0 %)
 Number of data-nodes:		3
 Number of racks:		1
FSCK ended at Thu Jan 10 01:18:08 UTC 2019 in 1 milliseconds


The filesystem under path '/tmp/constitution_copied.txt' is HEALTHY

[vagrant@node2 ~]$ sudo su -

[root@node2 ~]# find /hadoop/hdfs/data/ -iname "blk_1073741964"
/hadoop/hdfs/data/current/BP-534825236-192.168.199.2-1546738263299/current/finalized/subdir0/subdir0/blk_1073741964

[root@node2 ~]# head /hadoop/hdfs/data/current/BP-534825236-192.168.199.2-1546738263299/current/finalized/subdir0/subdir0/blk_1073741964
We the People of the United States, in Order to form a more perfect Union,
establish Justice, insure domestic Tranquility, provide for the common
defence, promote the general Welfare, and secure the Blessings of Liberty to
ourselves and our Posterity, do ordain and establish this Constitution for the
United States of America.
}}}
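The fsck output above maps the file to a single block with three replicas, and the block id is what `find` then locates on the datanode's local disk. A minimal sketch of pulling those fields out of an fsck block line (the regex is fitted to the sample output shown here, not a guaranteed-stable fsck format):

```python
import re

# block line as printed by "hdfs fsck <path> -files -blocks -locations"
block_line = (
    "0. BP-534825236-192.168.199.2-1546738263299:blk_1073741964_1147 "
    "len=44841 repl=3 [DatanodeInfoWithStorage[192.168.199.3:1019,...]]"
)

# blk_<id>_<genstamp> len=<bytes> repl=<replica count>
m = re.search(r"(blk_\d+)_\d+ len=(\d+) repl=(\d+)", block_line)
block_id, length, repl = m.group(1), int(m.group(2)), int(m.group(3))

print(block_id, length, repl)  # -> blk_1073741964 44841 3
```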


!! hdfs user can create subdirectories under an encryption zone but can't create files 
{{{
[hdfs@node2 ~]$ hadoop fs -mkdir /encrypted/subdir

[hdfs@node2 ~]$ hadoop fs -put /vagrant/data/constitution.txt /encrypted/subdir/constitution2.txt
put: User:hdfs not allowed to do 'DECRYPT_EEK' on 'mykey01'
19/01/10 01:10:56 ERROR hdfs.DFSClient: Failed to close inode 24384
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /encrypted/subdir/constitution2.txt._COPYING_ (inode 24384): File does not exist. Holder DFSClient_NONMAPREDUCE_1910722416_1 does not have any open files.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3697)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3785)
}}}


!! keyadmin can read any encrypted file in any encryption zone 
{{{
[hdfs@node2 ~]$ kinit keyadmin@EXAMPLE.COM
Password for keyadmin@EXAMPLE.COM: 

[hdfs@node2 ~]$ hdfs fs -cat /encrypted/constitution.txt | head
Error: Could not find or load main class fs
[hdfs@node2 ~]$ hadoop fs -cat /encrypted/constitution.txt | head
We the People of the United States, in Order to form a more perfect Union,
establish Justice, insure domestic Tranquility, provide for the common
defence, promote the general Welfare, and secure the Blessings of Liberty to
ourselves and our Posterity, do ordain and establish this Constitution for the
United States of America.


# admin/admin will not be able to read any encrypted file 
[hdfs@node2 ~]$ kinit admin/admin
Password for admin/admin@EXAMPLE.COM: 

[hdfs@node2 ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_1006
Default principal: admin/admin@EXAMPLE.COM

Valid starting       Expires              Service principal
01/10/2019 01:30:08  01/11/2019 01:30:08  krbtgt/EXAMPLE.COM@EXAMPLE.COM

[hdfs@node2 ~]$ hadoop fs -cat /encrypted/constitution.txt | head
cat: User:admin not allowed to do 'DECRYPT_EEK' on 'mykey01'
}}}













! troubleshooting 

!! kms install properties file 
<<<
/usr/hdp/current/ranger-kms/install.properties
<<<

!! ranger kms install error "unable to connect to DB"
!!! error message 
{{{

stderr: 
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/RANGER_KMS/0.5.0.2.3/package/scripts/kms_server.py", line 121, in <module>
    KmsServer().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/RANGER_KMS/0.5.0.2.3/package/scripts/kms_server.py", line 48, in install
    self.configure(env)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 120, in locking_configure
    original_configure(obj, *args, **kw)
  File "/var/lib/ambari-agent/cache/common-services/RANGER_KMS/0.5.0.2.3/package/scripts/kms_server.py", line 90, in configure
    kms()
  File "/var/lib/ambari-agent/cache/common-services/RANGER_KMS/0.5.0.2.3/package/scripts/kms.py", line 183, in kms
    Execute(db_connection_check_command, path='/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin', tries=5, try_sleep=10, environment=env_dict)
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/lib/jvm/jre//bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar org.apache.ambari.server.DBConnectionVerification 'jdbc:mysql://node2:3306/rangerkms' rangerkms [PROTECTED] com.mysql.jdbc.Driver' returned 1. ERROR: Unable to connect to the DB. Please check DB connection properties.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
}}}
!!! fix
* On Advanced config -> "Custom kms-site", add the keyadmin proxy-user settings (the last three lines in the screenshot) for Kerberos authentication
[img(90%,90%)[ https://i.imgur.com/tUpTVUL.png ]]
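The exact values belong in the screenshot above; as a rough sketch, Hadoop KMS proxy-user settings follow the standard `hadoop.kms.proxyuser.<user>.*` convention, so for keyadmin they typically take this shape (shown in kms-site.xml form; in Ambari's Custom kms-site they are entered as key/value pairs, and the wildcard values are assumptions to be checked against the screenshot):

```xml
<!-- assumed shape; confirm names and values against the screenshot -->
<property>
  <name>hadoop.kms.proxyuser.keyadmin.users</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.keyadmin.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.keyadmin.hosts</name>
  <value>*</value>
</property>
```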

























http://learnxinyminutes.com/docs/javascript/
https://developer.mozilla.org/en-US/docs/Web/JavaScript/A_re-introduction_to_JavaScript

https://en.wikipedia.org/wiki/Oracle_Exadata#Hardware_Configurations
Vagrant + Docker 
http://en.wikipedia.org/wiki/Vagrant_%28software%29
http://www.slideshare.net/3dgiordano/vagrant-docker
http://www.quora.com/What-is-the-difference-between-Docker-and-Vagrant-When-should-you-use-each-one
http://www.scriptrock.com/articles/docker-vs-vagrant


https://www.vagrantup.com/downloads.html
{{{
vagrant up node1 node2 node3   # boot the named VMs
vagrant suspend                # save VM state to disk and stop
vagrant destroy                # tear the VMs down completely

vagrant global-status          # list all Vagrant environments on this host
}}}

https://stackoverflow.com/questions/10953070/how-to-debug-vagrant-cannot-forward-the-specified-ports-on-this-vm-message
password for geerlingguy/centos7  https://github.com/geerlingguy/drupal-vm/issues/1203

https://docs.oracle.com/database/121/ADMIN/cdb_create.htm#ADMIN13514

<<<
A CDB contains the following files:

One control file

One active online redo log for a single-instance CDB, or one active online redo log for each instance of an Oracle RAC CDB

One set of temp files

There is one default temporary tablespace for the root and for each PDB.

One active undo tablespace for a single-instance CDB, or one active undo tablespace for each instance of an Oracle RAC CDB

Sets of system data files

The primary physical difference between a CDB and a non-CDB is in the non-undo data files. A non-CDB has only one set of system data files. In contrast, a CDB includes one set of system data files for each container in the CDB, including a set of system data files for each PDB. In addition, a CDB has one set of user-created data files for each container.

Sets of user-created data files

Each PDB has its own set of non-system data files. These data files contain the user-defined schemas and database objects for the PDB.

For backup and recovery of a CDB, Recovery Manager (RMAN) is recommended. PDB point-in-time recovery (PDB PITR) must be performed with RMAN. By default, RMAN turns on control file autobackup for a CDB. It is strongly recommended that control file autobackup is enabled for a CDB, to ensure that PDB PITR can undo data file additions or deletions.


<<<
https://leetcode.com/problems/two-sum/
{{{
Given an array of integers, return indices of the two numbers such that they add up to a specific target.

You may assume that each input would have exactly one solution, and you may not use the same element twice.

Example:

Given nums = [2, 7, 11, 15], target = 9,

Because nums[0] + nums[1] = 2 + 7 = 9,
return [0, 1].

}}}


{{{

# brute force O(n^2): compare every pair (kept for reference)
# class Solution:
#     def twoSum(self, nums, target):
#         for i in range(len(nums)):
#             for j in range(i + 1, len(nums)):
#                 if nums[i] + nums[j] == target:
#                     return [i, j]


class Solution:
    # single pass with a hash map: O(n) time, O(n) space
    def twoSum(self, nums, target):

        if len(nums) <= 1:
            return False

        kv_hmap = dict()            # complement -> index that needs it

        for i in range(len(nums)):
            num = nums[i]
            key = target - num      # the value that would complete this pair

            if num in kv_hmap:      # num completes an earlier element
                return [kv_hmap[num], i]
            else:
                kv_hmap[key] = i
}}}
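The hash-map pass runs in O(n) because each element is visited once and dict lookups are O(1) on average. The same single-pass idea as a standalone function, for quick testing outside the LeetCode class wrapper:

```python
def two_sum(nums, target):
    # remember, for each value seen, the index where its
    # complement (target - value) would complete the pair
    seen = {}                      # complement -> index
    for i, num in enumerate(nums):
        if num in seen:            # num is the complement an earlier element needed
            return [seen[num], i]
        seen[target - num] = i
    return None                    # no pair sums to target

print(two_sum([2, 7, 11, 15], 9))  # -> [0, 1]
```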
! turbo mode is disabled 
{{{
          <!-- Turbo Mode -->
          <!-- Description: Turbo Mode. -->
          <!-- Possible Values: "Disabled", "Enabled" -->
          <Turbo_Mode>Disabled</Turbo_Mode>
}}}

! cpu_topology script
{{{
[root@enkx3cel01 ~]# sh cpu_topology
        Product Name: SUN FIRE X4270 M3
        Product Name: ASSY,MOTHERBOARD,2U
model name      : Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz
processors  (OS CPU count)          0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
physical id (processor socket)      0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1
siblings    (logical CPUs/socket)   12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12
core id     (# assigned to a core)  0 1 2 3 4 5 0 1 2 3 4 5 0 1 2 3 4 5 0 1 2 3 4 5
cpu cores   (physical cores/socket) 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
}}}
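The four arrays above are enough to derive the socket/core/thread layout without the Intel tool: distinct physical ids give the socket count, distinct (socket, core id) pairs give the core count, and the remainder is hyperthreading. A sketch that recomputes the counts from the sample data (the lists are transcribed from the output above):

```python
# transcribed from the cpu_topology output above (24 OS CPUs)
physical_id = [0] * 6 + [1] * 6 + [0] * 6 + [1] * 6   # socket per OS CPU
core_id     = list(range(6)) * 4                      # core number within its socket

sockets = len(set(physical_id))                       # distinct sockets
cores   = len(set(zip(physical_id, core_id)))         # distinct (socket, core) pairs
threads_per_core = len(physical_id) // cores          # hyperthreads sharing a core

print(sockets, cores, threads_per_core)               # -> 2 12 2
```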

! intel cpu topology tool
{{{
[root@enkx3cel01 cpu-topology]# ./cpu_topology64.out


        Advisory to Users on system topology enumeration

This utility is for demonstration purpose only. It assumes the hardware topology
configuration within a coherent domain does not change during the life of an OS
session. If an OS support advanced features that can change hardware topology
configurations, more sophisticated adaptation may be necessary to account for
the hardware configuration change that might have added and reduced the number
of logical processors being managed by the OS.

User should also be aware that the system topology enumeration algorithm is
based on the assumption that CPUID instruction will return raw data reflecting
the native hardware configuration. When an application runs inside a virtual
machine hosted by a Virtual Machine Monitor (VMM), any CPUID instructions
issued by an app (or a guest OS) are trapped by the VMM and it is the VMM's
responsibility and decision to emulate/supply CPUID return data to the virtual
machines. When deploying topology enumeration code based on querying CPUID
inside a VM environment, the user must consult with the VMM vendor on how an VMM
will emulate CPUID instruction relating to topology enumeration.



        Software visible enumeration in the system:
Number of logical processors visible to the OS: 24
Number of logical processors visible to this process: 24
Number of processor cores visible to this process: 12
Number of physical packages visible to this process: 2


        Hierarchical counts by levels of processor topology:
 # of cores in package  0 visible to this process: 6 .
         # of logical processors in Core 0 visible to this process: 2 .
         # of logical processors in Core  1 visible to this process: 2 .
         # of logical processors in Core  2 visible to this process: 2 .
         # of logical processors in Core  3 visible to this process: 2 .
         # of logical processors in Core  4 visible to this process: 2 .
         # of logical processors in Core  5 visible to this process: 2 .
 # of cores in package  1 visible to this process: 6 .
         # of logical processors in Core 0 visible to this process: 2 .
         # of logical processors in Core  1 visible to this process: 2 .
         # of logical processors in Core  2 visible to this process: 2 .
         # of logical processors in Core  3 visible to this process: 2 .
         # of logical processors in Core  4 visible to this process: 2 .
         # of logical processors in Core  5 visible to this process: 2 .


        Affinity masks per SMT thread, per core, per package:
Individual:
        P:0, C:0, T:0 --> 1
        P:0, C:0, T:1 --> 1z3

Core-aggregated:
        P:0, C:0 --> 1001
Individual:
        P:0, C:1, T:0 --> 2
        P:0, C:1, T:1 --> 2z3

Core-aggregated:
        P:0, C:1 --> 2002
Individual:
        P:0, C:2, T:0 --> 4
        P:0, C:2, T:1 --> 4z3

Core-aggregated:
        P:0, C:2 --> 4004
Individual:
        P:0, C:3, T:0 --> 8
        P:0, C:3, T:1 --> 8z3

Core-aggregated:
        P:0, C:3 --> 8008
Individual:
        P:0, C:4, T:0 --> 10
        P:0, C:4, T:1 --> 1z4

Core-aggregated:
        P:0, C:4 --> 10010
Individual:
        P:0, C:5, T:0 --> 20
        P:0, C:5, T:1 --> 2z4

Core-aggregated:
        P:0, C:5 --> 20020

Pkg-aggregated:
        P:0 --> 3f03f
Individual:
        P:1, C:0, T:0 --> 40
        P:1, C:0, T:1 --> 4z4

Core-aggregated:
        P:1, C:0 --> 40040
Individual:
        P:1, C:1, T:0 --> 80
        P:1, C:1, T:1 --> 8z4

Core-aggregated:
        P:1, C:1 --> 80080
Individual:
        P:1, C:2, T:0 --> 100
        P:1, C:2, T:1 --> 1z5

Core-aggregated:
        P:1, C:2 --> 100100
Individual:
        P:1, C:3, T:0 --> 200
        P:1, C:3, T:1 --> 2z5

Core-aggregated:
        P:1, C:3 --> 200200
Individual:
        P:1, C:4, T:0 --> 400
        P:1, C:4, T:1 --> 4z5

Core-aggregated:
        P:1, C:4 --> 400400
Individual:
        P:1, C:5, T:0 --> 800
        P:1, C:5, T:1 --> 8z5

Core-aggregated:
        P:1, C:5 --> 800800

Pkg-aggregated:
        P:1 --> fc0fc0


        APIC ID listings from affinity masks
OS cpu   0, Affinity mask 00000001 - apic id 0
OS cpu   1, Affinity mask 00000002 - apic id 2
OS cpu   2, Affinity mask 00000004 - apic id 4
OS cpu   3, Affinity mask 00000008 - apic id 6
OS cpu   4, Affinity mask 00000010 - apic id 8
OS cpu   5, Affinity mask 00000020 - apic id a
OS cpu   6, Affinity mask 00000040 - apic id 20
OS cpu   7, Affinity mask 00000080 - apic id 22
OS cpu   8, Affinity mask 00000100 - apic id 24
OS cpu   9, Affinity mask 00000200 - apic id 26
OS cpu  10, Affinity mask 00000400 - apic id 28
OS cpu  11, Affinity mask 00000800 - apic id 2a
OS cpu  12, Affinity mask 00001000 - apic id 1
OS cpu  13, Affinity mask 00002000 - apic id 3
OS cpu  14, Affinity mask 00004000 - apic id 5
OS cpu  15, Affinity mask 00008000 - apic id 7
OS cpu  16, Affinity mask 00010000 - apic id 9
OS cpu  17, Affinity mask 00020000 - apic id b
OS cpu  18, Affinity mask 00040000 - apic id 21
OS cpu  19, Affinity mask 00080000 - apic id 23
OS cpu  20, Affinity mask 00100000 - apic id 25
OS cpu  21, Affinity mask 00200000 - apic id 27
OS cpu  22, Affinity mask 00400000 - apic id 29
OS cpu  23, Affinity mask 00800000 - apic id 2b


Package 0 Cache and Thread details


Box Description:
Cache  is cache level designator
Size   is cache size
OScpu# is cpu # as seen by OS
Core   is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
       CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
       where # is number of zeroes (so '8z5' is '0x800000')
L1D is Level 1 Data cache, size(KBytes)= 32,  Cores/cache= 2, Caches/package= 6
L1I is Level 1 Instruction cache, size(KBytes)= 32,  Cores/cache= 2, Caches/package= 6
L2 is Level 2 Unified cache, size(KBytes)= 256,  Cores/cache= 2, Caches/package= 6
L3 is Level 3 Unified cache, size(KBytes)= 15360,  Cores/cache= 12, Caches/package= 1
      +-------------+-------------+-------------+-------------+-------------+-------------+
Cache |   L1D       |   L1D       |   L1D       |   L1D       |   L1D       |   L1D       |
Size  |   32K       |   32K       |   32K       |   32K       |   32K       |   32K       |
OScpu#|     0     12|     1     13|     2     14|     3     15|     4     16|     5     17|
Core  | c0_t0  c0_t1| c1_t0  c1_t1| c2_t0  c2_t1| c3_t0  c3_t1| c4_t0  c4_t1| c5_t0  c5_t1|
AffMsk|     1    1z3|     2    2z3|     4    4z3|     8    8z3|    10    1z4|    20    2z4|
CmbMsk|  1001       |  2002       |  4004       |  8008       | 10010       | 20020       |
      +-------------+-------------+-------------+-------------+-------------+-------------+

Cache |   L1I       |   L1I       |   L1I       |   L1I       |   L1I       |   L1I       |
Size  |   32K       |   32K       |   32K       |   32K       |   32K       |   32K       |
      +-------------+-------------+-------------+-------------+-------------+-------------+

Cache |    L2       |    L2       |    L2       |    L2       |    L2       |    L2       |
Size  |  256K       |  256K       |  256K       |  256K       |  256K       |  256K       |
      +-------------+-------------+-------------+-------------+-------------+-------------+

Cache |    L3                                                                             |
Size  |   15M                                                                             |
CmbMsk| 3f03f                                                                             |
      +-----------------------------------------------------------------------------------+

Combined socket AffinityMask= 0x3f03f


Package 1 Cache and Thread details


Box Description:
Cache  is cache level designator
Size   is cache size
OScpu# is cpu # as seen by OS
Core   is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
       CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
       where # is number of zeroes (so '8z5' is '0x800000')
      +-------------+-------------+-------------+-------------+-------------+-------------+
Cache |   L1D       |   L1D       |   L1D       |   L1D       |   L1D       |   L1D       |
Size  |   32K       |   32K       |   32K       |   32K       |   32K       |   32K       |
OScpu#|     6     18|     7     19|     8     20|     9     21|    10     22|    11     23|
Core  | c0_t0  c0_t1| c1_t0  c1_t1| c2_t0  c2_t1| c3_t0  c3_t1| c4_t0  c4_t1| c5_t0  c5_t1|
AffMsk|    40    4z4|    80    8z4|   100    1z5|   200    2z5|   400    4z5|   800    8z5|
CmbMsk| 40040       | 80080       |100100       |200200       |400400       |800800       |
      +-------------+-------------+-------------+-------------+-------------+-------------+

Cache |   L1I       |   L1I       |   L1I       |   L1I       |   L1I       |   L1I       |
Size  |   32K       |   32K       |   32K       |   32K       |   32K       |   32K       |
      +-------------+-------------+-------------+-------------+-------------+-------------+

Cache |    L2       |    L2       |    L2       |    L2       |    L2       |    L2       |
Size  |  256K       |  256K       |  256K       |  256K       |  256K       |  256K       |
      +-------------+-------------+-------------+-------------+-------------+-------------+

Cache |    L3                                                                             |
Size  |   15M                                                                             |
CmbMsk|fc0fc0                                                                             |
      +-----------------------------------------------------------------------------------+
}}}
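The tool's extended-hex notation compresses trailing zeroes ('8z5' is 0x800000, per the Box Description above). A small helper to expand such a mask back to an integer, with the format rule taken from that description:

```python
def expand_zhex(mask):
    # 'Nz#' means hex value N followed by # hex-digit zeroes, e.g. '8z5' -> 0x800000
    if "z" in mask:
        head, _, zeroes = mask.partition("z")
        return int(head, 16) << (4 * int(zeroes))
    return int(mask, 16)

print(hex(expand_zhex("8z5")))   # -> 0x800000
print(hex(expand_zhex("1z3")))   # -> 0x1000 (OS cpu 12, matching the table)
```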


! intel turbostat
{{{
[root@enkx3cel01 ~]# ./turbostat
pkg core CPU   %c0   GHz  TSC   %c1    %c3    %c6    %c7   %pc2   %pc3   %pc6   %pc7
               4.22 2.00 2.00  95.78   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   0   0   3.85 2.00 2.00  96.15   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   0  12   2.74 2.00 2.00  97.26   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   1   1  24.62 2.00 2.00  75.38   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   1  13  26.93 2.00 2.00  73.07   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   2   2   2.68 2.00 2.00  97.32   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   2  14   3.15 2.00 2.00  96.85   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   3   3   2.10 2.00 2.00  97.90   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   3  15   1.44 2.00 2.00  98.56   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   4   4   2.66 2.00 2.00  97.34   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   4  16   1.99 2.00 2.00  98.01   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   5   5   1.88 2.00 2.00  98.12   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   5  17   2.34 2.00 2.00  97.66   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   0   6   3.10 2.00 2.00  96.90   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   0  18   2.28 2.00 2.00  97.72   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   1   7   2.73 2.00 2.00  97.27   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   1  19   2.28 2.00 2.00  97.72   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   2   8   1.94 2.00 2.00  98.06   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   2  20   1.41 2.00 2.00  98.59   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   3   9   2.45 2.00 2.00  97.55   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   3  21   2.26 2.00 2.00  97.74   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   4  10   1.41 2.00 2.00  98.59   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   4  22   1.48 2.00 2.00  98.52   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   5  11   1.59 2.00 2.00  98.41   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   5  23   1.87 2.00 2.00  98.13   0.00   0.00   0.00   0.00   0.00   0.00   0.00
}}}
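%c0 is the fraction of time a CPU spent executing instructions, so aggregating it per package shows which socket is doing the work. A sketch over a few rows transcribed from the turbostat output above:

```python
from collections import defaultdict

# (pkg, core, cpu, %c0) rows transcribed from the turbostat output above
rows = [
    (0, 0, 0, 3.85), (0, 0, 12, 2.74),
    (0, 1, 1, 24.62), (0, 1, 13, 26.93),
    (1, 0, 6, 3.10), (1, 0, 18, 2.28),
]

busy = defaultdict(list)
for pkg, core, cpu, c0 in rows:
    busy[pkg].append(c0)

# average busy percentage per package
avg_c0 = {pkg: sum(v) / len(v) for pkg, v in busy.items()}
print(avg_c0)
```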




! cpu_topology script
{{{
[root@enkx3db01 cpu-topology]# sh ~root/cpu_topology
        Product Name: SUN FIRE X4170 M3
        Product Name: ASSY,MOTHERBOARD,1U
model name      : Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
processors  (OS CPU count)          0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
physical id (processor socket)      0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
siblings    (logical CPUs/socket)   16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16
core id     (# assigned to a core)  0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7
cpu cores   (physical cores/socket) 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8
}}}


! intel cpu topology tool
{{{
[root@enkx3db01 cpu-topology]# ./cpu_topology64.out


        Advisory to Users on system topology enumeration

This utility is for demonstration purpose only. It assumes the hardware topology
configuration within a coherent domain does not change during the life of an OS
session. If an OS support advanced features that can change hardware topology
configurations, more sophisticated adaptation may be necessary to account for
the hardware configuration change that might have added and reduced the number
of logical processors being managed by the OS.

User should also be aware that the system topology enumeration algorithm is
based on the assumption that CPUID instruction will return raw data reflecting
the native hardware configuration. When an application runs inside a virtual
machine hosted by a Virtual Machine Monitor (VMM), any CPUID instructions
issued by an app (or a guest OS) are trapped by the VMM and it is the VMM's
responsibility and decision to emulate/supply CPUID return data to the virtual
machines. When deploying topology enumeration code based on querying CPUID
inside a VM environment, the user must consult with the VMM vendor on how an VMM
will emulate CPUID instruction relating to topology enumeration.



        Software visible enumeration in the system:
Number of logical processors visible to the OS: 32
Number of logical processors visible to this process: 32
Number of processor cores visible to this process: 16
Number of physical packages visible to this process: 2


        Hierarchical counts by levels of processor topology:
 # of cores in package  0 visible to this process: 8 .
         # of logical processors in Core 0 visible to this process: 2 .
         # of logical processors in Core  1 visible to this process: 2 .
         # of logical processors in Core  2 visible to this process: 2 .
         # of logical processors in Core  3 visible to this process: 2 .
         # of logical processors in Core  4 visible to this process: 2 .
         # of logical processors in Core  5 visible to this process: 2 .
         # of logical processors in Core  6 visible to this process: 2 .
         # of logical processors in Core  7 visible to this process: 2 .
 # of cores in package  1 visible to this process: 8 .
         # of logical processors in Core 0 visible to this process: 2 .
         # of logical processors in Core  1 visible to this process: 2 .
         # of logical processors in Core  2 visible to this process: 2 .
         # of logical processors in Core  3 visible to this process: 2 .
         # of logical processors in Core  4 visible to this process: 2 .
         # of logical processors in Core  5 visible to this process: 2 .
         # of logical processors in Core  6 visible to this process: 2 .
         # of logical processors in Core  7 visible to this process: 2 .


        Affinity masks per SMT thread, per core, per package:
Individual:
        P:0, C:0, T:0 --> 1
        P:0, C:0, T:1 --> 1z4

Core-aggregated:
        P:0, C:0 --> 10001
Individual:
        P:0, C:1, T:0 --> 2
        P:0, C:1, T:1 --> 2z4

Core-aggregated:
        P:0, C:1 --> 20002
Individual:
        P:0, C:2, T:0 --> 4
        P:0, C:2, T:1 --> 4z4

Core-aggregated:
        P:0, C:2 --> 40004
Individual:
        P:0, C:3, T:0 --> 8
        P:0, C:3, T:1 --> 8z4

Core-aggregated:
        P:0, C:3 --> 80008
Individual:
        P:0, C:4, T:0 --> 10
        P:0, C:4, T:1 --> 1z5

Core-aggregated:
        P:0, C:4 --> 100010
Individual:
        P:0, C:5, T:0 --> 20
        P:0, C:5, T:1 --> 2z5

Core-aggregated:
        P:0, C:5 --> 200020
Individual:
        P:0, C:6, T:0 --> 40
        P:0, C:6, T:1 --> 4z5

Core-aggregated:
        P:0, C:6 --> 400040
Individual:
        P:0, C:7, T:0 --> 80
        P:0, C:7, T:1 --> 8z5

Core-aggregated:
        P:0, C:7 --> 800080

Pkg-aggregated:
        P:0 --> ff00ff
Individual:
        P:1, C:0, T:0 --> 100
        P:1, C:0, T:1 --> 1z6

Core-aggregated:
        P:1, C:0 --> 1000100
Individual:
        P:1, C:1, T:0 --> 200
        P:1, C:1, T:1 --> 2z6

Core-aggregated:
        P:1, C:1 --> 2000200
Individual:
        P:1, C:2, T:0 --> 400
        P:1, C:2, T:1 --> 4z6

Core-aggregated:
        P:1, C:2 --> 4000400
Individual:
        P:1, C:3, T:0 --> 800
        P:1, C:3, T:1 --> 8z6

Core-aggregated:
        P:1, C:3 --> 8000800
Individual:
        P:1, C:4, T:0 --> 1z3
        P:1, C:4, T:1 --> 1z7

Core-aggregated:
        P:1, C:4 --> 10001z3
Individual:
        P:1, C:5, T:0 --> 2z3
        P:1, C:5, T:1 --> 2z7

Core-aggregated:
        P:1, C:5 --> 20002z3
Individual:
        P:1, C:6, T:0 --> 4z3
        P:1, C:6, T:1 --> 4z7

Core-aggregated:
        P:1, C:6 --> 40004z3
Individual:
        P:1, C:7, T:0 --> 8z3
        P:1, C:7, T:1 --> 8z7

Core-aggregated:
        P:1, C:7 --> 80008z3

Pkg-aggregated:
        P:1 --> ff00ff00


        APIC ID listings from affinity masks
OS cpu   0, Affinity mask 0000000001 - apic id 0
OS cpu   1, Affinity mask 0000000002 - apic id 2
OS cpu   2, Affinity mask 0000000004 - apic id 4
OS cpu   3, Affinity mask 0000000008 - apic id 6
OS cpu   4, Affinity mask 0000000010 - apic id 8
OS cpu   5, Affinity mask 0000000020 - apic id a
OS cpu   6, Affinity mask 0000000040 - apic id c
OS cpu   7, Affinity mask 0000000080 - apic id e
OS cpu   8, Affinity mask 0000000100 - apic id 20
OS cpu   9, Affinity mask 0000000200 - apic id 22
OS cpu  10, Affinity mask 0000000400 - apic id 24
OS cpu  11, Affinity mask 0000000800 - apic id 26
OS cpu  12, Affinity mask 0000001000 - apic id 28
OS cpu  13, Affinity mask 0000002000 - apic id 2a
OS cpu  14, Affinity mask 0000004000 - apic id 2c
OS cpu  15, Affinity mask 0000008000 - apic id 2e
OS cpu  16, Affinity mask 0000010000 - apic id 1
OS cpu  17, Affinity mask 0000020000 - apic id 3
OS cpu  18, Affinity mask 0000040000 - apic id 5
OS cpu  19, Affinity mask 0000080000 - apic id 7
OS cpu  20, Affinity mask 0000100000 - apic id 9
OS cpu  21, Affinity mask 0000200000 - apic id b
OS cpu  22, Affinity mask 0000400000 - apic id d
OS cpu  23, Affinity mask 0000800000 - apic id f
OS cpu  24, Affinity mask 0001000000 - apic id 21
OS cpu  25, Affinity mask 0002000000 - apic id 23
OS cpu  26, Affinity mask 0004000000 - apic id 25
OS cpu  27, Affinity mask 0008000000 - apic id 27
OS cpu  28, Affinity mask 0010000000 - apic id 29
OS cpu  29, Affinity mask 0020000000 - apic id 2b
OS cpu  30, Affinity mask 0040000000 - apic id 2d
OS cpu  31, Affinity mask 0080000000 - apic id 2f


Package 0 Cache and Thread details


Box Description:
Cache  is cache level designator
Size   is cache size
OScpu# is cpu # as seen by OS
Core   is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
       CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
       where # is number of zeroes (so '8z5' is '0x800000')
L1D is Level 1 Data cache, size(KBytes)= 32,  Cores/cache= 2, Caches/package= 8
L1I is Level 1 Instruction cache, size(KBytes)= 32,  Cores/cache= 2, Caches/package= 8
L2 is Level 2 Unified cache, size(KBytes)= 256,  Cores/cache= 2, Caches/package= 8
L3 is Level 3 Unified cache, size(KBytes)= 20480,  Cores/cache= 16, Caches/package= 1
      +-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+
Cache |     L1D         |     L1D         |     L1D         |     L1D         |     L1D         |     L1D         |     L1D         |     L1D         |
Size  |     32K         |     32K         |     32K         |     32K         |     32K         |     32K         |     32K         |     32K         |
OScpu#|       0       16|       1       17|       2       18|       3       19|       4       20|       5       21|       6       22|       7       23|
Core  |   c0_t0    c0_t1|   c1_t0    c1_t1|   c2_t0    c2_t1|   c3_t0    c3_t1|   c4_t0    c4_t1|   c5_t0    c5_t1|   c6_t0    c6_t1|   c7_t0    c7_t1|
AffMsk|       1      1z4|       2      2z4|       4      4z4|       8      8z4|      10      1z5|      20      2z5|      40      4z5|      80      8z5|
CmbMsk|   10001         |   20002         |   40004         |   80008         |  100010         |  200020         |  400040         |  800080         |
      +-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+

Cache |     L1I         |     L1I         |     L1I         |     L1I         |     L1I         |     L1I         |     L1I         |     L1I         |
Size  |     32K         |     32K         |     32K         |     32K         |     32K         |     32K         |     32K         |     32K         |
      +-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+

Cache |      L2         |      L2         |      L2         |      L2         |      L2         |      L2         |      L2         |      L2         |
Size  |    256K         |    256K         |    256K         |    256K         |    256K         |    256K         |    256K         |    256K         |
      +-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+

Cache |      L3                                                                                                                                       |
Size  |     20M                                                                                                                                       |
CmbMsk|  ff00ff                                                                                                                                       |
      +-----------------------------------------------------------------------------------------------------------------------------------------------+

Combined socket AffinityMask= 0xff00ff


Package 1 Cache and Thread details


Box Description:
Cache  is cache level designator
Size   is cache size
OScpu# is cpu # as seen by OS
Core   is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
       CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
       where # is number of zeroes (so '8z5' is '0x800000')
      +-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+
Cache |     L1D         |     L1D         |     L1D         |     L1D         |     L1D         |     L1D         |     L1D         |     L1D         |
Size  |     32K         |     32K         |     32K         |     32K         |     32K         |     32K         |     32K         |     32K         |
OScpu#|       8       24|       9       25|      10       26|      11       27|      12       28|      13       29|      14       30|      15       31|
Core  |   c0_t0    c0_t1|   c1_t0    c1_t1|   c2_t0    c2_t1|   c3_t0    c3_t1|   c4_t0    c4_t1|   c5_t0    c5_t1|   c6_t0    c6_t1|   c7_t0    c7_t1|
AffMsk|     100      1z6|     200      2z6|     400      4z6|     800      8z6|     1z3      1z7|     2z3      2z7|     4z3      4z7|     8z3      8z7|
CmbMsk| 1000100         | 2000200         | 4000400         | 8000800         | 10001z3         | 20002z3         | 40004z3         | 80008z3         |
      +-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+

Cache |     L1I         |     L1I         |     L1I         |     L1I         |     L1I         |     L1I         |     L1I         |     L1I         |
Size  |     32K         |     32K         |     32K         |     32K         |     32K         |     32K         |     32K         |     32K         |
      +-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+

Cache |      L2         |      L2         |      L2         |      L2         |      L2         |      L2         |      L2         |      L2         |
Size  |    256K         |    256K         |    256K         |    256K         |    256K         |    256K         |    256K         |    256K         |
      +-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+-----------------+

Cache |      L3                                                                                                                                       |
Size  |     20M                                                                                                                                       |
CmbMsk|ff00ff00                                                                                                                                       |
      +-----------------------------------------------------------------------------------------------------------------------------------------------+
}}}
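The extended-hex notation in the boxes above ('8z5' for 0x800000) is easy to decode mechanically. A minimal sketch — the function name is mine, the notation is the tool's:

```python
def decode_zhex(s):
    """Decode the topology tool's extended hex: '8z5' = 8 followed by 5 zeroes."""
    if 'z' in s:
        digits, zeros = s.split('z')
        s = digits + '0' * int(zeros)  # expand the compressed trailing zeroes
    return int(s, 16)

# '1z3' from the AffMsk rows expands to 0x1000 (OS cpu 12's bit)
mask = decode_zhex('1z3')
```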


! intel turbostat
{{{
[root@enkx3db01 ~]# ./turbostat
pkg core CPU   %c0   GHz  TSC   %c1    %c3    %c6    %c7   %pc2   %pc3   %pc6   %pc7
               0.73 1.99 2.89  99.27   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   0   0   1.71 1.86 2.89  98.29   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   0  16   0.82 1.88 2.89  99.18   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   1   1   3.66 1.60 2.89  96.34   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   1  17   3.34 1.97 2.89  96.66   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   2   2   0.20 2.12 2.89  99.80   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   2  18   0.32 2.68 2.89  99.68   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   3   3   0.43 2.28 2.89  99.57   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   3  19   0.32 1.47 2.89  99.68   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   4   4   0.14 2.61 2.89  99.86   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   4  20   0.14 1.90 2.89  99.86   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   5   5   0.09 1.98 2.89  99.91   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   5  21   0.18 1.80 2.89  99.82   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   6   6   0.14 1.94 2.89  99.86   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   6  22   0.03 2.12 2.89  99.97   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   7   7   0.02 2.28 2.89  99.98   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   7  23   0.02 2.02 2.89  99.98   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   0   8   3.49 2.37 2.89  96.51   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   0  24   1.30 2.48 2.89  98.70   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   1   9   0.85 2.39 2.89  99.15   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   1  25   0.54 2.66 2.89  99.46   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   2  10   0.49 1.92 2.89  99.51   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   2  26   0.23 2.17 2.89  99.77   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   3  11   0.24 2.18 2.89  99.76   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   3  27   0.57 1.65 2.89  99.43   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   4  12   0.22 2.30 2.89  99.78   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   4  28   0.28 2.10 2.89  99.72   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   5  13   0.44 1.79 2.89  99.56   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   5  29   0.10 2.02 2.89  99.90   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   6  14   0.05 2.46 2.89  99.95   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   6  30   0.06 2.44 2.89  99.94   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   7  15   2.24 1.44 2.89  97.76   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   7  31   0.70 2.23 2.89  99.30   0.00   0.00   0.00   0.00   0.00   0.00   0.00
}}}
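To pull the busiest hardware threads out of a capture like the one above, the per-CPU rows can be parsed positionally. A sketch assuming this sample's column layout (pkg, core, CPU, %c0, ...):

```python
# Two per-CPU rows copied from the turbostat output above
SAMPLE = """\
   0   0   0   1.71 1.86 2.89  98.29   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   1   1   3.66 1.60 2.89  96.34   0.00   0.00   0.00   0.00   0.00   0.00   0.00
"""

rows = []
for line in SAMPLE.splitlines():
    f = line.split()
    # f[2] is the OS cpu#, f[3] is %c0 (time the thread spent executing)
    rows.append({"cpu": int(f[2]), "pct_c0": float(f[3])})

busiest = max(rows, key=lambda r: r["pct_c0"])
```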






turbo mode is disabled in the BIOS configuration:
{{{
          <!-- Turbo Mode -->
          <!-- Description: Turbo Mode. -->
          <!-- Possible Values: "Disabled", "Enabled" -->
          <Turbo_Mode>Disabled</Turbo_Mode>
}}}

! cpu_topology script
{{{
[root@enkx3db02 cpu-topology]# sh ~root/cpu_topology
        Product Name: SUN FIRE X4170 M3
        Product Name: ASSY,MOTHERBOARD,1U
model name      : Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
processors  (OS CPU count)          0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
physical id (processor socket)      0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
siblings    (logical CPUs/socket)   8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8
core id     (# assigned to a core)  0 1 6 7 0 1 6 7 0 1 6 7 0 1 6 7
cpu cores   (physical cores/socket) 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
}}}
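The four rows the script prints come straight from /proc/cpuinfo. A sketch of collecting them per logical CPU — the sample stanza is inlined here, field names as in /proc/cpuinfo:

```python
# Minimal /proc/cpuinfo excerpt for a hyperthread pair (OS cpus 0 and 8)
CPUINFO = """\
processor\t: 0
physical id\t: 0
core id\t: 0
cpu cores\t: 4

processor\t: 8
physical id\t: 0
core id\t: 0
cpu cores\t: 4
"""

def topology(text):
    """Return (processor, physical id, core id) per logical CPU stanza."""
    cpus = []
    for stanza in text.strip().split("\n\n"):
        fields = dict(line.split("\t: ") for line in stanza.splitlines())
        cpus.append((int(fields["processor"]),
                     int(fields["physical id"]),
                     int(fields["core id"])))
    return cpus
```

Two logical CPUs sharing the same (physical id, core id) pair — as 0 and 8 do here — are sibling hardware threads of one core.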


! intel cpu topology tool
{{{
[root@enkx3db02 cpu-topology]# ./cpu_topology64.out


        Advisory to Users on system topology enumeration

This utility is for demonstration purpose only. It assumes the hardware topology
configuration within a coherent domain does not change during the life of an OS
session. If an OS support advanced features that can change hardware topology
configurations, more sophisticated adaptation may be necessary to account for
the hardware configuration change that might have added and reduced the number
of logical processors being managed by the OS.

User should also be aware that the system topology enumeration algorithm is
based on the assumption that CPUID instruction will return raw data reflecting
the native hardware configuration. When an application runs inside a virtual
machine hosted by a Virtual Machine Monitor (VMM), any CPUID instructions
issued by an app (or a guest OS) are trapped by the VMM and it is the VMM's
responsibility and decision to emulate/supply CPUID return data to the virtual
machines. When deploying topology enumeration code based on querying CPUID
inside a VM environment, the user must consult with the VMM vendor on how an VMM
will emulate CPUID instruction relating to topology enumeration.



        Software visible enumeration in the system:
Number of logical processors visible to the OS: 16
Number of logical processors visible to this process: 16
Number of processor cores visible to this process: 8
Number of physical packages visible to this process: 2


        Hierarchical counts by levels of processor topology:
 # of cores in package  0 visible to this process: 4 .
         # of logical processors in Core 0 visible to this process: 2 .
         # of logical processors in Core  1 visible to this process: 2 .
         # of logical processors in Core  2 visible to this process: 2 .
         # of logical processors in Core  3 visible to this process: 2 .
 # of cores in package  1 visible to this process: 4 .
         # of logical processors in Core 0 visible to this process: 2 .
         # of logical processors in Core  1 visible to this process: 2 .
         # of logical processors in Core  2 visible to this process: 2 .
         # of logical processors in Core  3 visible to this process: 2 .


        Affinity masks per SMT thread, per core, per package:
Individual:
        P:0, C:0, T:0 --> 1
        P:0, C:0, T:1 --> 100

Core-aggregated:
        P:0, C:0 --> 101
Individual:
        P:0, C:1, T:0 --> 2
        P:0, C:1, T:1 --> 200

Core-aggregated:
        P:0, C:1 --> 202
Individual:
        P:0, C:2, T:0 --> 4
        P:0, C:2, T:1 --> 400

Core-aggregated:
        P:0, C:2 --> 404
Individual:
        P:0, C:3, T:0 --> 8
        P:0, C:3, T:1 --> 800

Core-aggregated:
        P:0, C:3 --> 808

Pkg-aggregated:
        P:0 --> f0f
Individual:
        P:1, C:0, T:0 --> 10
        P:1, C:0, T:1 --> 1z3

Core-aggregated:
        P:1, C:0 --> 1010
Individual:
        P:1, C:1, T:0 --> 20
        P:1, C:1, T:1 --> 2z3

Core-aggregated:
        P:1, C:1 --> 2020
Individual:
        P:1, C:2, T:0 --> 40
        P:1, C:2, T:1 --> 4z3

Core-aggregated:
        P:1, C:2 --> 4040
Individual:
        P:1, C:3, T:0 --> 80
        P:1, C:3, T:1 --> 8z3

Core-aggregated:
        P:1, C:3 --> 8080

Pkg-aggregated:
        P:1 --> f0f0


        APIC ID listings from affinity masks
OS cpu   0, Affinity mask   000001 - apic id 0
OS cpu   1, Affinity mask   000002 - apic id 2
OS cpu   2, Affinity mask   000004 - apic id c
OS cpu   3, Affinity mask   000008 - apic id e
OS cpu   4, Affinity mask   000010 - apic id 20
OS cpu   5, Affinity mask   000020 - apic id 22
OS cpu   6, Affinity mask   000040 - apic id 2c
OS cpu   7, Affinity mask   000080 - apic id 2e
OS cpu   8, Affinity mask   000100 - apic id 1
OS cpu   9, Affinity mask   000200 - apic id 3
OS cpu  10, Affinity mask   000400 - apic id d
OS cpu  11, Affinity mask   000800 - apic id f
OS cpu  12, Affinity mask   001000 - apic id 21
OS cpu  13, Affinity mask   002000 - apic id 23
OS cpu  14, Affinity mask   004000 - apic id 2d
OS cpu  15, Affinity mask   008000 - apic id 2f


Package 0 Cache and Thread details


Box Description:
Cache  is cache level designator
Size   is cache size
OScpu# is cpu # as seen by OS
Core   is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
       CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
       where # is number of zeroes (so '8z5' is '0x800000')
L1D is Level 1 Data cache, size(KBytes)= 32,  Cores/cache= 2, Caches/package= 4
L1I is Level 1 Instruction cache, size(KBytes)= 32,  Cores/cache= 2, Caches/package= 4
L2 is Level 2 Unified cache, size(KBytes)= 256,  Cores/cache= 2, Caches/package= 4
L3 is Level 3 Unified cache, size(KBytes)= 20480,  Cores/cache= 8, Caches/package= 1
      +-----------+-----------+-----------+-----------+
Cache |  L1D      |  L1D      |  L1D      |  L1D      |
Size  |  32K      |  32K      |  32K      |  32K      |
OScpu#|    0     8|    1     9|    2    10|    3    11|
Core  |c0_t0 c0_t1|c1_t0 c1_t1|c2_t0 c2_t1|c3_t0 c3_t1|
AffMsk|    1   100|    2   200|    4   400|    8   800|
CmbMsk|  101      |  202      |  404      |  808      |
      +-----------+-----------+-----------+-----------+

Cache |  L1I      |  L1I      |  L1I      |  L1I      |
Size  |  32K      |  32K      |  32K      |  32K      |
      +-----------+-----------+-----------+-----------+

Cache |   L2      |   L2      |   L2      |   L2      |
Size  | 256K      | 256K      | 256K      | 256K      |
      +-----------+-----------+-----------+-----------+

Cache |   L3                                          |
Size  |  20M                                          |
CmbMsk|  f0f                                          |
      +-----------------------------------------------+

Combined socket AffinityMask= 0xf0f


Package 1 Cache and Thread details


Box Description:
Cache  is cache level designator
Size   is cache size
OScpu# is cpu # as seen by OS
Core   is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
       CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
       where # is number of zeroes (so '8z5' is '0x800000')
      +-----------+-----------+-----------+-----------+
Cache |  L1D      |  L1D      |  L1D      |  L1D      |
Size  |  32K      |  32K      |  32K      |  32K      |
OScpu#|    4    12|    5    13|    6    14|    7    15|
Core  |c0_t0 c0_t1|c1_t0 c1_t1|c2_t0 c2_t1|c3_t0 c3_t1|
AffMsk|   10   1z3|   20   2z3|   40   4z3|   80   8z3|
CmbMsk| 1010      | 2020      | 4040      | 8080      |
      +-----------+-----------+-----------+-----------+

Cache |  L1I      |  L1I      |  L1I      |  L1I      |
Size  |  32K      |  32K      |  32K      |  32K      |
      +-----------+-----------+-----------+-----------+

Cache |   L2      |   L2      |   L2      |   L2      |
Size  | 256K      | 256K      | 256K      | 256K      |
      +-----------+-----------+-----------+-----------+

Cache |   L3                                          |
Size  |  20M                                          |
CmbMsk| f0f0                                          |
      +-----------------------------------------------+
}}}
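The AffMsk and CmbMsk values above are plain bitmasks over OS cpu numbers — bit n is set for OS cpu n, and the aggregated masks just OR the members together. A sketch:

```python
def aff_mask(os_cpus):
    """OR together one bit per OS cpu number."""
    mask = 0
    for c in os_cpus:
        mask |= 1 << c
    return mask

# Package 0, core 0 owns OS cpus 0 and 8 -> core-aggregated mask 0x101
core0 = aff_mask([0, 8])
# All of package 0 (OS cpus 0-3 and 8-11) -> 0xf0f, the Pkg-aggregated line
pkg0 = aff_mask([0, 1, 2, 3, 8, 9, 10, 11])
```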


! intel turbostat
{{{
[root@enkx3db02 ~]# ./turbostat
pkg core CPU   %c0   GHz  TSC   %c1    %c3    %c6    %c7   %pc2   %pc3   %pc6   %pc7
               2.05 2.42 2.89  97.95   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   0   0   3.19 1.93 2.89  96.81   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   0   8   2.09 1.93 2.89  97.91   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   1   1   4.14 2.22 2.89  95.86   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   1   9  10.10 2.66 2.89  89.90   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   6   2   0.89 1.98 2.89  99.11   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   6  10   5.12 2.79 2.89  94.88   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   7   3   0.40 2.26 2.89  99.60   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   0   7  11   0.46 2.33 2.89  99.54   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   0   4   1.86 2.07 2.89  98.14   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   0  12   0.53 2.33 2.89  99.47   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   1   5   0.57 2.45 2.89  99.43   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   1  13   0.95 2.55 2.89  99.05   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   6   6   0.58 1.62 2.89  99.42   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   6  14   1.04 2.68 2.89  98.96   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   7   7   0.31 2.18 2.89  99.69   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   1   7  15   0.58 2.75 2.89  99.42   0.00   0.00   0.00   0.00   0.00   0.00   0.00
}}}
http://rnm1978.wordpress.com/2011/02/02/instrumenting-obiee-for-tracing-oracle-db-calls/
http://rnm1978.wordpress.com/2010/01/26/identify-your-users-by-setting-client-id-in-oracle/

http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php
http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php

http://method-r.com/software/mrtools
http://method-r.com/component/content/article/115 <-- mrls
http://method-r.com/component/content/article/116 <-- mrnl
http://method-r.com/component/content/article/117 <-- mrskew

http://appsdba.com/docs/orcl_event_6340.html <-- trace file event timeline 
http://www.appsdba.com/blog/?category_name=oracle-dba&paged=2
http://www.appsdba.com/blog/?p=109 <-- trace file execution tree
http://appsdba.com/utilities_resource.htm 

http://www.juliandyke.com/Diagnostics/Trace/EnablingTrace.html
http://www.rittmanmead.com/2005/04/tracing-parallel-execution/
http://www.antognini.ch/2012/08/event-10046-full-list-of-levels/
http://www.sagecomputing.com.au/papers_presentations/lostwithoutatrace.pdf   <- good stuff, with sample codes
http://www.oracle-base.com/articles/8i/DBMS_APPLICATION_INFO.php    <- DBMS_APPLICATION_INFO : For Code Instrumentation
http://www.oracle-base.com/articles/misc/DBMS_SESSION.php <- DBMS_SESSION : Managing Sessions From a Connection Pool in Oracle Databases
http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php
http://www.petefinnigan.com/ramblings/how_to_set_trace.htm
http://psoug.org/reference/dbms_monitor.html
http://psoug.org/reference/dbms_applic_info.html
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:49818662859946

How to: Trace the SQL executed by SYSMAN Using a Trigger [ID 400937.1]
{{{
CREATE OR REPLACE TRIGGER logontrig AFTER logon ON database 
begin 
if ora_login_user = 'SYSMAN' then 
execute immediate 'alter session set tracefile_identifier = '||'SYSMAN'; 
execute immediate 'Alter session set events ''10046 trace name context forever, level 12'''; 
end if; 
end;
/
}}}


Capture 10046 Traces Upon User Login (without using a trigger) [ID 371678.1]
http://dbmentors.blogspot.com/2011/09/using-dbmsmonitor.html
http://docs.oracle.com/cd/B28359_01/network.111/b28531/app_context.htm <- application context
https://method-r.fogbugz.com/default.asp?method-r.11.139.2 <- hotsos ILO 
http://www.databasejournal.com/features/oracle/article.php/3435431/Oracle-Session-Tracing-Part-I.htm   <- Oracle Session Tracing Part I







''per module''
{{{
exec DBMS_MONITOR.serv_mod_act_trace_enable (service_name => 'FSTSTAH', module_name => 'EX_APPROVAL');
exec DBMS_MONITOR.serv_mod_act_trace_disable (service_name => 'FSTSTAH', module_name => 'EX_APPROVAL');
trcsess output=client.trc module=EX_APPROVAL *.trc
./orasrp --aggregate=no --binds=0 --recognize-idle-events=no --sys=no client.trc fsprd.html
tkprof client.trc client.tkprof sort=exeela 
}}}

''grep tkprof SQLs''
{{{
less client.tkprof-webapp | grep -B3 -A30 "SELECT L2.TREE_NODE_NUM" | egrep "SQL ID|total" | less

SQL ID: 9gxa3r2v0mkzp Plan Hash: 751140913
total       24      3.65       3.65          0       9103          0        4294
SQL ID: 9zssps0292n9m Plan Hash: 2156210208
total       17      2.64       2.64          0     206748          0        2901
SQL ID: 034a6u0h7psb1 Plan Hash: 2156210208
total        3      0.18       0.18          0       8929          0           4
SQL ID: 2yr2m4xfb14z0 Plan Hash: 4136997945
total        3      0.18       0.18          0       9102          0           3
SQL ID: 0rurft7y2paks Plan Hash: 3656446192
total       14      3.62       3.62          0       9102          0        2391
SQL ID: 99ugjzcz1j1r4 Plan Hash: 2156210208
total       24      2.62       2.62          0     206749          0        4337
SQL ID: 5fgb0cvhqy8w2 Plan Hash: 2156210208
total       28      3.26       3.26          0     215957          0        5077
SQL ID: amrb5fkaysu2r Plan Hash: 2156210208
total        3      0.14       0.14          0      11367          0           3
SQL ID: 3d6u5vjh1y5ny Plan Hash: 2156210208
total       20      3.26       3.27          0     215956          0        3450

}}}


{{{
select service_name, module from v$session where module = 'EX_APPROVAL'
 
SERVICE_NAME                                                     MODULE
---------------------------------------------------------------- ----------------------------------------------------------------
FSPRDOL                                                          EX_APPROVAL
FSPRDOL                                                          EX_APPROVAL
FSPRDOL                                                          EX_APPROVAL
FSPRDOL                                                          EX_APPROVAL
FSPRDOL                                                          EX_APPROVAL
FSPRDOL                                                          EX_APPROVAL
FSPRDOL                                                          EX_APPROVAL
FSPRDOL                                                          EX_APPROVAL
FSPRDOL                                                          EX_APPROVAL
 
9 rows selected.
 
 
 
SYS@fsprd2> SELECT * FROM DBA_ENABLED_TRACES ;
SYS@fsprd2>
SYS@fsprd2> /
 
no rows selected
 
SYS@fsprd2>
SYS@fsprd2>
SYS@fsprd2> exec DBMS_MONITOR.serv_mod_act_trace_enable (service_name => 'FSPRDOL', module_name => 'EX_APPROVAL');
 
PL/SQL procedure successfully completed.
 
 
SELECT 
TRACE_TYPE,
PRIMARY_ID,
QUALIFIER_ID1,
waits,
binds
FROM DBA_ENABLED_TRACES;
 
 
TRACE_TYPE            PRIMARY_ID                                                       QUALIFIER_ID1                                WAITS BINDS
--------------------- ---------------------------------------------------------------- ------------------------------------------------ ----- -----
SERVICE_MODULE        FSPRDOL                                                          EX_APPROVAL                                  TRUE  FALSE
 


--To disable
 exec DBMS_MONITOR.serv_mod_act_trace_disable (service_name => 'FSPRDOL', module_name => 'EX_APPROVAL');
}}}
<<showtoc>>


! 10046 and 10053

* when both events are set, the 10046 and 10053 data are contained in one trace file
* you can parse the combined file with tv10053.exe, but not with lab128's v10053.exe

* you may have to regenerate separate 10046 and 10053 traces for a less noisy session call graph on the 10046 report
* if you want separate runs of 10046 and 10053, remove the 10053 event from the testcase file and use DBMS_SQLDIAG.DUMP_TRACE at the end of the SQL execution as shown here [[10053]]

{{{

+++10046_10053++++
sqlplus <app user>/<pwd>

alter session set timed_statistics = true;
alter session set statistics_level=ALL;
alter session set max_dump_file_size=UNLIMITED;
alter session set tracefile_identifier='10046_10053';
alter session set events '10046 trace name context forever, level 12';
alter session set events '10053 trace name context forever, level 1';

>>>here run the query

--run dummy query to close cursor
select 1 from dual;

exit;

Find trc with suffix "10046_10053" in <diag> directory and upload it to the SR.

To find all trace files for the current instance >>>>> SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';


select tracefile from v$process where addr=(select paddr from v$session where sid=sys_context('userenv','sid'));


}}}



! time series short_stack 

[[genstack loop, short_stack loop, time series short_stack]]




! perf and flamegraph

[[Flamegraph using SQL]]


<<showtoc>>


11g
http://structureddata.org/2011/08/18/creating-optimizer-trace-files/?utm_source=rss&utm_medium=rss&utm_campaign=creating-optimizer-trace-files

Examining the Oracle Database 10053 Trace Event Dump File
http://www.databasejournal.com/features/oracle/article.php/3894901/article.htm

Don Seiler
http://seilerwerks.wordpress.com/2007/08/17/dr-statslove-or-how-i-learned-to-stop-guessing-and-love-the-10053-trace/



! new way 
{{{

-- execute the SQL here 


-- put this at the end of the testcase file
BEGIN
  DBMS_SQLDIAG.DUMP_TRACE (
      p_sql_id    => 'd4cdk8w5sazzq',
      p_child_number=> 0,
      p_component => 'Compiler',
      p_file_id   => 'TESTCASE_COLUMN_GROUP_C0');
END;
/

BEGIN
  DBMS_SQLDIAG.DUMP_TRACE (
      p_sql_id    => 'd4cdk8w5sazzq',
      p_child_number=> 1,
      p_component => 'Compiler',
      p_file_id   => 'TESTCASE_COLUMN_GROUP_C1');
END;
/


select value from v$diag_info where name = 'Default Trace File';

select tracefile from v$process where addr=(select paddr from v$session where sid=sys_context('userenv','sid'));

rm  /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/*TESTCASE*

ls -ltr /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/*TESTCASE*

$ ls -ltr /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/*TESTCASE*
-rw-r-----. 1 oracle oinstall 326371 Aug 17 10:49 /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_20180_TESTCASE_COLUMN_GROUP_C0.trm
-rw-r-----. 1 oracle oinstall 760584 Aug 17 10:49 /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_20180_TESTCASE_COLUMN_GROUP_C0.trc
-rw-r-----. 1 oracle oinstall 323253 Aug 17 10:49 /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_20180_TESTCASE_COLUMN_GROUP_C1.trm
-rw-r-----. 1 oracle oinstall 751171 Aug 17 10:49 /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_20180_TESTCASE_COLUMN_GROUP_C1.trc
-rw-r-----. 1 oracle oinstall 318794 Aug 17 10:50 /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_20274_TESTCASE_NO_COLUMN_GROUP_C0.trm
-rw-r-----. 1 oracle oinstall 745873 Aug 17 10:50 /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_20274_TESTCASE_NO_COLUMN_GROUP_C0.trc
-rw-r-----. 1 oracle oinstall 318767 Aug 17 10:50 /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_20274_TESTCASE_NO_COLUMN_GROUP_C1.trm
-rw-r-----. 1 oracle oinstall 745875 Aug 17 10:50 /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_20274_TESTCASE_NO_COLUMN_GROUP_C1.trc

}}}


! generic 
{{{
99g0fgyrhb4n7

BEGIN
  DBMS_SQLDIAG.DUMP_TRACE (
      p_sql_id    => 'bmd4dk0p4r0pc',
      p_child_number=> 0,
      p_component => 'Compiler',
      p_file_id   => 'bmd4dk0p4r0pc');
END;
/

select tracefile from v$process where addr=(select paddr from v$session where sid=sys_context('userenv','sid'));


mv /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_24762_TC_NOLOB_PEEKED.trc . 

cat orclcdb_ora_19285_TCPEEKED.trc | grep -hE "^DP|^AP"


mv /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_25322_TC_NOLOB_ACTUAL.trc .

cat orclcdb_ora_25322_TC_NOLOB_ACTUAL.trc | grep -hE "^DP|^AP"


cat /u01/app/oracle/diag/rdbms/orclcdb/orclcdb/trace/orclcdb_ora_3776_bmd4dk0p4r0pc.trc  | grep -hE "^DP|^AP"
}}}


! your own session
{{{
trace the session

ALTER SESSION SET TRACEFILE_IDENTIFIER='LIO_TRACE';
ALTER SESSION SET EVENTS '10200 TRACE NAME CONTEXT FOREVER, LEVEL 1';

Then take the occurrence of the LIO reasons

$ less emrep_ora_9946_WATCH_CONSISTENT.trc | grep "started for block" | awk '{print $1} ' | sort | uniq -c
    324 ktrget2():
     44 ktrgtc2():


I found this too, which is more on tracking the objects
http://hoopercharles.wordpress.com/2011/01/24/watching-consistent-gets-10200-trace-file-parser/
}}}

! another session
{{{

1) create the files ss.sql and getlio.awk (see below)

2) get the sid and serial# and trace file name

SELECT s.sid, 
s.serial#,
s.server, 
lower( 
CASE 
WHEN s.server IN ('DEDICATED','SHARED') THEN 
i.instance_name || '_' || 
nvl(pp.server_name, nvl(ss.name, 'ora')) || '_' || 
p.spid || '.trc' 
ELSE NULL 
END 
) AS trace_file_name 
FROM v$instance i, 
v$session s, 
v$process p, 
v$px_process pp, 
v$shared_server ss 
WHERE s.paddr = p.addr 
AND s.sid = pp.sid (+) 
AND s.paddr = ss.paddr(+) 
AND s.type = 'USER' 
ORDER BY s.sid;

3) to start trace, set the 10200 event level 1

exec sys.dbms_system.set_ev(200   ,   11667, 10200, 1, '');

4) monitor the file size

while : ; do du -sm dw_ora_18177.trc ; echo "--" ; sleep 2 ; done

5) execute ss.sql on the sid for 5 times

6) to stop trace, set the 10200 event level 0

exec sys.dbms_system.set_ev(200   ,   11667, 10200, 0, '');

7) process the trace file and the oradebug output

-- get the top objects
awk -v trcfile=dw_ora_18177.trc -f getlio.awk

-- get the function names
less dw_ora_18177.trc | grep "started for block" | awk '{print $1} ' | sort | uniq -c

8) SQL to get the object names

	SELECT
	  OBJECT_NAME,
	  DATA_OBJECT_ID,
	  TO_CHAR(DATA_OBJECT_ID, 'XXXXX') HEX_DATA_OBJECT_ID
	FROM
	  DBA_OBJECTS
	WHERE
	  DATA_OBJECT_ID IN(
	    TO_NUMBER('15ced', 'XXXXX'))
	/

	OBJECT_NAME                                                                                                                      DATA_OBJECT_ID HEX_DA
	-------------------------------------------------------------------------------------------------------------------------------- -------------- ------
	OBJ$                                                                                                                                         18     12


	Summary obj for file: dw_ora_18177.trc
	---------------------------------
	0x00000012 2781466


	2781466 ktrget2():



#### ss.sql and getlio.awk scripts below

cat ss.sql
oradebug setospid &spid
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack
oradebug short_stack



$ cat getlio.awk
BEGIN {
   FS ="[ \t<>:]+"
    print "Details for file: " trcfile
   print "---------------------------------"
   while( getline < trcfile != EOF ){
      if ( $0 ~ /started for block/ ) {
      rdba[$6]+=1
      obj[$8]+=1
      both[$6","$8]+=1
      #print $6 " " rdba[$6] ", " $8 " " obj[$8]
      }
   }
   close (trcfile)
   print ""

   print ""
   print "Summary rdba and obj for file: " trcfile
   print "---------------------------------"
   for ( var in both) {
      #print var " " both[var]
   }

   print ""
   print "Summary obj for file: " trcfile
   print "---------------------------------"
   for ( var in obj ) {
      print var " " obj[var]
   }
}

}}}
https://leetcode.com/problems/customers-who-bought-all-products/
{{{
1045. Customers Who Bought All Products
Medium

Table: Customer

+-------------+---------+
| Column Name | Type    |
+-------------+---------+
| customer_id | int     |
| product_key | int     |
+-------------+---------+
product_key is a foreign key to Product table.

Table: Product

+-------------+---------+
| Column Name | Type    |
+-------------+---------+
| product_key | int     |
+-------------+---------+
product_key is the primary key column for this table.

 

Write an SQL query for a report that provides the customer ids from the Customer table that bought all the products in the Product table.

For example:

Customer table:
+-------------+-------------+
| customer_id | product_key |
+-------------+-------------+
| 1           | 5           |
| 2           | 6           |
| 3           | 5           |
| 3           | 6           |
| 1           | 6           |
+-------------+-------------+

Product table:
+-------------+
| product_key |
+-------------+
| 5           |
| 6           |
+-------------+

Result table:
+-------------+
| customer_id |
+-------------+
| 1           |
| 3           |
+-------------+
The customers who bought all the products (5 and 6) are customers with id 1 and 3.

Accepted
4,086
Submissions
6,109
}}}

{{{
-- note: the SUM(DISTINCT ...) comparison below only works because product_key
-- is a (positive) foreign key into product, so each customer's keys form a
-- subset of product's keys; the COUNT(DISTINCT ...) form at the end of this
-- block is the safer, standard solution
select customer_id
from customer
group by customer_id
having sum(distinct(product_key)) = (select sum(distinct(product_key)) from product)

-- select a.customer_id 
-- from customer a, product b
-- where a.product_key = b.product_key;


select
customer_id
from customer
group by customer_id
having count(distinct product_key) in (select count(*) from product);
}}}
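The COUNT(DISTINCT ...) form of the query can be checked against the sample data with any SQL engine; a quick self-contained sketch using Python's built-in sqlite3 (table and column names follow the problem statement):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (customer_id INT, product_key INT);
    CREATE TABLE product (product_key INT PRIMARY KEY);
    INSERT INTO customer VALUES (1,5),(2,6),(3,5),(3,6),(1,6);
    INSERT INTO product VALUES (5),(6);
""")

# a customer qualifies when their distinct product count equals
# the total number of products (relational division)
rows = con.execute("""
    SELECT customer_id
    FROM customer
    GROUP BY customer_id
    HAVING COUNT(DISTINCT product_key) = (SELECT COUNT(*) FROM product)
    ORDER BY customer_id
""").fetchall()

print(rows)  # customers 1 and 3 bought every product
```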
http://www.freelists.org/post/oracle-l/SQL-High-version-count-because-of-too-many-varchar2-columns,12
http://t31808.db-oracle-general.databasetalk.us/sql-high-version-count-because-of-too-many-varchar2-columns-t31808.html

SQLs With Bind Variable Has Very High Version Count (Doc ID 258742.1)
{{{
event="10503 trace name context forever, level " 

For eg., if the maximum length of a bind variable in the application is 128, then 

event="10503 trace name context forever, level 128" 

The EVENT 10503 was added as a result of BUG:2450264 
This fix introduces the EVENT 10503 which enables users to specify a character bind buffer length. 
Depending on the length used, the character binds in the child cursor can all be created 
using the same bind length; 
skipping bind graduation and keeping the child chain relatively small. 
This helps to alleviate a potential cursor-sharing problem related to graduated binds. 

The level of the event is the bind length to use, in bytes. 
It is relevant for binds of types: 

Character (but NOT ANSI Fixed CHAR (type 96 == DTYAFC)) 
Raw 
Long Raw 
Long 

* The EVENT 10503 itself imposes no limit; the limits come from the datatypes 
above. For non-PL/SQL calls, the maximum bind buffer size is 4001 (bytes). For PL/SQL, 
the maximum bind buffer size is 32K. 

* Specifying a buffer length which is greater than the pre-set maximum will cause the 
pre-set maximum to be used. To go back to using the pre-set lengths, specify '0' for the buffer 
length. 


Test the patch and event in development environment before implementing in the production environment. 
}}}
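The cursor-sharing problem the event works around comes from bind graduation: character binds are internally rounded up to a small set of length buckets (commonly cited as 32, 128, 2000, and 4000 bytes — a well-known set, not something this note states), and a bind value that crosses a bucket boundary can force a new child cursor. An illustrative sketch of the two behaviors:

```python
# commonly cited bind-graduation buckets for character binds (bytes);
# illustrative only -- check your version's actual behavior
BUCKETS = [32, 128, 2000, 4000]

def graduated_length(bind_len):
    """Round a character bind length up to its graduation bucket
    (the default behavior, which can spawn extra child cursors)."""
    for b in BUCKETS:
        if bind_len <= b:
            return b
    return BUCKETS[-1]

def fixed_length(event_10503_level):
    """With event 10503 set, character binds all use the one configured
    length, capped at the pre-set non-PL/SQL maximum of 4001 bytes."""
    return min(event_10503_level, 4001)
```

With the event set to the application's maximum bind length (128 in the example above), every character bind lands in the same bucket, so the child-cursor chain stays short.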
! tuning
http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Mark_Wagner.pdf
http://www.oracle.com/technetwork/server-storage/vm/ovm3-10gbe-perf-1900032.pdf
http://dak1n1.com/blog/7-performance-tuning-intel-10gbe
-- this will hog your server's memory in no time
{{{
select count(*) from dual connect by 1=1;
}}}
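The reason this hogs memory: a CONNECT BY condition that is always true produces one row per hierarchy level without end, so the row source never stops generating and COUNT(*) never returns. The failure mode, sketched in Python (bounded here so it actually terminates):

```python
from itertools import count, islice

def connect_by_rows(predicate):
    """Yield one row per hierarchy level while the CONNECT BY
    condition holds -- with '1=1' it never stops."""
    for level in count(1):
        if not predicate(level):
            return
        yield level

# with a tautology every level qualifies; consuming the result
# grows without bound, so cap it here to stay safe
rows = list(islice(connect_by_rows(lambda level: True), 5))
print(rows)
```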

http://www.pythian.com/news/26003/rdbms-online-patching/

''Online Patching is a new feature introduced in 11.1.0.6. It will be delivered starting with RDBMS 11.2.0.2.0.''

http://goo.gl/2U3H3

http://apex.oracle.com/pls/apex/f?p=44785:24:0:::24:P24_CONTENT_ID,P24_PREV_PAGE:4679,1

RDBMS Online Patching Aka Hot Patching [ID 761111.1]
''Quick guide to package ORA- errors with ADRCI'' http://www.evernote.com/shard/s48/sh/e6086cd4-ab4e-4065-b145-323cfa545f80/a831bef2f6480f43c96bb23749df2710


http://goo.gl/mNnaD

''quick step by step'' https://support.oracle.com/CSP/main/article?cmd=show&type=ATT&id=443529.1:Steps&inline=1
How to Build a Testcase for Oracle Data Server Support to Reproduce ORA-600 and ORA-7445 Errors (Doc ID 232963.1)


To change the ADR base
<<<
ADR base = "/u01/app/oracle/product/11.2.0.3/dbhome_1/log"
adrci>
adrci>
''adrci> set base /u01/app/oracle''
adrci>
adrci> show home
ADR Homes:
diag/asm/+asm/+ASM4
diag/tnslsnr/pd01db04/listener
diag/tnslsnr/pd01db04/listener_fsprd
diag/tnslsnr/pd01db04/listener_temp
diag/tnslsnr/pd01db04/listener_mtaprd11
diag/tnslsnr/pd01db04/listener_scan2
diag/tnslsnr/pd01db04/listener_mvwprd
diag/tnslsnr/pd01db04/stat
diag/rdbms/dbm/dbm4
diag/rdbms/dbfsprd/DBFSPRD4
diag/rdbms/mtaprd11/mtaprd112
diag/rdbms/fsprd/fsprd2
diag/rdbms/fsqacdc/fsqa2
diag/rdbms/fsprddal/fsprd2
diag/rdbms/mtaprd11dal/mtaprd112
diag/rdbms/mvwprd/mvwprd2
diag/rdbms/mvwprddal/mvwprd2
diag/clients/user_oracle/host_783020838_80
diag/clients/user_oracle/host_783020838_11
<<<


{{{
Use ADRCI or SWB steps to create IPS packages

ADRCI
1. Enter ADRCI
   # adrci
2. Show the available ADR homes
   adrci> show home
3. Set the ADR home
   adrci> set home
4. Show all problems
   adrci> show problem
5. Show all incidents
   adrci> show incident
6. Package the diagnostic information for an incident
   adrci> ips pack incident <incident id>

SWB
1. Log in to Enterprise Manager
2. Click the link 'support workbench'
3. Select 'all active' problems
4. Click the 'problem id' to view the corresponding incidents
5. Select the appropriate incident
6. Click 'quick package'
7. Enter the package name and description, and choose whether to upload to Oracle Support
8. Review the package information
9. Select 'immediate' to create the package, and click the button 'submit'

For more information, please read the following notes.
Note 422893.1 - 11g Understanding Automatic Diagnostic Repository.
Note 1091653.1 - "11g Quick Steps - How to create an IPS package using Support Workbench" [Video]
Note 443529.1 - 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video] 
}}}

! purge
http://www.runshell.com/2013/01/oracle-how-to-purge-old-trace-and-dump.html


11g : Active Database Duplication
 	Doc ID:	Note:568034.1



-- DATABASE REPLAY

Oracle Database Replay Client Provisioning - Platform Download Matrix
  	Doc ID: 	815567.1

How To Find Database Replay Divergence Details [ID 1388309.1]


Oracle Database 11g: Interactive Quick Reference http://goo.gl/rQejT
{{{

New Products Installed in 11g:
------------------------------

1) Oracle APEX
	**- Installed by default

2) Oracle Warehouse Builder
	**- Installed by default

3) Oracle Configuration Manager
	- Offered, not installed by default
		two options:
			connected mode	
			disconnected mode

4) SQL Developer
	- Installed by default with template-based database installations
	- It is also installed with database client

5) Database Vault
	- Installed by default (OPTIONAL component - custom installation)



Changes in Install Options:
---------------------------

1) Oracle Configuration Manager
	- Starting 11g, Integrated with OUI (OPTIONAL component)

2) Oracle Data Mining
	- Selected on Enterprise Edition Installation type

3) Oracle Database Vault
	- Starting 11g, Integrated with OUI (OPTIONAL component - custom installation)

4) Oracle HTTP Server
	- Starting 11g, Available on separate media

5) Oracle Ultra Search
	- Starting 11g, Integrated with the Oracle Database

6) Oracle XML DB
	- Starting 11g, Installed by default



New Parameters:
---------------

MEMORY_TARGET
DIAGNOSTIC_DEST



New in ASM:
-----------

Automatic Storage Management Fast Mirror Resync
	see: Oracle Database Storage Administrator's Guide
SYSASM privilege
OSASM group



New Directories:
----------------

ADR_base/diag	<-- automatic diagnostic repository



Deprecated Components: 
----------------------

iSQL*Plus
Oracle Workflow
Oracle Data Mining Scoring Engine
Oracle Enterprise Manager Java Console




Overview of Installation:
-------------------------

CSS (Cluster Synchronization Services) does the synchronization between ASM and database instance
	for RAC, resides on Clusterware Home
	for Single Node-Single System, resides on home directory of ASM instance


Automatic Storage Management
	can be used starting 10.1.0.3 or later
	also, if you are 11.1 then you could use ASM from 10.1


Database Management Options:
	either you use:
	1) Enterprise Manager Grid Control
		Oracle Management Repository & Service --> Install Management Agent on each computer
	2) Local Database Control


Upgrading the database using RHEL 2.1 OS
	www.oracle.com/technology/tech/linux/pdf/rhel_23_upgrade.pdf



Preinstallation:
----------------


1) Logging In to the System as root

2) Checking the Hardware Requirements
	**NEW-parameters:
		memory_max_target
		memory_target

3) Checking the Software Requirements
	# Operating System Requirements
	# Kernel Requirements
	# Package Requirements
rpm -qa | grep -i "binutils"
rpm -qa | grep -i "compat-libstdc++"
rpm -qa | grep -i "elfutils-libelf"
rpm -qa | grep -i "elfutils-libelf-devel"
rpm -qa | grep -i "glibc"
rpm -qa | grep -i "glibc-common"
rpm -qa | grep -i "glibc-devel"
rpm -qa | grep -i "gcc"
rpm -qa | grep -i "gcc-c++"
rpm -qa | grep -i "libaio"
rpm -qa | grep -i "libaio-devel" 
rpm -qa | grep -i "libgcc"
rpm -qa | grep -i "libstdc++" 
rpm -qa | grep -i "libstdc++-devel"
rpm -qa | grep -i "make"
rpm -qa | grep -i "sysstat"
rpm -qa | grep -i "unixODBC"
rpm -qa | grep -i "unixODBC-devel"


NOT DISCOVERED:
rpm -qa | grep -i "elfutils-libelf-devel"
	dep: elfutils-libelf-devel-static-0.125-3.el5.i386.rpm
rpm -qa | grep -i "libaio-devel"
rpm -qa | grep -i "sysstat"
rpm -qa | grep -i "unixODBC"
rpm -qa | grep -i "unixODBC-devel"

	# Compiler Requirements
	# Additional Software Requirements

4) Preinstallation Requirements for Oracle Configuration Manager

5) Checking the Network Setup
	# Configuring Name Resolution
	# Installing on DHCP Computers
	# Installing on Multihomed Computers
	# Installing on Computers with Multiple Aliases
	# Installing on Non-Networked Computers

6) Creating Required Operating System Groups and Users
	**NEW-group:
		OSASM group...which has a usual name of "ASMADMIN"
		this group is for ASM storage administrators

groupadd oinstall
groupadd dba
groupadd oper
groupadd asmadmin
useradd -g oinstall -G dba,oper,asmadmin oracle

7) Configuring Kernel Parameters

in /etc/sysctl.conf
	# Controls the maximum shared segment size, in bytes
	kernel.shmmax = 4294967295
	
	# Controls the total amount of shared memory, in pages
	kernel.shmall = 268435456
	
	fs.file-max = 102552
	kernel.shmmni = 4096
	kernel.sem = 250 32000 100 128
	net.ipv4.ip_local_port_range = 1024 65000
	net.core.rmem_default = 4194304
	net.core.rmem_max = 4194304
	net.core.wmem_default = 262144
	net.core.wmem_max = 262144

to increase shell limits:
in /etc/security/limits.conf
	oracle              soft    nproc   2047
	oracle              hard    nproc   16384
	oracle              soft    nofile  1024
	oracle              hard    nofile  65536

in /etc/pam.d/login
	session    required     /lib/security/pam_limits.so
	session    required     pam_limits.so

in /etc/profile
	if [ $USER = "oracle" ]; then
		if [ $SHELL = "/bin/ksh" ]; then
		ulimit -p 16384
		ulimit -n 65536
		else
		ulimit -u 16384 -n 65536
		fi
	fi

8) Identifying Required Software Directories

9) Identifying or Creating an Oracle Base Directory
[root@localhost ~]# mkdir -p /u01/app
[root@localhost ~]# chown -R oracle:oinstall /u01/app
[root@localhost ~]# chmod -R 775 /u01/app

10) Choosing a Storage Option for Oracle Database and Recovery Files

11) Creating Directories for Oracle Database or Recovery Files
[root@localhost oracle]# mkdir flash_recovery_area
[root@localhost oracle]# chown oracle:oinstall flash_recovery_area/
[root@localhost oracle]# chmod 775 flash_recovery_area/

12) Preparing Disk Groups for an Automatic Storage Management Installation
13) Stopping Existing Oracle Processes
14) Configuring the oracle User's Environment
umask 022

export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
export ORACLE_BASE=/u01/app/oracle
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_SID=ora11

PATH=$ORACLE_HOME/bin:$PATH


}}}
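The long list of per-package `rpm -qa | grep -i ...` checks in the preinstallation notes above can be collapsed into one pass; a minimal sketch (the required-package list mirrors the notes, and the installed set is stubbed here rather than read from rpm, which is only hinted at in a comment):

```python
REQUIRED = [
    "binutils", "compat-libstdc++", "elfutils-libelf", "elfutils-libelf-devel",
    "glibc", "glibc-common", "glibc-devel", "gcc", "gcc-c++", "libaio",
    "libaio-devel", "libgcc", "libstdc++", "libstdc++-devel", "make",
    "sysstat", "unixODBC", "unixODBC-devel",
]

def missing_packages(required, installed):
    """Return required packages with no installed package name containing
    them (the same substring semantics as the grep checks)."""
    return [pkg for pkg in required
            if not any(pkg in name for name in installed)]

# stub of `rpm -qa` output; on a real host you could use something like:
#   installed = subprocess.run(["rpm", "-qa"], capture_output=True,
#                              text=True).stdout.splitlines()
installed = ["binutils-2.17.50", "glibc-2.5-118", "glibc-common-2.5-118",
             "gcc-4.1.2", "make-3.81", "libstdc++-4.1.2"]
print(missing_packages(REQUIRED, installed))
```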
Creating an Oracle ACFS File System https://docs.oracle.com/database/121/OSTMG/GUID-4C98CF06-8CCC-45F1-9316-C40FB3EFF268.htm#OSTMG94787
http://www.oracle-base.com/articles/11g/ACFS_11gR2.php

ACFS Technical Overview and Deployment Guide [ID 948187.1]  ''<-- ACFS now supports RMAN, DataPump on 11.2.0.3 above... BTW, it does not support archivelogs… You still have to have the FRA diskgroup to put your archivelogs/redo. At least you can have the ACFS as container of backupsets and data pump files''

''update''
11.2.0.3 now supports almost everything
http://docs.oracle.com/cd/E11882_01/server.112/e18951/asmfilesystem.htm#CACJFGCD
Starting with Oracle Automatic Storage Management 11g Release 2 (11.2.0.3), Oracle ACFS supports RMAN backups (BACKUPSET file type), archive logs (ARCHIVELOG file type), and Data Pump dumpsets (DUMPSET file type). Note that Oracle ACFS snapshots are not supported with these files.

''update 08/2014''
ACFS supported on Exadata
<<<
Creating ACFS file systems on Exadata storage requires the following:

Oracle Linux
Grid Infrastructure 12.1.0.2
Database files stored in ACFS on Exadata storage are subject to the following guidelines and restrictions:

Supported database versions are 10.2.0.4, 10.2.0.5, 11.2.0.4, and 12.1.
Hybrid Columnar Compression (HCC) support (for 11.2 and 12.1) requires fix for bug 19136936.
Exadata-offload features such as Smart Scan, Storage Indexes, IORM, Network RM, etc. are not supported.
Exadata Smart Flash Cache will cache read operations. Caching of write operations is expected in a later release.
No specialized cache hints are passed from the Database to the Exadata Storage layer, which means the Smart Flash Cache heuristics are based on I/O size, similar to any other block storage caching technology.
Exadata Smart Flash Logging is not supported.
Hardware Assisted Resilient Data (HARD) checks are not performed.
<<<


How To Install/Reinstall Or Deinstall ACFS Modules/Installation Manually? [ID 1371067.1]

http://www.oracle-base.com/articles/11g/DBFS_11gR2.php
http://ronnyegner.wordpress.com/2009/10/08/the-oracle-database-file-system-dbfs/
http://www.pythian.com/news/17849/chopt-utility/
http://perumal.org/enabling-and-disabling-database-options/
http://juliandyke.wordpress.com/2010/10/06/oracle-11-2-0-2-requires-multicasting-on-the-interconnect/
http://dbastreet.com/blog/?p=515
http://blog.ronnyegner-consulting.de/oracle-11g-release-2-install-guide/
{{{
the only difference on the databases that have DBV and TDE configured is that when 
DBAs try to create a user, it has to go through the dvadmin user. Other databases that don't have the 
DV schemas created and configured will still behave as is. 

Below is a sample of create a user in a DBV environment

SYS@dbv_1> SYS@dbv_1> select username from dba_users order by 1;

USERNAME
------------------------------
ANONYMOUS
APEX_030200
APEX_PUBLIC_USER
APPQOSSYS
BI
CTXSYS
DBSNMP
DIP
DVADMIN
DVF
DVOWNER
DVSYS

SYS@dbv_1> conn / as sysdba 
SYS@dbv_1> create user karlarao identified by karlarao;


create user karlarao identified by karlarao
                                   *
ERROR at line 1:
ORA-01031: insufficient privileges


SYS@dbv_1> conn dvadmin/<password>
Connected.
DVADMIN@dbv_1> create user karlarao identified by karlarao;

User created.
}}}
http://www.dpriver.com/blog/list-of-demos-illustrate-how-to-use-general-sql-parser/oracle-sql-query-rewrite/

{{{
1. (NOT) IN sub-query to (NOT) EXISTS sub-query

2. (NOT) EXISTS sub-query to (NOT) IN sub-query

3. Separate outer joined inline view using UNION ALL or add hint for the inline view

4. IN clause to UNION ALL statement

5. OR clause to UNION ALL statement

6. NVL function to UNION ALL statement

7. Re-write suppressed joined columns in the WHERE clause

8. VIEW expansion

9. NOT EXISTS to NOT IN hash anti-join

10. Make columns suppressed using RTRIM function or ‘+0’

11. Add hint to the statement

12. Co-related sub-query to inline View
}}}
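Rewrite #1 above ((NOT) IN sub-query to (NOT) EXISTS sub-query) can be sanity-checked for equivalence on a toy schema; a sketch using Python's built-in sqlite3 (the emp/dept tables and data are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE emp  (empno INT, deptno INT);
    CREATE TABLE dept (deptno INT, loc TEXT);
    INSERT INTO emp  VALUES (1,10),(2,20),(3,30);
    INSERT INTO dept VALUES (10,'DALLAS'),(20,'BOSTON');
""")

# original form: uncorrelated IN sub-query
in_form = con.execute("""
    SELECT empno FROM emp
    WHERE deptno IN (SELECT deptno FROM dept WHERE loc = 'DALLAS')
""").fetchall()

# rewritten form: correlated EXISTS sub-query
exists_form = con.execute("""
    SELECT empno FROM emp e
    WHERE EXISTS (SELECT 1 FROM dept d
                  WHERE d.deptno = e.deptno AND d.loc = 'DALLAS')
""").fetchall()

print(in_form, exists_form)
```

Both forms return the same rows; which one the optimizer prefers depends on the data volumes and available indexes, which is why the rewrite is worth testing in both directions.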


! 2021 
Common Coding and Design mistakes (that really mess up performance) https://www.slideshare.net/SageComputing/optmistakesora11dist









https://balazspapp.wordpress.com/2018/04/05/oracle-18c-recover-standby-database-from-service/
https://www.virtual-dba.com/blog/refreshing-physical-standby-using-recover-from-service-on-12c/
https://dbtut.com/index.php/2019/12/27/recover-datbase-using-service-refresh-standby-database-in-oracle-12c/


Restoring and Recovering Files Over the Network (from SERVICE)
https://docs.oracle.com/database/121/BRADV/rcmadvre.htm#BRADV685

Creating a Physical Standby database using RMAN restore database from service (Doc ID 2283978.1)
http://emarcel.com/upgrade-oracle-database-12c-with-asm-12-1-0-1-to-12-1-0-2/
{{{
LISTENER =
  (ADDRESS_LIST=
        (ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521))
        (ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

SID_LIST_LISTENER=
   (SID_LIST=
        (SID_DESC=
          (GLOBAL_DBNAME=orcl)
          (SID_NAME=orcl)
          (ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1)
        )
        (SID_DESC=
          (GLOBAL_DBNAME=noncdb)
          (SID_NAME=noncdb)
          (ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1)
        )

      )

SECURE_REGISTER_LISTENER = (IPC)


}}}

https://martincarstenbach.wordpress.com/2012/06/20/little-things-worth-knowing-static-and-dynamic-listener-registration/
http://kerryosborne.oracle-guy.com/papers/12c_Adaptive_Optimization.pdf
https://oracle-base.com/articles/12c/adaptive-plans-12cr1


https://blog.dbi-services.com/sql-monitoring-12102-shows-adaptive-plans/
https://blog.dbi-services.com/oracle-12c-adaptive-plan-inflexion-point/
https://blogs.oracle.com/letthesunshinein/sql-monitor-now-tells-you-whether-the-execution-plan-was-adaptive-or-not
https://oracle.readthedocs.io/en/latest/sql/plans/adaptive-query-optimization.html#sql-adaptive
<<showtoc>> 

! 12.1 

!! optimizer_adaptive_features 
* In 12.1, adaptive optimization as a whole is controlled by the dynamic parameter optimizer_adaptive_features, which defaults to TRUE. All of the features it controls are enabled when optimizer_features_enable >= 12.1


! 12.2 

!! optimizer_adaptive_features has been obsoleted, replaced by two new parameters

!! optimizer_adaptive_plans, defaults to TRUE
* The optimizer_adaptive_plans parameter controls whether the optimizer creates adaptive plans and defaults to TRUE.
* The most commonly seen use of adaptive plans is where different sub-plans that may use different join methods are selected at run time. For example, a nested loops join may be converted to a hash join once execution information has identified that it provides better performance. The plan has been adapted according to the data presented.

!! optimizer_adaptive_statistics, defaults to FALSE
* The optimizer_adaptive_statistics parameter controls whether the optimizer uses adaptive statistics and defaults to FALSE
* The creation of automatic extended statistics is controlled by the table-level statistics preference AUTO_STAT_EXTENSIONS, which defaults to OFF.  (AUTO_STAT_EXTENSIONS can be set using DBMS_STATS procedures like SET_TABLE_PREFS and SET_GLOBAL_PREFS.) These defaults have been chosen to place emphasis on achieving stable SQL execution plans
* Setting optimizer_features_enable has no effect on the features controlled by optimizer_adaptive_statistics. The creation of automatic extended statistics is controlled by the table-level statistics preference AUTO_STAT_EXTENSIONS, which defaults to OFF.
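The two 12.2 parameters and the statistics preference mentioned above are set as follows (a sketch; scope and values should be adjusted to your environment):

```sql
-- adaptive plans (default TRUE) and adaptive statistics (default FALSE)
ALTER SYSTEM SET optimizer_adaptive_plans = TRUE SCOPE=BOTH;
ALTER SYSTEM SET optimizer_adaptive_statistics = FALSE SCOPE=BOTH;

-- allow automatic extended statistics creation (default is OFF)
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_STAT_EXTENSIONS', 'ON');
```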




Monitoring Business Applications http://docs.oracle.com/cd/E24628_01/install.121/e24215/bussapps.htm#BEIBBHFH

It’s kind of a Service Type that combines information from:
* Systems (PSFT systems for example), 
* Service tests, 
* Real User experience Insight data and 
* Business Transaction Management data. 

http://hemantoracledba.blogspot.sg/2013/07/concepts-features-overturned-in-12c.html

Oracle Database 12c Release 1 Information Center (Doc ID 1595421.2)
Release Schedule of Current Database Releases (Doc ID 742060.1)

Master Note For Oracle Database 12c Release 1 (12.1) Database/Client Installation/Upgrade/Migration Standalone Environment (Non-RAC) (Doc ID 1520299.1)
Master Note of Linux OS Requirements for Database Server (Doc ID 851598.1)
Requirements for Installing Oracle Database 12.1 on RHEL5 or OL5 64-bit (x86-64) (Doc ID 1529433.1)
Requirements for Installing Oracle Database 12.1 on RHEL6 or OL6 64-bit (x86-64) (Doc ID 1529864.1)

Exadata 12.1.1.1.0 release and patch (16980054 ) (Doc ID 1571789.1)
http://ermanarslan.blogspot.com/2014/02/rac-listener-configuration-in-oracle.html
<<showtoc>>

<<<
12c, single instance installation featuring Oracle 12.1.0.2.0 on Oracle Linux 6.6. 
The system is configured with 8 GB of RAM and 2 virtual CPUs. 
The username and password match for the oracle account. Root password is r00t. 
The ORACLE_HOME is in /u01/app/oracle/product/12.1.0.2/dbhome_1
<<<


! LAB X: OEM EXPRESS
{{{
0) create a swingbench schema

method a: lights out using swingbench installation

$> ./oewizard -scale 1 -dbap change_on_install -u soe_master -p soe_master -cl -cs //localhost/NCDB -ts SOE -create
SwingBench Wizard
Author  :	 Dominic Giles
Version :	 2.5.0.949

Running in Lights Out Mode using config file : oewizard.xml

============================================
|           Datagenerator Run Stats        |
============================================
Connection Time                        0:00:00.004
Data Generation Time                   0:00:20.889
DDL Creation Time                      0:00:56.606
Total Run Time                         0:01:17.503
Rows Inserted per sec                      579,546
Data Generated (MB) per sec                   47.2
Actual Rows Generated                   13,007,340


Post Creation Validation Report
===============================
The schema appears to have been created successfully.

Valid Objects
=============
Valid Tables : 'ORDERS','ORDER_ITEMS','CUSTOMERS','WAREHOUSES','ORDERENTRY_METADATA','INVENTORIES','PRODUCT_INFORMATION','PRODUCT_DESCRIPTIONS','ADDRESSES','CARD_DETAILS'
Valid Indexes : 'PRD_DESC_PK','PROD_NAME_IX','PRODUCT_INFORMATION_PK','PROD_SUPPLIER_IX','PROD_CATEGORY_IX','INVENTORY_PK','INV_PRODUCT_IX','INV_WAREHOUSE_IX','ORDER_PK','ORD_SALES_REP_IX','ORD_CUSTOMER_IX','ORD_ORDER_DATE_IX','ORD_WAREHOUSE_IX','ORDER_ITEMS_PK','ITEM_ORDER_IX','ITEM_PRODUCT_IX','WAREHOUSES_PK','WHS_LOCATION_IX','CUSTOMERS_PK','CUST_EMAIL_IX','CUST_ACCOUNT_MANAGER_IX','CUST_FUNC_LOWER_NAME_IX','ADDRESS_PK','ADDRESS_CUST_IX','CARD_DETAILS_PK','CARDDETAILS_CUST_IX'
Valid Views : 'PRODUCTS','PRODUCT_PRICES'
Valid Sequences : 'CUSTOMER_SEQ','ORDERS_SEQ','ADDRESS_SEQ','LOGON_SEQ','CARD_DETAILS_SEQ'
Valid Code : 'ORDERENTRY'
Schema Created

Method b) exp/imp

FYI - the export information

[enkdb03:oracle:MBACH] /home/oracle/mbach/swingbench/bin
> expdp system/manager directory=oradir logfile=exp_soe_master.txt dumpfile=exp_soe_master.dmp schemas=soe_master

Export: Release 12.1.0.2.0 - Production on Mon Jun 8 05:20:55 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/******** directory=oradir logfile=exp_soe_master.txt dumpfile=exp_soe_master.dmp schemas=soe_master
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 1.219 GB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "SOE_MASTER"."ORDER_ITEMS"                  228.4 MB 4290312 rows
. . exported "SOE_MASTER"."ADDRESSES"                    110.4 MB 1500000 rows
. . exported "SOE_MASTER"."CUSTOMERS"                    108.0 MB 1000000 rows
. . exported "SOE_MASTER"."ORDERS"                       129.1 MB 1429790 rows
. . exported "SOE_MASTER"."INVENTORIES"                  15.26 MB  901254 rows
. . exported "SOE_MASTER"."CARD_DETAILS"                 63.88 MB 1500000 rows
. . exported "SOE_MASTER"."LOGON"                        51.24 MB 2382984 rows
. . exported "SOE_MASTER"."PRODUCT_DESCRIPTIONS"         216.8 KB    1000 rows
. . exported "SOE_MASTER"."PRODUCT_INFORMATION"          188.1 KB    1000 rows
. . exported "SOE_MASTER"."ORDERENTRY_METADATA"          5.617 KB       4 rows
. . exported "SOE_MASTER"."WAREHOUSES"                   35.70 KB    1000 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  /home/oracle/mbach/oradir/exp_soe_master.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Mon Jun 8 05:23:39 2015 elapsed 0 00:02:39


-> this needs to be imported into NCDB, taken from /u01/software

1) enable OEM express if you haven't already with your database.

Check if enabled: 
  select dbms_xdb.getHttpPort() from dual;
  select dbms_xdb_config.getHttpsPort() from dual;

If none returns a result, set it up
  exec dbms_xdb_config.sethttpsport(5500);

2) start charbench on the command line

 create a AWR snapshot (exec dbms_workload_repository.create_snapshot)

 ./charbench -u soe_master -p soe_master -cs //localhost/NCDB -uc 10 -min -10 -max 100 -stats full -rt 0:10 -bs 0:01 -a 

 create another AWR snapshot (exec dbms_workload_repository.create_snapshot)

3) view the activity with your OEM express

if you need to use port-forwarding:
	ssh -L <oem express port>:localhost:<oem express port> oracle@<VM-IP>

Then point your browser to it: https://<VM-IP>:<oem express port>/em

4) explore OEM express

Look at the performance overview page
Review the performance hub and look at the various panes available to you

5) Create an active-html AWR report

Review and admire it
}}}

! LAB X) SQL Monitor reports

{{{
SQL Monitor reports are a very useful performance monitoring and tuning tool. In this lab you will start experimenting with them. In order to do so you need a query. In the first step you'll create one of your own liking based on the SOE schema you imported earlier. Ensure to supply the /*+ monitor */ hint when executing it!

0) run a large query

select /*+ monitor gather_plan_statistics sqlmon001 */
count(*) 
from customers c, 
 addresses a, 
 orders o, 
 order_items oi
where o.order_id = oi.order_id
and o.customer_id = c.customer_id
and a.customer_id = c.customer_id
and c.credit_limit = 
  (select max(credit_limit) from customers);

1) Create a SQL Monitor report from OEM express

Navigate the User Interface and find your monitored query. Take a note of the SQL ID, you will need it in step 3

2) Create a text version of the same SQL report

The graphical monitoring report requires a GUI and once retrieved, also relies on loading data from Oracle's website. In secure environments you may not have access to the Internet. In this step you need to look up the documentation for dbms_sqltune.report_sql_monitor and produce a text version of the report.

select dbms_sqltune.report_sql_monitor('&sqlID') from dual;

Review the reports and have a look around
}}}

! LAB X: OTHER DEVELOPMENT FEATURES

{{{
This is a large-ish lab where you are going to explore various development-related features with the database. 

1) Advanced index compression

The first lab will introduce you to index compression. It's based on a table created as a subset of soe.order_items. Copy the following script and execute it in your environment. 

SET ECHO ON;

DROP TABLE t1 purge;

CREATE TABLE t1 NOLOGGING AS 
SELECT * FROM ORDER_ITEMS WHERE ROWNUM <= 1e6;

CREATE INDEX t1_i1 ON t1 (order_id,line_item_id,product_id);

CREATE INDEX t1_i2 ON t1 (order_id, line_item_id);

CREATE INDEX t1_i3 ON t1 (order_id,line_item_id,product_id,unit_price);

CREATE INDEX t1_i4 ON t1 (order_id);

COL segment_name FOR A5 HEA "INDEX";

SET ECHO OFF;
SPO index.txt;
PRO NO COMPRESS

SELECT segment_name,
       blocks
  FROM user_segments
 WHERE segment_name LIKE 'T1%'
   AND segment_type = 'INDEX'
 ORDER BY
       segment_name;

SPO OFF;

SET ECHO ON;

/*
DROP TABLE t1 purge;

CREATE TABLE t1 NOLOGGING 
AS 
SELECT * FROM ORDER_ITEMS WHERE ROWNUM <= 1e6;
*/

DROP INDEX t1_i1;
DROP INDEX t1_i2;
DROP INDEX t1_i3;
DROP INDEX t1_i4;

CREATE INDEX t1_i1 ON t1 (order_id,line_item_id,product_id) COMPRESS 2;

CREATE INDEX t1_i2 ON t1 (order_id, line_item_id) COMPRESS 1;

CREATE INDEX t1_i3 ON t1 (order_id,line_item_id,product_id,unit_price) COMPRESS 3;

CREATE INDEX t1_i4 ON t1 (order_id) COMPRESS 1;

SET ECHO OFF;
SPO index.txt APP;
PRO PREFIX COMPRESSION

SELECT segment_name,
       blocks
  FROM user_segments
 WHERE segment_name LIKE 'T1%'
   AND segment_type = 'INDEX'
 ORDER BY
       segment_name;

SPO OFF;

SET ECHO ON;

DROP INDEX t1_i1;
DROP INDEX t1_i2;
DROP INDEX t1_i3;
DROP INDEX t1_i4;

CREATE INDEX t1_i1 ON t1 (order_id,line_item_id,product_id) COMPRESS ADVANCED LOW;

CREATE INDEX t1_i2 ON t1 (order_id, line_item_id) COMPRESS ADVANCED LOW;

CREATE INDEX t1_i3 ON t1 (order_id,line_item_id,product_id,unit_price) COMPRESS ADVANCED LOW;

CREATE INDEX t1_i4 ON t1 (order_id) COMPRESS ADVANCED LOW;

SET ECHO OFF;
SPO index.txt APP;
PRO ADVANCED COMPRESSION

SELECT segment_name,
       blocks
  FROM user_segments
 WHERE segment_name LIKE 'T1%'
   AND segment_type = 'INDEX'
 ORDER BY
       segment_name;

SPO OFF;

SET ECHO ON;

Review file index.txt and have a look at the various compression results.


2) Sequences as default values

In this part of the lab you will create two tables and experiment with sequences as default values for surrogate keys. You will need to create the following:

- table the_old_way: make sure it has an "ID" column as primary key
- create a sequence
- create a trigger that populates the ID if not supplied in the insert command 
- insert 100000 rows

One Potential Solution:

Create a sequence to allow the population of the table using default values.

create sequence s cache 10000 noorder;

Create a simple table to hold an ID column to be used as a primary key. Add a few random columns such as a timestamp and a vc to store information. Next you need to create a before insert trigger that captures the insert statement and sets the ID's value to sequence.nextval, but only if the ID column is not part of the insert statement! The next step is to create an anonymous PL/SQL block to insert 100000 rows into the table.

create table the_old_way (
  id number primary key,
   d  timestamp not null,
  vc varchar2(50) not null
)
/

create or replace trigger the_old_way_bit
before insert on the_old_way for each row
declare
begin
 if :new.id is null then
  :new.id := s.nextval;
 end if;
end;
/

begin
   for i in 1..100000 loop
    insert into the_old_way (d, vc) values (systimestamp, 'with trigger');
   end loop;
end;
/

Note down the execution time of the PL/SQL block.

Part two of the lab tests sequences as default values for the column. Create another table similar to the first one, but this time without the trigger. Ensure that the ID column is used as the primary key and that it has the sequence's next value as its default value. Then insert 100,000 rows and note the time.

drop sequence s;

create sequence s cache 10000 noorder;

create table the_12c_way (
   id number default s.nextval primary key,
   d  timestamp not null,
   vc varchar2(50) not null
)
/

begin
   for i in 1..100000 loop
    insert into the_12c_way (d, vc) values (systimestamp, 'with default');
   end loop;
end;
/

Finally create yet another table, but this time with identity columns. Ensure that the identity column is defined in the same way as the sequence you created earlier. Then insert again and note the time.

create table the_12c_way_with_id (
   id number generated always as identity (
     start with 1 cache 10000),
   d  timestamp not null,
   vc varchar2(50) not null
)
/

begin
   for i in 1..100000 loop
    insert into the_12c_way_with_id (d, vc) values (systimestamp, 'with identity');
   end loop;
end;
/

Before finishing this section, review the objects created as part of the identity table's DDL.

col IDENTITY_OPTIONS for a50 wrap
col SEQUENCE_NAME    for a30
col COLUMN_NAME      for a15

select column_name, generation_type, sequence_name, identity_options from USER_TAB_IDENTITY_COLS;
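
Behind the scenes an identity column is backed by a system-generated sequence (named ISEQ$$_<object#>). As a sketch, you should be able to see it next to the manually created sequence S:

```sql
-- Expect the hand-made sequence S plus an ISEQ$$_% sequence that was
-- created implicitly for the identity column of THE_12C_WAY_WITH_ID.
SELECT sequence_name,
       cache_size,
       last_number
  FROM user_sequences
 ORDER BY sequence_name;
```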

3) Embed a function in the WITH clause

Create a statement that selects from t1 and uses a function declared in the WITH clause of the query to return a truncated date.

with
 function silly_little_function (pi_d in date) 
 return date is
 begin 
  return trunc(pi_d); 
 end;
select order_id, silly_little_function(dispatch_date)
 from t1 where rownum < 11
/

4) Automatic gathering of table statistics

Create table t2 as a copy of t1 (CTAS) and check the table statistics. Are they current? Why are table statistics gathered during a CTAS statement?

SQL> create table t2 as select * from t1 sample (50);

Table created.

Elapsed: 00:00:00.73
SQL> select table_name, partitioned, num_rows from tabs where table_name = 'T2';

TABLE_NAME                     PAR   NUM_ROWS
------------------------------ --- ----------
T2                             NO      500736

Elapsed: 00:00:00.04

SQL> select count(*) from t2;

  COUNT(*)
----------
    500736

Elapsed: 00:00:00.10
SQL> select sql_id from v$sql where sql_text = 'create table t2 as select * from t1 sample (50)';

SQL_ID
-------------
0h72ryws535xf

SQL> select * from table(dbms_xplan.display_cursor('0h72ryws535xf',null));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID  0h72ryws535xf, child number 0
-------------------------------------
create table t2 as select * from t1 sample (50)

Plan hash value: 2307360015

-----------------------------------------------------------------------------------------
| Id  | Operation                        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |      |       |       |  2780 (100)|          |
|   1 |  LOAD AS SELECT                  |      |       |       |            |          |
|   2 |   OPTIMIZER STATISTICS GATHERING |      |   500K|    24M|  2132   (1)| 00:00:01 |
|   3 |    TABLE ACCESS STORAGE SAMPLE   | T1   |   500K|    24M|  2132   (1)| 00:00:01 |
-----------------------------------------------------------------------------------------

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold


19 rows selected.
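
The OPTIMIZER STATISTICS GATHERING row source in the plan is the give-away: statistics were collected online while the CTAS loaded the data. A possible cross-check (note that online gathering covers basic table and column statistics only, not histograms or index statistics):

```sql
-- Sketch: LAST_ANALYZED should be (almost) identical to the creation
-- time of T2, indicating the statistics came from the CTAS itself.
SELECT t.table_name,
       t.num_rows,
       t.last_analyzed,
       o.created
  FROM user_tables  t
  JOIN user_objects o ON o.object_name = t.table_name
 WHERE t.table_name = 'T2';
```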

5) Top-N queries and pagination

Top-N queries used to require convoluted workarounds in Oracle before 12c. In this lab you will appreciate the ease of use of the new row-limiting clause.

- list how many rows there are in table t1

select count(*) from t1;

- what are the min and max dispatch dates in the table?

SQL> alter session set nls_date_format='dd.mm.yyyy hh24:mi:ss';

SQL> select min(dispatch_date), max(dispatch_date) from t1;

MIN(DISPATCH_DATE)  MAX(DISPATCH_DATE)
------------------- -------------------
01.01.2012 00:00:00 03.05.2012 00:00:00

- Create a query that orders rows in t1 by dispatch date and shows the first 15 rows only

select order_id, dispatch_date, gift_wrap from t1 order by dispatch_date fetch first 15 rows only;

- create a query that orders rows in t1 by dispatch date and shows rows 151 to 155 only

select order_id, dispatch_date, gift_wrap from t1 order by dispatch_date offset 150 rows fetch next 5 rows only;

- rewrite the last query with the pre-12c syntax and compare results

http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html

select * 
  from ( select /*+ FIRST_ROWS(n) */ 
  a.*, ROWNUM rnum 
      from ( select order_id, dispatch_date, gift_wrap from t1 order by dispatch_date ) a 
      where ROWNUM <= 155 ) 
where rnum  > 150;

- Compare execution times and plans


SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID  b20zp6jrn2yag, child number 0
-------------------------------------
select *   from ( select /*+ FIRST_ROWS(n) */   a.*, ROWNUM rnum
from ( select order_id, dispatch_date, gift_wrap from t1 order by
dispatch_date ) a       where ROWNUM <= 155 ) where rnum  > 150

Plan hash value: 2771300550

---------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |      |       |       |       |  8202 (100)|          |
|*  1 |  VIEW                                    |      |   155 |  7285 |       |  8202   (1)| 00:00:01 |
|*  2 |   COUNT STOPKEY                          |      |       |       |       |            |          |
|   3 |    VIEW                                  |      |  1000K|    32M|       |  8202   (1)| 00:00:01 |
|*  4 |     SORT ORDER BY STOPKEY                |      |  1000K|    19M|    30M|  8202   (1)| 00:00:01 |
|   5 |      TABLE ACCESS STORAGE FULL FIRST ROWS| T1   |  1000K|    19M|       |  2134   (1)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("RNUM">150)
   2 - filter(ROWNUM<=155)
   4 - filter(ROWNUM<=155)

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID  0xhpxrmzzbwkp, child number 0
-------------------------------------
select order_id, dispatch_date, gift_wrap from t1 order by
dispatch_date offset 150 rows fetch next 5 rows only

Plan hash value: 2433988517

--------------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |       |       |       |  8202 (100)|          |
|*  1 |  VIEW                       |      |  1000K|    53M|       |  8202   (1)| 00:00:01 |
|*  2 |   WINDOW SORT PUSHED RANK   |      |  1000K|    19M|    30M|  8202   (1)| 00:00:01 |
|   3 |    TABLE ACCESS STORAGE FULL| T1   |  1000K|    19M|       |  2134   (1)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("from$_subquery$_002"."rowlimit_$$_rownumber"<=CASE  WHEN (150>=0)
              THEN 150 ELSE 0 END +5 AND "from$_subquery$_002"."rowlimit_$$_rownumber">150))
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "DISPATCH_DATE")<=CASE  WHEN (150>=0)
              THEN 150 ELSE 0 END +5)

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold
}}}

! LAB X: PLUGGABLE DATABASES

{{{
Unlike the Database In-Memory option, Pluggable Databases are something we can experiment with in this lab.

1) create a CDB

Use dbca in silent mode, or any other technique you like, to create a CDB with one PDB. Specify /u01/oradata as the data file location and /u01/fra as the Fast Recovery Area. Oracle Managed Files are recommended for the database, but you are free to choose whichever method you are most comfortable with.
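
A possible dbca invocation in silent mode is sketched below. The template name, passwords and destinations are placeholders and will differ in your environment; check `dbca -help` for the exact options of your release.

```shell
# Sketch only -- adjust names, passwords and destinations for your setup.
dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName CDB2 -sid CDB2 \
  -createAsContainerDatabase true \
  -numberOfPDBs 1 -pdbName MASTER \
  -pdbAdminPassword secret \
  -sysPassword secret -systemPassword secret \
  -storageType FS \
  -datafileDestination /u01/oradata \
  -recoveryAreaDestination /u01/fra
```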

2) log in to the CDB root

Once the CDB is created connect to it as SYSDBA and list all of the PDBs in the database. Where can you find them?

SQL> show pdbs

SQL> select con_id, name, open_mode, total_size from v$pdbs;

SQL> select pdb_id, pdb_name, status, logging, force_logging, force_nologging from dba_pdbs;

3) create a new PDB named MASTER from the seed

 - check if you are using OMF

  SQL> show parameter db_create_file_dest
  SQL> create pluggable database master admin user master_admin identified by secret roles=(dba) 
    2  default tablespace users datafile size 20m;
  SQL> alter pluggable database master open;

4) list the MASTER PDB's data files

 - from the root
  SQL> select name from v$datafile where con_id = (select con_id from v$pdbs where name = 'MASTER');

 - from the PDB
  SQL> select name, bytes/power(1024,2) m from v$datafile;

  --> what is odd here? Compare with DBA_DATA_FILES

  SQL> select con_id, name, bytes/power(1024,2) m from v$datafile;

5) get familiar with the new dictionary views

The new architecture introduces new views and new columns in existing views. Explore these; focus on the CDB% views and how they differ from the DBA% views. Also check how many V$ views have a new column. Can you find evidence of packages in the PDB being linked to the root?

 SQL> desc cdb_data_files
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 FILE_NAME                                          VARCHAR2(513)
 FILE_ID                                            NUMBER
 TABLESPACE_NAME                                    VARCHAR2(30)
 BYTES                                              NUMBER
 BLOCKS                                             NUMBER
 STATUS                                             VARCHAR2(9)
 RELATIVE_FNO                                       NUMBER
 AUTOEXTENSIBLE                                     VARCHAR2(3)
 MAXBYTES                                           NUMBER
 MAXBLOCKS                                          NUMBER
 INCREMENT_BY                                       NUMBER
 USER_BYTES                                         NUMBER
 USER_BLOCKS                                        NUMBER
 ONLINE_STATUS                                      VARCHAR2(7)
 CON_ID                                             NUMBER

SQL> desc dba_data_files
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 FILE_NAME                                          VARCHAR2(513)
 FILE_ID                                            NUMBER
 TABLESPACE_NAME                                    VARCHAR2(30)
 BYTES                                              NUMBER
 BLOCKS                                             NUMBER
 STATUS                                             VARCHAR2(9)
 RELATIVE_FNO                                       NUMBER
 AUTOEXTENSIBLE                                     VARCHAR2(3)
 MAXBYTES                                           NUMBER
 MAXBLOCKS                                          NUMBER
 INCREMENT_BY                                       NUMBER
 USER_BYTES                                         NUMBER
 USER_BLOCKS                                        NUMBER
 ONLINE_STATUS                                      VARCHAR2(7)

SQL> desc v$datafile
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 FILE#                                              NUMBER
 CREATION_CHANGE#                                   NUMBER
 CREATION_TIME                                      DATE
 TS#                                                NUMBER
 RFILE#                                             NUMBER
 STATUS                                             VARCHAR2(7)
 ENABLED                                            VARCHAR2(10)
 CHECKPOINT_CHANGE#                                 NUMBER
 CHECKPOINT_TIME                                    DATE
 UNRECOVERABLE_CHANGE#                              NUMBER
 UNRECOVERABLE_TIME                                 DATE
 LAST_CHANGE#                                       NUMBER
 LAST_TIME                                          DATE
 OFFLINE_CHANGE#                                    NUMBER
 ONLINE_CHANGE#                                     NUMBER
 ONLINE_TIME                                        DATE
 BYTES                                              NUMBER
 BLOCKS                                             NUMBER
 CREATE_BYTES                                       NUMBER
 BLOCK_SIZE                                         NUMBER
 NAME                                               VARCHAR2(513)
 PLUGGED_IN                                         NUMBER
 BLOCK1_OFFSET                                      NUMBER
 AUX_NAME                                           VARCHAR2(513)
 FIRST_NONLOGGED_SCN                                NUMBER
 FIRST_NONLOGGED_TIME                               DATE
 FOREIGN_DBID                                       NUMBER
 FOREIGN_CREATION_CHANGE#                           NUMBER
 FOREIGN_CREATION_TIME                              DATE
 PLUGGED_READONLY                                   VARCHAR2(3)
 PLUGIN_CHANGE#                                     NUMBER
 PLUGIN_RESETLOGS_CHANGE#                           NUMBER
 PLUGIN_RESETLOGS_TIME                              DATE
 CON_ID                                             NUMBER

SQL> select object_name, object_type, namespace, sharing, oracle_maintained from dba_objects where object_name = 'DBMS_REPCAT_AUTH';

OBJECT_NAME                      OBJECT_TYPE              NAMESPACE SHARING       O
-------------------------------- ----------------------- ---------- ------------- -
DBMS_REPCAT_AUTH                 PACKAGE                          1 METADATA LINK Y
DBMS_REPCAT_AUTH                 PACKAGE BODY                     2 METADATA LINK Y
DBMS_REPCAT_AUTH                 SYNONYM                          1 METADATA LINK Y
DBMS_REPCAT_AUTH                 PACKAGE                          1 METADATA LINK Y
DBMS_REPCAT_AUTH                 PACKAGE BODY                     2 METADATA LINK Y

6) switch to the PDB as root

Explore ways to switch to the newly created PDB from the root without logging in again

  SQL> alter session set container = MASTER;

  SQL> select sys_context('userenv', 'con_name') from dual;

  SQL> select sys_context('userenv', 'con_id') from dual;

7) connect to the PDB using Oracle Net

Now try to connect to the PDB via the listener (Oracle Net) as MASTER_ADMIN. Ensure that you are connected to the correct container! Can you see that the user has the DBA role granted?

  $ sqlplus master_admin/secret@localhost/MASTER

  SQL> select sys_context('userenv', 'con_name') from dual;

  SQL> select sys_context('userenv', 'con_id') from dual;

  SQL> select * from session_privs;

8) view the privileges granted to MASTER_ADMIN

Now let's look a bit closer at the privileges granted to MASTER_ADMIN. Find the hierarchy of roles and grants. Which is the primary role granted to the user? How are the roles specified in the create pluggable database command linked to this role?

 SQL> select * from dba_role_privs where grantee = user;

GRANTEE                        GRANTED_ROLE                   ADM DEL DEF COM
------------------------------ ------------------------------ --- --- --- ---
MASTER_ADMIN                   PDB_DBA                        YES NO  YES NO

SQL> select * from dba_role_privs where grantee = 'PDB_DBA';

GRANTEE                        GRANTED_ROLE                   ADM DEL DEF COM
------------------------------ ------------------------------ --- --- --- ---
PDB_DBA                        DBA                            NO  NO  YES NO

Is it really the DBA role?

 SQL> select * from dba_sys_privs where grantee = 'DBA'; 

9) view the connection to the PDB from the root

From the root you can see every session connected to any of the PDBs. In a separate session, connect to the MASTER PDB and try to identify that particular session from CDB$ROOT.

  - connect to the PDB
   $ sqlplus master_admin/secret@localhost/MASTER
   SQL> exec dbms_application_info.set_client_info('find me!')

  - in another session, connect to the root
   $ sqlplus / as sysdba
   SQL> select username,sid,serial#,client_info,con_id from v$session where con_id = (select con_id from v$pdbs where name = 'MASTER');
 
USERNAME                              SID    SERIAL# CLIENT_INFO              CON_ID
------------------------------ ---------- ---------- -------------------- ----------
MASTER_ADMIN                           21      26215 find me!                      3

10) limit the maximum size of the PDB to 50M

PDBs are often used for consolidation. When consolidating, users pay for storage. We don't want them to use more than they pay for. Can you think of a way to limit the space available to a PDB? Can you test if that limit is enforced?

 - what is the minimum size you can set it to?

 SQL> alter pluggable database MASTER storage (maxsize 800M);
  
 - what is the PDB_MASTER's default tablespace?

 SQL> select default_tablespace from dba_users where username = user;

DEFAULT_TABLESPACE
------------------------------
USERS

 - check if the limit is enforced

 SQL> grant unlimited tablespace to master_admin;

 SQL> create table t1 nologging as select a.*, rpad(object_name, 200, 'x') large_c from dba_objects a;
 
 (may have to allow users to autoextend)
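
As a sketch of what to expect: once the PDB's total footprint exceeds MAXSIZE, further space allocation in the CTAS should fail with ORA-65114 (space usage in container is too high). The limit can be raised or lifted again with the same STORAGE clause:

```sql
-- Raise the cap, or remove it entirely (run with the appropriate
-- privileges while connected to the root or the PDB).
alter pluggable database MASTER storage (maxsize 2G);
alter pluggable database MASTER storage (maxsize unlimited);
```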

11) create a PDB from the MASTER

Creating a PDB from the SEED is only one way of creating a PDB. In the next step, create a PDB named PDB1 as a clone of MASTER. But first create a golden image of a database you'd like to use. To do so, create the following accounts in the MASTER PDB:

  + MONITORING
  + BACKUP
  + APPL_USER

Grant whichever privileges you like to them. APPL_USER must have three tables in its schema: T1, T2 and T3. While you perform these tasks, tail the alert.log in a different session.

SQL> create user monitoring identified by monitoring;

User created.

SQL> grant select any dictionary to monitoring;

Grant succeeded.

SQL> create user backup identified by backup;

User created.

SQL> grant create session to backup;

Grant succeeded.

SQL> create user appl_user identified by appl_user;

User created.

SQL> alter user appl_user quota unlimited on users;

User altered.

SQL> grant connect , resource to appl_user;

Grant succeeded.

SQL> conn appl_user/appl_user@localhost/MASTER
Connected.

SQL> create table t1 as select * from all_objects ;

Table created.

SQL> select count(*) from t1;

  COUNT(*)
----------
     73704

SQL> create table t2 as select * from all_objects where rownum < 11 ;

Table created.

SQL> c.t2.t3
  1* create table t3 as select * from all_objects where rownum < 11
SQL> r
  1* create table t3 as select * from all_objects where rownum < 11

Table created.

SQL> show user
USER is "APPL_USER"
SQL>

- prepare the PDB for cloning

 alter pluggable database master close immediate;
 alter pluggable database master open read only;

- view the alert log

 adrci> set home CDB1
 adrci> show alert -tail -f

- clone the PDB

 SQL> create pluggable database pdb1 from master;

 (are you still tailing the alert.log?)

 SQL> alter pluggable database PDB1 open;

 + do you see the users you created? Do they have data in the tables?

SQL> conn appl_user/appl_user@localhost/PDB1
Connected.
SQL> select count(*) from t1;

  COUNT(*)
----------
     73704

SQL> select count(*) from t2;

  COUNT(*)
----------
        10

SQL> select count(*) from t3;

  COUNT(*)
----------
        10

 + perform any further validations you like

12) Create a metadata only clone

Since 12.1.0.2 it is possible to perform a metadata only clone. Try to perform one based on MASTER. Ensure that the tables in the new PDB have no data!

 - as SYSDBA
 
 SQL> create pluggable database pdb2 from master no data;

 SQL> alter pluggable database pdb2 open;

SQL> conn appl_user/appl_user@localhost/PDB2
Connected.
SQL> select count(*) from t1;

  COUNT(*)
----------
         0

13) Unplug and plug

In this lab you will unplug a PDB and plug it back in. Usually you'd perform these steps against a different CDB, but due to space constraints you will use the same one. Note that it is crucial to drop the PDB once it has been unplugged. This isn't documented very clearly in the official documentation set, but it is nevertheless required. https://blogs.oracle.com/UPGRADE/entry/recent_news_about_pluggable_databases

The steps to perform are:
 a) unplug the PDB
 b) review the metadata file
 c) check for plug-in-compatibility (a formality in our case but important in real life)
 d) drop the PDB _keeping_ data files
 e) create the new PDB by plugging it in

All the while you are tailing the alert.log

- unplug the PDB

SQL> alter pluggable database pdb2 close immediate;

SQL> alter pluggable database pdb2 unplug into '/home/oracle/pdb2.xml';

-> keep tailing the alert.log!

- verify the contents of the XML file 

[oracle@server3 ~]$ cat /home/oracle/pdb2.xml
<?xml version="1.0" encoding="UTF-8"?>
<PDB>
  <xmlversion>1</xmlversion>
  <pdbname>PDB2</pdbname>
  <cid>5</cid>
  <byteorder>1</byteorder>
  <vsn>202375680</vsn>
  <vsns>
    <vsnnum>12.1.0.2.0</vsnnum>
    <cdbcompt>12.1.0.2.0</cdbcompt>
    <pdbcompt>12.1.0.2.0</pdbcompt>
    <vsnlibnum>0.0.0.0.22</vsnlibnum>
    <vsnsql>22</vsnsql>
    <vsnbsv>8.0.0.0.0</vsnbsv>
  </vsns>
  <dbid>1858507191</dbid>
  <ncdb2pdb>0</ncdb2pdb>
  <cdbid>628942599</cdbid>
  <guid>18135BAD243A6341E0530C64A8C0B88F</guid>
  <uscnbas>1675596</uscnbas>
  <uscnwrp>0</uscnwrp>
  <rdba>4194824</rdba>
  <tablespace>
    <name>SYSTEM</name>
    <type>0</type>
    <tsn>0</tsn>
    <status>1</status>
    <issft>0</issft>
    <file>
      <path>/u01/oradata/CDB2/18135BAD243A6341E0530C64A8C0B88F/datafile/o1_mf_system_bqfdbtg0_.dbf</path>
      <afn>17</afn>
      <rfn>1</rfn>
      <createscnbas>1674775</createscnbas>
      <createscnwrp>0</createscnwrp>
      <status>1</status>
      <fileblocks>32000</fileblocks>
      <blocksize>8192</blocksize>
      <vsn>202375680</vsn>
      <fdbid>1858507191</fdbid>
      <fcpsw>0</fcpsw>
      <fcpsb>1675592</fcpsb>
      <frlsw>0</frlsw>
      <frlsb>1594143</frlsb>
      <frlt>881895559</frlt>
    </file>
  </tablespace>
  <tablespace>
    <name>SYSAUX</name>
    <type>0</type>
    <tsn>1</tsn>
    <status>1</status>
    <issft>0</issft>
    <file>
      <path>/u01/oradata/CDB2/18135BAD243A6341E0530C64A8C0B88F/datafile/o1_mf_sysaux_bqfdbtg1_.dbf</path>
      <afn>18</afn>
      <rfn>4</rfn>
      <createscnbas>1674799</createscnbas>
      <createscnwrp>0</createscnwrp>
      <status>1</status>
      <fileblocks>65280</fileblocks>
      <blocksize>8192</blocksize>
      <vsn>202375680</vsn>
      <fdbid>1858507191</fdbid>
      <fcpsw>0</fcpsw>
      <fcpsb>1675592</fcpsb>
      <frlsw>0</frlsw>
      <frlsb>1594143</frlsb>
      <frlt>881895559</frlt>
    </file>
  </tablespace>
  <tablespace>
    <name>TEMP</name>
    <type>1</type>
    <tsn>2</tsn>
    <status>1</status>
    <issft>0</issft>
    <bmunitsize>128</bmunitsize>
    <file>
      <path>/u01/oradata/CDB2/18135BAD243A6341E0530C64A8C0B88F/datafile/o1_mf_temp_bqfdbtg1_.dbf</path>
      <afn>5</afn>
      <rfn>1</rfn>
      <createscnbas>1674776</createscnbas>
      <createscnwrp>0</createscnwrp>
      <status>0</status>
      <fileblocks>2560</fileblocks>
      <blocksize>8192</blocksize>
      <vsn>202375680</vsn>
      <autoext>1</autoext>
      <maxsize>4194302</maxsize>
      <incsize>80</incsize>
    </file>
  </tablespace>
  <tablespace>
    <name>USERS</name>
    <type>0</type>
    <tsn>3</tsn>
    <status>1</status>
    <issft>0</issft>
    <file>
      <path>/u01/oradata/CDB2/18135BAD243A6341E0530C64A8C0B88F/datafile/o1_mf_users_bqfdbtg1_.dbf</path>
      <afn>19</afn>
      <rfn>10</rfn>
      <createscnbas>1674802</createscnbas>
      <createscnwrp>0</createscnwrp>
      <status>1</status>
      <fileblocks>2560</fileblocks>
      <blocksize>8192</blocksize>
      <vsn>202375680</vsn>
      <fdbid>1858507191</fdbid>
      <fcpsw>0</fcpsw>
      <fcpsb>1675592</fcpsb>
      <frlsw>0</frlsw>
      <frlsb>1594143</frlsb>
      <frlt>881895559</frlt>
    </file>
  </tablespace>
  <optional>
    <ncdb2pdb>0</ncdb2pdb>
    <csid>178</csid>
    <ncsid>2000</ncsid>
    <options>
      <option>APS=12.1.0.2.0</option>
      <option>CATALOG=12.1.0.2.0</option>
      <option>CATJAVA=12.1.0.2.0</option>
      <option>CATPROC=12.1.0.2.0</option>
      <option>CONTEXT=12.1.0.2.0</option>
      <option>DV=12.1.0.2.0</option>
      <option>JAVAVM=12.1.0.2.0</option>
      <option>OLS=12.1.0.2.0</option>
      <option>ORDIM=12.1.0.2.0</option>
      <option>OWM=12.1.0.2.0</option>
      <option>SDO=12.1.0.2.0</option>
      <option>XDB=12.1.0.2.0</option>
      <option>XML=12.1.0.2.0</option>
      <option>XOQ=12.1.0.2.0</option>
    </options>
    <olsoid>0</olsoid>
    <dv>0</dv>
    <APEX>4.2.5.00.08:1</APEX>
    <parameters>
      <parameter>processes=300</parameter>
      <parameter>nls_language='ENGLISH'</parameter>
      <parameter>nls_territory='UNITED KINGDOM'</parameter>
      <parameter>sga_target=1073741824</parameter>
      <parameter>db_block_size=8192</parameter>
      <parameter>compatible='12.1.0.2.0'</parameter>
      <parameter>open_cursors=300</parameter>
      <parameter>pga_aggregate_target=536870912</parameter>
      <parameter>enable_pluggable_database=TRUE</parameter>
    </parameters>
    <tzvers>
      <tzver>primary version:18</tzver>
      <tzver>secondary version:0</tzver>
    </tzvers>
    <walletkey>0</walletkey>
    <opatches>
      <opatch>19769480</opatch>
      <opatch>20299022</opatch>
      <opatch>20299023</opatch>
      <opatch>20415564</opatch>
    </opatches>
    <hasclob>1</hasclob>
    <awr>
      <loadprofile>CPU Usage Per Sec=0.000000</loadprofile>
      <loadprofile>DB Block Changes Per Sec=0.000000</loadprofile>
      <loadprofile>Database Time Per Sec=0.000000</loadprofile>
      <loadprofile>Executions Per Sec=0.000000</loadprofile>
      <loadprofile>Hard Parse Count Per Sec=0.000000</loadprofile>
      <loadprofile>Logical Reads Per Sec=0.000000</loadprofile>
      <loadprofile>Logons Per Sec=0.000000</loadprofile>
      <loadprofile>Physical Reads Per Sec=0.000000</loadprofile>
      <loadprofile>Physical Writes Per Sec=0.000000</loadprofile>
      <loadprofile>Redo Generated Per Sec=0.000000</loadprofile>
      <loadprofile>Total Parse Count Per Sec=0.000000</loadprofile>
      <loadprofile>User Calls Per Sec=0.000000</loadprofile>
      <loadprofile>User Rollbacks Per Sec=0.000000</loadprofile>
      <loadprofile>User Transaction Per Sec=0.000000</loadprofile>
    </awr>
    <hardvsnchk>0</hardvsnchk>
  </optional>
</PDB>

- drop the PDB, keeping its data files

SQL> drop pluggable database pdb2 keep datafiles;

- check for plug-in compatibility (SET SERVEROUTPUT ON to see the result)

DECLARE
  compatible CONSTANT VARCHAR2(3) :=
    CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
           pdb_descr_file => '/home/oracle/pdb2.xml',
           pdb_name       => 'PDB2')
    WHEN TRUE THEN 'YES'
    ELSE 'NO'
END;
BEGIN
  DBMS_OUTPUT.PUT_LINE(compatible);
END;
/

- If you get a YES then plug the PDB in 

SQL> create pluggable database pdb2 using '/home/oracle/pdb2.xml' nocopy tempfile reuse;

14) drop a PDB

You use the drop pluggable database command to drop the PDB.

SQL> alter pluggable database PDB2 close immediate;

SQL> drop pluggable database PDB2;

- what happens to its data files? Do you get an error? How do you correct the error?
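
A sketch of the expected behaviour: the default of DROP PLUGGABLE DATABASE is KEEP DATAFILES, which is only valid for a PDB that has been unplugged, so the plain DROP of a plugged-in PDB fails with ORA-65179. Specifying INCLUDING DATAFILES removes the files along with the PDB:

```sql
-- drop pluggable database PDB2;   -- ORA-65179 for a plugged-in PDB
drop pluggable database PDB2 including datafiles;
```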

LAB 5: RMAN and PDBs

1) Connect to the CDB$ROOT with RMAN and run "report schema"

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB2

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    780      SYSTEM               YES     /u01/oradata/CDB2/datafile/o1_mf_system_bqf3ktdf_.dbf
3    600      SYSAUX               NO      /u01/oradata/CDB2/datafile/o1_mf_sysaux_bqf3jpvv_.dbf
4    355      UNDOTBS1             YES     /u01/oradata/CDB2/datafile/o1_mf_undotbs1_bqf3lz4q_.dbf
5    250      PDB$SEED:SYSTEM      NO      /u01/oradata/CDB2/datafile/o1_mf_system_bqf3phmo_.dbf
6    5        USERS                NO      /u01/oradata/CDB2/datafile/o1_mf_users_bqf3lxrp_.dbf
7    490      PDB$SEED:SYSAUX      NO      /u01/oradata/CDB2/datafile/o1_mf_sysaux_bqf3ph5z_.dbf
8    250      MASTER:SYSTEM        NO      /u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_system_bqf4tntv_.dbf
9    510      MASTER:SYSAUX        NO      /u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_sysaux_bqf4tnv2_.dbf
10   20       MASTER:USERS         NO      /u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_users_bqf4v3cm_.dbf
14   250      PDB1:SYSTEM          NO      /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_system_bqfd57b5_.dbf
15   510      PDB1:SYSAUX          NO      /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_sysaux_bqfd57b7_.dbf
16   20       PDB1:USERS           NO      /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_users_bqfd57b8_.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    60       TEMP                 32767       /u01/oradata/CDB2/datafile/o1_mf_temp_bqf3p96h_.tmp
2    20       PDB$SEED:TEMP        32767       /u01/oradata/CDB2/datafile/pdbseed_temp012015-06-09_02-59-59-AM.dbf
3    20       MASTER:TEMP          32767       /u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_temp_bqf4tnv3_.dbf
4    20       PDB1:TEMP            32767       /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_temp_bqfd57b8_.dbf

RMAN>

- what do you notice? How is the output different from a non-CDB?

2) Review the configuration settings

Have a look at the RMAN configuration settings. There is one item that is different from non-CDBs. Can you spot it?

RMAN> show all;

RMAN configuration parameters for database with db_unique_name CDB2 are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/snapcf_CDB2.f'; # default


3) Back up the CDB

It is possible to back up the whole CDB, the CDB$ROOT only, or individual PDBs. In this step you back up the entire CDB; it is always good to have a full backup. If the database is not yet in archivelog mode, change that first, then perform a backup (incremental or full does not matter).

RMAN> shutdown immediate

database closed
database dismounted
Oracle instance shut down

RMAN> startup mount

connected to target database (not started)
Oracle instance started
database mounted

Total System Global Area    1073741824 bytes

Fixed Size                     2932632 bytes
Variable Size                377487464 bytes
Database Buffers             687865856 bytes
Redo Buffers                   5455872 bytes

RMAN> alter database archivelog;

Statement processed

RMAN> alter database open;

Statement processed

RMAN> configure channel device type disk format '/u01/oraback/CDB2/%U';

new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/u01/oraback/CDB2/%U';
new RMAN configuration parameters are successfully stored

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;

new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored

RMAN> backup database plus archivelog;


Starting backup at 09-JUN-15
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=16 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=27 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=18 RECID=1 STAMP=881907051
channel ORA_DISK_1: starting piece 1 at 09-JUN-15
channel ORA_DISK_1: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/01q91lbd_1_1 tag=TAG20150609T061052 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 09-JUN-15

Starting backup at 09-JUN-15
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/oradata/CDB2/datafile/o1_mf_system_bqf3ktdf_.dbf
input datafile file number=00004 name=/u01/oradata/CDB2/datafile/o1_mf_undotbs1_bqf3lz4q_.dbf
channel ORA_DISK_1: starting piece 1 at 09-JUN-15
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00003 name=/u01/oradata/CDB2/datafile/o1_mf_sysaux_bqf3jpvv_.dbf
input datafile file number=00006 name=/u01/oradata/CDB2/datafile/o1_mf_users_bqf3lxrp_.dbf
channel ORA_DISK_2: starting piece 1 at 09-JUN-15
channel ORA_DISK_1: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/02q91lbf_1_1 tag=TAG20150609T061054 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=/u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_sysaux_bqf4tnv2_.dbf
channel ORA_DISK_1: starting piece 1 at 09-JUN-15
channel ORA_DISK_2: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/03q91lbf_1_1 tag=TAG20150609T061054 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00015 name=/u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_sysaux_bqfd57b7_.dbf
channel ORA_DISK_2: starting piece 1 at 09-JUN-15
channel ORA_DISK_1: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/04q91lc8_1_1 tag=TAG20150609T061054 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=/u01/oradata/CDB2/datafile/o1_mf_sysaux_bqf3ph5z_.dbf
channel ORA_DISK_1: starting piece 1 at 09-JUN-15
channel ORA_DISK_2: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/05q91lc9_1_1 tag=TAG20150609T061054 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:10
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00008 name=/u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_system_bqf4tntv_.dbf
input datafile file number=00010 name=/u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_users_bqf4v3cm_.dbf
channel ORA_DISK_2: starting piece 1 at 09-JUN-15
channel ORA_DISK_1: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/06q91lcj_1_1 tag=TAG20150609T061054 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00014 name=/u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_system_bqfd57b5_.dbf
input datafile file number=00016 name=/u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_users_bqfd57b8_.dbf
channel ORA_DISK_1: starting piece 1 at 09-JUN-15
channel ORA_DISK_2: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/07q91lcj_1_1 tag=TAG20150609T061054 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/oradata/CDB2/datafile/o1_mf_system_bqf3phmo_.dbf
channel ORA_DISK_2: starting piece 1 at 09-JUN-15
channel ORA_DISK_1: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/08q91lcr_1_1 tag=TAG20150609T061054 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_2: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/09q91lcr_1_1 tag=TAG20150609T061054 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:07
Finished backup at 09-JUN-15

Starting backup at 09-JUN-15
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=19 RECID=2 STAMP=881907108
channel ORA_DISK_1: starting piece 1 at 09-JUN-15
channel ORA_DISK_1: finished piece 1 at 09-JUN-15
piece handle=/u01/oraback/CDB2/0aq91ld5_1_1 tag=TAG20150609T061148 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 09-JUN-15

Starting Control File and SPFILE Autobackup at 09-JUN-15
piece handle=/u01/fra/CDB2/autobackup/2015_06_09/o1_mf_s_881907110_bqfgzb3n_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 09-JUN-15

RMAN>


4) Try to cause some trouble and get out unscathed

Assume someone in PDB1 removed an essential file from the database. Time to recover! In this part of the lab you
 a) close PDB1 
 b) remove a data file
 c) perform a full recovery (admittedly not strictly needed, but a good test)
 d) open the database without data loss 

SQL> set lines 200
SQL> select name from v$datafile where con_id = (select con_id from v$pdbs where name = 'PDB1');

NAME
---------------------------------------------------------------------------------------------------------------------------
/u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_system_bqfd57b5_.dbf
/u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_sysaux_bqfd57b7_.dbf
/u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_users_bqfd57b8_.dbf

[oracle@server3 ~]$ rm /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_users_bqfd57b8_.dbf
[oracle@server3 ~]$

SQL> alter pluggable database pdb1 open;
alter pluggable database pdb1 open
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 16 - see DBWR trace file
ORA-01110: data file 16:
'/u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_users_bqfd57b8_.dbf'

- perform full recovery

[oracle@server3 ~]$ rman target sys/change_on_install@localhost/PDB1

Recovery Manager: Release 12.1.0.2.0 - Production on Tue Jun 9 06:48:03 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CDB2 (DBID=628942599, not open)

RMAN> run {
2> restore database;
3> recover database;
4> alter database open;
5> }

Starting restore at 09-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=255 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=269 device type=DISK

skipping datafile 14; already restored to file /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_system_bqfd57b5_.dbf
skipping datafile 15; already restored to file /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_sysaux_bqfd57b7_.dbf
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00016 to /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_users_bqfd57b8_.dbf
channel ORA_DISK_1: reading from backup piece /u01/oraback/CDB2/08q91lcr_1_1
channel ORA_DISK_1: piece handle=/u01/oraback/CDB2/08q91lcr_1_1 tag=TAG20150609T061054
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 09-JUN-15

Starting recover at 09-JUN-15
using channel ORA_DISK_1
using channel ORA_DISK_2

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 09-JUN-15

Statement processed

RMAN>

- check if that worked

[oracle@server3 ~]$ sqlplus appl_user/appl_user@localhost/pdb1

SQL*Plus: Release 12.1.0.2.0 Production on Tue Jun 9 06:50:32 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Tue Jun 09 2015 06:50:12 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select count(*) from t1;

  COUNT(*)
----------
     73704

SQL> select tablespace_name from tabs;

TABLESPACE_NAME
------------------------------
USERS
USERS
USERS

SQL>

5) Beware of PDB backups when dropping PDBs!

This is an example of what could be considered a bug, although it is expected behaviour. Assume you accidentally dropped a PDB, including its data files. How can you get it back?

- create a PDB we don't really care about with a default tablespace named USERS

SQL> create pluggable database I_AM_AN_EX_PARROT admin user martin identified by secret default tablespace users datafile size 10m;

Pluggable database created.

SQL> alter pluggable database I_AM_AN_EX_PARROT open;

Pluggable database altered.

- create a level 0 backup of the CDB and make sure I_AM_AN_EX_PARROT has been backed up. Validate both the PDB backup and the archivelogs.

RMAN> backup incremental level 0 database plus archivelog delete all input;

...

RMAN> list backup of pluggable database I_AM_AN_EX_PARROT;


List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
26      Incr 0  395.59M    DISK        00:00:02     11-JUN-15
        BP Key: 26   Status: AVAILABLE  Compressed: NO  Tag: TAG20150611T060056
        Piece Name: /u01/oraback/CDB2/0pq96tij_1_1
  List of Datafiles in backup set 26
  Container ID: 6, PDB Name: I_AM_AN_EX_PARROT
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  26   0  Incr 2345577    11-JUN-15 /u01/oradata/CDB2/183BC85FA4F548B1E0530C64A8C04B67/datafile/o1_mf_sysaux_bqlowk9f_.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
29      Incr 0  203.38M    DISK        00:00:02     11-JUN-15
        BP Key: 29   Status: AVAILABLE  Compressed: NO  Tag: TAG20150611T060056
        Piece Name: /u01/oraback/CDB2/0tq96tjc_1_1
  List of Datafiles in backup set 29
  Container ID: 6, PDB Name: I_AM_AN_EX_PARROT
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  25   0  Incr 2345609    11-JUN-15 /u01/oradata/CDB2/183BC85FA4F548B1E0530C64A8C04B67/datafile/o1_mf_system_bqlowk93_.dbf
  27   0  Incr 2345609    11-JUN-15 /u01/oradata/CDB2/183BC85FA4F548B1E0530C64A8C04B67/datafile/o1_mf_users_bqloxjmr_.dbf

- the backup exists!

RMAN> restore pluggable database I_AM_AN_EX_PARROT validate;

Starting restore at 11-JUN-15
using channel ORA_DISK_1
using channel ORA_DISK_2

channel ORA_DISK_1: starting validation of datafile backup set
channel ORA_DISK_2: starting validation of datafile backup set
channel ORA_DISK_1: reading from backup piece /u01/oraback/CDB2/0pq96tij_1_1
channel ORA_DISK_2: reading from backup piece /u01/oraback/CDB2/0tq96tjc_1_1
channel ORA_DISK_1: piece handle=/u01/oraback/CDB2/0pq96tij_1_1 tag=TAG20150611T060056
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_2: piece handle=/u01/oraback/CDB2/0tq96tjc_1_1 tag=TAG20150611T060056
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: validation complete, elapsed time: 00:00:01
Finished restore at 11-JUN-15

RMAN> RESTORE ARCHIVELOG ALL VALIDATE;

Starting restore at 11-JUN-15
using channel ORA_DISK_1
using channel ORA_DISK_2

channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_2: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /u01/oraback/CDB2/0gq96tfe_1_1
channel ORA_DISK_2: reading from backup piece /u01/oraback/CDB2/0hq96tff_1_1
channel ORA_DISK_1: piece handle=/u01/oraback/CDB2/0gq96tfe_1_1 tag=TAG20150611T060013
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_2: piece handle=/u01/oraback/CDB2/0hq96tff_1_1 tag=TAG20150611T060013
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: validation complete, elapsed time: 00:00:01
channel ORA_DISK_2: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /u01/oraback/CDB2/0iq96tg9_1_1
channel ORA_DISK_2: reading from backup piece /u01/oraback/CDB2/0vq96tjp_1_1
channel ORA_DISK_1: piece handle=/u01/oraback/CDB2/0iq96tg9_1_1 tag=TAG20150611T060013
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
channel ORA_DISK_2: piece handle=/u01/oraback/CDB2/0vq96tjp_1_1 tag=TAG20150611T060233
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: validation complete, elapsed time: 00:00:01
Finished restore at 11-JUN-15

- drop the PDB including its data files. We have a backup, so it should be fine even if we made a mistake. Tail the alert.log while executing the steps

RMAN> alter pluggable database I_AM_AN_EX_PARROT close;

Statement processed

RMAN> drop pluggable database I_AM_AN_EX_PARROT including datafiles;

Statement processed

2015-06-11 06:19:47.088000 -04:00
alter pluggable database I_AM_AN_EX_PARROT close
ALTER SYSTEM: Flushing buffer cache inst=0 container=6 local
2015-06-11 06:19:59.242000 -04:00
Pluggable database I_AM_AN_EX_PARROT closed
Completed: alter pluggable database I_AM_AN_EX_PARROT close
2015-06-11 06:20:15.885000 -04:00
drop pluggable database I_AM_AN_EX_PARROT including datafiles
2015-06-11 06:20:20.655000 -04:00
Deleted Oracle managed file /u01/oradata/CDB2/183BC85FA4F548B1E0530C64A8C04B67/datafile/o1_mf_users_bqloxjmr_.dbf
Deleted Oracle managed file /u01/oradata/CDB2/183BC85FA4F548B1E0530C64A8C04B67/datafile/o1_mf_temp_bqlowk9g_.dbf
Deleted Oracle managed file /u01/oradata/CDB2/183BC85FA4F548B1E0530C64A8C04B67/datafile/o1_mf_sysaux_bqlowk9f_.dbf
Deleted Oracle managed file /u01/oradata/CDB2/183BC85FA4F548B1E0530C64A8C04B67/datafile/o1_mf_system_bqlowk93_.dbf
Completed: drop pluggable database I_AM_AN_EX_PARROT including datafiles

- oops, that was a mistake! Call from the users: restore the PDB, it is production critical!

RMAN> report schema;

Report of database schema for database with db_unique_name CDB2

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    780      SYSTEM               YES     /u01/oradata/CDB2/datafile/o1_mf_system_bqf3ktdf_.dbf
3    680      SYSAUX               NO      /u01/oradata/CDB2/datafile/o1_mf_sysaux_bqf3jpvv_.dbf
4    355      UNDOTBS1             YES     /u01/oradata/CDB2/datafile/o1_mf_undotbs1_bqf3lz4q_.dbf
5    250      PDB$SEED:SYSTEM      NO      /u01/oradata/CDB2/datafile/o1_mf_system_bqf3phmo_.dbf
6    5        USERS                NO      /u01/oradata/CDB2/datafile/o1_mf_users_bqf3lxrp_.dbf
7    490      PDB$SEED:SYSAUX      NO      /u01/oradata/CDB2/datafile/o1_mf_sysaux_bqf3ph5z_.dbf
8    250      MASTER:SYSTEM        NO      /u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_system_bqf4tntv_.dbf
9    510      MASTER:SYSAUX        NO      /u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_sysaux_bqf4tnv2_.dbf
10   20       MASTER:USERS         NO      /u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_users_bqf4v3cm_.dbf
14   250      PDB1:SYSTEM          NO      /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_system_bqfd57b5_.dbf
15   520      PDB1:SYSAUX          NO      /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_sysaux_bqfd57b7_.dbf
16   20       PDB1:USERS           NO      /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_users_bqfk42sq_.dbf
23   260      PDBSBY:SYSTEM        NO      /u01/oradata/CDB2/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_system_bqfmtn2d_.dbf
24   520      PDBSBY:SYSAUX        NO      /u01/oradata/CDB2/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_sysaux_bqfmtn2l_.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    60       TEMP                 32767       /u01/oradata/CDB2/datafile/o1_mf_temp_bqf3p96h_.tmp
2    20       PDB$SEED:TEMP        32767       /u01/oradata/CDB2/datafile/pdbseed_temp012015-06-09_02-59-59-AM.dbf
3    20       MASTER:TEMP          32767       /u01/oradata/CDB2/18119189D3265B51E0530C64A8C0A3AE/datafile/o1_mf_temp_bqf4tnv3_.dbf
4    20       PDB1:TEMP            32767       /u01/oradata/CDB2/1813503BE37C62FEE0530C64A8C02F2C/datafile/o1_mf_temp_bqfd57b8_.dbf
5    20       PDBSBY:TEMP          32767       /u01/oradata/CDB2/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_temp_bqfmtn2l_.dbf

RMAN> run {
2> restore pluggable database I_AM_AN_EX_PARROT;
3> recover pluggable database I_AM_AN_EX_PARROT;
4> }

Starting restore at 11-JUN-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=280 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=55 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 06/11/2015 06:22:07
RMAN-06813: could not translate pluggable database I_AM_AN_EX_PARROT

- Why? The backup was there a minute ago! Check the controlfile for the PDB backup:

RMAN> list backup of pluggable database I_AM_AN_EX_PARROT;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of list command at 06/11/2015 06:22:52
RMAN-06813: could not translate pluggable database I_AM_AN_EX_PARROT

- And indeed the backup metadata is gone from the controlfile, and with it all information about the PDB's backups.
}}}

! LAB X: Data Guard

{{{
Data Guard is an essential part of data protection, and CDBs can be protected by Data Guard as well. In this lab you will learn how. It is a bit more involved than the previous labs, which is why the most time is reserved for it. The steps in the lab guide you through what needs to be done; the examples may have to be adapted to your environment.

1) create a physical standby of the CDB you used

- connect to the CDB as root and enable automatic standby_file_management 
- make sure that you use a SPFILE
- edit tnsnames.ora in $ORACLE_HOME to include the new standby database
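
The first two items could look like this (a sketch; run as SYSDBA in CDB$ROOT and adjust to your environment):

[oracle@server3 ~]$ sqlplus / as sysdba

SQL> alter system set standby_file_management = 'AUTO' scope=both;
SQL> show parameter spfile

If "show parameter spfile" returns an empty value the instance was started from a pfile; create and switch to an SPFILE before continuing.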

CDBSBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = class<n>)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = CDBSBY)
    )
  )

- modify $ORACLE_HOME/network/admin/listener.ora and reload the listener. Ensure the names match your environment!

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = CDB2)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
      (SID_NAME = CDB2)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = CDB2_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
      (SID_NAME = CDB2)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = CDBSBY)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
      (SID_NAME = CDBSBY)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = CDBSBY_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
      (SID_NAME = CDBSBY)
    )
  )

- use "lsnrctl services" to ensure the services are registered

- update oratab with the new standby database

CDBSBY:/u01/app/oracle/product/12.1.0.2/dbhome_1:N

- create a minimal pfile for the clone

*.audit_file_dest='/u01/app/oracle/admin/CDBSBY/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.db_block_size=8192
*.db_create_file_dest='/u01/oradata'
*.db_domain=''
*.db_name='CDB2'
*.db_unique_name='CDBSBY'
*.db_recovery_file_dest='/u01/fra'
*.db_recovery_file_dest_size=4560m
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=CDBSBYXDB)'
*.enable_pluggable_database=true
*.nls_language='ENGLISH'
*.nls_territory='UNITED KINGDOM'
*.open_cursors=300
*.pga_aggregate_target=512m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1024m
*.standby_file_management='AUTO'
*.undo_tablespace='UNDOTBS1'

- ensure the audit file dest is created

mkdir -vp /u01/app/oracle/admin/CDBSBY/adump

- copy the pwfile to allow remote login

[oracle@server3 dbs]$ cp orapwCDB2 orapwCDBSBY

- duplicate

[oracle@server3 ~]$  rman target sys/password@cdb2 auxiliary sys/password@cdbsby

RMAN> startup clone nomount

....

RMAN> duplicate target database for standby;

- Make sure to note down the control files. Their names are in the RMAN output

executing Memory Script

Starting restore at 09-JUN-15
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u01/fra/CDB2/autobackup/2015_06_09/o1_mf_s_881907110_bqfgzb3n_.bkp
channel ORA_AUX_DISK_1: piece handle=/u01/fra/CDB2/autobackup/2015_06_09/o1_mf_s_881907110_bqfgzb3n_.bkp tag=TAG20150609T061150
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/oradata/CDBSBY/controlfile/o1_mf_bqflhy0t_.ctl
output file name=/u01/fra/CDBSBY/controlfile/o1_mf_bqflhydr_.ctl
Finished restore at 09-JUN-15

- In this case they are:

output file name=/u01/oradata/CDBSBY/controlfile/o1_mf_bqflhy0t_.ctl
output file name=/u01/fra/CDBSBY/controlfile/o1_mf_bqflhydr_.ctl

- modify the pfile to include these. You can also use "show parameter control_files".

- create spfile from pfile and restart the standby
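
A sketch of these last two steps; the pfile path is an assumption, use wherever you created yours:

SQL> create spfile from pfile='/tmp/initCDBSBY.ora';
SQL> shutdown immediate
SQL> startup mount

A physical standby is left in mount mode; the broker and redo apply take it from there.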

2) add the database into the broker configuration

- Enable the broker on primary and standby

SQL> alter system set dg_broker_start = true;

- add the databases to the broker configuration

[oracle@server3 ~]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.

DGMGRL>  CREATE CONFIGURATION twelve as PRIMARY DATABASE IS 'CDB2' CONNECT IDENTIFIER IS 'CDB2';
Configuration "twelve" created with primary database "CDB2"

DGMGRL> add database 'CDBSBY' AS CONNECT IDENTIFIER IS 'CDBSBY';
Database "CDBSBY" added

- create standby redo logs on each database

Check their size in v$log and create the standby redo logs on each database. The following should work; you create one more SRL than there are online redo log groups per thread (there is only one thread in single-instance Oracle).

SQL> begin
  2  for i in 1..4 loop
  3   execute immediate 'alter database add standby logfile size 52428800';
  4  end loop;
  5  end;
  6  /
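
You can verify the result afterwards, for example:

SQL> select group#, thread#, bytes/1024/1024 as mb, status from v$standby_log;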

- enable the configuration

DGMGRL> enable configuration
Enabled.

DGMGRL> show configuration

Configuration - twelve

  Protection Mode: MaxPerformance
  Members:
  CDB2   - Primary database
    CDBSBY - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 4 seconds ago)


3) create a new PDB on the primary, tail the alert.log to see what's happening on the standby

- be sure to have standby_file_management set to auto

DGMGRL> show database 'CDB2' standbyfilemanagement
  StandbyFileManagement = 'AUTO'
DGMGRL> show database 'CDBSBY' standbyfilemanagement
  StandbyFileManagement = 'AUTO'

SQL> select name, db_unique_name, database_role from v$database;

NAME      DB_UNIQUE_NAME                 DATABASE_ROLE
--------- ------------------------------ ----------------
CDB2      CDB2                           PRIMARY

SQL> create pluggable database PDBSBY admin user PDBSBY_ADMIN identified by secret;

- tail the primary alert.log

2015-06-09 07:34:43.265000 -04:00
create pluggable database PDBSBY admin user PDBSBY_ADMIN identified by *
 APEX_040200.WWV_FLOW_ADVISOR_CHECKS (CHECK_STATEMENT) - CLOB populated
2015-06-09 07:35:05.280000 -04:00
****************************************************************
Pluggable Database PDBSBY with pdb id - 5 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
Database Characterset for PDBSBY is WE8MSWIN1252
2015-06-09 07:35:06.834000 -04:00
Deleting old file#5 from file$
Deleting old file#7 from file$
Adding new file#23 to file$(old file#5)
Adding new file#24 to file$(old file#7)
2015-06-09 07:35:08.031000 -04:00
Successfully created internal service pdbsby at open
2015-06-09 07:35:12.391000 -04:00
ALTER SYSTEM: Flushing buffer cache inst=0 container=5 local
2015-06-09 07:35:20.225000 -04:00
****************************************************************
Post plug operations are now complete.
Pluggable database PDBSBY with pdb id - 5 is now marked as NEW.
****************************************************************
Completed: create pluggable database PDBSBY admin user PDBSBY_ADMIN identified by *

- tail the standby

2015-06-09 07:34:58.636000 -04:00
Recovery created pluggable database PDBSBY
2015-06-09 07:35:03.499000 -04:00
Recovery copied files for tablespace SYSTEM
Recovery successfully copied file /u01/oradata/CDBSBY/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_system_bqfmtn2d_.dbf from /u01/oradata/CDBSBY/datafile/o1_mf_system_bqflkyry_.dbf
2015-06-09 07:35:05.219000 -04:00
Successfully added datafile 23 to media recovery
Datafile #23: '/u01/oradata/CDBSBY/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_system_bqfmtn2d_.dbf'
2015-06-09 07:35:13.119000 -04:00
Recovery copied files for tablespace SYSAUX
Recovery successfully copied file /u01/oradata/CDBSBY/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_sysaux_bqfmtn2l_.dbf from /u01/oradata/CDBSBY/datafile/o1_mf_sysaux_bqflkpc2_.dbf
2015-06-09 07:35:17.968000 -04:00
Successfully added datafile 24 to media recovery
Datafile #24: '/u01/oradata/CDBSBY/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_sysaux_bqfmtn2l_.dbf'


4) switch over to CDBSBY

Using the broker, connect to CDBSBY as SYSDBA, then verify switchover readiness.

[oracle@server3 ~]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@cdbsby
Password:
Connected as SYSDBA.
DGMGRL> validate database 'CDBSBY';

  Database Role:     Physical standby database
  Primary Database:  CDB2

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Temporary Tablespace File Information:
    CDB2 TEMP Files:    5
    CDBSBY TEMP Files:  4

  Flashback Database Status:
    CDB2:    Off
    CDBSBY:  Off

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (CDB2)                  (CDBSBY)
    1         3                       3                       Insufficient SRLs

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (CDBSBY)                (CDB2)
    1         3                       0                       Insufficient SRLs
    Warning: standby redo logs not configured for thread 1 on CDB2

DGMGRL>

If you see "ready for switchover", do it:

DGMGRL> switchover to 'CDBSBY';
Performing switchover NOW, please wait...
New primary database "CDBSBY" is opening...
Oracle Clusterware is restarting database "CDB2" ...
Switchover succeeded, new primary is "CDBSBY"
DGMGRL>

5) check if you can access PDBSBY

SQL> select name,db_unique_name,database_role from v$database;

NAME      DB_UNIQUE_NAME                 DATABASE_ROLE
--------- ------------------------------ ----------------
CDB2      CDBSBY                         PRIMARY

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 MASTER                         READ WRITE NO
         4 PDB1                           READ WRITE NO
         5 PDBSBY                         READ WRITE NO

SQL> select name from v$datafile
  2  /

NAME
---------------------------------------------------------------------------------------------------------
/u01/oradata/CDBSBY/datafile/o1_mf_undotbs1_bqfljffy_.dbf
/u01/oradata/CDBSBY/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_system_bqfmtn2d_.dbf
/u01/oradata/CDBSBY/1815250A88E57497E0530C64A8C01A28/datafile/o1_mf_sysaux_bqfmtn2l_.dbf

SQL> select sys_context('userenv', 'con_name') from dual;

SYS_CONTEXT('USERENV','CON_NAME')
----------------------------------------------------------------------------------------------------------
PDBSBY


- keep the standby database! It will be needed later on.
}}}

! LAB X: CDB Resource Manager

{{{
1) Create a CDB resource plan

Consolidation requires the creation of a CDB resource manager plan. Please ensure you have the following PDBs in your CDB:
- MASTER
- PDB1
- PDBSBY

Create a CDB plan for your CDB and set the distribution of CPU shares and utilisation limits as follows:
- MASTER: 1 share, limit 30
- PDB1: 5 shares, limit 100
- PDBSBY: 3 shares, limit 70

There is no need to limit parallel query (PQ). To keep the lab simple, no PDB plans are needed.

Unfortunately due to a limited number of CPUs we cannot test the plans in action!

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 MASTER                         READ WRITE NO
         4 PDB1                           READ WRITE NO
         5 PDBSBY                         READ WRITE NO

- make sure you are in the ROOT

SQL> select sys_context('userenv','con_name') from dual;

declare
 v_plan_name varchar2(50) := 'ENKITEC_CDB_PLAN';
begin
 dbms_resource_manager.clear_pending_area;
 dbms_resource_manager.create_pending_area;

 dbms_resource_manager.create_cdb_plan(
  plan => v_plan_name,
  comment => 'A CDB plan for the 12c class'
 );

 dbms_resource_manager.create_cdb_plan_directive(
  plan => v_plan_name,
  pluggable_database => 'MASTER',
  shares => 1,
  utilization_limit => 30);

 dbms_resource_manager.create_cdb_plan_directive(
  plan => v_plan_name,
  pluggable_database => 'PDB1',
  shares => 5,
  utilization_limit => 100);

 dbms_resource_manager.create_cdb_plan_directive(
  plan => v_plan_name,
  pluggable_database => 'PDBSBY',
  shares => 3,
  utilization_limit => 70);

 dbms_resource_manager.validate_pending_area;
 dbms_resource_manager.submit_pending_area;
end;
/

2) Query the CDB Resource Plan dictionary information

COLUMN PLAN FORMAT A30
COLUMN STATUS FORMAT A10
COLUMN COMMENTS FORMAT A35
 
SELECT PLAN, STATUS, COMMENTS FROM DBA_CDB_RSRC_PLANS ORDER BY PLAN;

3) Query the CDB Plan directives in the dictionary

COLUMN PLAN HEADING 'Plan' FORMAT A26
COLUMN PLUGGABLE_DATABASE HEADING 'Pluggable|Database' FORMAT A25
COLUMN SHARES HEADING 'Shares' FORMAT 999
COLUMN UTILIZATION_LIMIT HEADING 'Utilization|Limit' FORMAT 999
COLUMN PARALLEL_SERVER_LIMIT HEADING 'Parallel|Server|Limit' FORMAT 999
 
SELECT PLAN, 
       PLUGGABLE_DATABASE, 
       SHARES, 
       UTILIZATION_LIMIT,
       PARALLEL_SERVER_LIMIT
  FROM DBA_CDB_RSRC_PLAN_DIRECTIVES
  ORDER BY PLAN;


4) Set the CDB resource plan

Set the new plan in the CDB$ROOT

SQL> alter system set RESOURCE_MANAGER_PLAN = 'ENKITEC_CDB_PLAN' scope=both;

System altered.

5) Create a DBRM plan for the MASTER PDB

Create a new database resource plan with two new consumer groups, LOWPRIO_GROUP and HIGHPRIO_GROUP. The consumer group mappings are to be based on the Oracle user. Do not forget to add the SYS_GROUP and OTHER_GROUPS.

The various CPU entitlements (mgmt_p1, which across all directives must not exceed 100) are as follows:
- SYS_GROUP      - level 1 - 50
- HIGHPRIO_GROUP - level 1 - 30
- LOWPRIO_GROUP  - level 1 - 10
- OTHER_GROUPS   - level 1 - 5

No other plan directives are needed; in a PDB, a resource plan can only have directives at level 1. Make sure you are connected to the PDB when executing the commands! Start by creating the users and granting them the CONNECT role. Define the mappings based on the Oracle user, and grant both users the privilege to switch to their consumer groups. In the last step, create the plan and its plan directives.

create user LOWPRIO identified by lowprio;
create user HIGHPRIO identified by highprio;

grant connect to LOWPRIO;
grant connect to HIGHPRIO;


begin
 dbms_resource_manager.clear_pending_area;
 dbms_resource_manager.create_pending_area;

 dbms_resource_manager.create_consumer_group('LOWPRIO_GROUP', 'for low priority processing');
 dbms_resource_manager.create_consumer_group('HIGHPRIO_GROUP', 'we will starve you');

 dbms_resource_manager.validate_pending_area();
 dbms_resource_manager.submit_pending_area();
end;
/

begin
 dbms_resource_manager.create_pending_area();
 dbms_resource_manager.set_consumer_group_mapping(
		dbms_resource_manager.oracle_user, 'LOWPRIO', 'LOWPRIO_GROUP');
 dbms_resource_manager.set_consumer_group_mapping(
		dbms_resource_manager.oracle_user, 'HIGHPRIO', 'HIGHPRIO_GROUP');
 dbms_resource_manager.submit_pending_area();
end;
/

begin
 dbms_resource_manager_privs.grant_switch_consumer_group('LOWPRIO','LOWPRIO_GROUP', true);
 dbms_resource_manager_privs.grant_switch_consumer_group('HIGHPRIO','HIGHPRIO_GROUP', true);
end;
/

BEGIN
 dbms_resource_manager.clear_pending_area();
 dbms_resource_manager.create_pending_area();
 
 dbms_resource_manager.create_plan(
 	plan => 'ENKITEC_MASTER_PDB_PLAN',
 	comment => 'sample DBRM plan for the training classes'
 );

 dbms_resource_manager.create_plan_directive(
  plan => 'ENKITEC_MASTER_PDB_PLAN',
  comment => 'sys_group is level 1',
  group_or_subplan => 'SYS_GROUP',
  mgmt_p1 => 50);

 dbms_resource_manager.create_plan_directive(
  plan => 'ENKITEC_MASTER_PDB_PLAN',
  group_or_subplan => 'HIGHPRIO_GROUP',
  comment => 'us before anyone else',
  mgmt_p1 => 30
 );

 -- artificially limit the resources
 dbms_resource_manager.create_plan_directive(
  plan => 'ENKITEC_MASTER_PDB_PLAN',
  group_or_subplan => 'LOWPRIO_GROUP',
  comment => 'then the LOWPRIO group',
  mgmt_p1 => 10
 );

 -- finally anyone not in a previous consumer group will be mapped to the
 -- OTHER_GROUPS
 dbms_resource_manager.create_plan_directive(
  plan => 'ENKITEC_MASTER_PDB_PLAN',
  group_or_subplan => 'OTHER_GROUPS',
  comment => 'all the rest',
  mgmt_p1 => 5
 );
 
 dbms_resource_manager.validate_pending_area();
 dbms_resource_manager.submit_pending_area();
end;
/

6) Enable the PDB resource plan

alter system set resource_manager_plan = 'ENKITEC_MASTER_PDB_PLAN';

7) Verify the resource manager plans are correct in their respective container

SQL> select sys_context('userenv','con_name') from dual;

SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
CDB$ROOT

SQL> show parameter resource_manager_plan

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
resource_manager_plan                string      ENKITEC_CDB_PLAN
SQL>

SQL> alter session set container = master;

Session altered.

SQL> select sys_context('userenv','con_name') from dual;

SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
MASTER

SQL> show parameter resource_manager_plan

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
resource_manager_plan                string      ENKITEC_MASTER_PDB_PLAN
SQL>

8) Connect as either HIGHPRIO or LOWPRIO to the PDB and check that the mapping works

[oracle@server3 ~]$ sqlplus highprio/highprio@localhost/master

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jun 10 05:58:55 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Jun 10 2015 05:57:33 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


- in a different session

SQL> select con_id, resource_consumer_group from v$session where username = 'HIGHPRIO';

    CON_ID RESOURCE_CONSUMER_GROUP
---------- --------------------------------
         3 HIGHPRIO_GROUP
}}}

! LAB X: generic RMAN enhancements

{{{
In this lab you will learn how to perform a table point-in-time recovery.

1) create a table in a schema of your choice in the non-CDB and populate it with data

SQL> sho user
USER is "MARTIN"

SQL> create table recoverme tablespace users as select * from dba_objects;

Table created 

SQL> select table_name from tabs;

TABLE_NAME
--------------------------------------------------------------------------------
RECOVERME

2) get some information about the database; the most useful item is the SCN. This is the SCN to recover to in the next steps, so take a note of it.

SQL> select db_unique_name, database_role, cdb, current_scn from v$database;

DB_UNIQUE_NAME                    DATABASE_ROLE CDB CURRENT_SCN
------------------------------ ---------------- --- -----------
NCDB                                    PRIMARY  NO     1766295

3) ensure there were rows in the table at this particular SCN

SQL> select count(*) from recoverme;

COUNT(*)
----------
91858

4) truncate the table to simulate something daft

SQL> truncate table recoverme;

table truncated.

5) try to salvage the table without having to revert to a restore

SQL> flashback table recoverme to scn 1766304;
flashback table recoverme to scn 1766304
*
ERROR at line 1:
ORA-08189: cannot flashback the table because row movement is not enabled


SQL> alter table recoverme enable row movement;

Table altered.

SQL> flashback table recoverme to scn 1766304;
flashback table recoverme to scn 1766304
*
ERROR at line 1:
ORA-01466: unable to read data - table definition has changed

6) After this proved unsuccessful, perform a table point-in-time recovery

NB: which other recovery technique could you have tried in step 5?

RECOVER TABLE MARTIN.RECOVERME
UNTIL SCN 1766295
AUXILIARY DESTINATION '/u02/oradata/adata/oraback/NCDB/temp'
REMAP TABLE 'MARTIN'.'RECOVERME':'RECOVERME_RESTRD';
....
executing Memory Script

Oracle instance shut down

Performing import of tables...
IMPDP> Master table "SYS"."TSPITR_IMP_onvy_iEei" successfully loaded/unloaded
IMPDP> Starting "SYS"."TSPITR_IMP_onvy_iEei":
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
IMPDP> . . imported "MARTIN"."RECOVERME_RESTRD" 10.01 MB 91858 rows
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
IMPDP> Job "SYS"."TSPITR_IMP_onvy_iEei" successfully completed at Thu Jun 11 09:42:52 2015 elapsed 0 00:00:15
Import completed


Removing automatic instance
Automatic instance removed
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/NCDB/datafile/o1_mf_temp_bqlldwvd_.tmp deleted
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/ONVY_PITR_NCDB/onlinelog/o1_mf_3_bqllgqf9_.log deleted
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/ONVY_PITR_NCDB/onlinelog/o1_mf_2_bqllgq2t_.log deleted
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/ONVY_PITR_NCDB/onlinelog/o1_mf_1_bqllgpq2_.log deleted
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/ONVY_PITR_NCDB/datafile/o1_mf_users_bqllgnnh_.dbf deleted
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/NCDB/datafile/o1_mf_sysaux_bqllddlj_.dbf deleted
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/NCDB/datafile/o1_mf_undotbs1_bqllddln_.dbf deleted
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/NCDB/datafile/o1_mf_system_bqllddlb_.dbf deleted
auxiliary instance file /u02/oradata/adata/oraback/NCDB/temp/NCDB/controlfile/o1_mf_bqlld6c7_.ctl deleted
auxiliary instance file tspitr_onvy_78692.dmp deleted
Finished recover at 11.06.2015 09:42:54

7) check that the table was imported OK and has the rows needed.

SQL> conn martin/secret
Connected.
SQL> select table_name from tabs;

TABLE_NAME
--------------------------------------------------------------------------------
RECOVERME
RECOVERME_RESTRD

SQL> select count(*) from RECOVERME_RESTRD;

COUNT(*)
----------
91858

SQL> select count(*) from RECOVERME;

COUNT(*)
----------
0
}}}

! Lab X: Threaded Execution

{{{
In the final lab you will experiment with the changes introduced by threaded execution.

1) Verify your current settings

Start your NCDB if it is not open. Have a look at all the OS process IDs. How many are started?

[oracle@server3 ~]$ ps -ef | grep NCDB
oracle   19728 19658  0 06:27 pts/0    00:00:00 screen -S NCDB
oracle   19729 19728  0 06:27 ?        00:00:00 SCREEN -S NCDB
oracle   19963     1  0 06:28 ?        00:00:00 ora_pmon_NCDB
oracle   19965     1  0 06:28 ?        00:00:00 ora_psp0_NCDB
oracle   19967     1  2 06:28 ?        00:00:11 ora_vktm_NCDB
oracle   19971     1  0 06:28 ?        00:00:00 ora_gen0_NCDB
oracle   19973     1  0 06:28 ?        00:00:00 ora_mman_NCDB
oracle   19977     1  0 06:28 ?        00:00:00 ora_diag_NCDB
oracle   19979     1  0 06:28 ?        00:00:00 ora_dbrm_NCDB
oracle   19981     1  0 06:28 ?        00:00:00 ora_vkrm_NCDB
oracle   19983     1  0 06:28 ?        00:00:00 ora_dia0_NCDB
oracle   19985     1  0 06:28 ?        00:00:02 ora_dbw0_NCDB
oracle   19987     1  0 06:28 ?        00:00:03 ora_lgwr_NCDB
oracle   19989     1  0 06:28 ?        00:00:00 ora_ckpt_NCDB
oracle   19991     1  0 06:28 ?        00:00:00 ora_lg00_NCDB
oracle   19993     1  0 06:28 ?        00:00:00 ora_smon_NCDB
oracle   19995     1  0 06:28 ?        00:00:00 ora_lg01_NCDB
oracle   19997     1  0 06:28 ?        00:00:00 ora_reco_NCDB
oracle   19999     1  0 06:28 ?        00:00:00 ora_lreg_NCDB
oracle   20001     1  0 06:28 ?        00:00:00 ora_pxmn_NCDB
oracle   20003     1  0 06:28 ?        00:00:00 ora_rbal_NCDB
oracle   20005     1  0 06:28 ?        00:00:00 ora_asmb_NCDB
oracle   20007     1  0 06:28 ?        00:00:01 ora_mmon_NCDB
oracle   20009     1  0 06:28 ?        00:00:00 ora_mmnl_NCDB
oracle   20013     1  0 06:28 ?        00:00:00 ora_mark_NCDB
oracle   20015     1  0 06:28 ?        00:00:00 ora_d000_NCDB
oracle   20017     1  0 06:28 ?        00:00:00 ora_s000_NCDB
oracle   20021     1  0 06:28 ?        00:00:00 ora_dmon_NCDB
oracle   20033     1  0 06:28 ?        00:00:00 ora_o000_NCDB
oracle   20039     1  0 06:28 ?        00:00:00 ora_o001_NCDB
oracle   20043     1  0 06:28 ?        00:00:00 ora_rvwr_NCDB
oracle   20045     1  0 06:28 ?        00:00:00 ora_insv_NCDB
oracle   20047     1  0 06:28 ?        00:00:00 ora_nsv1_NCDB
oracle   20049     1  0 06:28 ?        00:00:00 ora_fsfp_NCDB
oracle   20054     1  0 06:28 ?        00:00:00 ora_rsm0_NCDB
oracle   20056     1  0 06:29 ?        00:00:00 ora_tmon_NCDB
oracle   20058     1  0 06:29 ?        00:00:00 ora_arc0_NCDB
oracle   20060     1  0 06:29 ?        00:00:00 ora_arc1_NCDB
oracle   20062     1  0 06:29 ?        00:00:00 ora_arc2_NCDB
oracle   20064     1  0 06:29 ?        00:00:00 ora_arc3_NCDB
oracle   20066     1  0 06:29 ?        00:00:00 ora_o002_NCDB
oracle   20068     1  0 06:29 ?        00:00:00 ora_o003_NCDB
oracle   20074     1  0 06:29 ?        00:00:00 ora_o004_NCDB
oracle   20078     1  0 06:29 ?        00:00:00 ora_tt00_NCDB
oracle   20080     1  0 06:29 ?        00:00:00 ora_tt01_NCDB
oracle   20091     1  0 06:29 ?        00:00:00 ora_p000_NCDB
oracle   20093     1  0 06:29 ?        00:00:00 ora_p001_NCDB
oracle   20095     1  0 06:29 ?        00:00:00 ora_p002_NCDB
oracle   20097     1  0 06:29 ?        00:00:00 ora_p003_NCDB
oracle   20099     1  0 06:29 ?        00:00:00 ora_smco_NCDB
oracle   20101     1  0 06:29 ?        00:00:00 ora_w000_NCDB
oracle   20103     1  0 06:29 ?        00:00:00 ora_w001_NCDB
oracle   20107     1  0 06:29 ?        00:00:00 ora_aqpc_NCDB
oracle   20111     1  0 06:29 ?        00:00:00 ora_p004_NCDB
oracle   20113     1  0 06:29 ?        00:00:00 ora_p005_NCDB
oracle   20115     1  0 06:29 ?        00:00:00 ora_p006_NCDB
oracle   20117     1  0 06:29 ?        00:00:00 ora_p007_NCDB
oracle   20119     1  0 06:29 ?        00:00:00 ora_cjq0_NCDB
oracle   20121     1  0 06:29 ?        00:00:00 ora_qm02_NCDB
oracle   20125     1  0 06:29 ?        00:00:00 ora_q002_NCDB
oracle   20127     1  0 06:29 ?        00:00:00 ora_q003_NCDB
oracle   20131     1  0 06:29 ?        00:00:00 oracleNCDB (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   20305     1  1 06:30 ?        00:00:03 ora_m005_NCDB
oracle   20412     1  0 06:33 ?        00:00:00 ora_m004_NCDB
oracle   20481     1  0 06:34 ?        00:00:00 ora_j000_NCDB
oracle   20483     1  0 06:34 ?        00:00:00 ora_j001_NCDB
oracle   20508 20415  0 06:36 pts/15   00:00:00 grep --color=auto NCDB

[oracle@server3 ~]$ ps -ef | grep NCDB | grep -v grep | wc -l
66

- keep that number in mind
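As an aside, counting this way has a classic pitfall: without `grep -v grep`, the grep process matches its own command line and inflates the count by one (the two screen processes are also included above). A canned-input illustration:

```shell
# Simulated ps output: the grep line matches the pattern too, so the raw
# count is one too high until it is filtered out.
printf 'ora_pmon_NCDB\nora_smon_NCDB\ngrep NCDB\n' | grep NCDB | wc -l                  # 3
printf 'ora_pmon_NCDB\nora_smon_NCDB\ngrep NCDB\n' | grep NCDB | grep -v grep | wc -l   # 2
```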

2) switch to threaded_execution

Connect as SYSDBA and check whether the instance is using threaded execution. If not, enable threaded execution and bounce the instance for the parameter to take effect. What do you notice when the database restarts?

SQL> show parameter threaded_

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
threaded_execution                   boolean     FALSE
SQL> alter system set threaded_execution=true scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ERROR:
ORA-01017: invalid username/password; logon denied


ORA-01017: invalid username/password; logon denied
SQL>

3) how can you start the database?

SQL> conn sys/change_on_install as sysdba
Connected.

SQL> alter database mount;

Database altered.

SQL> alter database open;

Database altered.

SQL>

4) check the OS processes now - how many are there?

[oracle@server3 ~]$ ps -ef | grep NCDB | egrep -vi "grep|screen" | nl
     1  oracle   20858     1  0 06:40 ?        00:00:00 ora_pmon_NCDB
     2  oracle   20860     1  0 06:40 ?        00:00:00 ora_psp0_NCDB
     3  oracle   20862     1  2 06:40 ?        00:00:05 ora_vktm_NCDB
     4  oracle   20866     1  0 06:40 ?        00:00:01 ora_u004_NCDB
     5  oracle   20872     1  6 06:40 ?        00:00:15 ora_u005_NCDB
     6  oracle   20879     1  0 06:40 ?        00:00:00 ora_dbw0_NCDB
     7  oracle   20959     1  0 06:42 ?        00:00:00 oracleNCDB (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
[oracle@server3 ~]$

5) Can you think of a reason why there are so few? Can you make the others appear? Clue: check man ps
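The background tasks are all still there; they now run as threads inside a few container processes (the ora_u00N ones), and plain `ps -ef` prints one line per process, not per thread. `ps -eLf` adds the `-L` option, which lists every thread (the NLWP column shows the thread count per process). The threads ps reports live under /proc/<pid>/task; a minimal non-Oracle sketch:

```shell
# Each thread of a process is a directory under /proc/<pid>/task -- this is
# what ps -eLf enumerates. The current shell is single-threaded, so exactly
# one task entry shows up:
ls /proc/$$/task | wc -l
```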

[oracle@server3 ~]$ ps -eLf | grep NCDB | egrep -vi "grep|screen" | grep -v grep
oracle   20858     1 20858  0    1 06:40 ?        00:00:00 ora_pmon_NCDB
oracle   20860     1 20860  0    1 06:40 ?        00:00:00 ora_psp0_NCDB
oracle   20862     1 20862  2    1 06:40 ?        00:00:06 ora_vktm_NCDB
oracle   20866     1 20866  0   14 06:40 ?        00:00:00 ora_u004_NCDB
oracle   20866     1 20867  0   14 06:40 ?        00:00:00 ora_u004_NCDB
oracle   20866     1 20868  0   14 06:40 ?        00:00:00 ora_u004_NCDB
oracle   20866     1 20869  0   14 06:40 ?        00:00:00 ora_u004_NCDB
oracle   20866     1 20875  0   14 06:40 ?        00:00:00 ora_u004_NCDB
oracle   20866     1 20880  0   14 06:40 ?        00:00:00 ora_u004_NCDB
oracle   20866     1 20881  0   14 06:40 ?        00:00:00 ora_u004_NCDB
oracle   20866     1 20882  0   14 06:40 ?        00:00:00 ora_u004_NCDB
oracle   20866     1 20883  0   14 06:40 ?        00:00:00 ora_u004_NCDB
oracle   20866     1 20884  0   14 06:40 ?        00:00:00 ora_u004_NCDB
oracle   20866     1 20886  0   14 06:40 ?        00:00:00 ora_u004_NCDB
oracle   20866     1 20888  0   14 06:40 ?        00:00:00 ora_u004_NCDB
oracle   20866     1 20889  0   14 06:40 ?        00:00:00 ora_u004_NCDB
oracle   20866     1 20928  0   14 06:41 ?        00:00:00 ora_u004_NCDB
oracle   20872     1 20872  0   45 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20873  0   45 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20874  0   45 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20876  0   45 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20877  0   45 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20885  0   45 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20887  0   45 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20890  0   45 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20891  0   45 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20892  0   45 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20893  0   45 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20896  0   45 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20897  0   45 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20898  0   45 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20900  0   45 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20922  0   45 06:41 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20925  0   45 06:41 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20932  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20934  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20935  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20936  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20937  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20938  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20939  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20940  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20941  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20942  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20943  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20944  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20945  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20947  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20948  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20949  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20950  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20951  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20952  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20953  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20954  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20955  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21088  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21089  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21091  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21092  0   45 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21138  0   45 06:44 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21139  0   45 06:44 ?        00:00:00 ora_u005_NCDB
oracle   20879     1 20879  0    1 06:40 ?        00:00:00 ora_dbw0_NCDB
oracle   20959     1 20959  0    1 06:42 ?        00:00:00 oracleNCDB (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

6) create a session over Oracle Net - is the session a process or a thread?

[oracle@server3 ~]$ sqlplus martin/secret@ncdb

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 11 06:45:57 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Thu Apr 23 2015 04:11:21 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> select userenv('sid') from dual;

USERENV('SID')
--------------
            15


- in another session

SQL> select pid,sosid,spid,stid,execution_type from v$process where addr = (select paddr from v$session where sid = 15);

       PID SOSID                    SPID                     STID                     EXECUTION_
---------- ------------------------ ------------------------ ------------------------ ----------
        30 21178                    21178                    21178                    PROCESS

- this appears to be a process. Confirm on the OS-level:

[oracle@server3 ~]$ ps -eLf | grep 21178 | grep -v grep
oracle   21178     1 21178  0    1 06:45 ?        00:00:00 oracleNCDB (LOCAL=NO)

7) Now enable new sessions to be created as threads. To do so, you need to change the listener configuration and add DEDICATED_THROUGH_BROKER_LISTENER = ON. Then reload the listener.
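A sketch of the listener change, assuming the default listener name LISTENER (the parameter name ends in the listener's name, so adjust the suffix if yours differs):

```
# $ORACLE_HOME/network/admin/listener.ora -- add:
DEDICATED_THROUGH_BROKER_LISTENER = ON

# then pick up the change without restarting the listener:
$ lsnrctl reload
```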

8) connect again

[oracle@server3 ~]$ sqlplus martin/secret@ncdb

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 11 06:56:38 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Thu Jun 11 2015 06:45:58 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> select userenv('sid') from dual;

USERENV('SID')
--------------
            31


- verify if it's a process or a thread

SQL> select pid,sosid,spid,stid,execution_type from v$process where addr = (select paddr from v$session where sid = 31);

       PID SOSID                    SPID                     STID                     EXECUTION_
---------- ------------------------ ------------------------ ------------------------ ----------
        30 20872_21481              20872                    21481                    THREAD

- it is a thread. Can you see this on the OS too?

[oracle@server3 ~]$ ps -eLf | egrep 21481
oracle   20872     1 21481  0   46 06:56 ?        00:00:00 ora_u005_NCDB

- Note that 21481 is the STID (the thread ID); the SPID, 20872, identifies the OS process hosting all the threads

[oracle@server3 ~]$ ps -eLf | egrep 20872
oracle   20872     1 20872  0   49 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20873  0   49 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20874  0   49 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20876  0   49 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20877  0   49 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20885  0   49 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20887  0   49 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20890  0   49 06:40 ?        00:00:01 ora_u005_NCDB
oracle   20872     1 20891  0   49 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20892  0   49 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20893  0   49 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20896  0   49 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20898  0   49 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20900  0   49 06:40 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20922  0   49 06:41 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20925  0   49 06:41 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20932  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20934  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20935  0   49 06:42 ?        00:00:01 ora_u005_NCDB
oracle   20872     1 20936  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20937  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20938  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20939  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20940  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20941  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20942  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20943  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20944  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20945  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20947  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20948  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20949  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20950  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20951  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20952  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20953  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20954  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 20955  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21088  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21089  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21091  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21092  0   49 06:42 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21481  0   49 06:56 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21494  0   49 06:57 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21499  0   49 06:57 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21500  0   49 06:57 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21521  0   49 06:59 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21522  0   49 06:59 ?        00:00:00 ora_u005_NCDB
oracle   20872     1 21536  0   49 07:00 ?        00:00:00 ora_u005_NCDB

9) kill the user session with SID 31 ON THE OS LEVEL. Do not use alter system disconnect session!

[oracle@server3 ~]$ ps -eLf | egrep 21481
oracle   20872     1 21481  0   48 06:56 ?        00:00:00 ora_u005_NCDB
oracle   21578 21236 21578  0    1 07:01 pts/16   00:00:00 grep -E --color=auto 21481
[oracle@server3 ~]$ kill -9 21481
[oracle@server3 ~]$

- what do you see? Remember never to do this in production!


}}}


! end
http://facedba.blogspot.com/2017/10/recover-table-from-rman-backup-in.html
<<showtoc>>

! instrumentation for the before and after change
vi memcount.sh
{{{
echo "##### count the threads"
ps -eLf | grep $ORACLE_SID | wc -l
echo "##### count the processes"
ps -ef | grep $ORACLE_SID | wc -l
echo "##### CPU%, MEM%, VSZ, RSS for all users"
ps -A -o pcpu,pmem,vsz,rss | awk '{cpu += $1; mem += $2; vsz += $3; rss += $4} END {print cpu, mem, vsz/1024, rss/1024}'
echo "##### CPU%, MEM%, VSZ, RSS for oracle user"
ps -u $USER -o pcpu,pmem,vsz,rss | awk '{cpu += $1; mem += $2; vsz += $3; rss += $4} END {print cpu, mem, vsz/1024, rss/1024}'
echo "##### system memory"
free -m
echo "##### this sums the %MEM for all users"
ps aux | awk 'NR != 1 {x[$1] += $4} END{ for(z in x) {print z, x[z]"%"}}'
echo "##### this greps the current ORACLE_SID (excluding others) and sums the %MEM"
ps aux | grep $ORACLE_SID | awk 'NR != 1 {x[$1] += $4} END{ for(z in x) {print z, x[z]"%"}}'
}}}
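The awk one-liners above all use the same idiom: accumulate columns line by line, then print the totals in the END block. A standalone illustration:

```shell
# Sum two columns across all input lines, as memcount.sh does for
# pcpu/pmem/vsz/rss; the END block runs once, after the last line.
printf '1.0 2.0\n3.5 4.5\n' | awk '{a += $1; b += $2} END {print a, b}'
# prints: 4.5 6.5
```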

! enable / disable
-- enable
alter system set threaded_execution=true scope=spfile sid='*';
srvctl stop database -d noncdb
srvctl start database -d noncdb
sqlplus sys/oracle@enkx4db01.enkitec.com/noncdb.enkitec.com as sysdba

-- disable
alter system set threaded_execution=false scope=spfile sid='*';
srvctl stop database -d noncdb
srvctl start database -d noncdb
sqlplus sys/oracle@enkx4db01.enkitec.com/noncdb.enkitec.com as sysdba


show parameter threaded

select sys_context('userenv','sid') from dual;
set lines 300
select s.username, s.sid, s.serial#, s.con_id, p.spid, p.sosid, p.stid, p.execution_type
from v$session s, v$process p
where s.sid = 270
and s.paddr = p.addr
/

select count(spid),spid,execution_type from v$process where background = 1 group by spid, execution_type;

select pname, pid, sosid, spid, stid, execution_type
from v$process where background = 1
order by pname
/

select pname, pid, sosid, spid, stid, execution_type
from v$process 
order by pname
/

ps -ef | grep noncdb
ps -eLf | grep noncdb


! before and after effect

-- BEFORE
{{{
$ sh memcount.sh
##### count the threads
49
##### count the processes
49
##### CPU%, MEM%, VSZ, RSS for all users
42.3 159.7 81873.2 1686.75
##### CPU%, MEM%, VSZ, RSS for oracle user
41 154.1 62122.9 1586.95
##### system memory
             total       used       free     shared    buffers     cached
Mem:           994        978         15          0          1        592
-/+ buffers/cache:        385        609
Swap:         1227        591        636
##### this sums the %MEM for all users
gdm 1.1%
oracle 153.9%
rpc 0%
dbus 0.1%
68 0.1%
rtkit 0%
postfix 0.1%
rpcuser 0%
root 4.2%
##### this greps the current ORACLE_SID (excluding others) and sums the %MEM
oracle 142.3%
}}}


-- AFTER
{{{
$ sh memcount.sh
##### count the threads
55
##### count the processes
7
##### CPU%, MEM%, VSZ, RSS for all users
58.3 92.9 56845.8 1005.93
##### CPU%, MEM%, VSZ, RSS for oracle user
57.1 87.3 37095.5 906.363
##### system memory
             total       used       free     shared    buffers     cached
Mem:           994        965         28          0          1        628
-/+ buffers/cache:        336        658
Swap:         1227        591        636
##### this sums the %MEM for all users
gdm 1.1%
oracle 87.4%
rpc 0%
dbus 0.1%
68 0.1%
rtkit 0%
postfix 0.1%
rpcuser 0%
root 4.2%
##### this greps the current ORACLE_SID (excluding others) and sums the %MEM
oracle 75.7%
}}}


! initial conclusions
* All in all, the count of processes dropped from @@49 to 7@@ - but what does this mean in terms of resource savings?
I'd say this mostly affects memory:
** VSZ (virtual memory size) dropped from 62122.9 MB to 37095.5 MB for the oracle user which is a 40% decrease
** RSS (resident set size) dropped from 1586.95 MB to 906.363 MB for the oracle user which is a 42% decrease
** %MEM (ratio of the processes resident set size  to the physical memory on the machine)  dropped from 142.3% to 75.7% which is 46% decrease
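The percentage decreases quoted above can be double-checked with a one-liner:

```python
def pct_decrease(before, after):
    """Percent decrease going from `before` to `after`."""
    return (before - after) / before * 100

# VSZ, RSS and %MEM for the oracle user, before vs after threaded_execution
print(round(pct_decrease(62122.9, 37095.5), 1))  # VSZ
print(round(pct_decrease(1586.95, 906.363), 1))  # RSS
print(round(pct_decrease(142.3, 75.7), 1))       # %MEM
```

They come out to roughly 40.3%, 42.9% and 46.8% respectively.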

So when you consolidate, the savings gained from changing to threaded_execution will be more physical memory headroom for more instances, 
and even more when you switch to the PDB (multi-tenant) architecture

For CPU, there's really no effect. I'd say the CPU workload requirement of an app will stay the same; it will only decrease if you 1) tune or 2) move to a faster CPU. 
See slides 27-30 of this OOW presentation by Arup http://www.oracle.com/technetwork/oem/app-quality-mgmt/con8788-2088738.pdf  

! updates to the initial conclusions 

I had a talk with Frits about the memory side of 12c threaded_execution.

I did some research on performance and here's the result: https://twitter.com/karlarao/status/582053491079843840
For memory, here's the before and after: https://twitter.com/karlarao/status/581367258804396032 

On the performance side, non-threaded execution is faster; I'm definite about that. 
For the memory gains: yes, some sessions could still end up consuming the same (SGA+PGA) memory, but there are still some gains from the background processes, although the results are inconsistent:
* on the VM test that I did, RSS memory showed a ~40% decrease 
* but on the Exadata test, RSS memory actually increased (from ~27258.4MB to ~42487.8MB). 

All in all, I don't like threaded_execution in terms of performance. For memory, it needs a bit more investigation because I'm seeing different results in VM and non-VM environments. 



! FINAL: complete view of metrics for CPU speed comparison - elap_exec, lios_elap, us_lio
[img(95%,95%)[ http://i.imgur.com/WpgZHre.png ]]


! a lot of POLL syscalls which is an overhead causing slower overall performance
This blog post validated what I found. It shows that threaded_execution=true does a lot of POLL syscalls, an overhead that causes slower overall performance. 
The ora_u00N process was also found to be using sockets to communicate with its threads. 
http://blog.ora-600.pl/2015/12/17/oracle-12c-internals-of-threaded-execution/

{{{
Oracle instance with threaded_execution=false:

[root@rico ~]# strace -cp 12168
Process 12168 attached
^CProcess 12168 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
  0.00    0.000000           0         2           read
  0.00    0.000000           0         2           write
  0.00    0.000000           0         1           semctl
  0.00    0.000000           0       159           getrusage
  0.00    0.000000           0        12           times
  0.00    0.000000           0         3           semtimedop
------ ----------- ----------- --------- --------- ----------------
100.00    0.000000                   179           total
Oracle instance with threaded_execution=true:

[root@rico fd]# strace -cp 12165
Process 12165 attached
^CProcess 12165 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 84.22    0.113706           0    980840           poll
 10.37    0.014000        7000         2           read
  5.41    0.007310        1218         6           semtimedop
  0.00    0.000000           0         2           write
  0.00    0.000000           0         1           semctl
  0.00    0.000000           0       419           getrusage
  0.00    0.000000           0        12           times
------ ----------- ----------- --------- --------- ----------------
100.00    0.135016                981282           total
 
[root@rico fd]# strace -p 12165 -o /tmp/threaded_exec.out
Process 12165 attached
^CProcess 12165 detached
[root@rico fd]# grep poll /tmp/threaded_exec.out | tail
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)
poll([{fd=63, events=POLLIN|POLLRDNORM}], 1, 0) = 0 (Timeout)

The STID 12165 was assigned to SPID 8107:
SQL> get spid
  1  select spid, stid
  2  from v$process p, v$session s
  3  where p.addr=s.paddr
  4* and   s.sid=sys_context('userenv','sid')
SQL> /
 
SPID             STID
------------------------ ------------------------
8107             12165

Let’s check the file descriptors for this thread:
[root@rico ~]# cd /proc/8107/task/12165/fd
[root@rico fd]# ls -al | grep 63
lrwx------. 1 oracle oinstall 64 12-17 21:38 63 -> socket:[73968]
[root@rico fd]# lsof | grep 73968
ora_scmn_  8107    oracle   63u     IPv6              73968        0t0        TCP localhost:ncube-lm->localhost:32400 (ESTABLISHED)
[root@rico fd]# ps aux | grep 8107 | grep -v grep
oracle    8107  4.7 29.0 6155520 2901516 ?     Ssl  20:01   6:54 ora_u005_orclth
[root@rico fd]#

}}}












also check this tiddler [[12c New Features]]



https://docs.oracle.com/database/121/DWHSG/refresh.htm#DWHSG-GUID-51191C38-D52F-4A4D-B6FF-E631965AD69A
<<<
Types of Out-of-Place Refresh

There are three types of out-of-place refresh:

    out-of-place fast refresh

    This offers better availability than in-place fast refresh. It also offers better performance when changes affect a large part of the materialized view.

    out-of-place PCT refresh

    This offers better availability than in-place PCT refresh. There are two different approaches for partitioned and non-partitioned materialized views. If truncation and direct load are not feasible, you should use out-of-place refresh when the changes are relatively large. If truncation and direct load are feasible, in-place refresh is preferable in terms of performance. In terms of availability, out-of-place refresh is always preferable.

    out-of-place complete refresh

    This offers better availability than in-place complete refresh.

Using the refresh interface in the DBMS_MVIEW package, with method = ? and out_of_place = true, out-of-place fast refresh are attempted first, then out-of-place PCT refresh, and finally out-of-place complete refresh. An example is the following:

DBMS_MVIEW.REFRESH('CAL_MONTH_SALES_MV', method => '?', 
   atomic_refresh => FALSE, out_of_place => TRUE);
<<<



http://karandba.blogspot.com/2014/10/out-of-place-refresh-option-for.html
http://horia-berca-oracledba.blogspot.com/2013/10/out-of-place-materialized-view-refresh.html
https://community.oracle.com/mosc/discussion/4281580/partition-truncate-with-deferred-invalidation-any-side-effects
https://blog.dbi-services.com/oracle-12cr2-ddl-deferred-invalidation/
truncate table https://docs.oracle.com/database/121/SQLRF/statements_10007.htm#SQLRF01707

https://blogs.oracle.com/optimizer/optimizer-adaptive-features-in-oracle-database-12c-release-2


http://kerryosborne.oracle-guy.com/2013/11/24/12c-adaptive-optimization-part-1/
http://kerryosborne.oracle-guy.com/2013/12/09/12c-adaptive-optimization-part-2-hints/












.
Interesting observation about 15sec Top Activity graph
http://oracleprof.blogspot.com/2010/07/oem-performance-tab-and-active-session.html
https://leetcode.com/problems/combine-two-tables/
{{{
175. Combine Two Tables
Easy
SQL Schema

Table: Person

+-------------+---------+
| Column Name | Type    |
+-------------+---------+
| PersonId    | int     |
| FirstName   | varchar |
| LastName    | varchar |
+-------------+---------+
PersonId is the primary key column for this table.

Table: Address

+-------------+---------+
| Column Name | Type    |
+-------------+---------+
| AddressId   | int     |
| PersonId    | int     |
| City        | varchar |
| State       | varchar |
+-------------+---------+
AddressId is the primary key column for this table.

 

Write a SQL query for a report that provides the following information for each person in the Person table, regardless if there is an address for each of those people:

FirstName, LastName, City, State


Accepted
201,012
Submissions
365,130
}}}



{{{
/* Write your PL/SQL query statement below */

select a.FirstName, a.LastName, b.City, b.State from 
person a, address b
where a.personid = b.personid (+); 

}}}
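The `(+)` syntax above is Oracle's legacy outer-join notation. The equivalent ANSI LEFT JOIN, sketched here against SQLite as a stand-in (the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Person(PersonId INT, FirstName TEXT, LastName TEXT);
CREATE TABLE Address(AddressId INT, PersonId INT, City TEXT, State TEXT);
INSERT INTO Person VALUES (1, 'Allen', 'Wang'), (2, 'Bob', 'Alice');
INSERT INTO Address VALUES (1, 2, 'New York City', 'New York');
""")

# LEFT JOIN keeps every person, with NULL City/State when no address exists
rows = conn.execute("""
SELECT p.FirstName, p.LastName, a.City, a.State
FROM Person p LEFT JOIN Address a ON p.PersonId = a.PersonId
""").fetchall()
print(rows)
```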
https://leetcode.com/problems/second-highest-salary/
{{{
Write a SQL query to get the second highest salary from the Employee table.

+----+--------+
| Id | Salary |
+----+--------+
| 1  | 100    |
| 2  | 200    |
| 3  | 300    |
+----+--------+

For example, given the above Employee table, the query should return 200 as the second highest salary. If there is no second highest salary, then the query should return null.

+---------------------+
| SecondHighestSalary |
+---------------------+
| 200                 |
+---------------------+

Accepted
162,933
Submissions
566,565
}}}



{{{


SELECT  MAX(salary) AS SecondHighestSalary
FROM    employee
WHERE   salary NOT IN (
                        SELECT  MAX(salary)
                        FROM    employee
                      );
                      
                      
--select id, salary from
--(
--select a.id, a.salary, 
--    dense_rank() over(order by a.salary desc) drank 
--from test a)
--where drank = 2;                      
}}}
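The `MAX ... NOT IN` trick above returns NULL for free when there is no second highest salary, because MAX over an empty set is NULL. A quick check against SQLite as a stand-in for Oracle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee(Id INT, Salary INT)")
conn.executemany("INSERT INTO Employee VALUES (?, ?)",
                 [(1, 100), (2, 200), (3, 300)])

q = """
SELECT MAX(Salary) AS SecondHighestSalary
FROM Employee
WHERE Salary NOT IN (SELECT MAX(Salary) FROM Employee)
"""
second = conn.execute(q).fetchone()      # (200,)
conn.execute("DELETE FROM Employee WHERE Id > 1")
empty_case = conn.execute(q).fetchone()  # (None,): MAX over an empty set
print(second, empty_case)
```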
https://leetcode.com/problems/nth-highest-salary/
{{{
177. Nth Highest Salary
Medium

Write a SQL query to get the nth highest salary from the Employee table.

+----+--------+
| Id | Salary |
+----+--------+
| 1  | 100    |
| 2  | 200    |
| 3  | 300    |
+----+--------+

For example, given the above Employee table, the nth highest salary where n = 2 is 200. If there is no nth highest salary, then the query should return null.

+------------------------+
| getNthHighestSalary(2) |
+------------------------+
| 200                    |
+------------------------+

Accepted
81,363
Submissions
288,058
}}}



{{{
CREATE or replace FUNCTION getNthHighestSalary(N IN NUMBER) RETURN NUMBER IS
result NUMBER;
BEGIN
    /* Write your PL/SQL query statement below */
    select nvl(null,salary) salary 
    into result
    from
    (
    select distinct a.salary, 
        dense_rank() over(order by a.salary desc) drank 
    from employee a)
    where drank = N;
    
    RETURN result;
END;
/
select getNthHighestSalary(2) from dual;


CREATE or replace FUNCTION getNthHighestSalary2(N IN NUMBER) RETURN NUMBER IS
result NUMBER;

BEGIN
    select salary into result 
    from (select distinct(salary),rank() over (order by salary desc) as r  
            from test group by salary) where r=N;
return result;
END;
/
select getNthHighestSalary2(2) from dual;




select * from test;


select nvl('x',null) from (
select 1/NULL a from dual);

select 1/nvl(null,1) from dual; -- if not null return 1st , if null return 1
select 1/nvl(0,1) from dual; -- errors if zero
select 1/nullif(nvl( nullif(21,0) ,1),0) from dual; 

SELECT NULLIF(0,0) FROM DUAL;



select nvl(null,salary) salary from
(
select a.id, a.salary, 
    dense_rank() over(order by a.salary desc) drank 
from test a)
where drank = 2;
}}}
https://leetcode.com/problems/rank-scores/
{{{
Write a SQL query to rank scores. If there is a tie between two scores, both should have the same ranking. Note that after a tie, the next ranking number should be the next consecutive integer value. In other words, there should be no "holes" between ranks.

+----+-------+
| Id | Score |
+----+-------+
| 1  | 3.50  |
| 2  | 3.65  |
| 3  | 4.00  |
| 4  | 3.85  |
| 5  | 4.00  |
| 6  | 3.65  |
+----+-------+

For example, given the above Scores table, your query should generate the following report (order by highest score):

+-------+------+
| Score | Rank |
+-------+------+
| 4.00  | 1    |
| 4.00  | 1    |
| 3.85  | 2    |
| 3.65  | 3    |
| 3.65  | 3    |
| 3.50  | 4    |
+-------+------+

Accepted
78,882
Submissions
199,284
}}}

{{{
select  a.score score, 
        dense_rank() over(order by a.score desc) rank 
    from scores a;
}}}
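DENSE_RANK (rather than RANK) is what gives the hole-free numbering the problem asks for. A small pure-Python illustration of the difference, using the Scores values from the problem:

```python
def dense_rank_desc(values):
    """DENSE_RANK: ties share a rank, the next rank is consecutive (no holes)."""
    distinct = sorted(set(values), reverse=True)
    pos = {v: i + 1 for i, v in enumerate(distinct)}
    return [pos[v] for v in values]

def rank_desc(values):
    """RANK: ties share a rank, but the following rank skips (leaves holes)."""
    ordered = sorted(values, reverse=True)
    first = {}
    for i, v in enumerate(ordered):
        first.setdefault(v, i + 1)
    return [first[v] for v in values]

scores = [3.50, 3.65, 4.00, 3.85, 4.00, 3.65]
print(dense_rank_desc(scores))  # [4, 3, 1, 2, 1, 3] -- no holes
print(rank_desc(scores))        # [6, 4, 1, 3, 1, 4] -- holes after ties
```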
https://leetcode.com/problems/employees-earning-more-than-their-managers/
{{{
The Employee table holds all employees including their managers. Every employee has an Id, and there is also a column for the manager Id.

+----+-------+--------+-----------+
| Id | Name  | Salary | ManagerId |
+----+-------+--------+-----------+
| 1  | Joe   | 70000  | 3         |
| 2  | Henry | 80000  | 4         |
| 3  | Sam   | 60000  | NULL      |
| 4  | Max   | 90000  | NULL      |
+----+-------+--------+-----------+

Given the Employee table, write a SQL query that finds out employees who earn more than their managers. For the above table, Joe is the only employee who earns more than his manager.

+----------+
| Employee |
+----------+
| Joe      |
+----------+

Accepted
134,240
Submissions
261,500
}}}

{{{
/* Write your PL/SQL query statement below */
select b.name as Employee 
from Employee a, Employee b
where a.id = b.managerid
and b.salary > a.salary;


select * from employeeslc a, employeeslc b
where a.employee_id = b.manager_id
and b.salary > a.salary;
}}}
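The self-join above can be sanity-checked against SQLite using the problem's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee(Id INT, Name TEXT, Salary INT, ManagerId INT)")
conn.executemany("INSERT INTO Employee VALUES (?,?,?,?)",
                 [(1, 'Joe', 70000, 3), (2, 'Henry', 80000, 4),
                  (3, 'Sam', 60000, None), (4, 'Max', 90000, None)])

# Self-join: b is the employee, a is the manager (same shape as the solution above)
rows = conn.execute("""
SELECT b.Name AS Employee
FROM Employee a JOIN Employee b ON a.Id = b.ManagerId
WHERE b.Salary > a.Salary
""").fetchall()
print(rows)  # only Joe out-earns his manager
```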
https://leetcode.com/problems/department-highest-salary/

{{{
The Employee table holds all employees. Every employee has an Id, a salary, and there is also a column for the department Id.

+----+-------+--------+--------------+
| Id | Name  | Salary | DepartmentId |
+----+-------+--------+--------------+
| 1  | Joe   | 70000  | 1            |
| 2  | Jim   | 90000  | 1            |
| 3  | Henry | 80000  | 2            |
| 4  | Sam   | 60000  | 2            |
| 5  | Max   | 90000  | 1            |
+----+-------+--------+--------------+

The Department table holds all departments of the company.

+----+----------+
| Id | Name     |
+----+----------+
| 1  | IT       |
| 2  | Sales    |
+----+----------+

Write a SQL query to find employees who have the highest salary in each of the departments. For the above tables, your SQL query should return the following rows (order of rows does not matter).

+------------+----------+--------+
| Department | Employee | Salary |
+------------+----------+--------+
| IT         | Max      | 90000  |
| IT         | Jim      | 90000  |
| Sales      | Henry    | 80000  |
+------------+----------+--------+

Explanation:

Max and Jim both have the highest salary in the IT department and Henry has the highest salary in the Sales department.
Accepted
79,335
Submissions
250,355
}}}

{{{
/* Write your PL/SQL query statement below */
select department, employee, salary  from (
select a.departmentid, b.name department, a.name employee, a.salary, dense_rank() over(partition by a.departmentid order by a.salary desc) rank
from employee a, department b
where a.departmentid = b.id
)
where rank = 1;


-- in oracle
select department, employee, salary  from (
select a.department_id, b.department_name department, a.first_name employee, a.salary, dense_rank() over(partition by a.department_id order by a.salary desc) rank
from employeeslc a, departmentslc b
where a.department_id = b.department_id(+)
)
where rank = 1;


}}}
https://leetcode.com/problems/department-top-three-salaries/
{{{
The Employee table holds all employees. Every employee has an Id, and there is also a column for the department Id.

+----+-------+--------+--------------+
| Id | Name  | Salary | DepartmentId |
+----+-------+--------+--------------+
| 1  | Joe   | 85000  | 1            |
| 2  | Henry | 80000  | 2            |
| 3  | Sam   | 60000  | 2            |
| 4  | Max   | 90000  | 1            |
| 5  | Janet | 69000  | 1            |
| 6  | Randy | 85000  | 1            |
| 7  | Will  | 70000  | 1            |
+----+-------+--------+--------------+

The Department table holds all departments of the company.

+----+----------+
| Id | Name     |
+----+----------+
| 1  | IT       |
| 2  | Sales    |
+----+----------+

Write a SQL query to find employees who earn the top three salaries in each of the department. For the above tables, your SQL query should return the following rows (order of rows does not matter).

+------------+----------+--------+
| Department | Employee | Salary |
+------------+----------+--------+
| IT         | Max      | 90000  |
| IT         | Randy    | 85000  |
| IT         | Joe      | 85000  |
| IT         | Will     | 70000  |
| Sales      | Henry    | 80000  |
| Sales      | Sam      | 60000  |
+------------+----------+--------+

Explanation:

In IT department, Max earns the highest salary, both Randy and Joe earn the second highest salary, and Will earns the third highest salary. There are only two employees in the Sales department, Henry earns the highest salary while Sam earns the second highest salary.
Accepted
54,931
Submissions
189,029
}}}

{{{
select department, employee, salary from 
(
select b.name department, a.name employee, a.salary salary, dense_rank() over(partition by b.id order by a.salary desc) drank
from employee a, department b
where a.departmentid = b.id) a
where a.drank <= 3;


select * from employees;
select * from departments;


select b.department_name department, a.first_name employee, a.salary salary
from employees a, departments b
where a.department_id = b.department_id;


select department, employee, salary from 
(
select b.department_name department, a.first_name employee, a.salary salary, dense_rank() over(partition by b.department_name order by a.salary desc) drank
from employees a, departments b
where a.department_id = b.department_id) a
where a.drank <= 3;


}}}
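The dense_rank solution above can be checked against SQLite (3.25+ for window functions) with the problem's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employee(Id INT, Name TEXT, Salary INT, DepartmentId INT);
CREATE TABLE Department(Id INT, Name TEXT);
INSERT INTO Employee VALUES (1,'Joe',85000,1),(2,'Henry',80000,2),
 (3,'Sam',60000,2),(4,'Max',90000,1),(5,'Janet',69000,1),
 (6,'Randy',85000,1),(7,'Will',70000,1);
INSERT INTO Department VALUES (1,'IT'),(2,'Sales');
""")

# Same DENSE_RANK approach as the solution above
rows = conn.execute("""
SELECT Department, Employee, Salary FROM (
  SELECT d.Name AS Department, e.Name AS Employee, e.Salary AS Salary,
         DENSE_RANK() OVER (PARTITION BY d.Id ORDER BY e.Salary DESC) AS drank
  FROM Employee e JOIN Department d ON e.DepartmentId = d.Id
)
WHERE drank <= 3
""").fetchall()
print(sorted(rows))  # 6 rows; Janet (4th highest in IT) is excluded
```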
https://docs.oracle.com/en/database/oracle/oracle-database/18/newft/new-features.html#GUID-04A4834D-848F-44D5-8C34-36237D40F194

https://docs.oracle.com/en/database/oracle/oracle-database/19/newft/new-features.html#GUID-06A15128-1172-48E5-8493-CD670B9E57DC


! issues 


!! upgrade 
https://jonathanlewis.wordpress.com/2019/04/08/describe-upgrade/


!! 19c RAC limitations / licensing 
<<<
We know that a license of Oracle Database Standard Edition (DB SE) includes into it clustering services with Oracle Real Application Clusters (RAC) as a standard feature. Oracle RAC is not included in the Standard Edition of releases prior to Oracle Database 10g, nor is it an available option with those earlier releases. For Oracle DB SE that is no longer available on the price list, the free feature of RAC could be used to cluster up to a maximum of 4 sockets for eligible versions of DB SE.

For customers that are using Oracle DB SE, Oracle has now announced the de-support of RAC with Oracle DB SE 19c. If a customer attempts to upgrade to Oracle DB 19c, they will have 2 options (upgrade paths) to choose from:

OPTION 1:  Upgrade to DB EE, on which RAC 19c is an extra-add on, chargeable option (as opposed to standard feature with DB SE). Here, a customer will upgrade from Oracle RAC Standard Edition (SE) to Oracle RAC Enterprise Edition (EE). Note: If customer attempts to install RAC 19c Database using Standard Edition, the Oracle Universal Installer will prevent the installation.

OR

OPTION 2:  Convert Oracle RAC Standard Edition to  a Single Instance (Non-RAC) Standard Edition


There is another consideration. Most real life requirements are for business critical HA, that is Active Passive. If this is the real requirement then you can also use Clusterware from Oracle which comes included at no charge when you buy Oracle Linux support. If you are using Red Hat you don’t have to re install the OS. Oracle will just take over supporting your current Red Hat OS and you get to use Clusterware for free. Best part is that Oracle Linux is lower in price to buy than Red Hat. Over all a much much lower solution cost. Many Oracle customers are choosing this option.
<<<
* Auto STS Capture Task	
https://mikedietrichde.com/2020/05/28/do-you-love-unexpected-surprises-sys_auto_sts-in-oracle-19-7-0/
<<<
As far as I can see, the starting point for this is Bug 30001331 - CAPTURE SQL STATEMENTS INTO STS FOR PLAN STABILITY. It directed me to Bug 30260530 - CONTENT INCLUSION OF 30001331 IN DATABASE RU 19.7.0.0.0. So this seem to be present since 19.7.0.  And the capture into it happens by default.
<<<
http://www.evernote.com/shard/s48/sh/1a9c1779-94ec-4e5a-a26f-ba92ea08988e/3bb10603e76f4fb346d7df4328882dcd

Also check out this thread at oracle-l for options on 10GbE on V2 http://www.freelists.org/post/oracle-l/Exadata-V2-Compute-Node-10GigE-PCI-card-installation







{{{
create table parallel_t1(c1 int, c2 char(100));

insert into parallel_t1
select level, 'x'
from dual
connect by level <= 8000
;

commit;


alter system set db_file_multiblock_read_count=128;
*._db_block_prefetch_limit=0
*._db_block_prefetch_quota=0
*._db_file_noncontig_mblock_read_count=0

alter system flush buffer_cache;


-- generate one parallel query
select count(*) from parallel_t1;


16:28:36 SYS@orcl> shutdown abort
ORACLE instance shut down.
16:29:21 SYS@orcl> startup pfile='/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/initorcl.ora'
ORACLE instance started.

Total System Global Area  456146944 bytes
Fixed Size                  1344840 bytes
Variable Size             348129976 bytes
Database Buffers          100663296 bytes
Redo Buffers                6008832 bytes
Database mounted.
Database opened.
16:29:33 SYS@orcl> alter system flush buffer_cache;

System altered.

16:29:38 SYS@orcl> show parameter db_file_multi

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_multiblock_read_count        integer     128
16:29:47 SYS@orcl>
16:29:47 SYS@orcl> set lines 300
16:29:51 SYS@orcl> col "Parameter" FOR a40
16:29:51 SYS@orcl> col "Session Value" FOR a20
16:29:51 SYS@orcl> col "Instance Value" FOR a20
16:29:51 SYS@orcl> col "Description" FOR a50
16:29:51 SYS@orcl> SELECT a.ksppinm "Parameter", b.ksppstvl "Session Value", c.ksppstvl "Instance Value", a.ksppdesc "Description"
16:29:51   2  FROM x$ksppi a, x$ksppcv b, x$ksppsv c
16:29:51   3  WHERE a.indx = b.indx AND a.indx = c.indx
16:29:51   4  AND substr(ksppinm,1,1)='_'
16:29:51   5  AND a.ksppinm like '%&parameter%'
16:29:51   6  /
Enter value for parameter: read_count

Parameter                                Session Value        Instance Value       Description
---------------------------------------- -------------------- -------------------- --------------------------------------------------
_db_file_exec_read_count                 128                  128                  multiblock read count for regular clients
_db_file_optimizer_read_count            128                  128                  multiblock read count for regular clients
_db_file_noncontig_mblock_read_count     0                    0                    number of noncontiguous db blocks to be prefetched
_sort_multiblock_read_count              2                    2                    multi-block read count for sort

16:29:54 SYS@orcl>
16:29:54 SYS@orcl> @mystat

628 rows created.


SNAP_DATE_END
-------------------
2014-09-08 16:29:57


SNAP_DATE_BEGIN
-------------------



no rows selected


no rows selected


0 rows deleted.

16:29:57 SYS@orcl> select count(*) from parallel_t1;

  COUNT(*)
----------
      8000

16:30:03 SYS@orcl> @mystat

628 rows created.


SNAP_DATE_END
-------------------
2014-09-08 16:30:05


SNAP_DATE_BEGIN
-------------------
2014-09-08 16:29:57


      Difference Statistics Name
---------------- --------------------------------------------------------------
               2 CPU used by this session
               4 CPU used when call started
               3 DB time
             628 HSC Heap Segment Block Changes
              10 SQL*Net roundtrips to/from client
              80 buffer is not pinned count
           3,225 bytes received via SQL*Net from client
           2,308 bytes sent via SQL*Net to client
              15 calls to get snapshot scn: kcmgss
               1 calls to kcmgas
              32 calls to kcmgcs
       1,097,728 cell physical IO interconnect bytes
               4 cluster key scan block gets
               4 cluster key scans
             672 consistent changes
             250 consistent gets
              12 consistent gets - examination
             250 consistent gets from cache
             211 consistent gets from cache (fastpath)
               1 cursor authentications
           1,307 db block changes
             703 db block gets
             703 db block gets from cache
              10 db block gets from cache (fastpath)
              18 enqueue releases
              19 enqueue requests
              14 execute count
             530 file io wait time
             149 free buffer requested
               5 index fetch by key
               2 index scans kdiixs1
             218 no work - consistent read gets
              42 non-idle wait count
              19 opened cursors cumulative
               5 parse count (failures)
              12 parse count (hard)
              19 parse count (total)
               1 parse time elapsed
              32 physical read IO requests
       1,097,728 physical read bytes
              32 physical read total IO requests
       1,097,728 physical read total bytes
             134 physical reads
             134 physical reads cache
             102 physical reads cache prefetch
              56 recursive calls
             629 redo entries
          88,372 redo size
             953 session logical reads
               3 shared hash latch upgrades - no wait
               3 sorts (memory)
               2 sorts (rows)
               5 sql area purged
               1 table fetch by rowid
             211 table scan blocks gotten
          13,560 table scan rows gotten
               4 table scans (short tables)
          42,700 undo change vector size
              17 user calls
               3 workarea executions - optimal
               4 workarea memory allocated

61 rows selected.


SNAP_DATE_BEGIN     SNAP_DATE_END
------------------- -------------------
2014-09-08 16:29:57 2014-09-08 16:30:05


1256 rows deleted.

16:30:05 SYS@orcl> set lines 300
16:30:38 SYS@orcl> col "Parameter" FOR a40
16:30:38 SYS@orcl> col "Session Value" FOR a20
16:30:38 SYS@orcl> col "Instance Value" FOR a20
16:30:38 SYS@orcl> col "Description" FOR a50
16:30:38 SYS@orcl> SELECT a.ksppinm "Parameter", b.ksppstvl "Session Value", c.ksppstvl "Instance Value", a.ksppdesc "Description"
16:30:38   2  FROM x$ksppi a, x$ksppcv b, x$ksppsv c
16:30:38   3  WHERE a.indx = b.indx AND a.indx = c.indx
16:30:38   4  AND substr(ksppinm,1,1)='_'
16:30:38   5  AND a.ksppinm like '%&parameter%'
16:30:38   6  /
Enter value for parameter: prefetch

Parameter                                Session Value        Instance Value       Description
---------------------------------------- -------------------- -------------------- --------------------------------------------------
_db_block_prefetch_quota                 0                    0                    Prefetch quota as a percent of cache size
_db_block_prefetch_limit                 0                    0                    Prefetch limit in blocks


}}}
{{{

-- CREATE THE JOB 
-- 1min interval --   repeat_interval => 'FREQ=MINUTELY;BYSECOND=0',
-- 2mins interval -- repeat_interval => 'FREQ=MINUTELY;INTERVAL=2;BYSECOND=0',
-- 10secs interval -- repeat_interval => 'FREQ=SECONDLY;INTERVAL=10',

BEGIN
    SYS.DBMS_SCHEDULER.CREATE_JOB (
            job_name => '"SYSTEM"."AWR_1MIN_SNAP"',
            job_type => 'PLSQL_BLOCK',
            job_action => 'BEGIN
dbms_workload_repository.create_snapshot;
END;',
            number_of_arguments => 0,
            start_date => SYSTIMESTAMP,
            repeat_interval => 'FREQ=MINUTELY;BYSECOND=0',
            end_date => NULL,
            job_class => '"SYS"."DEFAULT_JOB_CLASS"',
            enabled => FALSE,
            auto_drop => FALSE,
            comments => 'AWR_1MIN_SNAP',
            credential_name => NULL,
            destination_name => NULL);

    SYS.DBMS_SCHEDULER.SET_ATTRIBUTE( 
             name => '"SYSTEM"."AWR_1MIN_SNAP"', 
             attribute => 'logging_level', value => DBMS_SCHEDULER.LOGGING_OFF);
          
    SYS.DBMS_SCHEDULER.enable(
             name => '"SYSTEM"."AWR_1MIN_SNAP"');

END; 
/


-- ENABLE JOB
BEGIN
    SYS.DBMS_SCHEDULER.enable(
             name => '"SYSTEM"."AWR_1MIN_SNAP"');
END;
/   


-- RUN JOB
BEGIN
	SYS.DBMS_SCHEDULER.run_job('"SYSTEM"."AWR_1MIN_SNAP"');
END;
/


-- DISABLE JOB
BEGIN
    SYS.DBMS_SCHEDULER.disable(
             name => '"SYSTEM"."AWR_1MIN_SNAP"');
END;
/   


-- DROP JOB
BEGIN
    SYS.DBMS_SCHEDULER.DROP_JOB(job_name => '"SYSTEM"."AWR_1MIN_SNAP"',
                                defer => false,
                                force => true);
END;
/




-- MONITOR JOB
SELECT * FROM DBA_SCHEDULER_JOB_LOG WHERE job_name = 'AWR_1MIN_SNAP';

col JOB_NAME format a15
col START_DATE format a25
col LAST_START_DATE format a25
col NEXT_RUN_DATE format a25
SELECT job_name, enabled, start_date, last_start_date, next_run_date FROM DBA_SCHEDULER_JOBS WHERE job_name = 'AWR_1MIN_SNAP';

-- AWR get recent snapshot
select * from 
(SELECT s0.instance_number, s0.snap_id, s0.startup_time,
  TO_CHAR(s0.END_INTERVAL_TIME,'YYYY-Mon-DD HH24:MI:SS') snap_start,
  TO_CHAR(s1.END_INTERVAL_TIME,'YYYY-Mon-DD HH24:MI:SS') snap_end,
  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) ela_min
FROM dba_hist_snapshot s0,
  dba_hist_snapshot s1
WHERE s1.snap_id           = s0.snap_id + 1
ORDER BY snap_id DESC)
where rownum < 11;

}}}
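The long EXTRACT expression above just converts the interval between two snapshot end times into minutes; the same computation in Python (sample timestamps are made up):

```python
from datetime import datetime

def ela_min(start, end):
    """Elapsed minutes between two snapshot end times, like the EXTRACT sum."""
    return round((end - start).total_seconds() / 60, 2)

s0 = datetime(2014, 9, 8, 16, 29, 57)
s1 = datetime(2014, 9, 8, 16, 31, 27)
print(ela_min(s0, s1))  # 1.5
```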




.
A short video about it, worth watching whenever you get some time; only 12 minutes:
https://www.ansible.com/quick-start-video


https://www.doag.org/formes/pubfiles/7375105/2015-K-INF-Frits_Hoogland-Automating__DBA__tasks_with_Ansible-Praesentation.pdf
https://fritshoogland.wordpress.com/2014/09/14/using-ansible-for-executing-oracle-dba-tasks/

https://learnxinyminutes.com/docs/ansible/

{{{
oracle@localhost.localdomain:/u01/oracle:orcl
$ s1

SQL*Plus: Release 12.1.0.1.0 Production on Tue Dec 16 00:53:22 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

00:53:23 SYS@orcl>  select name, cdb, con_id from v$database;

NAME      CDB     CON_ID
--------- --- ----------
ORCL      YES          0

00:53:23 SYS@orcl> select INSTANCE_NAME, STATUS, CON_ID from v$instance;

INSTANCE_NAME    STATUS           CON_ID
---------------- ------------ ----------
orcl             OPEN                  0

00:53:39 SYS@orcl> col name format A20
00:54:24 SYS@orcl> select name, con_id from v$services;

NAME                     CON_ID
-------------------- ----------
pdb1                          3
orclXDB                       1
orcl                          1
SYS$BACKGROUND                1
SYS$USERS                     1

00:54:30 SYS@orcl> select CON_ID, NAME, OPEN_MODE from v$pdbs;

    CON_ID NAME                 OPEN_MODE
---------- -------------------- ----------
         2 PDB$SEED             READ ONLY
         3 PDB1                 READ WRITE

00:57:49 SYS@orcl> show con_name

CON_NAME
------------------------------
CDB$ROOT
00:58:19 SYS@orcl> show con_id

CON_ID
------------------------------
1
00:58:25 SYS@orcl> SELECT sys_context('userenv','CON_NAME') from dual;

SYS_CONTEXT('USERENV','CON_NAME')
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
CDB$ROOT

00:58:36 SYS@orcl> SELECT sys_context('userenv','CON_ID') from dual;

SYS_CONTEXT('USERENV','CON_ID')
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1

00:58:44 SYS@orcl> col PDB_NAME format a8
00:59:43 SYS@orcl> col CON_ID format 99
00:59:51 SYS@orcl> select PDB_ID, PDB_NAME, DBID, GUID, CON_ID from cdb_pdbs;

    PDB_ID PDB_NAME       DBID GUID                             CON_ID
---------- -------- ---------- -------------------------------- ------
         2 PDB$SEED 4080030308 F081641BB43F0F7DE045000000000001      1
         3 PDB1     3345156736 F0832BAF14721281E045000000000001      1

01:00:07 SYS@orcl> col MEMBER format A40
01:00:42 SYS@orcl> select GROUP#, CON_ID, MEMBER from v$logfile;

    GROUP# CON_ID MEMBER
---------- ------ ----------------------------------------
         3      0 /u01/app/oracle/oradata/ORCL/onlinelog/o
                  1_mf_3_9fxn1pmn_.log

         3      0 /u01/app/oracle/fast_recovery_area/ORCL/
                  onlinelog/o1_mf_3_9fxn1por_.log

         2      0 /u01/app/oracle/oradata/ORCL/onlinelog/o
                  1_mf_2_9fxn1lmy_.log

         2      0 /u01/app/oracle/fast_recovery_area/ORCL/
                  onlinelog/o1_mf_2_9fxn1lox_.log

    GROUP# CON_ID MEMBER
---------- ------ ----------------------------------------

         1      0 /u01/app/oracle/oradata/ORCL/onlinelog/o
                  1_mf_1_9fxn1dq4_.log

         1      0 /u01/app/oracle/fast_recovery_area/ORCL/
                  onlinelog/o1_mf_1_9fxn1dsx_.log


6 rows selected.

01:00:49 SYS@orcl> col NAME format A60
01:01:28 SYS@orcl> select NAME , CON_ID from v$controlfile;

NAME                                                         CON_ID
------------------------------------------------------------ ------
/u01/app/oracle/oradata/ORCL/controlfile/o1_mf_9fxn1csd_.ctl      0
/u01/app/oracle/fast_recovery_area/ORCL/controlfile/o1_mf_9f      0
xn1d0k_.ctl


01:01:35 SYS@orcl> col file_name format A50
01:02:01 SYS@orcl> col tablespace_name format A8
01:02:10 SYS@orcl> col file_id format 9999
01:02:18 SYS@orcl> col con_id format 999
01:02:26 SYS@orcl> select FILE_NAME, TABLESPACE_NAME, FILE_ID, con_id from cdb_data_files order by con_id ;

FILE_NAME                                          TABLESPA FILE_ID CON_ID
-------------------------------------------------- -------- ------- ------
/u01/app/oracle/oradata/ORCL/datafile/o1_mf_system SYSTEM         1      1
_9fxmx6s1_.dbf

/u01/app/oracle/oradata/ORCL/datafile/o1_mf_sysaux SYSAUX         3      1
_9fxmvhl3_.dbf

/u01/app/oracle/oradata/ORCL/datafile/o1_mf_users_ USERS          6      1
9fxn0t8s_.dbf

/u01/app/oracle/oradata/ORCL/datafile/o1_mf_undotb UNDOTBS1       4      1
s1_9fxn0vgg_.dbf

FILE_NAME                                          TABLESPA FILE_ID CON_ID
-------------------------------------------------- -------- ------- ------

/u01/app/oracle/oradata/ORCL/datafile/o1_mf_system SYSTEM         5      2
_9fxn22po_.dbf

/u01/app/oracle/oradata/ORCL/datafile/o1_mf_sysaux SYSAUX         7      2
_9fxn22p3_.dbf

/u01/app/oracle/oradata/ORCL/F0832BAF14721281E0450 USERS         13      3
00000000001/datafile/o1_mf_users_9fxvoh6n_.dbf

/u01/app/oracle/oradata/ORCL/F0832BAF14721281E0450 SYSAUX        12      3

FILE_NAME                                          TABLESPA FILE_ID CON_ID
-------------------------------------------------- -------- ------- ------
00000000001/datafile/o1_mf_sysaux_9fxvnjdl_.dbf

/u01/app/oracle/oradata/ORCL/F0832BAF14721281E0450 APEX_226      14      3
00000000001/datafile/o1_mf_apex_226_9gfgd96o_.dbf  45286309
                                                   61551

/u01/app/oracle/oradata/ORCL/F0832BAF14721281E0450 SYSTEM        11      3
00000000001/datafile/o1_mf_system_9fxvnjdq_.dbf


10 rows selected.

01:02:40 SYS@orcl> col file_name format A42
01:03:49 SYS@orcl> select FILE_NAME, TABLESPACE_NAME, FILE_ID from dba_data_files;

FILE_NAME                                  TABLESPA FILE_ID
------------------------------------------ -------- -------
/u01/app/oracle/oradata/ORCL/datafile/o1_m SYSTEM         1
f_system_9fxmx6s1_.dbf

/u01/app/oracle/oradata/ORCL/datafile/o1_m SYSAUX         3
f_sysaux_9fxmvhl3_.dbf

/u01/app/oracle/oradata/ORCL/datafile/o1_m USERS          6
f_users_9fxn0t8s_.dbf

/u01/app/oracle/oradata/ORCL/datafile/o1_m UNDOTBS1       4
f_undotbs1_9fxn0vgg_.dbf

FILE_NAME                                  TABLESPA FILE_ID
------------------------------------------ -------- -------


01:03:56 SYS@orcl> col NAME format A12
01:07:23 SYS@orcl> select FILE#, ts.name, ts.ts#, ts.con_id
01:07:24   2  from v$datafile d, v$tablespace ts
01:07:30   3  where d.ts#=ts.ts#
01:07:39   4  and d.con_id=ts.con_id
01:07:46   5  order by 4,3;

     FILE# NAME                TS# CON_ID
---------- ------------ ---------- ------
         1 SYSTEM                0      1
         3 SYSAUX                1      1
         4 UNDOTBS1              2      1
         6 USERS                 4      1
         5 SYSTEM                0      2
         7 SYSAUX                1      2
        11 SYSTEM                0      3
        12 SYSAUX                1      3
        13 USERS                 3      3
        14 APEX_2264528          4      3
           630961551

     FILE# NAME                TS# CON_ID
---------- ------------ ---------- ------


10 rows selected.

01:07:52 SYS@orcl> col file_name format A47
01:08:23 SYS@orcl> select FILE_NAME, TABLESPACE_NAME, FILE_ID
01:08:30   2  from cdb_temp_files;

FILE_NAME                                       TABLESPA FILE_ID
----------------------------------------------- -------- -------
/u01/app/oracle/oradata/ORCL/datafile/o1_mf_tem TEMP           1
p_9fxn206l_.tmp

/u01/app/oracle/oradata/ORCL/F0832BAF14721281E0 TEMP           3
45000000000001/datafile/o1_mf_temp_9fxvnznp_.db
f

/u01/app/oracle/oradata/ORCL/datafile/pdbseed_t TEMP           2
emp01.dbf


01:08:36 SYS@orcl> col username format A22
01:09:09 SYS@orcl> select username, common, con_id from cdb_users
01:09:17   2  where username ='SYSTEM';

USERNAME               COM CON_ID
---------------------- --- ------
SYSTEM                 YES      1
SYSTEM                 YES      3
SYSTEM                 YES      2

01:09:22 SYS@orcl> select distinct username from cdb_users
01:09:37   2  where common ='YES';

USERNAME
----------------------
SPATIAL_WFS_ADMIN_USR
OUTLN
CTXSYS
SYSBACKUP
APEX_REST_PUBLIC_USER
ORACLE_OCM
APEX_PUBLIC_USER
MDDATA
GSMADMIN_INTERNAL
SYSDG
ORDDATA

USERNAME
----------------------
APEX_040200
DVF
MDSYS
GSMUSER
FLOWS_FILES
AUDSYS
DVSYS
OJVMSYS
APPQOSSYS
SI_INFORMTN_SCHEMA
ANONYMOUS

USERNAME
----------------------
LBACSYS
WMSYS
DIP
SYSKM
XS$NULL
OLAPSYS
SPATIAL_CSW_ADMIN_USR
APEX_LISTENER
SYSTEM
ORDPLUGINS
DBSNMP

USERNAME
----------------------
ORDSYS
XDB
GSMCATUSER
SYS

37 rows selected.

01:09:43 SYS@orcl> select distinct username, con_id from cdb_users
01:10:07   2  where common ='NO';

USERNAME               CON_ID
---------------------- ------
HR                          3
OE                          3
ADMIN                       3
PMUSER                      3
OBE                         3

01:10:26 SYS@orcl> select username, con_id from cdb_users
01:10:51   2  where common ='NO';

USERNAME               CON_ID
---------------------- ------
PMUSER                      3
HR                          3
ADMIN                       3
OE                          3
OBE                         3

01:10:59 SYS@orcl> col role format A30
01:11:34 SYS@orcl> select role, common, con_id from cdb_roles;

ROLE                           COM CON_ID
------------------------------ --- ------
CONNECT                        YES      1
RESOURCE                       YES      1
DBA                            YES      1
AUDIT_ADMIN                    YES      1
AUDIT_VIEWER                   YES      1
SELECT_CATALOG_ROLE            YES      1
EXECUTE_CATALOG_ROLE           YES      1
DELETE_CATALOG_ROLE            YES      1
CAPTURE_ADMIN                  YES      1
EXP_FULL_DATABASE              YES      1
IMP_FULL_DATABASE              YES      1

... output snipped ...

ROLE                           COM CON_ID
------------------------------ --- ------
DV_PATCH_ADMIN                 YES      2
DV_STREAMS_ADMIN               YES      2
DV_GOLDENGATE_ADMIN            YES      2
DV_XSTREAM_ADMIN               YES      2
DV_GOLDENGATE_REDO_ACCESS      YES      2
DV_AUDIT_CLEANUP               YES      2
DV_DATAPUMP_NETWORK_LINK       YES      2
DV_REALM_RESOURCE              YES      2
DV_REALM_OWNER                 YES      2

251 rows selected.

01:11:40 SYS@orcl> desc sys.system_privilege_map
 Name                                                                                                                                                  Null?    Type
 ----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
 PRIVILEGE                                                                                                                                             NOT NULL NUMBER
 NAME                                                                                                                                                  NOT NULL VARCHAR2(40)
 PROPERTY                                                                                                                                              NOT NULL NUMBER

01:12:22 SYS@orcl> desc sys.table_privilege_map
 Name                                                                                                                                                  Null?    Type
 ----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
 PRIVILEGE                                                                                                                                             NOT NULL NUMBER
 NAME                                                                                                                                                  NOT NULL VARCHAR2(40)

01:12:30 SYS@orcl> desc CDB_SYS_PRIVS
 Name                                                                                                                                                  Null?    Type
 ----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
 GRANTEE                                                                                                                                                        VARCHAR2(128)
 PRIVILEGE                                                                                                                                                      VARCHAR2(40)
 ADMIN_OPTION                                                                                                                                                   VARCHAR2(3)
 COMMON                                                                                                                                                         VARCHAR2(3)
 CON_ID                                                                                                                                                         NUMBER

01:13:07 SYS@orcl> desc CDB_TAB_PRIVS
 Name                                                                                                                                                  Null?    Type
 ----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
 GRANTEE                                                                                                                                                        VARCHAR2(128)
 OWNER                                                                                                                                                          VARCHAR2(128)
 TABLE_NAME                                                                                                                                                     VARCHAR2(128)
 GRANTOR                                                                                                                                                        VARCHAR2(128)
 PRIVILEGE                                                                                                                                                      VARCHAR2(40)
 GRANTABLE                                                                                                                                                      VARCHAR2(3)
 HIERARCHY                                                                                                                                                      VARCHAR2(3)
 COMMON                                                                                                                                                         VARCHAR2(3)
 TYPE                                                                                                                                                           VARCHAR2(24)
 CON_ID                                                                                                                                                         NUMBER

01:13:16 SYS@orcl> col grantee format A10
01:14:02 SYS@orcl> col granted_role format A28
01:14:09 SYS@orcl> select grantee, granted_role, common, con_id
01:14:16   2  from cdb_role_privs
01:14:22   3  where grantee='SYSTEM';

GRANTEE    GRANTED_ROLE                 COM CON_ID
---------- ---------------------------- --- ------
SYSTEM     DBA                          YES      1
SYSTEM     AQ_ADMINISTRATOR_ROLE        YES      1
SYSTEM     DBA                          YES      2
SYSTEM     AQ_ADMINISTRATOR_ROLE        YES      2
SYSTEM     DBA                          YES      3
SYSTEM     AQ_ADMINISTRATOR_ROLE        YES      3

6 rows selected.

01:14:29 SYS@orcl>

}}}
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/release-notes/content/upgrading_parent.html
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/release-notes/content/known_issues.html
-- Oracle Mix - Oracle OpenWorld and Oracle Develop Suggest-a-Session
https://mix.oracle.com/oow10/faq
https://mix.oracle.com/oow10/streams


http://blogs.oracle.com/oracleopenworld/2010/06/missed_the_call_for_papers_dea.html
http://blogs.oracle.com/datawarehousing/2010/06/openworld_suggest-a-session_vo.html
http://structureddata.org/2010/07/13/oracle-openworld-2010-the-oracle-real-world-performance-group/
http://kevinclosson.wordpress.com/2010/08/26/whats-really-happening-at-openworld-2010/

BI 
http://www.rittmanmead.com/2010/09/03/rittman-mead-at-oracle-openworld-2010-san-francisco/
OCW 2010 photos by Karl Arao
http://www.flickr.com/photos/kylehailey/sets/72157625025196338/

Oracle Closed World 2010
http://www.flickr.com/photos/kylehailey/sets/72157625018583630/
-- scheduler builder username is karlara0
https://oracleus.wingateweb.com/scheduler/login.jsp


Volunteer geek work at RACSIG 9-10am Wed, Oct 5
http://en.wikibooks.org/wiki/RAC_Attack_-_Oracle_Cluster_Database_at_Home/Events

my notes ... http://www.evernote.com/shard/s48/sh/6591ce43-e00f-4b5c-ad12-b1f1547183a7/2a146737c4bfb7dab7453ba0bcdb4677

''bloggers meetup''
http://blogs.portrix-systems.de/brost/good-morning-san-francisco-5k-partner-fun-run/
http://dbakevlar.com/2011/10/oracle-open-world-2011-followup/
https://connor-mcdonald.com/2019/09/24/all-the-openworld-2019-downloads/
Data Mining for Business Analytics https://learning.oreilly.com/library/view/data-mining-for/9781119549840/
https://www.dataminingbook.com/book/python-edition
https://github.com/gedeck/dmba


Agile Data Science 2.0 https://learning.oreilly.com/library/view/agile-data-science/9781491960103/
https://gumroad.com/d/910c45fe02199287cc2ff23abcfcf821
https://github.com/rjurney/Agile_Data_Code_2





Making use of Smart Scan made the run times faster and lowered CPU utilization, so the machine can accommodate more databases.
http://www.evernote.com/shard/s48/sh/b1f43d49-1bcd-4319-b274-19a91cf338ac/f9f554d2d03b3f20db591d5e68392cbf

https://leetcode.com/problems/valid-anagram/
{{{
242. Valid Anagram
Easy

Given two strings s and t, write a function to determine if t is an anagram of s.

Example 1:

Input: s = "anagram", t = "nagaram"
Output: true

Example 2:

Input: s = "rat", t = "car"
Output: false

Note:
You may assume the string contains only lowercase alphabets.

Follow up:
What if the inputs contain unicode characters? How would you adapt your solution to such case?
}}}

{{{
class Solution:

    def isAnagram(self, s: str, t: str) -> bool:
        # normalize: drop spaces and lowercase, so 'Dog ' and 'God' compare equal
        s = s.replace(' ', '').lower()
        t = t.replace(' ', '').lower()
        # anagrams iff the sorted characters are identical
        return sorted(s) == sorted(t)

}}}
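An O(n) alternative to the sorted() approach (my addition, not from the LeetCode notes above): count characters with `collections.Counter`. This also covers the Unicode follow-up, since Counter keys on whatever characters appear.

```python
from collections import Counter

def is_anagram(s: str, t: str) -> bool:
    # normalize the same way as the sorted() version: drop spaces, lowercase
    s = s.replace(' ', '').lower()
    t = t.replace(' ', '').lower()
    # anagrams iff every character occurs the same number of times
    return Counter(s) == Counter(t)

print(is_anagram('anagram', 'nagaram'))  # True
print(is_anagram('rat', 'car'))          # False
```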
{{{
Glenn Fawcett 
http://glennfawcett.files.wordpress.com/2013/06/ciops_data_x3-2.jpg
---
It wasn’t actually SLOB, but that might be interesting.
I used a mod of my blkhammer populate script to populate a bunch of tables OLTP style to 
show how WriteBack is used. As expected, Exadata is real good on 
“db file sequential read”… in the sub picosecond range if I am not mistaken :)
---
That was just a simple OLTP style insert test that spawns a bunch of PLSQL.  Yes for sure 
there were spills to disk... But the benefit was the coalescing of blocks.  DBWR is flushing really
mostly random blocks, but the write back flash is pretty huge these days.  I was seeing average 
iosize to disk being around 800k but only about 8k to flash.
}}}
Backup and Recovery Performance and Best Practices for Exadata Cell and Oracle Exadata Database Machine  Oracle Database Release 11.2.0.2 and 11.2.0.3
http://www.oracle.com/technetwork/database/features/availability/maa-tech-wp-sundbm-backup-11202-183503.pdf

ODA (Oracle Database Appliance): HowTo Configure Multiple Public Network on GI (Grid Infrastructure) (Doc ID 1501039.1)
Data Guard: Redo Transport Services – How to use a separate network in a RAC environment. (Doc ID 1210153.1)
Data Guard Physical Standby 11.2 RAC Primary to RAC Standby using a second network (Doc ID 1349977.1)


https://blog.gruntwork.io/why-we-use-terraform-and-not-chef-puppet-ansible-saltstack-or-cloudformation-7989dad2865c


https://www.udemy.com/course/learn-devops-infrastructure-automation-with-terraform/learn/lecture/5890850#overview
https://www.udemy.com/course/building-oracle-cloud-infrastructure-using-terraform/
https://www.udemy.com/course/oracle-database-automation-using-ansible/
https://www.udemy.com/course/oracle-database-and-elk-stack-lets-do-data-visualization/
https://www.udemy.com/course/automate-file-processing-in-oracle-db-using-dbms-scheduler/


! short and sweet
https://www.linkedin.com/learning/learning-terraform-2/next-steps
https://www.udemy.com/course/learn-devops-infrastructure-automation-with-terraform/learn/lecture/5886134#overview



! OCI example stack (MuShop app)
https://oracle-quickstart.github.io/oci-cloudnative/introduction/
https://github.com/oracle-quickstart/oci-cloudnative

<<showtoc>>

! @@Create a new@@ PDB from the seed PDB
@@quickest way is via DBCA@@
DBCA options: 
* create a new PDB
* create new PDB from PDB Archive
* create PDB from PDB file set (RMAN backup and PDB XML metadata file)
<<<
1) Copies the data files from PDB$SEED data files 
2) Creates tablespaces SYSTEM, SYSAUX 
3) Creates a full catalog including metadata pointing to Oracle-supplied objects 
4) Creates common users: 
>	– Superuser SYS 
>	– SYSTEM 
5) Creates a local user (PDBA) 
> granted local PDB_DBA role 
6) Creates a new default service 
7) After PDB creation make sure TNS entry is created 
> CONNECT sys/oracle@pdb2 AS SYSDBA 
> CONNECT oracle/oracle@pdb2 
<<<

! @@Plug a non-CDB@@ in a CDB
options:
* TTS
* full export/import
* TDB (transportable database)
* DBMS_PDB package
* Clone a Remote Non-CDB (you can do it remotely)
* replication (Golden Gate)
<<<
using DBMS_PDB package below (running on the same server):
{{{

Cleanly shut down the non-CDB and start it in read-only mode.

sqlplus / as sysdba
SHUTDOWN IMMEDIATE;
STARTUP OPEN READ ONLY;

Describe the non-CDB using the DBMS_PDB.DESCRIBE procedure

BEGIN
  DBMS_PDB.DESCRIBE(
    pdb_descr_file => '/tmp/db12c.xml');
END;
/

Shutdown the non-CDB database.
SHUTDOWN IMMEDIATE;

Connect to an existing CDB and create a new PDB using the file describing the non-CDB database
CREATE PLUGGABLE DATABASE pdb4 USING '/tmp/db12c.xml'
  COPY;

ALTER SESSION SET CONTAINER=pdb4;
@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

ALTER SESSION SET CONTAINER=pdb4;
ALTER PLUGGABLE DATABASE OPEN;

08:24:03 SYS@cdb21> ALTER SESSION SET CONTAINER=pdb4;

Session altered.

08:24:23 SYS@cdb21>
08:24:24 SYS@cdb21>
08:24:24 SYS@cdb21> select name from v$datafile;

NAME
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+DATA/CDB2/DATAFILE/undotbs1.340.868397457
+DATA/CDB2/0C1D0762158FE2B6E053AA08A8C0D1F5/DATAFILE/system.391.868520259
+DATA/CDB2/0C1D0762158FE2B6E053AA08A8C0D1F5/DATAFILE/sysaux.390.868520259
+DATA/CDB2/0C1D0762158FE2B6E053AA08A8C0D1F5/DATAFILE/users.389.868520259

08:24:29 SYS@cdb21> conn / as sysdba
Connected.
08:24:41 SYS@cdb21> select name from v$datafile
08:24:45   2  ;

NAME
-----------------------------------------------------------------------------------------------------------------------------------------------------------
+DATA/CDB2/DATAFILE/system.338.868397411
+DATA/CDB2/DATAFILE/sysaux.337.868397377
+DATA/CDB2/DATAFILE/undotbs1.340.868397457
+DATA/CDB2/FD9AC20F64D244D7E043B6A9E80A2F2F/DATAFILE/system.346.868397513
+DATA/CDB2/DATAFILE/users.339.868397457
+DATA/CDB2/FD9AC20F64D244D7E043B6A9E80A2F2F/DATAFILE/sysaux.345.868397513
+DATA/CDB2/DATAFILE/undotbs2.348.868397775
+DATA/CDB2/DATAFILE/undotbs3.349.868397775
+DATA/CDB2/DATAFILE/undotbs4.350.868397775
+DATA/CDB2/0C1D0762158FE2B6E053AA08A8C0D1F5/DATAFILE/system.391.868520259
+DATA/CDB2/0C1D0762158FE2B6E053AA08A8C0D1F5/DATAFILE/sysaux.390.868520259
+DATA/CDB2/0C1D0762158FE2B6E053AA08A8C0D1F5/DATAFILE/users.389.868520259

12 rows selected.

08:24:46 SYS@cdb21> select FILE_NAME, TABLESPACE_NAME, FILE_ID, con_id from cdb_data_files order by con_id ;

FILE_NAME
-----------------------------------------------------------------------------------------------------------------------------------------------------------
TABLESPACE_NAME                   FILE_ID     CON_ID
------------------------------ ---------- ----------
+DATA/CDB2/DATAFILE/system.338.868397411
SYSTEM                                  1          1

+DATA/CDB2/DATAFILE/sysaux.337.868397377
SYSAUX                                  3          1

+DATA/CDB2/DATAFILE/undotbs1.340.868397457
UNDOTBS1                                4          1

+DATA/CDB2/DATAFILE/undotbs4.350.868397775
UNDOTBS4                               10          1

+DATA/CDB2/DATAFILE/undotbs2.348.868397775
UNDOTBS2                                8          1

+DATA/CDB2/DATAFILE/undotbs3.349.868397775
UNDOTBS3                                9          1

+DATA/CDB2/DATAFILE/users.339.868397457
USERS                                   6          1


7 rows selected.
}}}
<<<

! @@Clone a PDB@@ from another PDB
@@through SQL*Plus@@
<<<
{{{
This technique copies a source PDB from a CDB and plugs the copy into a CDB. The source
PDB is in the local CDB.
The steps to clone a PDB within the same CDB are the following:
1. In init.ora, set DB_CREATE_FILE_DEST= 'PDB3dir' (OMF) or
PDB_FILE_NAME_CONVERT= 'PDB1dir', 'PDB3dir' (non OMF).
2. Connect to the root of the CDB as a common user with CREATE PLUGGABLE DATABASE
privilege.
3. Quiesce the source PDB used to clone, using the command ALTER PLUGGABLE
DATABASE pdb1 READ ONLY after closing the PDB using the command ALTER
PLUGGABLE DATABASE CLOSE
4. Use the command CREATE PLUGGABLE DATABASE to clone the PDB pdb3 FROM pdb1.
5. Then open the new pdb3 with the command ALTER PLUGGABLE DATABASE OPEN.
If you do not use OMF, in step 4, use the command CREATE PLUGGABLE DATABASE with the
clause FILE_NAME_CONVERT=('pdb1dir','pdb3dir') to define the directory of the
source files to copy from PDB1 and the target directory for the new files of PDB3.

quick step by step

alter session set container=cdb$root;
show con_name
set db_create_file_dest

15:09:04 SYS@orcl> ALTER PLUGGABLE DATABASE pdb2 close;

Pluggable database altered.

15:09:30 SYS@orcl> ALTER PLUGGABLE DATABASE pdb2 open read only;

Pluggable database altered.

15:09:40 SYS@orcl> CREATE PLUGGABLE DATABASE PDB3 FROM PDB2;

Pluggable database created.

15:12:25 SYS@orcl> ALTER PLUGGABLE DATABASE pdb3 open;

Pluggable database altered.

15:12:58 SYS@orcl> select CON_ID, dbid, NAME, OPEN_MODE from v$pdbs;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         2 PDB$SEED                       READ ONLY
         3 PDB1                           READ WRITE
         4 PDB2                           READ ONLY
         5 PDB3                           READ WRITE

 select FILE_NAME, TABLESPACE_NAME, FILE_ID, con_id from cdb_data_files order by con_id ;

15:13:33 SYS@orcl> show parameter db_create

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      /u01/app/oracle/oradata
db_create_online_log_dest_1          string
db_create_online_log_dest_2          string
db_create_online_log_dest_3          string
db_create_online_log_dest_4          string
db_create_online_log_dest_5          string
15:15:14 SYS@orcl>
15:15:17 SYS@orcl>
15:15:17 SYS@orcl> show parameter file_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string
log_file_name_convert                string
pdb_file_name_convert                string

ALTER PLUGGABLE DATABASE pdb2 close;
ALTER PLUGGABLE DATABASE pdb2 open;

CONNECT sys/oracle@pdb3 AS SYSDBA
CONNECT oracle/oracle@pdb3 
}}}
<<<

! @@Plug an unplugged PDB@@ into another CDB
@@quickest is DBCA, just do the unplug and plug from the UI@@






! References:
http://oracle-base.com/articles/12c/multitenant-create-and-configure-pluggable-database-12cr1.php














http://kevinclosson.wordpress.com/2012/02/12/how-many-non-exadata-rac-licenses-do-you-need-to-match-exadata-performance/
{{{
kevinclosson
February 14, 2012 at 8:52 pm
Actually, Matt, I see nothing wrong with what the rep said. A single Exadata database grid host can drive a tremendous amount of storage throughput but it can only eat 3.2GB/s since there is but a single 40Gb HCA port active on each host. A single host can drive the storage grid nearly to saturation via Smart Scan…but as soon as the data flow back to the host approaches 3.2GB/s the Smart Scan will start to throttle. In fact single session (non-Parallel Query) can drive Smart Scan to well over 10GB/s in a full rack but, in that case you’d have a single foreground process on a single core of WSM-EP so there wouldn’t sufficient bandwidth to ingest much data..about 250MB/s can flow into a single session performing a Smart Scan. So the hypothetical there would be Smart Scan is churning through, let’s say, 10GB/s and Smart Scan is whittling down the payload by about 9.75GB/s through filtration and projection. Those are very close to realistic numbers I’ve just cited but I haven’t measured those sort of “atomics” in a year so I’m going by memory. Let’s say give or take 5% on my numbers.
}}}

http://forums.theregister.co.uk/forum/1/2011/12/12/ibm_vs_oracle_data_centre_optimisation/
{{{
Exadata: 2 Grids, 2 sets of roles.
>The Exadata storage nodes compress database files using a hybrid columnar algorithm so they take up less space and can be searched more quickly. They also run a chunk of the Oracle 11g code, pre-processing SQL queries on this compressed data before passing it off to the full-on 11g database nodes.
Exadata cells do not compress data. Data compression is done at load time (in the direct path) and compression (all varieties not just HCC) is code executed only on the RAC grid CPUS. Exadata users get no CPU help from the 168 cores in the storage grid when it comes to compressing data.
Exadata cells can, however, decompress HCC data (but not the other types of compressed data). I wrote "can" because cells monitor how busy they are and are constantly notified by the RAC servers about their respective CPU utilization. Since decompressing HCC data is murderously CPU-intensive the cells easily go processor-bound. At that time cells switch to "pass-through" mode shipping up to 40% of the HCC blocks to the RAC grid in compressed form. Unfortunately there are more CPUs in the storage grid than the RAC grid. There is a lot of writing on this matter on my blog and in the Expert Oracle Exadata book (Apress).
Also, while there are indeed 40GB DDR Infiniband paths to/from the RAC grid and the storage grid, there is only 3.2GB/s usable bandwidth for application payload between these grids. Therefore, the aggregate maximum data flow between the RAC grid and the cells is 25.6GB/s (3.2x8). There are 8 IB HCAs in either X2 model as well so the figure sticks for both. In the HP Oracle Database Machine days that figure was 12.8GB/s.
With a maximum of 25.6 GB/s for application payload (Oracle's iDB protocol as it is called) one has to quickly do the math to see the mandatory data reduction rate in storage. That is, if only 25.6 GB/s fits through the network between these two grids yet a full rack can scan combined HDD+FLASH at 75 GB/s then you have to write SQL that throws away at least 66% of the data that comes off disk. Now, I'll be the first to point out that 66% payload reduction from cells is common. Indeed, the cells filter (WHERE predicate) and project columns (only the cited and join columns need shipped). However, compression changes all of that.
If scanning HCC data on a full rack Exadata configuration, and that data is compressed at the commonly cited compression ratio of 10:1 then the "effective" scan rate is 750GB/s. Now use the same predicates and cite the same columns and you'll get 66% reduced payload--or 255GB/s that needs to flow over iDB. That's about 10x over-subscription of the available 25.6 GB/s iDB bandwidth. When this occurs, I/O is throttled. That is, if the filtered/projected data produced by the cells is greater than 25.6GB/s then I/O wanes. Don't expect 10x query speedup because the product only has to perform 10% the I/O it would in the non-compressed case (given a HCC compression ratio of 10:1).
That is how the product works. So long as your service levels are met, fine. Just don't expect to see 75GB/s of HCC storage throughput with complex queries because this asymmetrical MPP architecture (Exadata) cannot scale that way (for more info see: http://bit.ly/tFauDA )
}}}
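The back-of-the-envelope math above can be sketched in a few lines of Python. This is only a sketch of the arithmetic: the per-HCA 3.2 GB/s, 8 HCAs, 75 GB/s scan rate, 10:1 HCC ratio, and 66% filtering reduction are the figures quoted in the text, not measured values.

```python
# Figures quoted in the text above (not measurements).
IDB_PER_HCA_GBPS = 3.2      # usable application-payload bandwidth per IB HCA (GB/s)
HCA_COUNT = 8               # IB HCAs in an X2 full rack
SCAN_RATE_GBPS = 75.0       # combined HDD+flash scan rate, full rack (GB/s)
HCC_RATIO = 10.0            # commonly cited HCC compression ratio
FILTER_REDUCTION = 0.66     # typical payload reduction from filtration/projection

idb_total = IDB_PER_HCA_GBPS * HCA_COUNT           # 25.6 GB/s over iDB
effective_scan = SCAN_RATE_GBPS * HCC_RATIO        # 750 GB/s of decompressed data
payload = effective_scan * (1 - FILTER_REDUCTION)  # ~255 GB/s must cross iDB
oversubscription = payload / idb_total             # ~10x too much for the link
print(idb_total, effective_scan, round(payload, 1), round(oversubscription, 1))
```

So even with the usual 66% reduction from predicates and projection, the compressed case produces roughly 10x more payload than iDB can carry, which is exactly why I/O gets throttled.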


http://kevinclosson.wordpress.com/2011/11/23/mark-hurd-knows-cios-i-know-trivia-cios-may-not-care-about-either-hang-on-im-booting-my-cell-phone/#comment-37527
{{{
kevinclosson
November 28, 2011 at 7:09 pm
“I can see the shared nothing vs shared everything point in a CPU + separate storage perspective.”

…actually, I don’t fester about with the shared-disk versus shared nothing as I really don’t think it matters. It’s true that Real Application Clusters requires shared disk but that is not a scalability hindrance–so long as one works out the storage bandwidth requirements–a task that is not all that difficult with modern storage networking options. So long as ample I/O flow is plumbed into RAC it scales DW/BI workloads. It is as simple as that. On the other hand, what doesn’t scale is asymmetry. Asymmetry has never scaled as would be obvious to even the casual observer. As long as all code can run on all CPUs (symmetry) scalability is within reach. What I’m saying is that RAC actually has better scalability characteristics when running with conventional storage than with Exadata! That’s a preposterous statement to the folks who don’t actually know the technology, as well as those who are dishonest about the technology, but obvious to the rest of us. It’s simple computer science. One cannot take the code path of query processing, chop it off at the knees (filtration/projection) and offload that to some arbitrary percentage of your CPU assets and pigeon-hole all the rest of the code to the remaining CPUs and cross fingers.

A query cannot be equally CPU-intensive in all query code all the time. There is natural ebb and tide. If the query plan is at the point of intensive join processing it is not beneficial to have over fifty percent of the CPUs in the rack unable to process join code (as is the case with Exadata).

To address this sort of ebb/tide imbalance Oracle has “released” a “feature” referred to as “passthrough” where Exadata cells stop doing their value-add (filtration and HCC decompression) for up to about 40% of the data flowing off storage when cells get too busy (CPU-wise). At that point they just send unfiltered, compressed data to the RAC grid. The RAC grid, unfortunately, has fewer CPU cores than the storage grid and has brutally CPU-intensive work of its own to do (table join, sort, agg). “Passthrough” is discussed in the Expert Oracle Exadata (Apress) book.

This passthrough feature does allow water to find its level, as it were. When Exadata falls back to passthrough mode the whole configuration does indeed utilize all CPU and since idle CPU doesn’t do well to increase query processing performance this is a good thing. However, if Exadata cells stop doing the “Secret Sauce” (a.k.a., Offload Processing) when they get busy then why not just build a really large database grid (e.g., with the CPU count of all servers in an Exadata rack) and feed it with conventional storage? That way all CPU power is “in the right place” all the time. Well, the answer to that is clearly RAC licensing. Very few folks can afford to license enough cores to run a large enough RAC grid to make any of this matter. Instead they divert some monies that could go for a bigger database grid into “intelligent storage” and hope for the best.
}}}



http://www.snia.org/sites/default/education/tutorials/2008/fall/networking/DrorGoldenberg-Fabric_Consolidation_InfiniBand.pdf
3.2 GB/s unidirectional theoretical limit; 3.2 GB/s measured, due to server IO limitations


http://www.it-einkauf.de/images/PDF/677C777.pdf
{{{
INFINIBAND PHYSICAL-LAYER CHARACTERISTICS 
The InfiniBand physical-layer specification supports three data rates, designated 1X, 4X, and 12X, over both copper and fiber optic media. 
The base data rate, 1X single data rate (SDR), is clocked at 2.5 Gbps and is transmitted over two pairs of wires—transmit and receive—and 
yields an effective data rate of 2 Gbps full duplex (2 Gbps transmit, 2 Gbps receive). The 25 percent difference between data rate and  
clock rate is due to 8B/10B line encoding that dictates that for every 8 bits of data transmitted, an additional 2 bits of transmission 
overhead is incurred. 
}}}

infiniband cabling issues
{{{
InfiniBand cable presents a challenge within this environment because the cables are considerably thicker, heavier, and shorter in length 
to mitigate the effects of cross-talk and signal attenuation and achieve low bit error rates (BERs). To assure the operational integrity and 
performance of the HPC cluster, it is critically important to maintain the correct bend radius, or the integrity of the cable can be 
compromised such that the effects of cross-talk introduce unacceptable BERs. 
To address these issues, it is essential to thoroughly plan the InfiniBand implementation and provide a good cable management solution 
that enables easy expansion and replacement of failed cables and hardware. This is especially important when InfiniBand 12X or DDR 
technologies are being deployed because the high transmission rates are less tolerant to poor installation practices. 
}}}

http://www.redbooks.ibm.com/abstracts/tips0456.html
A single PCI Express serial link is a dual-simplex connection using two pairs of wires, one pair for transmit and one pair for receive, and can only transmit one bit per cycle. Although this sounds limiting, it can transmit at the extremely high speed of 2.5 Gbps, which equates to a burst mode of 320 MBps on a single connection. These two pairs of wires are called a lane.
{{{
Table: PCI Express maximum transfer rate
Lane width	Clock speed	Throughput (duplex, bits)	Throughput (duplex, bytes)	Initial expected uses
x1	2.5 GHz	5 Gbps	400 MBps	Slots, Gigabit Ethernet
x2	2.5 GHz	10 Gbps	800 MBps	
x4	2.5 GHz	20 Gbps	1.6 GBps	Slots, 10 Gigabit Ethernet, SCSI, SAS
x8	2.5 GHz	40 Gbps	3.2 GBps	
x16	2.5 GHz	80 Gbps	6.4 GBps	Graphics adapters
}}}
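One plausible derivation of the ~3.2 GB/s per-slot figure cited elsewhere in these notes, sketched in Python. This is my own back-of-the-envelope, not from the table above: the ~80% protocol-overhead factor applied to PCIe Gen2 x8 is an assumption, and PCIe 1.x/2.x both pay the same 8b/10b cost of 10 line bits per byte.

```python
# PCIe 1.x/2.x use 8b/10b encoding, so one transferred byte costs 10 line bits.
def pcie_data_gbps(lanes, gt_per_s):
    """Peak per-direction data rate in GB/s (10 line bits per byte)."""
    return lanes * gt_per_s / 10

gen1_x8 = pcie_data_gbps(8, 2.5)   # 2.0 GB/s per direction (Gen1 x8)
gen2_x8 = pcie_data_gbps(8, 5.0)   # 4.0 GB/s per direction (Gen2 x8)

# Assumption: roughly 80% of the peak survives TLP/packet overhead --
# one plausible source of a ~3.2 GB/s usable per-HCA-slot figure.
usable = gen2_x8 * 0.8
print(gen1_x8, gen2_x8, usable)
```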


http://www.aiotestking.com/juniper/2011/07/when-using-a-40-gbps-switch-fabric-how-much-full-duplex-bandwidth-is-available-to-each-slot/
{{{
When using a 40 Gbps switch fabric, how much full duplex bandwidth is available to each slot?
A.
1.25 Gbps
}}}

Sun Blade 6048 InfiniBand QDR Switched Network Express Module Introduction
http://docs.oracle.com/cd/E19914-01/820-6705-10/chapter1.html
{{{
IB transfer rate (maximum)	
40 Gbps (QDR) per 4x IB port for the Sun Blade X6275 server module and 20 Gbps (DDR) per 4x IB port for the Sun Blade X6270 server module. There are two 4x IB ports per server module.

1,536 Gbps aggregate throughput
}}}


''email with Kevin''
<<<
on Exadata the 3.2 is established by the PCIe slot the HCA is sitting in. I don't scrutinize QDR IB these days. It would be duplex...would have to look it up.
<<<

wikipedia
<<<
http://en.wikipedia.org/wiki/InfiniBand which notes that "The SDR connection's signalling rate is 2.5 gigabit per second (Gbit/s) in each direction per connection"
<<<

''The flash and HCA cards use PCIe x8''
http://jarneil.wordpress.com/2012/02/02/upgradingdowngrading-exadata-ilom-firmware/
http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CFkQFjAA&url=http%3A%2F%2Fhusnusensoy.files.wordpress.com%2F2010%2F10%2Foracle-exadata-v2-fast-track.pptx&ei=M4zTT_mvLcGC2AWpufi6Dw&usg=AFQjCNFMAJgvIx9QuD3513dWS9nETkeXqw
{{{
IB Switches
3 x 36-port managed switches as opposed to Exadata v1 (2+1).
2 “leaf”
1 “spine” switches
Spine switch is only available for Full Rack because it is for connecting multiple full racks side by side.
A subnet manager running on one switch discovers the topology of the network.
HCA
Each node (RAC & Storage Cell) has a PCIe x8 40 Gbit HCA with two ports
Active-Standby Intracard Bonding.
}}}

F20 PCIe Card
{{{
Not a SATA/SAS SSD drive but an x8 PCIe device providing a SATA/SAS interface.
4 Solid State Flash Disk Modules (FMods), each 24 GB in size
256 MB Cache
SuperCap Power Reserve (Energy Storage Module) provides write-back operation mode.
ESM should be enabled for optimal write performance
Should be replaced every two years.
Can be monitored using various tools like ILOM
Embedded SAS/SATA configuration will expose 16 (4 cards x 4 FMods) Linux devices.
/dev/sdn
4K sector boundary for FMods
Each FMod consists of several NAND modules; best performance is reached with multithreading (32+ threads/FMod, etc.)

}}}







{{{

How To Avoid ORA-04030/ORA-12500 In 32 Bit Windows Environment
  	Doc ID: 	Note:373602.1

How to convert a 32-bit database to 64-bit database on Linux?
  	Doc ID: 	Note:341880.1 	
  	


-- PAE/AWE
		
		Some relief may be obtained by setting the /3GB flag as well as the /PAE flag in Oracle. This at least assures that up to 2 GB of memory is available for the Large Pool, 
		the Shared Pool, the PGA, and all user threads, after the AWE_WINDOW_SIZE parameter is taken into account. However, Microsoft recommends that the /3GB flag not be set if 
		the /AWE flag is set. This is due to the fact that the total amount of RAM accessible for ALL purposes is limited to 16 GB if the /3GB flag is set. RAM above 16 GB simply 
		"disappears" from the view of the OS. For PowerEdge 6850 servers that can support up to 64 GB of RAM, a limitation to only 16 GB of RAM is unacceptable.
		
		As noted previously, the model used for extended memory access under a 32-bit operating system entails a substantial performance penalty. With a 64-bit OS, however, a flat linear memory model is used, with no need for PAE to access memory above 4 GB. Improved performance will be experienced for database SGA sizes greater than 3 GB, due to the elimination of PAE overhead.
		
		
		MAXIMUM OF 4 GB OF ADDRESSABLE MEMORY FOR THE 32 BIT ARCHITECTURE. THIS IS A MAXIMUM PER PROCESS. THAT IS, EACH PROCESS MAY ALLOCATE UP TO 4 GB OF MEMORY
		
		2GB for OS
		2GB for USER THREADS
		
		1st workaround on 4GB limit: 
			- To expand the total memory used by Oracle above 2 GB, the /3GB flag may be set in the boot.ini file.	
				With the /3GB flag set, only 1 GB is used for the OS, and 3 GB is available for all user threads, including the Oracle SGA. 
			
		2nd workaround on 4GB limit: 
			- use the PAE, Intel 32-bit processors such as the Xeon processor support PAGING ADDRESS EXTENSIONS for large memory support
				MS Windows 2000 and 2003 support PAE through ADDRESS WINDOWING EXTENSIONS (AWE). PAE/AWE may be enabled by setting the /PAE flag in the boot.ini file. 
				The �USE_INDIRECT_BUFFERS=TRUE� parameter must also be set in the Oracle initialization file. In addition, the DB_BLOCK_BUFFERS parameter must be used 
				instead of the DB_CACHE parameter in the Oracle initialization file. With this method, Windows 2000 Server and Windows Server 2003 versions can support 
				up to 8 GB of total memory.
				Windows Advanced Server and Data Center versions support up to 64 GB of addressable memory with PAE/AWE.
			- One limitation of AWE is that only the Data Buffer component of the SGA may be placed in extended memory. Threads for other 
				SGA components such as the Shared Pool and the Large Pool, as well as the PGA and all Oracle user sessions must still fit inside 
				a relatively small memory area. THERE IS AN AWE_WINDOW_SIZE REGISTRY KEY PARAMETER THAT IS USED TO SET THE SIZE OF A KIND OF "SWAP" AREA IN THE SGA. <-- swap area in SGA
				This "swap" area is used for mapping data blocks in upper memory to a lower memory location. By default, 
				this takes an additional 1 GB of low memory. This leaves only 2 GB of memory for everything other than the Buffer cache, assuming 
				the /3GB flag is set. If the /3GB flag is not set, only 1 GB of memory is available for the non-Buffer Cache components.
			- Note that the maximum addressable memory was limited to 16 GB of RAM
				Some relief may be obtained by setting the /3GB flag as well as the /PAE flag in Oracle. This at least assures that up to 2 GB of memory is available 
				for the Large Pool, the Shared Pool, the PGA, and all user threads, after the AWE_WINDOW_SIZE parameter is taken into account. However, Microsoft 
				recommends that the /3GB flag not be set if the /AWE flag is set. This is due to the fact that the total amount of RAM accessible for ALL purposes 
				is limited to 16 GB if the /3GB flag is set. RAM ABOVE 16 GB SIMPLY "DISAPPEARS" FROM THE VIEW OF THE OS. For PowerEdge 6850 servers that can support 
				up to 64 GB of RAM, a limitation to only 16 GB of RAM is unacceptable.
					This will give you (/3GB is set):
						3-4GB 	for Buffer Cache
						1GB 	for the swap area
						2GB 	for everything other than the Buffer Cache
						1GB 	for OS
					This will give you (/3GB is not set):
						3-4GB 	for Buffer Cache
						1GB 	for the swap area
						1GB 	for everything other than the Buffer Cache
						2GB 	for OS
			- Performance Tuning Corporation Benchmark:
					This will give you (/3GB is set):
						11GB 	for Buffer Cache
						.75GB 	for the swap area (AWE_MEMORY_WINDOW..minimum size that allowed the database to start)
						2.25GB 	for everything other than the Buffer Cache
						1GB 	for OS
					This will give you (/3GB is not set):
						11GB 	for Buffer Cache
						.75GB 	for the swap area (AWE_MEMORY_WINDOW..minimum size that allowed the database to start)
						1.25GB 	for everything other than the Buffer Cache
						2GB 	for OS

}}}
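The address-space budgets above can be summarized in a small Python sketch. The 4 GB virtual space, the 2 GB/3 GB user split controlled by /3GB, and the 1 GB default AWE window are the figures from the notes; the function just does the subtraction.

```python
# 32-bit Windows virtual address budget for an Oracle process, per the notes.
def low_memory_budget(three_gb_flag, awe_window_gb=1.0):
    """Return (user_space, os_space, left_for_non_buffer_cache) in GB.

    The AWE window maps upper-memory buffer cache blocks into low memory;
    whatever user space remains holds the shared pool, large pool, PGA,
    and all user threads.
    """
    user = 3.0 if three_gb_flag else 2.0   # /3GB gives user space 3 GB of the 4
    os_space = 4.0 - user
    non_buffer_cache = user - awe_window_gb
    return user, os_space, non_buffer_cache

print(low_memory_budget(True))         # (3.0, 1.0, 2.0): the /3GB breakdown above
print(low_memory_budget(False))        # (2.0, 2.0, 1.0)
print(low_memory_budget(True, 0.75))   # (3.0, 1.0, 2.25): the benchmark's 0.75 GB window
```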

Using Large Pages for Oracle on Windows 64-bit (ORA_LPENABLE) http://blog.ronnyegner-consulting.de/2010/10/19/using-large-pages-for-oracle-on-windows-64-bit-ora_lpenable/
http://www.sketchup.com/download
<<<
3D XPoint (cross point) memory, which will be sold under the name Optane
<<<

https://www.intel.com/content/www/us/en/architecture-and-technology/intel-optane-technology.html
https://www.intel.com/content/www/us/en/architecture-and-technology/optane-memory.html
https://www.computerworld.com/article/3154051/data-storage/intel-unveils-its-optane-hyperfast-memory.html
https://www.computerworld.com/article/3082658/data-storage/intel-lets-slip-roadmap-for-optane-ssds-with-1000x-performance.html






http://docs.oracle.com/cd/E11857_01/em.111/e16790/ha_strategy.htm#EMADM9613
http://www.ibm.com/developerworks/linux/library/l-4kb-sector-disks/ <-- good stuff
http://ubuntuforums.org/showthread.php?t=1854524
http://ubuntuforums.org/showthread.php?t=1685666
http://ubuntuforums.org/showthread.php?t=1768635

also see [[Get BlockSize of OS]]


''Oracle related links''
1.8.1.4 Support 4 KB Sector Disk Drives http://docs.oracle.com/cd/E11882_01/server.112/e22487/chapter1.htm#FEATURENO08747 
Planning the Block Size of Redo Log Files http://docs.oracle.com/cd/E11882_01/server.112/e25494/onlineredo002.htm#ADMIN12891
Specifying the Sector Size for Drives http://docs.oracle.com/cd/E11882_01/server.112/e18951/asmdiskgrps.htm#OSTMG10203
Microsoft support policy for 4K sector hard drives in Windows http://support.microsoft.com/kb/2510009
ATA 4 KiB sector issues https://ata.wiki.kernel.org/index.php/ATA_4_KiB_sector_issues
http://en.wikipedia.org/wiki/Advanced_format
http://martincarstenbach.wordpress.com/2013/04/29/4k-sector-size-and-grid-infrastructure-11-2-installation-gotcha/
http://flashdba.com/4k-sector-size/
http://flashdba.com/install-cookbooks/installing-oracle-database-11-2-0-3-single-instance-using-4k-sector-size/
http://flashdba.com/2013/04/12/strange-asm-behaviour-with-4k-devices/
http://flashdba.com/2013/05/08/the-most-important-thing-you-need-to-know-about-flash/
http://www.storagenewsletter.com/news/disk/217-companies-hdd-since-1956
http://www.theregister.co.uk/2013/02/04/ihs_hdd_projections/
Alert: (Fix Is Ready + Additional Steps!) : After SAN Firmware Upgrade, ASM Diskgroups ( Using ASMLIB) Cannot Be Mounted Due To ORA-15085: ASM disk "" has inconsistent sector size. [1500460.1]


Design Tradeoffs for SSD Performance http://research.cs.wisc.edu/adsl/Publications/ssd-usenix08.pdf
Enabling Enterprise Solid State Disks Performance http://repository.cmu.edu/cgi/viewcontent.cgi?article=1732&context=compsci
https://flashdba.com/?s=4kb











http://karlarao.wordpress.com/2009/12/31/50-sql-performance-optimization-scenarios/

{{{
ORACLE SQL Performance Optimization Series (1)

1. The types of ORACLE optimizer
2. Ways to access a table
3. Shared SQL statements

ORACLE SQL Performance Optimization Series (2)

4. Choose the most efficient table order in the FROM clause (rule-based optimizer only)
5. Join order in the WHERE clause
6. Avoid using '*' in the SELECT clause
7. Reduce the number of round trips to the database

ORACLE SQL Performance Optimization Series (3)

8. Use the DECODE function to reduce processing time
9. Consolidate simple, unrelated database accesses
10. Delete duplicate records
11. Replace DELETE with TRUNCATE
12. COMMIT as often as practical

ORACLE SQL Performance Optimization Series (4)

13. Counting records efficiently
14. Replace HAVING with WHERE where possible
15. Reduce the number of tables queried
16. Improve SQL efficiency with internal functions

ORACLE SQL Performance Optimization Series (5)

17. Use table aliases (Alias)
18. Replace IN with EXISTS
19. Replace NOT IN with NOT EXISTS

ORACLE SQL Performance Optimization Series (6)

20. Replace EXISTS with table joins
21. Replace DISTINCT with EXISTS
22. Identify "inefficient" SQL statements
23. Use the TKPROF tool to profile SQL performance

ORACLE SQL Performance Optimization Series (7)

24. Analyze SQL statements with EXPLAIN PLAN

ORACLE SQL Performance Optimization Series (8)

25. Use indexes to improve efficiency
26. Index operations

ORACLE SQL Performance Optimization Series (9)

27. Choosing the driving table
28. Number of equality indexes
29. Equality comparisons versus range comparisons
30. Unclear index levels

ORACLE SQL Performance Optimization Series (10)

31. Forcing index invalidation
32. Avoid calculations on indexed columns
33. Automatic index selection
34. Avoid using NOT on indexed columns
35. Replace > with >=

ORACLE SQL Performance Optimization Series (11)

36. Replace OR with UNION (for indexed columns)
37. Replace OR with IN
38. Avoid IS NULL and IS NOT NULL on indexed columns

ORACLE SQL Performance Optimization Series (12)

39. Always use the first column of a composite index
40. ORACLE internal operations
41. Replace UNION with UNION ALL (when possible)
42. Using hints (Hints)

ORACLE SQL Performance Optimization Series (13)

43. Use WHERE instead of ORDER BY
44. Avoid changing the type of indexed columns
45. WHERE clauses that need care

ORACLE SQL Performance Optimization Series (14)

46. Joins with multiple scans
47. Make the CBO use a more selective index
48. Avoid resource-intensive operations
49. GROUP BY optimization
50. Using dates
51. Use explicit cursors (CURSORs)
52. Optimizing EXPORT and IMPORT
53. Separate tables and indexes

ORACLE SQL Performance Optimization Series (15)

Must EXISTS / NOT EXISTS be more efficient than IN / NOT IN?

ORACLE SQL Performance Optimization Series (16)

Why are the results of my view-based query wrong?

ORACLE SQL Performance Optimization Series (17)

Which paging SQL is most efficient?

ORACLE SQL Performance Optimization Series (18)

Is COUNT(rowid) or COUNT(pk) more efficient?

ORACLE SQL Performance Optimization Series (19)

Implicit conversions of ORACLE data types

ORACLE SQL Performance Optimization Series (20)

Three things to watch out for when using an INDEX

ORACLE Hints (HINT) usage (Part 1) (21)

ORACLE Hints (HINT) usage (Part 2) (22)

Function-based indexes analyzed (Part 1) (23)

Function-based indexes analyzed (Part 2) (24)

How to implement efficient paging queries (25)

Ways to implement SELECT TOP N in ORACLE (26)
}}}
http://highscalability.com/blog/2013/11/25/how-to-make-an-infinitely-scalable-relational-database-manag.html
http://dimitrik.free.fr/blog/archives/2013/11/mysql-performance-over-1m-qps-with-innodb-memcached-plugin-in-mysql-57.html

Average Active Sessions (AAS) is a metric of database load. As a rule of thumb this value should not go above the CPU count; if it does, the database is either working very hard or waiting a lot for something. 

''The AAS & CPU count is used as a yardstick for a possible performance problem (I suggest reading Kyle's stuff about this):''
{{{
    if AAS < 1 
      -- Database is not blocked
    AAS ~= 0 
      -- Database basically idle
      -- Problems are in the APP not DB
    AAS < # of CPUs
      -- CPU available
      -- Database is probably not blocked
      -- Are any single sessions 100% active?
    AAS > # of CPUs
      -- Could have performance problems
    AAS >> # of CPUS
      -- There is a bottleneck
}}}
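Kyle's yardstick above can be written as a small function. One caveat on this sketch: reading ">>" as roughly 2x the CPU count is my own cutoff, not something the table states.

```python
# The AAS rule-of-thumb table as a function; thresholds follow the notes,
# except the 2x cutoff for ">>" which is an assumption.
def aas_diagnosis(aas, cpu_count):
    if aas < 0.1:
        return "database basically idle; look at the app, not the DB"
    if aas < 1:
        return "database is not blocked"
    if aas < cpu_count:
        return "CPU available; DB probably not blocked"
    if aas < 2 * cpu_count:
        return "could have performance problems"
    return "there is a bottleneck"

print(aas_diagnosis(0.05, 8))
print(aas_diagnosis(10, 8))
print(aas_diagnosis(30, 8))
```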

''AAS Formula''
--
{{{
* AAS is either dbtime/elapsed
* or count/samples
* in the case of DBA_HIST_ views the count is count*10, since only 1-in-10 samples are written out: (19751*10)/600 = 329.18
}}}
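Both formulas as a Python sketch, using the note's own worked example for the DBA_HIST_ case:

```python
# AAS is either DB time / elapsed, or ASH sample count / samples taken.
def aas_from_time(db_time_s, elapsed_s):
    """High-level AAS from the time model."""
    return db_time_s / elapsed_s

def aas_from_ash(sample_count, elapsed_s, dba_hist=False):
    """AAS from ASH: one sample per active session per second, so
    sample count / elapsed seconds. DBA_HIST_ACTIVE_SESS_HISTORY keeps
    only 1-in-10 samples, hence the x10 when reading from AWR."""
    if dba_hist:
        sample_count *= 10
    return sample_count / elapsed_s

print(round(aas_from_ash(19751, 600, dba_hist=True), 2))   # the (19751*10)/600 = 329.18 example
```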
<<showtoc>>

This Tiddler will show you an interesting new metric included in the performance graph of Enterprise Manager 11g.. which is the ''CPU Wait'' or ''CPU + CPU Wait''

a little background.. 

I've done an IO test with the intention of bringing the system down to its knees and characterizing the IO performance at that level of stress. At the time I wanted to know the IO performance of my R&D server http://www.facebook.com/photo.php?pid=5272015&l=d5f2be4166&id=552113028 (which I intended to run lots of VMs on), having 8GB memory, an Intel Core2 Quad Q9500, and 5 x 1TB short-stroked disks (on the outer 100GB area). I was able to build from them an LVM stripe that produced about 900+ IOPS & 300+ MB/s on my ''Orion'' and ''dbms_resource_manager.calibrate_io'' runs, and I validated those numbers against the database I created by actually running ''256 parallel sessions'' doing SELECT * on a 300GB table http://goo.gl/PYYyH (the same disks are used, but as ASM disks on the next 100GB area - short stroked). 

I'll start off by showing you how AAS is computed.. Then detail on how it is being graphed and show you the behavior of AAS on IO and CPU bound workload.. 

The tools I used for graphing the AAS: 
* Enterprise Manager 11g
** both the real time and historical graphs
* ASH Viewer by Alexander Kardapolov http://j.mp/dNidrB 
** this tool samples from the ASH itself and graphs it.. so it allows me to check the correctness and compare it with the ''real time'' graph of Enterprise Manager
* MS Excel and awr_topevents.sql 
** this tool samples from the DBA_HIST views and graphs it.. so it allows me to check the correctness and compare it with the ''historical'' graph of Enterprise Manager

Let's get started.. 


! How AAS is computed

AAS is the abstraction of database load and you can get it by the following means... 

!!!! 1) From ASH
<<<
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZtyRXwwiOI/AAAAAAAABLA/BYOUYtXO1Vo/AASFromASH.png]]
<<<

!!!! 2) From DBA_HIST_ACTIVE_SESS_HISTORY
* In the case of DBA_HIST_ ''sample count'' is sample count*10 since they only write out 1/10 samples
<<<
[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TZtyRcp7m_I/AAAAAAAABLI/sLqztbLY3Mw/AASFromDBA_HIST.png]]
<<<

!!!! 3) From the AWR Top Events
* The Top Events section unions the output of ''dba_hist_system_event'' (all the events) and the ''CPU'' from time model (''dba_hist_sys_time_model'') and then filter only the ''top 5'' and do this across the SNAP_IDs
** To get the ''high level AAS'' you have to divide DB Time / Elapsed Time
** To get the ''AAS for the Top Events'', you have to divide the ''time'' (from event or cpu) by ''elapsed time''
* You can see below that we are having ''the same'' AAS numbers compared to the ASH reports 
<<<
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZtyRdPqm3I/AAAAAAAABLE/o23FMIG1yeQ/AASFromAWRTop.png]]
<<<


! How AAS is being graphed
I have a dedicated blog post on this topic.. http://karlarao.wordpress.com/2010/07/25/graphing-the-aas-with-perfsheet-a-la-enterprise-manager/

So we already know how we get the AAS, and how is it graphed.. ''so what's my issue?''

''Remember I mentioned this on the blog post above.. ?''
<<<
"So what’s the effect? mm… on a high CPU activity period you’ll notice that there will be a higher AAS on the Top Activity Page compared to Performance Page. Simply because ASH samples every second and it does that quickly on every active session (the only way to see CPU usage realtime) while the time model CPU although it updates quicker (5secs I think) than v$sysstat “CPU used by this session” there could still be some lag time and it will still be based on Time Statistics (one of two ways to calculate AAS) which could be affected by averages."
<<<
I'll expound on that with test cases included.. ''see below!''


! AAS behavior on an IO bound load
* This is the graph of an IO bound load using ASH Viewer, this will be similar to the graph you will see on ''real time'' view of the Enterprise Manager 11g
<<<
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZ9Cp2Kc8aI/AAAAAAAABN0/1konJAJZMUo/highio-3.png]]
[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TZt3yMWQUCI/AAAAAAAABLM/8d-I2RqvF3I/AASIObound.png]]
<<<
* This is the graph of the same workload using MS Excel and the script awr_topevents.sql, this will be the similar graph you will see on the ''historical'' view of the Enterprise Manager 11g
<<<
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TZ9FJ6cXxRI/AAAAAAAABN4/eWRs8SQd0ws/highio-4.png]]
<<<

As you can see from the images above and the numbers below, the database is doing a lot of ''direct path read'' and we don't have a high load average. When you look at the OS statistics for this IO-intensive workload, though, you will see high IO WAIT on the CPU.

Looking at the data below from AWR and ASH.. ''we see no discrepancies''.. now, let's compare this to the workload below where the database server is CPU bound and has a really high load average. 

''AAS Data from AWR''
<<<
[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TZ8-nKFw2pI/AAAAAAAABNk/oozsoEgnmeE/highio-1.png]]
<<<

''AAS Data from ASH''
<<<
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TZ8-nFhmP7I/AAAAAAAABNo/x5kIF-HuhnY/highio-2.png]]
<<<


! AAS behavior on a CPU bound load

This is the Enterprise Manager 11g graph of a CPU bound load 
<<<
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZt8ZxUeEUI/AAAAAAAABLY/gmclSmutRVg/AASCPUbound.png]]
<<<
This is the ASH Viewer graph of a CPU bound load 
* The dark green color you see below (18:30 - 22:00) is actually the ''CPU Wait'' metric that you are seeing on the Enterprise Manager graph above
* The light green color on the end part of the graph (22:00) is the ''Scheduler wait - resmgr: cpu quantum'' 
* The small hump on the 16:30-17:30 time frame is the IO bound load test case
<<<
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZ6emvgui7I/AAAAAAAABNI/fxVzQryIwKc/highcpu-4.png]]
<<<
Below are the data from AWR and ASH of the same time period ''(21:50 - 22:00)''.. see the high level and drill down numbers below 
... it seems that if the database server is ''high on CPU/high on runqueue'' or the ''"wait for CPU"'' state appears, then the AAS numbers from the AWR and ASH reports no longer match. I would expect ASH to be bigger because it has fine-grained samples of 1 second. But as you can see (below).. 
* the ASH top events correctly accounted the CPU time ''(95.37 AAS)'' which was tagged as ''CPU + Wait  for CPU''
* while the AWR CPU seems to be idle ''(.2 AAS)''. 
And what's even more interesting is 
* the high level AAS on AWR is ''356.7'' 
* while on the ASH it is ''329.18'' 
that's a huge gap! Well that could be because of 
* the high DB Time ''(215947.8)'' on AWR 
* compared to what Sample Count ASH has ''(197510)''. 
Do you have any idea why is this happening? Interesting right? 

''AAS Data from AWR''
<<<
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZ6BdKu23hI/AAAAAAAABMw/Nuwg_qTt6m8/highcpu-1.png]]

[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZ6BdrU46FI/AAAAAAAABM4/6Inv_8_Z5dc/highcpu-2.png]]
<<<

''AAS Data from ASH''
<<<
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TZ8rp2UTbWI/AAAAAAAABNg/6VBzvJxxApM/highcpu-3.png]]
<<<

''A picture is worth a thousand words...'' - To clearly explain this behavior of ''CPU not properly accounted'' I'll show you the graph of the data samples

__''AWR Top Events with CPU "not properly" accounted''__
<<<
* This is the high level AAS we are getting from the ''DB Time/Elapsed Time'' from the AWR report across SNAP_IDs.. this output comes from the script ''awr_genwl.sql'' (AAS column - http://goo.gl/MUWr) notice that there are AAS numbers as high as 350 and above.. the second occurrence of 350+ is from the SNAP_ID 495-496 mentioned above..
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZ61tG_iQ0I/AAAAAAAABNY/iKAy7j4Y534/highcpu-5.png]]
* Drilling down on the AAS components of that high level AAS we have to graph the output of the ''awr_topevents.sql''... given that this is still the same workload, you see here that only the ''Direct Path Read'' is properly accounted and when you look at the CPU time it seems to be idle... thus, giving lower AAS than the image above..
* Take note that SNAP_ID 495 the AWR ''CPU'' seems to be idle (.2 AAS) which is what is happening on this image
* Also on the 22:00 period, the database stopped waiting on CPU and started to wait on ''Scheduler''.. and then it matched again the high level AAS from the image above (AAS range of 320).. Interesting right? 
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZ53u_cLWLI/AAAAAAAABMY/9QP2C4S7AUI/highcpu-6.png]]
* We will also have this same behavior on Enterprise Manager 11g when we go to the ''Top Activity page'' and change the ''Real Time'' to ''Historical''... see the similarities on the graph from MS Excel? So when you go ''Real Time'' you are actually pulling from ASH.. then when you go ''Historical'' you are just pulling the Top Timed events across SNAP_IDs and graphing it.. but when you have issues like CPU time not properly accounted you'll see a really different graph and if you are not careful and don't know what it means you may end up with bad conclusions.. 
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZ6fz5UzkVI/AAAAAAAABNM/9xL8IukSM4A/highcpu-10.png]]
<<<

__''AWR Top Events with CPU "properly" accounted''__
<<<
* Now, this is really interesting... the graph shown below is from the ''Performance page'' and is also ''Historical'' but produced a different graph from the ''Top Activity page''... 
* Why and how did it account for the ''CPU Wait''? where did it pull the data that the ''Top Activity page'' missed? 
* This is an improvement in the Enterprise Manager! So I'm curious how is this happening...
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZ6ogMsAp0I/AAAAAAAABNQ/b9dTIxATxoY/highcpu-11.png]]
<<<

__''ASH with CPU "properly" accounted (well.. I say, ALWAYS!)''__

From the graph above & below where the CPU is properly accounted, you see the AAS is consistent at the range of 320.. 
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TZ53uvK1xkI/AAAAAAAABMU/7HThzn4uoEo/highcpu-7.png]]
What makes ASH different is the proper accounting of the ''CPU'' AAS component, unlike the chart coming from awr_topevents.sql (mentioned in the AWR Top Events with CPU "not properly" accounted section) where there's no CPU accounted at all... this could be a problem with the DBA_HIST_SYS_TIME_MODEL ''DB CPU'' metric: when the database server is high on runqueue and there are already scheduling issues in the OS, ''ASH is even more reliable'' at accounting for all the CPU time.. 

Another thing that bothers me is why the ''DB Time'', when applied to the AAS formula, gives a much higher AAS value than ASH does. That could also mean that ''DB Time is another reliable source'' when the database server has a high run queue.

If this is the case, from a pure AWR perspective, what I would do is take the output of ''awr_genwl.sql'' and then run ''awr_topevents.sql''.
If the AAS from awr_genwl.sql is high together with a really high "OS Load" and "CPU Utilization", and comparing it with the output of awr_topevents.sql shows a big discrepancy, that would give me an idea that I'm experiencing the same issue mentioned here, and I would investigate further with the ASH data to solidify my conclusions.
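The cross-check described above can be sketched in a few lines. This is just an illustration with made-up numbers and an assumed discrepancy threshold; in practice the inputs would come from the awr_genwl.sql and awr_topevents.sql outputs.

```python
# Sketch of the AWR cross-check described above (hypothetical numbers).
# AAS = DB Time / elapsed time; if the AAS implied by the workload script is
# far above the AAS summed from the top events while the OS load is high,
# the missing slice is likely unaccounted CPU (CPU + wait for CPU).

def aas(db_time_sec: float, elapsed_sec: float) -> float:
    """Average Active Sessions over the snapshot interval."""
    return db_time_sec / elapsed_sec

def cpu_wait_suspected(aas_workload: float, aas_top_events: float,
                       os_load: float, cpu_count: int,
                       gap_ratio: float = 0.5) -> bool:
    """Flag when top events miss a large slice of AAS on a saturated host."""
    gap = aas_workload - aas_top_events
    return gap > gap_ratio * aas_workload and os_load > cpu_count

# Hypothetical snapshot: DB Time as in the time model report further below,
# top-events AAS dominated by USER IO only (no CPU accounted at all).
aas_workload = aas(db_time_sec=215_947.9, elapsed_sec=673.0)   # roughly 320 AAS
aas_top_events = 6.0    # what the top-events query showed (USER IO only)
print(cpu_wait_suspected(aas_workload, aas_top_events, os_load=400.0, cpu_count=16))
```

When the flag fires, the ASH data is what I would use to confirm, as described above.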

If you are curious about the output of the Time model statistics for SNAP_IDs 495-496,
the CPU values found there do not help either because they are low:

{{{
   DB CPU = 126.70 sec
   BG CPU = 4.32 sec
   OS CPU (osstat) = 335.71 sec

Statistic Name                                       Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time                            215,866.2        100.0
DB CPU                                                  126.7           .1
parse time elapsed                                       62.8           .0
hard parse elapsed time                                  60.0           .0
PL/SQL execution elapsed time                            33.9           .0
hard parse (sharing criteria) elapsed time                9.7           .0
sequence load elapsed time                                0.6           .0
PL/SQL compilation elapsed time                           0.2           .0
connection management call elapsed time                   0.0           .0
repeated bind elapsed time                                0.0           .0
hard parse (bind mismatch) elapsed time                   0.0           .0
DB time                                             215,947.9
background elapsed time                               1,035.5
background cpu time                                       4.3
          -------------------------------------------------------------
}}}
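The "% of DB Time" column above can be reproduced directly from the two numbers in the report; this tiny check shows just how small the accounted DB CPU is relative to DB Time, which is why the CPU component looks missing.

```python
# Reproduce the "% of DB Time" column from the time model report above.
db_time = 215_947.9   # DB time (s), from the report
db_cpu = 126.7        # DB CPU (s), from the report

pct = db_cpu / db_time * 100
print(round(pct, 1))  # the report shows .1; DB CPU is ~0.06% of DB Time
```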

''Now we move on by splitting the ASH AAS components into their separate areas:''
* the ''CPU''
* and ''USER IO''
See the charts below..

This just shows that there is something about ASH properly accounting for ''CPU + WAIT FOR CPU'' whenever the database server has a high run queue or OS load average... and the same goes for ''DB Time''.
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TZ53wDeLd4I/AAAAAAAABMc/G5lodk6IAqE/highcpu-8.png]]
This is the ''USER IO'' AAS, the same as what is accounted in awr_topevents.sql:
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZ53wKTIMVI/AAAAAAAABMg/dAihs-LYGfY/highcpu-9.png]]


So the big question for me is...

How do ASH and the Enterprise Manager Performance page account for the "CPU + WAIT FOR CPU"? Even if you drill down into V$ACTIVE_SESSION_HISTORY you will not find this metric, so I'm really interested in where they pull the data.. :)


''update''
... and then I asked a couple of people, and I later had a problem at a client site running on Exadata where I was troubleshooting their ETL runs. I was running 10046 for every run and found that my unaccounted-for time was due to the CPU wait shown in this tiddler. So using Mr. Tools, and given that I was seeing a similar workload, I had an idea that the unaccounted-for time was the CPU wait. See the write-up here http://www.evernote.com/shard/s48/sh/3ccc1e38-b5ef-46f8-bc75-371156ade4b3/69066fa2741f780f93b86af1626a1bcd , and I was right all along ;)


''AAS investigation updates:  Answered questions + bits of interesting findings''
http://www.evernote.com/shard/s48/sh/b4ecaaf2-1ceb-43ea-b58e-6f16079a775c/cb2e28e651c3993b325e66cc858c3935


''I've updated the awr_topevents.sql script to show CPU wait to solve the unaccounted DB Time issue''; see the write-up at the link below:
awr_topevents_v2.sql - http://www.evernote.com/shard/s48/sh/a64a656f-6511-4026-be97-467dccc82688/de5991c75289f16eee73c26c249a60bf



Thanks to the following people for reading/listening about this research, and for the interesting discussions and ideas around this topic: 
- Kyle Hailey, Riyaj Shamsudeen, Dave Abercrombie, Cary Millsap, John Beresniewicz


''Here's the MindMap of the AAS investigation'' http://www.evernote.com/shard/s48/sh/90cdf56f-da52-4dc5-91d0-a9540905baa6/9eb34e881a120f82f2dab0f5424208bf



! update (rmoug 2012 slides on cpu wait)
[img(100%,100%)[https://i.imgur.com/xArySP8.png]]
[img(100%,100%)[https://i.imgur.com/j4FiOwY.png]]
[img(100%,100%)[https://i.imgur.com/hw7ttDe.png]]
[img(100%,100%)[https://i.imgur.com/AQqrZSn.png]]
[img(100%,100%)[https://i.imgur.com/POHSIQ5.png]]
[img(100%,100%)[https://i.imgur.com/GXPupKb.png]]
[img(100%,100%)[https://i.imgur.com/94TlhTh.jpg]]
[img(100%,100%)[https://i.imgur.com/jtWvg7Z.png]]
[img(100%,100%)[https://i.imgur.com/PzDdJGs.png]]
[img(100%,100%)[https://i.imgur.com/SKmcj4K.png]]
[img(100%,100%)[https://i.imgur.com/MHTcbPD.png]]
[img(100%,100%)[https://i.imgur.com/83Jfspe.png]]













.



http://www.evernote.com/shard/s48/sh/a0875f07-26e6-4ec7-ab31-2d946925ef73/6d2fe9d6adc6f716a40ec87e35a0b264
https://blogs.oracle.com/RobertGFreeman/entry/exadata_support_for_acfs_and
''Further Reading:'' @@Brewer@@ (http://www.infoq.com/articles/cap-twelve-years-later-how-the-rules-have-changed) and @@Gilbert and Lynch@@ (http://groups.csail.mit.edu/tds/papers/Gilbert/Brewer2.pdf) on the CAP Theorem; @@Vogels@@ (http://queue.acm.org/detail.cfm?id=1466448) on Eventual Consistency, @@Hamilton@@ (http://perspectives.mvdirona.com/2010/02/24/ILoveEventualConsistencyBut.aspx) on its limitations, and @@Bailis and Ghodsi@@ (https://queue.acm.org/detail.cfm?id=2462076) on measuring it and more; and @@Sirer@@ (http://hackingdistributed.com/2013/03/23/consistency-alphabet-soup/) on the multiple meanings of consistency in Computer Science. @@Liveness manifestos@@ (http://cs.nyu.edu/acsys/beyond-safety/liveness.htm) has interesting definition variants for liveness and safety.

! Big Data 4Vs + 1 
<<<
Volume - the scale at which data is generated
Variety - the different forms of data
Velocity - data arrives in continuous streams
Veracity - uncertainty: data is not always accurate
Value - immediacy and hidden relationships
<<<


! ACID
* redo and undo in Oracle provides ACID
<<<
Atomic - all statements complete successfully, or none at all
Consistent - integrity rules; ACID consistency is all about database rules
Isolated - locking
Durable - a committed transaction is guaranteed to persist
<<<
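As a minimal, self-contained illustration of the atomicity property (using SQLite purely as a stand-in for any ACID database, including Oracle's redo/undo mechanism mentioned above): if any statement in the transaction fails, the rollback leaves no partial work behind.

```python
# Minimal atomicity demo using SQLite as a stand-in for any ACID database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")
conn.execute("INSERT INTO accounts VALUES ('a', 100), ('b', 0)")
conn.commit()

try:
    with conn:  # one transaction: commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'a'")
        conn.execute("INSERT INTO accounts VALUES ('a', 999)")  # PK violation, fails
except sqlite3.IntegrityError:
    pass

# The failed transaction rolled back entirely: 'a' still has its full balance.
balance = conn.execute("SELECT balance FROM accounts WHERE name = 'a'").fetchone()[0]
print(balance)
```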

* ACID http://docs.oracle.com/database/122/CNCPT/glossary.htm#CNCPT89623 The basic properties of a database transaction that all Oracle Database transactions must obey. ACID is an acronym for atomicity, consistency, isolation, and durability.
* Transaction http://docs.oracle.com/database/122/CNCPT/glossary.htm#GUID-212D8EA1-D704-4D7B-A72D-72001965CE45 Logical unit of work that contains one or more SQL statements. All statements in a transaction commit or roll back together. The use of transactions is one of the most important ways that a database management system differs from a file system.
* Oracle Fusion Middleware Developing JTA Applications for Oracle WebLogic Server - ACID Properties of Transactions http://docs.oracle.com/middleware/12212/wls/WLJTA/gstrx.htm#WLJTA117
http://cacm.acm.org/magazines/2011/6/108651-10-rules-for-scalable-performance-in-simple-operation-datastores/fulltext
http://www.slideshare.net/jkanagaraj/oracle-vs-nosql-the-good-the-bad-and-the-ugly
http://highscalability.com/blog/2009/11/30/why-existing-databases-rac-are-so-breakable.html
Databases in the wild file:///C:/Users/karl/Downloads/Databases%20in%20the%20Wild%20(1).pdf




! CAP theorem
* CAP is a tool to explain trade-offs in distributed systems.
<<<
Consistent: All replicas of the same data will be the same value across a distributed system. CAP consistency promises that every replica of the same logical value, spread across nodes in a distributed system, has the same exact value at all times. Note that this is a logical guarantee, rather than a physical one. Due to the speed of light, it may take some non-zero time to replicate values across a cluster. The cluster can still present a logical view by preventing clients from viewing different values at different nodes.
Available: All live nodes in a distributed system can process operations and respond to queries.
Partition Tolerant: The system is designed to operate in the face of unplanned network connectivity loss between replicas. 
<<<
https://en.wikipedia.org/wiki/CAP_theorem
https://dzone.com/articles/better-explaining-cap-theorem
https://cloudplatform.googleblog.com/2017/02/inside-Cloud-Spanner-and-the-CAP-Theorem.html
http://guyharrison.squarespace.com/blog/2010/6/13/consistency-models-in-non-relational-databases.html  <- good stuff 
http://www.datastax.com/2014/08/comparing-oracle-rac-and-nosql <- good stuff
http://docs.oracle.com/database/121/GSMUG/toc.htm , http://www.oracle.com/technetwork/database/availability/global-data-services-12c-wp-1964780.pdf <- Database Global Data Services Concepts and Administration Guide [[Global Data Services]] 
http://www.oracle.com/technetwork/database/options/clustering/overview/backtothefuture-2192291.pdf  <- good stuff  Back to the Future with Oracle Database 12c
https://blogs.oracle.com/MAA/tags/cap  <- two parts good stuff
https://www.percona.com/live/mysql-conference-2013/sites/default/files/slides/aslett%20cap%20theorem.pdf  <- very good stuff
<<<
[img(100%,100%)[ http://i.imgur.com/q1QEtGI.png ]]
<<<
http://blog.nahurst.com/visual-guide-to-nosql-systems
<<<
[img(100%,100%)[ http://i.imgur.com/I7jYbVD.png ]]
<<<

http://www.ctodigest.com/2014/distributed-applications/the-distributed-relational-database-shattering-the-cap-theorem/
https://www.infoq.com/articles/cap-twelve-years-later-how-the-rules-have-changed
Spanner, TrueTime and the CAP Theorem https://research.google.com/pubs/pub45855.html , https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45855.pdf
https://www.voltdb.com/blog/disambiguating-acid-and-cap <- difference between two Cs (voltdb founder)
https://martin.kleppmann.com/2015/05/11/please-stop-calling-databases-cp-or-ap.html <- nice, a lot of references! author of "Designing Data-Intensive Applications"
<<<
https://aphyr.com/posts/322-call-me-maybe-mongodb-stale-reads
http://blog.thislongrun.com/2015/04/cap-availability-high-availability-and_16.html
https://github.com/jepsen-io/knossos
https://aphyr.com/posts/288-the-network-is-reliable
http://dbmsmusings.blogspot.co.uk/2010/04/problems-with-cap-and-yahoos-little.html
https://codahale.com/you-cant-sacrifice-partition-tolerance/
https://www.somethingsimilar.com/2013/01/14/notes-on-distributed-systems-for-young-bloods/
http://henryr.github.io/cap-faq/
http://henryr.github.io/distributed-systems-readings/
<<<
http://blog.thislongrun.com/2015/03/the-confusing-cap-and-acid-wording.html
https://news.ycombinator.com/item?id=9285751
[img(100%,100%)[http://i.imgur.com/G9vV8Qh.png ]]
http://www.slideshare.net/AerospikeDB/acid-cap-aerospike
Next Generation Databases: NoSQL, NewSQL, and Big Data https://www.safaribooksonline.com/library/view/next-generation-databases/9781484213292/9781484213308_Ch09.xhtml#Sec2  
https://www.pluralsight.com/courses/cqrs-theory-practice
https://www.pluralsight.com/blog/software-development/relational-non-relational-databases
https://www.amazon.com/Seven-Concurrency-Models-Weeks-Programmers-ebook/dp/B00MH6EMN6/ref=mt_kindle?_encoding=UTF8&me=
https://en.wikipedia.org/wiki/Michael_Stonebraker#Data_Analysis_.26_Extraction
http://scaledb.blogspot.com/2011/03/cap-theorem-event-horizon.html

! Think twice before dropping ACID and throw your CAP away
https://static.rainfocus.com/oracle/oow19/sess/1552610610060001frc7/PF/AG_%20Think%20twice%20before%20dropping%20ACID%20and%20throw%20your%20CAP%20away%202019_09_16%20-%20oco_1568777054624001jIWT.pdf


! BASE (eventual consistency)
BASE (Basically Available, Soft-state, Eventually consistent) is an acronym used to contrast this approach with the RDBMS ACID transactions described above.
http://www.allthingsdistributed.com/2008/12/eventually_consistent.html  <- amazon cto



! NRW notation 
NRW notation describes at a high level how a distributed database trades off consistency, read performance, and write performance. NRW stands for:
N: the number of copies of each data item that the database maintains.
R: the number of copies that the application accesses when reading the data item.
W: the number of copies of the data item that must be written before the write can complete.
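The usual rule of thumb that follows from this notation: a read is guaranteed to see the latest write when R + W > N, because the read set and write set must then overlap in at least one replica. A small sketch of that rule (my own illustration, not tied to any particular database):

```python
# NRW quorum rule: a read overlaps every completed write when R + W > N.
def read_sees_latest_write(n: int, r: int, w: int) -> bool:
    """True when any R-replica read set must intersect any W-replica write set."""
    return r + w > n

# Classic configurations for N = 3 replicas:
print(read_sees_latest_write(3, 2, 2))  # quorum reads and writes -> True
print(read_sees_latest_write(3, 1, 3))  # write-all, read-one     -> True
print(read_sees_latest_write(3, 1, 1))  # fast, but only eventually consistent -> False
```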






! Database test

!! jepsen 
A framework for distributed systems verification, with fault injection
https://github.com/jepsen-io/jepsen
https://www.youtube.com/watch?v=tRc0O9VgzB0


!! sqllogictest 
https://github.com/gregrahn/sqllogictest











.






{{{
connect / as sysdba

set serveroutput on

show user;

create or replace procedure mailserver_acl(
  aacl       varchar2,
  acomment   varchar2,
  aprincipal varchar2,
  aisgrant   boolean,
  aprivilege varchar2,
  aserver    varchar2,
  aport      number)
is
begin  
  begin
    DBMS_NETWORK_ACL_ADMIN.DROP_ACL(aacl);
     dbms_output.put_line('ACL dropped.....'); 
  exception
    when others then
      dbms_output.put_line('Error dropping ACL: '||aacl);
      dbms_output.put_line(sqlerrm);
  end;
  begin
    DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(aacl,acomment,aprincipal,aisgrant,aprivilege);
    dbms_output.put_line('ACL created.....'); 
  exception
    when others then
      dbms_output.put_line('Error creating ACL: '||aacl);
      dbms_output.put_line(sqlerrm);
  end;  
  begin
    DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(aacl,aserver,aport);
    dbms_output.put_line('ACL assigned.....');         
  exception
    when others then
      dbms_output.put_line('Error assigning ACL: '||aacl);
      dbms_output.put_line(sqlerrm);
  end;    
  commit;
  dbms_output.put_line('ACL commited.....'); 
end;
/
show errors



select acl, host, lower_port, upper_port from dba_network_acls

ACL                                      HOST                           LOWER_PORT UPPER_PORT
---------------------------------------- ------------------------------ ---------- ----------
/sys/acls/IFSAPP-PLSQLAP-Permission.xml  haiapp09.mfg.am.mds.       59080      59080

 select acl, principal, privilege, is_grant from dba_network_acl_privileges

ACL                                      PRINCIPAL                      PRIVILE IS_GR
---------------------------------------- ------------------------------ ------- -----
/sys/acls/IFSAPP-PLSQLAP-Permission.xml  IFSAPP                         connect true
/sys/acls/IFSAPP-PLSQLAP-Permission.xml  IFSSYS                         connect true



begin
  mailserver_acl(
    '/sys/acls/IFSAPP-PLSQLAP-Permission.xml',
    'ACL for used Email Server to connect',
    'IFSAPP',
    TRUE,
    'connect',
    'haiapp09.mfg.am.mds.',
    59080);    
end;
/


begin
   DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE('/sys/acls/IFSAPP-PLSQLAP-Permission.xml','IFSSYS',TRUE,'connect');
   commit;
end;
/
}}}
{{{

Summary:
> Implement Instance Caging
> Enable Parallel Force Query and Parallel Statement Queuing  
> A database trigger has to be created on the Active Data Guard for all databases to enable Parallel Force Query on the session level upon login
> Create a new Resource Management Plan to limit the per session parallelism to 4
> Enable IORM and set to objective of AUTO on the Storage Cells

Commands to implement the recommended changes:
> The numbers 1 and 2 need to be executed on each database of the Active Data Guard environment
> #3 needs to be executed on all the Storage Cells, use the dcli and execute only on the 1st storage cell if passwordless ssh is configured
> #4 needs to be executed on each database (ECC, EWM, GTS, APO) of the Primary site to create the new Resource Management Plan
> #5 needs to be executed on each database of the Active Data Guard environment to activate the Resource Management Plan

The behavior:
        instance caging is set to CPU_COUNT of 40 (83% max CPU utilization)	
	parallel 4 will be set to all users logged in as ENTERPRISE, no need for hints	
	although the hints override the session settings, the non-ENTERPRISE users will be throttled on the resource management layer to PX of 4 even if hints are set	
		RM plan has PX limit of 4 for other_groups 
		We can set a higher limit (let's say 8) for the ENTERPRISE users so they can override the PX 4 to a higher value through hints
	this configuration will be done on all 4 databases 	
		
Switchover steps - just in case the 4 DBs will switchover to Exadata:
	disable the px trigger	
	alter the resource plan to SAP primary	


######################################################################

1) instance caging 

alter system set cpu_count=40 scope=both sid='*';
alter system set resource_manager_plan=default_plan; 

2) statement queueing and create trigger

alter system set parallel_force_local=false scope=both sid='*';
alter system set parallel_max_servers=128 scope=both sid='*';
alter system set parallel_servers_target=64 scope=both sid='*';
alter system set parallel_min_servers=64 scope=both sid='*';
alter system set "_parallel_statement_queuing"=true scope=both sid='*';

-- alter trigger sys.adg_pxforce_trigger disable;

-- the trigger checks if ENTERPRISE user is logged on, if it's running as PHYSICAL STANDBY, and if it's running on X4DP cluster

CREATE OR REPLACE TRIGGER adg_pxforce_trigger
AFTER LOGON ON database
WHEN (USER in ('ENTERPRISE'))
BEGIN
IF (SYS_CONTEXT('USERENV','DATABASE_ROLE') IN ('PHYSICAL STANDBY'))
AND (UPPER(SUBSTR(SYS_CONTEXT ('USERENV','SERVER_HOST'),1,4)) IN ('X4DP'))
THEN
execute immediate 'alter session force parallel query parallel 4';
END IF;
END;
/


3) IORM AUTO

-- execute on each storage cell
cellcli -e list iormplan detail
cellcli -e alter iormplan objective = auto
cellcli -e alter iormplan active

-- use these commands if passwordless ssh is configured 
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective = auto'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan active'

######################################################################

4) RM plan to be created on the primary site


exec DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA;

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PLAN(PLAN => 'px_force', COMMENT => 'force parallel query parallel 4');
DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(CONSUMER_GROUP => 'CG_ENTERPRISE',    COMMENT => 'CG for ENTERPRISE users');
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN =>'px_force', GROUP_OR_SUBPLAN => 'CG_ENTERPRISE', COMMENT => 'Directive for ENTERPRISE users', PARALLEL_DEGREE_LIMIT_P1 => 4);
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(PLAN =>'px_force', GROUP_OR_SUBPLAN => 'OTHER_GROUPS', COMMENT => 'Low priority users', PARALLEL_DEGREE_LIMIT_P1 => 4);
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/ 

begin
 dbms_resource_manager_privs.grant_switch_consumer_group(grantee_name => 'ENTERPRISE',consumer_group => 'CG_ENTERPRISE', grant_option => FALSE);
end;
/ 

begin
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SET_INITIAL_CONSUMER_GROUP ('ENTERPRISE', 'CG_ENTERPRISE');
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/


-- check config 

set wrap off
set head on
set linesize 300
set pagesize 132
col comments format a64

-- show current resource plan
select * from  V$RSRC_PLAN;

-- show all resource plans
select PLAN,NUM_PLAN_DIRECTIVES,CPU_METHOD,substr(COMMENTS,1,64) "COMMENTS",STATUS,MANDATORY 
from dba_rsrc_plans 
order by plan;

-- show consumer groups
select CONSUMER_GROUP,CPU_METHOD,STATUS,MANDATORY,substr(COMMENTS,1,64) "COMMENTS" 
from DBA_RSRC_CONSUMER_GROUPS 
order by consumer_group;

-- show  category
SELECT consumer_group, category
FROM DBA_RSRC_CONSUMER_GROUPS
ORDER BY category;

-- show mappings
col value format a30
select ATTRIBUTE, VALUE, CONSUMER_GROUP, STATUS 
from DBA_RSRC_GROUP_MAPPINGS
order by 3;

-- show mapping priority 
select * from DBA_RSRC_MAPPING_PRIORITY;

-- show directives 
SELECT plan,group_or_subplan,cpu_p1,cpu_p2,cpu_p3, PARALLEL_DEGREE_LIMIT_P1, status 
FROM dba_rsrc_plan_directives 
order by 1,3 desc,4 desc,5 desc;

-- show grants
select * from DBA_RSRC_CONSUMER_GROUP_PRIVS order by grantee;
select * from DBA_RSRC_MANAGER_SYSTEM_PRIVS order by grantee;

-- show scheduler windows
select window_name, resource_plan, START_DATE, DURATION, WINDOW_PRIORITY, enabled, active from dba_scheduler_windows;


5) enforce on the standby site
connect / as sysdba
--ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'FORCE:px_force';



-- revert
connect / as sysdba
exec DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA;

ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'default_plan';

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.DELETE_PLAN_CASCADE ('px_force');
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
}}}



Setting up Swingbench for Oracle Autonomous Data Warehousing (ADW) http://www.dominicgiles.com/blog/files/7fd178b363b32b85ab889edfca6cadb2-170.html
https://www.accenture.com/_acnmedia/pdf-108/accenture-destination-autonomous-oracle-database.pdf


! exploring ADW
https://content.dsp.co.uk/exploring-adw-part-1-uploading-more-than-1mb-to-object-storage
https://content.dsp.co.uk/exploring-autonomous-data-warehouse-loading-data


! how autonomous is ADW 
https://indico.cern.ch/event/757894/attachments/1720580/2777513/8b_AutonomousIsDataWarehouse_AntogniniSchnider.pdf







.
<<showtoc>>


! compression not explicitly set

{{{

-- compression not explicitly set -- for some reason this resulted in BASIC compression
set timing on 
alter session set optimizer_ignore_hints = false;

create table SD_HECHOS_COBERTURA_PP_HCC_TEST
compress parallel as select /*+ NO_GATHER_OPTIMIZER_STATISTICS full(sd_hechos_cobertura_pp) */ * from sd_hechos_cobertura_pp;

}}}


! compression explicitly set
{{{
-- compression explicitly set

set timing on 
alter session set optimizer_ignore_hints = false;

create table SD_HECHOS_COBERTURA_PP_HCC_TEST
compress for QUERY HIGH ROW LEVEL LOCKING  parallel as select /*+ NO_GATHER_OPTIMIZER_STATISTICS full(sd_hechos_cobertura_pp) */ * from sd_hechos_cobertura_pp;

}}}
https://docs.oracle.com/en/database/oracle/oracle-database/21/nfcon/automatic-operations-256569003.html
<<<
! Automatic Operations
    Automatic Indexing Enhancements
    Automatic Index Optimization
    Automatic Materialized Views
    Automatic SQL Tuning Set
    Automatic Temporary Tablespace Shrink
    Automatic Undo Tablespace Shrink
    Automatic Zone Maps
    Object Activity Tracking System
    Sequence Dynamic Cache Resizing
<<<


.
{{{

-- registry 
SELECT /*+  NO_MERGE  */ /* 1a.15 */
       x.*
	   ,c.name con_name
  FROM cdb_registry x
       LEFT OUTER JOIN v$containers c ON c.con_id = x.con_id
ORDER BY
       x.con_id,
	   x.comp_id;


-- registry history 
SELECT /*+  NO_MERGE  */ /* 1a.17 */
       x.*
	   ,c.name con_name
  FROM cdb_registry_history x
       LEFT OUTER JOIN v$containers c ON c.con_id = x.con_id
 ORDER BY 1
	   ,x.con_id;


-- registry hierarchy
SELECT /*+  NO_MERGE  */ /* 1a.18 */
       x.*
	   ,c.name con_name
  FROM cdb_registry_hierarchy x
       LEFT OUTER JOIN v$containers c ON c.con_id = x.con_id
 ORDER BY
       1, 2, 3;       
}}}
https://docs.oracle.com/en-us/iaas/autonomous-database-serverless/doc/autonomous-cloud-links.html
https://blogs.oracle.com/datawarehousing/post/database-links-in-autonomous-database-shared-are-the-past---cloud-links-are-the-future
! JDBC 
JDBC Thin Connections with a Wallet (mTLS)
https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/connect-jdbc-thin-wallet.html#GUID-BE543CFD-6FB4-4C5B-A2EA-9638EC30900D
JDBC Thin Connections Without a Wallet (TLS)
https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/connect-jdbc-thin-tls.html#GUID-364DB7F0-6F4F-4C42-9395-4BA4D09F0483

https://stackoverflow.com/questions/57905056/how-to-connect-to-oracle-adw-instance-using-dbeaver
https://blog.toadworld.com/how-to-use-toad-for-oracle-with-oracle-autonomous-database-i
{{{

alter session set CONTAINER_DATA=CURRENT_DICTIONARY;


/*+ OPT_PARAM('CONTAINER_DATA', 'CURRENT_DICTIONARY') */
}}}
https://docs.oracle.com/en-us/iaas/autonomous-database-serverless/doc/database-links-oracledb-private.html#GUID-0D44ED37-2857-4D8B-AA7B-BC89445D11A4
{{{
set lines 400 pages 2000
col MESSAGE_TEXT format a200
col ORIGINATING_TIMESTAMP format a40
col MESSAGE_ARGUMENTS format a20

SELECT originating_timestamp,
       MESSAGE_TEXT
FROM v$diag_alert_ext
WHERE component_id = 'rdbms'
  AND originating_timestamp >= to_date('2021/11/23 21:00', 'yyyy/mm/dd hh24:mi')
  AND originating_timestamp <= to_date('2021/11/24 14:00', 'yyyy/mm/dd hh24:mi')
ORDER BY originating_timestamp;
}}}



{{{
SQL> set lines 400 pages 2000
SQL> col MESSAGE_TEXT format a200
SQL> col ORIGINATING_TIMESTAMP format a40
SQL> col MESSAGE_ARGUMENTS format a20
SQL> 
SQL> SELECT originating_timestamp,
  2         MESSAGE_TEXT
  3  FROM v$diag_alert_ext
  4  WHERE component_id = 'rdbms'
  5    AND originating_timestamp >= to_date('2021/11/23 21:00', 'yyyy/mm/dd hh24:mi')
  6    AND originating_timestamp <= to_date('2021/11/24 14:00', 'yyyy/mm/dd hh24:mi')
  7  ORDER BY originating_timestamp;

ORIGINATING_TIMESTAMP                    MESSAGE_TEXT                                                                                                                                                                                            
---------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
24-NOV-21 05.49.29.724000000 AM GMT      Space search: ospid:105880 starts dumping trace                                                                                                                                                         

}}}

https://github.com/oracle/data-warehouse-etl-offload-samples
https://docs.oracle.com/en-us/iaas/autonomous-database-serverless/doc/migration-autonomous-database.html
https://docs.oracle.com/en-us/iaas/autonomous-database-serverless/doc/whats-new-adwc.html
https://blogs.oracle.com/optimizer/post/migrating-to-adb-realtime-spm
Monitor the Performance of Autonomous Data Warehouse
https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/monitor-performance-intro.html#GUID-54CCC1C6-C32E-47F4-8EB6-64CD6EDB5938


! also you can dump the ASH (PDB level) and graph it
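A hedged sketch of the idea above: with 1-second sampling, each ASH row is worth roughly one second of active session time, so counting samples per minute and session state (or wait class) and dividing by 60 gives AAS values ready for graphing. The column meanings follow V$ACTIVE_SESSION_HISTORY, but the rows below are made up.

```python
# Sketch: turn ASH-style samples into per-minute AAS by session state.
# With 1-second sampling, AAS = (samples in interval) / (interval seconds).
from collections import defaultdict

# (minute_bucket, session_state_or_wait_class) - hypothetical dump rows
samples = [
    ("12:00", "ON CPU"), ("12:00", "ON CPU"), ("12:00", "User I/O"),
    ("12:01", "ON CPU"), ("12:01", "User I/O"), ("12:01", "User I/O"),
]

counts = defaultdict(int)
for minute, wait_class in samples:
    counts[(minute, wait_class)] += 1

# AAS contribution of each (minute, wait class) bucket: samples / 60 seconds
aas = {k: v / 60.0 for k, v in counts.items()}
for (minute, wait_class), value in sorted(aas.items()):
    print(minute, wait_class, round(value, 3))
```

The per-bucket values can then be stacked per minute in a chart, which is essentially what the EM Top Activity graph does.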
https://blogs.oracle.com/datawarehousing/post/parquet-files-oracle-database
https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/format-options-json.html#GUID-3CE7574F-E78B-49D6-9F32-DC00AEE418F4
https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/export-data-file-namingl.html#GUID-1A52F59C-2797-48A5-A058-950318DBE9AF
https://blogs.oracle.com/datawarehousing/post/export-in-parquet-autonomous-database
https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/export-data-directory-parquet.html
* the service account created on root cdb 

{{{
C##CLOUD$SERVICE
}}}
! the new way 
{{{
the new and more user-friendly method of switching with CS_SESSION
CS_SESSION package https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/cs-session-package.html#GUID-4894EED6-7CBA-47BC-B6F7-EF0B32553386
https://connor-mcdonald.com/2024/04/02/switching-services-on-autonomous-now-easier/

CS_SESSION.SWITCH_SERVICE('TPURGENT');
 
}}}


! the old way 

https://connor-mcdonald.com/2020/08/31/more-flexible-resource-usage-on-autonomous/
{{{

SQL> variable x varchar2(100)
SQL> exec dbms_session.switch_current_consumer_group('HIGH',:x,true);

PL/SQL procedure successfully completed.
}}}


<<showtoc>>

! to connect w/ sql developer 
Configure SQL Developer To Connect To TCPS Enabled DB (Doc ID 2908673.1)


! download the instant client dmg files 
{{{
instantclient_19_16 % pwd
/Users/kaarao/Documents/oci/instantclient_19_16
}}}

! download the jdbc driver 
<<<
https://www.oracle.com/database/technologies/appdev/jdbc-downloads.html
<<<

{{{
ls -ltr ~/Library/Tableau/Drivers 
total 23936
-rw-r--r--@ 1 kaarao  staff  4210517 Sep  9  2021 ojdbc8.jar.bak
-rw-r--r--@ 1 kaarao  staff  7270547 Dec  3 00:58 ojdbc11.jar
}}}


! in the wallet directory, edit the sqlnet.ora and ojdbc.properties 
{{{
cat sqlnet.ora 
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY=/Users/kaarao/Documents/oci/instantclient_19_16/Wallet)))
SSL_SERVER_DN_MATCH=yes                                             


cat ojdbc.properties 
# Connection property while using Oracle wallets.
#oracle.net.wallet_location=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/Users/kaarao/Documents/oci/instantclient_19_16/Wallet)))
# FOLLOW THESE STEPS FOR USING JKS
# (1) Uncomment the following properties to use JKS.
# (2) Comment out the oracle.net.wallet_location property above
# (3) Set the correct password for both trustStorePassword and keyStorePassword.
# It's the password you specified when downloading the wallet from OCI Console or the Service Console.
javax.net.ssl.trustStore=/Users/kaarao/Documents/oci/instantclient_19_16/Wallet/truststore.jks
javax.net.ssl.trustStorePassword=password
javax.net.ssl.keyStore=/Users/kaarao/Documents/oci/instantclient_19_16/Wallet/keystore.jks
javax.net.ssl.keyStorePassword=password

}}}


! set your environment 
{{{
instantclient_19_16 % cat sqlplus.sh 


export ORACLE_HOME=/Users/kaarao/Documents/oci/instantclient_19_16
export SQLPATH=$ORACLE_HOME
export LD_LIBRARY_PATH=$ORACLE_HOME
export TNS_ADMIN=$ORACLE_HOME/Wallet_ADW
export PATH="$ORACLE_HOME:$PATH"


}}}



! open the app 
{{{
open /Applications/Tableau\ Desktop\ 2021.2.app/ 
}}}


! use the following on queries using HIGH service 
{{{
/*+ parallel(16) opt_param('CONTAINER_DATA' 'CURRENT_DICTIONARY') */ 
}}}


! connect 
<<<
use the tns alias 
use the admin/password 
<<<



! references 
<<<
https://kb.tableau.com/articles/howto/how-to-connect-to-oracle-adw-by-using-oracle-wallet-files
https://www.oracle.com/a/ocom/docs/tableauonline6-2_adw.pdf
https://www.oracle.com/a/ocom/docs/database/adw-connection-instructions-tableau-desktop.pdf
https://kb.tableau.com/articles/howto/connecting-to-oracle-autonomous-data-warehouse-using-jdbc

https://www.oracle.com/a/ocom/docs/database/adw-connection-instructions-tableau-desktop.pdf
https://community.tableau.com/s/question/0D58b0000BOr1kCCQR/connecting-to-oracle-adw-via-tableau-desktop-on-x86-macbook
Oracle Autonomous Data warehouse Connector to Tableau Cloud and Tableau Desktop https://community.tableau.com/s/idea/0874T000000HBfcQAG/detail
https://community.tableau.com/s/question/0D54T00001EaDwaSAF/tableau-server-unable-to-connect-to-oracle-autonomous-data-warehouse-adw
https://kb.mit.edu/confluence/display/istcontrib/Connect+to+the+Data+Warehouse+from+Tableau
https://www.tableau.com/support/drivers
https://help.tableau.com/current/pro/desktop/en-us/examples_oracle.htm?_gl=1*1rp2dsj*_ga*NDcwNjk5ODgyLjE3MzMyMDQ2NzY.*_ga_8YLN0SNXVS*MTczMzIwNDY3Ni4xLjEuMTczMzIwNTIyNS4wLjAuMA..
Enhancing Tableau Using Autonomous Data Warehouse
https://www.youtube.com/watch?v=a9-38DD2Sbw
"unable to initialize the key store oracle" <- need to edit the ojdbc.properties file
<<<

https://docs.oracle.com/en/cloud/paas/autonomous-database/adbsa/unavailable-oracle-database-features.html#GUID-B6FB5EFC-4828-43F4-BA63-72DA74FFDB87
<<<
Database Features Unavailable in Autonomous Database
Lists the Oracle Database features that are not available in Autonomous Database. Additionally, database features designed for administration are not available.

List of Unavailable Oracle Features

Oracle Real Application Testing (Database Replay)

Oracle Real Application Security Administration Console (RASADM)

Oracle OLAP: Not available in Autonomous Database. See Deprecation of Oracle OLAP for more information.

Oracle R capabilities of Oracle Advanced Analytics

Oracle Industry Data Models

Oracle Database Lifecycle Management Pack

Oracle Data Masking and Subsetting Pack

Oracle Cloud Management Pack for Oracle Database

Oracle Multimedia: Not available in Autonomous Database and deprecated in Oracle Database 18c.

Oracle Sharding

Java in DB

Oracle Workspace Manager
<<<
https://wiki.archlinux.org/index.php/AHCI
http://en.wikipedia.org/wiki/AHCI
http://en.wikipedia.org/wiki/NCQ

Disks from the Perspective of a File System - TCQ,NCQ,4KSectorSize,MRAM http://goo.gl/eWUK7


Power5
Power6	<-- most advanced processor at the time, starting clock 4 GHz
Power7	

Hardware Virtualization (LPAR)
1) Standard Partition 
	4 LPARs, each have its own dedicated resources (processor, memory)

2) Micropartition
	4 LPARs can utilize a pool of 8 processors
	2 LPARs can utilize 1 processor


Note:
- Dynamic allocation (DLPAR) can happen on the fly:
	CPU	5 seconds
	Memory	1 minute
http://www.oraclerant.com/?p=8
{{{
# Oracle Database environment variables
umask 022
export ORACLE_BASE='/oracle/app/oracle'
export ORACLE_HOME="${ORACLE_BASE}/product/10.2.0/db_1"
export AIXTHREAD_SCOPE=S
export PATH="${ORACLE_HOME}/OPatch:${ORACLE_HOME}/bin:${PATH}"
# export NLS_LANG=language_territory.characterset
export LIBPATH=$ORACLE_HOME/lib:$LIBPATH
export TNS_ADMIN=$ORACLE_HOME/network/admin
}}}
http://www.scribd.com/doc/2153747/AIX-EtherChannel-Load-Balancing-Options
http://gjilevski.wordpress.com/2009/12/13/hardware-solution-for-oracle-rac-11g-private-interconnect-aggregating/
http://www.freelists.org/post/oracle-l/Oracle-10g-R2-RAC-network-configuration
! show system configuration
<<<
* show overall system config
{{{
prtconf
}}}
* to give the highest installed maintenance level
{{{
$ oslevel -r
6100-05
}}}
* to give the known recommended ML
{{{
$ oslevel -rq
Known Recommended Maintenance Levels
------------------------------------
6100-06
6100-05
6100-04
6100-03
6100-02
6100-01
6100-00
}}}
* to show Service Pack levels as well
{{{
$ oslevel -s
6100-05-03-1036
}}}
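The `oslevel -s` string packs four fields together: base level, technology level, service pack, and build week. A small sketch pulling the TL and SP out of the sample value above:

```shell
# sample oslevel -s output from above: <base>-<TL>-<SP>-<build week>
level="6100-05-03-1036"
tl=$(echo "$level" | cut -d- -f2)   # technology level
sp=$(echo "$level" | cut -d- -f3)   # service pack
echo "TL=${tl} SP=${sp}"
```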
* amount of real memory 
{{{
lsattr -El sys0 -a realmem
realmem 21757952 Amount of usable physical memory in Kbytes False
}}}
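`realmem` is reported in KB, so the sample above works out to roughly 20.75 GB; a one-liner for the conversion (value hard-coded from the output above):

```shell
realmem_kb=21757952   # from lsattr -El sys0 -a realmem (value is in KB)
gb=$(awk -v kb="$realmem_kb" 'BEGIN {printf "%.2f", kb / 1024 / 1024}')
echo "${gb} GB"
```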
* display the system model name, e.g. IBM,9114-275
{{{
uname -M

-- on p6
IBM,8204-E8A

-- on p7
IBM,8205-E6C
}}}
<<<

! get CPU information
* get number of CPUs
{{{
lscfg | grep proc

-- on p6
+ proc0                                                      Processor
+ proc2                                                      Processor
+ proc4                                                      Processor
+ proc6                                                      Processor
+ proc8                                                      Processor
+ proc10                                                     Processor
+ proc12                                                     Processor
+ proc14                                                     Processor

-- on p7
+ proc0                                                                          Processor
+ proc4                                                                          Processor
}}}
* get CPU speed
{{{
lsattr -El proc0

-- on p6
frequency   4204000000     Processor Speed       False
smt_enabled true           Processor SMT enabled False
smt_threads 2              Processor SMT threads False
state       enable         Processor state       False
type        PowerPC_POWER6 Processor type        False

-- on p7
frequency   3550000000     Processor Speed       False
smt_enabled true           Processor SMT enabled False
smt_threads 4              Processor SMT threads False
state       enable         Processor state       False
type        PowerPC_POWER7 Processor type        False
}}}
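The `frequency` attribute is in Hz; converting the p6 sample above to GHz (value hard-coded from the output):

```shell
freq_hz=4204000000   # Processor Speed from lsattr -El proc0, in Hz
ghz=$(awk -v hz="$freq_hz" 'BEGIN {printf "%.3f", hz / 1000000000}')
echo "${ghz} GHz"
```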

{{{

# lsdev -Cc processor
proc0  Available 00-00 Processor
proc2  Available 00-02 Processor
proc4  Available 00-04 Processor
proc6  Available 00-06 Processor
proc8  Available 00-08 Processor
proc10 Available 00-10 Processor
Which says 6 processors but the following command shows it is only a single 6-way card:
lscfg -vp |grep -ip proc |grep "PROC"
    6 WAY PROC CUOD :
    
The problem seems to revolve around what a CPU is these days: is it a chip, a core, or a single piece of silicon wafer, with whatever resides on that counted as 1 or many?
IBM deem a core to be a CPU so they would say your system has 6 processors.
They are all on one card and may all be in one MCM / chip or there may be several MCMs / chips on that card but you have a 6 CPU system there.
lsdev shows 6 processors so AIX has configured 6 processors.
lscfg shows it is a CUoD 6 processor system and as AIX has configured all 6 it shows all 6 are activated by a suitable POD code.
The Oracle wiki at orafaq.com shows Oracle licence the Standard Edition by CPU (definition undefined) and Enterprise by core (again undefined).
http://www.orafaq.com/wiki/Oracle_Licensing
Whatever you call a cpu or a core, I would say you have a 6-way / 6-processor system there, and the fact that all 6 may or may not be on one bit of silicon wafer makes no difference.

#############################################################################

get the number of processors, their names and physical locations (lists all processors)
odmget -q"PdDvLn LIKE processor/*" CuDv

list a specific processor; this is mostly about physical location etc, nothing about single/dual core
odmget -q"PdDvLn LIKE processor/* AND name=proc0" CuDv

#############################################################################

I've checked is on LPARs on two servers - p55A and p570 - both servers 8 CPUs and seems that in p55A there are 2 4-core CPUs and in 570 4 2-core CPUs.

$ lsattr -El sys0 -a modelname
modelname IBM,9133-55A Machine name False
$ lparstat -i|grep ^Active\ Phys
Active Physical CPUs in system : 8
$ lscfg -vp|grep WAY
4-WAY PROC CUOD :
4-WAY PROC CUOD :
$ lscfg -vp|grep proc
proc0 Processor
proc2 Processor
proc4 Processor
proc6 Processor
$

$ lsattr -El sys0 -a modelname
modelname IBM,9117-570 Machine name False
$ lparstat -i|grep ^Active\ Phys
Active Physical CPUs in system : 8
$ lscfg -vp|grep WAY
2-WAY PROC CUOD :
2-WAY PROC CUOD :
2-WAY PROC CUOD :
2-WAY PROC CUOD :
$ lscfg -vp|grep proc
proc0 Processor
proc2 Processor
proc4 Processor
proc6 Processor
$

#############################################################################

p550 with 2 quad-core processors (no LPARs):

/ #>lsattr -El sys0 -a modelname
modelname IBM,9133-55A Machine name False

/ #>lparstat -i|grep Active\ Phys
Active Physical CPUs in system : 8

/ #>lscfg -vp | grep WAY
2-WAY PROC CUOD :
2-WAY PROC CUOD :

/ #>lscfg -vp |grep proc
proc0 Processor
proc2 Processor
proc4 Processor
proc6 Processor
proc8 Processor
proc10 Processor
proc12 Processor
proc14 Processor

And the further detailed lscfg -vp output shows:
2-WAY PROC CUOD :
Record Name.................VINI
Flag Field..................XXPF
Hardware Location Code......U787B.001.DNWC2F7-P1-C9
Customer Card ID Number.....8313
Serial Number...............YL10HA68E008
FRU Number..................10N6469
Part Number.................10N6469
As you can see, the part number is 10N6469, which clearly is a quad-core cpu:
http://www.searchlighttech.com/searchResults.cfm?part=10N6469

#############################################################################

Power5 and Power6 processors are both Dual Core - Dual Threads.
The next Power7 should have 8 cores, each able to execute 4 threads (due in 2010), but at a lower frequency (3.2 GHz max instead of 5.0 GHz on the POWER6).

#############################################################################

To get the information about the partition, enter the following command:
lparstat -i

#############################################################################

 lparstat -i
 lparstat
 lscfg | grep proc
 lsattr -El proc0
 uname -M
 lsattr -El sys0 -a realmem
 lscfg | grep proc
 lsdev -Cc processor
 lscfg -vp |grep -ip proc |grep "PROC"
 odmget -q"PdDvLn LIKE processor/*" CuDv
 odmget -q"PdDvLn LIKE processor/* AND name=proc0" CuDv
 odmget -q"PdDvLn LIKE processor/* AND name=proc14" CuDv
 lsattr -El sys0 -a modelname
 lparstat -i|grep ^Active\ Phys
 lscfg -vp|grep WAY
 lscfg -vp|grep proc
 lsattr -El sys0 -a modelname
 lparstat -i|grep Active\ Phys
 lscfg -vp | grep WAY
 lscfg -vp |grep proc
 lscfg -vp

#############################################################################

So the AIX box has 8 physical CPUs... now it's a bit tricky to get the real CPU% in AIX.
First you have to determine the CPUs of the machine

$ prtconf
System Model: IBM,8204-E8A
Machine Serial Number: 10F2441
Processor Type: PowerPC_POWER6
Processor Implementation Mode: POWER 6
Processor Version: PV_6_Compat
Number Of Processors: 8
Processor Clock Speed: 4204 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 2 nad0019aixp21
Memory Size: 21248 MB
Good Memory Size: 21248 MB
Platform Firmware level: Not Available
Firmware Version: IBM,EL350_132
Console Login: enable
Auto Restart: true
Full Core: false

Then, execute lparstat...
- ent 2.30 is the entitled CPU capacity
- psize is the number of physical CPUs in the shared pool
- physc 4.42 means the CPU usage went above the entitled capacity because the partition is "Uncapped"... so to get the real CPU%, just do 4.42/8 = 55% utilization
- that 55% utilization can be read against the 8 physical CPUs or the 16 logical CPUs... since that's just the percentage used, I put 60% on the prov worksheet

$ lparstat 1 10000

System configuration: type=Shared mode=Uncapped smt=On lcpu=16 mem=21247 psize=8 ent=2.30

%user  %sys  %wait  %idle physc %entc  lbusy  vcsw phint
----- ----- ------ ------ ----- ----- ------ ----- -----
 91.4   7.6    0.8    0.3  3.94 171.1   29.9  4968  1352
 92.0   6.9    0.7    0.4  3.76 163.4   26.2  4548  1054
 93.1   6.0    0.5    0.3  4.42 192.3   33.2  4606  1316
 91.3   7.5    0.7    0.5  3.74 162.6   25.6  5220  1191
 93.4   5.7    0.6    0.3  4.07 176.9   28.7  4423  1239
 93.1   6.0    0.6    0.4  4.05 176.0   29.4  4709  1164
 92.3   6.7    0.6    0.5  3.46 150.2   24.8  4299   718
 92.2   6.9    0.6    0.4  3.69 160.6   27.9  4169   973
 91.9   7.3    0.5    0.3  4.06 176.5   33.2  4248  1233
}}}
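The physc arithmetic from the notes above can be scripted; the values are hard-coded from the sample lparstat output (psize and ent from the configuration line, physc from the peak sample), rounded to whole percent:

```shell
physc=4.42   # physical CPUs consumed (peak sample above)
psize=8      # physical CPUs in the shared pool
ent=2.30     # entitled capacity
pool_util=$(awk -v p="$physc" -v s="$psize" 'BEGIN {printf "%.0f", p / s * 100}')
entc=$(awk -v p="$physc" -v e="$ent" 'BEGIN {printf "%.0f", p / e * 100}')
echo "pool utilization: ${pool_util}%  entitlement consumed: ${entc}%"
```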



! install IYs
{{{
To list all IYs
# instfix -i | pg
To show the filesets on a given IY
# instfix -avik IY59135
To commit a fileset
# smitty maintain_software
To list the fileset of an executable
# lslpp -w <full path of the executable>
To install an IY
# uncompress <file>
# tar -xvf <file>
# inutoc .
# smitty installp
}}}


! iostat

{{{
> iostat -sl

System configuration: lcpu=4 drives=88 ent=0.20 paths=176 vdisks=8

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait physc % entc
          0.3         29.5               64.5  28.6    5.1      1.9   0.9  435.5

System: 
                           Kbps      tps    Kb_read   Kb_wrtn
                         30969.7     429.9   937381114927  200661442300

Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           1.3      61.9       7.6   1479300432  794583660
...

> iostat -st

System configuration: lcpu=4 drives=88 ent=0.20 paths=176 vdisks=8

tty:      tin         tout    avg-cpu: % user % sys % idle % iowait physc % entc
          0.3         29.5               64.5  28.6    5.1      1.9   0.9  435.5

System: 
                           Kbps      tps    Kb_read   Kb_wrtn
                         30969.7     429.9   937381298349  200661442605

}}}

{{{
$ iostat -DRTl 10 100

System configuration: lcpu=16 drives=80 paths=93 vdisks=2

Disks:                     xfers                                read                                write                                  queue                    time
-------------- -------------------------------- ------------------------------------ ------------------------------------ -------------------------------------- ---------
                 %tm    bps   tps  bread  bwrtn   rps    avg    min    max time fail   wps    avg    min    max time fail    avg    min    max   avg   avg  serv
                 act                                    serv   serv   serv outs              serv   serv   serv outs        time   time   time  wqsz  sqsz qfull
hdisk3           0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk13          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk15         61.5   3.3M 162.0   3.2M  90.2K 158.4   6.4    0.2   60.0     0    0   3.5   3.3    0.7    4.6     0    0   0.5    0.0   15.7    0.0   0.1  53.6  16:05:30
hdisk14         67.3   3.4M 166.2   3.3M  67.7K 162.3   7.2    0.2   71.8     0    0   3.9   2.8    0.8    5.7     0    0   1.0    0.0   36.0    0.0   0.1  63.0  16:05:30
hdisk8          58.9   3.0M 165.2   2.9M 112.8K 160.6   5.6    0.2   57.1     0    0   4.6   3.0    0.6    5.5     0    0   0.4    0.0   18.8    0.0   0.1  43.2  16:05:30
hdisk12         57.6   3.4M 151.3   3.3M  91.8K 147.4   6.0    0.2   54.7     0    0   3.9   3.1    0.6    4.7     0    0   0.5    0.0   23.4    0.0   0.1  43.6  16:05:30
hdisk11          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk10         86.0   2.9M 144.9   2.9M  58.0K 141.4  12.7    0.3  109.3     0    0   3.5   2.8    0.8    5.1     0    0   5.3    0.0   82.6    0.0   0.1  86.2  16:05:30
hdisk9           0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk16          0.1 402.8    0.1   0.0  402.8    0.0   0.0    0.0    0.0     0    0   0.1   8.8    8.8    8.8     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk5           0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk18          1.3 391.7K  17.1   0.0  391.7K   0.0   0.0    0.0    0.0     0    0  17.1   1.0    0.5    6.2     0    0   0.0    0.0    0.1    0.0   0.0   0.1  16:05:30
hdisk7           0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk4          43.7   3.2M 150.8   3.2M  67.7K 147.0   4.0    0.3   27.6     0    0   3.8   2.9    0.7    5.0     0    0   0.3    0.0   19.4    0.0   0.0  26.1  16:05:30
hdisk17          0.3   1.2K   0.3   0.0    1.2K   0.0   0.0    0.0    0.0     0    0   0.3   7.2    5.3    8.2     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk6          67.8   3.0M 151.8   2.9M  45.1K 149.1   7.6    0.2   58.4     0    0   2.8   2.8    0.7    4.6     0    0   0.5    0.0   27.1    0.0   0.1  51.6  16:05:30
hdisk21          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk27          0.4   1.2K   0.3   0.0    1.2K   0.0   0.0    0.0    0.0     0    0   0.3  16.7    7.7   34.3     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk23         61.3   3.3M 178.8   3.3M  59.6K 175.9   5.8    0.2   63.7     0    0   2.9   2.9    0.8    5.7     0    0   0.8    0.0   61.8    0.0   0.1  57.6  16:05:30
hdisk1          64.5   3.2M 149.7   3.2M  48.3K 146.8   7.0    0.3   45.0     0    0   2.9   2.5    0.9    4.5     0    0   0.7    0.0   46.4    0.0   0.1  42.0  16:05:30
hdisk20         64.8   3.3M 148.6   3.2M  90.2K 145.0   7.1    0.3   52.5     0    0   3.5   2.7    0.9    4.9     0    0   1.0    0.0   41.7    0.0   0.1  49.8  16:05:30
hdisk22          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk28          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk19         42.6   3.5M 162.6   3.4M  68.9K 160.0   3.6    0.2   22.2     0    0   2.7   1.6    0.5    4.3     0    0   0.1    0.0    8.2    0.0   0.0  27.2  16:05:30

Disks:                     xfers                                read                                write                                  queue                    time
-------------- -------------------------------- ------------------------------------ ------------------------------------ -------------------------------------- ---------
                 %tm    bps   tps  bread  bwrtn   rps    avg    min    max time fail   wps    avg    min    max time fail    avg    min    max   avg   avg  serv
                 act                                    serv   serv   serv outs              serv   serv   serv outs        time   time   time  wqsz  sqsz qfull
hdisk0          53.9   3.0M 153.7   3.0M  41.9K 151.1   5.1    0.2   38.4     0    0   2.6   3.0    1.1    4.6     0    0   0.2    0.0   14.7    0.0   0.0  31.7  16:05:30
hdisk26          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk2          63.6   3.2M 144.1   3.2M  64.4K 141.3   7.3    0.2   72.3     0    0   2.8   3.2    0.7    4.5     0    0   0.9    0.0   28.8    0.0   0.1  46.1  16:05:30
hdisk24         56.0   2.9M 139.6   2.8M  77.3K 135.3   6.2    0.2   56.6     0    0   4.3   3.0    1.0    4.7     0    0   0.5    0.0   19.0    0.0   0.1  34.9  16:05:30
hdisk30         65.5   3.3M 156.9   3.2M  70.9K 152.7   7.1    0.3   42.8     0    0   4.2   3.0    0.7    5.6     0    0   0.6    0.0   20.2    0.0   0.1  50.1  16:05:30
hdisk33          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk34          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk37          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk41          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk40         63.5   2.8M 148.2   2.7M 103.1K 143.9   7.0    0.2   42.0     0    0   4.3   2.9    1.0    5.2     0    0   0.8    0.0   19.2    0.0   0.1  49.7  16:05:30
hdisk38         60.6   3.0M 146.1   2.9M  70.9K 142.5   7.0    0.2   64.1     0    0   3.6   2.7    0.8    5.4     0    0   0.8    0.0   24.1    0.0   0.1  45.4  16:05:30
hdisk25          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk35         50.0   4.0M 197.6   3.9M 107.9K 193.2   3.7    0.2   37.7     0    0   4.3   3.0    0.6    5.4     0    0   0.3    0.0   15.2    0.0   0.0  41.9  16:05:30
hdisk32         41.9   3.0M 159.2   3.0M  54.8K 156.0   3.5    0.2   25.7     0    0   3.2   3.4    1.0    4.8     0    0   0.1    0.0   12.6    0.0   0.0  21.7  16:05:30
hdisk36          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk42         79.7   3.0M 159.3   2.9M  83.8K 155.5  10.1    0.2   92.3     0    0   3.8   2.6    0.9    5.3     0    0   2.2    0.0   50.5    0.0   0.1  79.7  16:05:30
hdisk31          3.6   2.1M  52.7   1.7M 391.7K  35.6   0.8    0.2    7.1     0    0  17.1   1.0    0.5    3.4     0    0   0.0    0.0    0.2    0.0   0.0   1.3  16:05:30
hdisk43         42.6   2.9M 144.2   2.8M  64.4K 140.9   4.0    0.2   34.3     0    0   3.2   3.0    1.3    5.4     0    0   0.1    0.0   10.9    0.0   0.0  21.2  16:05:30
hdisk52          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk48         51.2   3.7M 165.5   3.6M  69.3K 161.4   4.6    0.2   31.7     0    0   4.1   3.0    0.6    4.7     0    0   0.3    0.0   12.7    0.0   0.0  35.5  16:05:30
hdisk47          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk44          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk51         50.1   3.7M 187.6   3.6M  90.2K 183.5   3.7    0.2   40.0     0    0   4.1   3.2    1.1    5.0     0    0   0.4    0.0   37.8    0.0   0.0  44.4  16:05:30
hdisk39          0.1  37.7K   3.5  19.3K  18.3K   1.2   0.5    0.3    1.7     0    0   2.4   0.9    0.5    4.6     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30

Disks:                     xfers                                read                                write                                  queue                    time
-------------- -------------------------------- ------------------------------------ ------------------------------------ -------------------------------------- ---------
                 %tm    bps   tps  bread  bwrtn   rps    avg    min    max time fail   wps    avg    min    max time fail    avg    min    max   avg   avg  serv
                 act                                    serv   serv   serv outs              serv   serv   serv outs        time   time   time  wqsz  sqsz qfull
hdisk49          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk57          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk45         51.5   3.0M 154.3   3.0M  54.8K 151.5   4.7    0.2   31.6     0    0   2.8   3.2    1.3    5.1     0    0   0.2    0.0   12.5    0.0   0.0  28.1  16:05:30
hdisk50          7.9   2.1M  50.2   1.7M 391.7K  33.0   2.1    0.3   23.3     0    0  17.1   1.5    0.7   18.3     0    0   0.0    0.0    0.5    0.0   0.0   2.8  16:05:30
hdisk55         64.5   3.7M 169.6   3.6M  72.5K 166.0   6.1    0.2   55.9     0    0   3.6   3.4    0.8    5.2     0    0   0.4    0.0   17.6    0.0   0.1  47.0  16:05:30
hdisk54         66.9   3.6M 165.5   3.5M  80.6K 162.3   6.7    0.3   56.3     0    0   3.2   3.0    0.5    5.0     0    0   0.9    0.0   23.7    0.0   0.1  52.7  16:05:30
hdisk53          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk56         81.9   3.2M 142.5   3.1M  83.8K 138.8  11.6    0.3  117.6     0    0   3.6   3.3    1.1    5.3     0    0   1.9    0.0   42.9    0.0   0.1  72.4  16:05:30
hdisk58         82.2   3.6M 168.2   3.6M  77.3K 164.9   9.9    0.2   84.0     0    0   3.2   2.7    0.6    5.2     0    0   1.9    0.0   45.8    0.0   0.1  88.9  16:05:30
hdisk60          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk29         52.5   3.4M 172.4   3.4M  64.4K 170.1   4.3    0.2   51.9     0    0   2.3   2.6    1.0    5.5     0    0   0.2    0.0   12.5    0.0   0.0  37.1  16:05:30
hdisk59          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk61         46.6   3.0M 157.2   2.9M  58.0K 153.7   4.1    0.2   42.8     0    0   3.5   3.5    1.4    5.3     0    0   0.1    0.0    7.8    0.0   0.0  23.1  16:05:30
hdisk63          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk62         65.5   3.0M 152.3   2.9M  74.1K 148.7   7.4    0.3   66.8     0    0   3.6   2.6    0.8    5.4     0    0   1.0    0.0   43.2    0.0   0.1  56.1  16:05:30
hdisk68          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk65          0.3  19.6K   3.0   1.3K  18.3K   0.6   2.1    0.4    6.6     0    0   2.4   1.2    0.6    2.9     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk64         42.9   3.4M 145.4   3.4M  78.9K 141.5   4.1    0.2   25.1     0    0   3.9   3.0    0.7    5.6     0    0   0.3    0.0   14.5    0.0   0.0  23.7  16:05:30
hdisk67          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk46         66.8   3.4M 165.5   3.3M  93.4K 161.8   6.8    0.2   51.6     0    0   3.7   3.1    0.6    5.0     0    0   0.6    0.0   24.2    0.0   0.1  52.1  16:05:30
hdisk71          1.6 411.0K  18.3  19.3K 391.7K   1.2   0.6    0.3    3.2     0    0  17.1   1.1    0.5    3.1     0    0   0.0    0.0    0.1    0.0   0.0   0.1  16:05:30
hdisk70         61.5   2.7M 135.8   2.7M  62.4K 132.2   7.4    0.2  107.1     0    0   3.6   3.1    0.6    4.9     0    0   0.7    0.0   25.7    0.0   0.1  39.2  16:05:30
hdisk74         86.1   3.6M 182.2   3.5M  69.3K 178.9  10.7    0.2  108.8     0    0   3.3   3.2    0.8    5.3     0    0   4.2    0.0   98.7    0.0   0.1 119.1  16:05:30
hdisk72         58.2   2.5M 130.0   2.5M  80.6K 125.7   7.1    0.3   43.8     0    0   4.3   2.9    1.0    5.3     0    0   0.8    0.0   27.0    0.0   0.1  38.6  16:05:30

Disks:                     xfers                                read                                write                                  queue                    time
-------------- -------------------------------- ------------------------------------ ------------------------------------ -------------------------------------- ---------
                 %tm    bps   tps  bread  bwrtn   rps    avg    min    max time fail   wps    avg    min    max time fail    avg    min    max   avg   avg  serv
                 act                                    serv   serv   serv outs              serv   serv   serv outs        time   time   time  wqsz  sqsz qfull
hdisk75         47.3   3.3M 160.7   3.2M  69.3K 157.1   4.0    0.2   30.9     0    0   3.5   3.2    1.2    5.0     0    0   0.2    0.0   12.7    0.0   0.0  27.9  16:05:30
hdisk78         66.2   3.3M 168.3   3.2M  70.9K 165.5   6.7    0.2   48.5     0    0   2.9   3.8    2.0    5.1     0    0   0.9    0.0   31.5    0.0   0.1  56.3  16:05:30
hdisk69          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk77          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk73          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk76          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
hdisk66          0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
cd0              0.0   0.0    0.0   0.0    0.0    0.0   0.0    0.0    0.0     0    0   0.0   0.0    0.0    0.0     0    0   0.0    0.0    0.0    0.0   0.0   0.0  16:05:30
}}}
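To rank disks in a capture like the above by %tm_act, sort on that column; a minimal sketch using a hypothetical three-disk snippet cut from the output:

```shell
# hdisk name and %tm_act pairs, as they might be cut out of iostat -DRTl output
busiest=$(sort -k2 -rn <<'EOF' | head -1 | awk '{print $1}'
hdisk15 61.5
hdisk14 67.3
hdisk8 58.9
EOF
)
echo "busiest disk: ${busiest}"
```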


''AIX commands you should not leave home without'' http://www.ibm.com/developerworks/aix/library/au-dutta_cmds.html
''AIX system identification'' http://www.ibm.com/developerworks/aix/library/au-aix-systemid.html
''Determining CPU Speed in AIX'' http://www-01.ibm.com/support/docview.wss?uid=isg3T1000107
CPU monitoring and tuning http://www.ibm.com/developerworks/aix/library/au-aix5_cpu/
Too many Virtual Processors? https://www.ibm.com/developerworks/mydeveloperworks/blogs/AIXDownUnder/entry/too_many_virtual_processors365?lang=en
AIX Virtual Processor Folding is Misunderstood https://www.ibm.com/developerworks/mydeveloperworks/blogs/aixpert/entry/aix_virtual_processor_folding_in_misunderstood110?lang=en
How to find physical CPU socket count for IBM AIX http://www.tek-tips.com/viewthread.cfm?qid=1623771
Single/Dual Core Processor http://www.ibm.com/developerworks/forums/message.jspa?messageID=14270797
http://pic.dhe.ibm.com/infocenter/aix/v7r1/index.jsp?topic=%2Fcom.ibm.aix.cmds%2Fdoc%2Faixcmds3%2Flparstat.htm
lparstat command http://www.ibm.com/developerworks/forums/thread.jspa?messageID=14772565
Micropartitioning and Lparstat Output Virtual/Physical http://unix.ittoolbox.com/groups/technical-functional/ibm-aix-l/micropartitioning-and-lparstat-output-virtualphysical-4241112
Capped/Uncapped Partitions http://www.ibmsystemsmag.com/ibmi/trends/linux/See-Linux-Run/Sidebar--Capped-Uncapped-Partitions/
IBM PowerVM Virtualization Introduction and Configuration http://www.redbooks.ibm.com/abstracts/sg247940.html
iostat http://www.wmduszyk.com/wp-content/uploads/2011/01/PE23_Braden_Nasypany.pdf






















https://en.wikipedia.org/wiki/Application_lifecycle_management

''12c'' Getting Started with Oracle Application Management Pack (AMP) for Oracle E-Business Suite, Release 12.1.0.1 [ID 1434392.1]
''11g'' Getting Started with Oracle E-Business Suite Plug-in, Release 4.0 [ID 1224313.1]
''10g'' Getting Started with Oracle Application Management Pack and Oracle Application Change Management Pack for Oracle E-Business Suite, Release 3.1 [ID 982302.1]
''Application Management Suite for PeopleSoft (AMS4PSFT)'' http://www.oracle.com/technetwork/oem/app-mgmt/ds-apps-mgmt-suite-psft-166219.pdf
http://download.oracle.com/technology/products/oem/screenwatches/peoplesoft_amp/PeopleSoft_final.html
http://www.psoftsearch.com/managing-peoplesoft-with-application-management-suite/

http://www.oracle.com/technetwork/oem/em12c-screenwatches-512013.html#app_mgmt
https://apex.oracle.com/pls/apex/f?p=44785:24:9222314894074::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:6415,2

<<<
''With AMS we bundled the licenses of AMP and RUEI together in a single SKU. AMP already had multiple features in it, of course.''
<<<
''11g'' 
http://gasparotto.blogspot.com/2011/04/manage-peoplesoft-with-oem-grid-control.html
http://gasparotto.blogspot.com/2011/04/manage-peoplesoft-with-oem-grid-control_08.html
http://gasparotto.blogspot.com/2011/04/manage-peoplesoft-with-oem-grid-control_09.html
peoplesoft plugin 8.52 install, peoplesoft plugin agent install,  http://oraclehowto.wordpress.com/category/oracle-enterprise-manager-11g-plugins/peoplesoft-plugin/

''10g'' http://www.oracle.com/us/products/enterprise-manager/mgmt-pack-for-psft-ds-068946.pdf?ssSourceSiteId=ocomcafr
http://modern-sql.com


http://gigaom.com/2012/10/30/meet-arms-two-newest-cores-for-faster-phones-and-greener-servers/
http://gigaom.com/cloud/facebook-amd-hp-and-others-team-up-to-plan-the-arm-data-center-takeover/
''the consortium'' http://www.linaro.org/linux-on-arm
http://www.arm.com/index.php

''ARM and moore's law'' http://www.technologyreview.com/news/507116/moores-law-is-becoming-irrelevant/, http://www.technologyreview.com/news/428481/the-moores-law-moon-shot/
https://sites.google.com/site/embtdbo/wait-event-documentation/ash---active-session-history
ASH patent http://www.google.com/patents?id=cQWbAAAAEBAJ&pg=PA2&source=gbs_selected_pages&cad=3#v=onepage&q&f=false
Practical ASH http://www.scribd.com/rvenrdra/d/44100090-Practical-Advice-on-the-Use-of-Oracle-Database-s-Active-Session-History
magic metric? http://wenku.baidu.com/view/7d07b81b964bcf84b9d57b48.html?from=related
Sifting through the ASHes http://www.oracle.com/technetwork/database/focus-areas/manageability/ppt-active-session-history-129612.pdf





{{{
col name for a12
col program for a25
col calling_code for a30
col CPU for 9999
col IO for 9999
col TOTAL for 99999
col WAIT for 9999
col user_id for 99999
col sid for 9999
col sql_text format a10

set linesize 300

select /* usercheck */
        decode(nvl(to_char(s.sid),-1),-1,'DISCONNECTED','CONNECTED')
                                                        "STATUS",
        topsession.sid             "SID",
        topsession.serial#,
        u.username  "NAME",
        topsession.program                  "PROGRAM",
        topsession.sql_plan_hash_value,
        topsession.sql_id,        
        st.sql_text sql_text,
        topsession."calling_code",
        max(topsession.CPU)              "CPU",
        max(topsession.WAIT)       "WAITING",
        max(topsession.IO)                  "IO",
        max(topsession.TOTAL)           "TOTAL", 
        round((s.LAST_CALL_ET/60),2) ELAP_MIN
from (
				select * 
				from (
								select
								     ash.session_id sid,
								     ash.session_serial# serial#,
								     ash.user_id user_id,
								     ash.program,
								     ash.sql_plan_hash_value,
								     ash.sql_id, 
								    procs1.object_name || decode(procs1.procedure_name,'','','.')||
								    procs1.procedure_name ||' '||
								    decode(procs2.object_name,procs1.object_name,'',
									 decode(procs2.object_name,'','',' => '||procs2.object_name)) 
								    ||
								    decode(procs2.procedure_name,procs1.procedure_name,'',
								        decode(procs2.procedure_name,'','',null,'','.')||procs2.procedure_name)
								    "calling_code",	     
								     sum(decode(ash.session_state,'ON CPU',1,0))     "CPU",
								     sum(decode(ash.session_state,'WAITING',1,0))    -
								     sum(decode(ash.session_state,'WAITING',
								        decode(wait_class,'User I/O',1, 0 ), 0))    "WAIT" ,
								     sum(decode(ash.session_state,'WAITING',
								        decode(wait_class,'User I/O',1, 0 ), 0))    "IO" ,
								     sum(decode(session_state,'ON CPU',1,1))     "TOTAL"
								from 
									v$active_session_history ash,
									all_procedures procs1,
	                                all_procedures procs2
								where 
							        ash.PLSQL_ENTRY_OBJECT_ID  = procs1.object_id (+) and 
							        ash.PLSQL_ENTRY_SUBPROGRAM_ID = procs1.SUBPROGRAM_ID (+) and 
							        ash.PLSQL_OBJECT_ID   = procs2.object_id (+) and 
							        ash.PLSQL_SUBPROGRAM_ID  = procs2.SUBPROGRAM_ID (+) 
                                        and ash.sample_time > sysdate - 1
								group by session_id,user_id,session_serial#,program,sql_id,sql_plan_hash_value, 
								         procs1.object_name, procs1.procedure_name, procs2.object_name, procs2.procedure_name
								order by sum(decode(session_state,'ON CPU',1,1)) desc
				     ) 
				 where rownum < 10
      ) topsession,
        v$session s,
        (select sql_id, dbid, nvl(b.name, a.command_type) sql_text from dba_hist_sqltext a, audit_actions b where a.command_type =  b.action(+)) st,
        all_users u
where
        u.user_id =topsession.user_id and
        /* outer join to v$session because the session might be disconnected */
        topsession.sid         = s.sid         (+) and
        topsession.serial# = s.serial#   (+)   and
		st.sql_id(+)             = s.sql_id
		and topsession."calling_code" like '%&PACKAGE_NAME%'
group by  topsession.sid, topsession.serial#,
             topsession.user_id, topsession.program, topsession.sql_plan_hash_value, topsession.sql_id,
                     topsession."calling_code",
             s.username, s.sid,s.paddr,u.username, st.sql_text, s.LAST_CALL_ET
order by max(topsession.TOTAL) desc
/
}}}
{{{
col name for a12
col program for a25
col calling_code for a30
col CPU for 9999
col IO for 9999
col TOTAL for 99999
col WAIT for 9999
col user_id for 99999
col sid for 9999
col sql_text format a10

set linesize 300

select /* usercheck */
        decode(nvl(to_char(s.sid),-1),-1,'DISCONNECTED','CONNECTED')
                                                        "STATUS",
        topsession.sid             "SID",
        topsession.serial#,
        u.username  "NAME",
        topsession.program                  "PROGRAM",
        topsession.sql_plan_hash_value,
        topsession.sql_id,
        st.sql_text sql_text,
        topsession."calling_code",
        max(topsession.CPU)              "CPU",
        max(topsession.WAIT)       "WAITING",
        max(topsession.IO)                  "IO",
        max(topsession.TOTAL)           "TOTAL",
        round((s.LAST_CALL_ET/60),2) ELAP_MIN
from (
        select *
        from (
                select
                     ash.session_id sid,
                     ash.session_serial# serial#,
                     ash.user_id user_id,
                     ash.program,
                     ash.sql_plan_hash_value,
                     ash.sql_id,
                    procs1.object_name || decode(procs1.procedure_name,'','','.')||
                    procs1.procedure_name ||' '||
                    decode(procs2.object_name,procs1.object_name,'',
                         decode(procs2.object_name,'','',' => '||procs2.object_name))
                    ||
                    decode(procs2.procedure_name,procs1.procedure_name,'',
                        decode(procs2.procedure_name,'','',null,'','.')||procs2.procedure_name)
                    "calling_code",
                     sum(decode(ash.session_state,'ON CPU',1,0))     "CPU",
                     sum(decode(ash.session_state,'WAITING',1,0))    -
                     sum(decode(ash.session_state,'WAITING',
                        decode(wait_class,'User I/O',1, 0 ), 0))    "WAIT" ,
                     sum(decode(ash.session_state,'WAITING',
                        decode(wait_class,'User I/O',1, 0 ), 0))    "IO" ,
                     sum(decode(session_state,'ON CPU',1,1))     "TOTAL"
                from
                        dba_hist_active_sess_history ash,
                        all_procedures procs1,
                        all_procedures procs2
                where
                        ash.PLSQL_ENTRY_OBJECT_ID  = procs1.object_id (+) and
                        ash.PLSQL_ENTRY_SUBPROGRAM_ID = procs1.SUBPROGRAM_ID (+) and
                        ash.PLSQL_OBJECT_ID   = procs2.object_id (+) and
                        ash.PLSQL_SUBPROGRAM_ID  = procs2.SUBPROGRAM_ID (+)
                        and ash.sample_time > sysdate - 99
                group by session_id,user_id,session_serial#,program,sql_id,sql_plan_hash_value,
                         procs1.object_name, procs1.procedure_name, procs2.object_name, procs2.procedure_name
                order by sum(decode(session_state,'ON CPU',1,1)) desc
        )
        where rownum < 50
      ) topsession,
        v$session s,
        (select sql_id, dbid, nvl(b.name, a.command_type) sql_text from dba_hist_sqltext a, audit_actions b where a.command_type =  b.action(+)) st,
        all_users u
where
        u.user_id =topsession.user_id and
        /* outer join to v$session because the session might be disconnected */
        topsession.sid         = s.sid         (+) and
        topsession.serial# = s.serial#   (+)   and
                st.sql_id(+)             = s.sql_id
                and topsession."calling_code" like '%&PACKAGE_NAME%'
group by  topsession.sid, topsession.serial#,
             topsession.user_id, topsession.program, topsession.sql_plan_hash_value, topsession.sql_id,
                     topsession."calling_code",
             s.username, s.sid,s.paddr,u.username, st.sql_text, s.LAST_CALL_ET
order by max(topsession.TOTAL) desc
/
}}}
{{{
col name for a12
col program for a25
col calling_code for a30
col CPU for 9999
col IO for 9999
col TOTAL for 99999
col WAIT for 9999
col user_id for 99999
col sid for 9999
col sql_text format a10

set linesize 300

select /* usercheck */
        decode(nvl(to_char(s.sid),-1),-1,'DISCONNECTED','CONNECTED')
                                                        "STATUS",
        topsession.sid             "SID",
        topsession.serial#,
        u.username  "NAME",
        topsession.program                  "PROGRAM",
        topsession.sql_plan_hash_value,
        topsession.sql_id,
        st.sql_text sql_text,
        topsession."calling_code",
        max(topsession.CPU)              "CPU",
        max(topsession.WAIT)       "WAITING",
        max(topsession.IO)                  "IO",
        max(topsession.TOTAL)           "TOTAL",
        round((s.LAST_CALL_ET/60),2) ELAP_MIN
from (
        select *
        from (
                select
                     ash.session_id sid,
                     ash.session_serial# serial#,
                     ash.user_id user_id,
                     ash.program,
                     ash.sql_plan_hash_value,
                     ash.sql_id,
                    procs1.object_name || decode(procs1.procedure_name,'','','.')||
                    procs1.procedure_name ||' '||
                    decode(procs2.object_name,procs1.object_name,'',
                         decode(procs2.object_name,'','',' => '||procs2.object_name))
                    ||
                    decode(procs2.procedure_name,procs1.procedure_name,'',
                        decode(procs2.procedure_name,'','',null,'','.')||procs2.procedure_name)
                    "calling_code",
                     sum(decode(ash.session_state,'ON CPU',1,0))     "CPU",
                     sum(decode(ash.session_state,'WAITING',1,0))    -
                     sum(decode(ash.session_state,'WAITING',
                        decode(wait_class,'User I/O',1, 0 ), 0))    "WAIT" ,
                     sum(decode(ash.session_state,'WAITING',
                        decode(wait_class,'User I/O',1, 0 ), 0))    "IO" ,
                     sum(decode(session_state,'ON CPU',1,1))     "TOTAL"
                from
                        v$active_session_history ash,
                        all_procedures procs1,
                        all_procedures procs2
                where
                        ash.PLSQL_ENTRY_OBJECT_ID  = procs1.object_id (+) and
                        ash.PLSQL_ENTRY_SUBPROGRAM_ID = procs1.SUBPROGRAM_ID (+) and
                        ash.PLSQL_OBJECT_ID   = procs2.object_id (+) and
                        ash.PLSQL_SUBPROGRAM_ID  = procs2.SUBPROGRAM_ID (+)
                        and ash.sample_time > sysdate - 1
                group by session_id,user_id,session_serial#,program,sql_id,sql_plan_hash_value,
                         procs1.object_name, procs1.procedure_name, procs2.object_name, procs2.procedure_name
                order by sum(decode(session_state,'ON CPU',1,1)) desc
        )
        where rownum < 50
      ) topsession,
        v$session s,
        (select sql_id, dbid, nvl(b.name, a.command_type) sql_text from dba_hist_sqltext a, audit_actions b where a.command_type =  b.action(+)) st,
        all_users u
where
        u.user_id =topsession.user_id and
        /* outer join to v$session because the session might be disconnected */
        topsession.sid         = s.sid         (+) and
        topsession.serial# = s.serial#   (+)   and
                st.sql_id(+)             = s.sql_id
       and topsession.sql_id = '&SQLID'
group by  topsession.sid, topsession.serial#,
             topsession.user_id, topsession.program, topsession.sql_plan_hash_value, topsession.sql_id,
                     topsession."calling_code",
             s.username, s.sid,s.paddr,u.username, st.sql_text, s.LAST_CALL_ET
order by max(topsession.TOTAL) desc
/

}}}
{{{
$ cat ashtop
#!/bin/bash

while :; do
sqlplus "/ as sysdba" <<-EOF
@ashtop.sql
EOF
sleep 5
echo
done
}}}


{{{
-- (c) Kyle Hailey 2007, edited by Karl Arao 20091217

col name for a12
col program for a25
col calling_code for a25
col CPU for 9999
col IO for 9999
col TOTAL for 99999
col WAIT for 9999
col user_id for 99999
col sid for 9999
col sql_text format a10

set linesize 300

select /* usercheck */
        decode(nvl(to_char(s.sid),-1),-1,'DISCONNECTED','CONNECTED')
                                                        "STATUS",
        topsession.sid             "SID",
        topsession.serial#,
        u.username  "NAME",
        topsession.program                  "PROGRAM",
        topsession.sql_plan_hash_value,
        topsession.sql_id,        
        st.sql_text sql_text,
        topsession."calling_code",
        max(topsession.CPU)              "CPU",
        max(topsession.WAIT)       "WAITING",
        max(topsession.IO)                  "IO",
        max(topsession.TOTAL)           "TOTAL", 
        round((s.LAST_CALL_ET/60),2) ELAP_MIN
from (
				select * 
				from (
								select
								     ash.session_id sid,
								     ash.session_serial# serial#,
								     ash.user_id user_id,
								     ash.program,
								     ash.sql_plan_hash_value,
								     ash.sql_id, 
								    procs1.object_name || decode(procs1.procedure_name,'','','.')||
								    procs1.procedure_name ||' '||
								    decode(procs2.object_name,procs1.object_name,'',
									 decode(procs2.object_name,'','',' => '||procs2.object_name)) 
								    ||
								    decode(procs2.procedure_name,procs1.procedure_name,'',
								        decode(procs2.procedure_name,'','',null,'','.')||procs2.procedure_name)
								    "calling_code",	     
								     sum(decode(ash.session_state,'ON CPU',1,0))     "CPU",
								     sum(decode(ash.session_state,'WAITING',1,0))    -
								     sum(decode(ash.session_state,'WAITING',
								        decode(wait_class,'User I/O',1, 0 ), 0))    "WAIT" ,
								     sum(decode(ash.session_state,'WAITING',
								        decode(wait_class,'User I/O',1, 0 ), 0))    "IO" ,
								     sum(decode(session_state,'ON CPU',1,1))     "TOTAL"
								from 
									v$active_session_history ash,
									all_procedures procs1,
	                                all_procedures procs2
								where 
							        ash.PLSQL_ENTRY_OBJECT_ID  = procs1.object_id (+) and 
							        ash.PLSQL_ENTRY_SUBPROGRAM_ID = procs1.SUBPROGRAM_ID (+) and 
							        ash.PLSQL_OBJECT_ID   = procs2.object_id (+) and 
							        ash.PLSQL_SUBPROGRAM_ID  = procs2.SUBPROGRAM_ID (+) 
                                        and ash.sample_time > sysdate - 1/(60*24)
								group by session_id,user_id,session_serial#,program,sql_id,sql_plan_hash_value, 
								         procs1.object_name, procs1.procedure_name, procs2.object_name, procs2.procedure_name
								order by sum(decode(session_state,'ON CPU',1,1)) desc
				     ) 
				 where rownum < 10
      ) topsession,
        v$session s,
        (select sql_id, dbid, nvl(b.name, a.command_type) sql_text from dba_hist_sqltext a, audit_actions b where a.command_type =  b.action(+)) st,
        all_users u
where
        u.user_id =topsession.user_id and
        /* outer join to v$session because the session might be disconnected */
        topsession.sid         = s.sid         (+) and
        topsession.serial# = s.serial#   (+)   and
		st.sql_id(+)             = s.sql_id
group by  topsession.sid, topsession.serial#,
             topsession.user_id, topsession.program, topsession.sql_plan_hash_value, topsession.sql_id,
                     topsession."calling_code",
             s.username, s.sid,s.paddr,u.username, st.sql_text, s.LAST_CALL_ET
order by max(topsession.TOTAL) desc
/

}}}

{{{
grant CREATE SESSION to karlarao;
grant SELECT_CATALOG_ROLE to karlarao;
grant SELECT ANY DICTIONARY to karlarao;
}}}
usage:
{{{
./ash
or 
sh ash
}}}


Create the file and do ''chmod 755 ash''. The wrapper calls aveactn300.sql in a loop every 5 seconds:
{{{
$ cat ~/dba/bin/ash
#!/bin/bash

while :; do
sqlplus "/ as sysdba" <<-EOF
@/home/oracle/dba/scripts/aveactn300.sql
EOF
sleep 5
echo
done
}}}


{{{
$ cat /home/oracle/dba/scripts/aveactn300.sql
-- (c) Kyle Hailey 2007

set lines 500
column f_days new_value v_days
select 1 f_days from dual;
column f_secs new_value v_secs
select 5 f_secs from dual;
--select &seconds f_secs from dual;
column f_bars new_value v_bars
select 5 f_bars from dual;
column aveact format 999.99
column graph format a50


column fpct format 99.99
column spct format 99.99
column tpct format 99.99
column fasl format 999.99
column sasl format 999.99
column first format a40
column second format a40


select to_char(start_time,'DD HH:MI:SS'),
       samples,
       --total,
       --waits,
       --cpu,
       round(fpct * (total/samples),2) fasl,
       decode(fpct,null,null,first) first,
       round(spct * (total/samples),2) sasl,
       decode(spct,null,null,second) second,
        substr(substr(rpad('+',round((cpu*&v_bars)/samples),'+') ||
        rpad('-',round((waits*&v_bars)/samples),'-')  ||
        rpad(' ',p.value * &v_bars,' '),0,(p.value * &v_bars)) ||
        p.value  ||
        substr(rpad('+',round((cpu*&v_bars)/samples),'+') ||
        rpad('-',round((waits*&v_bars)/samples),'-')  ||
        rpad(' ',p.value * &v_bars,' '),(p.value * &v_bars),10) ,0,50)
        graph
     --  spct,
     --  decode(spct,null,null,second) second,
     --  tpct,
     --  decode(tpct,null,null,third) third
from (
select start_time
     , max(samples) samples
     , sum(top.total) total
     , round(max(decode(top.seq,1,pct,null)),2) fpct
     , substr(max(decode(top.seq,1,decode(top.event,'ON CPU','CPU',event),null)),0,25) first
     , round(max(decode(top.seq,2,pct,null)),2) spct
     , substr(max(decode(top.seq,2,decode(top.event,'ON CPU','CPU',event),null)),0,25) second
     , round(max(decode(top.seq,3,pct,null)),2) tpct
     , substr(max(decode(top.seq,3,decode(top.event,'ON CPU','CPU',event),null)),0,25) third
     , sum(waits) waits
     , sum(cpu) cpu
from (
  select
       to_date(tday||' '||tmod*&v_secs,'YYMMDD SSSSS') start_time
     , event
     , total
     , row_number() over ( partition by id order by total desc ) seq
     , ratio_to_report( sum(total)) over ( partition by id ) pct
     , max(samples) samples
     , sum(decode(event,'ON CPU',total,0))    cpu
     , sum(decode(event,'ON CPU',0,total))    waits
  from (
    select
         to_char(sample_time,'YYMMDD')                      tday
       , trunc(to_char(sample_time,'SSSSS')/&v_secs)          tmod
       , to_char(sample_time,'YYMMDD')||trunc(to_char(sample_time,'SSSSS')/&v_secs) id
       , decode(ash.session_state,'ON CPU','ON CPU',ash.event)     event
       , sum(decode(session_state,'ON CPU',1,decode(session_type,'BACKGROUND',0,1))) total
       , (max(sample_id)-min(sample_id)+1)                    samples
     from
        v$active_session_history ash
     where
               sample_time > sysdate - &v_days
     group by  trunc(to_char(sample_time,'SSSSS')/&v_secs)
            ,  to_char(sample_time,'YYMMDD')
            ,  decode(ash.session_state,'ON CPU','ON CPU',ash.event)
     order by
               to_char(sample_time,'YYMMDD'),
               trunc(to_char(sample_time,'SSSSS')/&v_secs)
  )  chunks
  group by id, tday, tmod, event, total
) top
group by start_time
) aveact,
  v$parameter p
where p.name='cpu_count'
order by start_time
/
}}}
I got the job chain info of IBM Curam batch from the dev team

Here are the details of how the batch works 
<<<
        IBM Cúram Social Program Management 7.0.10 - 7.0.11

        Batch Streaming Architecture
        https://www.ibm.com/support/knowledgecenter/SS8S5A_7.0.11/com.ibm.curam.content.doc/BatchPerformanceMechanisms/c_BATCHPER_Architecture1BatchStreamingArchitecture1.html
        The Chunker
        https://www.ibm.com/support/knowledgecenter/SS8S5A_7.0.11/com.ibm.curam.content.doc/BatchPerformanceMechanisms/c_BATCHPER_Architecture1Chunker1.html
        The Stream
        https://www.ibm.com/support/knowledgecenter/SS8S5A_7.0.11/com.ibm.curam.content.doc/BatchPerformanceMechanisms/c_BATCHPER_Architecture1Stream1.html
<<<

Here's the SQL to pull data from the SCHEDULER_HISTORY table

{{{
 SELECT *
FROM   (SELECT Substr(H.job_name, Instr(H.job_name, 'jobId-') + 6, 20)
               JOB_ID
                      ,
               Substr(H.job_name, Instr(H.job_name, 'job-') + 4,
               Instr(H.job_name, '-jobId-') - ( Instr(H.job_name, 'job-') + 4 ))
                      JOB_NAME,
               H.start_time,
               H.end_time,
               Regexp_substr(H.job_name, '[A-Za-z0-9\-]+',
               Instr(H.job_name, '/'))
                      FUNCTIONAL_AREA,
               Nvl(Regexp_substr(H.job_name, 'tier-[0-9]+'), 'N/A')
               TIER,
               Substr(H.job_name, 1, Instr(H.job_name, '/') - 1)
                      ORDERED_OR_STANDALONE
        FROM   scheduler_history H
        WHERE  ( ( H.start_time BETWEEN :startTime AND :endTime )
                  OR ( :startTime BETWEEN H.start_time AND H.end_time
                        OR ( :startTime >= H.start_time
                             AND H.end_time IS NULL ) ) )
               AND H.start_time > To_date(:startTime, 'YYYYMMDD HH24:MI:SS') - 2
               AND H.job_name LIKE '%jobId%'
               AND H.job_name NOT LIKE '%parallel%'
               AND H.job_name NOT LIKE '%snyc%'
               AND H.job_name NOT LIKE '%Reporting%'
               AND H.job_name NOT LIKE '%Stream%'
        ORDER  BY 5,
                  3) sub1
ORDER  BY 3 ASC  
}}}
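
To sanity-check the Substr/Instr parsing, you can run the same expressions against a single made-up job_name from dual. The name below is hypothetical; it just mirrors the ordered/area/job-Name-jobId-N pattern the query assumes:

{{{
-- hypothetical job_name, format inferred from the parsing logic above
SELECT Substr(n, Instr(n, 'jobId-') + 6, 20)                     job_id,
       Substr(n, Instr(n, 'job-') + 4,
              Instr(n, '-jobId-') - ( Instr(n, 'job-') + 4 ))    job_name,
       Regexp_substr(n, '[A-Za-z0-9\-]+', Instr(n, '/'))         functional_area,
       Nvl(Regexp_substr(n, 'tier-[0-9]+'), 'N/A')               tier,
       Substr(n, 1, Instr(n, '/') - 1)                           ordered_or_standalone
FROM   (SELECT 'ordered/financials-tier-2/job-PaymentBatch-jobId-1001' n FROM dual);
-- should return: 1001 | PaymentBatch | financials-tier-2 | tier-2 | ordered
}}}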

This is the dependency diagram of the jobs. I defined levels 0 to 5 to clearly show the sequential dependencies in the data set


[img(100%,100%)[ https://user-images.githubusercontent.com/3683046/116051201-53df5980-a646-11eb-8acc-85946c99a655.png]]



Here's the calculated field I used. The Tableau developer needs to complete it as reflected in the diagram above; what I did covers only the jobs executed in the 20210317.xlsx data set

{{{
IF contains(lower(trim([Functional Area])),'daytimebatch')=true then 'level 0'
ELSEIF contains(lower(trim([Functional Area])),'standalone-only')=true then 'level 0'
ELSEIF contains(lower(trim([Functional Area])),'post-start-of-batchjobs')=true then 'level 1'

ELSEIF contains(lower(trim([Functional Area])),'recipientfile')=true then 'level 2'
ELSEIF contains(lower(trim([Functional Area])),'pre-financials')=true then 'level 2'

ELSEIF contains(lower(trim([Functional Area])),'post-financials-reports')=true then 'level 4'
ELSEIF contains(lower(trim([Functional Area])),'post-financials')=true then 'level 4'
ELSEIF contains(lower(trim([Functional Area])),'post-financials2')=true then 'level 4'
ELSEIF contains(lower(trim([Functional Area])),'pre-bulkprint')=true then 'level 4'

ELSEIF contains(lower(trim([Functional Area])),'bulkprint')=true then 'level 5'
ELSEIF contains(lower(trim([Functional Area])),'ebt-2')=true then 'level 5'
ELSEIF contains(lower(trim([Functional Area])),'ebt-response')=true then 'level 5'

ELSEIF contains(lower(trim([Functional Area])),'financials')=true then 'level 3'
ELSEIF contains(lower(trim([Functional Area])),'ebt')=true then 'level 4'

ELSE 'OTHER' END
}}}
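
The same mapping can be sketched as a SQL CASE if you'd rather compute the level in the database than in Tableau. Order matters for the overlapping financials patterns, exactly as in the ELSEIF chain above; ''jobs'' here is an assumed view over the earlier SCHEDULER_HISTORY query, with its FUNCTIONAL_AREA column:

{{{
-- sketch only: jobs is assumed to wrap the SCHEDULER_HISTORY query above
SELECT functional_area,
       CASE
         WHEN fa LIKE '%daytimebatch%'            THEN 'level 0'
         WHEN fa LIKE '%standalone-only%'         THEN 'level 0'
         WHEN fa LIKE '%post-start-of-batchjobs%' THEN 'level 1'
         WHEN fa LIKE '%recipientfile%'           THEN 'level 2'
         WHEN fa LIKE '%pre-financials%'          THEN 'level 2'
         WHEN fa LIKE '%post-financials%'         THEN 'level 4'  -- also covers -reports and 2
         WHEN fa LIKE '%pre-bulkprint%'           THEN 'level 4'
         WHEN fa LIKE '%bulkprint%'               THEN 'level 5'
         WHEN fa LIKE '%ebt-2%'                   THEN 'level 5'
         WHEN fa LIKE '%ebt-response%'            THEN 'level 5'
         WHEN fa LIKE '%financials%'              THEN 'level 3'
         WHEN fa LIKE '%ebt%'                     THEN 'level 4'
         ELSE 'OTHER'
       END batch_level
FROM   (SELECT functional_area, Lower(Trim(functional_area)) fa FROM jobs);
}}}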



Here's the Gantt chart.

From here we can be tactical and systematic about tuning. We can identify the blocking jobs and the longest-running jobs and see how they impact the overall batch elapsed time. We can then isolate these jobs in the Dynatrace instrumentation, or even run an identified poorly performing batch standalone and profile/tune its top SQLs.


[img(100%,100%)[ https://user-images.githubusercontent.com/3683046/116051156-4924c480-a646-11eb-85bf-265efbe56fa4.png ]]




Here's how to create the Gantt chart. The Tableau developer needs to tap the SCHEDULER_HISTORY table directly instead of relying on a data dump from SQL


[img(100%,100%)[ https://user-images.githubusercontent.com/3683046/116051200-5346c300-a646-11eb-88b1-6fbab7b01cb0.png ]]



https://www.linkedin.com/pulse/estimating-oltp-execution-latencies-using-ash-john-beresniewicz
{{{
WITH
    ash_summary
AS
(select
     ash.sql_id
    ,SUM(usecs_per_row)                     as DBtime_usecs
    ,1+MAX(sql_exec_id) - MIN(sql_exec_id)  as execs
    ,SUM(usecs_per_row)/(1+MAX(sql_exec_id) - MIN(sql_exec_id))/1000   
                                            as avg_latency_msec_ash
    ,SUM(elapsed_time)/SUM(executions)/1000               
                                            as avg_latency_msec_sqlstats
    ,MAX(substr(sql_text,1,150))            as sqltext
from
     v$active_session_history      ash
    ,v$sqlstats                    sql
where
    ash.sql_id is not null
and ash.sql_exec_id is not null
and sql.executions is not null
and sql.executions > 0
and ash.sql_id = sql.sql_id
group by
    ash.sql_id
)
select
    sql_id
    ,DBtime_usecs
    ,execs
    ,ROUND(avg_latency_msec_ash,6)      ash_latency_msec
    ,ROUND(avg_latency_msec_sqlstats,6) sqlstats_latency_msec
    ,sqltext
from
    ash_summary

order by 3 desc;

}}}
<<showtoc>>


! End to end picture of ORMB and OBIEE performance
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047450-17a9fa00-a642-11eb-83fc-06b6d9c2c482.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047452-17a9fa00-a642-11eb-939f-1ef82583b6c9.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047454-18429080-a642-11eb-9ccf-d81de4442dd3.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047455-18429080-a642-11eb-8b99-93520ddd7c29.png]]
[img[https://user-images.githubusercontent.com/3683046/116047456-18db2700-a642-11eb-8ca5-1c09c3ebc379.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047465-1a0c5400-a642-11eb-8906-e790b70a9195.png]]
[img[https://user-images.githubusercontent.com/3683046/116047466-1a0c5400-a642-11eb-900d-be731beb1eb6.png]]
[img[https://user-images.githubusercontent.com/3683046/116047473-1b3d8100-a642-11eb-9392-c3ef0e6e7334.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047475-1b3d8100-a642-11eb-9a0e-e8c8d24c126f.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047476-1b3d8100-a642-11eb-98db-f14e777cc084.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047479-1bd61780-a642-11eb-85c2-dd7a43bcad28.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047484-1bd61780-a642-11eb-8e24-c93d0ceea5e3.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047485-1c6eae00-a642-11eb-850d-f6f8b1ddd015.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047487-1c6eae00-a642-11eb-811d-099f8aeca152.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047491-1d9fdb00-a642-11eb-8eaf-5856619d69e8.png]]
[img[https://user-images.githubusercontent.com/3683046/116047492-1d9fdb00-a642-11eb-8938-1bc8d59b78e5.png]]
[img(100%,100%)[https://user-images.githubusercontent.com/3683046/116047494-1ed10800-a642-11eb-9ec0-af7f85d9ad0a.png]]



! Logic to separate the workload of OBIEE


Here's what I used on the ASH data to separate the workload of OBIEE.
* BIP reports (front end)
* ODI ETL jobs
* nqsserver (OBIEE processes)

{{{
Tableau calculated field: 
IF contains(lower(trim([Module])),'bip')=true THEN 'BIP'
ELSEIF contains(lower(trim([Module])),'odi')=true THEN 'ODI'
ELSEIF contains(lower(trim([Module])),'nqs')=true THEN 'nqsserver'
ELSE 'OTHER' END
}}}

[img[ https://user-images.githubusercontent.com/3683046/116047456-18db2700-a642-11eb-8ca5-1c09c3ebc379.png ]]

Some of the reports are instrumented well enough that the ACTION column shows the report number, but separating the workload by MODULE is the most reliable approach.
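
The same split can be sketched straight in SQL against ASH before the data ever reaches Tableau (each ASH sample is roughly one second of DB time):

{{{
-- sketch: module/action workload breakdown straight from ASH
SELECT workload,
       action,                 -- shows the report number where instrumented
       Count(*) ash_secs       -- ~DB seconds (1-second ASH samples)
FROM   (SELECT CASE
                 WHEN Lower(module) LIKE '%bip%' THEN 'BIP'
                 WHEN Lower(module) LIKE '%odi%' THEN 'ODI'
                 WHEN Lower(module) LIKE '%nqs%' THEN 'nqsserver'
                 ELSE 'OTHER'
               END workload,
               action
        FROM   v$active_session_history)
GROUP  BY workload, action
ORDER  BY 3 DESC;
}}}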
-- from http://www.perfvision.com/statspack/ash.txt

{{{
ASH Report For CDB10/cdb10
DB Name         DB Id    Instance     Inst Num Release     RAC Host
CPUs           SGA Size       Buffer Cache        Shared Pool    ASH Buffer Size
Top User Events
Top Background Events
Top Event P1/P2/P3 Values
Top Service/Module
Top Client IDs
Top SQL Command Types
Top SQL Statements
Top SQL using literals
Top Sessions
Top Blocking Sessions
Top DB Objects
Top DB Files
Top Latches
Activity Over Time
}}}
https://blog.tanelpoder.com/2011/10/24/what-the-heck-is-the-sql-execution-id-sql_exec_id/
-- from http://www.perfvision.com/statspack/ash.txt

{{{
ASH Report For CDB10/cdb10

DB Name         DB Id    Instance     Inst Num Release     RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
CDB10         1193559071 cdb10               1 10.2.0.1.0  NO  tsukuba

CPUs           SGA Size       Buffer Cache        Shared Pool    ASH Buffer Size
---- ------------------ ------------------ ------------------ ------------------
   2        440M (100%)         28M (6.4%)       128M (29.1%)        4.0M (0.9%)


          Analysis Begin Time:   31-Jul-07 17:52:21
            Analysis End Time:   31-Jul-07 18:07:21
                 Elapsed Time:        15.0 (mins)
                 Sample Count:       2,647
      Average Active Sessions:        2.94
  Avg. Active Session per CPU:        1.47
                Report Target:   None specified

Top User Events                  DB/Inst: CDB10/cdb10  (Jul 31 17:52 to 18:07)

                                                               Avg Active
Event                               Event Class     % Activity   Sessions
----------------------------------- --------------- ---------- ----------
db file sequential read             User I/O             26.60       0.78
CPU + Wait for CPU                  CPU                   8.88       0.26
db file scattered read              User I/O              7.25       0.21
log file sync                       Commit                5.44       0.16
log buffer space                    Configuration         4.53       0.13
          -------------------------------------------------------------

Top Background Events            DB/Inst: CDB10/cdb10  (Jul 31 17:52 to 18:07)

                                                               Avg Active
Event                               Event Class     % Activity   Sessions
----------------------------------- --------------- ---------- ----------
db file parallel write              System I/O           21.61       0.64
log file parallel write             System I/O           18.21       0.54
          -------------------------------------------------------------

Top Event P1/P2/P3 Values        DB/Inst: CDB10/cdb10  (Jul 31 17:52 to 18:07)

Event                          % Event  P1 Value, P2 Value, P3 Value % Activity
------------------------------ ------- ----------------------------- ----------
Parameter 1                Parameter 2                Parameter 3
-------------------------- -------------------------- --------------------------
db file sequential read          26.97             "201","66953","1"       0.11
file#                      block#                     blocks

db file parallel write           21.61          "3","0","2147483647"       3.21
requests                   interrupt                  timeout

                                                "2","0","2147483647"       2.49


                                                "5","0","2147483647"       2.42


log file parallel write          18.21                "1","2022","1"       0.68
files                      blocks                     requests

db file scattered read            7.37             "201","72065","8"       0.23
file#                      block#                     blocks

log file sync                     5.48                "4114","0","0"       0.30
buffer#                    NOT DEFINED                NOT DEFINED

          -------------------------------------------------------------

Top Service/Module               DB/Inst: CDB10/cdb10  (Jul 31 17:52 to 18:07)

Service        Module                   % Activity Action               % Action
-------------- ------------------------ ---------- ------------------ ----------
SYS$USERS      UNNAMED                       50.70 UNNAMED                 50.70
SYS$BACKGROUND UNNAMED                       41.56 UNNAMED                 41.56
cdb10          OEM.SystemPool                 2.64 UNNAMED                  1.47
                                                   XMLLoader0               1.17
SYS$USERS      sqlplus@tsukuba (TNS V1-       1.55 UNNAMED                  1.55
cdb10          Lab128                         1.36 UNNAMED                  1.36
          -------------------------------------------------------------

Top Client IDs                   DB/Inst: CDB10/cdb10  (Jul 31 17:52 to 18:07)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top SQL Command Types            DB/Inst: CDB10/cdb10  (Jul 31 17:52 to 18:07)
-> 'Distinct SQLIDs' is the count of the distinct number of SQLIDs
      with the given SQL Command Type found over all the ASH samples
      in the analysis period

                                           Distinct            Avg Active
SQL Command Type                             SQLIDs % Activity   Sessions
---------------------------------------- ---------- ---------- ----------
INSERT                                           28      27.81       0.82
SELECT                                           45      12.73       0.37
UPDATE                                           11       3.85       0.11
DELETE                                            4       3.70       0.11
          -------------------------------------------------------------

Top SQL Statements              DB/Inst: CDB10/cdb10  (Jul 31 17:52 to 18:07)

       SQL ID    Planhash % Activity Event                             % Event
------------- ----------- ---------- ------------------------------ ----------
fd6a0p6333g8z  2993408006       7.59 db file sequential read              3.06
 SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, CM_ID, MA
X(SUBSTR(CM_DESC, 1, 12)) CM_DESC, MAX(UP_ID) UP_ID, MA
X(DOWN_ID) DOWN_ID, MAX(MAC_ID) MAC_ID, MAX(CMTS_
ID) CMTS_ID, SUM(BYTES_UP) SUM_BYTES_UP, SUM(BY

                                     direct path write temp               1.74

                                     db file scattered read               1.32

298wmz1kxjs1m  4251515144       5.25 CPU + Wait for CPU                   2.68
INSERT INTO CM_QOS_PROF SELECT :B1 , R.TOPOLOGYID, :B1 - :B4 , P.NODE_PROFILE_ID
, R.DOCSIFCMTSSERVICEQOSPROFILE FROM CM_SID_RAWDATA R, ( SELECT DISTINCT T.CMID,
 P.QOS_PROF_IDX, P.NODE_PROFILE_ID FROM TMP_TOP_SLOW_CM T, CMTS_QOS_PROF P WHERE
 T.CMTSID = P.TOPOLOGYID AND P.SECONDID = :B1 ) P WHERE R.BATCHID = :B3 AND R.PR

                                     db file sequential read              1.78

fhawr20n0wy5x  1792062018       3.40 db file sequential read              2.91
INSERT INTO TMP_CALC_HFC_SLOW_CM_TMP SELECT T.CMTSID, T.DOWNID, T.CMID, 0, 0, 0,
 T.DOWN_SNR_CNR_A3, T.DOWN_SNR_CNR_A2, T.DOWN_SNR_CNR_A1, T.DOWN_SNR_CNR_A0, R.S
YSUPTIME, R.DOCSIFSIGQUNERROREDS, R.DOCSIFSIGQCORRECTEDS, R.DOCSIFSIGQUNCORRECTA
BLES, R.DOCSIFSIGQSIGNALNOISE, :B3 , L.PREV_SECONDID, L.PREV_DOCSIFSIGQUNERRORED

3a11s4c86wdu5  1366293986       3.21 db file sequential read              1.85
DELETE FROM CM_RAWDATA WHERE BATCHID = 0 AND PROFINDX = :B1

                                     log buffer space                     1.06

998t5bbdfm5rm  1914870171       3.21 db file sequential read              1.70
INSERT INTO CM_RAWDATA SELECT PROFINDX, 0 BATCHID, TOPOLOGYID, SAMPLETIME, SYSUP
TIME, DOCSIFCMTSCMSTATUSVALUE, DOCSIFCMTSSERVICEINOCTETS, DOCSIFCMTSSERVICEOUTOC
TETS, DOCSIFCMSTATUSTXPOWER, DOCSIFCMTSCMSTATUSRXPOWER, DOCSIFDOWNCHANNELPOWER,
DOCSIFSIGQUNERROREDS, DOCSIFSIGQCORRECTEDS, DOCSIFSIGQUNCORRECTABLES, DOCSIFSIGQ

          -------------------------------------------------------------

Top SQL using literals           DB/Inst: CDB10/cdb10  (Jul 31 17:52 to 18:07)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top Sessions                    DB/Inst: CDB10/cdb10  (Jul 31 17:52 to 18:07)
-> '# Samples Active' shows the number of ASH samples in which the session
      was found waiting for that particular event. The percentage shown
      in this column is calculated with respect to wall clock time
      and not total database activity.
-> 'XIDs' shows the number of distinct transaction IDs sampled in ASH
      when the session was waiting for that particular event
-> For sessions running Parallel Queries, this section will NOT aggregate
      the PQ slave activity into the session issuing the PQ. Refer to
      the 'Top Sessions running PQs' section for such statistics.

   Sid, Serial# % Activity Event                             % Event
--------------- ---------- ------------------------------ ----------
User                 Program                          # Samples Active     XIDs
-------------------- ------------------------------ ------------------ --------
      126,    5      33.59 db file sequential read             18.62
STARGUS                                                 493/900 [ 55%]        4

                           CPU + Wait for CPU                   5.52
                                                        146/900 [ 16%]        2

                           db file scattered read               5.02
                                                        133/900 [ 15%]        2

      167,    1      21.80 db file parallel write              21.61
SYS                  oracle@tsukuba (DBW0)              572/900 [ 64%]        0

      166,    1      18.47 log file parallel write             18.21
SYS                  oracle@tsukuba (LGWR)              482/900 [ 54%]        0

      133,  763       9.67 db file sequential read              4.80
STARGUS                                                 127/900 [ 14%]        1

                           direct path write temp               1.74
                                                         46/900 [  5%]        0

                           db file scattered read               1.32
                                                         35/900 [  4%]        0

      152,  618       3.10 db file sequential read              1.10
STARGUS                                                  29/900 [  3%]        1

          -------------------------------------------------------------

Top Blocking Sessions            DB/Inst: CDB10/cdb10  (Jul 31 17:52 to 18:07)
-> Blocking session activity percentages are calculated with respect to
      waits on enqueues, latches and "buffer busy" only
-> '% Activity' represents the load on the database caused by
      a particular blocking session
-> '# Samples Active' shows the number of ASH samples in which the
      blocking session was found active.
-> 'XIDs' shows the number of distinct transaction IDs sampled in ASH
      when the blocking session was found active.

   Blocking Sid % Activity Event Caused                      % Event
--------------- ---------- ------------------------------ ----------
User                 Program                          # Samples Active     XIDs
-------------------- ------------------------------ ------------------ --------
      166,    1       5.48 log file sync                        5.48
SYS                  oracle@tsukuba (LGWR)              512/900 [ 57%]        0

          -------------------------------------------------------------

Top Sessions running PQs        DB/Inst: CDB10/cdb10  (Jul 31 17:52 to 18:07)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Top DB Objects                   DB/Inst: CDB10/cdb10  (Jul 31 17:52 to 18:07)
-> With respect to Application, Cluster, User I/O and buffer busy waits only.

      Object ID % Activity Event                             % Event
--------------- ---------- ------------------------------ ----------
Object Name (Type)                                    Tablespace
----------------------------------------------------- -------------------------
          52652       4.08 db file scattered read               4.08
STARGUS.TMP_CALC_HFC_SLOW_CM_TMP (TABLE)              SYSTEM

          52543       3.32 db file sequential read              3.32
STARGUS.PK_CM_RAWDATA (INDEX)                         TS_STARGUS

          52698       3.21 db file sequential read              2.98
STARGUS.TMP_TOP_SLOW_CM (TABLE)                       SYSTEM

          52542       2.98 db file sequential read              2.98
STARGUS.CM_RAWDATA (TABLE)                            TS_STARGUS

          52699       1.78 db file sequential read              1.78
STARGUS.PK_TMP_TOP_SLOW_CM (INDEX)                    SYSTEM

          -------------------------------------------------------------

Top DB Files                     DB/Inst: CDB10/cdb10  (Jul 31 17:52 to 18:07)
-> With respect to Cluster and User I/O events only.

        File ID % Activity Event                             % Event
--------------- ---------- ------------------------------ ----------
File Name                                             Tablespace
----------------------------------------------------- -------------------------
              6      23.31 db file sequential read             19.83
/export/home/oracle10/oradata/cdb10/ts_stargus_01.dbf TS_STARGUS

                           db file scattered read               1.59


                           direct path write temp               1.59


          -------------------------------------------------------------

Top Latches                      DB/Inst: CDB10/cdb10  (Jul 31 17:52 to 18:07)

                  No data exists for this section of the report.
          -------------------------------------------------------------

Activity Over Time              DB/Inst: CDB10/cdb10  (Jul 31 17:52 to 18:07)
-> Analysis period is divided into smaller time slots
-> Top 3 events are reported in each of those slots
-> 'Slot Count' shows the number of ASH samples in that slot
-> 'Event Count' shows the number of ASH samples waiting for
   that event in that slot
-> '% Event' is 'Event Count' over all ASH samples in the analysis period

                         Slot                                   Event
Slot Time (Duration)    Count Event                             Count % Event
-------------------- -------- ------------------------------ -------- -------
17:52:21   (1.7 min)      354 log file parallel write              85    3.21
                              db file sequential read              82    3.10
                              db file parallel write               65    2.46
17:54:00   (2.0 min)      254 CPU + Wait for CPU                   73    2.76
                              db file sequential read              46    1.74
                              log file parallel write              44    1.66
17:56:00   (2.0 min)      323 log file parallel write              94    3.55
                              db file parallel write               85    3.21
                              db file sequential read              85    3.21
17:58:00   (2.0 min)      385 log file parallel write             109    4.12
                              db file parallel write               95    3.59
                              db file sequential read              71    2.68
18:00:00   (2.0 min)      470 db file sequential read             169    6.38
                              db file parallel write               66    2.49
                              log file parallel write              61    2.30
18:02:00   (2.0 min)      277 db file sequential read             139    5.25
                              db file parallel write               58    2.19
                              CPU + Wait for CPU                   39    1.47
18:04:00   (2.0 min)      364 db file parallel write              105    3.97
                              db file scattered read               90    3.40
                              db file sequential read              80    3.02
18:06:00   (1.4 min)      220 db file parallel write               67    2.53
                              db file scattered read               44    1.66
                              db file sequential read              42    1.59
          -------------------------------------------------------------

End of Report
}}}
<<<

Active Session History (ASH) performed an emergency flush. This may mean that ASH is undersized. If emergency flushes are a recurring issue, you may consider increasing ASH size by setting the value of _ASH_SIZE to a sufficiently large value. Currently, ASH size is 16777216 bytes. Both ASH size and the total number of emergency flushes since instance startup can be monitored by running the following query:
{{{
select total_size, awr_flush_emergency_count from v$ash_info;
}}}

<<<
''RE: Finding Sessions using AWR Report - ASH'' http://www.evernote.com/shard/s48/sh/733fa2e6-4feb-45cf-ac1a-18a679d9bce5/d6f5a6382d71007a633bc30d0a225db6
When slicing and dicing ASH data, having the correct sample math and granularity matters!

<<showtoc>>

! 1st example - CPU usage across container databases (CDB)
!! second granularity 
change to second granularity and apply the formula below
{{{
count(1)
}}}
[img(100%,100%)[https://i.imgur.com/Awjjz6o.png]]

!! minute granularity
change to minute granularity and apply the formula below
{{{
(count(1)*10)/60
}}}
[img(100%,100%)[https://i.imgur.com/KZ1IImy.png]]
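The sample math behind these two formulas: each AWR-persisted ASH sample (DBA_HIST_ACTIVE_SESS_HISTORY) represents roughly 10 seconds of DB time. At second granularity the raw count at each sample instant is already the number of active sessions; at minute granularity a minute holds about 6 sample instants, so the count gets scaled by 10 and divided by the 60-second slot. A small sketch of that arithmetic, assuming the default 10-second dba_hist sample interval:

```python
# AAS (average active sessions) from DBA_HIST_ACTIVE_SESS_HISTORY sample
# counts: each AWR-persisted ASH sample represents ~10 seconds of DB time.
SAMPLE_INTERVAL_SEC = 10  # assumed default dba_hist ASH sampling interval

def aas(sample_count, slot_seconds):
    """DB time represented by the samples, averaged over the slot length."""
    return sample_count * SAMPLE_INTERVAL_SEC / slot_seconds

# minute granularity: the (count(1)*10)/60 formula above
print(aas(sample_count=18, slot_seconds=60))  # 3.0

# second granularity: a 10-second slot containing one sample instant where
# count(1) = 3 sessions were active
print(aas(sample_count=3, slot_seconds=10))   # 3.0
```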


! 2nd example - CPU usage across instances
* This is the consolidated view of CPU and Scheduler wait class of all instances
[img(100%,100%)[https://i.imgur.com/UTWySm3.png]]
* The data is filtered by CPU and Scheduler
[img(40%,40%)[ https://i.imgur.com/nVKkG1P.png]]
* Filtering on the peak July 29 period: if we change to second granularity, you can see that the aggregation is incorrect when the minute-granularity math is applied
[img(100%,100%)[https://i.imgur.com/qLQa8Zs.png]]
* Changing the formula back to count(1) at second granularity shows the correct range of AAS CPU usage
[img(100%,100%)[https://i.imgur.com/xPP1kgh.png]]


<<showtoc>> 


! ASH granularity, SQL_EXEC_START - peoplesoft job troubleshooting
<<<
* SQL trace would be more granular and definitive for chasing the outlier elap/exec performance (particularly the < 1 sec elapsed times)
* SQL Monitoring is another way, but with limitations (space, threshold, etc.) https://sqlmaria.com/2017/08/01/getting-the-most-out-of-oracle-sql-monitor/
* ASH is another way, but you lose the granularity (especially the < 1 sec elapsed times), although SAMPLE_TIME and SQL_EXEC_START can give you the general wall clock info on when a particular SQL started and ended (more on this below)
<<<

!! 1) ASH granularity
The example is SQL_ID 0fhpmaba4znqy, which is executed thousands of times with .000x-second response time per execute (PHV 2970305186)
{{{
SYS@FMSSTG:PS122STG1 AS SYSDBA> @sql_id
Enter value for sql_id: 0fhpmaba4znqy
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
UPDATE PS_RECV_LOAD_T15 SET RECEIVER_ID = ( SELECT DISTINCT RECEIVER_ID FROM PS_
1 row selected.
BEGINTIM INSTANCE    PLANHASH    EXECDELTA    ROWSDELTA  BUFFERGETSDELTA DISKREADSDELTA        IOWAITDELTA     CPUTIMEDELTA ELAPSEDTIMEDELTA ELAPSEDEXECDELTA    SNAP_ID
-------- -------- ----------- ------------ ------------ ---------------- -------------- ------------------ ---------------- ---------------- ---------------- ----------
02-07 15        1   986626382          110          110      235,624,455              7              3,674    1,367,154,117    1,370,452,852        12.458662       5632   W
.
.
02-08 00        1   986626382          188          188      400,969,410             16              8,354    2,292,949,823    2,298,467,546        12.225891       5641   W
 
02-08 15        1  3862886561       13,946       13,946       42,043,961            949            354,164      322,228,251      332,017,906          .023807       5656
.
.
>>02-11 22        3  2970305186       15,999       15,999        1,055,855            761            371,071        8,703,310        9,964,815          .000286       5691    B   
 
}}}


The ash_elap.sql output using the dba_hist_active_sess_history view shows 1 execution and 0 for the avg, min, and max elapsed
{{{
DBA_HIST_ACTIVE_SESSION_HISTORY - ash_elap exec (start to end) avg min max
------------------------------------------------------------------------
SQL_ID          SQL_PLAN_HASH_VALUE        COUNT(*)        AVG        MIN        MAX
--------------- ------------------------ ---------- ---------- ---------- ----------
0fhpmaba4znqy   2970305186                        1          0          0          0
}}}

Then using v$active_session_history, the execution count goes from 1 to 9, with .56 avg, 0 min, and 1 max elapsed
{{{
ACTIVE_SESSION_HISTORY - ash_elap exec avg min max
------------------------------------------------------------------------
0fhpmaba4znqy   2970305186                        9        .56          0          1
}}}
SQL tracing this would show far more than 9 executions, and lower elapsed time numbers
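A quick back-of-the-envelope simulation of why that happens: v$active_session_history samples once per second, so a ~0.3 ms execution is visible only if a sample instant happens to land inside its elapsed window. This is a sketch under that sampling assumption, using the per-execute elapsed time from the AWR figures above:

```python
import math
import random

# Why ASH under-counts fast SQL: v$active_session_history samples once per
# second, so an execution is visible only if a whole-second sample instant
# lands inside its elapsed window. At ~0.000286 s per execute (the PHV
# 2970305186 figure above), each execution has roughly a 0.03% chance of
# being sampled at all.
random.seed(1)

ELAPSED = 0.000286   # sec per execute, from the AWR output above
N_EXECS = 16000      # executions spread randomly over one hour
WINDOW = 3600.0

caught = 0
for _ in range(N_EXECS):
    start = random.uniform(0, WINDOW)
    # sampled iff the next whole second arrives before the execution ends
    if math.ceil(start) - start <= ELAPSED:
        caught += 1

print(f"{N_EXECS} executions, ~{caught} visible in ASH samples")
```

Only a handful of the 16K executions get sampled, which lines up with ASH reporting 9 executions and dba_hist (which keeps 1 in 10 samples) reporting just 1.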
 

!! 2) ASH SAMPLE_TIME and SQL_EXEC_START visualized

* SQL_EXEC_START can give you the general wall clock info on when a particular SQL started and ended. Below is the same PeopleSoft workload, looking at the job-level view of performance.
* The main job, which they say executes 21K times with millisecond-level per-execute response time, actually runs for 2 hours overall. The job is called RSTR_RCVLOAD.

[img(60%,60%)[https://i.imgur.com/eNi3znY.png]]

* Below is the same highlighted 2 hours, except that the time period spans 1 month. David Kurtz's PS360 (https://github.com/davidkurtz/ps360) generated this graph (Process Scheduler Process Map).

[img(100%,100%)[https://i.imgur.com/zNPyygJ.png]]

* As for PHV 2970305186 above (ASH granularity): SQL_ID 0fhpmaba4znqy, compared to the 2-hour end-to-end job run time, is the tiny graph on its own axis (2nd row; it looks like it runs for only a few seconds end to end).
* The others highlighted below it are the rest of the SQL_IDs of RSTR_RCVLOAD

click here for full size image https://i.imgur.com/pJOs8DR.png
[img(100%,100%)[https://i.imgur.com/pJOs8DR.png]]

From the same ASH data, the graph below is the breakdown of the 2-hour time series of RSTR_RCVLOAD above (Process Scheduler Process Map section of PS360).
* The process started 10-FEB-19 10.06.05.000000 PM and ended 10-FEB-19 11.54.55.000000 PM based on sample_time and sql_exec_start.
* The graph is sliced by SQL TYPE and Plan Hash Value, then colored by SQL_ID.

<<<
* The red annotated font marks the Plan Hash Values with multiple SQL_IDs. There are 9 of them that run for at least 30 minutes.
* The black annotated font marks the Plan Hash Values with single SQL_IDs. There are 5 of them.
<<<

click here for full size image https://i.imgur.com/QU47vcy.png
[img(100%,100%)[https://i.imgur.com/QU47vcy.png]]

* All these SQLs are on the CPU event (all green), and the 2 hours execute in a serial manner, using 1 CPU on 1 node.

click here for full size image https://i.imgur.com/foKJJOb.png
[img(100%,100%)[https://i.imgur.com/foKJJOb.png]]

* Below are the red and black plan hash values mentioned above
* The green color is the PHV 2970305186 (0fhpmaba4znqy mentioned in ASH granularity) which is also the tiny blip on the time series graph above 

[img(40%,40%)[https://i.imgur.com/thYqZcA.png]]
[img(40%,40%)[https://i.imgur.com/zCBgHbq.png]]

* In summary, when we looked at it from the job level, we uncovered more tuning opportunities because we could clearly see which SQL_IDs and plan hash values were eating up the 2-hour end-to-end elapsed time. This workload is a batch job, so this approach works well.
* The wall clock mattered more than exact millisecond per-execute granularity.
 

! the scripts used - ash dump and ash_elap 

* this ash dump script was used to generate the time series breakdown of the 2-hour end-to-end elapsed time
https://raw.githubusercontent.com/karlarao/pull_dump_and_explore_ash/master/ash/0_gvash_to_csv_12c.sql

* ash_elap scripts are used to generate the avg,min,max elapsed/exec
<<<
* ash_elap.sql  - get wall clock time, the filter is SQL_ID 
** https://raw.githubusercontent.com/karlarao/scripts/master/performance/ash_elap.sql
* ash_elap2.sql - get wall clock time; the filter is “where run_time_sec < &run_time_sec”, so you can just pass 0 and it will output everything
** https://raw.githubusercontent.com/karlarao/scripts/master/performance/ash_elap2.sql
* ash_elap_user.sql - get wall clock time; the filter is user_id from dba_users. Here you can change the user_id filter to ACTION, MODULE, or PROGRAM
** https://raw.githubusercontent.com/karlarao/scripts/master/performance/ash_elap_user.sql
<<<
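The core of the ash_elap idea — wall clock per execution from sample_time and sql_exec_start — can be sketched in a few lines. The rows below are hypothetical stand-ins for [G]V$ACTIVE_SESSION_HISTORY columns (the second sql_id is made up):

```python
from datetime import datetime

# Sketch of the ash_elap idea: for each (sql_id, sql_exec_id), the wall-clock
# elapsed is roughly last sample_time - sql_exec_start. These rows are
# hypothetical stand-ins for [G]V$ACTIVE_SESSION_HISTORY data.
rows = [
    # (sql_id, sql_exec_id, sql_exec_start, sample_time)
    ("0fhpmaba4znqy", 1, datetime(2019, 2, 10, 22, 6, 5), datetime(2019, 2, 10, 22, 6, 5)),
    ("0fhpmaba4znqy", 1, datetime(2019, 2, 10, 22, 6, 5), datetime(2019, 2, 10, 22, 6, 6)),
    ("abcd123456789", 7, datetime(2019, 2, 10, 23, 0, 0), datetime(2019, 2, 10, 23, 54, 55)),
]

elapsed = {}
for sql_id, exec_id, exec_start, sample_time in rows:
    key = (sql_id, exec_id)
    secs = (sample_time - exec_start).total_seconds()
    # keep the latest sample seen for the execution
    elapsed[key] = max(elapsed.get(key, 0.0), secs)

for (sql_id, exec_id), secs in elapsed.items():
    print(f"{sql_id} exec {exec_id}: ~{secs:.0f}s wall clock")
```

The avg/min/max across executions of the same SQL_ID then fall out of a simple aggregation over this per-execution elapsed, which is what the ash_elap scripts do in SQL.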

If you have multiple MODULEs or PROGRAMs and you want to expose them in the GROUP BY, you can do that just like I did below
[img(100%,100%)[https://i.imgur.com/dZHZOGz.png]]

<<<
Then if you want to drill down into that SQL_ID, use planx Y <sql_id>
https://raw.githubusercontent.com/karlarao/scripts/master/performance/planx.sql
<<<







.
{{{
https://mail.google.com/mail/u/0/#search/tanel+ash+tpt-oracle/FMfcgxwLtGsWjXhhlwQrvQDJVXGkqPMQ
https://github.com/tanelpoder/tpt-oracle/blob/master/ash/devent_hist.sql
https://raw.githubusercontent.com/tanelpoder/tpt-oracle/master/ash/devent_hist.sql


--parameter1:
direct*
cell*
^(direct|cell|log|db)

--parameter2:
1=1

edit the date filters accordingly 


}}}
{{{
If you see the time waited for IOs go up, but you're not trying to do more I/O (same amount of data & workload and exec plans haven't changed), you can report the individual I/O latencies to see if your I/O is just slower this time (due to other activity in the storage subsystem).

You can even estimate wait event counts in different latency buckets using ASH data (more granularity and flexibility compared to AWR).

https://github.com/tanelpoder/tpt-oracle/blob/master/ash/devent_hist.sql

SQL> @ash/devent_hist db.file.*read 1=1 "TIMESTAMP'2020-12-10 00:00:00'" "TIMESTAMP'2020-12-10 23:00:00'"

                                   Wait time    Num ASH   Estimated    Estimated    % Event  Estimated
Wait Event                        bucket ms+    Samples Total Waits    Total Sec       Time  Time Graph  
---------------------------- --------------- ---------- ----------- ------------ ---------- ------------ 
db file parallel read                    < 1          7     31592.4        315.9        8.1 |#         | 
                                         < 2          6      4044.5         80.9        2.1 |          | 
                                         < 4          5      1878.6         75.1        1.9 |          | 
                                         < 8          9      1407.2        112.6        2.9 |          | 
                                        < 16         19      1572.1        251.5        6.5 |#         | 
                                        < 32         36      1607.3        514.3       13.2 |#         | 
                                        < 64         35       809.8        518.3       13.3 |#         | 
                                       < 128         52       530.8        679.5       17.5 |##        | 
                                       < 256         44       284.6        728.7       18.7 |##        | 
                                       < 512         28          88        450.7       11.6 |#         | 
                                      < 1024          2         3.7         38.1          1 |          | 
                                      < 4096          1           1         41.0        1.1 |          | 
                                      < 8192          1           1         81.9        2.1 |          | 

db file scattered read                   < 1          4     17209.3        172.1       71.1 |#######   | 
                                         < 2          1       935.5         18.7        7.7 |#         | 
                                         < 4          3        1021         40.8       16.9 |##        | 
                                         < 8          1       131.7         10.5        4.3 |          | 

db file sequential read                  < 1        276   1354178.7     13,541.8        7.7 |#         | 
                                         < 2        221    150962.7      3,019.3        1.7 |          | 
                                         < 4        515    174345.3      6,973.8          4 |          | 
                                         < 8       1453    250309.8     20,024.8       11.4 |#         | 
                                        < 16       1974    181327.4     29,012.4       16.6 |##        | 
                                        < 32       2302    101718.4     32,549.9       18.6 |##        | 
                                        < 64       2122     49502.4     31,681.5       18.1 |##        | 
                                       < 128       1068     12998.8     16,638.4        9.5 |#         | 
                                       < 256        312      1855.9      4,751.1        2.7 |          | 
                                       < 512        260       763.7      3,909.9        2.2 |          | 
                                      < 1024         13        24.7        253.2         .1 |          | 
                                      < 4096         59          59      2,416.6        1.4 |          | 
                                      < 8192        127         127     10,403.8        5.9 |#         | 


This way, any potential latency outliers won't get hidden in averages.


}}}
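A simplified sketch of the estimation idea (not devent_hist's exact arithmetic): each v$ ASH sample represents about one second of wait time, so dividing that second by a bucket's representative latency estimates how many individual waits the sample stands for:

```python
# Simplified sketch of the bucket-estimation idea behind devent_hist (NOT the
# script's exact math): one v$ ASH sample ~= 1 second of wait time, so a
# sample in the "< 2 ms" bucket stands for roughly 1s / 1.5ms waits.
BUCKETS_MS = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]

def bucket_of(latency_ms):
    """Return the '< N ms' bucket ceiling for a latency."""
    for b in BUCKETS_MS:
        if latency_ms < b:
            return b
    return BUCKETS_MS[-1]  # lump anything slower into the top bucket

def estimate_waits(samples, bucket_ms):
    """Estimated individual waits represented by `samples` ASH samples,
    assuming the bucket midpoint as the typical latency."""
    midpoint_ms = bucket_ms * 0.75  # midpoint of (bucket/2, bucket)
    return samples * 1000.0 / midpoint_ms

print(bucket_of(0.4))               # 1 -> the "< 1 ms" bucket
print(round(estimate_waits(7, 1)))  # 9333
```

This is why a handful of sub-millisecond samples can translate into tens of thousands of estimated waits in the output above: fast waits are many per sampled second, slow waits are few.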
I use the following scripts for quick troubleshooting
{{{
sqlmon.sql
snapper.sql

report_sql_monitor_html.sql
report_sql_monitor.sql

find_sql_awr.sql
dplan.sql
dplan_awr.sql
awr_plan_change.sql

px.sql
}}}




http://oracledoug.com/serendipity/index.php?/archives/1614-Network-Events-in-ASH.html

other articles by Doug about ASH 

Alternative Pictures Demo
That Pictures demo in full
Time Matters: Throughput vs. Response Time - Part 2
Diagnosing Locking Problems using ASH/LogMiner – The End
Diagnosing Locking Problems using ASH/LogMiner – Part 9
Diagnosing Locking Problems using ASH/LogMiner – Part 8
Diagnosing Locking Problems using ASH/LogMiner – Part 7
Diagnosing Locking Problems using ASH – Part 6
Diagnosing Locking Problems using ASH – Part 5
Diagnosing Locking Problems using ASH – Part 4
http://www.oaktable.net/content/ukoug-2011-ash-outliers
http://oracledoug.com/serendipity/index.php?/archives/1669-UKOUG-2011-Ash-Outliers.html#comments
http://oracledoug.com/ASHoutliers3c.sql
http://oracledoug.com/adaptive_thresholds_faq.pdf
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1525205200346930663   <-- JB and Graham comments


	

{{{
select sql_id,max(TEMP_SPACE_ALLOCATED)/(1024*1024*1024) gig 
from DBA_HIST_ACTIVE_SESS_HISTORY 
where 
sample_time > sysdate-2 and 
TEMP_SPACE_ALLOCATED > (50*1024*1024*1024) 
group by sql_id order by sql_id;
}}}


http://www.bobbydurrettdba.com/2012/05/10/finding-query-with-high-temp-space-usage-using-ash-views/






{{{

define _dbid=1769737394
define _start_time='02/24/23 00:01:00'
define _end_time='02/28/23 16:02:00'
select *
from dba_hist_active_sess_history
where 
dbid = &_dbid
and sample_time BETWEEN to_date('&_start_time', 'MM/DD/YY HH24:MI:SS') AND to_date('&_end_time', 'MM/DD/YY HH24:MI:SS')
and sql_id in 
('1j1cvsmq8kf7r'
,'418gttvnzvyzd'
,'59shz73htgztq'
,'7qgy0nz3m9axg')
/
}}}
Visualizing Active Session History (ASH) Data With R http://structureddata.org/2011/12/20/visualizing-active-session-history-ash-data-with-r/
It also talks about TIME_WAITED (in microseconds): only the last sample is fixed up; the others will have TIME_WAITED=0.
Thanks to John Beresniewicz for this info. http://dboptimizer.com/2011/07/20/oracle-time-units-in-v-views/
{{{
select
     decode(current_obj#
            ,0
            ,'undo block'
            ,-1
            ,'cpu'
            ,current_obj#) cur_obj
   , count(1)
from
     gv$active_session_history
where
   sample_time between to_date('&date_from', 'ddmmyyyy hh24:mi:ss')
                  and  to_date('&date_to', 'ddmmyyyy hh24:mi:ss')
and event = 'db file sequential read'
and sql_id = '&sql_id'
group by current_obj#
order by 2 asc;

For a wait event =  db file sequential read
-- if current_obj = 0 then this means you are reading from undo block(useful to check read consistency)  
-- if current_obj = -1 then this means you are working on cpu  
}}}
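To turn current_obj# into an object name, a hedged sketch joining to dba_objects (current_obj# maps to object_id; 0 and -1 are the special cases noted above and will simply not match):
{{{
select o.owner, o.object_name, o.object_type, count(*) samples
from   gv$active_session_history a,
       dba_objects o
where  a.current_obj# = o.object_id
and    a.event = 'db file sequential read'
and    a.sql_id = '&sql_id'
group by o.owner, o.object_name, o.object_type
order by samples;
}}}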
DAVE ABERCROMBIE research on AAS and ASH
http://aberdave.blogspot.com/search?updated-max=2011-04-02T08:09:00-07:00&max-results=7
http://dboptimizer.com/2011/10/20/tuning-blog-entries/
{{{
ASH

SQL execution times from ASH – using ASH to see SQL execution times and execution time variations
AAS on AWR – my favorite ASH query that shows AAS  wait classes  as an ascii graph
CPU Wait vs CPU Usage
Simulated ASH 2.1
AWR

Wait Metrics vs v$system_event
Statistic Metrics versus v$sysstat
I/O latency fluctuations
I/O wait histograms
Redo over weeks
AWR mining
Diff’ing AWR reports
Importing AWR repositories
Redo

LGWR redo write times (log file parallel write)
Ratio of Redo bytes to Datablocks writes
Etc

V$ view time units S,CS,MS,US
Parsing 10046 traces
SQL

Display Cursor Explained – what are all those display_cursor options and what exactly is the data
VST – visual sql tuning

VST in DB Optimizer 3.0
VST with 100 Tables !
SQL Joins using sets
Visualizing SQL Queries
VST – product design
View expansion with VST
Outer Joins Graphically
}}}
* ASM Mind Map
http://jarneil.wordpress.com/2008/08/26/the-asm-mind-map/

* v$asm_disk
http://www.rachelp.nl/index_kb.php?menu=articles&actie=show&id=10



http://www.freelists.org/post/oracle-l/ASM-on-SAN,5
http://www.freelists.org/post/oracle-l/ASM-and-EMC-PowerPath
ASM and shared pool sizing - http://www.evernote.com/shard/s48/sh/c3535415-30fd-42fa-885a-85df36616e6e/288c13d20095240c8882594afed99e8b

Bug 11684854 : ASM ORA-4031 IN LARGE POOL FROM CREATE DISKGROUP
14292825: DEFAULT MEMORY PARAMETER VALUES FOR 11.2 ASM INSTANCES LOW
https://twiki.cern.ch/twiki/bin/view/PDBService/ASM_Internals <-- GOOD STUFF
https://twiki.cern.ch/twiki/bin/view/PDBService/HAandPerf
{{{
ASM considerations on SinglePath and MultiPath across versions (OCR,VD,DATA)

In general you gotta have a facility/mechanism for:

	* multipathing -> persistent naming -> ASM


on 10gR2, 11gR1 for your OCR and VD you must use one of the following:

	* clustered filesystem (OCFS2) or NFS
	* raw devices (RHEL4) or udev (RHEL5)


on 11gR2, for your OCR and VD you must use one of the following:

	* clustered filesystem or NFS
	* ASM (mirrored at least 3 disks) 

-----------------------
Single Path 
-----------------------

If you have ASMlib you will go with this setup

	* ASMlib -> ASM


If you don't have ASMlib and Powerpath you will go with this setup

	* 10gR2 and 11g
		* raw devices
		* udev -> ASM
	* 11gR2
		* udev -> ASM

-----------------------
Multi Path
-----------------------

If you have ASMlib and Powerpath you will go with this setup

	* 10gR2, 11g, 11gR2
		* "powerpath -> ASMlib -> ASM"


If you don't have ASMlib and Powerpath you will go with this setup

	* 10gR2
		* "dm multipath (dev mapper) -> raw devices -> ASM"
	* 11g and 11gR2
		* "dm multipath (dev mapper) -> ASM"


you can also be flexible and go with 

	* "dm multipath (dev mapper) -> ASMlib -> ASM"

-----------------------
Notes
-----------------------

kpartx confuses me..just do this.. 
- assign and share luns on all nodes.
- fdisk the luns and update partition table on all nodes
- configure multipath
- use /dev/mapper/<mpath_alias>
- create asm storage using above devices
https://forums.oracle.com/forums/thread.jspa?threadID=2288213
}}}
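The "configure multipath -> use /dev/mapper/<mpath_alias>" steps above rely on a persistent alias; a minimal /etc/multipath.conf stanza as a sketch (the WWID and alias name here are hypothetical — take the real WWID from `multipath -ll`):
{{{
multipaths {
    multipath {
        wwid   360000970000192601234533030334536   # hypothetical WWID from 'multipath -ll'
        alias  asmdisk01                           # device appears as /dev/mapper/asmdisk01
    }
}
}}}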

http://www.evernote.com/shard/s48/sh/0012dbf5-6648-4792-84ff-825a363f68d3/a744de57fdb99349388e21cdd9c6059a
http://www.pythian.com/news/1078/oracle-11g-asm-diskgroup-compatibility/

http://www.freelists.org/post/oracle-l/Does-ocssdbin-started-from-11gASM-home-support-diskgroups-mounted-by-10g-ASM-instance,5
{{{
Hi Sanjeev,

I'd like to clear some info first.

1st)... the ocssd.bin

the CSS is created when:
- you use ASM as storage
- when you install Clusterware (RAC, but Clusterware has its separate
home already)

  For Oracle Real Application Clusters installations, the CSS daemon
is installed with Oracle Clusterware in a separate Oracle home
directory (also called the Clusterware home directory). For
single-node installations, the CSS daemon is installed in and runs
from the same Oracle home as Oracle Database.

You could identify the Oracle home directory being used to run the CSS daemon:

# cat /etc/oracle/ocr.loc

The output from this command is similar to the following:

[oracle@dbrocaix01 bin]$ cat /etc/oracle/ocr.loc
ocrconfig_loc=/oracle/app/oracle/product/10.2.0/asm_1/cdata/localhost/local.ocr
local_only=TRUE

The ocrconfig_loc parameter specifies the location of the Oracle
Cluster Registry (OCR) used by the CSS daemon. The path up to the
cdata directory is the Oracle home directory where the CSS daemon is
running (/oracle/app/oracle/product/10.2.0/asm_1 in this example). To
confirm you could grep the css daemon and see that it's running on
that home

[oracle@dbrocaix01 bin]$ ps -ef | grep -i css
oracle    4950     1  0 04:23 ?        00:00:00
/oracle/app/oracle/product/10.2.0/asm_1/bin/ocssd.bin
oracle    5806  5609  0 04:26 pts/1    00:00:00 grep -i css

Note:
If the value of the local_only parameter is FALSE, Oracle Clusterware
is installed on this system.


2nd)... ASM and Database compatibility

I'll supply you with some references..

Note 337737.1 Oracle Clusterware - ASM - Database Version Compatibility
Note 363254.1 Applying one-off Oracle Clusterware patches in a mixed
version home environment

and Chapter 4, page 116-120 of Oracle ASM (under the hood & practical
deployment guide) 10g & 11g

In the book it says that there are two types of compatibility settings
between ASM and the RDBMS:
  1) instance-level software compatibility settings
        - the COMPATIBLE parameter (mine is 10.2.0), this defines what
software features are available to the instance. Setting the
COMPATIBLE parameter in the ASM instance
        to 10.1 will not enable you to use 11g ASM new features (variable
extents, etc.)

  2) diskgroup-specific settings
        - COMPATIBLE.ASM and COMPATIBLE.RDBMS which are persistently stored
in the ASM diskgroup metadata..these compatibility settings are
specific to a diskgroup and control which
          attributes are available to the ASM diskgroup and which are
available to the database.
        - COMPATIBLE.RDBMS, which defaults to 10.1 in 11g, is the minimum
COMPATIBLE version setting of a database that can mount the
diskgroup.. once you advance it, it cannot be reversed
        - COMPATIBLE.ASM, which controls the persistent format of the on-disk
ASM metadata structures. The ASM compatibility defaults to 10.1 in 11g
and must always be greater than or equal to the RDBMS compatibility
level.. once you advance it, it cannot be reversed

    The combination of the compatibility parameter setting of the
database, the software version of the database, and the RDBMS
compatibility setting of a diskgroup determines whether a database
instance is permitted to mount a given diskgroup. The compatibility
setting also determines which ASM features are available for a
diskgroup.

    An ASM instance can support different RDBMS clients with different
compatibility settings, as long as the database COMPATIBLE init.ora
parameter setting of each database instance is greater than or equal
to the RDBMS compatibility of all diskgroups.

    You could also read more here...
http://download.oracle.com/docs/cd/B28359_01/server.111/b31107/asmdiskgrps.htm#CHDDIGBJ




So the following info will give us some background on your environment

cat /etc/oracle/ocr.loc
ps -ef | grep -i css
cat /etc/oratab
select name, group_number, value from v$asm_attribute order by 2;
select db_name, status,software_version,compatible_version from v$asm_client;
select name,compatibility, database_compatibility from v$asm_diskgroup;



I hope I did not confuse you with all of this info.





- Karl Arao
http://karlarao.wordpress.com
}}}
http://blog.ronnyegner-consulting.de/2009/10/27/asm-resilvering-or-how-to-recovery-your-asm-in-crash-scenarios/
http://www.ardentperf.com/2010/07/15/asm-mirroring-no-hot-spare-disk/
http://asmsupportguy.blogspot.com/2010/05/how-to-map-asmlib-disk-to-device-name.html
http://uhesse.wordpress.com/2010/12/01/database-migration-to-asm-with-short-downtime/
{{{
backup as copy database format '+DATA';
switch database to copy;
}}}
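A slightly fuller sketch of that copy-and-switch migration (the '+DATA' diskgroup name is from the note above; run from RMAN connected as target — the shutdown/mount/recover steps are an assumption about the usual sequence, and new files would also need db_create_file_dest pointed at ASM):
{{{
RMAN> backup as copy database format '+DATA';
RMAN> shutdown immediate;
RMAN> startup mount;
RMAN> switch database to copy;
RMAN> recover database;
RMAN> alter database open;
}}}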
''Migrating Databases from non-ASM to ASM and Vice-Versa'' http://www.idevelopment.info/data/Oracle/DBA_tips/Automatic_Storage_Management/ASM_33.shtml


-- ''OCFS to ASM''
''How to Migrate an Existing RAC database to ASM'' http://www.colestock.com/blogs/2008/05/how-to-migrate-existing-rac-database-to.html
http://oss.oracle.com/pipermail/oracleasm-users/2009-June/000094.html
{{{

[root@uscdcmix30 ~]# time dd if=/dev/VgCDCMIX30_App/app_new bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real    0m39.045s
user    0m0.083s
sys     0m6.467s

[root@uscdcmix30 ~]# time dd if=/dev/oracleasm/disks/DGMIX03 bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real    1m1.784s
user    0m0.084s
sys     0m14.914s

[root@uscdcmix30 ~]# time dd if=/dev/oracleasm/disks/DGMIX04 bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real    1m17.748s
user    0m0.069s
sys     0m13.409s

[root@uscdcmix30 ~]# time dd if=/dev/oracleasm/disks/DGMIX03 bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real    1m2.702s
user    0m0.090s
sys     0m16.682s

[root@uscdcmix30 ~]# time dd if=/dev/oracleasm/disks/DGMIX04 bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real    1m19.698s
user    0m0.079s
sys     0m16.774s

[root@uscdcmix30 ~]# time dd if=/dev/oracleasm/disks/DGMIX03 bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real    1m2.037s
user    0m0.085s
sys     0m14.386s

[root@uscdcmix30 ~]# time dd if=/dev/oracleasm/disks/DGMIX03 bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real    1m2.822s
user    0m0.052s
sys     0m11.703s

[root@uscdcmix30 ~]# oracleasm listdisks
DGCRM01
DGCRM02
DGCRM03
DGCRM04
DGCRM05
DGCRM06
DGMIX01
DGMIX02
DGMIX03
DGMIX04

[root@uscdcmix30 ~]# oracleasm deletedisk DGMIX03
Clearing disk header: done
Dropping disk: done

[root@uscdcmix30 ~]# time dd if=/dev/emcpowers1 bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real    1m0.955s
user    0m0.044s
sys     0m11.446s

[root@uscdcmix30 ~]# pvcreate /dev/emcpowers1
  Physical volume "/dev/emcpowers1" successfully created

[root@uscdcmix30 ~]# vgcreate VgTemp /dev/emcpowers1
  /dev/emcpowero: open failed: No such device
  /dev/emcpowero1: open failed: No such device
  Volume group "VgTemp" successfully created

[root@uscdcmix30 ~]# vgs
  VG              #PV #LV #SN Attr   VSize   VFree
  VgCDCCRM30_App    1   1   0 wz--n- 101.14G      0
  VgCDCCRM30_Arch   1   1   0 wz--n- 101.14G      0
  VgCDCMIX30_App    1   1   0 wz--n- 100.00G      0
  VgTemp            1   0   0 wz--n- 100.00G 100.00G
  vg00              1   7   0 wz--n- 136.50G  66.19G
  vg01              1   1   0 wz--n- 101.14G      0
  vg03              2   1   0 wz--n- 505.74G 101.14G

[root@uscdcmix30 ~]# lvcreate -L 102396 -n TestLV VgTemp
  Logical volume "TestLV" created

[root@uscdcmix30 ~]# time dd if=/dev/VgTemp/TestLV bs=8192 count=655360 of=/dev/null
655360+0 records in
655360+0 records out
real    0m34.027s
user    0m0.056s
sys     0m4.698s
}}}
How to create ASM filesystem in Oracle 11gR2
http://translate.google.com/translate?sl=auto&tl=en&u=http://www.dbform.com/html/2010/1255.html
OTN ASM
http://www.oracle.com/technology/tech/linux/asmlib/index.html
http://www.oracle.com/technology/tech/linux/asmlib/raw_migration.html
http://www.oracle.com/technology/tech/linux/asmlib/multipath.html
http://www.oracle.com/technology/tech/linux/asmlib/persistence.html

ASM using ASMLib and Raw Devices
http://www.oracle-base.com/articles/10g/ASMUsingASMLibAndRawDevices.php
Raw devices with release 11: Note ID 754305.1
# 
However, the Unbreakable Enterprise Kernel is optional, 
and Oracle Linux continues to include a Red Hat compatible kernel, compiled directly from Red Hat 
Enterprise Linux source code, for customers who require strict RHEL compatibility. Oracle also 
recommends the  Unbreakable Enterprise Kernel when running third party software and third party 
hardware.


# Performance improvements

latencytop?


# ASMlib and virtualization modules in the kernel

Updated Kernel Modules
The Unbreakable Enterprise Kernel includes both OCFS2 1.6 as well as Oracle ASMLib, the kernel 
driver for Oracle’s Automatic Storage Management feature.  There is no need to install separate RPMs 
to implement these kernel features.  Also, the Unbreakable Enterprise Kernel can be run directly on 
bare metal or as a virtual guest on Oracle VM, both in hardware virtualized (HVM) and paravirtualized (PV) mode, as it implements the paravirt_ops instruction set and includes the xen_netfront and 
xen_blkfront drivers.

# 
Unbreakable Enterprise Kernel itself already includes ocfs2 and oracleasm



Questions:
1) Since it will be a new kernel, what if I have a third party module like EMC Powerpath? I'm sure I'll have to reinstall it once I use the new 
kernel. But, once reinstalled.. will it be certified with EMC (or vice versa)? 
2) Also, Oracle says, if you have to maintain compatibility with a third party module, you can use the old vanilla kernel. Question is, since the 
ASMlib module is already integrated on the Unbreakable Kernel, once I use the non-Unbreakable kernel do they also have the old style RPM 
(oracleasm-`uname -r` - kernel driver) for having the ASMlib module? 
OR 
if it's not supported at all and I'm 

ASMLIB has three components.
1. oracleasm-support - user space shell scripts
2. oracleasmlib - user space library (closed source)	
3. oracleasm-`uname -r` - kernel driver		<-- kernel dependent

###############################################################################################

-- from this thread http://www.freelists.org/post/oracle-l/ASM-and-EMC-PowerPath

! 
! The Storage Report (ASM -> Linux -> EMC)
Below is a sample of the storage info you should have; it clearly shows the relationship between the Oracle layer (ASM), Linux, and the SAN storage. This info is very useful for you and the storage engineer, so you would know which is which in case of catastrophic problems..

Very useful for storage activities like: 
* SAN Migration
* Add/Remove disk
* Powerpath upgrade
* Kernel upgrade

//(Note: The images below might be too big on your current screen resolution, to have a better view just right click and download the images or ''double click'' on this page to see the full path of the images..)//

[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TaFARe_nqfI/AAAAAAAABOI/jXAshWxpfw8/powerpath1.png]]
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TaFARSJDiLI/AAAAAAAABOE/SoDU7jrddUQ/powerpath2.png]]
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TaFAUat0HcI/AAAAAAAABOM/qe5qoeF3wTw/powerpath3.png]]

! 
! What info do you need to produce the report? 
''You need the following:''
* AWR time series output (my scripts http://karlarao.wordpress.com/scripts-resources)
* output of the command ''powermt display dev=all'' (run as root)
* RDA
* SAR (because I just love looking at the performance data)
* sysreport (run as root)

''You have to collect'' this on each server / instance and properly arrange them per folder so you won't have a hard time documenting the bits of info you need on the Excel sheet
[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TaFK4F_SexI/AAAAAAAABPQ/VjSQm0_uUUM/powerdevices4.png]]

''Below is the drill down on each folder'', the data you'll see is from two separate RAC clusters.. each with its own SAN storage.. the project I'm working on here is to migrate/consolidate them into a single SAN storage (newly purchased). So I need to collect all this data to help plan the activity and mitigate the risks/issues. Also the collection of performance data is a must, to verify that the IO requirements of the databases can be handled by the new SAN. On this project I have verified that the capacity exceeds the current requirements. 
* AWR
<<<
per server
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TaFDmcOnhBI/AAAAAAAABOk/lxo8_tbLqX4/powerdevices5-awr.png]]
> per instance
> [img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TaFKbFENV1I/AAAAAAAABPE/nUCFo_HOjHY/powerdevices5-awr2.png]]
>> awr output on each instance
>> [img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TaFKbZB6nnI/AAAAAAAABPI/8MVhDN5Q_rI/powerdevices5-awr3.png]]
<<<
* powermt display dev=all
<<<
[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TaFDmG2XayI/AAAAAAAABOg/0Lo8QoDbm_A/powerdevices6-powermt.png]]
<<<
* RDA
<<<
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TaFDmsNLT2I/AAAAAAAABOs/sHa-KUryYFo/powerdevices7-rda.png]]
<<<
* SAR
<<<
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TaFDmcOnhBI/AAAAAAAABOk/lxo8_tbLqX4/powerdevices5-awr.png]]
> sample output
> [img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TaFDmyLyKAI/AAAAAAAABOw/f5UgyqVu09I/powerdevices8-sar.png]]
<<<
* sysreport 
<<<
[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TaFDm08TuYI/AAAAAAAABO0/bZzSwZX6Vqc/powerdevices8-sysreport.png]]
> sample output
> [img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TaFDnQpFRHI/AAAAAAAABO4/SiDKdZE9kOY/powerdevices8-sysreport2.png]]
<<<

! 
! Putting it all together 
On the Excel sheet, you have to fill in the following sections 
* From RDA
** ASM Library Information
** ASM Library Disk Information
** Disk Partitions
** Operating System Setup->Operating System Packages
** Operating System Setup->Disk Drives->Disk Mounts
** Oracle Cluster Registry (Cluster -> Cluster Information -> ocrcheck)
* From ''powermt'' command
** Logical Device IDs and names
* From sysreport
** raw devices (possible for OCR and Voting Disk)
** fstab (check for OCFS2 mounts)
* Double check from OS commands
** Voting Disk (''crsctl query css votedisk'')
** ls -l /dev/	
** /etc/init.d/oracleasm querydisk <device_name>

''Below are the output from the various sources...'' this will show you how to map the ''ASM disk'' to a particular ''EMC power device'' (follow the ''RED ARROWS'').. you have to do it on all "ASM disks" and the method will also be the same on accounting the ''raw devices'', ''OCFS2'', and ''OCR'' for their mapping on their respective EMC power devices..

To do the correlated report of the ASM, Linux, and SAN storage.. follow the ''BLUE ARROWS''..

You will also see below that with this proper accounting, correlating the ASM, Linux, and EMC storage levels, you will never go wrong: you have definitive information that you can share with the EMC Storage Engineer, which they can also ''double check''.. that way both the ''DBAs and the Storage guys will be on the same page''.

[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TaFvF-b2-tI/AAAAAAAABPk/cdnpkq6yLeg/emcreport10.png]]

Notice above that the ''emcpowerr'' and ''emcpowers'' have no allocations, so what does that mean? can we allocate these devices now? ... mm ''no!'' ... ''stop''...''move back''... ''think''...

I will do the following: 
* Run this query to check if it's recognized as ''FOREIGN'' or ''CANDIDATE''
{{{
set lines 400
col name format a20
col label format a20
col path format a20
col redundancy format a20
select a.group_number, a.name, a.header_status, a.mount_status, a.state, a.total_mb, a.free_mb, a.label, path, a.redundancy
from v$asm_disk a
order by 1,2;

GROUP_NUMBER NAME                 HEADER_STATU STATE      TOTAL_MB    FREE_MB LABEL                PATH                 REDUNDANCY
------------ -------------------- ------------ -------- ---------- ---------- -------------------- -------------------- --------------------
}}}
* I've taken some precautions in my data gathering by checking the ''fstab'' and ''raw devices config'' and found out that ''there are no pointers to the two devices''.. 
** I have obsessive-compulsive tendencies, just to make sure that these devices are not used by some services. If these EMC power devices were accidentally used for something else, let's say as a filesystem.. Oracle will still allow you to do the ADD/DROP operation on these devices, wiping out all the data on them! 
* Another thing I would do is validate with my storage engineer or the in-house DBA whether these disks exist for the purpose of expanding the disk group. 

If everything is okay, I can safely say they are candidate disks for expanding the space of my current disk group, and go ahead with the activity.


! 
! From Matt Zito (former EMC solutions architect)
<<<
Hey guys,

I haven't gotten this email address straightened out on Oracle-L yet, but I figured I'd drop you a note, and you could forward it on to the list if you cared to.

The doc you read is correct, powerpath will cheerfully work with any of the devices you send IOs to, because the kernel driver intercepts requests for all devices and routes them through itself before dishing them down the appropriate path.

However, setting scandisks to the emcpower has the administrative benefits of making sure the disks don't show up twice.  However, even if ASM picks the first of the two disks, it will still be load-balanced successfully.

Thanks,
Matt Zito
(former EMC solutions architect)
<<<

https://blogs.oracle.com/XPSONHA/entry/asr_snmp_on_exadata

Oracle Auto Service Request (ASR) [ID 1185493.1]
''ASR Documentation'' http://www.oracle.com/technetwork/server-storage/asr/documentation/index.html?ssSourceSiteId=ocomen


What DBAs Need to Know - Data Guard 11g ASYNC Redo Transport
http://www.oracle.com/technetwork/database/features/availability/316925-maa-otn-173423.pdf

http://www.oracle.com/technetwork/database/availability/maa-gg-performance-1969630.pdf
http://www.oracle.com/technetwork/database/availability/sync-2437177.pdf
http://www.oracle.com/au/products/database/maa-wp-10gr2-dataguardnetworkbestpr-134557.pdf
https://docs.oracle.com/en/cloud/paas/autonomous-database/dedicated/adbaz/
https://docs.oracle.com/en/cloud/paas/autonomous-database/dedicated/xxcdx/index.html#GUID-4302CC25-3F3E-419F-A7D2-4793DFEB33C2
<<<
Distribution affinity: Determines whether an Autonomous Database must be opened across a minimum or maximum of nodes. By default, Minimum nodes is selected with Maximum nodes being the other option.
Database split threshold (CPU): The CPU value beyond which an Autonomous Database will be opened across multiple nodes. The default value of this attribute is 16 for OCPUs and 64 for ECPUs.
Node failover reservation (%): Determines the percentage of CPUs reserved across nodes to support node failover. Allowed values are 0%, 25%, and 50%, with 50% being the default option.
<<<
''10mins AWR snap interval, 144 samples in a day, 1008 samples in 7days, 4032 samples in 4weeks, 52560 samples in 1year''
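The sample counts above are just interval arithmetic; a quick shell check (assuming exactly a 10-minute interval with no gaps, i.e. 144 snapshots/day):

```shell
per_day=$(( 24*60/10 ))              # 144 snapshots per day at a 10-minute interval
echo "day:    $per_day"
echo "7 days: $(( per_day*7 ))"      # 1008
echo "4 wks:  $(( per_day*28 ))"     # 4032
echo "1 year: $(( per_day*365 ))"    # 52560
```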

''Good chapter on HOW to read AWR reports'' http://filezone.orapub.com/FF_Book/v4Chap9.pdf

{{{
Understand each field of AWR (Doc ID 884046.1)
AWR report is broken into multiple parts.

1)Instance information:-
This provides information about the instance name, number, snapshot ids, the total time the report covers, and the database time during this elapsed time.

Elapsed time = end snapshot time - start snapshot time
Database time = work done by the database during this elapsed time (CPU and I/O both add to Database time). If this is less than the elapsed time by a great margin, then the database is idle. Database time does not include time spent by the background processes.

2)Cache Sizes : This shows the size of each SGA region after AMM has changed them. This information
can be compared to the original init.ora parameters at the end of the AWR report.

3)Load Profile: This important section shows important rates expressed in units of per second and
transactions per second.This is very important for understanding how is the instance behaving.This has to be compared to base line report to understand the expected load on the machine and the delta during bad times.

4)Instance Efficiency Percentages (Target 100%): This section talks about how close the vital ratios are, like buffer cache hit, library cache hit, parses etc. These can be taken as indicators, but should not be a cause of worry if they are low, as the ratios could be low or high based on database activities, and not due to a real performance problem. Hence these are not standalone statistics and should be read for a high-level view.

5)Shared Pool Statistics: This summarizes changes to the shared pool during the snapshot
period.

6)Top 5 Timed Events: This is the section most relevant for analysis. It shows what % of database time each wait event accounted for. Till 9i, this was the way to backtrack the total database time for the report, as there was no Database time column in 9i.

7)RAC Statistics: This part is seen only in case of a cluster instance. It provides important indications on the average time taken for block transfers, block receives, messages etc., which can point to performance problems in the cluster instead of the database.

8)Wait Class: This depicts which wait class was the area of contention and where we need to focus: was it network, concurrency, cluster, I/O, application, configuration etc.

9)Wait Events Statistics Section: This section shows a breakdown of the main wait events in the
database including foreground and background database wait events as well as time model, operating
system, service, and wait classes statistics.

10)Wait Events: This AWR report section provides more detailed wait event information for foreground
user processes which includes Top 5 wait events and many other wait events that occurred during
the snapshot interval.

11)Background Wait Events: This section is relevant to the background process wait events.

12)Time Model Statistics: Time mode statistics report how database-processing time is spent. This
section contains detailed timing information on particular components participating in database
processing.This gives information about background process timing also which is not included in database time.

13)Operating System Statistics: This section is important from OS server contention point of view.This section shows the main external resources including I/O, CPU, memory, and network usage.

14)Service Statistics: The service statistics section gives information about services and their load in terms of CPU seconds, I/O seconds, number of buffer reads etc.

15)SQL Section: This section displays top SQL, ordered by important SQL execution metrics.

a)SQL Ordered by Elapsed Time: Includes SQL statements that took significant execution
time during processing.

b)SQL Ordered by CPU Time: Includes SQL statements that consumed significant CPU time
during its processing.

c)SQL Ordered by Gets: These SQLs performed a high number of logical reads while
retrieving data.

d)SQL Ordered by Reads: These SQLs performed a high number of physical disk reads while
retrieving data.

e)SQL Ordered by Parse Calls: These SQLs experienced a high number of reparsing operations.

f)SQL Ordered by Sharable Memory: Includes SQL statements cursors which consumed a large
amount of SGA shared pool memory.

g)SQL Ordered by Version Count: These SQLs have a large number of versions in shared pool
for some reason.

16)Instance Activity Stats: This section contains statistical information describing how the database
operated during the snapshot period.

17)I/O Section: This section shows the all important I/O activity.This provides time it took to make 1 i/o say Av Rd(ms), and i/o per second say Av Rd/s.This should be compared to the baseline to see if the rate of i/o has always been like this or there is a diversion now.

18)Advisory Section: This section show details of the advisories for the buffer, shared pool, PGA and
Java pool.

19)Buffer Wait Statistics: This important section shows buffer cache waits statistics.

20)Enqueue Activity: This important section shows how enqueue operates in the database. Enqueues are
special internal structures which provide concurrent access to various database resources.

21)Undo Segment Summary: This section gives a summary about how undo segments are used by the database.
Undo Segment Stats: This section shows detailed history information about undo segment activity.

22)Latch Activity: This section shows details about latch statistics. Latches are a lightweight
serialization mechanism that is used to single-thread access to internal Oracle structures.The latch should be checked by its sleeps.The sleepiest Latch is the latch that is under contention , and not the latch with high requests.Hence  run through the sleep breakdown part of this section to arrive at the latch under highest contention.

23)Segment Section: This portion is important for guessing in which segment and which segment type the contention could be. Tally this with the top 5 wait events.

Segments by Logical Reads: Includes top segments which experienced high number of
logical reads.

Segments by Physical Reads: Includes top segments which experienced high number of disk
physical reads.

Segments by Buffer Busy Waits: These segments have the largest number of buffer waits
caused by their data blocks.

Segments by Row Lock Waits: Includes segments that had a large number of row locks on
their data.

Segments by ITL Waits: Includes segments that had a large contention for Interested
Transaction List (ITL). The contention for ITL can be reduced by increasing INITRANS storage
parameter of the table.

24)Dictionary Cache Stats: This section exposes details about how the data dictionary cache is
operating.

25)Library Cache Activity: Includes library cache statistics  which are needed in case you see library cache in top 5 wait events.You might want to see if the reload/invalidations are causing the contention or there is some other issue with library cache.

26)SGA Memory Summary: This would tell us the difference in the respective pools at the start and end of the report. This could be an indicator for setting a minimum value for each, when sga_target is being used.

27)init.ora Parameters: This section shows the original init.ora parameters for the instance during
the snapshot period.

There would be more Sections in case of RAC setups to provide details.
}}}
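The Elapsed time vs Database time comparison in section 1 above can be pulled straight from the AWR tables; a hedged single-instance sketch (Diagnostics Pack views; 'DB time' is the time-model statistic name, cumulative in microseconds, so we diff it per snapshot — for RAC you'd partition the lag by instance_number):
{{{
-- DB time per snapshot vs the snapshot interval
select s.snap_id,
       to_char(s.end_interval_time,'MM/DD HH24:MI') snap_end,
       round((cast(s.end_interval_time as date)
             -cast(s.begin_interval_time as date))*24*60,1) elapsed_min,
       round((t.value - lag(t.value) over (order by s.snap_id))/1e6/60,1) db_time_min
from   dba_hist_snapshot s,
       dba_hist_sys_time_model t
where  s.snap_id = t.snap_id
and    s.dbid = t.dbid
and    s.instance_number = t.instance_number
and    t.stat_name = 'DB time'
order by s.snap_id;
}}}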


''A SQL Performance History from AWR''
http://www.toadworld.com/BLOGS/tabid/67/EntryId/125/A-SQL-Performance-History-from-AWR.aspx  <-- This could also be possible to graph using my awr_topsqlx.sql

''miTrend AWR Report / StatsPack Gathering Procedures Instructions'' https://community.emc.com/docs/DOC-13949 <-- EMCs tool with nice PPT and paper, also talks about "burst" periods for IO sizing, raid adjusted IOPS, EFDs IOPS

http://pavandba.files.wordpress.com/2009/11/owp_awr_historical_analysis.pdf







{{{
set arraysize 5000

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

ttitle center 'AWR Top SQL Report' skip 2
set pagesize 50000
set linesize 300

col snap_id     format 99999            heading "Snap|ID"
col tm          format a15              heading "Snap|Start|Time"
col inst        format 90               heading "i|n|s|t|#"
col dur         format 990.00          heading "Snap|Dur|(m)"
col sql_id      format a15              heading "SQL|ID"
col phv         format 99999999999      heading "Plan|Hash|Value"
col module      format a20              heading "Module"
col elap        format 999990.00        heading "Elapsed|Time|(s)"
col elapexec    format 999990.00        heading "Elapsed|Time|per exec|(s)"
col cput        format 999990.00        heading "CPU|Time|(s)"
col iowait      format 999990.00        heading "IO|Wait|(s)"
col bget        format 99999999990      heading "LIO"
col dskr        format 99999999990      heading "PIO"
col rowp        format 99999999990      heading "Rows"
col exec        format 9999990          heading "Exec"
col prsc        format 999999990        heading "Parse|Count"
col pxexec      format 9999990          heading "PX|Exec"
col pctdbt      format 990              heading "DB Time|%"
col aas         format 990.00           heading "A|A|S"
col time_rank   format 90               heading "Time|Rank"
col sql_text    format a40              heading "SQL|Text"

     select *
       from (
             select
                  sqt.snap_id snap_id,
                  TO_CHAR(sqt.tm,'MM/DD/YY HH24:MI') tm,
                  sqt.inst inst,
                  sqt.dur dur,
                  sqt.sql_id sql_id,   
                  sqt.phv phv,                
                  to_clob(decode(sqt.module, null, null, sqt.module)) module,
                  nvl((sqt.elap), to_number(null)) elap,
                  nvl((sqt.elapexec), to_number(null)) elapexec,
                  nvl((sqt.cput), to_number(null)) cput,
                  sqt.iowait iowait,
                  sqt.bget bget, 
                  sqt.dskr dskr, 
                  sqt.rowp rowp,
                  sqt.exec exec, 
                  sqt.prsc prsc, 
                  sqt.pxexec pxexec,
                  sqt.aas aas,
                  sqt.time_rank time_rank
                  , nvl(st.sql_text, to_clob('** SQL Text Not Available **')) sql_text     -- PUT/REMOVE COMMENT TO HIDE/SHOW THE SQL_TEXT
             from        (
                          select snap_id, tm, inst, dur, sql_id, phv, module, elap, elapexec, cput, iowait, bget, dskr, rowp, exec, prsc, pxexec, aas, time_rank
                          from
                                             (
                                               select 
                                                      s0.snap_id snap_id,
                                                      s0.END_INTERVAL_TIME tm,
                                                      s0.instance_number inst,
                                                      round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
                                                      e.sql_id sql_id, 
                                                      e.plan_hash_value phv, 
                                                      max(e.module) module,
                                                      sum(e.elapsed_time_delta)/1000000 elap,
                                                      decode((sum(e.executions_delta)), 0, to_number(null), ((sum(e.elapsed_time_delta)) / (sum(e.executions_delta)) / 1000000)) elapexec,
                                                      sum(e.cpu_time_delta)/1000000     cput, 
                                                      sum(e.iowait_delta)/1000000 iowait,
                                                      sum(e.buffer_gets_delta) bget,
                                                      sum(e.disk_reads_delta) dskr, 
                                                      sum(e.rows_processed_delta) rowp,
                                                      sum(e.executions_delta)   exec,
                                                      sum(e.parse_calls_delta) prsc,
                                                      sum(px_servers_execs_delta) pxexec,
                                                      (sum(e.elapsed_time_delta)/1000000) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                            + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                            + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                            + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60) aas,
                                                      DENSE_RANK() OVER (
                                                      PARTITION BY s0.snap_id ORDER BY e.elapsed_time_delta DESC) time_rank
                                               from 
                                                   dba_hist_snapshot s0,
                                                   dba_hist_snapshot s1,
                                                   dba_hist_sqlstat e
                                                   where 
                                                    s0.dbid                   = &_dbid                -- CHANGE THE DBID HERE!
                                                    AND s1.dbid               = s0.dbid
                                                    and e.dbid                = s0.dbid                                                
                                                    AND s0.instance_number    = &_instancenumber      -- CHANGE THE INSTANCE_NUMBER HERE!
                                                    AND s1.instance_number    = s0.instance_number
                                                    and e.instance_number     = s0.instance_number                                                 
                                                    AND s1.snap_id            = s0.snap_id + 1
                                                    and e.snap_id             = s0.snap_id + 1                                              
                                               group by 
                                                    s0.snap_id, s0.END_INTERVAL_TIME, s0.instance_number, e.sql_id, e.plan_hash_value, e.elapsed_time_delta, s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME
                                             )
                          where 
                          time_rank <= 5                                     -- GET TOP 5 SQL ACROSS SNAP_IDs... YOU CAN ALTER THIS TO HAVE MORE DATA POINTS
                         ) 
                        sqt,
                        dba_hist_sqltext st 
             where st.sql_id(+)             = sqt.sql_id
             and st.dbid(+)                 = &_dbid
-- AND TO_CHAR(tm,'D') >= 1                                                  -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(tm,'D') <= 7
-- AND TO_CHAR(tm,'HH24MI') >= 0900                                          -- Hour
-- AND TO_CHAR(tm,'HH24MI') <= 1800
-- AND tm >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss')     -- Data range
-- AND tm <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
-- AND snap_id in (338,339)
-- AND snap_id >= 335 and snap_id <= 339
-- AND snap_id = 3172
-- and sqt.sql_id = 'dj3n91vxsyaq5'
-- AND lower(st.sql_text) like 'select%'
-- AND lower(st.sql_text) like 'insert%'
-- AND lower(st.sql_text) like 'update%'
-- AND lower(st.sql_text) like 'merge%'
-- AND pxexec > 0
-- AND aas > .5
             order by 
             -- snap_id                             -- TO GET SQL OUTPUT ACROSS SNAP_IDs SEQUENTIALLY AND ASC
             nvl(sqt.elap, -1) desc, sqt.sql_id     -- TO GET SQL OUTPUT BY ELAPSED TIME
             )
where rownum <= 20
;
}}}
{{{
set arraysize 5000

COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

-- ttitle center 'AWR CPU and IO Workload Report' skip 2
set pagesize 50000
set linesize 550

col instname    format a15              heading instname        -- instname
col hostname    format a30              heading hostname        -- hostname
col tm          format a17              heading tm              -- "tm"
col id          format 99999            heading id              -- "snapid"
col inst        format 90               heading inst            -- "inst"
col dur         format 999990.00        heading dur             -- "dur"
col cpu         format 90               heading cpu             -- "cpu"
col cap         format 9999990.00       heading cap             -- "capacity"
col dbt         format 999990.00        heading dbt             -- "DBTime"
col dbc         format 99990.00         heading dbc             -- "DBcpu"
col bgc         format 99990.00         heading bgc             -- "BGcpu"
col rman        format 9990.00          heading rman            -- "RMANcpu"
col aas         format 990.0            heading aas             -- "AAS"
col totora      format 9999990.00       heading totora          -- "TotalOracleCPU"
col busy        format 9999990.00       heading busy            -- "BusyTime"
col load        format 990.00           heading load            -- "OSLoad"
col totos       format 9999990.00       heading totos           -- "TotalOSCPU"
col mem         format 999990.00        heading mem             -- "PhysicalMemorymb"
col IORs        format 9990.000         heading IORs            -- "IOPsr"
col IOWs        format 9990.000         heading IOWs            -- "IOPsw"
col IORedo      format 9990.000         heading IORedo          -- "IOPsredo"
col IORmbs      format 9990.000         heading IORmbs          -- "IOrmbs"
col IOWmbs      format 9990.000         heading IOWmbs          -- "IOwmbs"
col redosizesec format 9990.000         heading redosizesec     -- "Redombs"
col logons      format 990              heading logons          -- "Sess"
col logone      format 990              heading logone          -- "SessEnd"
col exsraw      format 99990.000        heading exsraw          -- "Execrawdelta"
col exs         format 9990.000         heading exs             -- "Execs"
col ucs         format 9990.000         heading ucs             -- "UserCalls"
col ucoms       format 9990.000         heading ucoms           -- "Commit"
col urs         format 9990.000         heading urs             -- "Rollback"
col oracpupct   format 990              heading oracpupct       -- "OracleCPUPct"
col rmancpupct  format 990              heading rmancpupct      -- "RMANCPUPct"
col oscpupct    format 990              heading oscpupct        -- "OSCPUPct"
col oscpuusr    format 990              heading oscpuusr        -- "USRPct"
col oscpusys    format 990              heading oscpusys        -- "SYSPct"
col oscpuio     format 990              heading oscpuio         -- "IOPct"

SELECT * FROM
( 
  SELECT trim('&_instname') instname, 
         trim('&_dbid') db_id, 
         trim('&_hostname') hostname, 
          s0.snap_id id,
         TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
         s0.instance_number inst,
  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
  s3t1.value AS cpu,
  (round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value cap,
  (s5t1.value - s5t0.value) / 1000000 as dbt,
  (s6t1.value - s6t0.value) / 1000000 as dbc,
  (s7t1.value - s7t0.value) / 1000000 as bgc,
  round(DECODE(s8t1.value,null,'null',(s8t1.value - s8t0.value) / 1000000),2) as rman,
  ((s5t1.value - s5t0.value) / 1000000)/60 /  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,
  round(((s6t1.value - s6t0.value) / 1000000) + ((s7t1.value - s7t0.value) / 1000000),2) totora,
  -- s1t1.value - s1t0.value AS busy,  -- this is osstat BUSY_TIME
  round(s2t1.value,2) AS load,
  (s1t1.value - s1t0.value)/100 AS totos,
  ((round(((s6t1.value - s6t0.value) / 1000000) + ((s7t1.value - s7t0.value) / 1000000),2)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oracpupct,
  ((round(DECODE(s8t1.value,null,'null',(s8t1.value - s8t0.value) / 1000000),2)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as rmancpupct,
  (((s1t1.value - s1t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpupct,
  (((s17t1.value - s17t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpuusr,
  (((s18t1.value - s18t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpusys,
  (((s19t1.value - s19t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpuio
FROM dba_hist_snapshot s0,
  dba_hist_snapshot s1,
  dba_hist_osstat s1t0,         -- BUSY_TIME
  dba_hist_osstat s1t1,
  dba_hist_osstat s17t0,        -- USER_TIME
  dba_hist_osstat s17t1,
  dba_hist_osstat s18t0,        -- SYS_TIME
  dba_hist_osstat s18t1,
  dba_hist_osstat s19t0,        -- IOWAIT_TIME
  dba_hist_osstat s19t1,
  dba_hist_osstat s2t1,         -- osstat just get the end value
  dba_hist_osstat s3t1,         -- osstat just get the end value
  dba_hist_sys_time_model s5t0,
  dba_hist_sys_time_model s5t1,
  dba_hist_sys_time_model s6t0,
  dba_hist_sys_time_model s6t1,
  dba_hist_sys_time_model s7t0,
  dba_hist_sys_time_model s7t1,
  dba_hist_sys_time_model s8t0,
  dba_hist_sys_time_model s8t1
WHERE s0.dbid            = &_dbid    -- CHANGE THE DBID HERE!
AND s1.dbid              = s0.dbid
AND s1t0.dbid            = s0.dbid
AND s1t1.dbid            = s0.dbid
AND s2t1.dbid            = s0.dbid
AND s3t1.dbid            = s0.dbid
AND s5t0.dbid            = s0.dbid
AND s5t1.dbid            = s0.dbid
AND s6t0.dbid            = s0.dbid
AND s6t1.dbid            = s0.dbid
AND s7t0.dbid            = s0.dbid
AND s7t1.dbid            = s0.dbid
AND s8t0.dbid            = s0.dbid
AND s8t1.dbid            = s0.dbid
AND s17t0.dbid            = s0.dbid
AND s17t1.dbid            = s0.dbid
AND s18t0.dbid            = s0.dbid
AND s18t1.dbid            = s0.dbid
AND s19t0.dbid            = s0.dbid
AND s19t1.dbid            = s0.dbid
AND s0.instance_number   = &_instancenumber   -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number   = s0.instance_number
AND s1t0.instance_number = s0.instance_number
AND s1t1.instance_number = s0.instance_number
AND s2t1.instance_number = s0.instance_number
AND s3t1.instance_number = s0.instance_number
AND s5t0.instance_number = s0.instance_number
AND s5t1.instance_number = s0.instance_number
AND s6t0.instance_number = s0.instance_number
AND s6t1.instance_number = s0.instance_number
AND s7t0.instance_number = s0.instance_number
AND s7t1.instance_number = s0.instance_number
AND s8t0.instance_number = s0.instance_number
AND s8t1.instance_number = s0.instance_number
AND s17t0.instance_number = s0.instance_number
AND s17t1.instance_number = s0.instance_number
AND s18t0.instance_number = s0.instance_number
AND s18t1.instance_number = s0.instance_number
AND s19t0.instance_number = s0.instance_number
AND s19t1.instance_number = s0.instance_number
AND s1.snap_id           = s0.snap_id + 1
AND s1t0.snap_id         = s0.snap_id
AND s1t1.snap_id         = s0.snap_id + 1
AND s2t1.snap_id         = s0.snap_id + 1
AND s3t1.snap_id         = s0.snap_id + 1
AND s5t0.snap_id         = s0.snap_id
AND s5t1.snap_id         = s0.snap_id + 1
AND s6t0.snap_id         = s0.snap_id
AND s6t1.snap_id         = s0.snap_id + 1
AND s7t0.snap_id         = s0.snap_id
AND s7t1.snap_id         = s0.snap_id + 1
AND s8t0.snap_id         = s0.snap_id
AND s8t1.snap_id         = s0.snap_id + 1
AND s17t0.snap_id         = s0.snap_id
AND s17t1.snap_id         = s0.snap_id + 1
AND s18t0.snap_id         = s0.snap_id
AND s18t1.snap_id         = s0.snap_id + 1
AND s19t0.snap_id         = s0.snap_id
AND s19t1.snap_id         = s0.snap_id + 1
AND s1t0.stat_name       = 'BUSY_TIME'
AND s1t1.stat_name       = s1t0.stat_name
AND s17t0.stat_name       = 'USER_TIME'
AND s17t1.stat_name       = s17t0.stat_name
AND s18t0.stat_name       = 'SYS_TIME'
AND s18t1.stat_name       = s18t0.stat_name
AND s19t0.stat_name       = 'IOWAIT_TIME'
AND s19t1.stat_name       = s19t0.stat_name
AND s2t1.stat_name       = 'LOAD'
AND s3t1.stat_name       = 'NUM_CPUS'
AND s5t0.stat_name       = 'DB time'
AND s5t1.stat_name       = s5t0.stat_name
AND s6t0.stat_name       = 'DB CPU'
AND s6t1.stat_name       = s6t0.stat_name
AND s7t0.stat_name       = 'background cpu time'
AND s7t1.stat_name       = s7t0.stat_name
AND s8t0.stat_name       = 'RMAN cpu time (backup/restore)'
AND s8t1.stat_name       = s8t0.stat_name
)
-- WHERE 
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id  in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (336)
-- aas > 1
-- oracpupct > 50
-- oscpupct > 50
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1     -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900     -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss')     -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
{{{

-- TO VIEW DB INFO
set lines 300
select dbid,instance_number,version,db_name,instance_name, host_name 
from dba_hist_database_instance 
where instance_number = (select instance_number from v$instance)
and rownum < 2;

-- TO VIEW RETENTION INFORMATION
select * from dba_hist_wr_control;
set lines 300
select b.name, a.DBID,
   ((TRUNC(SYSDATE) + a.SNAP_INTERVAL - TRUNC(SYSDATE)) * 86400)/60 AS SNAP_INTERVAL_MINS,
   ((TRUNC(SYSDATE) + a.RETENTION - TRUNC(SYSDATE)) * 86400)/60 AS RETENTION_MINS,
   ((TRUNC(SYSDATE) + a.RETENTION - TRUNC(SYSDATE)) * 86400)/60/60/24 AS RETENTION_DAYS,
   TOPNSQL
from dba_hist_wr_control a, v$database b
where a.dbid = b.dbid;

/*
-- SET RETENTION PERIOD TO 30 DAYS (UNIT IS MINUTES)
execute dbms_workload_repository.modify_snapshot_settings (interval => 30, retention => 43200);
-- SET RETENTION PERIOD TO 365 DAYS (UNIT IS MINUTES)
exec dbms_workload_repository.modify_snapshot_settings (interval => 30, retention => 525600);

-- Create Snapshot
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
END;
/
*/

-- AWR get recent snapshot
set lines 300
select * from 
(SELECT s0.instance_number, s0.snap_id, 
  to_char(s0.startup_time,'yyyy-mon-dd hh24:mi:ss') startup_time,
  TO_CHAR(s0.END_INTERVAL_TIME,'yyyy-mon-dd hh24:mi:ss') snap_start,
  TO_CHAR(s1.END_INTERVAL_TIME,'yyyy-mon-dd hh24:mi:ss') snap_end,
  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) ela_min
FROM dba_hist_snapshot s0,
  dba_hist_snapshot s1
WHERE s1.snap_id           = s0.snap_id + 1
ORDER BY snap_id DESC)
where rownum < 11;

-- MIN/MAX for dba_hist tables
select count(*) snap_count from dba_hist_snapshot;
select min(snap_id) min_snap, max(snap_id) max_snap from dba_hist_snapshot;
select to_char(min(end_interval_time),'yyyy-mon-dd hh24:mi:ss') min_date, to_char(max(end_interval_time),'yyyy-mon-dd hh24:mi:ss') max_date from dba_hist_snapshot;


/*
-- STATSPACK get recent snapshot
	  set lines 300
	  col what format a30
	  set numformat 999999999999999
	  alter session set NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
	  select sysdate from dual;
	  select instance, what, job, next_date, next_sec from user_jobs;
	  select * from 
	      (select 
		    s0.instance_number, s0.snap_id snap_id, s0.startup_time,
		    to_char(s0.snap_time,'YYYY-Mon-DD HH24:MI:SS') snap_start,
		    to_char(s1.snap_time,'YYYY-Mon-DD HH24:MI:SS') snap_end,
		    (s1.snap_time-s0.snap_time)*24*60 ela_min,
		    s0.dbid, s0.snap_level, s0.snapshot_exec_time_s 
	      from	stats$snapshot s0,
		      stats$snapshot s1
	      where s1.snap_id  = s0.snap_id + 1
	      ORDER BY s0.snap_id DESC)
	      where rownum < 11;


-- MIN/MAX for statspack tables
col min_dt format a14
col max_dt format a14
col host_name format a12
select	
	t1.dbid, 
	t1.instance_number,
        t2.version,
        t2.db_name,
	t2.instance_name,
        t2.host_name,
	min(to_char(t1.snap_time,'YYYY-Mon-DD HH24')) min_dt,
	max(to_char(t1.snap_time,'YYYY-Mon-DD HH24')) max_dt
from	stats$snapshot t1,
        stats$database_instance t2
where   t1.dbid = t2.dbid
  and   t1.snap_id = t2.snap_id
group by
	t1.dbid, 
	t1.instance_number,
        t2.version,
        t2.db_name,
	t2.instance_name,
        t2.host_name
/
*/


/*
AWR reports:

Running Workload Repository Reports Using Enterprise Manager
Running Workload Repository Compare Period Report Using Enterprise Manager
Running Workload Repository Reports Using SQL Scripts



Running Workload Repository Reports Using SQL Scripts
-----------------------------------------------------

You can view AWR reports by running the following SQL scripts:

The @?/rdbms/admin/awrrpt.sql SQL script generates an HTML or text report that displays statistics for a range of snapshot IDs.

The awrrpti.sql SQL script generates an HTML or text report that displays statistics for a range of snapshot IDs on 
a specified database and instance.

The awrsqrpt.sql SQL script generates an HTML or text report that displays statistics of a particular SQL statement for a 
range of snapshot IDs. Run this report to inspect or debug the performance of a SQL statement.

The awrsqrpi.sql SQL script generates an HTML or text report that displays statistics of a particular SQL statement for a 
range of snapshot IDs on a specified database and instance. Run this report to inspect or debug the performance of a SQL statement on a specific database and instance.

The awrddrpt.sql SQL script generates an HTML or text report that compares detailed performance attributes and configuration 
settings between two selected time periods.

The awrddrpi.sql SQL script generates an HTML or text report that compares detailed performance attributes and configuration 
settings between two selected time periods on a specific database and instance.

awrsqrpt.sql -- SQL performance report
*/
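
-- The same reports can also be generated without the interactive prompts via the
-- DBMS_WORKLOAD_REPOSITORY table functions (the snap IDs 100/101 are placeholders):
select output
from   table(dbms_workload_repository.awr_report_html(
         (select dbid from v$database),
         (select instance_number from v$instance),
         100, 101));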

}}}
{{{
set arraysize 5000

COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

-- ttitle center 'AWR IO Workload Report' skip 2
set pagesize 50000
set linesize 550

col instname       format a15              heading instname            -- instname
col hostname       format a30              heading hostname            -- hostname
col tm             format a17              heading tm                  -- "tm"
col id             format 99999            heading id                  -- "snapid"
col inst           format 90               heading inst                -- "inst"
col dur            format 999990.00        heading dur                 -- "dur"
col cpu            format 90               heading cpu                 -- "cpu"
col cap            format 9999990.00       heading cap                 -- "capacity"
col dbt            format 999990.00        heading dbt                 -- "DBTime"
col dbc            format 99990.00         heading dbc                 -- "DBcpu"
col bgc            format 99990.00         heading bgc                 -- "BGcpu"
col rman           format 9990.00          heading rman                -- "RMANcpu"
col aas            format 990.0            heading aas                 -- "AAS"
col totora         format 9999990.00       heading totora              -- "TotalOracleCPU"
col busy           format 9999990.00       heading busy                -- "BusyTime"
col load           format 990.00           heading load                -- "OSLoad"
col totos          format 9999990.00       heading totos               -- "TotalOSCPU"
col mem            format 999990.00        heading mem                 -- "PhysicalMemorymb"
col IORs           format 99990.000        heading IORs                -- "IOPsr"
col IOWs           format 99990.000        heading IOWs                -- "IOPsw"
col IORedo         format 99990.000        heading IORedo              -- "IOPsredo"
col IORmbs         format 99990.000        heading IORmbs              -- "IOrmbs"
col IOWmbs         format 99990.000        heading IOWmbs              -- "IOwmbs"
col redosizesec    format 99990.000        heading redosizesec         -- "Redombs"
col logons         format 990              heading logons              -- "Sess"
col logone         format 990              heading logone              -- "SessEnd"
col exsraw         format 99990.000        heading exsraw              -- "Execrawdelta"
col exs            format 9990.000         heading exs                 -- "Execs"
col oracpupct      format 990              heading oracpupct           -- "OracleCPUPct"
col rmancpupct     format 990              heading rmancpupct          -- "RMANCPUPct"
col oscpupct       format 990              heading oscpupct            -- "OSCPUPct"
col oscpuusr       format 990              heading oscpuusr            -- "USRPct"
col oscpusys       format 990              heading oscpusys            -- "SYSPct"
col oscpuio        format 990              heading oscpuio             -- "IOPct"
col SIORs          format 99990.000        heading SIORs               -- "IOPsSingleBlockr"
col MIORs          format 99990.000        heading MIORs               -- "IOPsMultiBlockr"
col TIORmbs        format 99990.000        heading TIORmbs             -- "Readmbs"
col SIOWs          format 99990.000        heading SIOWs               -- "IOPsSingleBlockw"
col MIOWs          format 99990.000        heading MIOWs               -- "IOPsMultiBlockw"
col TIOWmbs        format 99990.000        heading TIOWmbs             -- "Writembs"
col TIOR           format 99990.000        heading TIOR                -- "TotalIOPsr"
col TIOW           format 99990.000        heading TIOW                -- "TotalIOPsw"
col TIOALL         format 99990.000        heading TIOALL              -- "TotalIOPsALL"
col ALLRmbs        format 99990.000        heading ALLRmbs             -- "TotalReadmbs"
col ALLWmbs        format 99990.000        heading ALLWmbs             -- "TotalWritembs"
col GRANDmbs       format 99990.000        heading GRANDmbs            -- "TotalmbsALL"
col readratio      format 990              heading readratio           -- "ReadRatio"
col writeratio     format 990              heading writeratio          -- "WriteRatio"
col diskiops       format 99990.000        heading diskiops            -- "HWDiskIOPs"
col numdisks       format 99990.000        heading numdisks            -- "HWNumofDisks"
col flashcache     format 990              heading flashcache          -- "FlashCacheHitsPct"
col cellpiob       format 99990.000        heading cellpiob            -- "CellPIOICmbs"
col cellpiobss     format 99990.000        heading cellpiobss          -- "CellPIOICSmartScanmbs"
col cellpiobpreoff format 99990.000        heading cellpiobpreoff      -- "CellPIOpredoffloadmbs"
col cellpiobsi     format 99990.000        heading cellpiobsi          -- "CellPIOstorageindexmbs"
col celliouncomb   format 99990.000        heading celliouncomb        -- "CellIOuncompmbs"
col cellpiobs      format 99990.000        heading cellpiobs           -- "CellPIOsavedfilecreationmbs"
col cellpiobsrman  format 99990.000        heading cellpiobsrman       -- "CellPIOsavedRMANfilerestorembs"

SELECT * FROM
( 
  SELECT trim('&_instname') instname, 
         trim('&_dbid') db_id, 
         trim('&_hostname') hostname, 
         s0.snap_id id,
         TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
         s0.instance_number inst,
  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
   (((s20t1.value - s20t0.value) - (s21t1.value - s21t0.value)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as SIORs,
   ((s21t1.value - s21t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as MIORs,
   (((s22t1.value - s22t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as TIORmbs,
   (((s23t1.value - s23t0.value) - (s24t1.value - s24t0.value)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as SIOWs,
   ((s24t1.value - s24t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as MIOWs,
   (((s25t1.value - s25t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as TIOWmbs,
   ((s13t1.value - s13t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as IORedo, 
   (((s14t1.value - s14t0.value)/1024/1024)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as redosizesec,
    ((s33t1.value - s33t0.value) / (s20t1.value - s20t0.value))*100 as flashcache,
   (((s26t1.value - s26t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as cellpiob,
   (((s31t1.value - s31t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as cellpiobss,
   (((s29t1.value - s29t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as cellpiobpreoff,
   (((s30t1.value - s30t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as cellpiobsi,
   (((s32t1.value - s32t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as celliouncomb,
   (((s27t1.value - s27t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as cellpiobs,
   (((s28t1.value - s28t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as cellpiobsrman
FROM dba_hist_snapshot s0,
  dba_hist_snapshot s1,
  dba_hist_sysstat s13t0,       -- redo writes, diffed
  dba_hist_sysstat s13t1,
  dba_hist_sysstat s14t0,       -- redo size, diffed
  dba_hist_sysstat s14t1,
  dba_hist_sysstat s20t0,       -- physical read total IO requests, diffed
  dba_hist_sysstat s20t1,
  dba_hist_sysstat s21t0,       -- physical read total multi block requests, diffed
  dba_hist_sysstat s21t1,  
  dba_hist_sysstat s22t0,       -- physical read total bytes, diffed
  dba_hist_sysstat s22t1,  
  dba_hist_sysstat s23t0,       -- physical write total IO requests, diffed
  dba_hist_sysstat s23t1,
  dba_hist_sysstat s24t0,       -- physical write total multi block requests, diffed
  dba_hist_sysstat s24t1,
  dba_hist_sysstat s25t0,       -- physical write total bytes, diffed
  dba_hist_sysstat s25t1,
  dba_hist_sysstat s26t0,       -- cell physical IO interconnect bytes, diffed, cellpiob
  dba_hist_sysstat s26t1,
  dba_hist_sysstat s27t0,       -- cell physical IO bytes saved during optimized file creation, diffed, cellpiobs
  dba_hist_sysstat s27t1,
  dba_hist_sysstat s28t0,       -- cell physical IO bytes saved during optimized RMAN file restore, diffed, cellpiobsrman
  dba_hist_sysstat s28t1,
  dba_hist_sysstat s29t0,       -- cell physical IO bytes eligible for predicate offload, diffed, cellpiobpreoff
  dba_hist_sysstat s29t1,
  dba_hist_sysstat s30t0,       -- cell physical IO bytes saved by storage index, diffed, cellpiobsi
  dba_hist_sysstat s30t1,
  dba_hist_sysstat s31t0,       -- cell physical IO interconnect bytes returned by smart scan, diffed, cellpiobss
  dba_hist_sysstat s31t1,
  dba_hist_sysstat s32t0,       -- cell IO uncompressed bytes, diffed, celliouncomb
  dba_hist_sysstat s32t1,
  dba_hist_sysstat s33t0,       -- cell flash cache read hits, diffed, flashcache
  dba_hist_sysstat s33t1
WHERE s0.dbid            = &_dbid    -- CHANGE THE DBID HERE!
AND s1.dbid              = s0.dbid
AND s13t0.dbid            = s0.dbid
AND s13t1.dbid            = s0.dbid
AND s14t0.dbid            = s0.dbid
AND s14t1.dbid            = s0.dbid
AND s20t0.dbid            = s0.dbid
AND s20t1.dbid            = s0.dbid
AND s21t0.dbid            = s0.dbid
AND s21t1.dbid            = s0.dbid
AND s22t0.dbid            = s0.dbid
AND s22t1.dbid            = s0.dbid
AND s23t0.dbid            = s0.dbid
AND s23t1.dbid            = s0.dbid
AND s24t0.dbid            = s0.dbid
AND s24t1.dbid            = s0.dbid
AND s25t0.dbid            = s0.dbid
AND s25t1.dbid            = s0.dbid
AND s26t0.dbid            = s0.dbid
AND s26t1.dbid            = s0.dbid
AND s27t0.dbid            = s0.dbid
AND s27t1.dbid            = s0.dbid
AND s28t0.dbid            = s0.dbid
AND s28t1.dbid            = s0.dbid
AND s29t0.dbid            = s0.dbid
AND s29t1.dbid            = s0.dbid
AND s30t0.dbid            = s0.dbid
AND s30t1.dbid            = s0.dbid
AND s31t0.dbid            = s0.dbid
AND s31t1.dbid            = s0.dbid
AND s32t0.dbid            = s0.dbid
AND s32t1.dbid            = s0.dbid
AND s33t0.dbid            = s0.dbid
AND s33t1.dbid            = s0.dbid
AND s0.instance_number   = &_instancenumber   -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number   = s0.instance_number
AND s13t0.instance_number = s0.instance_number
AND s13t1.instance_number = s0.instance_number
AND s14t0.instance_number = s0.instance_number
AND s14t1.instance_number = s0.instance_number
AND s20t0.instance_number = s0.instance_number
AND s20t1.instance_number = s0.instance_number
AND s21t0.instance_number = s0.instance_number
AND s21t1.instance_number = s0.instance_number
AND s22t0.instance_number = s0.instance_number
AND s22t1.instance_number = s0.instance_number
AND s23t0.instance_number = s0.instance_number
AND s23t1.instance_number = s0.instance_number
AND s24t0.instance_number = s0.instance_number
AND s24t1.instance_number = s0.instance_number
AND s25t0.instance_number = s0.instance_number
AND s25t1.instance_number = s0.instance_number
AND s26t0.instance_number = s0.instance_number
AND s26t1.instance_number = s0.instance_number
AND s27t0.instance_number = s0.instance_number
AND s27t1.instance_number = s0.instance_number
AND s28t0.instance_number = s0.instance_number
AND s28t1.instance_number = s0.instance_number
AND s29t0.instance_number = s0.instance_number
AND s29t1.instance_number = s0.instance_number
AND s30t0.instance_number = s0.instance_number
AND s30t1.instance_number = s0.instance_number
AND s31t0.instance_number = s0.instance_number
AND s31t1.instance_number = s0.instance_number
AND s32t0.instance_number = s0.instance_number
AND s32t1.instance_number = s0.instance_number
AND s33t0.instance_number = s0.instance_number
AND s33t1.instance_number = s0.instance_number
AND s1.snap_id            = s0.snap_id + 1
AND s13t0.snap_id         = s0.snap_id
AND s13t1.snap_id         = s0.snap_id + 1
AND s14t0.snap_id         = s0.snap_id
AND s14t1.snap_id         = s0.snap_id + 1
AND s20t0.snap_id         = s0.snap_id
AND s20t1.snap_id         = s0.snap_id + 1
AND s21t0.snap_id         = s0.snap_id
AND s21t1.snap_id         = s0.snap_id + 1
AND s22t0.snap_id         = s0.snap_id
AND s22t1.snap_id         = s0.snap_id + 1
AND s23t0.snap_id         = s0.snap_id
AND s23t1.snap_id         = s0.snap_id + 1
AND s24t0.snap_id         = s0.snap_id
AND s24t1.snap_id         = s0.snap_id + 1
AND s25t0.snap_id         = s0.snap_id
AND s25t1.snap_id         = s0.snap_id + 1
AND s26t0.snap_id         = s0.snap_id
AND s26t1.snap_id         = s0.snap_id + 1
AND s27t0.snap_id         = s0.snap_id
AND s27t1.snap_id         = s0.snap_id + 1
AND s28t0.snap_id         = s0.snap_id
AND s28t1.snap_id         = s0.snap_id + 1
AND s29t0.snap_id         = s0.snap_id
AND s29t1.snap_id         = s0.snap_id + 1
AND s30t0.snap_id         = s0.snap_id
AND s30t1.snap_id         = s0.snap_id + 1
AND s31t0.snap_id         = s0.snap_id
AND s31t1.snap_id         = s0.snap_id + 1
AND s32t0.snap_id         = s0.snap_id
AND s32t1.snap_id         = s0.snap_id + 1
AND s33t0.snap_id         = s0.snap_id
AND s33t1.snap_id         = s0.snap_id + 1
AND s13t0.stat_name       = 'redo writes'
AND s13t1.stat_name       = s13t0.stat_name
AND s14t0.stat_name       = 'redo size'
AND s14t1.stat_name       = s14t0.stat_name
AND s20t0.stat_name       = 'physical read total IO requests'
AND s20t1.stat_name       = s20t0.stat_name
AND s21t0.stat_name       = 'physical read total multi block requests'
AND s21t1.stat_name       = s21t0.stat_name
AND s22t0.stat_name       = 'physical read total bytes'
AND s22t1.stat_name       = s22t0.stat_name
AND s23t0.stat_name       = 'physical write total IO requests'
AND s23t1.stat_name       = s23t0.stat_name
AND s24t0.stat_name       = 'physical write total multi block requests'
AND s24t1.stat_name       = s24t0.stat_name
AND s25t0.stat_name       = 'physical write total bytes'
AND s25t1.stat_name       = s25t0.stat_name
AND s26t0.stat_name       = 'cell physical IO interconnect bytes'
AND s26t1.stat_name       = s26t0.stat_name
AND s27t0.stat_name       = 'cell physical IO bytes saved during optimized file creation'
AND s27t1.stat_name       = s27t0.stat_name
AND s28t0.stat_name       = 'cell physical IO bytes saved during optimized RMAN file restore'
AND s28t1.stat_name       = s28t0.stat_name
AND s29t0.stat_name       = 'cell physical IO bytes eligible for predicate offload'
AND s29t1.stat_name       = s29t0.stat_name
AND s30t0.stat_name       = 'cell physical IO bytes saved by storage index'
AND s30t1.stat_name       = s30t0.stat_name
AND s31t0.stat_name       = 'cell physical IO interconnect bytes returned by smart scan'
AND s31t1.stat_name       = s31t0.stat_name
AND s32t0.stat_name       = 'cell IO uncompressed bytes'
AND s32t1.stat_name       = s32t0.stat_name
AND s33t0.stat_name       = 'cell flash cache read hits'
AND s33t1.stat_name       = s33t0.stat_name
)
-- WHERE 
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id  in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (338)
-- aas > 1
-- oscpuio > 50
-- rmancpupct > 0
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1     -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900     -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss')     -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
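{{{
The query above repeats the same EXTRACT(...) expression for every metric: it converts the snapshot interval (s1 - s0) to minutes, and each diffed counter is divided by that duration times 60 to get a per-second rate. A minimal sketch of that normalization (helper names are mine, not part of the script):

```python
from datetime import datetime

def interval_minutes(t0, t1):
    # Snapshot interval in minutes, rounded to 2 decimals --
    # the Python equivalent of the repeated EXTRACT(DAY/HOUR/MINUTE/SECOND) sum
    return round((t1 - t0).total_seconds() / 60, 2)

def per_second(delta, t0, t1):
    # Normalize a diffed counter to a per-second rate,
    # matching the query's "delta / (dur * 60)" pattern
    return delta / (interval_minutes(t0, t1) * 60)

t0 = datetime(2010, 1, 17, 10, 0, 0)
t1 = datetime(2010, 1, 17, 10, 30, 0)      # a 30-minute snap interval
print(interval_minutes(t0, t1))            # 30.0
print(per_second(1_800_000, t0, t1))       # 1000.0 (e.g. IOPs)
```

The same divisor appears verbatim in every rate column (SIORs, MIORs, TIORmbs, ...), which is why the snap duration sub-expression is repeated so many times in the SQL.
}}}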
{{{

Network throughput requirements below. TX is bytes transmitted, RX is bytes received.
https://github.com/carlos-sierra/esp_collect/blob/master/sql/esp_collect_requirements_awr.sql


SUM(CASE WHEN h.stat_name = 'bytes sent via SQL*Net to client'       THEN h.value ELSE 0 END) tx_cl,
SUM(CASE WHEN h.stat_name = 'bytes received via SQL*Net from client' THEN h.value ELSE 0 END) rx_cl,
SUM(CASE WHEN h.stat_name = 'bytes sent via SQL*Net to dblink'       THEN h.value ELSE 0 END) tx_dl,
SUM(CASE WHEN h.stat_name = 'bytes received via SQL*Net from dblink' THEN h.value ELSE 0 END) rx_dl


ROUND(MAX((tx_cl + rx_cl + tx_dl + rx_dl) / elapsed_sec)) nw_peak_bytes,
ROUND(MAX((tx_cl + tx_dl) / elapsed_sec)) nw_tx_peak_bytes,
ROUND(MAX((rx_cl + rx_dl) / elapsed_sec)) nw_rx_peak_bytes,


Interconnect requirements below

SUM(CASE WHEN h.stat_name = 'gc cr blocks received'      THEN h.value ELSE 0 END) gc_cr_bl_rx,
SUM(CASE WHEN h.stat_name = 'gc current blocks received' THEN h.value ELSE 0 END) gc_cur_bl_rx,
SUM(CASE WHEN h.stat_name = 'gc cr blocks served'        THEN h.value ELSE 0 END) gc_cr_bl_serv,
SUM(CASE WHEN h.stat_name = 'gc current blocks served'   THEN h.value ELSE 0 END) gc_cur_bl_serv,
SUM(CASE WHEN h.stat_name = 'gcs messages sent'          THEN h.value ELSE 0 END) gcs_msg_sent,
SUM(CASE WHEN h.stat_name = 'ges messages sent'          THEN h.value ELSE 0 END) ges_msg_sent,
SUM(CASE WHEN d.name      = 'gcs msgs received'          THEN d.value ELSE 0 END) gcs_msg_rcv,
SUM(CASE WHEN d.name      = 'ges msgs received'          THEN d.value ELSE 0 END) ges_msg_rcv,
SUM(CASE WHEN p.parameter_name = 'db_block_size'         THEN to_number(p.value) ELSE 0 END) block_size

ROUND(MAX(((gc_cr_bl_rx + gc_cur_bl_rx + gc_cr_bl_serv + gc_cur_bl_serv) * block_size
         + (gcs_msg_sent + ges_msg_sent + gcs_msg_rcv + ges_msg_rcv) * 200) / elapsed_sec)) ic_peak_bytes,


}}}
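{{{
The interconnect estimate above sizes block traffic at db_block_size bytes per block and GCS/GES messages at a flat ~200 bytes each (the figure the snippet assumes), then divides the total by the elapsed seconds; note the division should apply to the whole byte sum, not just the message term. A small sketch of that arithmetic (function name is mine, for illustration):

```python
def ic_bytes_per_sec(gc_blocks, msgs, block_size, elapsed_sec):
    # Interconnect throughput estimate: blocks at db_block_size bytes
    # each, plus GCS/GES messages at the ~200-byte assumption,
    # normalized over the interval
    return round((gc_blocks * block_size + msgs * 200) / elapsed_sec)

# 1M gc blocks and 4M messages over a 1-hour interval, 8k block size
print(ic_bytes_per_sec(1_000_000, 4_000_000, 8192, 3600))   # 2497778
```

The SQL*Net network figures work the same way, only simpler: the tx/rx byte counters are already in bytes, so the peak is just their sum divided by elapsed_sec.
}}}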
{{{
set arraysize 5000

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

-- ttitle center 'AWR Services Statistics Report' skip 2
set pagesize 50000
set linesize 550

col instname    format a15
col hostname    format a30
col tm          format a15              heading tm           --"Snap|Start|Time"
col id          format 99999            heading id           --"Snap|ID"
col inst        format 90               heading inst         --"i|n|s|t|#"
col dur         format 999990.00        heading dur          --"Snap|Dur|(m)"
col cpu         format 90               heading cpu          --"C|P|U"
col cap         format 9999990.00       heading cap          --"***|Total|CPU|Time|(s)"
col dbt         format 999990.00        heading dbt          --"DB|Time"
col dbc         format 99990.00         heading dbc          --"DB|CPU"
col bgc         format 99990.00         heading bgc          --"Bg|CPU"
col rman        format 9990.00          heading rman         --"RMAN|CPU"
col aas         format 990.0            heading aas          --"A|A|S"
col totora      format 9999990.00       heading totora       --"***|Total|Oracle|CPU|(s)"
col busy        format 9999990.00       heading busy         --"Busy|Time"
col load        format 990.00           heading load         --"OS|Load"
col totos       format 9999990.00       heading totos        --"***|Total|OS|CPU|(s)"
col mem         format 999990.00        heading mem          --"Physical|Memory|(mb)"
col IORs        format 9990.000         heading IORs         --"IOPs|r"
col IOWs        format 9990.000         heading IOWs         --"IOPs|w"
col IORedo      format 9990.000         heading IORedo       --"IOPs|redo"
col IORmbs      format 9990.000         heading IORmbs       --"IO r|(mb)/s"
col IOWmbs      format 9990.000         heading IOWmbs       --"IO w|(mb)/s"
col redosizesec format 9990.000         heading redosizesec  --"Redo|(mb)/s"
col logons      format 990              heading logons       --"Sess"
col logone      format 990              heading logone       --"Sess|End"
col exsraw      format 99990.000        heading exsraw       --"Exec|raw|delta"
col exs         format 9990.000         heading exs          --"Exec|/s"
col oracpupct   format 990              heading oracpupct    --"Oracle|CPU|%"
col rmancpupct  format 990              heading rmancpupct   --"RMAN|CPU|%"
col oscpupct    format 990              heading oscpupct     --"OS|CPU|%"
col oscpuusr    format 990              heading oscpuusr     --"U|S|R|%"
col oscpusys    format 990              heading oscpusys     --"S|Y|S|%"
col oscpuio     format 990              heading oscpuio      --"I|O|%"
col phy_reads   format 99999990.00      heading phy_reads    --"physical|reads"
col log_reads   format 99999990.00      heading log_reads    --"logical|reads"

select  trim('&_instname') instname, trim('&_dbid') db_id, trim('&_hostname') hostname, snap_id,
        TO_CHAR(tm,'MM/DD/YY HH24:MI:SS') tm, 
        inst,
        dur,
        service_name, 
        round(db_time / 1000000, 1) as dbt, 
        round(db_cpu  / 1000000, 1) as dbc,
        phy_reads, 
        log_reads,
        aas
 from (select 
          s1.snap_id,
          s1.tm,
          s1.inst,
          s1.dur,
          s1.service_name, 
          sum(decode(s1.stat_name, 'DB time', s1.diff, 0)) db_time,
          sum(decode(s1.stat_name, 'DB CPU',  s1.diff, 0)) db_cpu,
          sum(decode(s1.stat_name, 'physical reads', s1.diff, 0)) phy_reads,
          sum(decode(s1.stat_name, 'session logical reads', s1.diff, 0)) log_reads,
          round(sum(decode(s1.stat_name, 'DB time', s1.diff, 0))/1000000,1)/60 / s1.dur as aas
   from
     (select s0.snap_id snap_id,
             s0.END_INTERVAL_TIME tm,
             s0.instance_number inst,
            round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
             e.service_name     service_name, 
             e.stat_name        stat_name, 
             e.value - b.value  diff
       from dba_hist_snapshot s0,
            dba_hist_snapshot s1,
            dba_hist_service_stat b,
            dba_hist_service_stat e
       where 
         s0.dbid                  = &_dbid            -- CHANGE THE DBID HERE!
         and s1.dbid              = s0.dbid
         and b.dbid               = s0.dbid
         and e.dbid               = s0.dbid
         and s0.instance_number   = &_instancenumber  -- CHANGE THE INSTANCE_NUMBER HERE!
         and s1.instance_number   = s0.instance_number
         and b.instance_number    = s0.instance_number
         and e.instance_number    = s0.instance_number
         and s1.snap_id           = s0.snap_id + 1
         and b.snap_id            = s0.snap_id
         and e.snap_id            = s0.snap_id + 1
         and b.stat_id            = e.stat_id
         and b.service_name_hash  = e.service_name_hash) s1
   group by 
     s1.snap_id, s1.tm, s1.inst, s1.dur, s1.service_name
   order by 
     snap_id asc, aas desc, service_name)
-- where 
-- AND TO_CHAR(tm,'D') >= 1     -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(tm,'D') <= 7
-- AND TO_CHAR(tm,'HH24MI') >= 0900     -- Hour
-- AND TO_CHAR(tm,'HH24MI') <= 1800
-- AND tm >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss')     -- Data range
-- AND tm <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
-- snap_id = 338
-- and snap_id >= 335 and snap_id <= 339
-- aas > .5
;
}}}
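{{{
The services query derives AAS per service from the diffed 'DB time' statistic: microseconds to seconds, seconds to minutes, divided by the snap duration in minutes. The same arithmetic as a sketch (helper name is mine):

```python
def aas(db_time_us, dur_minutes):
    # Average Active Sessions as the query computes it:
    # DB time (microseconds) -> seconds -> minutes, over the
    # snapshot duration in minutes
    return round(db_time_us / 1_000_000, 1) / 60 / dur_minutes

# 3600s of DB time inside a 30-minute snap = 2 average active sessions
print(aas(3_600_000_000, 30))   # 2.0
```
}}}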
{{{
trx/sec = [UCOMS]+[URS]
}}}
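{{{
The note above means: transactions per second is the sum of the diffed 'user commits' (UCOMS) and 'user rollbacks' (URS) counters over the interval. As a sketch (function name is mine):

```python
def trx_per_sec(ucoms, urs, elapsed_sec):
    # trx/sec = user commits + user rollbacks, both diffed
    # between snapshots, divided by the elapsed seconds
    return (ucoms + urs) / elapsed_sec

print(trx_per_sec(54_000, 18_000, 3600))   # 20.0
```
}}}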

{{{
set arraysize 5000

COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

-- ttitle center 'AWR CPU and IO Workload Report' skip 2
set pagesize 50000
set linesize 550

col instname    format a15              heading instname        -- instname
col hostname    format a30              heading hostname        -- hostname
col tm          format a17              heading tm              -- "tm"
col id          format 99999            heading id              -- "snapid"
col inst        format 90               heading inst            -- "inst"
col dur         format 999990.00        heading dur             -- "dur"
col cpu         format 90               heading cpu             -- "cpu"
col cap         format 9999990.00       heading cap             -- "capacity"
col dbt         format 999990.00        heading dbt             -- "DBTime"
col dbc         format 99990.00         heading dbc             -- "DBcpu"
col bgc         format 99990.00         heading bgc             -- "BGcpu"
col rman        format 9990.00          heading rman            -- "RMANcpu"
col aas         format 990.0            heading aas             -- "AAS"
col totora      format 9999990.00       heading totora          -- "TotalOracleCPU"
col busy        format 9999990.00       heading busy            -- "BusyTime"
col load        format 990.00           heading load            -- "OSLoad"
col totos       format 9999990.00       heading totos           -- "TotalOSCPU"
col mem         format 999990.00        heading mem             -- "PhysicalMemorymb"
col IORs        format 9990.000         heading IORs            -- "IOPsr"
col IOWs        format 9990.000         heading IOWs            -- "IOPsw"
col IORedo      format 9990.000         heading IORedo          -- "IOPsredo"
col IORmbs      format 9990.000         heading IORmbs          -- "IOrmbs"
col IOWmbs      format 9990.000         heading IOWmbs          -- "IOwmbs"
col redosizesec format 9990.000         heading redosizesec     -- "Redombs"
col logons      format 990              heading logons          -- "Sess"
col logone      format 990              heading logone          -- "SessEnd"
col exsraw      format 99990.000        heading exsraw          -- "Execrawdelta"
col exs         format 9990.000         heading exs             -- "Execs"
col ucs         format 9990.000         heading ucs             -- "UserCalls"
col ucoms       format 9990.000         heading ucoms           -- "Commit"
col urs         format 9990.000         heading urs             -- "Rollback"
col lios        format 9999990.00       heading lios            -- "LIOs"
col oracpupct   format 990              heading oracpupct       -- "OracleCPUPct"
col rmancpupct  format 990              heading rmancpupct      -- "RMANCPUPct"
col oscpupct    format 990              heading oscpupct        -- "OSCPUPct"
col oscpuusr    format 990              heading oscpuusr        -- "USRPct"
col oscpusys    format 990              heading oscpusys        -- "SYSPct"
col oscpuio     format 990              heading oscpuio         -- "IOPct"

SELECT * FROM
( 
  SELECT trim('&_instname') instname, 
         trim('&_dbid') db_id, 
         trim('&_hostname') hostname, 
         s0.snap_id id,
         TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
         s0.instance_number inst,
  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
  round(s4t1.value/1024/1024/1024,2) AS memgb,
  round(s37t1.value/1024/1024/1024,2) AS sgagb,
  round(s36t1.value/1024/1024/1024,2) AS pgagb,
     s9t0.value logons, 
   ((s10t1.value - s10t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as exs, 
   ((s40t1.value - s40t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as ucs, 
   ((s38t1.value - s38t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as ucoms, 
   ((s39t1.value - s39t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as urs,
   ((s41t1.value - s41t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as lios
FROM dba_hist_snapshot s0,
  dba_hist_snapshot s1,
  dba_hist_osstat s4t1,         -- osstat just get the end value 
  (select snap_id, dbid, instance_number, sum(value) value from dba_hist_sga group by snap_id, dbid, instance_number) s37t1, -- total SGA allocated, just get the end value
  dba_hist_pgastat s36t1,		-- total PGA allocated, just get the end value 
  dba_hist_sysstat s9t0,        -- logons current, sysstat absolute value should not be diffed
  dba_hist_sysstat s10t0,       -- execute count, diffed
  dba_hist_sysstat s10t1,
  dba_hist_sysstat s38t0,       -- user commits, diffed
  dba_hist_sysstat s38t1,
  dba_hist_sysstat s39t0,       -- user rollbacks, diffed
  dba_hist_sysstat s39t1,
  dba_hist_sysstat s40t0,       -- user calls, diffed
  dba_hist_sysstat s40t1,
  dba_hist_sysstat s41t0,       -- session logical reads, diffed
  dba_hist_sysstat s41t1
WHERE s0.dbid            = &_dbid    -- CHANGE THE DBID HERE!
AND s1.dbid              = s0.dbid
AND s4t1.dbid            = s0.dbid
AND s9t0.dbid            = s0.dbid
AND s10t0.dbid            = s0.dbid
AND s10t1.dbid            = s0.dbid
AND s36t1.dbid            = s0.dbid
AND s37t1.dbid            = s0.dbid
AND s38t0.dbid            = s0.dbid
AND s38t1.dbid            = s0.dbid
AND s39t0.dbid            = s0.dbid
AND s39t1.dbid            = s0.dbid
AND s40t0.dbid            = s0.dbid
AND s40t1.dbid            = s0.dbid
AND s41t0.dbid            = s0.dbid
AND s41t1.dbid            = s0.dbid
AND s0.instance_number   = &_instancenumber   -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number   = s0.instance_number
AND s4t1.instance_number = s0.instance_number
AND s9t0.instance_number = s0.instance_number
AND s10t0.instance_number = s0.instance_number
AND s10t1.instance_number = s0.instance_number
AND s36t1.instance_number = s0.instance_number
AND s37t1.instance_number = s0.instance_number
AND s38t0.instance_number = s0.instance_number
AND s38t1.instance_number = s0.instance_number
AND s39t0.instance_number = s0.instance_number
AND s39t1.instance_number = s0.instance_number
AND s40t0.instance_number = s0.instance_number
AND s40t1.instance_number = s0.instance_number
AND s41t0.instance_number = s0.instance_number
AND s41t1.instance_number = s0.instance_number
AND s1.snap_id           = s0.snap_id + 1
AND s4t1.snap_id         = s0.snap_id + 1
AND s36t1.snap_id        = s0.snap_id + 1
AND s37t1.snap_id        = s0.snap_id + 1
AND s9t0.snap_id         = s0.snap_id
AND s10t0.snap_id         = s0.snap_id
AND s10t1.snap_id         = s0.snap_id + 1
AND s38t0.snap_id         = s0.snap_id
AND s38t1.snap_id         = s0.snap_id + 1
AND s39t0.snap_id         = s0.snap_id
AND s39t1.snap_id         = s0.snap_id + 1
AND s40t0.snap_id         = s0.snap_id
AND s40t1.snap_id         = s0.snap_id + 1
AND s41t0.snap_id         = s0.snap_id
AND s41t1.snap_id         = s0.snap_id + 1
AND s4t1.stat_name       = 'PHYSICAL_MEMORY_BYTES'
AND s36t1.name           = 'total PGA allocated'
AND s9t0.stat_name       = 'logons current'
AND s10t0.stat_name       = 'execute count'
AND s10t1.stat_name       = s10t0.stat_name
AND s38t0.stat_name       = 'user commits'
AND s38t1.stat_name       = s38t0.stat_name
AND s39t0.stat_name       = 'user rollbacks'
AND s39t1.stat_name       = s39t0.stat_name
AND s40t0.stat_name       = 'user calls'
AND s40t1.stat_name       = s40t0.stat_name
AND s41t0.stat_name       = 'session logical reads'
AND s41t1.stat_name       = s41t0.stat_name
)
-- WHERE 
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id  in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (336)
-- aas > 1
-- oracpupct > 50
-- oscpupct > 50
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1     -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900     -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss')     -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
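All of the per-second rate columns in the query above (exs, ucs, ucoms, urs, lios) follow the same recipe: diff a cumulative sysstat counter across two adjacent snapshots and divide by the snapshot interval in seconds (the EXTRACT arithmetic yields the interval in minutes, hence the trailing *60). A minimal sketch of that arithmetic in Python, with illustrative names that are not part of the script:

```python
from datetime import datetime

def per_second(v0, v1, t0, t1):
    """Rate of a cumulative counter between two AWR snapshots.

    v0/v1: counter values at the start/end snapshot (e.g. 'execute count');
    t0/t1: END_INTERVAL_TIME of the two snapshots.
    """
    minutes = round((t1 - t0).total_seconds() / 60, 2)  # the script's 'dur'
    return (v1 - v0) / (minutes * 60)                   # delta per second

# One hour apart, 7200 executions in between -> 2 executes/sec
rate = per_second(1_000_000, 1_007_200,
                  datetime(2010, 1, 17, 9, 0),
                  datetime(2010, 1, 17, 10, 0))
```

The same division appears once per statistic in the SQL because each `sNtX` alias is a separate self-join of dba_hist_sysstat.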
{{{
set arraysize 5000

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

-- ttitle center 'AWR Top Events Report' skip 2
set pagesize 50000
set linesize 550

col instname    format a15              
col hostname    format a30              
col snap_id     format 99999            heading snap_id       -- "snapid"   
col tm          format a17              heading tm            -- "tm"       
col inst        format 90               heading inst          -- "inst"     
col dur         format 999990.00        heading dur           -- "dur"      
col event       format a55              heading event         -- "Event"    
col event_rank  format 90               heading event_rank    -- "EventRank"
col waits       format 9999999990.00    heading waits         -- "Waits"    
col time        format 9999999990.00    heading time          -- "Timesec"  
col avgwt       format 99990.00         heading avgwt         -- "Avgwtms"  
col pctdbt      format 9990.0           heading pctdbt        -- "DBTimepct"
col aas         format 990.0            heading aas           -- "Aas"      
col wait_class  format a15              heading wait_class    -- "WaitClass"

spool awr_topevents-tableau-&_instname-&_hostname..csv
select trim('&_instname') instname, trim('&_dbid') db_id, trim('&_hostname') hostname, snap_id, tm, inst, dur, event, event_rank, waits, time, avgwt, pctdbt, aas, wait_class
from 
      (select snap_id, TO_CHAR(tm,'MM/DD/YY HH24:MI:SS') tm, inst, dur, event, waits, time, avgwt, pctdbt, aas, wait_class, 
            DENSE_RANK() OVER (
          PARTITION BY snap_id ORDER BY time DESC) event_rank
      from 
              (
              select * from 
                    (select * from 
                          (select 
                            s0.snap_id snap_id,
                            s0.END_INTERVAL_TIME tm,
                            s0.instance_number inst,
                            round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                    + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                    + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                    + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
                            e.event_name event,
                            e.total_waits - nvl(b.total_waits,0)       waits,
                            round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2)  time,     -- THIS IS EVENT (sec)
                            round (decode ((e.total_waits - nvl(b.total_waits, 0)), 0, to_number(NULL), ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000) / (e.total_waits - nvl(b.total_waits,0))), 2) avgwt,
                            ((round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2)) / NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt,     -- THIS IS EVENT (sec) / DB TIME (sec)
                            (round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2))/60 /  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                            + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                            + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                            + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,     -- THIS IS EVENT (min) / SnapDur (min) TO GET THE % DB CPU ON AAS
                            e.wait_class wait_class
                            from 
                                 dba_hist_snapshot s0,
                                 dba_hist_snapshot s1,
                                 dba_hist_system_event b,
                                 dba_hist_system_event e,
                                 dba_hist_sys_time_model s5t0,
                                 dba_hist_sys_time_model s5t1
                            where 
                              s0.dbid                   = &_dbid            -- CHANGE THE DBID HERE!
                              AND s1.dbid               = s0.dbid
                              and b.dbid(+)             = s0.dbid
                              and e.dbid                = s0.dbid
                              AND s5t0.dbid             = s0.dbid
                              AND s5t1.dbid             = s0.dbid
                              AND s0.instance_number    = &_instancenumber  -- CHANGE THE INSTANCE_NUMBER HERE!
                              AND s1.instance_number    = s0.instance_number
                              and b.instance_number(+)  = s0.instance_number
                              and e.instance_number     = s0.instance_number
                              AND s5t0.instance_number = s0.instance_number
                              AND s5t1.instance_number = s0.instance_number
                              AND s1.snap_id            = s0.snap_id + 1
                              AND b.snap_id(+)          = s0.snap_id
                              and e.snap_id             = s0.snap_id + 1
                              AND s5t0.snap_id         = s0.snap_id
                              AND s5t1.snap_id         = s0.snap_id + 1
                              AND s5t0.stat_name       = 'DB time'
                              AND s5t1.stat_name       = s5t0.stat_name
                                    and b.event_id            = e.event_id
                                    and e.wait_class          != 'Idle'
                                    and e.total_waits         > nvl(b.total_waits,0)
                                    and e.event_name not in ('smon timer', 
                                                             'pmon timer', 
                                                             'dispatcher timer',
                                                             'dispatcher listen timer',
                                                             'rdbms ipc message')
                                  order by snap_id, time desc, waits desc, event)
                    union all
                              select 
                                       s0.snap_id snap_id,
                                       s0.END_INTERVAL_TIME tm,
                                       s0.instance_number inst,
                                       round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                            + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                            + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                            + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
                                        'CPU time',
                                        0,
                                        round ((s6t1.value - s6t0.value) / 1000000, 2) as time,     -- THIS IS DB CPU (sec)
                                        0,
                                        ((round ((s6t1.value - s6t0.value) / 1000000, 2)) / NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt,     -- THIS IS DB CPU (sec) / DB TIME (sec)..TO GET % OF DB CPU ON DB TIME FOR TOP 5 TIMED EVENTS SECTION
                                        (round ((s6t1.value - s6t0.value) / 1000000, 2))/60 /  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                    + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                    + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                    + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,  -- THIS IS DB CPU (min) / SnapDur (min) TO GET THE % DB CPU ON AAS
                                        'CPU'
                                      from 
                                        dba_hist_snapshot s0,
                                        dba_hist_snapshot s1,
                                        dba_hist_sys_time_model s6t0,
                                        dba_hist_sys_time_model s6t1,
                                        dba_hist_sys_time_model s5t0,
                                        dba_hist_sys_time_model s5t1
                                      WHERE 
                                      s0.dbid                   = &_dbid              -- CHANGE THE DBID HERE!
                                      AND s1.dbid               = s0.dbid
                                      AND s6t0.dbid            = s0.dbid
                                      AND s6t1.dbid            = s0.dbid
                                      AND s5t0.dbid            = s0.dbid
                                      AND s5t1.dbid            = s0.dbid
                                      AND s0.instance_number    = &_instancenumber    -- CHANGE THE INSTANCE_NUMBER HERE!
                                      AND s1.instance_number    = s0.instance_number
                                      AND s6t0.instance_number = s0.instance_number
                                      AND s6t1.instance_number = s0.instance_number
                                      AND s5t0.instance_number = s0.instance_number
                                      AND s5t1.instance_number = s0.instance_number
                                      AND s1.snap_id            = s0.snap_id + 1
                                      AND s6t0.snap_id         = s0.snap_id
                                      AND s6t1.snap_id         = s0.snap_id + 1
                                      AND s5t0.snap_id         = s0.snap_id
                                      AND s5t1.snap_id         = s0.snap_id + 1
                                      AND s6t0.stat_name       = 'DB CPU'
                                      AND s6t1.stat_name       = s6t0.stat_name
                                      AND s5t0.stat_name       = 'DB time'
                                      AND s5t1.stat_name       = s5t0.stat_name
                    union all
                                      (select 
                                               dbtime.snap_id,
                                               dbtime.tm,
                                               dbtime.inst,
                                               dbtime.dur,
                                               'CPU wait',
                                                0,
                                                round(dbtime.time - accounted_dbtime.time, 2) time,     -- THIS IS UNACCOUNTED FOR DB TIME (sec)
                                                0,
                                                ((dbtime.aas - accounted_dbtime.aas)/ NULLIF(nvl(dbtime.aas,0),0))*100 as pctdbt,     -- THIS IS UNACCOUNTED FOR DB TIME (sec) / DB TIME (sec)
                                                round(dbtime.aas - accounted_dbtime.aas, 2) aas,     -- AAS OF UNACCOUNTED FOR DB TIME
                                                'CPU wait'
                                      from
                                                  (select  
                                                     s0.snap_id, 
                                                     s0.END_INTERVAL_TIME tm,
                                                     s0.instance_number inst,
                                                    round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                    + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                    + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                    + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
                                                    'DB time',
                                                    0,
                                                    round ((s5t1.value - s5t0.value) / 1000000, 2) as time,     -- THIS IS DB time (sec)
                                                    0,
                                                    0,
                                                     (round ((s5t1.value - s5t0.value) / 1000000, 2))/60 /  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                    + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                    + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                    + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,
                                                    'DB time'
                                                  from 
                                                                    dba_hist_snapshot s0,
                                                                    dba_hist_snapshot s1,
                                                                    dba_hist_sys_time_model s5t0,
                                                                    dba_hist_sys_time_model s5t1
                                                                  WHERE 
                                                                  s0.dbid                   = &_dbid              -- CHANGE THE DBID HERE!
                                                                  AND s1.dbid               = s0.dbid
                                                                  AND s5t0.dbid            = s0.dbid
                                                                  AND s5t1.dbid            = s0.dbid
                                                                  AND s0.instance_number    = &_instancenumber    -- CHANGE THE INSTANCE_NUMBER HERE!
                                                                  AND s1.instance_number    = s0.instance_number
                                                                  AND s5t0.instance_number = s0.instance_number
                                                                  AND s5t1.instance_number = s0.instance_number
                                                                  AND s1.snap_id            = s0.snap_id + 1
                                                                  AND s5t0.snap_id         = s0.snap_id
                                                                  AND s5t1.snap_id         = s0.snap_id + 1
                                                                  AND s5t0.stat_name       = 'DB time'
                                                                  AND s5t1.stat_name       = s5t0.stat_name) dbtime, 
                                                  (select snap_id, sum(time) time, sum(AAS) aas from 
                                                          (select * from (select 
                                                                s0.snap_id snap_id,
                                                                s0.END_INTERVAL_TIME tm,
                                                                s0.instance_number inst,
                                                                round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                        + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                        + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                        + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
                                                                e.event_name event,
                                                                e.total_waits - nvl(b.total_waits,0)       waits,
                                                                round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2)  time,     -- THIS IS EVENT (sec)
                                                                round (decode ((e.total_waits - nvl(b.total_waits, 0)), 0, to_number(NULL), ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000) / (e.total_waits - nvl(b.total_waits,0))), 2) avgwt,
                                                                ((round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2)) / NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt,     -- THIS IS EVENT (sec) / DB TIME (sec)
                                                                (round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2))/60 /  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                                + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                                + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                                + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,     -- THIS IS EVENT (min) / SnapDur (min) TO GET THE % DB CPU ON AAS
                                                                e.wait_class wait_class
                                                          from 
                                                               dba_hist_snapshot s0,
                                                               dba_hist_snapshot s1,
                                                               dba_hist_system_event b,
                                                               dba_hist_system_event e,
                                                               dba_hist_sys_time_model s5t0,
                                                               dba_hist_sys_time_model s5t1
                                                          where 
                                                            s0.dbid                   = &_dbid            -- CHANGE THE DBID HERE!
                                                            AND s1.dbid               = s0.dbid
                                                            and b.dbid(+)             = s0.dbid
                                                            and e.dbid                = s0.dbid
                                                            AND s5t0.dbid             = s0.dbid
                                                            AND s5t1.dbid             = s0.dbid
                                                            AND s0.instance_number    = &_instancenumber  -- CHANGE THE INSTANCE_NUMBER HERE!
                                                            AND s1.instance_number    = s0.instance_number
                                                            and b.instance_number(+)  = s0.instance_number
                                                            and e.instance_number     = s0.instance_number
                                                            AND s5t0.instance_number = s0.instance_number
                                                            AND s5t1.instance_number = s0.instance_number
                                                            AND s1.snap_id            = s0.snap_id + 1
                                                            AND b.snap_id(+)          = s0.snap_id
                                                            and e.snap_id             = s0.snap_id + 1
                                                            AND s5t0.snap_id         = s0.snap_id
                                                            AND s5t1.snap_id         = s0.snap_id + 1
                                                      AND s5t0.stat_name       = 'DB time'
                                                      AND s5t1.stat_name       = s5t0.stat_name
                                                            and b.event_id            = e.event_id
                                                            and e.wait_class          != 'Idle'
                                                            and e.total_waits         > nvl(b.total_waits,0)
                                                            and e.event_name not in ('smon timer', 
                                                                                     'pmon timer', 
                                                                                     'dispatcher timer',
                                                                                     'dispatcher listen timer',
                                                                                     'rdbms ipc message')
                                                          order by snap_id, time desc, waits desc, event)
                                                    union all
                                                          select 
                                                                   s0.snap_id snap_id,
                                                                   s0.END_INTERVAL_TIME tm,
                                                                   s0.instance_number inst,
                                                                   round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                        + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                        + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                        + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
                                                                    'CPU time',
                                                                    0,
                                                                    round ((s6t1.value - s6t0.value) / 1000000, 2) as time,     -- THIS IS DB CPU (sec)
                                                                    0,
                                                                    ((round ((s6t1.value - s6t0.value) / 1000000, 2)) / NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt,     -- THIS IS DB CPU (sec) / DB TIME (sec)..TO GET % OF DB CPU ON DB TIME FOR TOP 5 TIMED EVENTS SECTION
                                                                    (round ((s6t1.value - s6t0.value) / 1000000, 2))/60 /  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                                + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                                + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                                + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,  -- THIS IS DB CPU (min) / SnapDur (min) TO GET THE % DB CPU ON AAS
                                                                    'CPU'
                                                                  from 
                                                                    dba_hist_snapshot s0,
                                                                    dba_hist_snapshot s1,
                                                                    dba_hist_sys_time_model s6t0,
                                                                    dba_hist_sys_time_model s6t1,
                                                                    dba_hist_sys_time_model s5t0,
                                                                    dba_hist_sys_time_model s5t1
                                                                  WHERE 
                                                                  s0.dbid                   = &_dbid              -- CHANGE THE DBID HERE!
                                                                  AND s1.dbid               = s0.dbid
                                                                  AND s6t0.dbid            = s0.dbid
                                                                  AND s6t1.dbid            = s0.dbid
                                                                  AND s5t0.dbid            = s0.dbid
                                                                  AND s5t1.dbid            = s0.dbid
                                                                  AND s0.instance_number    = &_instancenumber    -- CHANGE THE INSTANCE_NUMBER HERE!
                                                                  AND s1.instance_number    = s0.instance_number
                                                                  AND s6t0.instance_number = s0.instance_number
                                                                  AND s6t1.instance_number = s0.instance_number
                                                                  AND s5t0.instance_number = s0.instance_number
                                                                  AND s5t1.instance_number = s0.instance_number
                                                                  AND s1.snap_id            = s0.snap_id + 1
                                                                  AND s6t0.snap_id         = s0.snap_id
                                                                  AND s6t1.snap_id         = s0.snap_id + 1
                                                                  AND s5t0.snap_id         = s0.snap_id
                                                                  AND s5t1.snap_id         = s0.snap_id + 1
                                                                  AND s6t0.stat_name       = 'DB CPU'
                                                                  AND s6t1.stat_name       = s6t0.stat_name
                                                                  AND s5t0.stat_name       = 'DB time'
                                                                  AND s5t1.stat_name       = s5t0.stat_name
                                                          ) group by snap_id) accounted_dbtime
                                                            where dbtime.snap_id = accounted_dbtime.snap_id 
                                        )
                    )
              )
      )
WHERE event_rank <= 5
-- AND tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- AND TO_CHAR(tm,'D') >= 1     -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(tm,'D') <= 7
-- AND TO_CHAR(tm,'HH24MI') >= 0900     -- Hour
-- AND TO_CHAR(tm,'HH24MI') <= 1800
-- AND tm >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss')     -- Data range
-- AND tm <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
-- and snap_id = 495
-- and snap_id >= 495 and snap_id <= 496
-- and event = 'db file sequential read'
-- and event like 'CPU%'
-- and avgwt > 5
-- and aas > .5
-- and wait_class = 'CPU'
-- and wait_class like '%I/O%'
-- and event_rank in (1,2,3)
ORDER BY snap_id;
}}}
If you'd like more detail and not only the top 5 SQLs across snap_ids, comment out the following lines
<<<
where 
                          time_rank <= 5
<<<
then put filters such as SQL_ID or AAS after the line
<<<
-- where rownum <= 20
<<<
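For example, to follow a single statement across every snapshot, the tail of the script could be edited like this (a sketch only; the sql_id shown is a made-up placeholder, replace it with your own):

{{{
                          -- where 
                          -- time_rank <= 5       -- commented out so all ranked SQLs are returned
             ...
             )
 where sql_id = '0abcd1efgh2345'                  -- hypothetical sql_id
-- where rownum <= 20
;
}}}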


{{{
set arraysize 5000

COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

-- ttitle center 'AWR Top SQL Report' skip 2
set pagesize 50000
set linesize 550

col snap_id             format 99999            heading -- "Snap|ID"
col tm                  format a15              heading -- "Snap|Start|Time"
col inst                format 90               heading -- "i|n|s|t|#"
col dur                 format 990.00           heading -- "Snap|Dur|(m)"
col sql_id              format a15              heading -- "SQL|ID"
col phv                 format 99999999999      heading -- "Plan|Hash|Value"
col module              format a50
col elap                format 999990.00        heading -- "Ela|Time|(s)"
col elapexec            format 999990.00        heading -- "Ela|Time|per|exec|(s)"
col cput                format 999990.00        heading -- "CPU|Time|(s)"
col iowait              format 999990.00        heading -- "IO|Wait|(s)"
col appwait             format 999990.00        heading -- "App|Wait|(s)"
col concurwait          format 999990.00        heading -- "Ccr|Wait|(s)"
col clwait              format 999990.00        heading -- "Cluster|Wait|(s)"
col bget                format 99999999990      heading -- "LIO"
col dskr                format 99999999990      heading -- "PIO"
col dpath               format 99999999990      heading -- "Direct|Writes"
col rowp                format 99999999990      heading -- "Rows"
col exec                format 9999990          heading -- "Exec"
col prsc                format 999999990        heading -- "Parse|Count"
col pxexec              format 9999990          heading -- "PX|Server|Exec"
col icbytes             format 99999990         heading -- "IC|MB"           
col offloadbytes        format 99999990         heading -- "Offload|MB"
col offloadreturnbytes  format 99999990         heading -- "Offload|return|MB"
col flashcachereads     format 99999990         heading -- "Flash|Cache|MB"   
col uncompbytes         format 99999990         heading -- "Uncomp|MB"       
col pctdbt              format 990              heading -- "DB Time|%"
col aas                 format 990.00           heading -- "A|A|S"
col time_rank           format 90               heading -- "Time|Rank"
col sql_text            format a6               heading -- "SQL|Text"

     select *
       from (
             select
                  trim('&_instname') instname, 
                  trim('&_dbid') db_id, 
                  trim('&_hostname') hostname, 
                  sqt.snap_id snap_id,
                  TO_CHAR(sqt.tm,'MM/DD/YY HH24:MI:SS') tm,
                  sqt.inst inst,
                  sqt.dur dur,
                  sqt.aas aas,
                  nvl((sqt.elap), to_number(null)) elap,
                  nvl((sqt.elapexec), 0) elapexec,
                  nvl((sqt.cput), to_number(null)) cput,
                  sqt.iowait iowait,
                  sqt.appwait appwait,
                  sqt.concurwait concurwait,
                  sqt.clwait clwait,
                  sqt.bget bget, 
                  sqt.dskr dskr, 
                  sqt.dpath dpath,
                  sqt.rowp rowp,
                  sqt.exec exec, 
                  sqt.prsc prsc, 
                  sqt.pxexec pxexec,
                  sqt.icbytes, 
                  sqt.offloadbytes, 
                  sqt.offloadreturnbytes, 
                  sqt.flashcachereads, 
                  sqt.uncompbytes,
                  sqt.time_rank time_rank,
                  sqt.sql_id sql_id,   
                  sqt.phv phv,                
                  substr(to_clob(decode(sqt.module, null, null, sqt.module)),1,50) module, 
                  st.sql_text sql_text     -- PUT/REMOVE COMMENT TO HIDE/SHOW THE SQL_TEXT
             from        (
                          select snap_id, tm, inst, dur, sql_id, phv, module, elap, elapexec, cput, iowait, appwait, concurwait, clwait, bget, dskr, dpath, rowp, exec, prsc, pxexec, icbytes, offloadbytes, offloadreturnbytes, flashcachereads, uncompbytes, aas, time_rank
                          from
                                             (
                                               select 
                                                      s0.snap_id snap_id,
                                                      s0.END_INTERVAL_TIME tm,
                                                      s0.instance_number inst,
                                                      round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
                                                      e.sql_id sql_id, 
                                                      e.plan_hash_value phv, 
                                                      max(e.module) module,
                                                      sum(e.elapsed_time_delta)/1000000 elap,
                                                      decode((sum(e.executions_delta)), 0, to_number(null), ((sum(e.elapsed_time_delta)) / (sum(e.executions_delta)) / 1000000)) elapexec,
                                                      sum(e.cpu_time_delta)/1000000     cput, 
                                                      sum(e.iowait_delta)/1000000 iowait,
                                                      sum(e.apwait_delta)/1000000 appwait,
                                                      sum(e.ccwait_delta)/1000000 concurwait,
                                                      sum(e.clwait_delta)/1000000 clwait,
                                                      sum(e.buffer_gets_delta) bget,
                                                      sum(e.disk_reads_delta) dskr, 
                                                      sum(e.direct_writes_delta) dpath,
                                                      sum(e.rows_processed_delta) rowp,
                                                      sum(e.executions_delta)   exec,
                                                      sum(e.parse_calls_delta) prsc,
                                                      sum(e.px_servers_execs_delta) pxexec,
                                                      sum(e.io_interconnect_bytes_delta)/1024/1024 icbytes,  
                                                      sum(e.io_offload_elig_bytes_delta)/1024/1024 offloadbytes,  
                                                      sum(e.io_offload_return_bytes_delta)/1024/1024 offloadreturnbytes,   
                                                      (sum(e.optimized_physical_reads_delta)* &_blocksize)/1024/1024 flashcachereads,   
                                                      sum(e.cell_uncompressed_bytes_delta)/1024/1024 uncompbytes, 
                                                      (sum(e.elapsed_time_delta)/1000000) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                            + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                            + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                            + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60) aas,
                                                      DENSE_RANK() OVER (
                                                      PARTITION BY s0.snap_id ORDER BY e.elapsed_time_delta DESC) time_rank
                                               from 
                                                   dba_hist_snapshot s0,
                                                   dba_hist_snapshot s1,
                                                   dba_hist_sqlstat e
                                                   where 
                                                    s0.dbid                   = &_dbid                -- CHANGE THE DBID HERE!
                                                    AND s1.dbid               = s0.dbid
                                                    and e.dbid                = s0.dbid                                                
                                                    AND s0.instance_number    = &_instancenumber      -- CHANGE THE INSTANCE_NUMBER HERE!
                                                    AND s1.instance_number    = s0.instance_number
                                                    and e.instance_number     = s0.instance_number                                                 
                                                    AND s1.snap_id            = s0.snap_id + 1
                                                    and e.snap_id             = s0.snap_id + 1                                              
                                               group by 
                                                    s0.snap_id, s0.END_INTERVAL_TIME, s0.instance_number, e.sql_id, e.plan_hash_value, e.elapsed_time_delta, s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME
                                             )
                          where 
                          time_rank <= 5                                     -- GET TOP 5 SQL ACROSS SNAP_IDs... YOU CAN ALTER THIS TO HAVE MORE DATA POINTS
                         ) 
                        sqt,
                        (select sql_id, dbid, nvl(b.name, a.command_type) sql_text from dba_hist_sqltext a, audit_actions b where a.command_type =  b.action(+)) st
             where st.sql_id(+)             = sqt.sql_id
             and st.dbid(+)                 = &_dbid
-- AND TO_CHAR(tm,'D') >= 1                                                  -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(tm,'D') <= 7
-- AND TO_CHAR(tm,'HH24MI') >= 0900                                          -- Hour
-- AND TO_CHAR(tm,'HH24MI') <= 1800
-- AND tm >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss')     -- Data range
-- AND tm <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
-- AND snap_id in (338,339)
-- AND snap_id = 338
-- AND snap_id >= 335 and snap_id <= 339
-- AND lower(st.sql_text) like 'select%'
-- AND lower(st.sql_text) like 'insert%'
-- AND lower(st.sql_text) like 'update%'
-- AND lower(st.sql_text) like 'merge%'
-- AND pxexec > 0
-- AND aas > .5
             order by 
             snap_id                             -- TO GET SQL OUTPUT ACROSS SNAP_IDs SEQUENTIALLY AND ASC
             -- nvl(sqt.elap, -1) desc, sqt.sql_id     -- TO GET SQL OUTPUT BY ELAPSED TIME
             )
-- where rownum <= 20
;

}}}
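A note on the aas column in the script above: it is simply the summed elapsed seconds divided by the snap interval in seconds (dur minutes times 60). As a quick standalone sanity check of that arithmetic (the values here are made up):

{{{
-- 180s of SQL elapsed time inside a 10-minute (600s) snap = 0.30 average active sessions
select round(180 / (10 * 60), 2) aas from dual;
}}}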
http://gavinsoorma.com/2009/07/exporting-and-importing-awr-snapshot-data/

http://dboptimizer.com/2011/11/08/importing-awr-repositories-from-cloned-databases/  <-- this is to change the DBIDs
https://sites.google.com/site/oraclemonitor/dba_hist_active_sess_history#TOC-Force-importing-a-in-AWR   <-- this is how to ''FORCE'' the import of ASH data 


How to Export and Import the AWR Repository From One Database to Another (Doc ID 785730.1)

Transporting Automatic Workload Repository Data to Another System https://docs.oracle.com/en/database/oracle/oracle-database/19/tgdba/gathering-database-statistics.html#GUID-F25470A0-C236-46DE-84F7-D68FBE1B0F12



{{{


###################################
on the source env
###################################

CREATE DIRECTORY AWR_DATA AS '/oracle/app/oracle/awrdata';

@?/rdbms/admin/awrextr.sql


~~~~~~~~~~~~~
AWR EXTRACT
~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~  This script will extract the AWR data for a range of snapshots  ~
~  into a dump file.  The script will prompt users for the         ~
~  following information:                                          ~
~     (1) database id                                              ~
~     (2) snapshot range to extract                                ~
~     (3) name of directory object                                 ~
~     (4) name of dump file                                        ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Databases in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

   DB Id     DB Name      Host
------------ ------------ ------------
* 2607950532 IVRS         dbrocaix01.b
                          ayantel.com


The default database id is the local one: '2607950532'.  To use this
database id, press <return> to continue, otherwise enter an alternative.

Enter value for dbid: 2607950532


Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 235
Begin Snapshot Id specified: 235

Enter value for end_snap: 3333


Specify the Directory Name
~~~~~~~~~~~~~~~~~~~~~~~~~~

Directory Name                 Directory Path
------------------------------ -------------------------------------------------
ADMIN_DIR                      /oracle/app/oracle/product/10.2.0/db_1/md/admin
AWR_DATA                       /oracle/app/oracle/awrdata
DATA_PUMP_DIR                  /flash_reco/flash_recovery_area/IVRS/expdp
DATA_PUMP_LOG                  /home/oracle/logs
SQLT$STAGE                     /oracle/app/oracle/admin/ivrs/udump
SQLT$UDUMP                     /oracle/app/oracle/admin/ivrs/udump
WORK_DIR                       /oracle/app/oracle/product/10.2.0/db_1/work

Choose a Directory Name from the above list (case-sensitive).

Enter value for directory_name: AWR_DATA

Using the dump directory: AWR_DATA

Specify the Name of the Extract Dump File
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The prefix for the default dump file name is awrdat_235_3333.
To use this name, press <return> to continue, otherwise enter
an alternative.

Enter value for file_name: awrexp



###################################
on the target env
###################################

CREATE DIRECTORY AWR_DATA AS '/oracle/app/oracle/awrdata';

@?/rdbms/admin/awrload.sql

-- on target before the load 
-- MIN/MAX for dba_hist tables
select min(snap_id) min_snap_id, max(snap_id) max_snap_id from dba_hist_snapshot;
select to_char(min(end_interval_time),'yyyy-mon-dd hh24:mi:ss') min_date, to_char(max(end_interval_time),'yyyy-mon-dd hh24:mi:ss') max_date from dba_hist_snapshot;

INSTANCE_NUMBER    SNAP_ID STARTUP_TIME         SNAP_START           SNAP_END                ELA_MIN
--------------- ---------- -------------------- -------------------- -------------------- ----------
              1        238 2011-jan-27 08:52:09 2011-jan-27 09:30:31 2011-jan-27 09:40:34      10.05
              1        237 2011-jan-27 08:52:09 2011-jan-27 09:20:28 2011-jan-27 09:30:31      10.04
              1        236 2011-jan-27 08:52:09 2011-jan-27 09:10:26 2011-jan-27 09:20:28      10.04
              1        235 2011-jan-27 08:52:09 2011-jan-27 09:03:24 2011-jan-27 09:10:26       7.03
              1        234 2009-dec-15 13:41:20 2009-dec-15 14:00:32 2011-jan-27 09:03:24  587222.87
              1        233 2009-dec-15 12:08:35 2009-dec-15 13:00:49 2009-dec-15 14:00:32      59.72
              1        232 2009-dec-15 12:08:35 2009-dec-15 12:19:42 2009-dec-15 13:00:49      41.12
              1        231 2009-dec-15 07:58:35 2009-dec-15 08:09:41 2009-dec-15 12:19:42     250.01
              1        230 2009-dec-14 23:35:11 2009-dec-14 23:46:20 2009-dec-15 08:09:41     503.35
              1        229 2009-dec-10 11:27:30 2009-dec-11 04:00:38 2009-dec-14 23:46:20     5505.7

10 rows selected.

sys@IVRS> sys@IVRS> sys@IVRS>
MIN_SNAP_ID MAX_SNAP_ID
----------- -----------
        213         239

sys@IVRS>
MIN_DATE             MAX_DATE
-------------------- --------------------
2009-dec-10 11:38:56 2011-jan-27 09:40:34



~~~~~~~~~~
AWR LOAD
~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~  This script will load the AWR data from a dump file. The   ~
~  script will prompt users for the following information:    ~
~     (1) name of directory object                            ~
~     (2) name of dump file                                   ~
~     (3) staging schema name to load AWR data into           ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Specify the Directory Name
~~~~~~~~~~~~~~~~~~~~~~~~~~

Directory Name                 Directory Path
------------------------------ -------------------------------------------------
ADMIN_DIR                      /oracle/app/oracle/product/10.2.0/db_1/md/admin
AWR_DATA                       /oracle/app/oracle/awrdata
DATA_PUMP_DIR                  /flash_reco/flash_recovery_area/IVRS/expdp
DATA_PUMP_LOG                  /home/oracle/logs
SQLT$STAGE                     /oracle/app/oracle/admin/ivrs/udump
SQLT$UDUMP                     /oracle/app/oracle/admin/ivrs/udump
WORK_DIR                       /oracle/app/oracle/product/10.2.0/db_1/work

Choose a Directory Name from the list above (case-sensitive).

Enter value for directory_name: AWR_DATA

Using the dump directory: AWR_DATA

Specify the Name of the Dump File to Load
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Please specify the prefix of the dump file (.dmp) to load:

Enter value for file_name: awrexp


Enter value for schema_name:

Using the staging schema name: AWR_STAGE

Choose the Default tablespace for the AWR_STAGE user
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Choose the AWR_STAGE users's default tablespace.  This is the
tablespace in which the AWR data will be staged.

TABLESPACE_NAME                CONTENTS  DEFAULT TABLESPACE
------------------------------ --------- ------------------
CCDATA                         PERMANENT
CCINDEX                        PERMANENT
PSE                            PERMANENT
SOE                            PERMANENT
SOEINDEX                       PERMANENT
SYSAUX                         PERMANENT *
TPCCTAB                        PERMANENT
TPCHTAB                        PERMANENT
USERS                          PERMANENT

Pressing <return> will result in the recommended default
tablespace (identified by *) being used.

Enter value for default_tablespace:


Using tablespace SYSAUX as the default tablespace for the AWR_STAGE


Choose the Temporary tablespace for the AWR_STAGE user
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Choose the AWR_STAGE user's temporary tablespace.

TABLESPACE_NAME                CONTENTS  DEFAULT TEMP TABLESPACE
------------------------------ --------- -----------------------
TEMP                           TEMPORARY *

Pressing <return> will result in the database's default temporary
tablespace (identified by *) being used.

Enter value for temporary_tablespace:




Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at 12:46:07
begin
*
ERROR at line 1:
ORA-20105: unable to move AWR data to SYS
ORA-06512: at "SYS.DBMS_SWRF_INTERNAL", line 1760
ORA-20107: not allowed to move AWR data for local dbid
ORA-06512: at line 3


... Dropping AWR_STAGE user

End of AWR Load
}}}
-- from http://www.perfvision.com/statspack/awr.txt

{{{
WORKLOAD REPOSITORY report for
DB Name         DB Id    Instance     Inst Num Release     RAC Host
              Snap Id      Snap Time      Sessions Curs/Sess
Cache Sizes
Load Profile
Instance Efficiency Percentages (Target 100%)
Top 5 Timed Events                                         Avg %Total
Time Model Statistics
Wait Class
Wait Events
Background Wait Events
Operating System Statistics
Service Statistics
Service Wait Class Stats
SQL ordered by Elapsed Time
SQL ordered by CPU Time
SQL ordered by Gets
SQL ordered by Reads
SQL ordered by Executions
SQL ordered by Parse Calls
SQL ordered by Sharable Memory
SQL ordered by Version Count
Instance Activity Stats
Instance Activity Stats - Absolute Values
Instance Activity Stats - Thread Activity
Tablespace IO Stats
File IO Stats
Buffer Pool Statistics
Instance Recovery Stats
Buffer Pool Advisory
PGA Aggr Summary
PGA Aggr Target Histogram
PGA Memory Advisory
Shared Pool Advisory
SGA Target Advisory
Streams Pool Advisory
Java Pool Advisory
Buffer Wait Statistics
Enqueue Activity
Undo Segment Summary
Latch Activity
Latch Sleep Breakdown
Latch Miss Sources
Parent Latch Statistics
Segments by Logical Reads
Segments by Physical Reads
Segments by Row Lock Waits
Segments by ITL Waits
Segments by Buffer Busy Waits
Dictionary Cache Stats
Library Cache Activity
Process Memory Summary
SGA Memory Summary
SGA regions                     Begin Size (Bytes)      (if different)
SGA breakdown difference
Streams CPU/IO Usage
Streams Capture
Streams Apply
Buffered Queues
Buffered Subscribers
Rule Set
Resource Limit Stats
init.ora Parameters

}}}
{{{
WORKLOAD REPOSITORY report for
DB Name         DB Id    Instance     Inst Num Release     RAC Host
              Snap Id      Snap Time      Sessions Curs/Sess
Cache Sizes
Load Profile
Instance Efficiency Percentages (Target 100%)
Top 5 Timed Events     Avg wait %Total Call
Time Model Statistics
Wait Class
Wait Events
Background Wait Events
Operating System Statistics
Service Statistics
Service Wait Class Stats
SQL ordered by Elapsed Time
SQL ordered by CPU Time
SQL ordered by Gets
SQL ordered by Reads
SQL ordered by Executions
SQL ordered by Parse Calls
SQL ordered by Sharable Memory
SQL ordered by Version Count
Instance Activity Stats
Instance Activity Stats - Absolute Values
Instance Activity Stats - Thread Activity
Tablespace IO Stats
File IO Stats
Buffer Pool Statistics
Instance Recovery Stats
Buffer Pool Advisory
PGA Aggr Summary
PGA Aggr Target Stats     <-- new in 10.2.0.3
PGA Aggr Target Histogram
PGA Memory Advisory
Shared Pool Advisory
SGA Target Advisory
Streams Pool Advisory
Java Pool Advisory
Buffer Wait Statistics
Enqueue Activity
Undo Segment Summary
Undo Segment Stats     <-- new in 10.2.0.3
Latch Activity
Latch Sleep Breakdown
Latch Miss Sources
Parent Latch Statistics
Child Latch Statistics     <-- new in 10.2.0.3
Segments by Logical Reads
Segments by Physical Reads
Segments by Row Lock Waits
Segments by ITL Waits
Segments by Buffer Busy Waits
Dictionary Cache Stats
Library Cache Activity
Process Memory Summary
SGA Memory Summary
SGA breakdown difference
Streams CPU/IO Usage
Streams Capture
Streams Apply
Buffered Queues
Buffered Subscribers
Rule Set
Resource Limit Stats
init.ora Parameters
}}}
-- from http://www.perfvision.com/statspack/awrrpt_1_122_123.txt


{{{
WORKLOAD REPOSITORY report for

DB Name         DB Id    Instance     Inst Num Release     RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
CDB10         1193559071 cdb10               1 10.2.0.1.0  NO  tsukuba

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:       122 31-Jul-07 17:00:40        36      24.9
  End Snap:       123 31-Jul-07 18:00:56        37      25.0
   Elapsed:               60.26 (mins)
   DB Time:               89.57 (mins)

Cache Sizes
~~~~~~~~~~~                       Begin        End
                             ---------- ----------
               Buffer Cache:        28M        28M  Std Block Size:         8K
           Shared Pool Size:       128M       128M      Log Buffer:     6,256K

Load Profile
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:            404,585.37            714,975.12
              Logical reads:              8,318.76             14,700.74
              Block changes:              2,744.42              4,849.89
             Physical reads:                111.18                196.48
            Physical writes:                 48.07                 84.96
                 User calls:                154.96                273.84
                     Parses:                  3.17                  5.60
                Hard parses:                  0.07                  0.13
                      Sorts:                  9.07                 16.04
                     Logons:                  0.05                  0.09
                   Executes:                150.07                265.20
               Transactions:                  0.57

  % Blocks changed per Read:   32.99    Recursive Call %:    16.44
 Rollback per transaction %:   21.11       Rows per Sort:    57.60

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:   99.98
            Buffer  Hit   %:   98.70    In-memory Sort %:  100.00
            Library Hit   %:   99.94        Soft Parse %:   97.71
         Execute to Parse %:   97.89         Latch Hit %:  100.00
Parse CPU to Parse Elapsd %:    3.60     % Non-Parse CPU:   99.62

 Shared Pool Statistics        Begin    End
                              ------  ------
             Memory Usage %:   91.89   91.86
    % SQL with executions>1:   75.28   73.08
  % Memory for SQL w/exec>1:   73.58   70.06

Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
log file parallel write               2,819       2,037    723   37.9 System I/O
db file parallel write               32,625       1,949     60   36.3 System I/O
db file sequential read             268,447       1,761      7   32.8   User I/O
log file sync                         1,850       1,117    604   20.8     Commit
log buffer space                      1,189         866    728   16.1 Configurat
          -------------------------------------------------------------
Time Model Statistics                    DB/Inst: CDB10/cdb10  Snaps: 122-123
-> Total time in database user-calls (DB Time): 5374.1s
-> Statistics including the word "background" measure background process
   time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name

Statistic Name                                       Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time                              4,409.2         82.0
DB CPU                                                  488.2          9.1
parse time elapsed                                       48.5           .9
hard parse elapsed time                                  45.8           .9
PL/SQL execution elapsed time                            24.0           .4
sequence load elapsed time                                6.1           .1
connection management call elapsed time                   3.6           .1
failed parse elapsed time                                 0.8           .0
hard parse (sharing criteria) elapsed time                0.1           .0
repeated bind elapsed time                                0.0           .0
DB time                                               5,374.1          N/A
background elapsed time                               4,199.3          N/A
background cpu time                                      76.0          N/A
          -------------------------------------------------------------

Wait Class                                DB/Inst: CDB10/cdb10  Snaps: 122-123
-> s  - second
-> cs - centisecond -     100th of a second
-> ms - millisecond -    1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc

                                                                  Avg
                                       %Time       Total Wait    wait     Waits
Wait Class                      Waits  -outs         Time (s)    (ms)      /txn
-------------------- ---------------- ------ ---------------- ------- ---------
System I/O                     63,959     .0            4,080      64      31.3
User I/O                      286,652     .0            2,337       8     140.1
Commit                          1,850   47.2            1,117     604       0.9
Configuration                   4,319   79.1            1,081     250       2.1
Concurrency                       211   14.7               64     301       0.1
Application                     1,432     .3               29      21       0.7
Network                       566,962     .0               20       0     277.1
Other                             499    1.2                9      19       0.2
          -------------------------------------------------------------

Wait Events                              DB/Inst: CDB10/cdb10  Snaps: 122-123
-> s  - second
-> cs - centisecond -     100th of a second
-> ms - millisecond -    1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)

                                                                   Avg
                                             %Time  Total Wait    wait     Waits
Event                                 Waits  -outs    Time (s)    (ms)      /txn
---------------------------- -------------- ------ ----------- ------- ---------
log file parallel write               2,819     .0       2,037     723       1.4
db file parallel write               32,625     .0       1,949      60      15.9
db file sequential read             268,447     .0       1,761       7     131.2
log file sync                         1,850   47.2       1,117     604       0.9
log buffer space                      1,189   51.9         866     728       0.6
db file scattered read               16,589     .0         449      27       8.1
log file switch completion              182   35.2         109     597       0.1
control file parallel write           2,134     .0          87      41       1.0
direct path write temp                  415     .0          78     188       0.2
log file switch (checkpoint             120   24.2          53     444       0.1
buffer busy waits                       155   18.1          49     315       0.1
free buffer waits                     2,387   95.0          43      18       1.2
enq: RO - fast object reuse              60    6.7          23     379       0.0
SQL*Net more data to dblink           1,723     .0          19      11       0.8
direct path read temp                   350     .0          16      46       0.2
local write wait                        164    1.8          15      90       0.1
direct path write                       304     .0          13      42       0.1
write complete waits                     11   90.9          10     923       0.0
latch: In memory undo latch               5     .0           8    1592       0.0
os thread startup                        40    7.5           7     171       0.0
enq: CF - contention                     25     .0           7     272       0.0
SQL*Net break/reset to clien          1,372     .0           7       5       0.7
control file sequential read         26,253     .0           5       0      12.8
db file parallel read                   149     .0           4      29       0.1
direct path read                        233     .0           1       6       0.1
latch: cache buffers lru cha             10     .0           1     132       0.0
latch: object queue header o              2     .0           1     460       0.0
SQL*Net message to client           557,769     .0           1       0     272.6
log file single write                    64     .0           1      13       0.0
SQL*Net more data to client           1,806     .0           0       0       0.9
LGWR wait for redo copy                 125    4.8           0       1       0.1
rdbms ipc reply                         298     .0           0       0       0.1
SQL*Net more data from clien             93     .0           0       1       0.0
latch free                                2     .0           0      17       0.0
latch: redo allocation                    1     .0           0      21       0.0
latch: shared pool                        2     .0           0      10       0.0
log file sequential read                 64     .0           0       0       0.0
reliable message                         36     .0           0       1       0.0
read by other session                     1     .0           0      15       0.0
SQL*Net message to dblink             5,565     .0           0       0       2.7
latch: library cache                      4     .0           0       1       0.0
undo segment extension                  430   99.3           0       0       0.2
latch: cache buffers chains               4     .0           0       0       0.0
latch: library cache pin                  1     .0           0       0       0.0
SQL*Net more data from dblin              6     .0           0       0       0.0
SQL*Net message from client         557,767     .0      51,335      92     272.6
Streams AQ: waiting for time             50   40.0       3,796   75924       0.0
wait for unread message on b          3,588   99.5       3,522     982       1.8
Streams AQ: qmn slave idle w            128     .0       3,520   27498       0.1
Streams AQ: qmn coordinator             275   53.5       3,520   12799       0.1
virtual circuit status                  120  100.0       3,503   29191       0.1
Streams AQ: waiting for mess            725   97.7       3,498    4825       0.4
jobq slave wait                       1,133   97.5       3,284    2898       0.6
PL/SQL lock timer                       977   99.9       2,862    2929       0.5
SQL*Net message from dblink           5,566     .0         540      97       2.7
class slave wait                          2  100.0          10    4892       0.0
single-task message                       2     .0           0     103       0.0
          -------------------------------------------------------------
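The derived columns in the Wait Events table above can be sanity-checked from the raw ones. A minimal sketch (values copied from the table; the transaction count is my own back-calculation, not printed anywhere in the report):

```python
# Sanity-check the report's derived columns from its raw columns.
# All numbers are taken from the Wait Events table above.

def avg_wait_ms(total_wait_s: float, waits: int) -> int:
    """Avg wait (ms) = total wait time in seconds * 1000 / number of waits."""
    return round(total_wait_s * 1000 / waits)

# log file parallel write: 2,037 s over 2,819 waits
print(avg_wait_ms(2037, 2819))   # 723 ms, as reported
# log file sync: 1,117 s over 1,850 waits
print(avg_wait_ms(1117, 1850))   # 604 ms, as reported

# Waits/txn lets us back out the transaction count for the interval:
# SQL*Net message to client: 557,769 waits at 272.6 waits/txn
print(round(557769 / 272.6))     # ~2,046 transactions in the snapshot window
```

The same ratio holds for any row, so a handful of spot checks is enough to confirm the columns are internally consistent.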

Background Wait Events                   DB/Inst: CDB10/cdb10  Snaps: 122-123
-> ordered by wait time desc, waits desc (idle events last)

                                                                   Avg
                                             %Time  Total Wait    wait     Waits
Event                                 Waits  -outs    Time (s)    (ms)      /txn
---------------------------- -------------- ------ ----------- ------- ---------
log file parallel write               2,820     .0       2,037     722       1.4
db file parallel write               32,625     .0       1,949      60      15.9
control file parallel write           2,134     .0          87      41       1.0
direct path write                       231     .0          13      55       0.1
db file sequential read                 935     .0          12      13       0.5
log buffer space                         13   53.8          10     791       0.0
events in waitclass Other               415    1.4           8      19       0.2
os thread startup                        40    7.5           7     171       0.0
db file scattered read                  115     .0           3      27       0.1
log file sync                             3   66.7           2     828       0.0
direct path read                        231     .0           1       6       0.1
buffer busy waits                        21     .0           1      63       0.0
control file sequential read          2,550     .0           1       0       1.2
log file single write                    64     .0           1      13       0.0
log file sequential read                 64     .0           0       0       0.0
latch: shared pool                        1     .0           0       7       0.0
latch: library cache                      2     .0           0       1       0.0
latch: cache buffers chains               1     .0           0       0       0.0
rdbms ipc message                    13,865   72.8      27,604    1991       6.8
Streams AQ: waiting for time             50   40.0       3,796   75924       0.0
pmon timer                            1,272   98.6       3,526    2772       0.6
Streams AQ: qmn slave idle w            128     .0       3,520   27498       0.1
Streams AQ: qmn coordinator             275   53.5       3,520   12799       0.1
smon timer                              178    3.4       3,360   18875       0.1
          -------------------------------------------------------------

Operating System Statistics               DB/Inst: CDB10/cdb10  Snaps: 122-123

Statistic                                       Total
-------------------------------- --------------------
AVG_BUSY_TIME                                 204,954
AVG_IDLE_TIME                                 155,940
AVG_IOWAIT_TIME                                     0
AVG_SYS_TIME                                   15,979
AVG_USER_TIME                                 188,638
BUSY_TIME                                     410,601
IDLE_TIME                                     312,370
IOWAIT_TIME                                         0
SYS_TIME                                       32,591
USER_TIME                                     378,010
LOAD                                                1
OS_CPU_WAIT_TIME                              228,200
RSRC_MGR_CPU_WAIT_TIME                              0
VM_IN_BYTES                               338,665,472
VM_OUT_BYTES                              397,410,304
PHYSICAL_MEMORY_BYTES                   6,388,301,824
NUM_CPUS                                            2
          -------------------------------------------------------------
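The OS statistics above are cumulative counters in centiseconds, summed across CPUs (standard for V$OSSTAT on 10g). Assuming that, host CPU utilization and the snapshot interval fall out directly:

```python
busy_cs = 410_601   # BUSY_TIME (centiseconds, summed across both CPUs)
idle_cs = 312_370   # IDLE_TIME
num_cpus = 2        # NUM_CPUS

utilization = busy_cs / (busy_cs + idle_cs)
print(f"{utilization:.1%}")              # ~56.8% host CPU busy

# Wall-clock time implied by the counters:
elapsed_s = (busy_cs + idle_cs) / num_cpus / 100
print(round(elapsed_s / 60))             # ~60-minute snapshot interval
```

The ~60-minute result is a useful cross-check that the centisecond assumption is right: it matches a one-hour gap between snaps 122 and 123.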

Service Statistics                       DB/Inst: CDB10/cdb10  Snaps: 122-123
-> ordered by DB Time

                                                             Physical    Logical
Service Name                      DB Time (s)   DB CPU (s)      Reads      Reads
-------------------------------- ------------ ------------ ---------- ----------
SYS$USERS                             4,666.5        429.9    348,141 ##########
cdb10                                   701.4         58.1     51,046    224,419
SYS$BACKGROUND                            0.0          0.0      2,830     18,255
cdb10XDB                                  0.0          0.0          0          0
          -------------------------------------------------------------

Service Wait Class Stats                  DB/Inst: CDB10/cdb10  Snaps: 122-123
-> Wait Class info for services in the Service Statistics section.
-> Total Waits and Time Waited displayed for the following wait
   classes:  User I/O, Concurrency, Administrative, Network
-> Time Waited (Wt Time) in centisecond (100th of a second)

Service Name
----------------------------------------------------------------
 User I/O  User I/O  Concurcy  Concurcy     Admin     Admin   Network   Network
Total Wts   Wt Time Total Wts   Wt Time Total Wts   Wt Time Total Wts   Wt Time
--------- --------- --------- --------- --------- --------- --------- ---------
SYS$USERS
   271425    210890        65       602         0         0    532492      1979
cdb10
    12969     18550        81      4945         0         0     34068        15
SYS$BACKGROUND
     2261      4306        65       815         0         0         0         0
          -------------------------------------------------------------
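As the section notes say, Wt Time here is in centiseconds, so the per-service figures should roll up to the Wait Class totals at the top of the report. A quick check for User I/O:

```python
# User I/O wait time summed across the three services, in centiseconds:
# SYS$USERS + cdb10 + SYS$BACKGROUND
user_io_cs = 210_890 + 18_550 + 4_306
print(round(user_io_cs / 100))   # 2337 s -- agrees with the Wait Class
                                 # table's User I/O total of 2,337 s
```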

SQL ordered by Elapsed Time              DB/Inst: CDB10/cdb10  Snaps: 122-123
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
   into the Total Database Time multiplied by 100

  Elapsed      CPU                  Elap per  % Total
  Time (s)   Time (s)  Executions   Exec (s)  DB Time    SQL Id
---------- ---------- ------------ ---------- ------- -------------
       797        134            1      796.6    14.8 f1qcyh20550cf
Call CALC_QOS_SLOW(:1, :2, :3, :4)

       773         58            1      773.2    14.4 fj6gjgsshtxyx
Call CALC_DELETE_OLD_DATA(:1)

       354         25            1      354.3     6.6 0cjsxw5ndqdbc
Call CALC_HFC_SLOW(:1, :2, :3, :4)

       275         29            1      275.3     5.1 8t8as9usk11qw
Call CALC_TOPOLOGY_SLOW(:1, :2, :3, :4)

       202          4            4       50.5     3.8 dr1rkrznhh95b
Call CALC_TOPOLOGY_MEDIUM(:1, :2, :3, :4)

       158         16            0        N/A     2.9 10dkqv3kr8xa5
 SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, CM_ID, MA
X(SUBSTR(CM_DESC, 1, 12)) CM_DESC, MAX(UP_ID) UP_ID, MA
X(DOWN_ID) DOWN_ID, MAX(MAC_ID) MAC_ID, MAX(CMTS_
ID) CMTS_ID, SUM(BYTES_UP) SUM_BYTES_UP, SUM(BY

       139          7            1      139.2     2.6 38zhkf4jdyff4
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN ash.collect(3,1200); :mydate := next_date; IF broken THEN :b := 1
; ELSE :b := 0; END IF; END;

       137         72            1      136.8     2.5 298wmz1kxjs1m
INSERT INTO CM_QOS_PROF SELECT :B1 , R.TOPOLOGYID, :B1 - :B4 , P.NODE_PROFILE_ID
, R.DOCSIFCMTSSERVICEQOSPROFILE FROM CM_SID_RAWDATA R, ( SELECT DISTINCT T.CMID,
 P.QOS_PROF_IDX, P.NODE_PROFILE_ID FROM TMP_TOP_SLOW_CM T, CMTS_QOS_PROF P WHERE
 T.CMTSID = P.TOPOLOGYID AND P.SECONDID = :B1 ) P WHERE R.BATCHID = :B3 AND R.PR

       130          9            1      130.5     2.4 6n0d6cv6w6krs
DELETE FROM CM_VA WHERE SECONDID <= :B1

       130          9            1      130.0     2.4 86m0m9q8fw9bj
DELETE FROM CM_QOS_PROF WHERE SECONDID <= :B1

       126          3            1      125.6     2.3 33bpz9dh1w5jk
Module: Lab128
--lab128 select /*+rule*/ owner, segment_name||decode(partition_name,null,nul
l,' ('||partition_name||')') name, segment_type,tablespace_name, extent_id,f
ile_id,block_id, blocks,bytes/1048576 bytes from dba_extents

       124          9            1      124.5     2.3 gyqv6h5pft4mj
DELETE FROM CM_BYTES WHERE SECONDID <= :B1

       121          2           56        2.2     2.3 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;

       120          2            4       30.0     2.2 4zjg6w4mwu0wv
INSERT INTO TMP_TOP_MED_DN SELECT M.CMTSID, M.VENDOR_DESC, M.MODEL_DESC, MAC_L.T
OPOLOGYID, DOWN_L.TOPOLOGYID, M.UP_SNR_CNR_A3, M.UP_SNR_CNR_A2, M.UP_SNR_CNR_A1,
 M.UP_SNR_CNR_A0, M.MAC_SLOTS_OPEN, M.MAC_SLOTS_USED, M.CMTS_REBOOT, 0 FROM TMP_
TOP_MED_CMTS M, TOPOLOGY_LINK DOWN_L, TOPOLOGY_NODE DOWN_N, TOPOLOGY_LINK MAC_L

       119          9            1      119.1     2.2 aywfs0n7wwwhn
DELETE FROM CM_POWER_2 WHERE SECONDID <= :B1

       117          9            1      117.4     2.2 0fnnktt50m86h
DELETE FROM CM_ERRORS WHERE SECONDID <= :B1

       116          1          977        0.1     2.1 5jh6zfmvpu77f
UPDATE ASH.DBIDS@REPO SET ASHSEQ = :B2 WHERE DBID = :B1

       108          9            1      107.5     2.0 21jqxqyf80cn8
DELETE FROM CM_POWER_1 WHERE SECONDID <= :B1

       107         11            1      107.0     2.0 87gy6mxtk7f3z
DELETE FROM CM_POLL_STATUS WHERE TOPOLOGYID IN ( SELECT DISTINCT TOPOLOGYID FROM
 CM_RAWDATA WHERE BATCHID = :B1 )

        96          6            1       95.9     1.8 2r6jnnf1hzb4z
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE_CMS, BITSPERSYM
BOL, TXPOWER_UP FROM CM_POWER_2 power, TOPOLOGY_LINK link, UPSTREAM_CHANNEL chan
nel WHERE power.SECONDID = :1 AND link.TOPOLOGYID = power.TOPOLOGYID AND link.PA
RENTLEN = 1 AND link.STATEID = 1 AND link.LINKTYPEID = 1 AND link.PARENTID = cha

        95          1            1       95.1     1.8 1qp1yn30gajjw
 SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, M.
TOPOLOGYID UP_ID, T.UP_DESC UP_DESC, T.MAC_ID
 MAC_ID, T.CMTS_ID CMTS_ID, M.MAX_PERCENT_UTI
L, M.MAX_PACKETS_PER_SEC, M.AVG_PACKET_SIZE,

        94          5            1       93.9     1.7 fxvdq915s3qpt
DELETE FROM TMP_CALC_HFC_SLOW_CM_LAST

        87          4            1       86.9     1.6 axyukfdx12pu4
Call CALC_DELETE_SLOW_RAWDATA(:1, :2)

        85          9            1       84.6     1.6 998t5bbdfm5rm
INSERT INTO CM_RAWDATA SELECT PROFINDX, 0 BATCHID, TOPOLOGYID, SAMPLETIME, SYSUP
TIME, DOCSIFCMTSCMSTATUSVALUE, DOCSIFCMTSSERVICEINOCTETS, DOCSIFCMTSSERVICEOUTOC
TETS, DOCSIFCMSTATUSTXPOWER, DOCSIFCMTSCMSTATUSRXPOWER, DOCSIFDOWNCHANNELPOWER,
DOCSIFSIGQUNERROREDS, DOCSIFSIGQCORRECTEDS, DOCSIFSIGQUNCORRECTABLES, DOCSIFSIGQ

        84          5            1       83.8     1.6 3a11s4c86wdu5
DELETE FROM CM_RAWDATA WHERE BATCHID = 0 AND PROFINDX = :B1

        77         22      150,832        0.0     1.4 5zm9acqtd51h7
insert into cm_sid_rawdata (profindx, batchid, topologyid, sid, sampletime, docs
IfCmtsServiceQosProfile) values (:1, :2, :3, :4, :5, :6)

        74          9            1       73.6     1.4 3whpusvtv0qq1
INSERT INTO TMP_CALC_QOS_SLOW_CM_TMP SELECT T.CMTSID, T.DOWNID, T.UPID, T.CMID,
GREATEST(T.CMTS_REBOOT, T.UP_REBOOT), GREATEST(T.CMTS_REBOOT, T.UP_REBOOT), R.DO
CSIFCMTSSERVICEINOCTETS, R.DOCSIFCMTSSERVICEOUTOCTETS, S.SID, L.PREV_SECONDID, L
.PREV_IFINOCTETS, L.PREV_IFOUTOCTETS, L.PREV_SID FROM TMP_TOP_SLOW_CM T, CM_RAWD

        74          8            1       73.5     1.4 9h99br1t3qq3a
INSERT INTO TMP_CALC_HFC_SLOW_CM_LAST SELECT * FROM TMP_CALC_HFC_SLOW_CM_LAST_TM
P

        72          7            1       72.0     1.3 4qunm1qbf8cyk
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE_CMS, CHANNELWID
TH, RXPOWER_UP, RXPOWER UPSTREAM_AVG_RX FROM CM_POWER_1 power, TOPOLOGY_LINK lin
k, UPSTREAM_CHANNEL channel, UPSTREAM_POWER_1 upstream_rx WHERE power.SECONDID =
 :1 and power.SECONDID = upstream_rx.secondid AND link.TOPOLOGYID = power.TOPOLO

        68          3            1       68.4     1.3 bzmccctnyjb3z
INSERT INTO DOWNSTREAM_ERRORS SELECT T2.SECONDID, T1.DOWNID, ROUND(AVG(T2.SAMPLE
_LENGTH), 0), ROUND(AVG(DECODE(T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRECTABLES
,0,0, T2.UNCORRECTABLES / ( T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRECTABLES )
* 100)) ,2) AVG_CER, ROUND(AVG(DECODE(T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRE

        64          7            1       63.6     1.2 fqcwt6uak8x3w
INSERT INTO TMP_CALC_QOS_SLOW_CM_LAST SELECT * FROM TMP_CALC_QOS_SLOW_CM_LAST_TM
P

        59          6            1       58.8     1.1 fd6a0p6333g8z
 SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, CM_ID, MA
X(SUBSTR(CM_DESC, 1, 12)) CM_DESC, MAX(UP_ID) UP_ID, MA
X(DOWN_ID) DOWN_ID, MAX(MAC_ID) MAC_ID, MAX(CMTS_
ID) CMTS_ID, SUM(BYTES_UP) SUM_BYTES_UP, SUM(BY

          -------------------------------------------------------------
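The section's Total Database Time is not printed in this excerpt, but per the "% Total DB Time" definition it can be backed out from any row and then used to verify the others. A sketch, assuming the top row's figures:

```python
# f1qcyh20550cf: 796.6 s elapsed reported as 14.8% of total DB time.
total_db_time_s = 796.6 / (14.8 / 100)
print(round(total_db_time_s))            # ~5,380 s of DB time in the interval

# Verify a second row against it (fj6gjgsshtxyx, reported at 14.4%):
pct = 773.2 / total_db_time_s * 100
assert abs(pct - 14.4) < 0.1
```

The result also squares with the Service Statistics section, where DB Time sums to roughly 5,368 s (4,666.5 + 701.4), within rounding of the percentages.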

SQL ordered by CPU Time                  DB/Inst: CDB10/cdb10  Snaps: 122-123
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
   into the Total Database Time multiplied by 100

    CPU      Elapsed                  CPU per  % Total
  Time (s)   Time (s)  Executions     Exec (s) DB Time    SQL Id
---------- ---------- ------------ ----------- ------- -------------
       134        797            1      133.81    14.8 f1qcyh20550cf
Call CALC_QOS_SLOW(:1, :2, :3, :4)

        72        137            1       71.96     2.5 298wmz1kxjs1m
INSERT INTO CM_QOS_PROF SELECT :B1 , R.TOPOLOGYID, :B1 - :B4 , P.NODE_PROFILE_ID
, R.DOCSIFCMTSSERVICEQOSPROFILE FROM CM_SID_RAWDATA R, ( SELECT DISTINCT T.CMID,
 P.QOS_PROF_IDX, P.NODE_PROFILE_ID FROM TMP_TOP_SLOW_CM T, CMTS_QOS_PROF P WHERE
 T.CMTSID = P.TOPOLOGYID AND P.SECONDID = :B1 ) P WHERE R.BATCHID = :B3 AND R.PR

        58        773            1       57.60    14.4 fj6gjgsshtxyx
Call CALC_DELETE_OLD_DATA(:1)

        29        275            1       29.25     5.1 8t8as9usk11qw
Call CALC_TOPOLOGY_SLOW(:1, :2, :3, :4)

        25        354            1       24.50     6.6 0cjsxw5ndqdbc
Call CALC_HFC_SLOW(:1, :2, :3, :4)

        22         77      150,832        0.00     1.4 5zm9acqtd51h7
insert into cm_sid_rawdata (profindx, batchid, topologyid, sid, sampletime, docs
IfCmtsServiceQosProfile) values (:1, :2, :3, :4, :5, :6)

        19         52      150,324        0.00     1.0 6xz6vg8q1zygu
insert into cm_rawdata (profindx, batchid, topologyid, sampletime, docsifcmtscms
tatusvalue, docsifcmtsserviceinoctets, docsifcmtsserviceoutoctets, docsifcmtscms
tatusrxpower, cmtscm_unerr, cmtscm_corr, cmtscm_uncorr, cmtscm_snr, cmtscm_timin
goffset) values (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13)

        18         40      150,259        0.00     0.7 c2a2g4fqnm25h
insert into cm_rawdata (profindx, batchid, topologyid, sampletime, sysuptime, do
csifcmstatustxpower, docsifdownchannelpower, docsifsigqunerroreds, docsifsigqcor
recteds, docsifsigquncorrectables, docsifsigqsignalnoise, sysobjectid) values (:
1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12)

        16        158            0         N/A     2.9 10dkqv3kr8xa5
 SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, CM_ID, MA
X(SUBSTR(CM_DESC, 1, 12)) CM_DESC, MAX(UP_ID) UP_ID, MA
X(DOWN_ID) DOWN_ID, MAX(MAC_ID) MAC_ID, MAX(CMTS_
ID) CMTS_ID, SUM(BYTES_UP) SUM_BYTES_UP, SUM(BY

        11        107            1       10.68     2.0 87gy6mxtk7f3z
DELETE FROM CM_POLL_STATUS WHERE TOPOLOGYID IN ( SELECT DISTINCT TOPOLOGYID FROM
 CM_RAWDATA WHERE BATCHID = :B1 )

         9        130            1        9.26     2.4 86m0m9q8fw9bj
DELETE FROM CM_QOS_PROF WHERE SECONDID <= :B1

         9        130            1        9.03     2.4 6n0d6cv6w6krs
DELETE FROM CM_VA WHERE SECONDID <= :B1

         9        108            1        9.01     2.0 21jqxqyf80cn8
DELETE FROM CM_POWER_1 WHERE SECONDID <= :B1

         9         74            1        8.99     1.4 3whpusvtv0qq1
INSERT INTO TMP_CALC_QOS_SLOW_CM_TMP SELECT T.CMTSID, T.DOWNID, T.UPID, T.CMID,
GREATEST(T.CMTS_REBOOT, T.UP_REBOOT), GREATEST(T.CMTS_REBOOT, T.UP_REBOOT), R.DO
CSIFCMTSSERVICEINOCTETS, R.DOCSIFCMTSSERVICEOUTOCTETS, S.SID, L.PREV_SECONDID, L
.PREV_IFINOCTETS, L.PREV_IFOUTOCTETS, L.PREV_SID FROM TMP_TOP_SLOW_CM T, CM_RAWD

         9        117            1        8.96     2.2 0fnnktt50m86h
DELETE FROM CM_ERRORS WHERE SECONDID <= :B1

         9        124            1        8.88     2.3 gyqv6h5pft4mj
DELETE FROM CM_BYTES WHERE SECONDID <= :B1

         9        119            1        8.87     2.2 aywfs0n7wwwhn
DELETE FROM CM_POWER_2 WHERE SECONDID <= :B1

         9         85            1        8.52     1.6 998t5bbdfm5rm
INSERT INTO CM_RAWDATA SELECT PROFINDX, 0 BATCHID, TOPOLOGYID, SAMPLETIME, SYSUP
TIME, DOCSIFCMTSCMSTATUSVALUE, DOCSIFCMTSSERVICEINOCTETS, DOCSIFCMTSSERVICEOUTOC
TETS, DOCSIFCMSTATUSTXPOWER, DOCSIFCMTSCMSTATUSRXPOWER, DOCSIFDOWNCHANNELPOWER,
DOCSIFSIGQUNERROREDS, DOCSIFSIGQCORRECTEDS, DOCSIFSIGQUNCORRECTABLES, DOCSIFSIGQ

         8         74            1        7.66     1.4 9h99br1t3qq3a
INSERT INTO TMP_CALC_HFC_SLOW_CM_LAST SELECT * FROM TMP_CALC_HFC_SLOW_CM_LAST_TM
P

         7         64            1        7.43     1.2 fqcwt6uak8x3w
INSERT INTO TMP_CALC_QOS_SLOW_CM_LAST SELECT * FROM TMP_CALC_QOS_SLOW_CM_LAST_TM
P

         7        139            1        7.13     2.6 38zhkf4jdyff4
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN ash.collect(3,1200); :mydate := next_date; IF broken THEN :b := 1
; ELSE :b := 0; END IF; END;

         7         72            1        6.69     1.3 4qunm1qbf8cyk
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE_CMS, CHANNELWID
TH, RXPOWER_UP, RXPOWER UPSTREAM_AVG_RX FROM CM_POWER_1 power, TOPOLOGY_LINK lin
k, UPSTREAM_CHANNEL channel, UPSTREAM_POWER_1 upstream_rx WHERE power.SECONDID =
 :1 and power.SECONDID = upstream_rx.secondid AND link.TOPOLOGYID = power.TOPOLO

         6         59            1        6.12     1.1 fd6a0p6333g8z
 SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, CM_ID, MA
X(SUBSTR(CM_DESC, 1, 12)) CM_DESC, MAX(UP_ID) UP_ID, MA
X(DOWN_ID) DOWN_ID, MAX(MAC_ID) MAC_ID, MAX(CMTS_
ID) CMTS_ID, SUM(BYTES_UP) SUM_BYTES_UP, SUM(BY

         6         96            1        5.82     1.8 2r6jnnf1hzb4z
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE_CMS, BITSPERSYM
BOL, TXPOWER_UP FROM CM_POWER_2 power, TOPOLOGY_LINK link, UPSTREAM_CHANNEL chan
nel WHERE power.SECONDID = :1 AND link.TOPOLOGYID = power.TOPOLOGYID AND link.PA
RENTLEN = 1 AND link.STATEID = 1 AND link.LINKTYPEID = 1 AND link.PARENTID = cha

         5         84            1        5.23     1.6 3a11s4c86wdu5
DELETE FROM CM_RAWDATA WHERE BATCHID = 0 AND PROFINDX = :B1

         5         94            1        5.19     1.7 fxvdq915s3qpt
DELETE FROM TMP_CALC_HFC_SLOW_CM_LAST

         4        202            4        1.11     3.8 dr1rkrznhh95b
Call CALC_TOPOLOGY_MEDIUM(:1, :2, :3, :4)

         4         87            1        3.68     1.6 axyukfdx12pu4
Call CALC_DELETE_SLOW_RAWDATA(:1, :2)

         3        126            1        2.92     2.3 33bpz9dh1w5jk
Module: Lab128
--lab128 select /*+rule*/ owner, segment_name||decode(partition_name,null,nul
l,' ('||partition_name||')') name, segment_type,tablespace_name, extent_id,f
ile_id,block_id, blocks,bytes/1048576 bytes from dba_extents

         3         68            1        2.66     1.3 bzmccctnyjb3z
INSERT INTO DOWNSTREAM_ERRORS SELECT T2.SECONDID, T1.DOWNID, ROUND(AVG(T2.SAMPLE
_LENGTH), 0), ROUND(AVG(DECODE(T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRECTABLES
,0,0, T2.UNCORRECTABLES / ( T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRECTABLES )
* 100)) ,2) AVG_CER, ROUND(AVG(DECODE(T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRE

         2        121           56        0.04     2.3 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;

         2        120            4        0.42     2.2 4zjg6w4mwu0wv
INSERT INTO TMP_TOP_MED_DN SELECT M.CMTSID, M.VENDOR_DESC, M.MODEL_DESC, MAC_L.T
OPOLOGYID, DOWN_L.TOPOLOGYID, M.UP_SNR_CNR_A3, M.UP_SNR_CNR_A2, M.UP_SNR_CNR_A1,
 M.UP_SNR_CNR_A0, M.MAC_SLOTS_OPEN, M.MAC_SLOTS_USED, M.CMTS_REBOOT, 0 FROM TMP_
TOP_MED_CMTS M, TOPOLOGY_LINK DOWN_L, TOPOLOGY_NODE DOWN_N, TOPOLOGY_LINK MAC_L

         1         95            1        1.19     1.8 1qp1yn30gajjw
 SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, M.
TOPOLOGYID UP_ID, T.UP_DESC UP_DESC, T.MAC_ID
 MAC_ID, T.CMTS_ID CMTS_ID, M.MAX_PERCENT_UTI
L, M.MAX_PACKETS_PER_SEC, M.AVG_PACKET_SIZE,

         1        116          977        0.00     2.1 5jh6zfmvpu77f
UPDATE ASH.DBIDS@REPO SET ASHSEQ = :B2 WHERE DBID = :B1

          -------------------------------------------------------------

SQL ordered by Gets                      DB/Inst: CDB10/cdb10  Snaps: 122-123
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> Total Buffer Gets:      30,077,723
-> Captured SQL account for     169.4% of Total

                                Gets              CPU     Elapsed
  Buffer Gets   Executions    per Exec   %Total Time (s)  Time (s)    SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
    16,494,914            1 ############   54.8   133.81    796.60 f1qcyh20550cf
Call CALC_QOS_SLOW(:1, :2, :3, :4)

    11,322,501            1 ############   37.6    71.96    136.75 298wmz1kxjs1m
INSERT INTO CM_QOS_PROF SELECT :B1 , R.TOPOLOGYID, :B1 - :B4 , P.NODE_PROFILE_ID
, R.DOCSIFCMTSSERVICEQOSPROFILE FROM CM_SID_RAWDATA R, ( SELECT DISTINCT T.CMID,
 P.QOS_PROF_IDX, P.NODE_PROFILE_ID FROM TMP_TOP_SLOW_CM T, CMTS_QOS_PROF P WHERE
 T.CMTSID = P.TOPOLOGYID AND P.SECONDID = :B1 ) P WHERE R.BATCHID = :B3 AND R.PR

     3,835,310            1  3,835,310.0   12.8    57.60    773.15 fj6gjgsshtxyx
Call CALC_DELETE_OLD_DATA(:1)

     2,140,461            1  2,140,461.0    7.1    24.50    354.27 0cjsxw5ndqdbc
Call CALC_HFC_SLOW(:1, :2, :3, :4)

     1,434,233            1  1,434,233.0    4.8    29.25    275.28 8t8as9usk11qw
Call CALC_TOPOLOGY_SLOW(:1, :2, :3, :4)

     1,400,037            1  1,400,037.0    4.7     8.99     73.62 3whpusvtv0qq1
INSERT INTO TMP_CALC_QOS_SLOW_CM_TMP SELECT T.CMTSID, T.DOWNID, T.UPID, T.CMID,
GREATEST(T.CMTS_REBOOT, T.UP_REBOOT), GREATEST(T.CMTS_REBOOT, T.UP_REBOOT), R.DO
CSIFCMTSSERVICEINOCTETS, R.DOCSIFCMTSSERVICEOUTOCTETS, S.SID, L.PREV_SECONDID, L
.PREV_IFINOCTETS, L.PREV_IFOUTOCTETS, L.PREV_SID FROM TMP_TOP_SLOW_CM T, CM_RAWD

     1,213,966            1  1,213,966.0    4.0     6.05     14.45 553hp60qv7vyh
select errors.TOPOLOGYID, errors.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE_CMS, CHANNELW
IDTH, BITSPERSYMBOL, SNR_DOWN, RXPOWER_DOWN FROM CM_ERRORS errors, CM_POWER_2 po
wer, TOPOLOGY_LINK link, DOWNSTREAM_CHANNEL channel where errors.SECONDID = powe
r.SECONDID AND errors.SECONDID = :1 AND errors.TOPOLOGYID = power.TOPOLOGYID AND

     1,065,052            1  1,065,052.0    3.5     6.69     72.01 4qunm1qbf8cyk
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE_CMS, CHANNELWID
TH, RXPOWER_UP, RXPOWER UPSTREAM_AVG_RX FROM CM_POWER_1 power, TOPOLOGY_LINK lin
k, UPSTREAM_CHANNEL channel, UPSTREAM_POWER_1 upstream_rx WHERE power.SECONDID =
 :1 and power.SECONDID = upstream_rx.secondid AND link.TOPOLOGYID = power.TOPOLO

     1,011,784            1  1,011,784.0    3.4     8.52     84.62 998t5bbdfm5rm
INSERT INTO CM_RAWDATA SELECT PROFINDX, 0 BATCHID, TOPOLOGYID, SAMPLETIME, SYSUP
TIME, DOCSIFCMTSCMSTATUSVALUE, DOCSIFCMTSSERVICEINOCTETS, DOCSIFCMTSSERVICEOUTOC
TETS, DOCSIFCMSTATUSTXPOWER, DOCSIFCMTSCMSTATUSRXPOWER, DOCSIFDOWNCHANNELPOWER,
DOCSIFSIGQUNERROREDS, DOCSIFSIGQCORRECTEDS, DOCSIFSIGQUNCORRECTABLES, DOCSIFSIGQ

       776,443            1    776,443.0    2.6     7.66     73.54 9h99br1t3qq3a
INSERT INTO TMP_CALC_HFC_SLOW_CM_LAST SELECT * FROM TMP_CALC_HFC_SLOW_CM_LAST_TM
P

       762,710            1    762,710.0    2.5     5.82     95.88 2r6jnnf1hzb4z
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE_CMS, BITSPERSYM
BOL, TXPOWER_UP FROM CM_POWER_2 power, TOPOLOGY_LINK link, UPSTREAM_CHANNEL chan
nel WHERE power.SECONDID = :1 AND link.TOPOLOGYID = power.TOPOLOGYID AND link.PA
RENTLEN = 1 AND link.STATEID = 1 AND link.LINKTYPEID = 1 AND link.PARENTID = cha

       724,267            1    724,267.0    2.4     7.43     63.59 fqcwt6uak8x3w
INSERT INTO TMP_CALC_QOS_SLOW_CM_LAST SELECT * FROM TMP_CALC_QOS_SLOW_CM_LAST_TM
P

       669,534            1    669,534.0    2.2     6.37     38.97 094vgzny6jvm4
INSERT INTO CM_VA ( SECONDID, TOPOLOGYID, CER, CCER, SNR, STATUSVALUE, TIMINGOFF
SET ) SELECT :B3 , TOPOLOGYID, CASE WHEN (CMTSCM_UNERR_D IS NULL OR CMTSCM_CORR_
D IS NULL OR CMTSCM_UNCORR_D IS NULL) THEN NULL ELSE 100 * CMTSCM_UNCORR_D/TOTAL
_D END CER, CASE WHEN (CMTSCM_UNERR_D IS NULL OR CMTSCM_CORR_D IS NULL OR CMTSCM

       633,947      150,259          4.2    2.1    18.21     40.04 c2a2g4fqnm25h
insert into cm_rawdata (profindx, batchid, topologyid, sampletime, sysuptime, do
csifcmstatustxpower, docsifdownchannelpower, docsifsigqunerroreds, docsifsigqcor
recteds, docsifsigquncorrectables, docsifsigqsignalnoise, sysobjectid) values (:
1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12)

       618,871      150,324          4.1    2.1    18.56     51.78 6xz6vg8q1zygu
insert into cm_rawdata (profindx, batchid, topologyid, sampletime, docsifcmtscms
tatusvalue, docsifcmtsserviceinoctets, docsifcmtsserviceoutoctets, docsifcmtscms
tatusrxpower, cmtscm_unerr, cmtscm_corr, cmtscm_uncorr, cmtscm_snr, cmtscm_timin
goffset) values (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13)

       615,244            1    615,244.0    2.0     9.03    130.46 6n0d6cv6w6krs
DELETE FROM CM_VA WHERE SECONDID <= :B1

       615,129            1    615,129.0    2.0     9.26    130.03 86m0m9q8fw9bj
DELETE FROM CM_QOS_PROF WHERE SECONDID <= :B1

       614,747            1    614,747.0    2.0     8.96    117.43 0fnnktt50m86h
DELETE FROM CM_ERRORS WHERE SECONDID <= :B1

       614,661            1    614,661.0    2.0     8.88    124.47 gyqv6h5pft4mj
DELETE FROM CM_BYTES WHERE SECONDID <= :B1

       614,649            1    614,649.0    2.0    10.68    107.01 87gy6mxtk7f3z
DELETE FROM CM_POLL_STATUS WHERE TOPOLOGYID IN ( SELECT DISTINCT TOPOLOGYID FROM
 CM_RAWDATA WHERE BATCHID = :B1 )

       613,965            1    613,965.0    2.0     8.87    119.15 aywfs0n7wwwhn
DELETE FROM CM_POWER_2 WHERE SECONDID <= :B1

       613,256            1    613,256.0    2.0     9.01    107.53 21jqxqyf80cn8
DELETE FROM CM_POWER_1 WHERE SECONDID <= :B1

       598,348      150,832          4.0    2.0    22.39     76.71 5zm9acqtd51h7
insert into cm_sid_rawdata (profindx, batchid, topologyid, sid, sampletime, docs
IfCmtsServiceQosProfile) values (:1, :2, :3, :4, :5, :6)

       343,903            1    343,903.0    1.1     2.45     11.06 8b7g4s4qa5r1d
INSERT INTO UPSTREAM_POWER_1 SELECT :B4 , T.UPID, :B4 - :B3 , ROUND(AVG(C.DOCSIF
CMTSCMSTATUSRXPOWER), 0) FROM CM_RAWDATA C, TMP_TOP_SLOW_CM T WHERE C.TOPOLOGYID
 = T.CMID AND C.BATCHID = :B2 AND C.PROFINDX = :B1 GROUP BY T.UPID

       301,471            1    301,471.0    1.0     2.66     68.37 bzmccctnyjb3z
INSERT INTO DOWNSTREAM_ERRORS SELECT T2.SECONDID, T1.DOWNID, ROUND(AVG(T2.SAMPLE
_LENGTH), 0), ROUND(AVG(DECODE(T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRECTABLES
,0,0, T2.UNCORRECTABLES / ( T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRECTABLES )
* 100)) ,2) AVG_CER, ROUND(AVG(DECODE(T2.UNERROREDS + T2.CORRECTEDS + T2.UNCORRE

          -------------------------------------------------------------
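
A listing like "SQL ordered by Gets" can be approximated directly from the shared pool. The sketch below is illustrative only: it assumes SELECT access to V$SQL on this instance, and it reports cumulative figures since instance startup rather than the per-snapshot deltas shown above. As the section header notes, PL/SQL entries such as the CALC_QOS_SLOW call also include the buffer gets of the SQL they invoke, which is why captured SQL can account for more than 100% of the total.

```sql
-- Illustrative sketch: top statements by buffer gets from V$SQL.
-- CPU_TIME and ELAPSED_TIME are reported by Oracle in microseconds.
-- ROWNUM-in-subquery form is used for 10g compatibility.
SELECT *
FROM  (SELECT sql_id,
              buffer_gets,
              executions,
              ROUND(buffer_gets / NULLIF(executions, 0), 1) AS gets_per_exec,
              ROUND(cpu_time / 1000000, 2)     AS cpu_s,
              ROUND(elapsed_time / 1000000, 2) AS elapsed_s
       FROM   v$sql
       ORDER  BY buffer_gets DESC)
WHERE  ROWNUM <= 15;
```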

SQL ordered by Reads                     DB/Inst: CDB10/cdb10  Snaps: 122-123
-> Total Disk Reads:         401,992
-> Captured SQL account for    134.7% of Total

                               Reads              CPU     Elapsed
Physical Reads  Executions    per Exec   %Total Time (s)  Time (s)    SQL Id
-------------- ----------- ------------- ------ -------- --------- -------------
       192,597           1     192,597.0   47.9   133.81    796.60 f1qcyh20550cf
Call CALC_QOS_SLOW(:1, :2, :3, :4)

       144,969           1     144,969.0   36.1    71.96    136.75 298wmz1kxjs1m
INSERT INTO CM_QOS_PROF SELECT :B1 , R.TOPOLOGYID, :B1 - :B4 , P.NODE_PROFILE_ID
, R.DOCSIFCMTSSERVICEQOSPROFILE FROM CM_SID_RAWDATA R, ( SELECT DISTINCT T.CMID,
 P.QOS_PROF_IDX, P.NODE_PROFILE_ID FROM TMP_TOP_SLOW_CM T, CMTS_QOS_PROF P WHERE
 T.CMTSID = P.TOPOLOGYID AND P.SECONDID = :B1 ) P WHERE R.BATCHID = :B3 AND R.PR

        28,436           4       7,109.0    7.1     4.42    201.93 dr1rkrznhh95b
Call CALC_TOPOLOGY_MEDIUM(:1, :2, :3, :4)

        22,352           1      22,352.0    5.6    24.50    354.27 0cjsxw5ndqdbc
Call CALC_HFC_SLOW(:1, :2, :3, :4)

        21,907           1      21,907.0    5.4    57.60    773.15 fj6gjgsshtxyx
Call CALC_DELETE_OLD_DATA(:1)

        15,834           0           N/A    3.9    15.56    158.02 10dkqv3kr8xa5
 SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, CM_ID, MA
X(SUBSTR(CM_DESC, 1, 12)) CM_DESC, MAX(UP_ID) UP_ID, MA
X(DOWN_ID) DOWN_ID, MAX(MAC_ID) MAC_ID, MAX(CMTS_
ID) CMTS_ID, SUM(BYTES_UP) SUM_BYTES_UP, SUM(BY

        15,050           1      15,050.0    3.7    29.25    275.28 8t8as9usk11qw
Call CALC_TOPOLOGY_SLOW(:1, :2, :3, :4)

        13,424           1      13,424.0    3.3     6.12     58.83 fd6a0p6333g8z
 SELECT trunc(SYSDATE, 'HH24') HOUR_STAMP, CM_ID, MA
X(SUBSTR(CM_DESC, 1, 12)) CM_DESC, MAX(UP_ID) UP_ID, MA
X(DOWN_ID) DOWN_ID, MAX(MAC_ID) MAC_ID, MAX(CMTS_
ID) CMTS_ID, SUM(BYTES_UP) SUM_BYTES_UP, SUM(BY

        10,667           1      10,667.0    2.7     2.92    125.63 33bpz9dh1w5jk
Module: Lab128
--lab128 select /*+rule*/ owner, segment_name||decode(partition_name,null,nul
l,' ('||partition_name||')') name, segment_type,tablespace_name, extent_id,f
ile_id,block_id, blocks,bytes/1048576 bytes from dba_extents

         9,156           4       2,289.0    2.3     1.68    119.84 4zjg6w4mwu0wv
INSERT INTO TMP_TOP_MED_DN SELECT M.CMTSID, M.VENDOR_DESC, M.MODEL_DESC, MAC_L.T
OPOLOGYID, DOWN_L.TOPOLOGYID, M.UP_SNR_CNR_A3, M.UP_SNR_CNR_A2, M.UP_SNR_CNR_A1,
 M.UP_SNR_CNR_A0, M.MAC_SLOTS_OPEN, M.MAC_SLOTS_USED, M.CMTS_REBOOT, 0 FROM TMP_
TOP_MED_CMTS M, TOPOLOGY_LINK DOWN_L, TOPOLOGY_NODE DOWN_N, TOPOLOGY_LINK MAC_L

         8,700           1       8,700.0    2.2     3.68     86.86 axyukfdx12pu4
Call CALC_DELETE_SLOW_RAWDATA(:1, :2)

         6,878           1       6,878.0    1.7     8.99     73.62 3whpusvtv0qq1
INSERT INTO TMP_CALC_QOS_SLOW_CM_TMP SELECT T.CMTSID, T.DOWNID, T.UPID, T.CMID,
GREATEST(T.CMTS_REBOOT, T.UP_REBOOT), GREATEST(T.CMTS_REBOOT, T.UP_REBOOT), R.DO
CSIFCMTSSERVICEINOCTETS, R.DOCSIFCMTSSERVICEOUTOCTETS, S.SID, L.PREV_SECONDID, L
.PREV_IFINOCTETS, L.PREV_IFOUTOCTETS, L.PREV_SID FROM TMP_TOP_SLOW_CM T, CM_RAWD

         5,338           1       5,338.0    1.3     6.37     38.97 094vgzny6jvm4
INSERT INTO CM_VA ( SECONDID, TOPOLOGYID, CER, CCER, SNR, STATUSVALUE, TIMINGOFF
SET ) SELECT :B3 , TOPOLOGYID, CASE WHEN (CMTSCM_UNERR_D IS NULL OR CMTSCM_CORR_
D IS NULL OR CMTSCM_UNCORR_D IS NULL) THEN NULL ELSE 100 * CMTSCM_UNCORR_D/TOTAL
_D END CER, CASE WHEN (CMTSCM_UNERR_D IS NULL OR CMTSCM_CORR_D IS NULL OR CMTSCM

         4,337           4       1,084.3    1.1     0.36      5.60 46jpzuthyv6wa
Module: Lab128
--lab128 select se.fa_se, uit.ui, uipt.uip, uist.uis, fr_s.fr_se, t.dt from (se
lect /*+ all_rows */ count(*) fa_se from (select ts#,max(length) m from sys.fet$
 group by ts#) f, sys.seg$ s where s.ts#=f.ts# and extsize>m) se, (select count(
*) ui from sys.ind$ where bitand(flags,1)=1) uit, (select count(*) uip from sys.

         4,197           1       4,197.0    1.0     2.45     11.06 8b7g4s4qa5r1d
INSERT INTO UPSTREAM_POWER_1 SELECT :B4 , T.UPID, :B4 - :B3 , ROUND(AVG(C.DOCSIF
CMTSCMSTATUSRXPOWER), 0) FROM CM_RAWDATA C, TMP_TOP_SLOW_CM T WHERE C.TOPOLOGYID
 = T.CMID AND C.BATCHID = :B2 AND C.PROFINDX = :B1 GROUP BY T.UPID

          -------------------------------------------------------------

SQL ordered by Executions                DB/Inst: CDB10/cdb10  Snaps: 122-123
-> Total Executions:         542,597
-> Captured SQL account for     86.2% of Total

                                              CPU per    Elap per
 Executions   Rows Processed  Rows per Exec   Exec (s)   Exec (s)     SQL Id
------------ --------------- -------------- ---------- ----------- -------------
     150,832         150,324            1.0       0.00        0.00 5zm9acqtd51h7
insert into cm_sid_rawdata (profindx, batchid, topologyid, sid, sampletime, docs
IfCmtsServiceQosProfile) values (:1, :2, :3, :4, :5, :6)

     150,324         150,324            1.0       0.00        0.00 6xz6vg8q1zygu
insert into cm_rawdata (profindx, batchid, topologyid, sampletime, docsifcmtscms
tatusvalue, docsifcmtsserviceinoctets, docsifcmtsserviceoutoctets, docsifcmtscms
tatusrxpower, cmtscm_unerr, cmtscm_corr, cmtscm_uncorr, cmtscm_snr, cmtscm_timin
goffset) values (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13)

     150,259         150,259            1.0       0.00        0.00 c2a2g4fqnm25h
insert into cm_rawdata (profindx, batchid, topologyid, sampletime, sysuptime, do
csifcmstatustxpower, docsifdownchannelpower, docsifsigqunerroreds, docsifsigqcor
recteds, docsifsigquncorrectables, docsifsigqsignalnoise, sysobjectid) values (:
1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12)

       8,128           8,128            1.0       0.00        0.01 12a0nrhpk3hym
UPDATE TOPOLOGY_LINK SET DATETO=sysdate, STATEID=0 WHERE TOPOLOGYID=:1 AND PAREN
TID=:2 AND STATEID=1

         977             977            1.0       0.00        0.12 5jh6zfmvpu77f
UPDATE ASH.DBIDS@REPO SET ASHSEQ = :B2 WHERE DBID = :B1

         624             624            1.0       0.00        0.00 7h35uxf5uhmm1
select sysdate from dual

         624               0            0.0       0.00        0.00 apuw5pk7p77hc
ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED

         595           7,140           12.0       0.01        0.01 d5vf5a1ffcskb
Module: Lab128
--lab128 select replace(stat_name,'TICKS','TIME') stat_name,value from v$osstat
 where substr(stat_name,1,3) !='AVG'

         567             567            1.0       0.00        0.00 bsa0wjtftg3uw
select file# from file$ where ts#=:1

         556             556            1.0       0.01        0.02 7gtztzv329wg0
select c.name, u.name from con$ c, cdef$ cd, user$ u where c.con# = cd.con# and
 cd.enabled = :1 and c.owner# = u.user#

          -------------------------------------------------------------

SQL ordered by Parse Calls               DB/Inst: CDB10/cdb10  Snaps: 122-123
-> Total Parse Calls:          11,460
-> Captured SQL account for      56.7% of Total

                            % Total
 Parse Calls  Executions     Parses    SQL Id
------------ ------------ --------- -------------
         624          624      5.45 7h35uxf5uhmm1
select sysdate from dual

         624          624      5.45 apuw5pk7p77hc
ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED

         567          567      4.95 bsa0wjtftg3uw
select file# from file$ where ts#=:1

         556          556      4.85 7gtztzv329wg0
select c.name, u.name from con$ c, cdef$ cd, user$ u where c.con# = cd.con# and
 cd.enabled = :1 and c.owner# = u.user#

         508      150,832      4.43 5zm9acqtd51h7
insert into cm_sid_rawdata (profindx, batchid, topologyid, sid, sampletime, docs
IfCmtsServiceQosProfile) values (:1, :2, :3, :4, :5, :6)

         448          448      3.91 0h6b2sajwb74n
select privilege#,level from sysauth$ connect by grantee#=prior privilege# and p
rivilege#>0 start with grantee#=:1 and privilege#>0

         411          411      3.59 9qgtwh66xg6nz
update seg$ set type#=:4,blocks=:5,extents=:6,minexts=:7,maxexts=:8,extsize=:9,e
xtpct=:10,user#=:11,iniexts=:12,lists=decode(:13, 65535, NULL, :13),groups=decod
e(:14, 65535, NULL, :14), cachehint=:15, hwmincr=:16, spare1=DECODE(:17,0,NULL,:
17),scanhint=:18 where ts#=:1 and file#=:2 and block#=:3

         297          297      2.59 350f5yrnnmshs
lock table sys.mon_mods$ in exclusive mode nowait

         297          297      2.59 g00cj285jmgsw
update sys.mon_mods$ set inserts = inserts + :ins, updates = updates + :upd, del
etes = deletes + :del, flags = (decode(bitand(flags, :flag), :flag, flags, flags
 + :flag)), drop_segments = drop_segments + :dropseg, timestamp = :time where ob
j# = :objn

         181          181      1.58 6129566gyvx21
Module: OEM.SystemPool
SELECT INSTANTIABLE, supertype_owner, supertype_name, LOCAL_ATTRIBUTES FROM all_
types WHERE type_name = :1 AND owner = :2

         144          144      1.26 0k8522rmdzg4k
select privilege# from sysauth$ where (grantee#=:1 or grantee#=1) and privilege#
>0

         128          128      1.12 cp8ygp2mr8j6s
select * from TOPOLOGY_NODETYPE where NODETYPEID < 0

         117          117      1.02 2b064ybzkwf1y
Module: OEM.SystemPool
BEGIN EMD_NOTIFICATION.QUEUE_READY(:1, :2, :3); END;

         117          117      1.02 9p1um1wd886xb
select o.owner#, u.name, o.name, o.namespace, o.obj#, d.d
_timestamp, nvl(d.property,0), o.type#, o.subname, d.d_attrs from dependency$ d
, obj$ o, user$ u where d.p_obj#=:1 and (d.p_timestamp=:2 or d.property=2)
and d.d_obj#=o.obj# and o.owner#=u.user# order by o.obj#

         116          116      1.01 9zg6y3ucgy8kb
select n.intcol# from ntab$ n, col$ c where n.obj#=:1 and c.obj#=:1 and c.intco
l#=n.intcol# and bitand(c.property, 32768)!=32768

          -------------------------------------------------------------
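
The parse-to-execute ratios above separate well-behaved statements (508 parses for 150,832 executions of 5zm9acqtd51h7) from statements re-parsed on every call (624 parses for 624 executions of "select sysdate from dual"). A minimal, illustrative PL/SQL sketch of the parse-once, execute-many pattern:

```sql
-- Static SQL inside PL/SQL is parsed once per session and held in the
-- PL/SQL cursor cache; each loop iteration counts as an execution,
-- not a new parse call.
DECLARE
  v_now DATE;
BEGIN
  FOR i IN 1 .. 1000 LOOP
    SELECT sysdate INTO v_now FROM dual;  -- ~1 parse, 1000 executions
  END LOOP;
END;
/
```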

SQL ordered by Sharable Memory           DB/Inst: CDB10/cdb10  Snaps: 122-123

                  No data exists for this section of the report.
          -------------------------------------------------------------

SQL ordered by Version Count             DB/Inst: CDB10/cdb10  Snaps: 122-123

                  No data exists for this section of the report.
          -------------------------------------------------------------

Instance Activity Stats                  DB/Inst: CDB10/cdb10  Snaps: 122-123

Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
CPU used by this session                     48,802           13.5          23.9
CPU used when call started                   49,725           13.8          24.3
CR blocks created                             1,548            0.4           0.8
Cached Commit SCN referenced                  4,257            1.2           2.1
Commit SCN cached                                19            0.0           0.0
DB time                                   2,051,539          567.4       1,002.7
DBWR checkpoint buffers written               7,052            2.0           3.5
DBWR checkpoints                                 78            0.0           0.0
DBWR object drop buffers written                352            0.1           0.2
DBWR revisited being-written buf                281            0.1           0.1
DBWR thread checkpoint buffers w              6,008            1.7           2.9
DBWR transaction table writes                   169            0.1           0.1
DBWR undo block writes                       86,711           24.0          42.4
IMU CR rollbacks                                196            0.1           0.1
IMU Flushes                                   1,921            0.5           0.9
IMU Redo allocation size                  4,831,688        1,336.3       2,361.5
IMU commits                                   1,095            0.3           0.5
IMU contention                                   51            0.0           0.0
IMU ktichg flush                                 11            0.0           0.0
IMU pool not allocated                          261            0.1           0.1
IMU recursive-transaction flush                   5            0.0           0.0
IMU undo allocation size                  8,282,272        2,290.7       4,048.0
IMU- failed to get a private str                261            0.1           0.1
PX local messages recv'd                          0            0.0           0.0
PX local messages sent                            0            0.0           0.0
SMON posted for undo segment shr                  8            0.0           0.0
SQL*Net roundtrips to/from clien            557,524          154.2         272.5
SQL*Net roundtrips to/from dblin              5,571            1.5           2.7
active txn count during cleanout            667,416          184.6         326.2
application wait time                         2,949            0.8           1.4
auto extends on undo tablespace                   0            0.0           0.0
background checkpoints completed                 33            0.0           0.0
background checkpoints started                   32            0.0           0.0
background timeouts                          10,887            3.0           5.3
branch node splits                               14            0.0           0.0
buffer is not pinned count               16,308,390        4,510.5       7,970.9
buffer is pinned count                   37,217,420       10,293.4      18,190.3
bytes received via SQL*Net from          54,299,124       15,017.8      26,539.2
bytes received via SQL*Net from             702,510          194.3         343.4
bytes sent via SQL*Net to client         59,493,239       16,454.4      29,077.8
bytes sent via SQL*Net to dblink          4,758,313        1,316.0       2,325.7
calls to get snapshot scn: kcmgs            102,555           28.4          50.1
calls to kcmgas                             122,772           34.0          60.0
calls to kcmgcs                             666,871          184.4         325.9
change write time                            93,636           25.9          45.8
cleanout - number of ktugct call            694,894          192.2         339.6
cleanouts and rollbacks - consis                524            0.1           0.3
cleanouts only - consistent read             16,400            4.5           8.0
cluster key scan block gets                  62,504           17.3          30.6
cluster key scans                            44,624           12.3          21.8
commit batch performed                            5            0.0           0.0
commit batch requested                            5            0.0           0.0
commit batch/immediate performed                 49            0.0           0.0
commit batch/immediate requested                 49            0.0           0.0
commit cleanout failures: block              10,148            2.8           5.0
commit cleanout failures: buffer                 39            0.0           0.0
commit cleanout failures: callba                 93            0.0           0.1
commit cleanout failures: cannot                  2            0.0           0.0
commit cleanouts                             49,810           13.8          24.4
commit cleanouts successfully co             39,528           10.9          19.3
commit immediate performed                       44            0.0           0.0
commit immediate requested                       44            0.0           0.0
commit txn count during cleanout             37,416           10.4          18.3
concurrency wait time                         6,361            1.8           3.1
consistent changes                          375,588          103.9         183.6
consistent gets                          19,788,311        5,473.0       9,671.7
consistent gets - examination            15,781,101        4,364.7       7,713.2
consistent gets direct                            2            0.0           0.0
consistent gets from cache               19,788,309        5,473.0       9,671.7
current blocks converted for CR                   1            0.0           0.0
cursor authentications                           60            0.0           0.0
data blocks consistent reads - u              7,046            2.0           3.4
db block changes                          9,922,875        2,744.4       4,849.9
db block gets                            10,289,412        2,845.8       5,029.0
db block gets direct                          3,341            0.9           1.6
db block gets from cache                 10,286,071        2,844.9       5,027.4
deferred (CURRENT) block cleanou             10,217            2.8           5.0
dirty buffers inspected                     142,881           39.5          69.8
enqueue conversions                          13,940            3.9           6.8
enqueue releases                             71,947           19.9          35.2
enqueue requests                             71,973           19.9          35.2
enqueue timeouts                                 34            0.0           0.0
enqueue waits                                    65            0.0           0.0
exchange deadlocks                                0            0.0           0.0
execute count                               542,597          150.1         265.2
free buffer inspected                       536,842          148.5         262.4
free buffer requested                       511,414          141.4         250.0
global undo segment hints helped                  0            0.0           0.0
global undo segment hints were s                  0            0.0           0.0
heap block compress                          23,794            6.6          11.6
hot buffers moved to head of LRU             35,300            9.8          17.3
immediate (CR) block cleanout ap             16,924            4.7           8.3
immediate (CURRENT) block cleano             40,644           11.2          19.9
index fast full scans (full)                     11            0.0           0.0
index fetch by key                        9,609,838        2,657.9       4,696.9
index scans kdiixs1                         540,504          149.5         264.2
leaf node 90-10 splits                        3,675            1.0           1.8
leaf node splits                              7,868            2.2           3.9
lob reads                                        10            0.0           0.0
lob writes                                      597            0.2           0.3
lob writes unaligned                            597            0.2           0.3
logons cumulative                               179            0.1           0.1
messages received                            36,800           10.2          18.0
messages sent                                36,800           10.2          18.0
no buffer to keep pinned count                    0            0.0           0.0
no work - consistent read gets            3,414,669          944.4       1,669.0
opened cursors cumulative                    11,030            3.1           5.4
parse count (failures)                           11            0.0           0.0
parse count (hard)                              263            0.1           0.1
parse count (total)                          11,460            3.2           5.6
parse time cpu                                  184            0.1           0.1
parse time elapsed                            5,105            1.4           2.5
physical read IO requests                   286,506           79.2         140.0
physical read bytes                   3,293,118,464      910,795.7   1,609,539.8
physical read total IO requests             312,883           86.5         152.9
physical read total bytes             3,723,894,784    1,029,937.9   1,820,085.4
physical read total multi block              16,936            4.7           8.3
physical reads                              401,992          111.2         196.5
physical reads cache                        391,309          108.2         191.3
physical reads cache prefetch               106,160           29.4          51.9
physical reads direct                        10,683            3.0           5.2
physical reads direct (lob)                       2            0.0           0.0
physical reads direct temporary              10,450            2.9           5.1
physical write IO requests                  124,209           34.4          60.7
physical write bytes                  1,423,941,632      393,827.3     695,963.7
physical write total IO requests            135,013           37.3          66.0
physical write total bytes            3,039,874,048      840,754.5   1,485,764.4
physical write total multi block              9,946            2.8           4.9
physical writes                             173,821           48.1          85.0
physical writes direct                       15,138            4.2           7.4
physical writes direct (lob)                      3            0.0           0.0
physical writes direct temporary             13,312            3.7           6.5
physical writes from cache                  158,683           43.9          77.6
physical writes non checkpoint              171,458           47.4          83.8
pinned buffers inspected                      1,327            0.4           0.7
prefetched blocks aged out befor                971            0.3           0.5
process last non-idle time                    5,863            1.6           2.9
recovery blocks read                              0            0.0           0.0
recursive calls                             110,227           30.5          53.9
recursive cpu usage                          28,845            8.0          14.1
redo blocks read for recovery                     0            0.0           0.0
redo blocks written                       2,951,190          816.2       1,442.4
redo buffer allocation retries                4,972            1.4           2.4
redo entries                              4,971,193        1,374.9       2,429.7
redo log space requests                       1,018            0.3           0.5
redo log space wait time                     16,736            4.6           8.2
redo ordering marks                          86,212           23.8          42.1
redo size                             1,462,839,100      404,585.4     714,975.1
redo synch time                             114,641           31.7          56.0
redo synch writes                             5,072            1.4           2.5
redo wastage                                773,164          213.8         377.9
redo write time                             208,649           57.7         102.0
redo writer latching time                         9            0.0           0.0
redo writes                                   2,820            0.8           1.4
rollback changes - undo records               7,908            2.2           3.9
rollbacks only - consistent read              1,010            0.3           0.5
rows fetched via callback                 6,732,803        1,862.1       3,290.7
session connect time                              0            0.0           0.0
session cursor cache hits                     6,009            1.7           2.9
session logical reads                    30,077,723        8,318.8      14,700.7
session pga memory                       87,991,760       24,336.4      43,006.7
session pga memory max                  128,361,936       35,501.8      62,738.0
session uga memory                  262,000,976,040   72,463,036.0 #############
session uga memory max                  122,117,960       33,774.8      59,686.2
shared hash latch upgrades - no             918,434          254.0         448.9
shared hash latch upgrades - wai                  3            0.0           0.0
sorts (disk)                                      0            0.0           0.0
sorts (memory)                               32,808            9.1          16.0
sorts (rows)                              1,889,801          522.7         923.7
sql area purged                                  58            0.0           0.0
summed dirty queue length                 2,498,747          691.1       1,221.3
switch current to new buffer                 10,984            3.0           5.4
table fetch by rowid                     20,173,244        5,579.4       9,859.9
table fetch continued row                         9            0.0           0.0
table scan blocks gotten                    227,381           62.9         111.1
table scan rows gotten                   22,027,503        6,092.3      10,766.1
table scans (cache partitions)                    0            0.0           0.0
table scans (long tables)                       176            0.1           0.1
table scans (short tables)                    5,560            1.5           2.7
total number of times SMON poste                172            0.1           0.1
transaction rollbacks                            49            0.0           0.0
transaction tables consistent re                  7            0.0           0.0
transaction tables consistent re                254            0.1           0.1
undo change vector size                 619,905,088      171,450.5     302,983.9
user I/O wait time                          233,992           64.7         114.4
user calls                                  560,283          155.0         273.8
user commits                                  1,614            0.5           0.8
user rollbacks                                  432            0.1           0.2
workarea executions - onepass                     4            0.0           0.0
workarea executions - optimal                36,889           10.2          18.0
write clones created in backgrou                169            0.1           0.1
write clones created in foregrou                830            0.2           0.4
          -------------------------------------------------------------
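
The "per Trans" column in this section can be reproduced from the report's own rows: the divisor is user commits plus user rollbacks over the snapshot interval. A quick check, using values taken from the table above:

```python
# Verify the "per Trans" column: Total / (user commits + user rollbacks).
user_commits = 1_614                         # "user commits" row, Total
user_rollbacks = 432                         # "user rollbacks" row, Total
transactions = user_commits + user_rollbacks # 2,046 over the interval

db_time_total = 2_051_539                    # "DB time" row, Total
print(round(db_time_total / transactions, 1))  # 1002.7, matching the report

# The same divisor reproduces other rows, e.g. "execute count":
print(round(542_597 / transactions, 1))        # 265.2
```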

Instance Activity Stats - Absolute Values DB/Inst: CDB10/cdb10  Snaps: 122-123
-> Statistics with absolute values (should not be diffed)

Statistic                            Begin Value       End Value
-------------------------------- --------------- ---------------
session cursor cache count                36,864          38,406
opened cursors current                       895             925
workarea memory allocated                 33,293          34,475
logons current                                36              37
          -------------------------------------------------------------

Instance Activity Stats - Thread Activity DB/Inst: CDB10/cdb10  Snaps: 122-123
-> Statistics identified by '(derived)' come from sources other than SYSSTAT

Statistic                                     Total  per Hour
-------------------------------- ------------------ ---------
log switches (derived)                           32     31.86
          -------------------------------------------------------------

Tablespace IO Stats                      DB/Inst: CDB10/cdb10  Snaps: 122-123
-> ordered by IOs (Reads + Writes) desc

Tablespace
------------------------------
                 Av      Av     Av                       Av     Buffer Av Buf
         Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
TS_STARGUS
       194,616      54    8.3     1.2       43,074       12          0    0.0
TEMP
        73,213      20    5.1     1.4       13,433        4          0    0.0
UNDOTBS1
           998       0   34.5     1.0       65,474       18        152  325.0
SYSTEM
         9,656       3   12.1     5.1          254        0          2  300.0
SYSAUX
         6,768       2   16.5     1.1        1,773        0          2   10.0
PERFSTAT
           661       0   35.7     1.0          271        0          0    0.0
EXAMPLE
           482       0   13.4     1.0           33        0          0    0.0
USERS
           105       0    8.7     1.0           33        0          0    0.0
          -------------------------------------------------------------

File IO Stats                            DB/Inst: CDB10/cdb10  Snaps: 122-123
-> ordered by Tablespace, File

Tablespace               Filename
------------------------ ----------------------------------------------------
                 Av      Av     Av                       Av     Buffer Av Buf
         Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
EXAMPLE                  /export/home/oracle10/oradata/cdb10/example01.dbf
           482       0   13.4     1.0           33        0          0    0.0
PERFSTAT                 /export/home/oracle10/oradata/cdb10/perfstat01.dbf
           661       0   35.7     1.0          271        0          0    0.0
SYSAUX                   /export/home/oracle10/oradata/cdb10/sysaux01.dbf
         6,768       2   16.5     1.1        1,773        0          2   10.0
SYSTEM                   /export/home/oracle10/oradata/cdb10/system01.dbf
         9,656       3   12.1     5.1          254        0          2  300.0
TEMP                     /export/home/oracle10/oradata/cdb10/temp01.dbf
        73,213      20    5.1     1.4       13,433        4          0    N/A
TS_STARGUS               /export/home/oracle10/oradata/cdb10/ts_stargus_01.db
       194,616      54    8.3     1.2       43,074       12          0    0.0
UNDOTBS1                 /export/home/oracle10/oradata/cdb10/undotbs01.dbf
           998       0   34.5     1.0       65,474       18        152  325.0
USERS                    /export/home/oracle10/oradata/cdb10/users01.dbf
           105       0    8.7     1.0           33        0          0    0.0
          -------------------------------------------------------------

Buffer Pool Statistics                   DB/Inst: CDB10/cdb10  Snaps: 122-123
-> Standard block size Pools  D: default,  K: keep,  R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k

                                                            Free Writ     Buffer
     Number of Pool         Buffer     Physical    Physical Buff Comp       Busy
P      Buffers Hit%           Gets        Reads      Writes Wait Wait      Waits
--- ---------- ---- -------------- ------------ ----------- ---- ---- ----------
D        3,465   99     30,072,012      391,303     159,176 ####    8        156
          -------------------------------------------------------------

Instance Recovery Stats                   DB/Inst: CDB10/cdb10  Snaps: 122-123
-> B: Begin snapshot,  E: End snapshot

  Targt  Estd                                  Log File Log Ckpt     Log Ckpt
  MTTR   MTTR   Recovery  Actual    Target       Size    Timeout     Interval
   (s)    (s)   Estd IOs Redo Blks Redo Blks  Redo Blks Redo Blks   Redo Blks
- ----- ----- ---------- --------- --------- ---------- --------- ------------
B     0    20       1046    147434    184320     184320    483666          N/A
E     0    16        764     94387    184320     184320    441470          N/A
          -------------------------------------------------------------

Buffer Pool Advisory                           DB/Inst: CDB10/cdb10  Snap: 123
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate

                                        Est
                                       Phys
    Size for   Size      Buffers for   Read          Estimated
P    Est (M) Factor         Estimate Factor     Physical Reads
--- -------- ------ ---------------- ------ ------------------
D          4     .1              495    2.6          5,966,703
D          8     .3              990    1.4          3,331,760
D         12     .4            1,485    1.4          3,181,146
D         16     .6            1,980    1.3          3,073,609
D         20     .7            2,475    1.3          2,965,522
D         24     .9            2,970    1.0          2,373,562
D         28    1.0            3,465    1.0          2,334,724
D         32    1.1            3,960    1.0          2,309,994
D         36    1.3            4,455    1.0          2,278,012
D         40    1.4            4,950    1.0          2,253,921
D         44    1.6            5,445    1.0          2,231,246
D         48    1.7            5,940    0.9          2,212,530
D         52    1.9            6,435    0.9          2,184,378
D         56    2.0            6,930    0.9          2,146,358
          -------------------------------------------------------------

PGA Aggr Summary                         DB/Inst: CDB10/cdb10  Snaps: 122-123
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory

PGA Cache Hit %   W/A MB Processed  Extra W/A MB Read/Written
--------------- ------------------ --------------------------
           93.7              3,557                        241
          -------------------------------------------------------------
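The `->` note above defines PGA cache hit % as the share of W/A data processed purely in memory. As a quick sanity check, the two figures in the summary row reproduce the reported 93.7% (a minimal sketch; values transcribed from the table above):

```python
# Recompute the PGA cache hit % from the summary row above.
# hit% = W/A MB processed / (W/A MB processed + extra W/A MB read/written)
wa_mb_processed = 3557
extra_wa_mb_rw = 241

pga_cache_hit_pct = 100.0 * wa_mb_processed / (wa_mb_processed + extra_wa_mb_rw)
print(round(pga_cache_hit_pct, 1))  # matches the reported 93.7
```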

PGA Aggr Target Stats                     DB/Inst: CDB10/cdb10  Snaps: 122-123
-> B: Begin snap   E: End snap (rows identified with B or E contain data
   which is absolute i.e. not diffed over the interval)
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used    - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem    - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem   - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem    - percentage of workarea memory under manual control

                                                %PGA  %Auto   %Man
    PGA Aggr   Auto PGA   PGA Mem    W/A PGA     W/A    W/A    W/A Global Mem
   Target(M)  Target(M)  Alloc(M)    Used(M)     Mem    Mem    Mem   Bound(K)
- ---------- ---------- ---------- ---------- ------ ------ ------ ----------
B        200        127      141.6       32.5   23.0  100.0     .0     40,960
E        200        125      144.8       33.7   23.2  100.0     .0     40,960
          -------------------------------------------------------------

PGA Aggr Target Histogram                 DB/Inst: CDB10/cdb10  Snaps: 122-123
-> Optimal Executions are purely in-memory operations

  Low     High
Optimal Optimal    Total Execs  Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- -------------- ------------ ------------
     2K      4K         33,637         33,637            0            0
    64K    128K             25             25            0            0
   128K    256K              3              3            0            0
   256K    512K             26             26            0            0
   512K   1024K          2,273          2,273            0            0
     1M      2M            895            895            0            0
     4M      8M             10              8            2            0
     8M     16M             12             12            0            0
    16M     32M              2              2            0            0
    64M    128M              2              0            2            0
          -------------------------------------------------------------

PGA Memory Advisory                            DB/Inst: CDB10/cdb10  Snap: 123
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
   where Estd PGA Overalloc Count is 0

                                       Estd Extra    Estd PGA   Estd PGA
PGA Target    Size           W/A MB   W/A MB Read/      Cache  Overalloc
  Est (MB)   Factr        Processed Written to Disk     Hit %      Count
---------- ------- ---------------- ---------------- -------- ----------
        25     0.1         56,190.1          4,876.0     92.0        353
        50     0.3         56,190.1          3,846.0     94.0        203
       100     0.5         56,190.1            406.6     99.0          0
       150     0.8         56,190.1            278.9    100.0          0
       200     1.0         56,190.1            278.9    100.0          0
       240     1.2         56,190.1            215.7    100.0          0
       280     1.4         56,190.1            215.7    100.0          0
       320     1.6         56,190.1            215.7    100.0          0
       360     1.8         56,190.1            215.7    100.0          0
       400     2.0         56,190.1            215.7    100.0          0
       600     3.0         56,190.1            215.7    100.0          0
       800     4.0         56,190.1            215.7    100.0          0
     1,200     6.0         56,190.1            215.7    100.0          0
     1,600     8.0         56,190.1            215.7    100.0          0
          -------------------------------------------------------------
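The `->` note for this section states the selection rule directly: minimally choose a pga_aggregate_target value where Estd PGA Overalloc Count is 0. A minimal sketch of that rule applied to the rows above (target/overalloc pairs transcribed from the table):

```python
# (target_mb, estd_pga_overalloc_count) pairs transcribed from the advisory above.
advisory_rows = [
    (25, 353), (50, 203), (100, 0), (150, 0), (200, 0),
    (240, 0), (280, 0), (320, 0), (360, 0), (400, 0),
    (600, 0), (800, 0), (1200, 0), (1600, 0),
]

# Per the report's note: pick the smallest target whose estimated
# overallocation count is zero.
min_safe_target_mb = min(t for t, overalloc in advisory_rows if overalloc == 0)
print(min_safe_target_mb)  # 100 (MB) for this snapshot
```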

Shared Pool Advisory                          DB/Inst: CDB10/cdb10  Snap: 123
-> SP: Shared Pool     Est LC: Estimated Library Cache   Factr: Factor
-> Note there is often a 1:Many correlation between a single logical object
   in the Library Cache, and the physical number of memory objects associated
   with it.  Therefore comparing the number of Lib Cache objects (e.g. in
   v$librarycache), with the number of Lib Cache Memory Objects is invalid.

                                        Est LC Est LC  Est LC Est LC
    Shared    SP   Est LC                 Time   Time    Load   Load      Est LC
      Pool  Size     Size       Est LC   Saved  Saved    Time   Time         Mem
   Size(M) Factr      (M)      Mem Obj     (s)  Factr     (s)  Factr    Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
        96    .8       19        2,407 #######    1.0     882    2.0   3,172,239
       112    .9       33        3,038 #######    1.0     538    1.2   3,190,425
       128   1.0       47        4,150 #######    1.0     433    1.0   3,193,792
       144   1.1       62        5,909 #######    1.0     430    1.0   3,194,235
       160   1.3       77        7,196 #######    1.0     427    1.0   3,194,510
       176   1.4       92        8,955 #######    1.0     427    1.0   3,194,594
       192   1.5      107       10,579 #######    1.0     426    1.0   3,194,828
       208   1.6      122       12,029 #######    1.0     426    1.0   3,195,128
       224   1.8      137       13,603 #######    1.0     424    1.0   3,195,555
       240   1.9      152       14,744 #######    1.0     423    1.0   3,195,770
       256   2.0      167       15,773 #######    1.0     423    1.0   3,195,906
          -------------------------------------------------------------

SGA Target Advisory                            DB/Inst: CDB10/cdb10  Snap: 123

                  No data exists for this section of the report.
          -------------------------------------------------------------

Streams Pool Advisory                          DB/Inst: CDB10/cdb10  Snap: 123

                  No data exists for this section of the report.
          -------------------------------------------------------------

Java Pool Advisory                             DB/Inst: CDB10/cdb10  Snap: 123

                  No data exists for this section of the report.
          -------------------------------------------------------------

Buffer Wait Statistics                    DB/Inst: CDB10/cdb10  Snaps: 122-123
-> ordered by wait time desc, waits desc

Class                    Waits Total Wait Time (s)  Avg Time (ms)
------------------ ----------- ------------------- --------------
undo header                152                  49            325
data block                   4                   1            155
          -------------------------------------------------------------

Enqueue Activity                         DB/Inst: CDB10/cdb10  Snaps: 122-123
-> only enqueues with waits are shown
-> Enqueue stats gathered prior to 10g should not be compared with 10g data
-> ordered by Wait Time desc, Waits desc

Enqueue Type (Request Reason)
------------------------------------------------------------------------------
    Requests    Succ Gets Failed Gets       Waits  Wt Time (s) Av Wt Time(ms)
------------ ------------ ----------- ----------- ------------ --------------
RO-Multiple Object Reuse (fast object reuse)
         414          414           0          46           23         505.78
CF-Controlfile Transaction
       2,004        2,003           1          19            7         366.58
          -------------------------------------------------------------

Undo Segment Summary                     DB/Inst: CDB10/cdb10  Snaps: 122-123
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count,  OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen,   uR - unexpired Released,   uU - unexpired reUsed
-> eS - expired   Stolen,   eR - expired   Released,   eU - expired   reUsed

Undo   Num Undo       Number of  Max Qry   Max Tx Min/Max   STO/     uS/uR/uU/
 TS# Blocks (K)    Transactions  Len (s) Concurcy TR (mins) OOS      eS/eR/eU
---- ---------- --------------- -------- -------- --------- ----- --------------
   1       82.3          16,347      253        6 15/15.25  0/0   0/0/0/0/0/0
          -------------------------------------------------------------

Undo Segment Stats                        DB/Inst: CDB10/cdb10  Snaps: 122-123
-> Most recent 35 Undostat rows, ordered by Time desc

                Num Undo    Number of Max Qry  Max Tx Tun Ret STO/    uS/uR/uU/
End Time          Blocks Transactions Len (s)   Concy  (mins) OOS     eS/eR/eU
------------ ----------- ------------ ------- ------- ------- ----- ------------
31-Jul 17:54      17,588        4,451      13       6      15 0/0   0/0/0/0/0/0
31-Jul 17:44      11,302        4,215       0       4      15 0/0   0/0/0/0/0/0
31-Jul 17:34       8,066        1,832       0       4      15 0/0   0/0/0/0/0/0
31-Jul 17:24      17,412          861      90       5      15 0/0   0/0/0/0/0/0
31-Jul 17:14      15,100          892     137       3      15 0/0   0/0/0/0/0/0
31-Jul 17:04      12,857        4,096     253       6      15 0/0   0/0/0/0/0/0
          -------------------------------------------------------------

Latch Activity                           DB/Inst: CDB10/cdb10  Snaps: 122-123
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
   willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0

                                           Pct    Avg   Wait                 Pct
                                    Get    Get   Slps   Time       NoWait NoWait
Latch Name                     Requests   Miss  /Miss    (s)     Requests   Miss
------------------------ -------------- ------ ------ ------ ------------ ------
AWR Alerted Metric Eleme         13,936    0.0    N/A      0            0    N/A
Consistent RBA                    2,852    0.0    N/A      0            0    N/A
FOB s.o list latch                  333    0.0    N/A      0            0    N/A
In memory undo latch             30,230    0.0    0.7      8        4,148    0.0
JS mem alloc latch                    3    0.0    N/A      0            0    N/A
JS queue access latch                 3    0.0    N/A      0            0    N/A
JS queue state obj latch         25,990    0.0    N/A      0            0    N/A
JS slv state obj latch              115    0.0    N/A      0            0    N/A
KMG MMAN ready and start          1,201    0.0    N/A      0            0    N/A
KTF sga latch                        10    0.0    N/A      0        1,006    0.0
KWQMN job cache list lat            116    0.0    N/A      0            0    N/A
KWQP Prop Status                      1    0.0    N/A      0            0    N/A
MQL Tracking Latch                    0    N/A    N/A      0           72    0.0
Memory Management Latch               0    N/A    N/A      0        1,201    0.0
OS process                          573    0.0    N/A      0            0    N/A
OS process allocation             1,584    0.0    N/A      0            0    N/A
OS process: request allo            235    0.0    N/A      0            0    N/A
PL/SQL warning settings             935    0.0    N/A      0            0    N/A
SQL memory manager latch              2    0.0    N/A      0        1,177    0.0
SQL memory manager worka         92,470    0.0    N/A      0            0    N/A
Shared B-Tree                       137    0.0    N/A      0            0    N/A
active checkpoint queue          34,292    0.0    N/A      0            0    N/A
active service list               8,119    0.0    N/A      0        1,272    0.0
archive control                   1,017    0.0    N/A      0            0    N/A
begin backup scn array               92    0.0    N/A      0            0    N/A
cache buffer handles             22,997    0.0    N/A      0            0    N/A
cache buffers chains         66,867,303    0.0    0.0      0      665,222    0.0
cache buffers lru chain       1,026,321    0.1    0.0      1      135,882    0.1
cache table scan latch                0    N/A    N/A      0       16,587    0.0
channel handle pool latc            570    0.0    N/A      0            0    N/A
channel operations paren         25,362    0.1    0.0      0            0    N/A
checkpoint queue latch          369,985    0.0    0.0      0      140,728    0.0
client/application info           2,329    0.0    N/A      0            0    N/A
commit callback allocati             97    0.0    N/A      0            0    N/A
compile environment latc          7,185    0.0    N/A      0            0    N/A
dictionary lookup                    55    0.0    N/A      0            0    N/A
dml lock allocation              21,178    0.0    N/A      0            0    N/A
dummy allocation                    357    0.0    N/A      0            0    N/A
enqueue hash chains             157,981    0.0    0.0      0        5,754    0.0
enqueues                         97,190    0.0    0.0      0            0    N/A
event group latch                   118    0.0    N/A      0            0    N/A
file cache latch                    995    0.0    N/A      0            0    N/A
global KZLD latch for me             81    0.0    N/A      0            0    N/A
global tx hash mapping           10,377    0.0    N/A      0            0    N/A
hash table column usage             163    0.0    N/A      0       72,097    0.0
hash table modification             129    0.0    N/A      0            0    N/A
job workq parent latch                0    N/A    N/A      0          122    0.0
job_queue_processes para            120    0.0    N/A      0            0    N/A
kks stats                           504    0.0    N/A      0            0    N/A
ksuosstats global area            1,435    0.0    N/A      0            0    N/A
ktm global data                     194    0.0    N/A      0            0    N/A
kwqbsn:qsga                         137    0.0    N/A      0            0    N/A
lgwr LWN SCN                      2,855    0.0    0.0      0            0    N/A
library cache                 1,239,482    0.0    0.0      0          322    0.0
library cache load lock              90    0.0    N/A      0            7    0.0
library cache lock               55,083    0.0    N/A      0            0    N/A
library cache lock alloc          1,753    0.0    N/A      0            0    N/A
library cache pin             1,158,486    0.0    0.0      0            0    N/A
library cache pin alloca            584    0.0    N/A      0            0    N/A
list of block allocation          1,340    0.0    N/A      0            0    N/A
loader state object free            570    0.0    N/A      0            0    N/A
longop free list parent               1    0.0    N/A      0            1    0.0
message pool operations             568    0.0    N/A      0            0    N/A
messages                         99,560    0.0    0.0      0            0    N/A
mostly latch-free SCN             2,865    0.2    0.0      0            0    N/A
multiblock read objects          35,166    0.0    N/A      0            0    N/A
ncodef allocation latch              71    0.0    N/A      0            0    N/A
object queue header heap          7,276    0.0    N/A      0        7,237    0.0
object queue header oper      1,520,984    0.0    0.0      1            0    N/A
object stats modificatio            360    1.1    0.0      0            0    N/A
parallel query alloc buf            472    0.0    N/A      0            0    N/A
parameter list                      643    0.0    N/A      0            0    N/A
parameter table allocati            240    0.0    N/A      0            0    N/A
post/wait queue                   9,658    0.1    0.0      0        3,466    0.0
process allocation                  235    0.0    N/A      0          118    0.0
process group creation              235    0.0    N/A      0            0    N/A
qmn task queue latch                512    0.0    N/A      0            0    N/A
redo allocation                  25,223    0.1    0.0      0    4,972,609    0.0
redo copy                             0    N/A    N/A      0    4,972,708    0.0
redo writing                     46,698    0.0    0.0      0            0    N/A
resmgr group change latc            533    0.0    N/A      0            0    N/A
resmgr:actses active lis            950    0.0    N/A      0            0    N/A
resmgr:actses change gro            142    0.0    N/A      0            0    N/A
resmgr:free threads list            353    0.0    N/A      0            0    N/A
resmgr:schema config                597    0.0    N/A      0            0    N/A
row cache objects               231,601    0.0    0.0      0          448    0.0
rules engine aggregate s             17    0.0    N/A      0            0    N/A
rules engine rule set st            134    0.0    N/A      0            0    N/A
sequence cache                    4,464    0.0    N/A      0            0    N/A
session allocation               42,421    0.0    N/A      0            0    N/A
session idle bit              1,127,557    0.0    0.0      0            0    N/A
session state list latch            494    0.0    N/A      0            0    N/A
session switching                    71    0.0    N/A      0            0    N/A
session timer                     1,272    0.0    N/A      0            0    N/A
shared pool                      26,428    0.0    0.4      0            0    N/A
simulator hash latch          2,137,589    0.0    N/A      0            0    N/A
simulator lru latch           2,051,579    0.0    0.0      0       46,222    0.1
slave class                           2    0.0    N/A      0            0    N/A
slave class create                    8    0.0    N/A      0            0    N/A
sort extent pool                  4,406    0.1    0.0      0            0    N/A
state object free list                2    0.0    N/A      0            0    N/A
statistics aggregation              140    0.0    N/A      0            0    N/A
temp lob duration state               2    0.0    N/A      0            0    N/A
threshold alerts latch              305    0.0    N/A      0            0    N/A
transaction allocation          875,726    0.0    N/A      0            0    N/A
transaction branch alloc          2,031    0.0    N/A      0            0    N/A
undo global data                804,587    0.0    0.0      0            0    N/A
user lock                           444    0.0    N/A      0            0    N/A
          -------------------------------------------------------------

Latch Sleep Breakdown                    DB/Inst: CDB10/cdb10  Snaps: 122-123
-> ordered by misses desc

Latch Name
----------------------------------------
  Get Requests      Misses      Sleeps  Spin Gets   Sleep1   Sleep2   Sleep3
-------------- ----------- ----------- ---------- -------- -------- --------
cache buffers chains
    66,867,303       1,726           4      1,722        0        0        0
cache buffers lru chain
     1,026,321       1,124          10      1,114        0        0        0
simulator lru latch
     2,051,579         537           2        535        0        0        0
library cache
     1,239,482         149           4        145        0        0        0
object queue header operation
     1,520,984         123           2        121        0        0        0
library cache pin
     1,158,486          33           1         32        0        0        0
redo allocation
        25,223          33           1         32        0        0        0
In memory undo latch
        30,230           7           5          2        0        0        0
shared pool
        26,428           5           2          3        0        0        0
          -------------------------------------------------------------

Latch Miss Sources                       DB/Inst: CDB10/cdb10  Snaps: 122-123
-> only latches with sleeps are shown
-> ordered by name, sleeps desc

                                                     NoWait              Waiter
Latch Name               Where                       Misses     Sleeps   Sleeps
------------------------ -------------------------- ------- ---------- --------
In memory undo latch     ktiFlush: child                  0          5        4
cache buffers chains     kcbgtcr: kslbegin excl           0          4        0
cache buffers chains     kcbgtcr: fast path               0          2        1
cache buffers chains     kcbchg: kslbegin: call CR        0          1        1
cache buffers chains     kcbgtcr: kslbegin shared         0          1        0
cache buffers lru chain  kcbzgws_1                        0          6        9
cache buffers lru chain  kcbzar: KSLNBEGIN                0          2        0
cache buffers lru chain  kcbbic2                          0          1        1
cache buffers lru chain  kcbbwlru                         0          1        0
library cache            kglhdiv: child                   0          1        0
library cache lock       kgllkdl: child: no lock ha       0          2        0
library cache pin        kglpndl                          0          1        1
object queue header oper kcbo_switch_cq                   0          1        1
object queue header oper kcbw_link_q                      0          1        0
redo allocation          kcrfw_redo_gen: redo alloc       0          1        0
shared pool              kghalp                           0          1        0
shared pool              kghfrunp: clatch: nowait         0          1        0
shared pool              kghupr1                          0          1        0
simulator lru latch      kcbs_simulate: simulate se       0          2        1
          -------------------------------------------------------------

Parent Latch Statistics                  DB/Inst: CDB10/cdb10  Snaps: 122-123

                  No data exists for this section of the report.
          -------------------------------------------------------------

Child Latch Statistics                    DB/Inst: CDB10/cdb10  Snaps: 122-123

                  No data exists for this section of the report.
          -------------------------------------------------------------

Segments by Logical Reads                DB/Inst: CDB10/cdb10  Snaps: 122-123
-> Total Logical Reads:      30,077,723
-> Captured Segments account for   88.6% of Total

           Tablespace                      Subobject  Obj.       Logical
Owner         Name    Object Name            Name     Type         Reads  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
STARGUS    TEMP       PK_TMP_TOP_SLOW_CM              INDEX    5,264,912   17.50
STARGUS    TEMP       TMP_TOP_SLOW_CM                 TABLE    5,244,192   17.44
STARGUS    TS_STARGUS PK_CM_RAWDATA                   INDEX    2,271,232    7.55
STARGUS    TS_STARGUS CM_RAWDATA                      TABLE    1,899,472    6.32
STARGUS    TS_STARGUS CM_SID_RAWDATA                  TABLE    1,440,752    4.79
          -------------------------------------------------------------

Segments by Physical Reads                DB/Inst: CDB10/cdb10  Snaps: 122-123
-> Total Physical Reads:         401,992
-> Captured Segments account for    67.7% of Total

           Tablespace                      Subobject  Obj.      Physical
Owner         Name    Object Name            Name     Type         Reads  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
STARGUS    TS_STARGUS PK_CM_SID_RAWDATA               INDEX       42,629   10.60
STARGUS    TEMP       TMP_TOP_SLOW_CM                 TABLE       38,818    9.66
STARGUS    TS_STARGUS CM_SID_RAWDATA                  TABLE       38,588    9.60
STARGUS    TEMP       PK_TMP_TOP_SLOW_CM              INDEX       31,020    7.72
STARGUS    TS_STARGUS TOPOLOGY_LINK                   TABLE       30,360    7.55
          -------------------------------------------------------------

Segments by Row Lock Waits               DB/Inst: CDB10/cdb10  Snaps: 122-123
-> % of Capture shows % of row lock waits for each top segment compared
-> with total row lock waits for all segments captured by the Snapshot

                                                                     Row
           Tablespace                      Subobject  Obj.          Lock    % of
Owner         Name    Object Name            Name     Type         Waits Capture
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS        SYSTEM     SMON_SCN_TIME                   TABLE            4   30.77
SYSMAN     SYSAUX     MGMT_METRICS_1HOUR_P            INDEX            2   15.38
PERFSTAT   PERFSTAT   STATS$EVENT_HISTOGRA            INDEX            2   15.38
PERFSTAT   PERFSTAT   STATS$LATCH_PK                  INDEX            2   15.38
SYS        SYSAUX     WRH$_SERVICE_STAT_PK 559071_106 INDEX            2   15.38
          -------------------------------------------------------------

Segments by ITL Waits                     DB/Inst: CDB10/cdb10  Snaps: 122-123

                  No data exists for this section of the report.
          -------------------------------------------------------------

Segments by Buffer Busy Waits             DB/Inst: CDB10/cdb10  Snaps: 122-123
-> % of Capture shows % of Buffer Busy Waits for each top segment compared
-> with total Buffer Busy Waits for all segments captured by the Snapshot

                                                                  Buffer
           Tablespace                      Subobject  Obj.          Busy    % of
Owner         Name    Object Name            Name     Type         Waits Capture
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS        SYSTEM     JOB$                            TABLE            2   66.67
SYSMAN     SYSAUX     MGMT_CURRENT_METRICS            INDEX            1   33.33
          -------------------------------------------------------------

Dictionary Cache Stats                   DB/Inst: CDB10/cdb10  Snaps: 122-123
-> "Pct Misses"  should be very low (< 2% in most cases)
-> "Final Usage" is the number of cache entries being used

                                   Get    Pct    Scan   Pct      Mod      Final
Cache                         Requests   Miss    Reqs  Miss     Reqs      Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_awr_control                      67    0.0       0   N/A        2          1
dc_database_links                   72    0.0       0   N/A        0          1
dc_files                            70    0.0       0   N/A        0          7
dc_global_oids                   4,852    0.0       0   N/A        0         16
dc_histogram_data                3,190    0.9       0   N/A        0      1,064
dc_histogram_defs                6,187    0.7       0   N/A        0      1,592
dc_object_ids                    7,737    0.9       0   N/A        1        480
dc_objects                       1,345    1.9       0   N/A       56        437
dc_profiles                        163    0.0       0   N/A        0          2
dc_rollback_segments               677    0.0       0   N/A        0         22
dc_segments                      1,839    0.5       0   N/A      411        264
dc_sequences                        75    0.0       0   N/A       75          6
dc_tablespace_quotas               890    0.1       0   N/A        0          5
dc_tablespaces                  26,615    0.0       0   N/A        0          8
dc_usernames                       257    0.4       0   N/A        0          9
dc_users                        25,512    0.0       0   N/A        0         44
outstanding_alerts                 126    7.1       0   N/A       17         17
          -------------------------------------------------------------

Library Cache Activity                    DB/Inst: CDB10/cdb10  Snaps: 122-123
-> "Pct Misses"  should be very low

                         Get    Pct            Pin    Pct             Invali-
Namespace           Requests   Miss       Requests   Miss    Reloads  dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY                     550    0.0          6,440    0.0          0        0
CLUSTER                    1    0.0              4    0.0          0        0
INDEX                     41    0.0             86    0.0          0        0
SQL AREA                  64   71.9        548,251    0.1        194      177
TABLE/PROCEDURE          389    3.9         13,583    0.3         16        0
TRIGGER                   34   11.8            520    0.8          0        0
          -------------------------------------------------------------

Process Memory Summary                   DB/Inst: CDB10/cdb10  Snaps: 122-123
-> B: Begin snap   E: End snap
-> All rows below contain absolute values (i.e. not diffed over the interval)
-> Max Alloc is Maximum PGA Allocation size at snapshot time
-> Hist Max Alloc is the Historical Max Allocation for still-connected processes
-> ordered by Begin/End snapshot, Alloc (MB) desc

                                                            Hist
                                    Avg  Std Dev     Max     Max
               Alloc      Used    Alloc    Alloc   Alloc   Alloc    Num    Num
  Category      (MB)      (MB)     (MB)     (MB)    (MB)    (MB)   Proc  Alloc
- -------- --------- --------- -------- -------- ------- ------- ------ ------
B Other         71.7       N/A      1.9      3.6      22      22     38     38
  SQL           39.2      38.0      1.4      6.9      37      46     29     25
  Freeable      29.8        .0      1.1      1.5       9     N/A     26     26
  PL/SQL          .9        .5       .0       .0       0       0     36     36
E Other         74.2       N/A      1.9      3.6      22      22     39     39
  SQL           40.2      38.9      1.3      6.7      37      46     30     26
  Freeable      29.5        .0      1.1      1.5       9     N/A     26     26
  PL/SQL         1.0        .6       .0       .0       0       0     37     37
          -------------------------------------------------------------

SGA Memory Summary                        DB/Inst: CDB10/cdb10  Snaps: 122-123

                                                      End Size (Bytes)
SGA regions                     Begin Size (Bytes)      (if different)
------------------------------ ------------------- -------------------
Database Buffers                        29,360,128
Fixed Size                               1,979,488
Redo Buffers                             6,406,144
Variable Size                          423,627,680
                               -------------------
sum                                    461,373,440
          -------------------------------------------------------------

SGA breakdown difference                  DB/Inst: CDB10/cdb10  Snaps: 122-123
-> ordered by Pool, Name
-> N/A value for Begin MB or End MB indicates the size of that Pool/Name was
   insignificant, or zero in that snapshot

Pool   Name                                 Begin MB         End MB  % Diff
------ ------------------------------ -------------- -------------- -------
java   free memory                              24.0           24.0    0.00
shared ASH buffers                               4.0            4.0    0.00
shared CCursor                                   6.5            6.5   -0.11
shared FileOpenBlock                             1.4            1.4    0.00
shared Heap0: KGL                                3.8            3.7   -0.62
shared KCB Table Scan Buffer                     3.8            3.8    0.00
shared KGLS heap                                 1.6            1.5   -6.07
shared KQR M PO                                  1.5            1.3   -9.15
shared KSFD SGA I/O b                            3.8            3.8    0.00
shared PCursor                                   5.2            5.3    0.20
shared PL/SQL MPCODE                             3.4            3.4    0.00
shared event statistics per sess                 1.5            1.5    0.00
shared free memory                              10.4           10.4    0.39
shared kglsim hash table bkts                    4.0            4.0    0.00
shared kglsim heap                               1.3            1.4    1.72
shared kglsim object batch                       2.1            2.1    1.04
shared kks stbkt                                 1.5            1.5    0.00
shared library cache                             9.5            9.6    0.57
shared private strands                           2.3            2.3    0.00
shared row cache                                 7.1            7.1    0.00
shared sql area                                 27.2           27.4    0.89
       buffer_cache                             28.0           28.0    0.00
       fixed_sga                                 1.9            1.9    0.00
       log_buffer                                6.1            6.1    0.00
          -------------------------------------------------------------

Streams CPU/IO Usage                     DB/Inst: CDB10/cdb10  Snaps: 122-123

                  No data exists for this section of the report.
          -------------------------------------------------------------

Streams Capture                           DB/Inst: CDB10/cdb10  Snaps: 122-123

                  No data exists for this section of the report.
          -------------------------------------------------------------

Streams Apply                             DB/Inst: CDB10/cdb10  Snaps: 122-123

                  No data exists for this section of the report.
          -------------------------------------------------------------

Buffered Queues                           DB/Inst: CDB10/cdb10  Snaps: 122-123

                  No data exists for this section of the report.
          -------------------------------------------------------------

Buffered Subscribers                      DB/Inst: CDB10/cdb10  Snaps: 122-123

                  No data exists for this section of the report.
          -------------------------------------------------------------

Rule Set                                  DB/Inst: CDB10/cdb10  Snaps: 122-123

                  No data exists for this section of the report.
          -------------------------------------------------------------

Resource Limit Stats                          DB/Inst: CDB10/cdb10  Snap: 123

                  No data exists for this section of the report.
          -------------------------------------------------------------

init.ora Parameters                      DB/Inst: CDB10/cdb10  Snaps: 122-123

                                                                End value
Parameter Name                Begin value                       (if different)
----------------------------- --------------------------------- --------------
audit_file_dest               /export/home/oracle10/admin/cdb10
background_dump_dest          /export/home/oracle10/admin/cdb10
compatible                    10.2.0.1.0
control_files                 /export/home/oracle10/oradata/cdb
core_dump_dest                /export/home/oracle10/admin/cdb10
db_block_size                 8192
db_cache_size                 29360128
db_domain
db_file_multiblock_read_count 8
db_name                       cdb10
db_recovery_file_dest         /export/home/oracle10/flash_recov
db_recovery_file_dest_size    2147483648
dispatchers                   (PROTOCOL=TCP) (SERVICE=cdb10XDB)
job_queue_processes           10
open_cursors                  300
pga_aggregate_target          209715200
processes                     150
remote_login_passwordfile     EXCLUSIVE
sga_max_size                  461373440
sga_target                    0
shared_pool_size              134217728
undo_management               AUTO
undo_tablespace               UNDOTBS1
user_dump_dest                /export/home/oracle10/admin/cdb10
          -------------------------------------------------------------

End of Report
}}}
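The Instance Efficiency Percentages in these AWR reports can be cross-checked against the Load Profile per-second rates using the standard Statspack/AWR definitions: Soft Parse % is the fraction of parse calls that did not require a hard parse, and Execute to Parse % is the fraction of executions that did not need a parse call. A minimal sketch of that arithmetic, with the per-second figures transcribed from the IVRS/ivrs (Snaps 338-339) Load Profile below (the function names are illustrative, not an Oracle API):

```python
# Cross-check two AWR "Instance Efficiency Percentages" from the
# Load Profile per-second rates, using the usual Statspack/AWR formulas:
#   Soft Parse %       = 100 * (1 - hard parses / parses)
#   Execute to Parse % = 100 * (1 - parses / executes)

def soft_parse_pct(parses: float, hard_parses: float) -> float:
    """Percentage of parse calls satisfied without a hard parse."""
    return 100.0 * (1.0 - hard_parses / parses)

def execute_to_parse_pct(executes: float, parses: float) -> float:
    """Percentage of executions that did not require a parse call."""
    return 100.0 * (1.0 - parses / executes)

# Per-second figures from the IVRS/ivrs Load Profile section
PARSES, HARD_PARSES, EXECUTES = 9.87, 0.69, 95.91

print(f"Soft Parse %:       {soft_parse_pct(PARSES, HARD_PARSES):.2f}")
print(f"Execute to Parse %: {execute_to_parse_pct(EXECUTES, PARSES):.2f}")
```

Both values agree with the report's Instance Efficiency section (93.01 and 89.71), which is a quick sanity check that a report was transcribed intact. Note that Buffer Hit % is computed from different counters (cache reads, excluding direct path reads), which is why it can legitimately exceed 100% as it does in the IVRS report.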
{{{

WORKLOAD REPOSITORY report for

DB Name         DB Id    Instance     Inst Num Release     RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
IVRS          2607950532 ivrs                1 10.2.0.3.0  NO  dbrocaix01.b

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:       338 17-Jan-10 06:50:58        31       2.9
  End Snap:       339 17-Jan-10 07:01:01        30       2.2
   Elapsed:               10.05 (mins)
   DB Time:               22.08 (mins)

Cache Sizes
~~~~~~~~~~~                       Begin        End
                             ---------- ----------
               Buffer Cache:       200M       196M  Std Block Size:         8K
           Shared Pool Size:        92M        96M      Log Buffer:     2,860K

Load Profile
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:             25,946.47              6,162.81
              Logical reads:             10,033.03              2,383.05
              Block changes:                147.02                 34.92
             Physical reads:              9,390.59              2,230.46
            Physical writes:                 41.20                  9.79
                 User calls:                 19.14                  4.55
                     Parses:                  9.87                  2.34
                Hard parses:                  0.69                  0.16
                      Sorts:                  3.05                  0.72
                     Logons:                  0.52                  0.12
                   Executes:                 95.91                 22.78
               Transactions:                  4.21

  % Blocks changed per Read:    1.47    Recursive Call %:    90.93
 Rollback per transaction %:    0.51       Rows per Sort: ########

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:   99.99
            Buffer  Hit   %:  102.59    In-memory Sort %:  100.00
            Library Hit   %:   97.85        Soft Parse %:   93.01
         Execute to Parse %:   89.71         Latch Hit %:  100.00
Parse CPU to Parse Elapsd %:   19.56     % Non-Parse CPU:   98.43

 Shared Pool Statistics        Begin    End
                              ------  ------
             Memory Usage %:   75.99   78.27
    % SQL with executions>1:   68.86   64.10
  % Memory for SQL w/exec>1:   65.95   58.03

Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
CPU time                                            436          32.9
db file sequential read              18,506         279     15   21.1   User I/O
PX Deq Credit: send blkd             79,918         177      2   13.4      Other
direct path read                    374,300         149      0   11.2   User I/O
log file parallel write               2,299          83     36    6.2 System I/O
          -------------------------------------------------------------
Time Model Statistics                      DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Total time in database user-calls (DB Time): 1324.6s
-> Statistics including the word "background" measure background process
   time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name

Statistic Name                                       Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time                              1,272.2         96.0
DB CPU                                                  435.7         32.9
parse time elapsed                                       52.3          3.9
hard parse elapsed time                                  42.5          3.2
Java execution elapsed time                               4.0           .3
PL/SQL execution elapsed time                             3.3           .2
PL/SQL compilation elapsed time                           0.3           .0
connection management call elapsed time                   0.1           .0
sequence load elapsed time                                0.1           .0
hard parse (sharing criteria) elapsed time                0.1           .0
repeated bind elapsed time                                0.1           .0
hard parse (bind mismatch) elapsed time                   0.0           .0
DB time                                               1,324.6          N/A
background elapsed time                                 314.3          N/A
background cpu time                                      11.6          N/A
          -------------------------------------------------------------

Wait Class                                  DB/Inst: IVRS/ivrs  Snaps: 338-339
-> s  - second
-> cs - centisecond -     100th of a second
-> ms - millisecond -    1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc

                                                                  Avg
                                       %Time       Total Wait    wait     Waits
Wait Class                      Waits  -outs         Time (s)    (ms)      /txn
-------------------- ---------------- ------ ---------------- ------- ---------
User I/O                      396,180     .0              488       1     156.0
Other                          88,652    5.2              259       3      34.9
System I/O                      4,903     .0              243      50       1.9
Commit                          1,418    1.3               67      48       0.6
Concurrency                        29   20.7                2      60       0.0
Configuration                       1     .0                0     247       0.0
Network                         8,410     .0                0       0       3.3
Application                        36     .0                0       0       0.0
          -------------------------------------------------------------

Wait Events                                DB/Inst: IVRS/ivrs  Snaps: 338-339
-> s  - second
-> cs - centisecond -     100th of a second
-> ms - millisecond -    1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)

                                                                   Avg
                                             %Time  Total Wait    wait     Waits
Event                                 Waits  -outs    Time (s)    (ms)      /txn
---------------------------- -------------- ------ ----------- ------- ---------
db file sequential read              18,506     .0         279      15       7.3
PX Deq Credit: send blkd             79,918     .0         177       2      31.5
direct path read                    374,300     .0         149       0     147.4
log file parallel write               2,299     .0          83      36       0.9
db file parallel write                  658     .0          79     120       0.3
PX qref latch                         6,958   64.7          79      11       2.7
log file sync                         1,418    1.3          67      48       0.6
buffer read retry                        54   81.5          43     797       0.0
control file parallel write             259     .0          42     163       0.1
log file sequential read                 54     .0          27     507       0.0
control file sequential read          1,577     .0          11       7       0.6
db file scattered read                  236     .0           9      36       0.1
direct path write temp                1,533     .0           7       4       0.6
direct path read temp                 1,533     .0           2       1       0.6
os thread startup                         5   20.0           2     321       0.0
PX Deq: Signal ACK                      182   26.4           1       8       0.1
change tracking file synchro             11     .0           1     105       0.0
Log archive I/O                          54     .0           1      17       0.0
log file switch completion                1     .0           0     247       0.0
PX Deq: Table Q qref                  1,422     .0           0       0       0.6
enq: PS - contention                     51     .0           0       3       0.0
SQL*Net more data from clien             12     .0           0      12       0.0
PX Deq: Table Q Get Keys                 40     .0           0       3       0.0
latch: library cache                      9     .0           0       9       0.0
SQL*Net message to client             8,291     .0           0       0       3.3
latch free                                7     .0           0       9       0.0
cursor: pin S wait on X                  10   50.0           0       5       0.0
latch: cache buffers lru cha             25     .0           0       2       0.0
latch: session allocation                 7     .0           0       5       0.0
log file single write                     2     .0           0       9       0.0
direct path write                        15     .0           0       1       0.0
latch: shared pool                        2     .0           0       6       0.0
latch: redo allocation                    1     .0           0      10       0.0
read by other session                     3     .0           0       3       0.0
SQL*Net break/reset to clien             36     .0           0       0       0.0
SQL*Net more data to client             107     .0           0       0       0.0
latch: object queue header o              3     .0           0       1       0.0
change tracking file synchro             12     .0           0       0       0.0
LGWR wait for redo copy                  14     .0           0       0       0.0
latch: cache buffers chains               3     .0           0       0       0.0
enq: BF - allocation content              1     .0           0       0       0.0
PX Idle Wait                          1,398   79.0       2,234    1598       0.6
class slave wait                         28   21.4       1,114   39769       0.0
PX Deq: Table Q Normal              348,049     .0         682       2     137.1
jobq slave wait                         232   94.8         670    2890       0.1
ASM background timer                    148     .0         583    3937       0.1
Streams AQ: qmn coordinator              43   51.2         577   13430       0.0
Streams AQ: qmn slave idle w             21     .0         577   27498       0.0
PX Deq: Execution Msg                 7,434    2.5         573      77       2.9
SQL*Net message from client           8,291     .0         568      68       3.3
virtual circuit status                   19  100.0         557   29296       0.0
PX Deq: Execute Reply                 5,871    1.1         508      86       2.3
PX Deq Credit: need buffer           62,922     .0          48       1      24.8
PX Deq: Table Q Sample                1,307     .0           5       4       0.5
KSV master wait                          22     .0           0      22       0.0
PX Deq: Parse Reply                     201     .0           0       1       0.1
PX Deq: Msg Fragment                    234     .0           0       1       0.1
PX Deq: Join ACK                        170     .0           0       1       0.1
SGA: MMAN sleep for componen             16   43.8           0      12       0.0
          -------------------------------------------------------------

Background Wait Events                     DB/Inst: IVRS/ivrs  Snaps: 338-339
-> ordered by wait time desc, waits desc (idle events last)

                                                                   Avg
                                             %Time  Total Wait    wait     Waits
Event                                 Waits  -outs    Time (s)    (ms)      /txn
---------------------------- -------------- ------ ----------- ------- ---------
log file parallel write               2,299     .0          82      36       0.9
db file parallel write                  654     .0          78     119       0.3
control file parallel write             259     .0          42     163       0.1
log file sequential read                 54     .0          27     507       0.0
control file sequential read            399     .0          11      26       0.2
os thread startup                         5   20.0           2     321       0.0
Log archive I/O                          54     .0           1      17       0.0
events in waitclass Other                38     .0           1      16       0.0
log file single write                     2     .0           0       9       0.0
direct path write                        13     .0           0       1       0.0
latch: shared pool                        1     .0           0      10       0.0
direct path read                         13     .0           0       0       0.0
db file sequential read                 543     .0          -1      -2       0.2
rdbms ipc message                     4,458   50.1       7,496    1681       1.8
smon timer                               39     .0         611   15662       0.0
ASM background timer                    148     .0         583    3937       0.1
pmon timer                              241  100.0         581    2412       0.1
Streams AQ: qmn coordinator              43   51.2         577   13430       0.0
Streams AQ: qmn slave idle w             21     .0         577   27498       0.0
KSV master wait                          22     .0           0      22       0.0
SGA: MMAN sleep for componen             16   43.8           0      12       0.0
          -------------------------------------------------------------

Operating System Statistics                 DB/Inst: IVRS/ivrs  Snaps: 338-339

Statistic                                       Total
-------------------------------- --------------------
BUSY_TIME                                      46,982
IDLE_TIME                                       9,587
IOWAIT_TIME                                     5,623
NICE_TIME                                         172
SYS_TIME                                       37,041
USER_TIME                                       9,589
LOAD                                                4
RSRC_MGR_CPU_WAIT_TIME                              0
PHYSICAL_MEMORY_BYTES                          50,048
NUM_CPUS                                            1
          -------------------------------------------------------------

Service Statistics                         DB/Inst: IVRS/ivrs  Snaps: 338-339
-> ordered by DB Time

                                                             Physical    Logical
Service Name                      DB Time (s)   DB CPU (s)      Reads      Reads
-------------------------------- ------------ ------------ ---------- ----------
ivrs.bayantel.com                     1,329.2        427.1  5,587,106  5,878,962
SYS$USERS                                91.6         13.5      1,357     94,224
SYS$BACKGROUND                            0.0          0.0      1,367     19,062
ivrsXDB                                   0.0          0.0          0          0
          -------------------------------------------------------------

Service Wait Class Stats                    DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Wait Class info for services in the Service Statistics section.
-> Total Waits and Time Waited displayed for the following wait
   classes:  User I/O, Concurrency, Administrative, Network
-> Time Waited (Wt Time) in centisecond (100th of a second)

Service Name
----------------------------------------------------------------
 User I/O  User I/O  Concurcy  Concurcy     Admin     Admin   Network   Network
Total Wts   Wt Time Total Wts   Wt Time Total Wts   Wt Time Total Wts   Wt Time
--------- --------- --------- --------- --------- --------- --------- ---------
ivrs.bayantel.com
   394179     34576        16         6         0         0      8358         6
SYS$USERS
     1120      3538         2         1         0         0        42        14
SYS$BACKGROUND
     1310     10821         6       162         0         0         0         0
          -------------------------------------------------------------

SQL ordered by Elapsed Time                DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
   into the Total Database Time multiplied by 100

  Elapsed      CPU                  Elap per  % Total
  Time (s)   Time (s)  Executions   Exec (s)  DB Time    SQL Id
---------- ---------- ------------ ---------- ------- -------------
        90         28            1       89.6     6.8 bsdgaykhvy4xr
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not

        89         28            2       44.5     6.7 bmfc2a2ym0kwr
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation,
 extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) -
ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, or
ders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partk

        59          6            1       58.5     4.4 081am6psuh26j
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice) / 7.0 as avg_yearly from lineitem, part where p_part
key = l_partkey and p_brand = 'Brand#35' and p_container = 'LG BOX' and l_quanti
ty < ( select 0.2 * avg(l_quantity) from lineitem where l_partkey = p_partkey)

        57         22            1       56.9     4.3 6mrh6s1s5g851
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not

        50          6            1       50.2     3.8 acgpfd4ysyfxb
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_su
ppkey from partsupp where ps_partkey in ( select p_partkey from part where p_nam
e like 'puff%') and ps_availqty > ( select 0.5 * sum(l_quantity) from lineitem w
here l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= date '1

        49         21            1       49.2     3.7 2n4xg8c3dmd62
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not

        46         15            1       45.7     3.4 29rqwcj4cs31u
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
 from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 313) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda

        40          1            1       40.1     3.0 d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;

        34         14            1       34.1     2.6 cvhgz2zwbk4qf
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis

        31         14            1       30.8     2.3 7409gxv4spfj2
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
SQL ordered by Elapsed Time                DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
   into the Total Database Time multiplied by 100

  Elapsed      CPU                  Elap per  % Total
  Time (s)   Time (s)  Executions   Exec (s)  DB Time    SQL Id
---------- ---------- ------------ ---------- ------- -------------
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis

        29         14            1       29.3     2.2 1f0r8shtps3bu
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation,
 extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) -
ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, or
ders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partk

        26         14            1       25.9     2.0 05jp96tzvutb6
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis

        26          8            1       25.5     1.9 6aqpwwba8xvuu
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'GERMANY' then volume else 0 end) / sum(vo
lume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_ext
endedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier
, lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_p

        24         11            1       23.7     1.8 8sfhj7ua3qfjf
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
 from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 312) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda

        23         11            2       11.6     1.8 814qvp0rkqug4
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
 from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 314) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda

        23          7            1       23.2     1.8 94wqqbu0ajcvn
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'JAPAN' then volume else 0 end) / sum(volu
me) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_exten
dedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier,
lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_par

        23          7            1       23.2     1.8 5xd0ak4417rk0
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'MOZAMBIQUE' then volume else 0 end) / sum
(volume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_
extendedprice * (1 - l_discount) as volume, n2.n_name as nation from part, suppl
ier, lineitem, orders, customer, nation n1, nation n2, region where p_partkey =

        22          2          539        0.0     1.7 aw9ttz9acxbc3
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
BEGIN payment(:p_w_id,:p_d_id,:p_c_w_id,:p_c_d_id,:p_c_id,:byname,:p_h_amount,:p
_c_last,:p_w_street_1,:p_w_street_2,:p_w_city,:p_w_state,:p_w_zip,:p_d_street_1,
:p_d_street_2,:p_d_city,:p_d_state,:p_d_zip,:p_c_first,:p_c_middle,:p_c_street_1
,:p_c_street_2,:p_c_city,:p_c_state,:p_c_zip,:p_c_phone,:p_c_since,:p_c_credit,:

        21          4            2       10.6     1.6 2x4gjqru5u1xx
Module: SQL*Plus
SELECT s0.snap_id id, -- TO_CHAR(s0.END_INTERVAL_TIME,'YY/MM/DD HH24:MI') tm,
 round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
 + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END
_INTERVAL_TIME) * 60 + EXTRACT(MINUTE FROM s1.

        21          4          546        0.0     1.6 16dhat4ta7xs9
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
begin neword(:no_w_id,:no_max_w_id,:no_d_id,:no_c_id,:no_o_ol_cnt,:no_c_discount
,:no_c_last,:no_c_credit,:no_d_tax,:no_w_tax,:no_d_next_o_id,TO_DATE(:timestamp,
'YYYYMMDDHH24MISS')); END;

        21          0            1       20.9     1.6 14wnf35dahb7v
SELECT A.ID,A.TYPE FROM SYS.WRI$_ADV_DEFINITIONS A WHERE A.NAME = :B1

        17          0           42        0.4     1.3 d4ujh5yqt1fph
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
BEGIN delivery(:d_w_id,:d_o_carrier_id,TO_DATE(:timestamp,'YYYYMMDDHH24MISS'));
END;

        16          2            1       16.5     1.2 1wzqub25cwnjm
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN wksys.wk_job.invoke(21,21); :mydate := next_date; IF broken THEN
:b := 1; ELSE :b := 0; END IF; END;

        16          0          420        0.0     1.2 5ps73nuy5f2vj
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
UPDATE ORDER_LINE SET OL_DELIVERY_D = :B4 WHERE OL_O_ID = :B3 AND OL_D_ID = :B2
AND OL_W_ID = :B1

        15          1          317        0.0     1.1 4wg725nwpxb1z
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
SELECT C_FIRST, C_MIDDLE, C_ID, C_STREET_1, C_STREET_2, C_CITY, C_STATE, C_ZIP,
C_PHONE, C_CREDIT, C_CREDIT_LIM, C_DISCOUNT, C_BALANCE, C_SINCE FROM CUSTOMER WH
ERE C_W_ID = :B3 AND C_D_ID = :B2 AND C_LAST = :B1 ORDER BY C_FIRST

        15          6            1       14.5     1.1 fcfjqugcc1zy0
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_count, count(*) as custdist from ( select c_custkey, count(o_orderkey)
as c_count from customer left outer join orders on c_custkey = o_custkey and o_c
omment not like '%express%requests%' group by c_custkey) c_orders group by c_cou
nt order by custdist desc, c_count desc

        14          2            9        1.6     1.1 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;

        14          7            1       13.9     1.0 15dxu5nmuj14a
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_orderpriority, count(*) as order_count from orders where o_orderdate >=
 date '1994-08-01' and o_orderdate < date '1994-08-01' + interval '3' month and
exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate <
 l_receiptdate) group by o_orderpriority order by o_orderpriority

        14          2        5,442        0.0     1.0 8yvup05pk06ca
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
SELECT S_QUANTITY, S_DATA, S_DIST_01, S_DIST_02, S_DIST_03, S_DIST_04, S_DIST_05
, S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, S_DIST_10 FROM STOCK WHERE S_I_ID
= :B2 AND S_W_ID = :B1

          -------------------------------------------------------------
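The "% Total DB Time" column above follows the formula in the section header notes: the statement's elapsed time divided by total database time, times 100. The snapshot's total DB time is not printed in this excerpt, so the sketch below back-derives it from one row (an estimate, not a reported figure) and checks a second row against it:

```python
# % Total DB Time = elapsed time of the statement / total DB time * 100,
# per the section header notes in the report above.

def pct_total_db_time(elapsed_s: float, total_db_time_s: float) -> float:
    """Percentage of total database time consumed by one statement."""
    return elapsed_s / total_db_time_s * 100.0

# Back-derive the (unprinted) total DB time from the d92h3rjp0y217 row:
# 40.1 s elapsed reported as 3.0% of total. Estimate only.
total_db_time = 40.1 / 0.030          # ~1336.7 s

# Cross-check row cvhgz2zwbk4qf: 34.1 s elapsed, reported as 2.6%.
print(round(pct_total_db_time(34.1, total_db_time), 1))  # → 2.6
```

The two rows agree to the report's one-decimal precision, which is as close as the rounded figures allow.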

SQL ordered by CPU Time                    DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
   into the Total Database Time multiplied by 100

    CPU      Elapsed                  CPU per  % Total
  Time (s)   Time (s)  Executions     Exec (s) DB Time    SQL Id
---------- ---------- ------------ ----------- ------- -------------
        28         89            2       14.17     6.7 bmfc2a2ym0kwr
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation,
 extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) -
ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, or
ders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partk

        28         90            1       28.03     6.8 bsdgaykhvy4xr
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not

        22         57            1       21.81     4.3 6mrh6s1s5g851
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not

        21         49            1       20.85     3.7 2n4xg8c3dmd62
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not

        15         46            1       14.85     3.4 29rqwcj4cs31u
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
 from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 313) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda

        14         34            1       14.34     2.6 cvhgz2zwbk4qf
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis

        14         31            1       14.08     2.3 7409gxv4spfj2
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis

        14         26            1       13.69     2.0 05jp96tzvutb6
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis

        14         29            1       13.58     2.2 1f0r8shtps3bu
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation,
 extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) -
ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, or
ders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partk

        11         24            1       11.47     1.8 8sfhj7ua3qfjf
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
 from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 312) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda

        11         23            2        5.66     1.8 814qvp0rkqug4
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
 from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 314) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda

         8         26            1        8.14     1.9 6aqpwwba8xvuu
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'GERMANY' then volume else 0 end) / sum(vo
lume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_ext
endedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier
, lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_p

         7         23            1        6.89     1.8 5xd0ak4417rk0
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'MOZAMBIQUE' then volume else 0 end) / sum
(volume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_
extendedprice * (1 - l_discount) as volume, n2.n_name as nation from part, suppl
ier, lineitem, orders, customer, nation n1, nation n2, region where p_partkey =

         7         23            1        6.76     1.8 94wqqbu0ajcvn
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'JAPAN' then volume else 0 end) / sum(volu
me) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_exten
dedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier,
lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_par

         7         14            1        6.63     1.0 15dxu5nmuj14a
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_orderpriority, count(*) as order_count from orders where o_orderdate >=
 date '1994-08-01' and o_orderdate < date '1994-08-01' + interval '3' month and
exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate <
 l_receiptdate) group by o_orderpriority order by o_orderpriority

         6         15            1        6.36     1.1 fcfjqugcc1zy0
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_count, count(*) as custdist from ( select c_custkey, count(o_orderkey)
as c_count from customer left outer join orders on c_custkey = o_custkey and o_c
omment not like '%express%requests%' group by c_custkey) c_orders group by c_cou
nt order by custdist desc, c_count desc

         6         50            1        6.02     3.8 acgpfd4ysyfxb
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_su
ppkey from partsupp where ps_partkey in ( select p_partkey from part where p_nam
e like 'puff%') and ps_availqty > ( select 0.5 * sum(l_quantity) from lineitem w
here l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= date '1

         6         59            1        5.87     4.4 081am6psuh26j
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice) / 7.0 as avg_yearly from lineitem, part where p_part
key = l_partkey and p_brand = 'Brand#35' and p_container = 'LG BOX' and l_quanti
ty < ( select 0.2 * avg(l_quantity) from lineitem where l_partkey = p_partkey)

         4         21            2        2.15     1.6 2x4gjqru5u1xx
Module: SQL*Plus
SELECT s0.snap_id id, -- TO_CHAR(s0.END_INTERVAL_TIME,'YY/MM/DD HH24:MI') tm,
 round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
 + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END
_INTERVAL_TIME) * 60 + EXTRACT(MINUTE FROM s1.

         4         21          546        0.01     1.6 16dhat4ta7xs9
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
begin neword(:no_w_id,:no_max_w_id,:no_d_id,:no_c_id,:no_o_ol_cnt,:no_c_discount
,:no_c_last,:no_c_credit,:no_d_tax,:no_w_tax,:no_d_next_o_id,TO_DATE(:timestamp,
'YYYYMMDDHH24MISS')); END;

         2         14            9        0.27     1.1 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;

         2         16            1        2.12     1.2 1wzqub25cwnjm
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN wksys.wk_job.invoke(21,21); :mydate := next_date; IF broken THEN
:b := 1; ELSE :b := 0; END IF; END;

         2         14        5,442        0.00     1.0 8yvup05pk06ca
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
SELECT S_QUANTITY, S_DATA, S_DIST_01, S_DIST_02, S_DIST_03, S_DIST_04, S_DIST_05
, S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, S_DIST_10 FROM STOCK WHERE S_I_ID
= :B2 AND S_W_ID = :B1

         2         22          539        0.00     1.7 aw9ttz9acxbc3
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
BEGIN payment(:p_w_id,:p_d_id,:p_c_w_id,:p_c_d_id,:p_c_id,:byname,:p_h_amount,:p
_c_last,:p_w_street_1,:p_w_street_2,:p_w_city,:p_w_state,:p_w_zip,:p_d_street_1,
:p_d_street_2,:p_d_city,:p_d_state,:p_d_zip,:p_c_first,:p_c_middle,:p_c_street_1
,:p_c_street_2,:p_c_city,:p_c_state,:p_c_zip,:p_c_phone,:p_c_since,:p_c_credit,:

         1         40            1        1.18     3.0 d92h3rjp0y217
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end;

         1         15          317        0.00     1.1 4wg725nwpxb1z
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
SELECT C_FIRST, C_MIDDLE, C_ID, C_STREET_1, C_STREET_2, C_CITY, C_STATE, C_ZIP,
C_PHONE, C_CREDIT, C_CREDIT_LIM, C_DISCOUNT, C_BALANCE, C_SINCE FROM CUSTOMER WH
ERE C_W_ID = :B3 AND C_D_ID = :B2 AND C_LAST = :B1 ORDER BY C_FIRST

         0         17           42        0.01     1.3 d4ujh5yqt1fph
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
BEGIN delivery(:d_w_id,:d_o_carrier_id,TO_DATE(:timestamp,'YYYYMMDDHH24MISS'));
END;

         0         16          420        0.00     1.2 5ps73nuy5f2vj
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
UPDATE ORDER_LINE SET OL_DELIVERY_D = :B4 WHERE OL_O_ID = :B3 AND OL_D_ID = :B2
AND OL_W_ID = :B1

         0         21            1        0.12     1.6 14wnf35dahb7v
SELECT A.ID,A.TYPE FROM SYS.WRI$_ADV_DEFINITIONS A WHERE A.NAME = :B1

          -------------------------------------------------------------
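The derived columns in the buffer-gets ranking that follows can be checked the same way: "Gets per Exec" is Buffer Gets divided by Executions, and "%Total" is Buffer Gets divided by the Total Buffer Gets printed in the section header (6,050,561). A minimal sketch using two rows from that section:

```python
# Derived columns in the "SQL ordered by Gets" section:
#   Gets per Exec = Buffer Gets / Executions
#   %Total        = Buffer Gets / Total Buffer Gets * 100
TOTAL_BUFFER_GETS = 6_050_561  # printed in the section header

def gets_per_exec(buffer_gets: int, executions: int) -> float:
    return buffer_gets / executions

def pct_total_gets(buffer_gets: int) -> float:
    return buffer_gets / TOTAL_BUFFER_GETS * 100.0

# Row bmfc2a2ym0kwr: 294,409 gets over 2 executions.
print(gets_per_exec(294_409, 2))          # → 147204.5
# Row 6mrh6s1s5g851: 331,630 gets.
print(round(pct_total_gets(331_630), 1))  # → 5.5
```

Both values match the report's "147,204.5" and "5.5" figures for those rows.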

SQL ordered by Gets                        DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> Total Buffer Gets:       6,050,561
-> Captured SQL account for      72.1% of Total

                                Gets              CPU     Elapsed
  Buffer Gets   Executions    per Exec   %Total Time (s)  Time (s)    SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
       331,630            1    331,630.0    5.5    21.81     56.88 6mrh6s1s5g851
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not

       331,630            1    331,630.0    5.5    28.03     89.61 bsdgaykhvy4xr
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not

       331,626            1    331,626.0    5.5    20.85     49.20 2n4xg8c3dmd62
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not

       294,409            2    147,204.5    4.9    28.34     89.09 bmfc2a2ym0kwr
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation,
 extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) -
ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, or
ders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partk

       147,206            1    147,206.0    2.4    13.58     29.35 1f0r8shtps3bu
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation,
 extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) -
ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, or
ders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partk

       132,996            1    132,996.0    2.2     6.89     23.19 5xd0ak4417rk0
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'MOZAMBIQUE' then volume else 0 end) / sum
(volume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_
extendedprice * (1 - l_discount) as volume, n2.n_name as nation from part, suppl
ier, lineitem, orders, customer, nation n1, nation n2, region where p_partkey =

       132,996            1    132,996.0    2.2     8.14     25.54 6aqpwwba8xvuu
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'GERMANY' then volume else 0 end) / sum(vo
lume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_ext
endedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier
, lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_p

       132,996            1    132,996.0    2.2     6.76     23.21 94wqqbu0ajcvn
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'JAPAN' then volume else 0 end) / sum(volu
me) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_exten
dedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier,
lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_par

       125,774            1    125,774.0    2.1     6.39     12.83 05burzzbuh660
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_orderpriority, count(*) as order_count from orders where o_orderdate >=
 date '1993-05-01' and o_orderdate < date '1993-05-01' + interval '3' month and
exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate <
 l_receiptdate) group by o_orderpriority order by o_orderpriority

       125,774            1    125,774.0    2.1     6.12     12.15 05pqvq1019n1t
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_orderpriority, count(*) as order_count from orders where o_orderdate >=
 date '1996-05-01' and o_orderdate < date '1996-05-01' + interval '3' month and
exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate <
 l_receiptdate) group by o_orderpriority order by o_orderpriority

       125,774            1    125,774.0    2.1     6.63     13.86 15dxu5nmuj14a
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_orderpriority, count(*) as order_count from orders where o_orderdate >=
 date '1994-08-01' and o_orderdate < date '1994-08-01' + interval '3' month and
exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate <
 l_receiptdate) group by o_orderpriority order by o_orderpriority

       125,774            1    125,774.0    2.1     5.85     11.77 2xf48ymvbjhxv
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_shipmode, sum(case when o_orderpriority = '1-URGENT' or o_orderpriority
 = '2-HIGH' then 1 else 0 end) as high_line_count, sum(case when o_orderpriority
 <> '1-URGENT' and o_orderpriority <> '2-HIGH' then 1 else 0 end) as low_line_co
unt from orders, lineitem where o_orderkey = l_orderkey and l_shipmode in ('MAIL

       125,774            1    125,774.0    2.1     5.78     11.67 3yj8qcg6sf32h
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_shipmode, sum(case when o_orderpriority = '1-URGENT' or o_orderpriority
 = '2-HIGH' then 1 else 0 end) as high_line_count, sum(case when o_orderpriority
 <> '1-URGENT' and o_orderpriority <> '2-HIGH' then 1 else 0 end) as low_line_co
unt from orders, lineitem where o_orderkey = l_orderkey and l_shipmode in ('AIR'

       125,774            1    125,774.0    2.1     6.01     12.20 c5dr0bxu3s966
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_shipmode, sum(case when o_orderpriority = '1-URGENT' or o_orderpriority
 = '2-HIGH' then 1 else 0 end) as high_line_count, sum(case when o_orderpriority
 <> '1-URGENT' and o_orderpriority <> '2-HIGH' then 1 else 0 end) as low_line_co
unt from orders, lineitem where o_orderkey = l_orderkey and l_shipmode in ('TRUC

       114,091            1    114,091.0    1.9     6.32     12.80 bdaz68nhm6jm4
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_su
ppkey from partsupp where ps_partkey in ( select p_partkey from part where p_nam
e like 'linen%') and ps_availqty > ( select 0.5 * sum(l_quantity) from lineitem
where l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= date '

       113,956            1    113,956.0    1.9     6.02     50.23 acgpfd4ysyfxb
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_su
ppkey from partsupp where ps_partkey in ( select p_partkey from part where p_nam
e like 'puff%') and ps_availqty > ( select 0.5 * sum(l_quantity) from lineitem w
here l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= date '1

       113,849            1    113,849.0    1.9     5.46     10.68 cx10bjzjkg410
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_su
ppkey from partsupp where ps_partkey in ( select p_partkey from part where p_nam
e like 'moccasin%') and ps_availqty > ( select 0.5 * sum(l_quantity) from lineit
em where l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= dat

       106,702            1    106,702.0    1.8     5.18      9.82 3v74jf7w31h8v
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice) / 7.0 as avg_yearly from lineitem, part where p_part
key = l_partkey and p_brand = 'Brand#21' and p_container = 'LG DRUM' and l_quant
ity < ( select 0.2 * avg(l_quantity) from lineitem where l_partkey = p_partkey)

       106,702            1    106,702.0    1.8     5.52     10.44 5u88ac3spdu0n
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice* (1 - l_discount)) as revenue from lineitem, part whe
re ( p_partkey = l_partkey and p_brand = 'Brand#45' and p_container in ('SM CASE
', 'SM BOX', 'SM PACK', 'SM PKG') and l_quantity >= 1 and l_quantity <= 1 + 10 a
nd p_size between 1 and 5 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruc

       106,702            1    106,702.0    1.8     5.42     12.66 75d32g70ru6f2
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice* (1 - l_discount)) as revenue from lineitem, part whe
re ( p_partkey = l_partkey and p_brand = 'Brand#34' and p_container in ('SM CASE
', 'SM BOX', 'SM PACK', 'SM PKG') and l_quantity >= 3 and l_quantity <= 3 + 10 a
nd p_size between 1 and 5 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruc

       106,702            1    106,702.0    1.8     5.36     10.16 by11nan0n3nbb
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice* (1 - l_discount)) as revenue from lineitem, part whe
re ( p_partkey = l_partkey and p_brand = 'Brand#23' and p_container in ('SM CASE
', 'SM BOX', 'SM PACK', 'SM PKG') and l_quantity >= 1 and l_quantity <= 1 + 10 a
nd p_size between 1 and 5 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruc

       106,698            1    106,698.0    1.8     5.87     58.52 081am6psuh26j
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice) / 7.0 as avg_yearly from lineitem, part where p_part
key = l_partkey and p_brand = 'Brand#35' and p_container = 'LG BOX' and l_quanti
ty < ( select 0.2 * avg(l_quantity) from lineitem where l_partkey = p_partkey)

       102,798            1    102,798.0    1.7    11.47     23.72 8sfhj7ua3qfjf
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
 from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 312) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda

       102,774            1    102,774.0    1.7    14.85     45.70 29rqwcj4cs31u
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
 from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 313) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda

       102,763            2     51,381.5    1.7    11.32     23.29 814qvp0rkqug4
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
 from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 314) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda

       102,697            1    102,697.0    1.7    13.69     25.90 05jp96tzvutb6
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis

SQL ordered by Gets                        DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> Total Buffer Gets:       6,050,561
-> Captured SQL account for      72.1% of Total

                                Gets              CPU     Elapsed
  Buffer Gets   Executions    per Exec   %Total Time (s)  Time (s)    SQL Id
-------------- ------------ ------------ ------ -------- --------- -------------
       102,697            1    102,697.0    1.7    14.08     30.77 7409gxv4spfj2
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis

       102,697            1    102,697.0    1.7    14.34     34.07 cvhgz2zwbk4qf
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis

        88,530          546        162.1    1.5     4.02     20.99 16dhat4ta7xs9
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
begin neword(:no_w_id,:no_max_w_id,:no_d_id,:no_c_id,:no_o_ol_cnt,:no_c_discount
,:no_c_last,:no_c_credit,:no_d_tax,:no_w_tax,:no_d_next_o_id,TO_DATE(:timestamp,
'YYYYMMDDHH24MISS')); END;

        80,163            2     40,081.5    1.3     4.30     21.15 2x4gjqru5u1xx
Module: SQL*Plus
SELECT s0.snap_id id, -- TO_CHAR(s0.END_INTERVAL_TIME,'YY/MM/DD HH24:MI') tm,
 round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440
 + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END
_INTERVAL_TIME) * 60 + EXTRACT(MINUTE FROM s1.

        69,716            2     34,858.0    1.2     4.26      8.54 ag9jkv5xuz0dz
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select ps_partkey, sum(ps_supplycost * ps_availqty) as value from partsupp, supp
lier, nation where ps_suppkey = s_suppkey and s_nationkey = n_nationkey and n_na
me = 'EGYPT' group by ps_partkey having sum(ps_supplycost * ps_availqty) > ( sel
ect sum(ps_supplycost * ps_availqty) * 0.0001000000 from partsupp, supplier, nat

          -------------------------------------------------------------

SQL ordered by Reads                       DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Total Disk Reads:       5,663,126
-> Captured SQL account for     74.4% of Total

                               Reads              CPU     Elapsed
Physical Reads  Executions    per Exec   %Total Time (s)  Time (s)    SQL Id
-------------- ----------- ------------- ------ -------- --------- -------------
       330,210           1     330,210.0    5.8    20.85     49.20 2n4xg8c3dmd62
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not

       330,210           1     330,210.0    5.8    21.81     56.88 6mrh6s1s5g851
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not

       330,210           1     330,210.0    5.8    28.03     89.61 bsdgaykhvy4xr
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, count(*) as numwait from supplier, lineitem l1, orders, nation wh
ere s_suppkey = l1.l_suppkey and o_orderkey = l1.l_orderkey and o_orderstatus =
'F' and l1.l_receiptdate > l1.l_commitdate and exists ( select * from lineitem l
2 where l2.l_orderkey = l1.l_orderkey and l2.l_suppkey <> l1.l_suppkey) and not

       301,265           2     150,632.5    5.3    28.34     89.09 bmfc2a2ym0kwr
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation,
 extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) -
ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, or
ders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partk

       151,726           1     151,726.0    2.7    13.58     29.35 1f0r8shtps3bu
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select nation, o_year, sum(amount) as sum_profit from ( select n_name as nation,
 extract(year from o_orderdate) as o_year, l_extendedprice * (1 - l_discount) -
ps_supplycost * l_quantity as amount from part, supplier, lineitem, partsupp, or
ders, nation where s_suppkey = l_suppkey and ps_suppkey = l_suppkey and ps_partk

       132,179           1     132,179.0    2.3     6.89     23.19 5xd0ak4417rk0
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'MOZAMBIQUE' then volume else 0 end) / sum
(volume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_
extendedprice * (1 - l_discount) as volume, n2.n_name as nation from part, suppl
ier, lineitem, orders, customer, nation n1, nation n2, region where p_partkey =

       132,179           1     132,179.0    2.3     8.14     25.54 6aqpwwba8xvuu
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'GERMANY' then volume else 0 end) / sum(vo
lume) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_ext
endedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier
, lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_p

       132,179           1     132,179.0    2.3     6.76     23.21 94wqqbu0ajcvn
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_year, sum(case when nation = 'JAPAN' then volume else 0 end) / sum(volu
me) as mkt_share from ( select extract(year from o_orderdate) as o_year, l_exten
dedprice * (1 - l_discount) as volume, n2.n_name as nation from part, supplier,
lineitem, orders, customer, nation n1, nation n2, region where p_partkey = l_par

       125,250           1     125,250.0    2.2     6.39     12.83 05burzzbuh660
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_orderpriority, count(*) as order_count from orders where o_orderdate >=
 date '1993-05-01' and o_orderdate < date '1993-05-01' + interval '3' month and
exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate <
 l_receiptdate) group by o_orderpriority order by o_orderpriority

       125,250           1     125,250.0    2.2     6.12     12.15 05pqvq1019n1t
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_orderpriority, count(*) as order_count from orders where o_orderdate >=
 date '1996-05-01' and o_orderdate < date '1996-05-01' + interval '3' month and
exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate <
 l_receiptdate) group by o_orderpriority order by o_orderpriority

       125,250           1     125,250.0    2.2     6.63     13.86 15dxu5nmuj14a
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select o_orderpriority, count(*) as order_count from orders where o_orderdate >=
 date '1994-08-01' and o_orderdate < date '1994-08-01' + interval '3' month and
exists ( select * from lineitem where l_orderkey = o_orderkey and l_commitdate <
 l_receiptdate) group by o_orderpriority order by o_orderpriority

       125,250           1     125,250.0    2.2     5.85     11.77 2xf48ymvbjhxv
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_shipmode, sum(case when o_orderpriority = '1-URGENT' or o_orderpriority
 = '2-HIGH' then 1 else 0 end) as high_line_count, sum(case when o_orderpriority
 <> '1-URGENT' and o_orderpriority <> '2-HIGH' then 1 else 0 end) as low_line_co
unt from orders, lineitem where o_orderkey = l_orderkey and l_shipmode in ('MAIL

       125,250           1     125,250.0    2.2     5.78     11.67 3yj8qcg6sf32h
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_shipmode, sum(case when o_orderpriority = '1-URGENT' or o_orderpriority
 = '2-HIGH' then 1 else 0 end) as high_line_count, sum(case when o_orderpriority
 <> '1-URGENT' and o_orderpriority <> '2-HIGH' then 1 else 0 end) as low_line_co
unt from orders, lineitem where o_orderkey = l_orderkey and l_shipmode in ('AIR'

       125,250           1     125,250.0    2.2     6.01     12.20 c5dr0bxu3s966
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_shipmode, sum(case when o_orderpriority = '1-URGENT' or o_orderpriority
 = '2-HIGH' then 1 else 0 end) as high_line_count, sum(case when o_orderpriority
 <> '1-URGENT' and o_orderpriority <> '2-HIGH' then 1 else 0 end) as low_line_co
unt from orders, lineitem where o_orderkey = l_orderkey and l_shipmode in ('TRUC

       109,730           1     109,730.0    1.9     6.02     50.23 acgpfd4ysyfxb
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_su
ppkey from partsupp where ps_partkey in ( select p_partkey from part where p_nam
e like 'puff%') and ps_availqty > ( select 0.5 * sum(l_quantity) from lineitem w
here l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= date '1

       108,262           1     108,262.0    1.9     5.46     10.68 cx10bjzjkg410
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_su
ppkey from partsupp where ps_partkey in ( select p_partkey from part where p_nam
e like 'moccasin%') and ps_availqty > ( select 0.5 * sum(l_quantity) from lineit
em where l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= dat

       107,978           1     107,978.0    1.9     6.32     12.80 bdaz68nhm6jm4
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select s_name, s_address from supplier, nation where s_suppkey in ( select ps_su
ppkey from partsupp where ps_partkey in ( select p_partkey from part where p_nam
e like 'linen%') and ps_availqty > ( select 0.5 * sum(l_quantity) from lineitem
where l_partkey = ps_partkey and l_suppkey = ps_suppkey and l_shipdate >= date '

       106,241           1     106,241.0    1.9     5.87     58.52 081am6psuh26j
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice) / 7.0 as avg_yearly from lineitem, part where p_part
key = l_partkey and p_brand = 'Brand#35' and p_container = 'LG BOX' and l_quanti
ty < ( select 0.2 * avg(l_quantity) from lineitem where l_partkey = p_partkey)

       106,241           1     106,241.0    1.9     5.18      9.82 3v74jf7w31h8v
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice) / 7.0 as avg_yearly from lineitem, part where p_part
key = l_partkey and p_brand = 'Brand#21' and p_container = 'LG DRUM' and l_quant
ity < ( select 0.2 * avg(l_quantity) from lineitem where l_partkey = p_partkey)

       106,241           1     106,241.0    1.9     5.52     10.44 5u88ac3spdu0n
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice* (1 - l_discount)) as revenue from lineitem, part whe
re ( p_partkey = l_partkey and p_brand = 'Brand#45' and p_container in ('SM CASE
', 'SM BOX', 'SM PACK', 'SM PKG') and l_quantity >= 1 and l_quantity <= 1 + 10 a
nd p_size between 1 and 5 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruc

       106,241           1     106,241.0    1.9     5.42     12.66 75d32g70ru6f2
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice* (1 - l_discount)) as revenue from lineitem, part whe
re ( p_partkey = l_partkey and p_brand = 'Brand#34' and p_container in ('SM CASE
', 'SM BOX', 'SM PACK', 'SM PKG') and l_quantity >= 3 and l_quantity <= 3 + 10 a
nd p_size between 1 and 5 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruc

       106,241           1     106,241.0    1.9     5.36     10.16 by11nan0n3nbb
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select sum(l_extendedprice* (1 - l_discount)) as revenue from lineitem, part whe
re ( p_partkey = l_partkey and p_brand = 'Brand#23' and p_container in ('SM CASE
', 'SM BOX', 'SM PACK', 'SM PKG') and l_quantity >= 1 and l_quantity <= 1 + 10 a
nd p_size between 1 and 5 and l_shipmode in ('AIR', 'AIR REG') and l_shipinstruc

       103,748           1     103,748.0    1.8    14.85     45.70 29rqwcj4cs31u
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
 from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 313) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda

       102,436           2      51,218.0    1.8    11.32     23.29 814qvp0rkqug4
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
 from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 314) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda

       102,430           1     102,430.0    1.8    11.47     23.72 8sfhj7ua3qfjf
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
 from customer, orders, lineitem where o_orderkey in ( select l_orderkey from li
neitem group by l_orderkey having sum(l_quantity) > 312) and c_custkey = o_custk
ey and o_orderkey = l_orderkey group by c_name, c_custkey, o_orderkey, o_orderda

       102,405           1     102,405.0    1.8    13.69     25.90 05jp96tzvutb6
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis

       102,405           1     102,405.0    1.8    14.08     30.77 7409gxv4spfj2
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis

       102,405           1     102,405.0    1.8    14.34     34.07 cvhgz2zwbk4qf
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedpri
ce) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price
, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_qua
ntity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_dis

        66,900           2      33,450.0    1.2     4.26      8.54 ag9jkv5xuz0dz
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
select ps_partkey, sum(ps_supplycost * ps_availqty) as value from partsupp, supp
lier, nation where ps_suppkey = s_suppkey and s_nationkey = n_nationkey and n_na
me = 'EGYPT' group by ps_partkey having sum(ps_supplycost * ps_availqty) > ( sel
ect sum(ps_supplycost * ps_availqty) * 0.0001000000 from partsupp, supplier, nat

          -------------------------------------------------------------

SQL ordered by Executions                  DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Total Executions:          57,841
-> Captured SQL account for     18.5% of Total

                                              CPU per    Elap per
 Executions   Rows Processed  Rows per Exec   Exec (s)   Exec (s)     SQL Id
------------ --------------- -------------- ---------- ----------- -------------
       5,442           5,442            1.0       0.00        0.00 8yvup05pk06ca
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
SELECT S_QUANTITY, S_DATA, S_DIST_01, S_DIST_02, S_DIST_03, S_DIST_04, S_DIST_05
, S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, S_DIST_10 FROM STOCK WHERE S_I_ID
= :B2 AND S_W_ID = :B1

         563             563            1.0       0.00        0.00 3c1kubcdjnppq
update sys.col_usage$ set equality_preds = equality_preds + decode(bitan
d(:flag,1),0,0,1), equijoin_preds = equijoin_preds + decode(bitand(:flag
,2),0,0,1), nonequijoin_preds = nonequijoin_preds + decode(bitand(:flag,4),0,0
,1), range_preds = range_preds + decode(bitand(:flag,8),0,0,1),

         546             546            1.0       0.01        0.04 16dhat4ta7xs9
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
begin neword(:no_w_id,:no_max_w_id,:no_d_id,:no_c_id,:no_o_ol_cnt,:no_c_discount
,:no_c_last,:no_c_credit,:no_d_tax,:no_w_tax,:no_d_next_o_id,TO_DATE(:timestamp,
'YYYYMMDDHH24MISS')); END;

         539             539            1.0       0.00        0.04 aw9ttz9acxbc3
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
BEGIN payment(:p_w_id,:p_d_id,:p_c_w_id,:p_c_d_id,:p_c_id,:byname,:p_h_amount,:p
_c_last,:p_w_street_1,:p_w_street_2,:p_w_city,:p_w_state,:p_w_zip,:p_d_street_1,
:p_d_street_2,:p_d_city,:p_d_state,:p_d_zip,:p_c_first,:p_c_middle,:p_c_street_1
,:p_c_street_2,:p_c_city,:p_c_state,:p_c_zip,:p_c_phone,:p_c_since,:p_c_credit,:

         420           4,284           10.2       0.00        0.04 5ps73nuy5f2vj
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
UPDATE ORDER_LINE SET OL_DELIVERY_D = :B4 WHERE OL_O_ID = :B3 AND OL_D_ID = :B2
AND OL_W_ID = :B1

         317           2,534            8.0       0.00        0.05 4wg725nwpxb1z
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
SELECT C_FIRST, C_MIDDLE, C_ID, C_STREET_1, C_STREET_2, C_CITY, C_STATE, C_ZIP,
C_PHONE, C_CREDIT, C_CREDIT_LIM, C_DISCOUNT, C_BALANCE, C_SINCE FROM CUSTOMER WH
ERE C_W_ID = :B3 AND C_D_ID = :B2 AND C_LAST = :B1 ORDER BY C_FIRST

         268             268            1.0       0.00        0.00 2ym6hhaq30r73
select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,NVL(lis
ts,65535),NVL(groups,65535),cachehint,hwmincr, NVL(spare1,0),NVL(scanhint,0) fro
m seg$ where ts#=:1 and file#=:2 and block#=:3

         224             203            0.9       0.00        0.02 96g93hntrzjtr
select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_
size, minimum, maximum, distcnt, lowval, hival, density, col#, spare1, spare2, a
vgcln from hist_head$ where obj#=:1 and intcol#=:2

         203               0            0.0       0.00        0.00 b2gnxm5z6r51n
lock table sys.col_usage$ in exclusive mode nowait

         135             135            1.0       0.00        0.00 3m8smr0v7v1m6
INSERT INTO sys.wri$_adv_message_groups (task_id,id,seq,message#,fac,hdr,lm,nl,p
1,p2,p3,p4,p5) VALUES (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13)

          -------------------------------------------------------------

SQL ordered by Parse Calls                 DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Total Parse Calls:           5,952
-> Captured SQL account for      61.4% of Total

                            % Total
 Parse Calls  Executions     Parses    SQL Id
------------ ------------ --------- -------------
         546          546      9.17 16dhat4ta7xs9
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
begin neword(:no_w_id,:no_max_w_id,:no_d_id,:no_c_id,:no_o_ol_cnt,:no_c_discount
,:no_c_last,:no_c_credit,:no_d_tax,:no_w_tax,:no_d_next_o_id,TO_DATE(:timestamp,
'YYYYMMDDHH24MISS')); END;

         539          539      9.06 aw9ttz9acxbc3
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
BEGIN payment(:p_w_id,:p_d_id,:p_c_w_id,:p_c_d_id,:p_c_id,:byname,:p_h_amount,:p
_c_last,:p_w_street_1,:p_w_street_2,:p_w_city,:p_w_state,:p_w_zip,:p_d_street_1,
:p_d_street_2,:p_d_city,:p_d_state,:p_d_zip,:p_c_first,:p_c_middle,:p_c_street_1
,:p_c_street_2,:p_c_city,:p_c_state,:p_c_zip,:p_c_phone,:p_c_since,:p_c_credit,:

         268          268      4.50 2ym6hhaq30r73
select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,NVL(lis
ts,65535),NVL(groups,65535),cachehint,hwmincr, NVL(spare1,0),NVL(scanhint,0) fro
m seg$ where ts#=:1 and file#=:2 and block#=:3

         203          563      3.41 3c1kubcdjnppq
update sys.col_usage$ set equality_preds = equality_preds + decode(bitan
d(:flag,1),0,0,1), equijoin_preds = equijoin_preds + decode(bitand(:flag
,2),0,0,1), nonequijoin_preds = nonequijoin_preds + decode(bitand(:flag,4),0,0
,1), range_preds = range_preds + decode(bitand(:flag,8),0,0,1),

         203            0      3.41 53btfq0dt9bs9
insert into sys.col_usage$ values ( :objn, :coln, decode(bitand(:flag,1),0,0
,1), decode(bitand(:flag,2),0,0,1), decode(bitand(:flag,4),0,0,1), decode(
bitand(:flag,8),0,0,1), decode(bitand(:flag,16),0,0,1), decode(bitand(:flag,
32),0,0,1), :time)

         203          203      3.41 b2gnxm5z6r51n
lock table sys.col_usage$ in exclusive mode nowait

         135          135      2.27 3m8smr0v7v1m6
INSERT INTO sys.wri$_adv_message_groups (task_id,id,seq,message#,fac,hdr,lm,nl,p
1,p2,p3,p4,p5) VALUES (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13)

         132          132      2.22 grwydz59pu6mc
select text from view$ where rowid=:1

         130          130      2.18 f80h0xb1qvbsk
SELECT sys.wri$_adv_seq_msggroup.nextval FROM dual

         125          125      2.10 350f5yrnnmshs
lock table sys.mon_mods$ in exclusive mode nowait

         125          125      2.10 g00cj285jmgsw
update sys.mon_mods$ set inserts = inserts + :ins, updates = updates + :upd, del
etes = deletes + :del, flags = (decode(bitand(flags, :flag), :flag, flags, flags
 + :flag)), drop_segments = drop_segments + :dropseg, timestamp = :time where ob
j# = :objn

          83           83      1.39 4m7m0t6fjcs5x
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=
:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1

          82           82      1.38 0h6b2sajwb74n
select privilege#,level from sysauth$ connect by grantee#=prior privilege# and p
rivilege#>0 start with grantee#=:1 and privilege#>0

          70           70      1.18 1dubbbfqnqvh9
SELECT ORA_TQ_BASE$.NEXTVAL FROM DUAL

          63           63      1.06 39m4sx9k63ba2
select /*+ index(idl_ub2$ i_idl_ub21) +*/ piece#,length,piece from idl_ub2$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#

          63           63      1.06 c6awqs517jpj0
select /*+ index(idl_char$ i_idl_char1) +*/ piece#,length,piece from idl_char$ w
here obj#=:1 and part=:2 and version=:3 order by piece#

          63           63      1.06 cvn54b7yz0s8u
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece from idl_ub1$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#

          63           63      1.06 ga9j9xk5cy9s0
select /*+ index(idl_sb4$ i_idl_sb41) +*/ piece#,length,piece from idl_sb4$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#

          62           62      1.04 5hyh0360hgx2u
Module: wish8.5@dbrocaix01.bayantel.com (TNS V1-V3)
BEGIN slev(:st_w_id,:st_d_id,:threshold); END;

          -------------------------------------------------------------

SQL ordered by Sharable Memory             DB/Inst: IVRS/ivrs  Snaps: 338-339

                  No data exists for this section of the report.
          -------------------------------------------------------------

SQL ordered by Version Count               DB/Inst: IVRS/ivrs  Snaps: 338-339

                  No data exists for this section of the report.
          -------------------------------------------------------------

Instance Activity Stats                    DB/Inst: IVRS/ivrs  Snaps: 338-339

Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
CPU used by this session                     43,952           72.9          17.3
CPU used when call started                   80,217          133.0          31.6
CR blocks created                                 6            0.0           0.0
Cached Commit SCN referenced                    432            0.7           0.2
Commit SCN cached                                 1            0.0           0.0
DB time                                     374,210          620.5         147.4
DBWR checkpoint buffers written               3,433            5.7           1.4
DBWR checkpoints                                  1            0.0           0.0
DBWR transaction table writes                    20            0.0           0.0
DBWR undo block writes                          917            1.5           0.4
DFO trees parallelized                           76            0.1           0.0
IMU CR rollbacks                                  0            0.0           0.0
IMU Flushes                                     159            0.3           0.1
IMU Redo allocation size                    896,956        1,487.3         353.3
IMU commits                                   1,881            3.1           0.7
IMU contention                                    5            0.0           0.0
IMU pool not allocated                          488            0.8           0.2
IMU recursive-transaction flush                  47            0.1           0.0
IMU undo allocation size                 14,837,636       24,603.8       5,843.9
IMU- failed to get a private str                488            0.8           0.2
PX local messages recv'd                    427,618          709.1         168.4
PX local messages sent                      427,626          709.1         168.4
Parallel operations not downgrad                 76            0.1           0.0
SMON posted for undo segment shr                  0            0.0           0.0
SQL*Net roundtrips to/from clien              8,221           13.6           3.2
active txn count during cleanout                220            0.4           0.1
application wait time                             0            0.0           0.0
background checkpoints completed                  1            0.0           0.0
background checkpoints started                    1            0.0           0.0
background timeouts                           2,201            3.7           0.9
branch node splits                                0            0.0           0.0
buffer is not pinned count                  185,040          306.8          72.9
buffer is pinned count                      153,771          255.0          60.6
bytes received via SQL*Net from           2,339,345        3,879.1         921.4
bytes sent via SQL*Net to client          3,067,072        5,085.8       1,208.0
calls to get snapshot scn: kcmgs             93,092          154.4          36.7
calls to kcmgas                               4,066            6.7           1.6
calls to kcmgcs                                 276            0.5           0.1
change write time                               204            0.3           0.1
cleanout - number of ktugct call                278            0.5           0.1
cleanouts and rollbacks - consis                  0            0.0           0.0
cleanouts only - consistent read                 65            0.1           0.0
cluster key scan block gets                   1,999            3.3           0.8
cluster key scans                               991            1.6           0.4
commit batch/immediate performed                  5            0.0           0.0
commit batch/immediate requested                  5            0.0           0.0
commit cleanout failures: block                   0            0.0           0.0
commit cleanout failures: buffer                  0            0.0           0.0
commit cleanout failures: callba                 13            0.0           0.0
commit cleanout failures: cannot                  0            0.0           0.0
commit cleanouts                             21,493           35.6           8.5
commit cleanouts successfully co             21,480           35.6           8.5
commit immediate performed                        5            0.0           0.0
commit immediate requested                        5            0.0           0.0
commit txn count during cleanout                254            0.4           0.1
concurrency wait time                           168            0.3           0.1
consistent changes                               87            0.1           0.0
consistent gets                           5,981,829        9,919.1       2,356.0
consistent gets - examination               217,091          360.0          85.5
consistent gets direct                    5,802,782        9,622.2       2,285.5
Instance Activity Stats                    DB/Inst: IVRS/ivrs  Snaps: 338-339

Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
consistent gets from cache                  354,637          588.1         139.7
cursor authentications                          208            0.3           0.1
data blocks consistent reads - u                  4            0.0           0.0
db block changes                             88,663          147.0          34.9
db block gets                                68,732          114.0          27.1
db block gets direct                              6            0.0           0.0
db block gets from cache                     68,726          114.0          27.1
deferred (CURRENT) block cleanou              9,569           15.9           3.8
dirty buffers inspected                       2,347            3.9           0.9
enqueue conversions                             806            1.3           0.3
enqueue releases                             21,768           36.1           8.6
enqueue requests                             21,963           36.4           8.7
enqueue timeouts                                201            0.3           0.1
enqueue waits                                    51            0.1           0.0
execute count                                57,841           95.9          22.8
free buffer inspected                        20,379           33.8           8.0
free buffer requested                        19,793           32.8           7.8
heap block compress                             144            0.2           0.1
hot buffers moved to head of LRU              7,705           12.8           3.0
immediate (CR) block cleanout ap                 65            0.1           0.0
immediate (CURRENT) block cleano              2,074            3.4           0.8
index crx upgrade (positioned)                  498            0.8           0.2
index fast full scans (direct re                 78            0.1           0.0
index fast full scans (full)                     15            0.0           0.0
index fast full scans (rowid ran                156            0.3           0.1
index fetch by key                           82,020          136.0          32.3
index scans kdiixs1                          16,708           27.7           6.6
leaf node 90-10 splits                           10            0.0           0.0
leaf node splits                                172            0.3           0.1
lob reads                                        42            0.1           0.0
lob writes                                      178            0.3           0.1
lob writes unaligned                            178            0.3           0.1
logons cumulative                               312            0.5           0.1
messages received                             2,837            4.7           1.1
messages sent                                 2,837            4.7           1.1
no buffer to keep pinned count                    0            0.0           0.0
no work - consistent read gets            5,895,112        9,775.3       2,321.8
opened cursors cumulative                     6,773           11.2           2.7
parse count (failures)                            0            0.0           0.0
parse count (hard)                              416            0.7           0.2
parse count (total)                           5,952            9.9           2.3
parse time cpu                                  684            1.1           0.3
parse time elapsed                            3,497            5.8           1.4
physical read IO requests                   397,584          659.3         156.6
physical read bytes                  47,825,584,128   79,304,326.1  18,836,386.0
physical read total IO requests             387,784          643.0         152.7
physical read total bytes            47,903,270,912   79,433,146.3  18,866,983.4
physical read total multi block             380,329          630.7         149.8
physical reads                            5,663,126        9,390.6       2,230.5
physical reads cache                         17,948           29.8           7.1
physical reads cache prefetch                   943            1.6           0.4
physical reads direct                     5,820,136        9,650.9       2,292.3
physical reads direct (lob)                       0            0.0           0.0
physical reads direct temporary              17,299           28.7           6.8
physical reads prefetch warmup                    0            0.0           0.0
physical reads retry corrupt                     54            0.1           0.0
physical write IO requests                    5,731            9.5           2.3
physical write bytes                    203,554,816      337,534.4      80,171.3
physical write total IO requests              8,484           14.1           3.3
physical write total bytes              283,976,192      470,889.0     111,845.7
physical write total multi block              5,019            8.3           2.0
physical writes                              24,848           41.2           9.8
physical writes direct                       17,318           28.7           6.8
physical writes direct (lob)                      0            0.0           0.0
physical writes direct temporary             17,299           28.7           6.8
physical writes from cache                    7,530           12.5           3.0
physical writes non checkpoint               23,712           39.3           9.3
pinned buffers inspected                          6            0.0           0.0
prefetch warmup blocks aged out                   0            0.0           0.0
prefetched blocks aged out befor                  2            0.0           0.0
process last non-idle time                      579            1.0           0.2
queries parallelized                             67            0.1           0.0
recursive calls                             115,742          191.9          45.6
recursive cpu usage                          43,163           71.6          17.0
redo blocks written                          32,504           53.9          12.8
redo buffer allocation retries                    1            0.0           0.0
redo entries                                 19,373           32.1           7.6
redo log space requests                           1            0.0           0.0
redo log space wait time                         25            0.0           0.0
redo size                                15,647,384       25,946.5       6,162.8
redo synch time                               5,470            9.1           2.2
redo synch writes                             2,102            3.5           0.8
redo wastage                                473,620          785.4         186.5
redo write time                               7,062           11.7           2.8
redo writer latching time                         1            0.0           0.0
redo writes                                   2,253            3.7           0.9
rollback changes - undo records                 109            0.2           0.0
rollbacks only - consistent read                  4            0.0           0.0
rows fetched via callback                    65,515          108.6          25.8
session connect time                              0            0.0           0.0
session cursor cache hits                     5,056            8.4           2.0
session logical reads                     6,050,561       10,033.0       2,383.1
session pga memory                      411,760,136      682,780.2     162,174.1
session pga memory max                5,395,314,184    8,946,503.5   2,124,976.1
session uga memory                   51,546,743,504   85,474,748.1  20,301,986.4
session uga memory max                1,043,985,628    1,731,135.7     411,179.9
shared hash latch upgrades - no               4,344            7.2           1.7
sorts (disk)                                      0            0.0           0.0
sorts (memory)                                1,839            3.1           0.7
sorts (rows)                             25,796,856       42,776.3      10,160.2
sql area evicted                                 85            0.1           0.0
sql area purged                                   0            0.0           0.0
summed dirty queue length                    13,205           21.9           5.2
switch current to new buffer                    858            1.4           0.3
table fetch by rowid                        127,462          211.4          50.2
table fetch continued row                       199            0.3           0.1
table scan blocks gotten                  5,826,518        9,661.5       2,294.8
table scan rows gotten                  342,238,570      567,499.6     134,792.7
table scans (direct read)                     3,353            5.6           1.3
table scans (long tables)                     4,229            7.0           1.7
table scans (rowid ranges)                    4,229            7.0           1.7
table scans (short tables)                    4,797            8.0           1.9
total number of times SMON poste                 39            0.1           0.0
transaction rollbacks                             5            0.0           0.0
transaction tables consistent re                  2            0.0           0.0
transaction tables consistent re                 28            0.1           0.0
undo change vector size                   5,810,908        9,635.6       2,288.7
user I/O wait time                           42,789           71.0          16.9
user calls                                   11,544           19.1           4.6
user commits                                  2,526            4.2           1.0
user rollbacks                                   13            0.0           0.0
workarea executions - onepass                    10            0.0           0.0
workarea executions - optimal                 1,805            3.0           0.7
write clones created in foregrou                  7            0.0           0.0
          -------------------------------------------------------------
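The "per Second" and "per Trans" columns above are derived values. A minimal sketch of the arithmetic, assuming the usual AWR conventions (per Second = Total / snapshot elapsed seconds; per Trans = Total / (user commits + user rollbacks)); the elapsed time is not printed in this excerpt, so it is inferred from the "consistent gets" row:

```python
# Values taken from the Instance Activity Stats table above.
user_commits = 2_526
user_rollbacks = 13
transactions = user_commits + user_rollbacks      # 2,539 transactions

# Infer the snapshot interval from one row: Total / "per Second".
consistent_gets = 5_981_829
per_second_shown = 9_919.1
elapsed_s = consistent_gets / per_second_shown    # roughly 10 minutes

def rates(total):
    """Recompute (per Second, per Trans) the way the report rounds them."""
    return round(total / elapsed_s, 1), round(total / transactions, 1)

print(rates(5_981_829))   # consistent gets row
print(rates(2_526))       # user commits row
```

Recomputing a few rows this way is a quick sanity check that the interval and transaction count are consistent across the table.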

Instance Activity Stats - Absolute Values  DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Statistics with absolute values (should not be diffed)

Statistic                            Begin Value       End Value
-------------------------------- --------------- ---------------
session cursor cache count                10,364          10,760
opened cursors current                        91              67
workarea memory allocated                  2,258           2,368
logons current                                31              30
          -------------------------------------------------------------

Instance Activity Stats - Thread Activity   DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Statistics identified by '(derived)' come from sources other than SYSSTAT

Statistic                                     Total  per Hour
-------------------------------- ------------------ ---------
log switches (derived)                            1      5.97
          -------------------------------------------------------------

Tablespace IO Stats                        DB/Inst: IVRS/ivrs  Snaps: 338-339
-> ordered by IOs (Reads + Writes) desc

Tablespace
------------------------------
                 Av      Av     Av                       Av     Buffer Av Buf
         Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
TPCHTAB
       387,123     642    0.2    14.8            2        0          3    0.0
USERS
         5,988      10   14.6     1.0        3,649        6          0    0.0
TEMP
         1,534       3    0.0    11.3        1,533        3          0    0.0
SYSTEM
         1,211       2   47.0     1.1           41        0          0    0.0
SYSAUX
           726       1  100.0     1.3          254        0          0    0.0
UNDOTBS1
             9       0    4.4     1.0          354        1          0    0.0
CCDATA
             1       0    0.0     1.0            1        0          0    0.0
CCINDEX
             1       0    0.0     1.0            1        0          0    0.0
PSE
             1       0    0.0     1.0            1        0          0    0.0
SOE
             1       0    0.0     1.0            1        0          0    0.0
SOEINDEX
             1       0    0.0     1.0            1        0          0    0.0
TPCCTAB
             1       0    0.0     1.0            1        0          0    0.0
          -------------------------------------------------------------

File IO Stats                              DB/Inst: IVRS/ivrs  Snaps: 338-339
-> ordered by Tablespace, File

Tablespace               Filename
------------------------ ----------------------------------------------------
                 Av      Av     Av                       Av     Buffer Av Buf
         Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
CCDATA                   +DATA_1/ivrs/datafile/ccdata.dbf
             1       0    0.0     1.0            1        0          0    0.0
CCINDEX                  +DATA_1/ivrs/datafile/ccindex.dbf
             1       0    0.0     1.0            1        0          0    0.0
PSE                      +DATA_1/ivrs/pse.dbf
             1       0    0.0     1.0            1        0          0    0.0
SOE                      +DATA_1/ivrs/datafile/soe.dbf
             1       0    0.0     1.0            1        0          0    0.0
SOEINDEX                 +DATA_1/ivrs/datafile/soeindex.dbf
             1       0    0.0     1.0            1        0          0    0.0
SYSAUX                   +DATA_1/ivrs/datafile/sysaux.258.652821943
           726       1  100.0     1.3          254        0          0    0.0
SYSTEM                   +DATA_1/ivrs/datafile/system.267.652821909
         1,176       2   48.3     1.1           40        0          0    0.0
SYSTEM                   +DATA_1/ivrs/datafile/system_02.dbf
            35       0    4.6     1.0            1        0          0    0.0
TEMP                     +DATA_1/ivrs/tempfile/temp.256.652821953
         1,534       3    0.0    11.3        1,533        3          0    N/A
TPCCTAB                  +DATA_1/ivrs/tpcctab01.dbf
             1       0    0.0     1.0            1        0          0    0.0
TPCHTAB                  +DATA_1/ivrs/datafile/tpch_01.dbf
       387,123     642    0.2    14.8            2        0          3    0.0
UNDOTBS1                 +DATA_1/ivrs/datafile/undotbs1.257.652821933
             9       0    4.4     1.0          354        1          0    0.0
USERS                    +DATA_1/ivrs/datafile/users.263.652821963
         5,739      10   13.0     1.0        3,515        6          0    0.0
USERS                    +DATA_1/ivrs/datafile/users02.dbf
           249       0   50.3     1.0          134        0          0    0.0
          -------------------------------------------------------------

Buffer Pool Statistics                     DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Standard block size Pools  D: default,  K: keep,  R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k

                                                            Free Writ     Buffer
     Number of Pool         Buffer     Physical    Physical Buff Comp       Busy
P      Buffers Hit%           Gets        Reads      Writes Wait Wait      Waits
--- ---------- ---- -------------- ------------ ----------- ---- ---- ----------
D       24,184   96        428,087       18,328       7,579    0    0          3
          -------------------------------------------------------------

Instance Recovery Stats                     DB/Inst: IVRS/ivrs  Snaps: 338-339
-> B: Begin snapshot,  E: End snapshot

  Targt  Estd                                  Log File Log Ckpt     Log Ckpt
  MTTR   MTTR   Recovery  Actual    Target       Size    Timeout     Interval
   (s)    (s)   Estd IOs Redo Blks Redo Blks  Redo Blks Redo Blks   Redo Blks
- ----- ----- ---------- --------- --------- ---------- --------- ------------
B     0    50       1945     28667    184320     184320    219902          N/A
E     0    45        490      3284    119243     184320    119243          N/A
          -------------------------------------------------------------
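In the Instance Recovery Stats table, "Target Redo Blks" is the binding constraint: a sketch, assuming it is the minimum of the Log File Size, Log Ckpt Timeout, and Log Ckpt Interval limits, with N/A entries ignored (both rows above fit this rule):

```python
# "Target Redo Blks" as the minimum of the applicable redo-block limits;
# pass None for a limit the report shows as N/A.
def target_redo_blks(log_file_size, ckpt_timeout, ckpt_interval=None):
    limits = [v for v in (log_file_size, ckpt_timeout, ckpt_interval)
              if v is not None]
    return min(limits)

print(target_redo_blks(184_320, 219_902))   # B row -> 184320 (log file size binds)
print(target_redo_blks(184_320, 119_243))   # E row -> 119243 (ckpt timeout binds)
```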

Buffer Pool Advisory                             DB/Inst: IVRS/ivrs  Snap: 339
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate

                                        Est
                                       Phys
    Size for   Size      Buffers for   Read          Estimated
P    Est (M) Factor         Estimate Factor     Physical Reads
--- -------- ------ ---------------- ------ ------------------
D         16     .1            1,996    3.5            550,709
D         32     .2            3,992    2.5            392,558
D         48     .2            5,988    1.8            285,114
D         64     .3            7,984    1.5            235,318
D         80     .4            9,980    1.3            209,129
D         96     .5           11,976    1.2            196,161
D        112     .6           13,972    1.2            185,692
D        128     .7           15,968    1.1            178,684
D        144     .7           17,964    1.1            172,352
D        160     .8           19,960    1.1            166,932
D        176     .9           21,956    1.0            162,491
D        192    1.0           23,952    1.0            159,080
D        196    1.0           24,451    1.0            158,303
D        208    1.1           25,948    1.0            156,344
D        224    1.1           27,944    1.0            153,879
D        240    1.2           29,940    1.0            150,890
D        256    1.3           31,936    0.9            141,958
D        272    1.4           33,932    0.9            138,023
D        288    1.5           35,928    0.9            135,507
D        304    1.6           37,924    0.8            133,447
D        320    1.6           39,920    0.8            131,793
          -------------------------------------------------------------
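The "Est Phys Read Factor" column above is each row's estimated physical reads divided by the estimate at the current cache size (Size Factor 1.0, i.e. the 192M row with 159,080 reads). A minimal sketch:

```python
# Estimated physical reads at the current buffer cache size (192M row).
current_reads = 159_080

def read_factor(est_reads):
    """Est Phys Read Factor: reads at a candidate size vs. the current size."""
    return round(est_reads / current_reads, 1)

print(read_factor(550_709))   # 16M row  -> 3.5x the physical reads
print(read_factor(131_793))   # 320M row -> 0.8x
```

The table's message in one line: shrinking the cache to 16M would roughly triple physical reads, while growing it past ~256M buys only marginal improvement.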

PGA Aggr Summary                           DB/Inst: IVRS/ivrs  Snaps: 338-339
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory

PGA Cache Hit %   W/A MB Processed  Extra W/A MB Read/Written
--------------- ------------------ --------------------------
           88.6              1,083                        139
          -------------------------------------------------------------
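Per the footnote, PGA Cache Hit % is the share of work-area data processed purely in memory. A sketch of the arithmetic using the figures above:

```python
# From the PGA Aggr Summary row above.
wa_mb_processed = 1_083     # W/A MB Processed (in-memory)
extra_wa_mb = 139           # Extra W/A MB Read/Written (spilled to disk)

hit_pct = round(100 * wa_mb_processed / (wa_mb_processed + extra_wa_mb), 1)
print(hit_pct)   # 88.6, matching the summary
```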

PGA Aggr Target Stats                       DB/Inst: IVRS/ivrs  Snaps: 338-339
-> B: Begin snap   E: End snap (rows identified with B or E contain data
   which is absolute, i.e. not diffed over the interval)
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used    - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem    - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem   - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem    - percentage of workarea memory under manual control

                                                %PGA  %Auto   %Man
    PGA Aggr   Auto PGA   PGA Mem    W/A PGA     W/A    W/A    W/A Global Mem
   Target(M)  Target(M)  Alloc(M)    Used(M)     Mem    Mem    Mem   Bound(K)
- ---------- ---------- ---------- ---------- ------ ------ ------ ----------
B        103         39      148.9        7.7    5.2  100.0     .0     21,094
E        103         39      154.8        7.2    4.7  100.0     .0     21,094
          -------------------------------------------------------------

PGA Aggr Target Histogram                   DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Optimal Executions are purely in-memory operations

  Low     High
Optimal Optimal    Total Execs  Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- -------------- ------------ ------------
     2K      4K          1,418          1,418            0            0
    64K    128K             14             14            0            0
   128K    256K             30             30            0            0
   256K    512K             12             12            0            0
   512K   1024K            140            140            0            0
     1M      2M             98             98            0            0
     2M      4M             70             70            0            0
     4M      8M             18             18            0            0
     8M     16M             12              6            6            0
    16M     32M             24             20            4            0
          -------------------------------------------------------------

PGA Memory Advisory                              DB/Inst: IVRS/ivrs  Snap: 339
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
   where Estd PGA Overalloc Count is 0

                                       Estd Extra    Estd PGA   Estd PGA
PGA Target    Size           W/A MB   W/A MB Read/      Cache  Overalloc
  Est (MB)   Factr        Processed Written to Disk     Hit %      Count
---------- ------- ---------------- ---------------- -------- ----------
        13     0.1          2,834.0          3,667.8     44.0        146
        26     0.3          2,834.0          3,667.8     44.0        146
        52     0.5          2,834.0          3,664.7     44.0        145
        77     0.8          2,834.0            756.1     79.0          7
       103     1.0          2,834.0            194.9     94.0          1
       124     1.2          2,834.0             41.8     99.0          0
       144     1.4          2,834.0             41.8     99.0          0
       165     1.6          2,834.0              0.0    100.0          0
       185     1.8          2,834.0              0.0    100.0          0
       206     2.0          2,834.0              0.0    100.0          0
       309     3.0          2,834.0              0.0    100.0          0
       412     4.0          2,834.0              0.0    100.0          0
       618     6.0          2,834.0              0.0    100.0          0
       824     8.0          2,834.0              0.0    100.0          0
          -------------------------------------------------------------
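The sizing rule in the advisory's footnote (pick the smallest target where Estd PGA Overalloc Count is 0) can be applied mechanically to the rows above. A sketch, with the table transcribed as (target MB, overalloc count) pairs:

```python
# (PGA Target Est MB, Estd PGA Overalloc Count) rows from the advisory above.
advisory = [(13, 146), (26, 146), (52, 145), (77, 7), (103, 1),
            (124, 0), (144, 0), (165, 0), (185, 0), (206, 0)]

# Smallest candidate target whose estimated overallocation count is zero.
min_target_mb = min(mb for mb, overalloc in advisory if overalloc == 0)
print(min_target_mb)   # 124 (MB) -- vs. the current target of 103 MB
```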

Shared Pool Advisory                            DB/Inst: IVRS/ivrs  Snap: 339
-> SP: Shared Pool     Est LC: Estimated Library Cache   Factr: Factor
-> Note there is often a 1:Many correlation between a single logical object
   in the Library Cache, and the physical number of memory objects associated
   with it.  Therefore comparing the number of Lib Cache objects (e.g. in
   v$librarycache), with the number of Lib Cache Memory Objects is invalid.

                                        Est LC Est LC  Est LC Est LC
    Shared    SP   Est LC                 Time   Time    Load   Load      Est LC
      Pool  Size     Size       Est LC   Saved  Saved    Time   Time         Mem
   Size(M) Factr      (M)      Mem Obj     (s)  Factr     (s)  Factr    Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
        60    .6       14        1,547  44,365     .8  12,137   28.4     331,775
        72    .8       24        2,600  50,039     .9   6,463   15.1     333,416
        84    .9       35        3,092  54,519    1.0   1,983    4.6     334,661
        96   1.0       45        3,178  56,075    1.0     427    1.0     335,447
       108   1.1       56        3,304  56,282    1.0     220     .5     335,896
       120   1.3       66        3,529  56,304    1.0     198     .5     336,179
       132   1.4       77        3,956  56,314    1.0     188     .4     336,411
       144   1.5       88        4,522  56,317    1.0     185     .4     336,631
       156   1.6      100        6,389  56,319    1.0     183     .4     336,791
       168   1.8      100        6,389  56,319    1.0     183     .4     336,855
       180   1.9      100        6,389  56,319    1.0     183     .4     336,865
       192   2.0      100        6,389  56,319    1.0     183     .4     336,867
          -------------------------------------------------------------

SGA Target Advisory                              DB/Inst: IVRS/ivrs  Snap: 339

SGA Target   SGA Size       Est DB     Est Physical
  Size (M)     Factor     Time (s)            Reads
---------- ---------- ------------ ----------------
       156        0.5       11,816          283,577
       234        0.8        8,316          195,068
       312        1.0        7,642          158,193
       390        1.3        7,248          137,264
       468        1.5        7,098          131,063
       546        1.8        7,093          131,063
       624        2.0        7,093          131,063
          -------------------------------------------------------------

Streams Pool Advisory                            DB/Inst: IVRS/ivrs  Snap: 339

  Size for      Size   Est Spill   Est Spill Est Unspill Est Unspill
  Est (MB)    Factor       Count    Time (s)       Count    Time (s)
---------- --------- ----------- ----------- ----------- -----------
         4       1.0           0           0           0           0
         8       2.0           0           0           0           0
        12       3.0           0           0           0           0
        16       4.0           0           0           0           0
        20       5.0           0           0           0           0
        24       6.0           0           0           0           0
        28       7.0           0           0           0           0
        32       8.0           0           0           0           0
        36       9.0           0           0           0           0
        40      10.0           0           0           0           0
        44      11.0           0           0           0           0
        48      12.0           0           0           0           0
        52      13.0           0           0           0           0
        56      14.0           0           0           0           0
        60      15.0           0           0           0           0
        64      16.0           0           0           0           0
        68      17.0           0           0           0           0
        72      18.0           0           0           0           0
        76      19.0           0           0           0           0
        80      20.0           0           0           0           0
          -------------------------------------------------------------

Java Pool Advisory                               DB/Inst: IVRS/ivrs  Snap: 339

                                        Est LC Est LC  Est LC Est LC
      Java    JP   Est LC                 Time   Time    Load   Load      Est LC
      Pool  Size     Size       Est LC   Saved  Saved    Time   Time         Mem
   Size(M) Factr      (M)      Mem Obj     (s)  Factr     (s)  Factr    Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
         8   1.0        4          148      78    1.0     427    1.0         163
        12   1.5        4          148      78    1.0     427    1.0         163
        16   2.0        4          148      78    1.0     427    1.0         163
          -------------------------------------------------------------

Buffer Wait Statistics                      DB/Inst: IVRS/ivrs  Snaps: 338-339
-> ordered by wait time desc, waits desc

Class                    Waits Total Wait Time (s)  Avg Time (ms)
------------------ ----------- ------------------- --------------
data block                   3                   0              0
          -------------------------------------------------------------

Enqueue Activity                           DB/Inst: IVRS/ivrs  Snaps: 338-339
-> only enqueues with waits are shown
-> Enqueue stats gathered prior to 10g should not be compared with 10g data
-> ordered by Wait Time desc, Waits desc

Enqueue Type (Request Reason)
------------------------------------------------------------------------------
    Requests    Succ Gets Failed Gets       Waits  Wt Time (s) Av Wt Time(ms)
------------ ------------ ----------- ----------- ------------ --------------
PS-PX Process Reservation
       1,368        1,168         200          50            0           3.80
BF-BLOOM FILTER (allocation contention)
          78           78           0           1            0            .00
          -------------------------------------------------------------

Undo Segment Summary                       DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count,  OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen,   uR - unexpired Released,   uU - unexpired reUsed
-> eS - expired   Stolen,   eR - expired   Released,   eU - expired   reUsed

Undo   Num Undo       Number of  Max Qry   Max Tx Min/Max   STO/     uS/uR/uU/
 TS# Blocks (K)    Transactions  Len (s) Concurcy TR (mins) OOS      eS/eR/eU
---- ---------- --------------- -------- -------- --------- ----- --------------
   1        1.4           4,612       49        4 15/15     0/0   0/0/0/0/0/0
          -------------------------------------------------------------

Undo Segment Stats                          DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Most recent 35 Undostat rows, ordered by Time desc

                Num Undo    Number of Max Qry  Max Tx Tun Ret STO/    uS/uR/uU/
End Time          Blocks Transactions Len (s)   Concy  (mins) OOS     eS/eR/eU
------------ ----------- ------------ ------- ------- ------- ----- ------------
17-Jan 07:07         170          198       0       3      15 0/0   0/0/0/0/0/0
17-Jan 06:57       1,200        4,414      49       4      15 0/0   0/0/0/0/0/0
          -------------------------------------------------------------

Latch Activity                             DB/Inst: IVRS/ivrs  Snaps: 338-339
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
   willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0

                                           Pct    Avg   Wait                 Pct
                                    Get    Get   Slps   Time       NoWait NoWait
Latch Name                     Requests   Miss  /Miss    (s)     Requests   Miss
------------------------ -------------- ------ ------ ------ ------------ ------
ASM allocation                      154    0.0    N/A      0            0    N/A
ASM db client latch                 358    0.0    N/A      0            0    N/A
ASM map headers                      66    0.0    N/A      0            0    N/A
ASM map load waiting lis             11    0.0    N/A      0            0    N/A
ASM map operation freeli             35    0.0    N/A      0            0    N/A
ASM map operation hash t        836,612    0.0    N/A      0            0    N/A
ASM network background l            302    0.0    N/A      0            0    N/A
AWR Alerted Metric Eleme          2,246    0.0    N/A      0            0    N/A
Bloom filter list latch              27    0.0    N/A      0            0    N/A
Consistent RBA                    2,298    0.0    N/A      0            0    N/A
FAL request queue                    14    0.0    N/A      0            0    N/A
FAL subheap alocation                14    0.0    N/A      0            0    N/A
FIB s.o chain latch                  26    0.0    N/A      0            0    N/A
FOB s.o list latch                  131    0.0    N/A      0            0    N/A
In memory undo latch             35,563    0.0    N/A      0        2,736    0.0
JOX SGA heap latch                  887    0.0    N/A      0            0    N/A
JS queue state obj latch          4,248    0.0    N/A      0            0    N/A
JS slv state obj latch                4    0.0    N/A      0            0    N/A
KFK SGA context latch               301    0.0    N/A      0            0    N/A
KFMD SGA                             33    0.0    N/A      0            0    N/A
KMG MMAN ready and start            213    0.0    N/A      0            0    N/A
KMG resize request state              9    0.0    N/A      0            0    N/A
KTF sga latch                         2    0.0    N/A      0          157    0.0
KWQP Prop Status                      3    0.0    N/A      0            0    N/A
MQL Tracking Latch                    0    N/A    N/A      0           11    0.0
Memory Management Latch              96    0.0    N/A      0          213    0.0
OS process                           51    0.0    N/A      0            0    N/A
OS process allocation               230    0.0    N/A      0            0    N/A
OS process: request allo             17    0.0    N/A      0            0    N/A
PL/SQL warning settings           2,329    0.0    N/A      0            0    N/A
Reserved Space Latch                  3    0.0    N/A      0            0    N/A
SGA IO buffer pool latch            128    0.0    N/A      0          164    0.0
SQL memory manager latch             48    0.0    N/A      0          179    0.0
SQL memory manager worka         16,771    0.0    1.0      0            0    N/A
Shared B-Tree                        28    0.0    N/A      0            0    N/A
active checkpoint queue             829    0.0    N/A      0            0    N/A
active service list               3,643    0.0    N/A      0          240    0.0
archive control                      16    0.0    N/A      0            0    N/A
archive process latch               193    0.0    N/A      0            0    N/A
begin backup scn array                2    0.0    N/A      0            0    N/A
buffer pool                           8    0.0    N/A      0            0    N/A
cache buffer handles             34,516    0.0    N/A      0            0    N/A
cache buffers chains            952,571    0.0    1.0      0       26,780    0.0
cache buffers lru chain          39,708    0.1    1.0      0        8,252    0.0
cache table scan latch                0    N/A    N/A      0          236    0.0
channel handle pool latc             95    0.0    N/A      0            0    N/A
channel operations paren          2,989    0.0    N/A      0            0    N/A
checkpoint queue latch           16,493    0.0    N/A      0        4,965    0.0
client/application info           1,815    0.0    N/A      0            0    N/A
compile environment latc          4,979    0.0    N/A      0            0    N/A
dml lock allocation              23,341    0.0    1.0      0            0    N/A
dummy allocation                    633    0.0    N/A      0            0    N/A
enqueue hash chains              45,427    0.0    N/A      0           30    0.0
enqueues                         19,373    0.0    N/A      0            0    N/A
error message lists                 540    0.0    N/A      0            0    N/A
event group latch                     8    0.0    N/A      0            0    N/A
file cache latch                     36    0.0    N/A      0            0    N/A
global KZLD latch for me              4    0.0    N/A      0            0    N/A
hash table column usage             222    0.0    N/A      0       48,959    0.0
hash table modification               6    0.0    N/A      0            0    N/A
job workq parent latch                0    N/A    N/A      0           24    0.0
job_queue_processes para             21    0.0    N/A      0            0    N/A
kks stats                         1,012    0.0    N/A      0            0    N/A
ksuosstats global area               44    0.0    N/A      0            0    N/A
ktm global data                      39    0.0    N/A      0            0    N/A
kwqbsn:qsga                          27    0.0    N/A      0            0    N/A
lgwr LWN SCN                      2,325    0.0    N/A      0            0    N/A
library cache                    50,273    0.0    1.1      0          141    0.7
library cache load lock             800    0.0    N/A      0            0    N/A
library cache lock               20,079    0.0    N/A      0            0    N/A
library cache lock alloc            590    0.0    N/A      0            0    N/A
library cache pin                17,505    0.0    N/A      0            0    N/A
library cache pin alloca            188    0.0    N/A      0            0    N/A
list of block allocation             74    0.0    N/A      0            0    N/A
loader state object free          6,844    0.0    N/A      0            0    N/A
logminer context allocat              1    0.0    N/A      0            0    N/A
longop free list parent               2    0.0    N/A      0            2    0.0
message pool operations              78    0.0    N/A      0            0    N/A
messages                         12,610    0.0    N/A      0            0    N/A
mostly latch-free SCN             2,325    0.0    N/A      0            0    N/A
msg queue                            22    0.0    N/A      0           22    0.0
multiblock read objects             840    0.0    N/A      0            0    N/A
ncodef allocation latch              11    0.0    N/A      0            0    N/A
object queue header heap          1,079    0.0    N/A      0           92    0.0
object queue header oper         65,592    0.0    1.0      0        1,285    0.0
object stats modificatio              2    0.0    N/A      0            0    N/A
parallel query alloc buf          4,364    0.0    N/A      0            0    N/A
parallel query stats                491    0.0    N/A      0            0    N/A
parameter list                      107    0.0    N/A      0            0    N/A
parameter table allocati            624    0.0    N/A      0            0    N/A
post/wait queue                   3,549    0.0    N/A      0        1,440    0.0
process allocation                   85    0.0    N/A      0            8    0.0
process group creation               17    0.0    N/A      0            0    N/A
process queue                     2,870    0.0    N/A      0            0    N/A
process queue reference       7,204,772    0.0    1.0      0      585,085    1.2
qmn task queue latch                 88    0.0    N/A      0            0    N/A
query server freelists            2,507    0.0    N/A      0            0    N/A
redo allocation                   9,876    0.0    1.0      0       19,450    0.0
redo copy                             1    0.0    N/A      0       19,449    0.1
redo writing                      8,100    0.0    N/A      0            0    N/A
reservation so alloc lat              2    0.0    N/A      0            0    N/A
resmgr group change latc            366    0.0    N/A      0            0    N/A
resmgr:actses active lis            630    0.0    N/A      0            0    N/A
resmgr:actses change gro            312    0.0    N/A      0            0    N/A
resmgr:free threads list            628    0.0    N/A      0            0    N/A
resmgr:schema config                  2    0.0    N/A      0            0    N/A
row cache objects               157,344    0.0    N/A      0          425    0.0
rules engine rule set st            200    0.0    N/A      0            0    N/A
segmented array pool                 22    0.0    N/A      0            0    N/A
sequence cache                      590    0.0    N/A      0            0    N/A
session allocation               78,343    0.0    1.2      0            0    N/A
session idle bit                 28,022    0.0    N/A      0            0    N/A
session state list latch            644    0.0    N/A      0            0    N/A
session switching                    11    0.0    N/A      0            0    N/A
session timer                       240    0.0    N/A      0            0    N/A
shared pool                      33,242    0.0    1.0      0            0    N/A
shared pool sim alloc                10    0.0    N/A      0            0    N/A
shared pool simulator            10,083    0.0    N/A      0            0    N/A
simulator hash latch             47,721    0.0    N/A      0            0    N/A
simulator lru latch              26,263    0.0    1.0      0       18,583    0.0
slave class                          69    0.0    N/A      0            0    N/A
slave class create                   55    1.8    1.0      0            0    N/A
sort extent pool                    321    0.0    N/A      0            0    N/A
state object free list                2    0.0    N/A      0            0    N/A
statistics aggregation              140    0.0    N/A      0            0    N/A
temp lob duration state               2    0.0    N/A      0            0    N/A
threshold alerts latch               64    0.0    N/A      0            0    N/A
transaction allocation               50    0.0    N/A      0            0    N/A
transaction branch alloc             11    0.0    N/A      0            0    N/A
undo global data                 14,398    0.0    N/A      0            0    N/A
user lock                            42    0.0    N/A      0            0    N/A
          -------------------------------------------------------------

Latch Sleep Breakdown                      DB/Inst: IVRS/ivrs  Snaps: 338-339
-> ordered by misses desc

Latch Name
----------------------------------------
  Get Requests      Misses      Sleeps  Spin Gets   Sleep1   Sleep2   Sleep3
-------------- ----------- ----------- ---------- -------- -------- --------
cache buffers lru chain
        39,708          22          23          0        0        0        0
library cache
        50,273           8           9          0        0        0        0
session allocation
        78,343           6           7          0        0        0        0
cache buffers chains
       952,571           3           3          0        0        0        0
object queue header operation
        65,592           2           2          0        0        0        0
process queue reference
     7,204,772           2           2          0        0        0        0
shared pool
        33,242           2           2          0        0        0        0
simulator lru latch
        26,263           2           2          0        0        0        0
SQL memory manager workarea list latch
        16,771           1           1          0        0        0        0
dml lock allocation
        23,341           1           1          0        0        0        0
redo allocation
         9,876           1           1          0        0        0        0
slave class create
            55           1           1          0        0        0        0
          -------------------------------------------------------------

Latch Miss Sources                         DB/Inst: IVRS/ivrs  Snaps: 338-339
-> only latches with sleeps are shown
-> ordered by name, sleeps desc

                                                     NoWait              Waiter
Latch Name               Where                       Misses     Sleeps   Sleeps
------------------------ -------------------------- ------- ---------- --------
SQL memory manager worka qesmmIRegisterWorkArea           0          1        1
cache buffers chains     kcbgtcr: kslbegin excl           0          2        3
cache buffers chains     kcbgtcr: fast path               0          1        0
cache buffers lru chain  kcbzgws_1                        0         19       20
cache buffers lru chain  kcbw_activate_granule            0          1        0
dml lock allocation      ktaiam                           0          1        0
library cache            kglScanDependency                0          3        0
library cache            kgldte: child 0                  0          3        6
library cache            kgldti: 2child                   0          1        0
library cache            kglobpn: child:                  0          1        1
object queue header oper kcbw_link_q                      0          1        0
object queue header oper kcbw_unlink_q                    0          1        1
process queue reference  kxfpqrsnd                        0          2        0
redo allocation          kcrfw_redo_gen: redo alloc       0          1        0
session allocation       ksuxds: KSUSFCLC not set         0          3        1
session allocation       ksursi                           0          2        2
session allocation       ksucri                           0          1        1
session allocation       ksuxds: KSUSFCLC set             0          1        0
shared pool              kghalo                           0          2        0
shared pool              kghfrunp: clatch: nowait         0          1        0
simulator lru latch      kcbs_simulate: simulate se       0          2        2
slave class create       ksvcreate                        0          1        0
          -------------------------------------------------------------

Parent Latch Statistics                    DB/Inst: IVRS/ivrs  Snaps: 338-339

                  No data exists for this section of the report.
          -------------------------------------------------------------

Child Latch Statistics                      DB/Inst: IVRS/ivrs  Snaps: 338-339

                  No data exists for this section of the report.
          -------------------------------------------------------------

Segments by Logical Reads                  DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Total Logical Reads:       6,050,561
-> Captured Segments account for  101.7% of Total

           Tablespace                      Subobject  Obj.       Logical
Owner         Name    Object Name            Name     Type         Reads  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
TPCH       TPCHTAB    LINEITEM                        TABLE    4,960,400   81.98
TPCH       TPCHTAB    ORDERS                          TABLE      502,768    8.31
TPCH       TPCHTAB    PARTSUPP                        TABLE      161,968    2.68
TPCH       TPCHTAB    PART                            TABLE       95,984    1.59
TPCC       USERS      STOCK_I1                        INDEX       91,984    1.52
          -------------------------------------------------------------

Segments by Physical Reads                  DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Total Physical Reads:       5,663,126
-> Captured Segments account for   101.7% of Total

           Tablespace                      Subobject  Obj.      Physical
Owner         Name    Object Name            Name     Type         Reads  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
TPCH       TPCHTAB    LINEITEM                        TABLE    4,947,520   87.36
TPCH       TPCHTAB    ORDERS                          TABLE      492,387    8.69
TPCH       TPCHTAB    PARTSUPP                        TABLE      158,037    2.79
TPCH       TPCHTAB    PART                            TABLE       92,064    1.63
TPCH       TPCHTAB    CUSTOMER                        TABLE       55,709     .98
          -------------------------------------------------------------

Segments by Row Lock Waits                 DB/Inst: IVRS/ivrs  Snaps: 338-339
-> % of Capture shows % of row lock waits for each top segment compared
-> with total row lock waits for all segments captured by the Snapshot

                                                                     Row
           Tablespace                      Subobject  Obj.          Lock    % of
Owner         Name    Object Name            Name     Type         Waits Capture
---------- ---------- -------------------- ---------- ----- ------------ -------
TPCC       USERS      IORDL                           INDEX           24   75.00
PERFSTAT   USERS      STATS$EVENT_HISTOGRA            INDEX            4   12.50
PERFSTAT   USERS      STATS$LATCH_PK                  INDEX            4   12.50
          -------------------------------------------------------------

Segments by ITL Waits                       DB/Inst: IVRS/ivrs  Snaps: 338-339

                  No data exists for this section of the report.
          -------------------------------------------------------------

Segments by Buffer Busy Waits               DB/Inst: IVRS/ivrs  Snaps: 338-339

                  No data exists for this section of the report.
          -------------------------------------------------------------

Dictionary Cache Stats                     DB/Inst: IVRS/ivrs  Snaps: 338-339
-> "Pct Misses"  should be very low (< 2% in most cases)
-> "Final Usage" is the number of cache entries being used

                                   Get    Pct    Scan   Pct      Mod      Final
Cache                         Requests   Miss    Reqs  Miss     Reqs      Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_awr_control                      14    0.0       0   N/A        2          1
dc_global_oids                      91    4.4       0   N/A        0         29
dc_histogram_data                4,249    2.0       0   N/A        0      1,281
dc_histogram_defs                9,313    2.4       0   N/A        0      2,713
dc_object_grants                    26    7.7       0   N/A        0         45
dc_object_ids                    4,946    1.0       0   N/A        0        663
dc_objects                       1,968    4.0       0   N/A        3        794
dc_profiles                         16    0.0       0   N/A        0          1
dc_rollback_segments               136    0.0       0   N/A        0         16
dc_segments                      1,989    2.6       0   N/A        4        479
dc_sequences                        84    0.0       0   N/A       84          7
dc_tablespaces                  16,511    0.0       0   N/A        0         12
dc_usernames                       260    0.0       0   N/A        0         12
dc_users                        15,529    0.0       0   N/A        0         57
outstanding_alerts                  27    0.0       0   N/A        0         24
          -------------------------------------------------------------

Library Cache Activity                      DB/Inst: IVRS/ivrs  Snaps: 338-339
-> "Pct Misses"  should be very low

                         Get    Pct            Pin    Pct             Invali-
Namespace           Requests   Miss       Requests   Miss    Reloads  dations
--------------- ------------ ------ -------------- ------ ---------- --------
SQL AREA               1,117    6.1         64,285    1.8        294      154
TABLE/PROCEDURE          449    0.4          7,900    4.6        261        0
BODY                     148    0.0          1,278    1.7         22        0
TRIGGER                   42    0.0             80   13.8         11        0
INDEX                     24    0.0             80    6.3          5        0
CLUSTER                   18    0.0             59    0.0          0        0
JAVA DATA                  1    0.0              0    N/A          0        0
          -------------------------------------------------------------

Process Memory Summary                     DB/Inst: IVRS/ivrs  Snaps: 338-339
-> B: Begin snap   E: End snap
-> All rows below contain absolute values (i.e. not diffed over the interval)
-> Max Alloc is Maximum PGA Allocation size at snapshot time
-> Hist Max Alloc is the Historical Max Allocation for still-connected processes
-> ordered by Begin/End snapshot, Alloc (MB) desc

                                                            Hist
                                    Avg  Std Dev     Max     Max
               Alloc      Used    Alloc    Alloc   Alloc   Alloc    Num    Num
  Category      (MB)      (MB)     (MB)     (MB)    (MB)    (MB)   Proc  Alloc
- -------- --------- --------- -------- -------- ------- ------- ------ ------
B Other        128.6       N/A      3.5      6.1      24      25     37     37
  Freeable       9.7        .0       .6       .6       2     N/A     16     16
  SQL            3.6       2.9       .2       .3       1      25     22     15
  PL/SQL          .4        .1       .0       .0       0       0     35     33
E Other        133.6       N/A      3.7      6.1      24      24     36     36
  Freeable      12.1        .0       .7       .4       2     N/A     18     18
  SQL            2.9       2.6       .1       .3       1      26     22     14
  PL/SQL          .5        .1       .0       .0       0       0     34     32
  JAVA            .0        .0       .0       .0       0       2      1      1
          -------------------------------------------------------------

SGA Memory Summary                          DB/Inst: IVRS/ivrs  Snaps: 338-339

                                                      End Size (Bytes)
SGA regions                     Begin Size (Bytes)      (if different)
------------------------------ ------------------- -------------------
Database Buffers                       213,909,504         205,520,896
Fixed Size                               1,261,612
Redo Buffers                             2,928,640
Variable Size                          109,055,956         117,444,564
                               -------------------
sum                                    327,155,712
          -------------------------------------------------------------

SGA breakdown difference                    DB/Inst: IVRS/ivrs  Snaps: 338-339
-> ordered by Pool, Name
-> N/A value for Begin MB or End MB indicates the size of that Pool/Name was
   insignificant, or zero in that snapshot

Pool   Name                                 Begin MB         End MB  % Diff
------ ------------------------------ -------------- -------------- -------
java   free memory                               2.8            2.7   -3.98
java   joxlod exec hp                            5.0            5.1    2.23
java   joxs heap                                  .2             .2    0.00
large  ASM map operations hashta                  .2             .2    0.00
large  CTWR dba buffer                            .4             .4    0.00
large  PX msg pool                                .2             .2   20.83
large  free memory                               1.2            1.2   -3.32
large  krcc extent chunk                         2.0            2.0    0.00
shared ASH buffers                               2.0            2.0    0.00
shared CCursor                                   3.0            3.3   11.37
shared Heap0: KGL                                1.7            1.7    2.13
shared KCB Table Scan Buffer                     3.8            3.8    0.00
shared KGH: NO ACCESS                           12.0           13.9   16.25
shared KGLS heap                                 2.6            3.4   31.25
shared KQR M PO                                  2.2            2.1   -3.77
shared KSFD SGA I/O b                            3.8            3.8    0.00
shared KTI-UNDO                                  1.2            1.2    0.00
shared PCursor                                   2.0            2.0    1.70
shared PL/SQL DIANA                              N/A            1.1     N/A
shared PL/SQL MPCODE                             2.3            2.3    1.07
shared event statistics per sess                 1.3            1.3    0.00
shared free memory                              22.1           20.9   -5.56
shared kglsim hash table bkts                    2.0            2.0    0.00
shared library cache                             5.7            5.7   -0.52
shared private strands                           1.1            1.1    0.00
shared row cache                                 3.6            3.6    0.00
shared sql area                                  9.7           15.0   54.05
stream free memory                               4.0            4.0    0.00
       buffer_cache                            204.0          196.0   -3.92
       fixed_sga                                 1.2            1.2    0.00
       log_buffer                                2.8            2.8    0.00
          -------------------------------------------------------------

Streams CPU/IO Usage                       DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Streams processes ordered by CPU usage
-> CPU and I/O Time in micro seconds

Session Type                    CPU Time  User I/O Time   Sys I/O Time
------------------------- -------------- -------------- --------------
QMON Coordinator                  31,890              0              0
QMON Slaves                       24,062              0              0
          -------------------------------------------------------------

Streams Capture                             DB/Inst: IVRS/ivrs  Snaps: 338-339

                  No data exists for this section of the report.
          -------------------------------------------------------------

Streams Apply                               DB/Inst: IVRS/ivrs  Snaps: 338-339

                  No data exists for this section of the report.
          -------------------------------------------------------------

Buffered Queues                             DB/Inst: IVRS/ivrs  Snaps: 338-339

                  No data exists for this section of the report.
          -------------------------------------------------------------

Buffered Subscribers                        DB/Inst: IVRS/ivrs  Snaps: 338-339

                  No data exists for this section of the report.
          -------------------------------------------------------------

Rule Set                                    DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Rule Sets ordered by Evaluations

                                       Fast      SQL      CPU  Elapsed
Ruleset Name                 Evals    Evals    Execs     Time     Time
------------------------- -------- -------- -------- -------- --------
SYS.ALERT_QUE_R                  0        0        0        0        0
          -------------------------------------------------------------

Resource Limit Stats                            DB/Inst: IVRS/ivrs  Snap: 339

                  No data exists for this section of the report.
          -------------------------------------------------------------

init.ora Parameters                        DB/Inst: IVRS/ivrs  Snaps: 338-339

                                                                End value
Parameter Name                Begin value                       (if different)
----------------------------- --------------------------------- --------------
audit_file_dest               /oracle/app/oracle/admin/ivrs/adu
audit_sys_operations          TRUE
background_dump_dest          /oracle/app/oracle/admin/ivrs/bdu
compatible                    10.2.0.3.0
control_files                 +DATA_1/ivrs/control01.ctl, +DATA
core_dump_dest                /oracle/app/oracle/admin/ivrs/cdu
db_block_size                 8192
db_domain                     bayantel.com
db_file_multiblock_read_count 16
db_name                       ivrs
db_recovery_file_dest         /flash_reco/flash_recovery_area
db_recovery_file_dest_size    161061273600
dispatchers                   (PROTOCOL=TCP) (SERVICE=ivrsXDB)
job_queue_processes           10
log_archive_dest_1            LOCATION=USE_DB_RECOVERY_FILE_DES
log_archive_format            ivrs_%t_%s_%r.arc
open_cursors                  300
os_authent_prefix
os_roles                      FALSE
pga_aggregate_target          108003328
processes                     150
recyclebin                    OFF
remote_login_passwordfile     EXCLUSIVE
remote_os_authent             FALSE
remote_os_roles               FALSE
sga_target                    327155712
spfile                        +DATA_1/ivrs/spfileivrs.ora
sql92_security                TRUE
statistics_level              TYPICAL
undo_management               AUTO
undo_tablespace               UNDOTBS1
user_dump_dest                /oracle/app/oracle/admin/ivrs/udu
          -------------------------------------------------------------

End of Report
}}}
http://karlarao.wordpress.com/scripts-resources/
<<<
AWR Tableau Toolkit – create your own performance data warehouse and easily characterize the workload, CPU, and IO of an entire cluster (30 instances), with months of perf data, in less than 1 hour (updated 20120912)
I no longer update this toolkit. It served as version 1 of a more comprehensive tool called eAdam, which I started with Carlos Sierra (the main developer), Frits Hoogland, and Randy Johnson at Enkitec.
<<<
see [[eAdam]]
''AWR tableau and R toolkit - blueprint'' http://www.evernote.com/shard/s48/sh/e20c905c-694e-4950-8d57-e890a208c76b/189e50f39a739500e6b98b4511751cea

''Workload visualization notes:'' http://www.evernote.com/shard/s48/sh/0918cd46-2cec-494e-9932-eb725712bb68/9d211e0a7876d7dc41c98c0416675965

''check this out for the mind map version'' https://sites.google.com/site/karlarao/home/mindmap/awr-tableau-and-r-toolkit-visualization-examples

''the viz on the tiddlers comes from the following data sets:''
<<<
{{{
topevents
sysstat
io workload
cpu workload
services
topsql

General Workload 
	wait class, wait events, executes/sec
CPU 
	AAS CPU, load average, NUM_CPUs
IO
	latency, IOPS breakdown, MB/s
Memory
	SGA , max PGA usage, physical memory
Storage 
	total storage size, reco size 
Services
	distribution of workload/modules
Top SQL 
	modules, SQL type, SQL_ID PIOs/LIOs , PX usage
}}}
<<<
With these data points I can easily characterize the overall behavior of the cluster.
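
The reduction from raw snapshot data to characterization metrics can be sketched as follows. This is a minimal illustration, not part of the original toolkit; the function names and the sample snapshot values are hypothetical, and a real run would pull DB time, busy time, and CPU count from the AWR extracts (e.g. the sysstat and cpu workload data sets above).

```python
# Minimal sketch: reduce AWR-style snapshot values to two of the
# characterization metrics used above (AAS and host CPU utilization).
# All inputs here are hypothetical sample numbers, not real extract data.

def aas(db_time_sec: float, elapsed_sec: float) -> float:
    """Average Active Sessions: DB time accumulated per second of wall clock."""
    return db_time_sec / elapsed_sec

def cpu_utilization(busy_sec: float, elapsed_sec: float, num_cpus: int) -> float:
    """Host CPU utilization as a fraction of total CPU capacity."""
    return busy_sec / (elapsed_sec * num_cpus)

if __name__ == "__main__":
    # Hypothetical 10-minute snapshot interval on an 8-CPU node
    elapsed = 600.0
    print(f"AAS: {aas(1800.0, elapsed):.1f}")                  # -> 3.0
    print(f"CPU: {cpu_utilization(2400.0, elapsed, 8):.0%}")   # -> 50%
```

Comparing AAS against the CPU count is the quick first cut: AAS well above NUM_CPUS with high CPU utilization points at CPU pressure, while high AAS with idle CPUs points at wait-bound workload.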




{{{
set arraysize 5000

COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

-- ttitle center 'AWR CPU and IO Workload Report' skip 2
set pagesize 50000
set linesize 550

col instname    format a15              heading instname        -- instname
col hostname    format a30              heading hostname        -- hostname
col tm          format a17              heading tm              -- "tm"
col id          format 99999            heading id              -- "snapid"
col inst        format 90               heading inst            -- "inst"
col dur         format 999990.00        heading dur             -- "dur"
col cpu         format 90               heading cpu             -- "cpu"
col cap         format 9999990.00       heading cap             -- "capacity"
col dbt         format 999990.00        heading dbt             -- "DBTime"
col dbc         format 99990.00         heading dbc             -- "DBcpu"
col bgc         format 99990.00         heading bgc             -- "BGcpu"
col rman        format 9990.00          heading rman            -- "RMANcpu"
col aas         format 990.0            heading aas             -- "AAS"
col totora      format 9999990.00       heading totora          -- "TotalOracleCPU"
col busy        format 9999990.00       heading busy            -- "BusyTime"
col load        format 990.00           heading load            -- "OSLoad"
col totos       format 9999990.00       heading totos           -- "TotalOSCPU"
col mem         format 999990.00        heading mem             -- "PhysicalMemorymb"
col IORs        format 9990.000         heading IORs            -- "IOPsr"
col IOWs        format 9990.000         heading IOWs            -- "IOPsw"
col IORedo      format 9990.000         heading IORedo          -- "IOPsredo"
col IORmbs      format 9990.000         heading IORmbs          -- "IOrmbs"
col IOWmbs      format 9990.000         heading IOWmbs          -- "IOwmbs"
col redosizesec format 9990.000         heading redosizesec     -- "Redombs"
col logons      format 990              heading logons          -- "Sess"
col logone      format 990              heading logone          -- "SessEnd"
col exsraw      format 99990.000        heading exsraw          -- "Execrawdelta"
col exs         format 9990.000         heading exs             -- "Execs"
col ucs         format 9990.000         heading ucs             -- "UserCalls"
col ucoms       format 9990.000         heading ucoms           -- "Commit"
col urs         format 9990.000         heading urs             -- "Rollback"
col oracpupct   format 990              heading oracpupct       -- "OracleCPUPct"
col rmancpupct  format 990              heading rmancpupct      -- "RMANCPUPct"
col oscpupct    format 990              heading oscpupct        -- "OSCPUPct"
col oscpuusr    format 990              heading oscpuusr        -- "USRPct"
col oscpusys    format 990              heading oscpusys        -- "SYSPct"
col oscpuio     format 990              heading oscpuio         -- "IOPct"

SELECT * FROM
( 
  SELECT trim('&_instname') instname, 
         trim('&_dbid') db_id, 
         trim('&_hostname') hostname, 
          s0.snap_id id,
         TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
         s0.instance_number inst,
  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
  s3t1.value AS cpu,
  (round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value cap,
  (s5t1.value - s5t0.value) / 1000000 as dbt,
  (s6t1.value - s6t0.value) / 1000000 as dbc,
  (s7t1.value - s7t0.value) / 1000000 as bgc,
  round(DECODE(s8t1.value,null,'null',(s8t1.value - s8t0.value) / 1000000),2) as rman,
  ((s5t1.value - s5t0.value) / 1000000)/60 /  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,
  round(((s6t1.value - s6t0.value) / 1000000) + ((s7t1.value - s7t0.value) / 1000000),2) totora,
  -- s1t1.value - s1t0.value AS busy,  -- this is osstat BUSY_TIME
  round(s2t1.value,2) AS load,
  (s1t1.value - s1t0.value)/100 AS totos,
  ((round(((s6t1.value - s6t0.value) / 1000000) + ((s7t1.value - s7t0.value) / 1000000),2)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oracpupct,
  ((round(DECODE(s8t1.value,null,'null',(s8t1.value - s8t0.value) / 1000000),2)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as rmancpupct,
  (((s1t1.value - s1t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpupct,
  (((s17t1.value - s17t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpuusr,
  (((s18t1.value - s18t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpusys,
  (((s19t1.value - s19t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpuio
FROM dba_hist_snapshot s0,
  dba_hist_snapshot s1,
  dba_hist_osstat s1t0,         -- BUSY_TIME
  dba_hist_osstat s1t1,
  dba_hist_osstat s17t0,        -- USER_TIME
  dba_hist_osstat s17t1,
  dba_hist_osstat s18t0,        -- SYS_TIME
  dba_hist_osstat s18t1,
  dba_hist_osstat s19t0,        -- IOWAIT_TIME
  dba_hist_osstat s19t1,
  dba_hist_osstat s2t1,         -- osstat just get the end value
  dba_hist_osstat s3t1,         -- osstat just get the end value
  dba_hist_sys_time_model s5t0,
  dba_hist_sys_time_model s5t1,
  dba_hist_sys_time_model s6t0,
  dba_hist_sys_time_model s6t1,
  dba_hist_sys_time_model s7t0,
  dba_hist_sys_time_model s7t1,
  dba_hist_sys_time_model s8t0,
  dba_hist_sys_time_model s8t1
WHERE s0.dbid            = &_dbid    -- CHANGE THE DBID HERE!
AND s1.dbid              = s0.dbid
AND s1t0.dbid            = s0.dbid
AND s1t1.dbid            = s0.dbid
AND s2t1.dbid            = s0.dbid
AND s3t1.dbid            = s0.dbid
AND s5t0.dbid            = s0.dbid
AND s5t1.dbid            = s0.dbid
AND s6t0.dbid            = s0.dbid
AND s6t1.dbid            = s0.dbid
AND s7t0.dbid            = s0.dbid
AND s7t1.dbid            = s0.dbid
AND s8t0.dbid            = s0.dbid
AND s8t1.dbid            = s0.dbid
AND s17t0.dbid            = s0.dbid
AND s17t1.dbid            = s0.dbid
AND s18t0.dbid            = s0.dbid
AND s18t1.dbid            = s0.dbid
AND s19t0.dbid            = s0.dbid
AND s19t1.dbid            = s0.dbid
--AND s0.instance_number   = &_instancenumber   -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number   = s0.instance_number
AND s1t0.instance_number = s0.instance_number
AND s1t1.instance_number = s0.instance_number
AND s2t1.instance_number = s0.instance_number
AND s3t1.instance_number = s0.instance_number
AND s5t0.instance_number = s0.instance_number
AND s5t1.instance_number = s0.instance_number
AND s6t0.instance_number = s0.instance_number
AND s6t1.instance_number = s0.instance_number
AND s7t0.instance_number = s0.instance_number
AND s7t1.instance_number = s0.instance_number
AND s8t0.instance_number = s0.instance_number
AND s8t1.instance_number = s0.instance_number
AND s17t0.instance_number = s0.instance_number
AND s17t1.instance_number = s0.instance_number
AND s18t0.instance_number = s0.instance_number
AND s18t1.instance_number = s0.instance_number
AND s19t0.instance_number = s0.instance_number
AND s19t1.instance_number = s0.instance_number
AND s1.snap_id           = s0.snap_id + 1
AND s1t0.snap_id         = s0.snap_id
AND s1t1.snap_id         = s0.snap_id + 1
AND s2t1.snap_id         = s0.snap_id + 1
AND s3t1.snap_id         = s0.snap_id + 1
AND s5t0.snap_id         = s0.snap_id
AND s5t1.snap_id         = s0.snap_id + 1
AND s6t0.snap_id         = s0.snap_id
AND s6t1.snap_id         = s0.snap_id + 1
AND s7t0.snap_id         = s0.snap_id
AND s7t1.snap_id         = s0.snap_id + 1
AND s8t0.snap_id         = s0.snap_id
AND s8t1.snap_id         = s0.snap_id + 1
AND s17t0.snap_id         = s0.snap_id
AND s17t1.snap_id         = s0.snap_id + 1
AND s18t0.snap_id         = s0.snap_id
AND s18t1.snap_id         = s0.snap_id + 1
AND s19t0.snap_id         = s0.snap_id
AND s19t1.snap_id         = s0.snap_id + 1
AND s1t0.stat_name       = 'BUSY_TIME'
AND s1t1.stat_name       = s1t0.stat_name
AND s17t0.stat_name       = 'USER_TIME'
AND s17t1.stat_name       = s17t0.stat_name
AND s18t0.stat_name       = 'SYS_TIME'
AND s18t1.stat_name       = s18t0.stat_name
AND s19t0.stat_name       = 'IOWAIT_TIME'
AND s19t1.stat_name       = s19t0.stat_name
AND s2t1.stat_name       = 'LOAD'
AND s3t1.stat_name       = 'NUM_CPUS'
AND s5t0.stat_name       = 'DB time'
AND s5t1.stat_name       = s5t0.stat_name
AND s6t0.stat_name       = 'DB CPU'
AND s6t1.stat_name       = s6t0.stat_name
AND s7t0.stat_name       = 'background cpu time'
AND s7t1.stat_name       = s7t0.stat_name
AND s8t0.stat_name       = 'RMAN cpu time (backup/restore)'
AND s8t1.stat_name       = s8t0.stat_name
)
-- WHERE 
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id  in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (336)
-- aas > 1
-- oracpupct > 50
-- oscpupct > 50
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1     -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900     -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss')     -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
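The query above repeats one piece of arithmetic over and over: diff two snapshots, convert time model microseconds and OSSTAT centiseconds to seconds, and divide by the interval's CPU capacity. Here is a minimal Python sketch of that per-snapshot math; the deltas below are made-up sample values, and the integer rounding mirrors the `format 990` column masks rather than anything in the query itself.

```python
# Sketch of the per-snapshot arithmetic in the AWR CPU workload query above.
# Sample deltas are made up; units follow the views the query reads from.

def cpu_workload(dur_min, num_cpus, dbtime_us, dbcpu_us, bgcpu_us, busy_cs):
    """dur_min: snapshot interval in minutes; *_us: time model deltas in
    microseconds; busy_cs: OSSTAT BUSY_TIME delta in centiseconds."""
    cap = dur_min * 60 * num_cpus             # CPU seconds available ("capacity")
    dbt = dbtime_us / 1e6                     # DB time delta, seconds
    aas = dbt / 60 / dur_min                  # average active sessions
    totora = (dbcpu_us + bgcpu_us) / 1e6      # DB CPU + background cpu, seconds
    totos = busy_cs / 100                     # OS busy time, seconds
    return {
        "cap": cap,
        "aas": round(aas, 1),
        "oracpupct": round(totora / cap * 100),  # Oracle's share of host CPU
        "oscpupct": round(totos / cap * 100),    # all processes' share of host CPU
    }

# One hour between snapshots on an 8-CPU host:
m = cpu_workload(dur_min=60, num_cpus=8, dbtime_us=7.2e9,
                 dbcpu_us=3.6e9, bgcpu_us=0.36e9, busy_cs=1_152_000)
print(m)
```

With these sample deltas the host has 28,800 CPU seconds of capacity, the database averages 2 active sessions, and Oracle accounts for 14% of host CPU while the OS as a whole is 40% busy; a large gap between oscpupct and oracpupct points at non-Oracle load on the box.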
''For Exadata, use the following formulas to create Calculated Fields in Tableau. These already account for ASM Normal redundancy (the * 2); change it to * 3 for High redundancy.'' Also check this blog post for more on the effect of ASM redundancy on performance: http://karlarao.wordpress.com/2012/06/29/the-effect-of-asm-redundancyparity-on-readwrite-iops-slob-test-case-for-exadata-and-non-exa-environments/
{{{
Flash IOPS       ((SIORS+MIORS) + ((SIOWS+MIOWS+IOREDO) * 2)) * ([FLASHCACHE] * .01)
Flash RIOPS      (SIORS+MIORS) * ([FLASHCACHE] *.01)
Flash WIOPS      ((SIOWS+MIOWS+IOREDO) * 2) * ([FLASHCACHE] * .01)
HD IOPS          ((SIORS+MIORS) + ((SIOWS+MIOWS+IOREDO) * 2)) * ((100 - [FLASHCACHE]) * .01)
HD RIOPS         (SIORS+MIORS) * ((100 - [FLASHCACHE]) * .01)
HD WIOPS         (((SIOWS+MIOWS+IOREDO) * 2)) * ((100 - [FLASHCACHE]) * .01)
}}}
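The fields above split total IOPS between flash and hard disk using the flash cache hit percentage, after multiplying the write side by the ASM mirroring factor. A small Python sketch of the same split (sample rates are made up; the parity parameter is my naming for the * 2 / * 3 redundancy factor):

```python
# Sketch of the Tableau calculated fields above: split IOPS into flash vs HD
# using FlashCacheHitsPct, with writes multiplied by the ASM redundancy factor.

def split_iops(siors, miors, siows, miows, ioredo, flashcache_pct, parity=2):
    """parity=2 for ASM Normal redundancy, 3 for High redundancy."""
    reads = siors + miors                       # single + multi block read IOPS
    writes = (siows + miows + ioredo) * parity  # every write lands on each mirror
    fc = flashcache_pct * 0.01
    return {
        "flash_iops":  (reads + writes) * fc,
        "flash_riops": reads * fc,
        "flash_wiops": writes * fc,
        "hd_iops":     (reads + writes) * (1 - fc),
        "hd_riops":    reads * (1 - fc),
        "hd_wiops":    writes * (1 - fc),
    }

# Made-up sample: 1000 read IOPS, 500 raw write IOPS, 80% flash cache hits
io = split_iops(siors=800, miors=200, siows=300, miows=100, ioredo=100,
                flashcache_pct=80)
print(io)
```

Note that the hit percentage is applied to reads and writes alike here, exactly as in the formulas above; the caveat below about the hit statistic exceeding 100% is what makes the HD side go negative.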
* note: if you see far more "cell flash cache read hits" than "physical read total IO requests", the statistic is accumulating both reads and writes in the same metric. The effect is that the formulas above produce negative HD IOPS values. To fix this, create a calculated field on the FLASHCACHE measure. Read more about this behavior here: http://blog.tanelpoder.com/2013/12/04/cell-flash-cache-read-hits-vs-cell-writes-to-flash-cache-statistics-on-exadata/
{{{
name the calculated field FLASHCACHE2, then do a find/replace on the formulas above
IF [FLASHCACHE] > 100 THEN 100 ELSE [FLASHCACHE] END
}}}
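The FLASHCACHE2 field is nothing more than a cap at 100: when the hit statistic double-counts writes and climbs above 100%, clamp it so the HD side of the split can never go negative. In Python terms:

```python
# The FLASHCACHE2 calculated field: clamp the flash cache hit percentage at
# 100 so the (100 - pct) HD-side formulas above cannot produce negative IOPS.

def flashcache2(flashcache_pct):
    return 100 if flashcache_pct > 100 else flashcache_pct

# An inflated hit ratio gets capped; a sane one passes through unchanged.
print(flashcache2(137.5), flashcache2(62.0))
```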

''Then generate the following graphs''
1) Flash vs HD IOPS
2) Flash vs HD IOPS with read/write breakdown
3) IO throughput read/write MB/s
4) Polynomial Regression (Degree 4)

''For non-Exadata, just use the following formula to get the total IOPS; name the Calculated Field ALL_IOPS''
{{{
(SIORS+MIORS) + (SIOWS+MIOWS+IOREDO)
}}}

Here are some more examples of ''Different views of IO performance with AWR data''; check the full details at this link: http://goo.gl/i660CZ
I also presented this tiddler at OakTable World 2013: http://www.slideshare.net/karlarao/o-2013-ultimate-exadata-io-monitoring-flash-harddisk-write-back-cache-overhead

SECTION 1: USER IO wait class and cell single block reads latency with curve fitting
SECTION 2: Small IOPS vs Large IOPS
SECTION 3: Flash vs HD IOPS
SECTION 4: Flash vs HD IOPS with read/write breakdown
SECTION 5: IO throughput read/write MB/s
SECTION 6: Drill down on smart scans affecting cell single block latency over a 24-hour period


{{{
set arraysize 5000

COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

-- ttitle center 'AWR IO Workload Report' skip 2
set pagesize 50000
set linesize 550

col instname       format a15              heading instname            -- instname
col hostname       format a30              heading hostname            -- hostname
col tm             format a17              heading tm                  -- "tm"
col id             format 99999            heading id                  -- "snapid"
col inst           format 90               heading inst                -- "inst"
col dur            format 999990.00        heading dur                 -- "dur"
col cpu            format 90               heading cpu                 -- "cpu"
col cap            format 9999990.00       heading cap                 -- "capacity"
col dbt            format 999990.00        heading dbt                 -- "DBTime"
col dbc            format 99990.00         heading dbc                 -- "DBcpu"
col bgc            format 99990.00         heading bgc                 -- "BGcpu"
col rman           format 9990.00          heading rman                -- "RMANcpu"
col aas            format 990.0            heading aas                 -- "AAS"
col totora         format 9999990.00       heading totora              -- "TotalOracleCPU"
col busy           format 9999990.00       heading busy                -- "BusyTime"
col load           format 990.00           heading load                -- "OSLoad"
col totos          format 9999990.00       heading totos               -- "TotalOSCPU"
col mem            format 999990.00        heading mem                 -- "PhysicalMemorymb"
col IORs           format 99990.000        heading IORs                -- "IOPsr"
col IOWs           format 99990.000        heading IOWs                -- "IOPsw"
col IORedo         format 99990.000        heading IORedo              -- "IOPsredo"
col IORmbs         format 99990.000        heading IORmbs              -- "IOrmbs"
col IOWmbs         format 99990.000        heading IOWmbs              -- "IOwmbs"
col redosizesec    format 99990.000        heading redosizesec         -- "Redombs"
col logons         format 990              heading logons              -- "Sess"
col logone         format 990              heading logone              -- "SessEnd"
col exsraw         format 99990.000        heading exsraw              -- "Execrawdelta"
col exs            format 9990.000         heading exs                 -- "Execs"
col oracpupct      format 990              heading oracpupct           -- "OracleCPUPct"
col rmancpupct     format 990              heading rmancpupct          -- "RMANCPUPct"
col oscpupct       format 990              heading oscpupct            -- "OSCPUPct"
col oscpuusr       format 990              heading oscpuusr            -- "USRPct"
col oscpusys       format 990              heading oscpusys            -- "SYSPct"
col oscpuio        format 990              heading oscpuio             -- "IOPct"
col SIORs          format 99990.000        heading SIORs               -- "IOPsSingleBlockr"
col MIORs          format 99990.000        heading MIORs               -- "IOPsMultiBlockr"
col TIORmbs        format 99990.000        heading TIORmbs             -- "Readmbs"
col SIOWs          format 99990.000        heading SIOWs               -- "IOPsSingleBlockw"
col MIOWs          format 99990.000        heading MIOWs               -- "IOPsMultiBlockw"
col TIOWmbs        format 99990.000        heading TIOWmbs             -- "Writembs"
col TIOR           format 99990.000        heading TIOR                -- "TotalIOPsr"
col TIOW           format 99990.000        heading TIOW                -- "TotalIOPsw"
col TIOALL         format 99990.000        heading TIOALL              -- "TotalIOPsALL"
col ALLRmbs        format 99990.000        heading ALLRmbs             -- "TotalReadmbs"
col ALLWmbs        format 99990.000        heading ALLWmbs             -- "TotalWritembs"
col GRANDmbs       format 99990.000        heading GRANDmbs            -- "TotalmbsALL"
col readratio      format 990              heading readratio           -- "ReadRatio"
col writeratio     format 990              heading writeratio          -- "WriteRatio"
col diskiops       format 99990.000        heading diskiops            -- "HWDiskIOPs"
col numdisks       format 99990.000        heading numdisks            -- "HWNumofDisks"
col flashcache     format 990              heading flashcache          -- "FlashCacheHitsPct"
col cellpiob       format 99990.000        heading cellpiob            -- "CellPIOICmbs"
col cellpiobss     format 99990.000        heading cellpiobss          -- "CellPIOICSmartScanmbs"
col cellpiobpreoff format 99990.000        heading cellpiobpreoff      -- "CellPIOpredoffloadmbs"
col cellpiobsi     format 99990.000        heading cellpiobsi          -- "CellPIOstorageindexmbs"
col celliouncomb   format 99990.000        heading celliouncomb        -- "CellIOuncompmbs"
col cellpiobs      format 99990.000        heading cellpiobs           -- "CellPIOsavedfilecreationmbs"
col cellpiobsrman  format 99990.000        heading cellpiobsrman       -- "CellPIOsavedRMANfilerestorembs"

SELECT * FROM
( 
  SELECT trim('&_instname') instname, 
         trim('&_dbid') db_id, 
         trim('&_hostname') hostname, 
         s0.snap_id id,
         TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
         s0.instance_number inst,
  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
   (((s20t1.value - s20t0.value) - (s21t1.value - s21t0.value)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as SIORs,
   ((s21t1.value - s21t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as MIORs,
   (((s22t1.value - s22t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as TIORmbs,
   (((s23t1.value - s23t0.value) - (s24t1.value - s24t0.value)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as SIOWs,
   ((s24t1.value - s24t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as MIOWs,
   (((s25t1.value - s25t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as TIOWmbs,
   ((s13t1.value - s13t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as IORedo, 
   (((s14t1.value - s14t0.value)/1024/1024)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as redosizesec,
    ((s33t1.value - s33t0.value) / (s20t1.value - s20t0.value))*100 as flashcache,
   (((s26t1.value - s26t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as cellpiob,
   (((s31t1.value - s31t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as cellpiobss,
   (((s29t1.value - s29t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as cellpiobpreoff,
   (((s30t1.value - s30t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as cellpiobsi,
   (((s32t1.value - s32t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as celliouncomb,
   (((s27t1.value - s27t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as cellpiobs,
   (((s28t1.value - s28t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as cellpiobsrman
FROM dba_hist_snapshot s0,
  dba_hist_snapshot s1,
  dba_hist_sysstat s13t0,       -- redo writes, diffed
  dba_hist_sysstat s13t1,
  dba_hist_sysstat s14t0,       -- redo size, diffed
  dba_hist_sysstat s14t1,
  dba_hist_sysstat s20t0,       -- physical read total IO requests, diffed
  dba_hist_sysstat s20t1,
  dba_hist_sysstat s21t0,       -- physical read total multi block requests, diffed
  dba_hist_sysstat s21t1,  
  dba_hist_sysstat s22t0,       -- physical read total bytes, diffed
  dba_hist_sysstat s22t1,  
  dba_hist_sysstat s23t0,       -- physical write total IO requests, diffed
  dba_hist_sysstat s23t1,
  dba_hist_sysstat s24t0,       -- physical write total multi block requests, diffed
  dba_hist_sysstat s24t1,
  dba_hist_sysstat s25t0,       -- physical write total bytes, diffed
  dba_hist_sysstat s25t1,
  dba_hist_sysstat s26t0,       -- cell physical IO interconnect bytes, diffed, cellpiob
  dba_hist_sysstat s26t1,
  dba_hist_sysstat s27t0,       -- cell physical IO bytes saved during optimized file creation, diffed, cellpiobs
  dba_hist_sysstat s27t1,
  dba_hist_sysstat s28t0,       -- cell physical IO bytes saved during optimized RMAN file restore, diffed, cellpiobsrman
  dba_hist_sysstat s28t1,
  dba_hist_sysstat s29t0,       -- cell physical IO bytes eligible for predicate offload, diffed, cellpiobpreoff
  dba_hist_sysstat s29t1,
  dba_hist_sysstat s30t0,       -- cell physical IO bytes saved by storage index, diffed, cellpiobsi
  dba_hist_sysstat s30t1,
  dba_hist_sysstat s31t0,       -- cell physical IO interconnect bytes returned by smart scan, diffed, cellpiobss
  dba_hist_sysstat s31t1,
  dba_hist_sysstat s32t0,       -- cell IO uncompressed bytes, diffed, celliouncomb
  dba_hist_sysstat s32t1,
  dba_hist_sysstat s33t0,       -- cell flash cache read hits
  dba_hist_sysstat s33t1
WHERE s0.dbid            = &_dbid    -- CHANGE THE DBID HERE!
AND s1.dbid              = s0.dbid
AND s13t0.dbid            = s0.dbid
AND s13t1.dbid            = s0.dbid
AND s14t0.dbid            = s0.dbid
AND s14t1.dbid            = s0.dbid
AND s20t0.dbid            = s0.dbid
AND s20t1.dbid            = s0.dbid
AND s21t0.dbid            = s0.dbid
AND s21t1.dbid            = s0.dbid
AND s22t0.dbid            = s0.dbid
AND s22t1.dbid            = s0.dbid
AND s23t0.dbid            = s0.dbid
AND s23t1.dbid            = s0.dbid
AND s24t0.dbid            = s0.dbid
AND s24t1.dbid            = s0.dbid
AND s25t0.dbid            = s0.dbid
AND s25t1.dbid            = s0.dbid
AND s26t0.dbid            = s0.dbid
AND s26t1.dbid            = s0.dbid
AND s27t0.dbid            = s0.dbid
AND s27t1.dbid            = s0.dbid
AND s28t0.dbid            = s0.dbid
AND s28t1.dbid            = s0.dbid
AND s29t0.dbid            = s0.dbid
AND s29t1.dbid            = s0.dbid
AND s30t0.dbid            = s0.dbid
AND s30t1.dbid            = s0.dbid
AND s31t0.dbid            = s0.dbid
AND s31t1.dbid            = s0.dbid
AND s32t0.dbid            = s0.dbid
AND s32t1.dbid            = s0.dbid
AND s33t0.dbid            = s0.dbid
AND s33t1.dbid            = s0.dbid
--AND s0.instance_number   = &_instancenumber   -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number   = s0.instance_number
AND s13t0.instance_number = s0.instance_number
AND s13t1.instance_number = s0.instance_number
AND s14t0.instance_number = s0.instance_number
AND s14t1.instance_number = s0.instance_number
AND s20t0.instance_number = s0.instance_number
AND s20t1.instance_number = s0.instance_number
AND s21t0.instance_number = s0.instance_number
AND s21t1.instance_number = s0.instance_number
AND s22t0.instance_number = s0.instance_number
AND s22t1.instance_number = s0.instance_number
AND s23t0.instance_number = s0.instance_number
AND s23t1.instance_number = s0.instance_number
AND s24t0.instance_number = s0.instance_number
AND s24t1.instance_number = s0.instance_number
AND s25t0.instance_number = s0.instance_number
AND s25t1.instance_number = s0.instance_number
AND s26t0.instance_number = s0.instance_number
AND s26t1.instance_number = s0.instance_number
AND s27t0.instance_number = s0.instance_number
AND s27t1.instance_number = s0.instance_number
AND s28t0.instance_number = s0.instance_number
AND s28t1.instance_number = s0.instance_number
AND s29t0.instance_number = s0.instance_number
AND s29t1.instance_number = s0.instance_number
AND s30t0.instance_number = s0.instance_number
AND s30t1.instance_number = s0.instance_number
AND s31t0.instance_number = s0.instance_number
AND s31t1.instance_number = s0.instance_number
AND s32t0.instance_number = s0.instance_number
AND s32t1.instance_number = s0.instance_number
AND s33t0.instance_number = s0.instance_number
AND s33t1.instance_number = s0.instance_number
AND s1.snap_id            = s0.snap_id + 1
AND s13t0.snap_id         = s0.snap_id
AND s13t1.snap_id         = s0.snap_id + 1
AND s14t0.snap_id         = s0.snap_id
AND s14t1.snap_id         = s0.snap_id + 1
AND s20t0.snap_id         = s0.snap_id
AND s20t1.snap_id         = s0.snap_id + 1
AND s21t0.snap_id         = s0.snap_id
AND s21t1.snap_id         = s0.snap_id + 1
AND s22t0.snap_id         = s0.snap_id
AND s22t1.snap_id         = s0.snap_id + 1
AND s23t0.snap_id         = s0.snap_id
AND s23t1.snap_id         = s0.snap_id + 1
AND s24t0.snap_id         = s0.snap_id
AND s24t1.snap_id         = s0.snap_id + 1
AND s25t0.snap_id         = s0.snap_id
AND s25t1.snap_id         = s0.snap_id + 1
AND s26t0.snap_id         = s0.snap_id
AND s26t1.snap_id         = s0.snap_id + 1
AND s27t0.snap_id         = s0.snap_id
AND s27t1.snap_id         = s0.snap_id + 1
AND s28t0.snap_id         = s0.snap_id
AND s28t1.snap_id         = s0.snap_id + 1
AND s29t0.snap_id         = s0.snap_id
AND s29t1.snap_id         = s0.snap_id + 1
AND s30t0.snap_id         = s0.snap_id
AND s30t1.snap_id         = s0.snap_id + 1
AND s31t0.snap_id         = s0.snap_id
AND s31t1.snap_id         = s0.snap_id + 1
AND s32t0.snap_id         = s0.snap_id
AND s32t1.snap_id         = s0.snap_id + 1
AND s33t0.snap_id         = s0.snap_id
AND s33t1.snap_id         = s0.snap_id + 1
AND s13t0.stat_name       = 'redo writes'
AND s13t1.stat_name       = s13t0.stat_name
AND s14t0.stat_name       = 'redo size'
AND s14t1.stat_name       = s14t0.stat_name
AND s20t0.stat_name       = 'physical read total IO requests'
AND s20t1.stat_name       = s20t0.stat_name
AND s21t0.stat_name       = 'physical read total multi block requests'
AND s21t1.stat_name       = s21t0.stat_name
AND s22t0.stat_name       = 'physical read total bytes'
AND s22t1.stat_name       = s22t0.stat_name
AND s23t0.stat_name       = 'physical write total IO requests'
AND s23t1.stat_name       = s23t0.stat_name
AND s24t0.stat_name       = 'physical write total multi block requests'
AND s24t1.stat_name       = s24t0.stat_name
AND s25t0.stat_name       = 'physical write total bytes'
AND s25t1.stat_name       = s25t0.stat_name
AND s26t0.stat_name       = 'cell physical IO interconnect bytes'
AND s26t1.stat_name       = s26t0.stat_name
AND s27t0.stat_name       = 'cell physical IO bytes saved during optimized file creation'
AND s27t1.stat_name       = s27t0.stat_name
AND s28t0.stat_name       = 'cell physical IO bytes saved during optimized RMAN file restore'
AND s28t1.stat_name       = s28t0.stat_name
AND s29t0.stat_name       = 'cell physical IO bytes eligible for predicate offload'
AND s29t1.stat_name       = s29t0.stat_name
AND s30t0.stat_name       = 'cell physical IO bytes saved by storage index'
AND s30t1.stat_name       = s30t0.stat_name
AND s31t0.stat_name       = 'cell physical IO interconnect bytes returned by smart scan'
AND s31t1.stat_name       = s31t0.stat_name
AND s32t0.stat_name       = 'cell IO uncompressed bytes'
AND s32t1.stat_name       = s32t0.stat_name
AND s33t0.stat_name       = 'cell flash cache read hits'
AND s33t1.stat_name       = s33t0.stat_name
)
-- WHERE 
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id  in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (338)
-- aas > 1
-- oscpuio > 50
-- rmancpupct > 0
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1     -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900     -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss')     -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
This SQL pulls only the following statistics for non-Exadata environments:
-- redo writes
-- redo size
-- physical read total IO requests
-- physical read total multi block requests
-- physical write total IO requests
-- physical write total multi block requests

{{{
set arraysize 5000

COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

-- ttitle center 'AWR IO Workload Report' skip 2
set pagesize 50000
set linesize 550

col instname       format a15              heading instname            -- instname
col hostname       format a30              heading hostname            -- hostname
col tm             format a17              heading tm                  -- "tm"
col id             format 99999            heading id                  -- "snapid"
col inst           format 90               heading inst                -- "inst"
col dur            format 999990.00        heading dur                 -- "dur"
col cpu            format 90               heading cpu                 -- "cpu"
col cap            format 9999990.00       heading cap                 -- "capacity"
col dbt            format 999990.00        heading dbt                 -- "DBTime"
col dbc            format 99990.00         heading dbc                 -- "DBcpu"
col bgc            format 99990.00         heading bgc                 -- "BGcpu"
col rman           format 9990.00          heading rman                -- "RMANcpu"
col aas            format 990.0            heading aas                 -- "AAS"
col totora         format 9999990.00       heading totora              -- "TotalOracleCPU"
col busy           format 9999990.00       heading busy                -- "BusyTime"
col load           format 990.00           heading load                -- "OSLoad"
col totos          format 9999990.00       heading totos               -- "TotalOSCPU"
col mem            format 999990.00        heading mem                 -- "PhysicalMemorymb"
col IORs           format 99990.000        heading IORs                -- "IOPsr"
col IOWs           format 99990.000        heading IOWs                -- "IOPsw"
col IORedo         format 99990.000        heading IORedo              -- "IOPsredo"
col IORmbs         format 99990.000        heading IORmbs              -- "IOrmbs"
col IOWmbs         format 99990.000        heading IOWmbs              -- "IOwmbs"
col redosizesec    format 99990.000        heading redosizesec         -- "Redombs"
col logons         format 990              heading logons              -- "Sess"
col logone         format 990              heading logone              -- "SessEnd"
col exsraw         format 99990.000        heading exsraw              -- "Execrawdelta"
col exs            format 9990.000         heading exs                 -- "Execs"
col oracpupct      format 990              heading oracpupct           -- "OracleCPUPct"
col rmancpupct     format 990              heading rmancpupct          -- "RMANCPUPct"
col oscpupct       format 990              heading oscpupct            -- "OSCPUPct"
col oscpuusr       format 990              heading oscpuusr            -- "USRPct"
col oscpusys       format 990              heading oscpusys            -- "SYSPct"
col oscpuio        format 990              heading oscpuio             -- "IOPct"
col SIORs          format 99990.000        heading SIORs               -- "IOPsSingleBlockr"
col MIORs          format 99990.000        heading MIORs               -- "IOPsMultiBlockr"
col TIORmbs        format 99990.000        heading TIORmbs             -- "Readmbs"
col SIOWs          format 99990.000        heading SIOWs               -- "IOPsSingleBlockw"
col MIOWs          format 99990.000        heading MIOWs               -- "IOPsMultiBlockw"
col TIOWmbs        format 99990.000        heading TIOWmbs             -- "Writembs"
col TIOR           format 99990.000        heading TIOR                -- "TotalIOPsr"
col TIOW           format 99990.000        heading TIOW                -- "TotalIOPsw"
col TIOALL         format 99990.000        heading TIOALL              -- "TotalIOPsALL"
col ALLRmbs        format 99990.000        heading ALLRmbs             -- "TotalReadmbs"
col ALLWmbs        format 99990.000        heading ALLWmbs             -- "TotalWritembs"
col GRANDmbs       format 99990.000        heading GRANDmbs            -- "TotalmbsALL"
col readratio      format 990              heading readratio           -- "ReadRatio"
col writeratio     format 990              heading writeratio          -- "WriteRatio"
col diskiops       format 99990.000        heading diskiops            -- "HWDiskIOPs"
col numdisks       format 99990.000        heading numdisks            -- "HWNumofDisks"
col flashcache     format 990              heading flashcache          -- "FlashCacheHitsPct"
col cellpiob       format 99990.000        heading cellpiob            -- "CellPIOICmbs"
col cellpiobss     format 99990.000        heading cellpiobss          -- "CellPIOICSmartScanmbs"
col cellpiobpreoff format 99990.000        heading cellpiobpreoff      -- "CellPIOpredoffloadmbs"
col cellpiobsi     format 99990.000        heading cellpiobsi          -- "CellPIOstorageindexmbs"
col celliouncomb   format 99990.000        heading celliouncomb        -- "CellIOuncompmbs"
col cellpiobs      format 99990.000        heading cellpiobs           -- "CellPIOsavedfilecreationmbs"
col cellpiobsrman  format 99990.000        heading cellpiobsrman       -- "CellPIOsavedRMANfilerestorembs"

SELECT * FROM
( 
  SELECT trim('&_instname') instname, 
         trim('&_dbid') db_id, 
         trim('&_hostname') hostname, 
         s0.snap_id id,
         TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
         s0.instance_number inst,
  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
   (((s20t1.value - s20t0.value) - (s21t1.value - s21t0.value)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as SIORs,
   ((s21t1.value - s21t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as MIORs,
   (((s23t1.value - s23t0.value) - (s24t1.value - s24t0.value)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as SIOWs,
   ((s24t1.value - s24t0.value) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as MIOWs,
   ((s13t1.value - s13t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as IORedo, 
   (((s14t1.value - s14t0.value)/1024/1024)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as redosizesec
FROM dba_hist_snapshot s0,
  dba_hist_snapshot s1,
  dba_hist_sysstat s13t0,       -- redo writes, diffed
  dba_hist_sysstat s13t1,
  dba_hist_sysstat s14t0,       -- redo size, diffed
  dba_hist_sysstat s14t1,
  dba_hist_sysstat s20t0,       -- physical read total IO requests, diffed
  dba_hist_sysstat s20t1,
  dba_hist_sysstat s21t0,       -- physical read total multi block requests, diffed
  dba_hist_sysstat s21t1,  
  dba_hist_sysstat s23t0,       -- physical write total IO requests, diffed
  dba_hist_sysstat s23t1,
  dba_hist_sysstat s24t0,       -- physical write total multi block requests, diffed
  dba_hist_sysstat s24t1
WHERE s0.dbid            = &_dbid    -- CHANGE THE DBID HERE!
AND s1.dbid              = s0.dbid
AND s13t0.dbid            = s0.dbid
AND s13t1.dbid            = s0.dbid
AND s14t0.dbid            = s0.dbid
AND s14t1.dbid            = s0.dbid
AND s20t0.dbid            = s0.dbid
AND s20t1.dbid            = s0.dbid
AND s21t0.dbid            = s0.dbid
AND s21t1.dbid            = s0.dbid
AND s23t0.dbid            = s0.dbid
AND s23t1.dbid            = s0.dbid
AND s24t0.dbid            = s0.dbid
AND s24t1.dbid            = s0.dbid
--AND s0.instance_number   = &_instancenumber   -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number   = s0.instance_number
AND s13t0.instance_number = s0.instance_number
AND s13t1.instance_number = s0.instance_number
AND s14t0.instance_number = s0.instance_number
AND s14t1.instance_number = s0.instance_number
AND s20t0.instance_number = s0.instance_number
AND s20t1.instance_number = s0.instance_number
AND s21t0.instance_number = s0.instance_number
AND s21t1.instance_number = s0.instance_number
AND s23t0.instance_number = s0.instance_number
AND s23t1.instance_number = s0.instance_number
AND s24t0.instance_number = s0.instance_number
AND s24t1.instance_number = s0.instance_number
AND s1.snap_id            = s0.snap_id + 1
AND s13t0.snap_id         = s0.snap_id
AND s13t1.snap_id         = s0.snap_id + 1
AND s14t0.snap_id         = s0.snap_id
AND s14t1.snap_id         = s0.snap_id + 1
AND s20t0.snap_id         = s0.snap_id
AND s20t1.snap_id         = s0.snap_id + 1
AND s21t0.snap_id         = s0.snap_id
AND s21t1.snap_id         = s0.snap_id + 1
AND s23t0.snap_id         = s0.snap_id
AND s23t1.snap_id         = s0.snap_id + 1
AND s24t0.snap_id         = s0.snap_id
AND s24t1.snap_id         = s0.snap_id + 1
AND s13t0.stat_name       = 'redo writes'
AND s13t1.stat_name       = s13t0.stat_name
AND s14t0.stat_name       = 'redo size'
AND s14t1.stat_name       = s14t0.stat_name
AND s20t0.stat_name       = 'physical read total IO requests'
AND s20t1.stat_name       = s20t0.stat_name
AND s21t0.stat_name       = 'physical read total multi block requests'
AND s21t1.stat_name       = s21t0.stat_name
AND s23t0.stat_name       = 'physical write total IO requests'
AND s23t1.stat_name       = s23t0.stat_name
AND s24t0.stat_name       = 'physical write total multi block requests'
AND s24t1.stat_name       = s24t0.stat_name
)
-- WHERE 
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id  in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (338)
-- aas > 1
-- oscpuio > 50
-- rmancpupct > 0
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1     -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900     -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss')     -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
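Both variants above use the same pattern for every rate column: diff the cumulative sysstat counter between adjacent snapshots, then divide by the snapshot duration converted to seconds; single-block IOPs are derived by subtracting the multi block requests delta from the total requests delta. A minimal sketch of that arithmetic (function and variable names here are illustrative, not columns from the script):

```python
# Sketch of the diff-and-divide pattern used by the AWR queries above.
# t0/t1 are the cumulative counter values at snap N and snap N+1;
# dur_min is the snapshot interval in minutes (the "dur" column).

def per_second(t0_value, t1_value, dur_min):
    """Diff two snapshot values and divide by the interval in seconds."""
    return (t1_value - t0_value) / (dur_min * 60)

def split_read_iops(total_req_delta, multiblock_delta, dur_min):
    """Mirror the SIORs/MIORs expressions: single-block IOPs are
    (total requests - multi block requests) over the interval in seconds;
    multi-block IOPs are the multi block requests delta over the same."""
    secs = dur_min * 60
    siors = (total_req_delta - multiblock_delta) / secs
    miors = multiblock_delta / secs
    return siors, miors

# Example: a 30-minute snapshot with 90,000 total read requests,
# 18,000 of them multi-block -> 40.0 single-block and 10.0 multi-block IOPs
siors, miors = split_read_iops(90_000, 18_000, 30)
```

The SQL's `EXTRACT(DAY ...) * 1440 + EXTRACT(HOUR ...) * 60 + ...` expression is just this same interval-to-minutes conversion done on the snapshot timestamps.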
{{{
set arraysize 5000

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

-- ttitle center 'AWR Services Statistics Report' skip 2
set pagesize 50000
set linesize 550

col instname    format a15
col hostname    format a30
col tm          format a15              heading tm           --"Snap|Start|Time"
col id          format 99999            heading id           --"Snap|ID"
col inst        format 90               heading inst         --"i|n|s|t|#"
col dur         format 999990.00        heading dur          --"Snap|Dur|(m)"
col cpu         format 90               heading cpu          --"C|P|U"
col cap         format 9999990.00       heading cap          --"***|Total|CPU|Time|(s)"
col dbt         format 999990.00        heading dbt          --"DB|Time"
col dbc         format 99990.00         heading dbc          --"DB|CPU"
col bgc         format 99990.00         heading bgc          --"Bg|CPU"
col rman        format 9990.00          heading rman         --"RMAN|CPU"
col aas         format 990.0            heading aas          --"A|A|S"
col totora      format 9999990.00       heading totora       --"***|Total|Oracle|CPU|(s)"
col busy        format 9999990.00       heading busy         --"Busy|Time"
col load        format 990.00           heading load         --"OS|Load"
col totos       format 9999990.00       heading totos        --"***|Total|OS|CPU|(s)"
col mem         format 999990.00        heading mem          --"Physical|Memory|(mb)"
col IORs        format 9990.000         heading IORs         --"IOPs|r"
col IOWs        format 9990.000         heading IOWs         --"IOPs|w"
col IORedo      format 9990.000         heading IORedo       --"IOPs|redo"
col IORmbs      format 9990.000         heading IORmbs       --"IO r|(mb)/s"
col IOWmbs      format 9990.000         heading IOWmbs       --"IO w|(mb)/s"
col redosizesec format 9990.000         heading redosizesec  --"Redo|(mb)/s"
col logons      format 990              heading logons       --"Sess"
col logone      format 990              heading logone       --"Sess|End"
col exsraw      format 99990.000        heading exsraw       --"Exec|raw|delta"
col exs         format 9990.000         heading exs          --"Exec|/s"
col oracpupct   format 990              heading oracpupct    --"Oracle|CPU|%"
col rmancpupct  format 990              heading rmancpupct   --"RMAN|CPU|%"
col oscpupct    format 990              heading oscpupct     --"OS|CPU|%"
col oscpuusr    format 990              heading oscpuusr     --"U|S|R|%"
col oscpusys    format 990              heading oscpusys     --"S|Y|S|%"
col oscpuio     format 990              heading oscpuio      --"I|O|%"
col phy_reads   format 99999990.00      heading phy_reads    --"physical|reads"
col log_reads   format 99999990.00      heading log_reads    --"logical|reads"

select  trim('&_instname') instname, trim('&_dbid') db_id, trim('&_hostname') hostname, snap_id,
        TO_CHAR(tm,'MM/DD/YY HH24:MI:SS') tm, 
        inst,
        dur,
        service_name, 
        round(db_time / 1000000, 1) as dbt, 
        round(db_cpu  / 1000000, 1) as dbc,
        phy_reads, 
        log_reads,
        aas
 from (select 
          s1.snap_id,
          s1.tm,
          s1.inst,
          s1.dur,
          s1.service_name, 
          sum(decode(s1.stat_name, 'DB time', s1.diff, 0)) db_time,
          sum(decode(s1.stat_name, 'DB CPU',  s1.diff, 0)) db_cpu,
          sum(decode(s1.stat_name, 'physical reads', s1.diff, 0)) phy_reads,
          sum(decode(s1.stat_name, 'session logical reads', s1.diff, 0)) log_reads,
          round(sum(decode(s1.stat_name, 'DB time', s1.diff, 0))/1000000,1)/60 / s1.dur as aas
   from
     (select s0.snap_id snap_id,
             s0.END_INTERVAL_TIME tm,
             s0.instance_number inst,
            round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
             e.service_name     service_name, 
             e.stat_name        stat_name, 
             e.value - b.value  diff
       from dba_hist_snapshot s0,
            dba_hist_snapshot s1,
            dba_hist_service_stat b,
            dba_hist_service_stat e
       where 
         s0.dbid                  = &_dbid            -- CHANGE THE DBID HERE!
         and s1.dbid              = s0.dbid
         and b.dbid               = s0.dbid
         and e.dbid               = s0.dbid
         --and s0.instance_number   = &_instancenumber  -- CHANGE THE INSTANCE_NUMBER HERE!
         and s1.instance_number   = s0.instance_number
         and b.instance_number    = s0.instance_number
         and e.instance_number    = s0.instance_number
         and s1.snap_id           = s0.snap_id + 1
         and b.snap_id            = s0.snap_id
         and e.snap_id            = s0.snap_id + 1
         and b.stat_id            = e.stat_id
         and b.service_name_hash  = e.service_name_hash) s1
   group by 
     s1.snap_id, s1.tm, s1.inst, s1.dur, s1.service_name
   order by 
     snap_id asc, aas desc, service_name)
-- where 
-- AND TO_CHAR(tm,'D') >= 1     -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(tm,'D') <= 7
-- AND TO_CHAR(tm,'HH24MI') >= 0900     -- Hour
-- AND TO_CHAR(tm,'HH24MI') <= 1800
-- AND tm >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss')     -- Data range
-- AND tm <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
-- snap_id = 338
-- and snap_id >= 335 and snap_id <= 339
-- aas > .5
;
}}}
To derive transactions per second, put this as a calculated field in Tableau:
{{{
TRX = (UCOMS + URS) / DUR 
}}}
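Expressed outside Tableau, the calculated field is simply commits plus rollbacks over duration. The sketch below assumes `UCOMS`, `URS`, and `DUR` carry whatever units the workbook's fields define (they are fed from the `ucoms`, `urs`, and `dur` columns of the query that follows):

```python
def trx(ucoms, urs, dur):
    """Transactions over duration: (user commits + user rollbacks) / dur.
    Field names follow the Tableau formula above; the units are whatever
    the workbook's UCOMS/URS/DUR fields carry."""
    return (ucoms + urs) / dur

# e.g. 120 commits and 30 rollbacks over a duration of 60 -> 2.5
```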

{{{
set arraysize 5000

COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

-- ttitle center 'AWR CPU and IO Workload Report' skip 2
set pagesize 50000
set linesize 550

col instname    format a15              heading instname        -- instname
col hostname    format a30              heading hostname        -- hostname
col tm          format a17              heading tm              -- "tm"
col id          format 99999            heading id              -- "snapid"
col inst        format 90               heading inst            -- "inst"
col dur         format 999990.00        heading dur             -- "dur"
col cpu         format 90               heading cpu             -- "cpu"
col cap         format 9999990.00       heading cap             -- "capacity"
col dbt         format 999990.00        heading dbt             -- "DBTime"
col dbc         format 99990.00         heading dbc             -- "DBcpu"
col bgc         format 99990.00         heading bgc             -- "BGcpu"
col rman        format 9990.00          heading rman            -- "RMANcpu"
col aas         format 990.0            heading aas             -- "AAS"
col totora      format 9999990.00       heading totora          -- "TotalOracleCPU"
col busy        format 9999990.00       heading busy            -- "BusyTime"
col load        format 990.00           heading load            -- "OSLoad"
col totos       format 9999990.00       heading totos           -- "TotalOSCPU"
col mem         format 999990.00        heading mem             -- "PhysicalMemorymb"
col IORs        format 9990.000         heading IORs            -- "IOPsr"
col IOWs        format 9990.000         heading IOWs            -- "IOPsw"
col IORedo      format 9990.000         heading IORedo          -- "IOPsredo"
col IORmbs      format 9990.000         heading IORmbs          -- "IOrmbs"
col IOWmbs      format 9990.000         heading IOWmbs          -- "IOwmbs"
col redosizesec format 9990.000         heading redosizesec     -- "Redombs"
col logons      format 990              heading logons          -- "Sess"
col logone      format 990              heading logone          -- "SessEnd"
col exsraw      format 99990.000        heading exsraw          -- "Execrawdelta"
col exs         format 9990.000         heading exs             -- "Execs"
col ucs         format 9990.000         heading ucs             -- "UserCalls"
col ucoms       format 9990.000         heading ucoms           -- "Commit"
col urs         format 9990.000         heading urs             -- "Rollback"
col lios        format 9999990.00       heading lios            -- "LIOs"
col oracpupct   format 990              heading oracpupct       -- "OracleCPUPct"
col rmancpupct  format 990              heading rmancpupct      -- "RMANCPUPct"
col oscpupct    format 990              heading oscpupct        -- "OSCPUPct"
col oscpuusr    format 990              heading oscpuusr        -- "USRPct"
col oscpusys    format 990              heading oscpusys        -- "SYSPct"
col oscpuio     format 990              heading oscpuio         -- "IOPct"

SELECT * FROM
( 
  SELECT trim('&_instname') instname, 
         trim('&_dbid') db_id, 
         trim('&_hostname') hostname, 
          s0.snap_id id,
         TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
         s0.instance_number inst,
  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
  round(s4t1.value/1024/1024/1024,2) AS memgb,
  round(s37t1.value/1024/1024/1024,2) AS sgagb,
  round(s36t1.value/1024/1024/1024,2) AS pgagb,
     s9t0.value logons, 
   ((s10t1.value - s10t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as exs, 
   ((s40t1.value - s40t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as ucs, 
   ((s38t1.value - s38t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as ucoms, 
   ((s39t1.value - s39t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as urs,
   ((s41t1.value - s41t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as lios
FROM dba_hist_snapshot s0,
  dba_hist_snapshot s1,
  dba_hist_osstat s4t1,         -- osstat just get the end value 
  (select snap_id, dbid, instance_number, sum(value) value from dba_hist_sga group by snap_id, dbid, instance_number) s37t1, -- total SGA allocated, just get the end value
  dba_hist_pgastat s36t1,		-- total PGA allocated, just get the end value 
  dba_hist_sysstat s9t0,        -- logons current, sysstat absolute value should not be diffed
  dba_hist_sysstat s10t0,       -- execute count, diffed
  dba_hist_sysstat s10t1,
  dba_hist_sysstat s38t0,       -- user commits, diffed
  dba_hist_sysstat s38t1,
  dba_hist_sysstat s39t0,       -- user rollbacks, diffed
  dba_hist_sysstat s39t1,
  dba_hist_sysstat s40t0,       -- user calls, diffed
  dba_hist_sysstat s40t1,
  dba_hist_sysstat s41t0,       -- session logical reads, diffed
  dba_hist_sysstat s41t1
WHERE s0.dbid            = &_dbid    -- CHANGE THE DBID HERE!
AND s1.dbid              = s0.dbid
AND s4t1.dbid            = s0.dbid
AND s9t0.dbid            = s0.dbid
AND s10t0.dbid            = s0.dbid
AND s10t1.dbid            = s0.dbid
AND s36t1.dbid            = s0.dbid
AND s37t1.dbid            = s0.dbid
AND s38t0.dbid            = s0.dbid
AND s38t1.dbid            = s0.dbid
AND s39t0.dbid            = s0.dbid
AND s39t1.dbid            = s0.dbid
AND s40t0.dbid            = s0.dbid
AND s40t1.dbid            = s0.dbid
AND s41t0.dbid            = s0.dbid
AND s41t1.dbid            = s0.dbid
--AND s0.instance_number   = &_instancenumber   -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number   = s0.instance_number
AND s4t1.instance_number = s0.instance_number
AND s9t0.instance_number = s0.instance_number
AND s10t0.instance_number = s0.instance_number
AND s10t1.instance_number = s0.instance_number
AND s36t1.instance_number = s0.instance_number
AND s37t1.instance_number = s0.instance_number
AND s38t0.instance_number = s0.instance_number
AND s38t1.instance_number = s0.instance_number
AND s39t0.instance_number = s0.instance_number
AND s39t1.instance_number = s0.instance_number
AND s40t0.instance_number = s0.instance_number
AND s40t1.instance_number = s0.instance_number
AND s41t0.instance_number = s0.instance_number
AND s41t1.instance_number = s0.instance_number
AND s1.snap_id           = s0.snap_id + 1
AND s4t1.snap_id         = s0.snap_id + 1
AND s36t1.snap_id        = s0.snap_id + 1
AND s37t1.snap_id        = s0.snap_id + 1
AND s9t0.snap_id         = s0.snap_id
AND s10t0.snap_id         = s0.snap_id
AND s10t1.snap_id         = s0.snap_id + 1
AND s38t0.snap_id         = s0.snap_id
AND s38t1.snap_id         = s0.snap_id + 1
AND s39t0.snap_id         = s0.snap_id
AND s39t1.snap_id         = s0.snap_id + 1
AND s40t0.snap_id         = s0.snap_id
AND s40t1.snap_id         = s0.snap_id + 1
AND s41t0.snap_id         = s0.snap_id
AND s41t1.snap_id         = s0.snap_id + 1
AND s4t1.stat_name       = 'PHYSICAL_MEMORY_BYTES'
AND s36t1.name           = 'total PGA allocated'
AND s9t0.stat_name       = 'logons current'
AND s10t0.stat_name       = 'execute count'
AND s10t1.stat_name       = s10t0.stat_name
AND s38t0.stat_name       = 'user commits'
AND s38t1.stat_name       = s38t0.stat_name
AND s39t0.stat_name       = 'user rollbacks'
AND s39t1.stat_name       = s39t0.stat_name
AND s40t0.stat_name       = 'user calls'
AND s40t1.stat_name       = s40t0.stat_name
AND s41t0.stat_name       = 'session logical reads'
AND s41t1.stat_name       = s41t0.stat_name
)
-- WHERE 
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id  in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (336)
-- aas > 1
-- oracpupct > 50
-- oscpupct > 50
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1     -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900     -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss')     -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
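All the per-second rate columns above (exs, ucs, ucoms, urs, lios) follow the same pattern: diff a cumulative sysstat counter between two consecutive snapshots, then divide by the snapshot interval converted to seconds via the repeated `EXTRACT(DAY)*1440 + EXTRACT(HOUR)*60 + EXTRACT(MINUTE) + EXTRACT(SECOND)/60` expression. A minimal Python sketch of that arithmetic, using made-up snapshot values (not real AWR data):

```python
from datetime import datetime

def interval_minutes(t0: datetime, t1: datetime) -> float:
    """Snapshot interval in minutes, mirroring the SQL's
    EXTRACT(DAY)*1440 + EXTRACT(HOUR)*60 + EXTRACT(MINUTE) + EXTRACT(SECOND)/60."""
    return round((t1 - t0).total_seconds() / 60, 2)

def per_second_rate(v0: float, v1: float, t0: datetime, t1: datetime) -> float:
    """Diffed cumulative counter divided by elapsed seconds,
    e.g. the 'execute count' delta -> executes/sec (exs)."""
    return (v1 - v0) / (interval_minutes(t0, t1) * 60)

# hypothetical values for one 15-minute snapshot pair
t0 = datetime(2010, 1, 17, 9, 0, 0)
t1 = datetime(2010, 1, 17, 9, 15, 0)
print(per_second_rate(1_000_000, 23_500_000, t0, t1))  # 25000.0 exec/s
```

Note the `dur` column in the script is this same interval expression before the divide, which is why every rate term multiplies it by 60.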
The trimmed-down version below only pulls the following statistics:
- 'PHYSICAL_MEMORY_BYTES'
- 'total PGA allocated'
- 'logons current'
- 'execute count'

That is all the data needed to characterize the SGA/PGA requirements of the databases,
and to characterize the load activity through the "execute count" metric, which largely drives the trx/s metric.
A rate in the range of 25K exec/s indicates a high-load OLTP environment.

{{{
set arraysize 5000

COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

-- ttitle center 'AWR CPU and IO Workload Report' skip 2
set pagesize 50000
set linesize 550

col instname    format a15              heading instname        -- instname
col hostname    format a30              heading hostname        -- hostname
col tm          format a17              heading tm              -- "tm"
col id          format 99999            heading id              -- "snapid"
col inst        format 90               heading inst            -- "inst"
col dur         format 999990.00        heading dur             -- "dur"
col cpu         format 90               heading cpu             -- "cpu"
col cap         format 9999990.00       heading cap             -- "capacity"
col dbt         format 999990.00        heading dbt             -- "DBTime"
col dbc         format 99990.00         heading dbc             -- "DBcpu"
col bgc         format 99990.00         heading bgc             -- "BGcpu"
col rman        format 9990.00          heading rman            -- "RMANcpu"
col aas         format 990.0            heading aas             -- "AAS"
col totora      format 9999990.00       heading totora          -- "TotalOracleCPU"
col busy        format 9999990.00       heading busy            -- "BusyTime"
col load        format 990.00           heading load            -- "OSLoad"
col totos       format 9999990.00       heading totos           -- "TotalOSCPU"
col mem         format 999990.00        heading mem             -- "PhysicalMemorymb"
col IORs        format 9990.000         heading IORs            -- "IOPsr"
col IOWs        format 9990.000         heading IOWs            -- "IOPsw"
col IORedo      format 9990.000         heading IORedo          -- "IOPsredo"
col IORmbs      format 9990.000         heading IORmbs          -- "IOrmbs"
col IOWmbs      format 9990.000         heading IOWmbs          -- "IOwmbs"
col redosizesec format 9990.000         heading redosizesec     -- "Redombs"
col logons      format 990              heading logons          -- "Sess"
col logone      format 990              heading logone          -- "SessEnd"
col exsraw      format 99990.000        heading exsraw          -- "Execrawdelta"
col exs         format 9990.000         heading exs             -- "Execs"
col ucs         format 9990.000         heading ucs             -- "UserCalls"
col ucoms       format 9990.000         heading ucoms           -- "Commit"
col urs         format 9990.000         heading urs             -- "Rollback"
col lios        format 9999990.00       heading lios            -- "LIOs"
col oracpupct   format 990              heading oracpupct       -- "OracleCPUPct"
col rmancpupct  format 990              heading rmancpupct      -- "RMANCPUPct"
col oscpupct    format 990              heading oscpupct        -- "OSCPUPct"
col oscpuusr    format 990              heading oscpuusr        -- "USRPct"
col oscpusys    format 990              heading oscpusys        -- "SYSPct"
col oscpuio     format 990              heading oscpuio         -- "IOPct"

SELECT * FROM
( 
  SELECT trim('&_instname') instname, 
         trim('&_dbid') db_id, 
         trim('&_hostname') hostname, 
          s0.snap_id id,
         TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
         s0.instance_number inst,
  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
  round(s4t1.value/1024/1024/1024,2) AS memgb,
  round(s37t1.value/1024/1024/1024,2) AS sgagb,
  round(s36t1.value/1024/1024/1024,2) AS pgagb,
     s9t0.value logons, 
   ((s10t1.value - s10t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as exs
FROM dba_hist_snapshot s0,
  dba_hist_snapshot s1,
  dba_hist_osstat s4t1,         -- osstat just get the end value 
  (select snap_id, dbid, instance_number, sum(value) value from dba_hist_sga group by snap_id, dbid, instance_number) s37t1, -- total SGA allocated, just get the end value
  dba_hist_pgastat s36t1,		-- total PGA allocated, just get the end value 
  dba_hist_sysstat s9t0,        -- logons current, sysstat absolute value should not be diffed
  dba_hist_sysstat s10t0,       -- execute count, diffed
  dba_hist_sysstat s10t1
WHERE s0.dbid            = &_dbid    -- CHANGE THE DBID HERE!
AND s1.dbid              = s0.dbid
AND s4t1.dbid            = s0.dbid
AND s9t0.dbid            = s0.dbid
AND s10t0.dbid            = s0.dbid
AND s10t1.dbid            = s0.dbid
AND s36t1.dbid            = s0.dbid
AND s37t1.dbid            = s0.dbid
--AND s0.instance_number   = &_instancenumber   -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number   = s0.instance_number
AND s4t1.instance_number = s0.instance_number
AND s9t0.instance_number = s0.instance_number
AND s10t0.instance_number = s0.instance_number
AND s10t1.instance_number = s0.instance_number
AND s36t1.instance_number = s0.instance_number
AND s37t1.instance_number = s0.instance_number
AND s1.snap_id           = s0.snap_id + 1
AND s4t1.snap_id         = s0.snap_id + 1
AND s36t1.snap_id        = s0.snap_id + 1
AND s37t1.snap_id        = s0.snap_id + 1
AND s9t0.snap_id         = s0.snap_id
AND s10t0.snap_id         = s0.snap_id
AND s10t1.snap_id         = s0.snap_id + 1
AND s4t1.stat_name       = 'PHYSICAL_MEMORY_BYTES'
AND s36t1.name           = 'total PGA allocated'
AND s9t0.stat_name       = 'logons current'
AND s10t0.stat_name       = 'execute count'
AND s10t1.stat_name       = s10t0.stat_name
)
-- WHERE 
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id  in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (336)
-- aas > 1
-- oracpupct > 50
-- oscpupct > 50
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1     -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= 0900     -- Hour
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= 1800
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss')     -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
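As the inline comments in the script note, the sysstat values fall into two kinds: cumulative counters ('execute count'), which must be diffed between snap_id N and N+1, and point-in-time gauges ('logons current'), whose absolute value "should not be diffed" and is read from a single snapshot. A sketch of that distinction, with hypothetical snapshot rows:

```python
# Cumulative counters are diffed between consecutive snapshots;
# gauges are taken at face value from one snapshot.
COUNTERS = {"execute count", "user calls", "user commits",
            "user rollbacks", "session logical reads"}
GAUGES = {"logons current"}

def stat_value(name: str, snap0: dict, snap1: dict) -> float:
    """snap0/snap1 map stat_name -> value for snap_id N and N+1."""
    if name in COUNTERS:
        return snap1[name] - snap0[name]   # delta over the interval
    if name in GAUGES:
        return snap0[name]                 # absolute, not diffed
    raise KeyError(name)

snap0 = {"execute count": 100, "logons current": 42}
snap1 = {"execute count": 350, "logons current": 45}
print(stat_value("execute count", snap0, snap1))   # 250
print(stat_value("logons current", snap0, snap1))  # 42
```

This is why the script joins two aliases (s10t0/s10t1) for each diffed statistic but only one (s9t0) for 'logons current'.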
{{{
set arraysize 5000

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

-- ttitle center 'AWR Top Events Report' skip 2
set pagesize 50000
set linesize 550

col instname    format a15              
col hostname    format a30              
col snap_id     format 99999            heading snap_id       -- "snapid"   
col tm          format a17              heading tm            -- "tm"       
col inst        format 90               heading inst          -- "inst"     
col dur         format 999990.00        heading dur           -- "dur"      
col event       format a55              heading event         -- "Event"    
col event_rank  format 90               heading event_rank    -- "EventRank"
col waits       format 9999999990.00    heading waits         -- "Waits"    
col time        format 9999999990.00    heading time          -- "Timesec"  
col avgwt       format 99990.00         heading avgwt         -- "Avgwtms"  
col pctdbt      format 9990.0           heading pctdbt        -- "DBTimepct"
col aas         format 990.0            heading aas           -- "Aas"      
col wait_class  format a15              heading wait_class    -- "WaitClass"

spool awr_topevents-tableau-&_instname-&_hostname..csv
select trim('&_instname') instname, trim('&_dbid') db_id, trim('&_hostname') hostname, snap_id, tm, inst, dur, event, event_rank, waits, time, avgwt, pctdbt, aas, wait_class
from 
      (select snap_id, TO_CHAR(tm,'MM/DD/YY HH24:MI:SS') tm, inst, dur, event, waits, time, avgwt, pctdbt, aas, wait_class, 
            DENSE_RANK() OVER (
          PARTITION BY snap_id ORDER BY time DESC) event_rank
      from 
              (
              select * from 
                    (select * from 
                          (select 
                            s0.snap_id snap_id,
                            s0.END_INTERVAL_TIME tm,
                            s0.instance_number inst,
                            round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                    + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                    + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                    + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
                            e.event_name event,
                            e.total_waits - nvl(b.total_waits,0)       waits,
                            round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2)  time,     -- THIS IS EVENT (sec)
                            round (decode ((e.total_waits - nvl(b.total_waits, 0)), 0, to_number(NULL), ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000) / (e.total_waits - nvl(b.total_waits,0))), 2) avgwt,
                            ((round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2)) / NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt,     -- THIS IS EVENT (sec) / DB TIME (sec)
                            (round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2))/60 /  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                            + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                            + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                            + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,     -- THIS IS EVENT (min) / SnapDur (min) TO GET THE % DB CPU ON AAS
                            e.wait_class wait_class
                            from 
                                 dba_hist_snapshot s0,
                                 dba_hist_snapshot s1,
                                 dba_hist_system_event b,
                                 dba_hist_system_event e,
                                 dba_hist_sys_time_model s5t0,
                                 dba_hist_sys_time_model s5t1
                            where 
                              s0.dbid                   = &_dbid            -- CHANGE THE DBID HERE!
                              AND s1.dbid               = s0.dbid
                              and b.dbid(+)             = s0.dbid
                              and e.dbid                = s0.dbid
                              AND s5t0.dbid             = s0.dbid
                              AND s5t1.dbid             = s0.dbid
                              --AND s0.instance_number    = &_instancenumber  -- CHANGE THE INSTANCE_NUMBER HERE!
                              AND s1.instance_number    = s0.instance_number
                              and b.instance_number(+)  = s0.instance_number
                              and e.instance_number     = s0.instance_number
                              AND s5t0.instance_number = s0.instance_number
                              AND s5t1.instance_number = s0.instance_number
                              AND s1.snap_id            = s0.snap_id + 1
                              AND b.snap_id(+)          = s0.snap_id
                              and e.snap_id             = s0.snap_id + 1
                              AND s5t0.snap_id         = s0.snap_id
                              AND s5t1.snap_id         = s0.snap_id + 1
                              AND s5t0.stat_name       = 'DB time'
                              AND s5t1.stat_name       = s5t0.stat_name
                                    and b.event_id            = e.event_id
                                    and e.wait_class          != 'Idle'
                                    and e.total_waits         > nvl(b.total_waits,0)
                                    and e.event_name not in ('smon timer', 
                                                             'pmon timer', 
                                                             'dispatcher timer',
                                                             'dispatcher listen timer',
                                                             'rdbms ipc message')
                                  order by snap_id, time desc, waits desc, event)
                    union all
                              select 
                                       s0.snap_id snap_id,
                                       s0.END_INTERVAL_TIME tm,
                                       s0.instance_number inst,
                                       round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                            + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                            + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                            + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
                                        'CPU time',
                                        0,
                                        round ((s6t1.value - s6t0.value) / 1000000, 2) as time,     -- THIS IS DB CPU (sec)
                                        0,
                                        ((round ((s6t1.value - s6t0.value) / 1000000, 2)) / NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt,     -- THIS IS DB CPU (sec) / DB TIME (sec)..TO GET % OF DB CPU ON DB TIME FOR TOP 5 TIMED EVENTS SECTION
                                        (round ((s6t1.value - s6t0.value) / 1000000, 2))/60 /  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                    + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                    + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                    + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,  -- THIS IS DB CPU (min) / SnapDur (min) TO GET THE % DB CPU ON AAS
                                        'CPU'
                                      from 
                                        dba_hist_snapshot s0,
                                        dba_hist_snapshot s1,
                                        dba_hist_sys_time_model s6t0,
                                        dba_hist_sys_time_model s6t1,
                                        dba_hist_sys_time_model s5t0,
                                        dba_hist_sys_time_model s5t1
                                      WHERE 
                                      s0.dbid                   = &_dbid              -- CHANGE THE DBID HERE!
                                      AND s1.dbid               = s0.dbid
                                      AND s6t0.dbid            = s0.dbid
                                      AND s6t1.dbid            = s0.dbid
                                      AND s5t0.dbid            = s0.dbid
                                      AND s5t1.dbid            = s0.dbid
                                      --AND s0.instance_number    = &_instancenumber    -- CHANGE THE INSTANCE_NUMBER HERE!
                                      AND s1.instance_number    = s0.instance_number
                                      AND s6t0.instance_number = s0.instance_number
                                      AND s6t1.instance_number = s0.instance_number
                                      AND s5t0.instance_number = s0.instance_number
                                      AND s5t1.instance_number = s0.instance_number
                                      AND s1.snap_id            = s0.snap_id + 1
                                      AND s6t0.snap_id         = s0.snap_id
                                      AND s6t1.snap_id         = s0.snap_id + 1
                                      AND s5t0.snap_id         = s0.snap_id
                                      AND s5t1.snap_id         = s0.snap_id + 1
                                      AND s6t0.stat_name       = 'DB CPU'
                                      AND s6t1.stat_name       = s6t0.stat_name
                                      AND s5t0.stat_name       = 'DB time'
                                      AND s5t1.stat_name       = s5t0.stat_name
                    union all
                                      (select 
                                               dbtime.snap_id,
                                               dbtime.tm,
                                               dbtime.inst,
                                               dbtime.dur,
                                               'CPU wait',
                                                0,
                                                round(dbtime.time - accounted_dbtime.time, 2) time,     -- THIS IS UNACCOUNTED FOR DB TIME (sec)
                                                0,
                                                ((dbtime.aas - accounted_dbtime.aas)/ NULLIF(nvl(dbtime.aas,0),0))*100 as pctdbt,     -- THIS IS UNACCOUNTED FOR DB TIME (sec) / DB TIME (sec)
                                                round(dbtime.aas - accounted_dbtime.aas, 2) aas,     -- AAS OF UNACCOUNTED FOR DB TIME
                                                'CPU wait'
                                      from
                                                  (select  
                                                     s0.snap_id, 
                                                     s0.END_INTERVAL_TIME tm,
                                                     s0.instance_number inst,
                                                    round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                    + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                    + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                    + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
                                                    'DB time',
                                                    0,
                                                    round ((s5t1.value - s5t0.value) / 1000000, 2) as time,     -- THIS IS DB time (sec)
                                                    0,
                                                    0,
                                                     (round ((s5t1.value - s5t0.value) / 1000000, 2))/60 /  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                    + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                    + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                    + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,
                                                    'DB time'
                                                  from 
                                                                    dba_hist_snapshot s0,
                                                                    dba_hist_snapshot s1,
                                                                    dba_hist_sys_time_model s5t0,
                                                                    dba_hist_sys_time_model s5t1
                                                                  WHERE 
                                                                  s0.dbid                   = &_dbid              -- CHANGE THE DBID HERE!
                                                                  AND s1.dbid               = s0.dbid
                                                                  AND s5t0.dbid            = s0.dbid
                                                                  AND s5t1.dbid            = s0.dbid
                                                                  --AND s0.instance_number    = &_instancenumber    -- CHANGE THE INSTANCE_NUMBER HERE!
                                                                  AND s1.instance_number    = s0.instance_number
                                                                  AND s5t0.instance_number = s0.instance_number
                                                                  AND s5t1.instance_number = s0.instance_number
                                                                  AND s1.snap_id            = s0.snap_id + 1
                                                                  AND s5t0.snap_id         = s0.snap_id
                                                                  AND s5t1.snap_id         = s0.snap_id + 1
                                                                  AND s5t0.stat_name       = 'DB time'
                                                                  AND s5t1.stat_name       = s5t0.stat_name) dbtime, 
                                                  (select snap_id, inst, sum(time) time, sum(AAS) aas from 
                                                          (select * from (select 
                                                                s0.snap_id snap_id,
                                                                s0.END_INTERVAL_TIME tm,
                                                                s0.instance_number inst,
                                                                round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                        + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                        + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                        + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
                                                                e.event_name event,
                                                                e.total_waits - nvl(b.total_waits,0)       waits,
                                                                round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2)  time,     -- THIS IS EVENT (sec)
                                                                round (decode ((e.total_waits - nvl(b.total_waits, 0)), 0, to_number(NULL), ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000) / (e.total_waits - nvl(b.total_waits,0))), 2) avgwt,
                                                                ((round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2)) / NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt,     -- THIS IS EVENT (sec) / DB TIME (sec)
                                                                (round ((e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000, 2))/60 /  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                                + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                                + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                                + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,     -- THIS IS EVENT (min) / SnapDur (min) TO GET THE % DB CPU ON AAS
                                                                e.wait_class wait_class
                                                          from 
                                                               dba_hist_snapshot s0,
                                                               dba_hist_snapshot s1,
                                                               dba_hist_system_event b,
                                                               dba_hist_system_event e,
                                                               dba_hist_sys_time_model s5t0,
                                                               dba_hist_sys_time_model s5t1
                                                          where 
                                                            s0.dbid                   = &_dbid            -- CHANGE THE DBID HERE!
                                                            AND s1.dbid               = s0.dbid
                                                            and b.dbid(+)             = s0.dbid
                                                            and e.dbid                = s0.dbid
                                                            AND s5t0.dbid             = s0.dbid
                                                            AND s5t1.dbid             = s0.dbid
                                                            --AND s0.instance_number    = &_instancenumber  -- CHANGE THE INSTANCE_NUMBER HERE!
                                                            AND s1.instance_number    = s0.instance_number
                                                            and b.instance_number(+)  = s0.instance_number
                                                            and e.instance_number     = s0.instance_number
                                                            AND s5t0.instance_number = s0.instance_number
                                                            AND s5t1.instance_number = s0.instance_number
                                                            AND s1.snap_id            = s0.snap_id + 1
                                                            AND b.snap_id(+)          = s0.snap_id
                                                            and e.snap_id             = s0.snap_id + 1
                                                            AND s5t0.snap_id         = s0.snap_id
                                                            AND s5t1.snap_id         = s0.snap_id + 1
                                                             AND s5t0.stat_name       = 'DB time'
                                                             AND s5t1.stat_name       = s5t0.stat_name
                                                             and b.event_id(+)         = e.event_id     -- outer join, so events absent from the starting snapshot are kept
                                                            and e.wait_class          != 'Idle'
                                                            and e.total_waits         > nvl(b.total_waits,0)
                                                            and e.event_name not in ('smon timer', 
                                                                                     'pmon timer', 
                                                                                     'dispatcher timer',
                                                                                     'dispatcher listen timer',
                                                                                     'rdbms ipc message')
                                                          order by snap_id, time desc, waits desc, event)
                                                    union all
                                                          select 
                                                                   s0.snap_id snap_id,
                                                                   s0.END_INTERVAL_TIME tm,
                                                                   s0.instance_number inst,
                                                                   round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                        + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                        + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                        + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
                                                                    'CPU time',
                                                                    0,
                                                                    round ((s6t1.value - s6t0.value) / 1000000, 2) as time,     -- THIS IS DB CPU (sec)
                                                                    0,
                                                                    ((round ((s6t1.value - s6t0.value) / 1000000, 2)) / NULLIF(((s5t1.value - nvl(s5t0.value,0)) / 1000000),0))*100 as pctdbt,     -- THIS IS DB CPU (sec) / DB TIME (sec)..TO GET % OF DB CPU ON DB TIME FOR TOP 5 TIMED EVENTS SECTION
                                                                    (round ((s6t1.value - s6t0.value) / 1000000, 2))/60 /  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                                + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                                + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                                + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,  -- DB CPU (min) / SnapDur (min) = CPU's contribution to AAS
                                                                    'CPU'
                                                                  from 
                                                                    dba_hist_snapshot s0,
                                                                    dba_hist_snapshot s1,
                                                                    dba_hist_sys_time_model s6t0,
                                                                    dba_hist_sys_time_model s6t1,
                                                                    dba_hist_sys_time_model s5t0,
                                                                    dba_hist_sys_time_model s5t1
                                                                  WHERE 
                                                                  s0.dbid                   = &_dbid              -- CHANGE THE DBID HERE!
                                                                  AND s1.dbid               = s0.dbid
                                                                  AND s6t0.dbid            = s0.dbid
                                                                  AND s6t1.dbid            = s0.dbid
                                                                  AND s5t0.dbid            = s0.dbid
                                                                  AND s5t1.dbid            = s0.dbid
                                                                  --AND s0.instance_number    = &_instancenumber    -- CHANGE THE INSTANCE_NUMBER HERE!
                                                                  AND s1.instance_number    = s0.instance_number
                                                                  AND s6t0.instance_number = s0.instance_number
                                                                  AND s6t1.instance_number = s0.instance_number
                                                                  AND s5t0.instance_number = s0.instance_number
                                                                  AND s5t1.instance_number = s0.instance_number
                                                                  AND s1.snap_id            = s0.snap_id + 1
                                                                  AND s6t0.snap_id         = s0.snap_id
                                                                  AND s6t1.snap_id         = s0.snap_id + 1
                                                                  AND s5t0.snap_id         = s0.snap_id
                                                                  AND s5t1.snap_id         = s0.snap_id + 1
                                                                  AND s6t0.stat_name       = 'DB CPU'
                                                                  AND s6t1.stat_name       = s6t0.stat_name
                                                                  AND s5t0.stat_name       = 'DB time'
                                                                  AND s5t1.stat_name       = s5t0.stat_name
                                                          ) group by snap_id, inst) accounted_dbtime
                                                            where dbtime.snap_id = accounted_dbtime.snap_id 
                                                            and dbtime.inst = accounted_dbtime.inst 
                                        )
                    )
              )
      )
WHERE event_rank <= 5
-- AND tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- AND TO_CHAR(tm,'D') >= '1'     -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(tm,'D') <= '7'
-- AND TO_CHAR(tm,'HH24MI') >= '0900'     -- Hour
-- AND TO_CHAR(tm,'HH24MI') <= '1800'
-- AND tm >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss')     -- Data range
-- AND tm <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
-- and snap_id = 495
-- and snap_id >= 495 and snap_id <= 496
-- and event = 'db file sequential read'
-- and event like 'CPU%'
-- and avgwt > 5
-- and aas > .5
-- and wait_class = 'CPU'
-- and wait_class like '%I/O%'
-- and event_rank in (1,2,3)
ORDER BY snap_id;
}}}
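The aas and pctdbt columns in the script above reduce to two simple ratios: time spent in a component divided by wall-clock snapshot duration (AAS), and component time divided by DB time. A minimal Python sketch of that arithmetic, with made-up sample numbers (the function names are mine, not part of the script):

```python
def aas(comp_seconds, snap_dur_minutes):
    """AAS contribution: component time in minutes / snapshot duration in minutes."""
    return round(comp_seconds / 60 / snap_dur_minutes, 2)

def pct_db_time(comp_seconds, db_time_seconds):
    """Share of DB time; mirrors the NULLIF guard against a zero denominator."""
    if db_time_seconds == 0:
        return None  # NULLIF(...,0) in the SQL makes the whole expression NULL
    return round(comp_seconds / db_time_seconds * 100, 2)

# 900 s of 'db file sequential read' in a 30-minute snapshot -> 0.5 AAS
print(aas(900, 30))
# 45 s of DB CPU out of 90 s of DB time -> 50.0 % of DB time
print(pct_db_time(45, 90))
```

An AAS above the CPU core count for a sustained stretch is the usual signal to drill down into the events driving it.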
{{{

set arraysize 5000

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

-- ttitle center 'AWR Top Segments' skip 2
set pagesize 50000
set linesize 550


SELECT  
  trim('&_instname') instname, 
  trim('&_dbid') db_id, 
  trim('&_hostname') hostname, 
  snap_id, tm, inst, 
  owner, 
  tablespace_name, 
  dataobj#, 
  object_name, 
  subobject_name, 
  object_type, 
  physical_rw,
  LOGICAL_READS_DELTA,             
  BUFFER_BUSY_WAITS_DELTA,         
  DB_BLOCK_CHANGES_DELTA,          
  PHYSICAL_READS_DELTA,            
  PHYSICAL_WRITES_DELTA,           
  PHYSICAL_READS_DIRECT_DELTA,     
  PHYSICAL_WRITES_DIRECT_DELTA,    
  ITL_WAITS_DELTA,                 
  ROW_LOCK_WAITS_DELTA,            
  GC_CR_BLOCKS_SERVED_DELTA,       
  GC_CU_BLOCKS_SERVED_DELTA,       
  GC_BUFFER_BUSY_DELTA,            
  GC_CR_BLOCKS_RECEIVED_DELTA,     
  GC_CU_BLOCKS_RECEIVED_DELTA,     
  SPACE_USED_DELTA,                
  SPACE_ALLOCATED_DELTA,           
  TABLE_SCANS_DELTA,               
  CHAIN_ROW_EXCESS_DELTA,          
  PHYSICAL_READ_REQUESTS_DELTA,    
  PHYSICAL_WRITE_REQUESTS_DELTA,   
  OPTIMIZED_PHYSICAL_READS_DELTA,
  seg_rank
FROM 
    ( 
        SELECT 
          r.snap_id, 
          TO_CHAR(r.tm,'MM/DD/YY HH24:MI:SS') tm, 
          r.inst, 
          n.owner, 
          n.tablespace_name, 
          n.dataobj#, 
          n.object_name, 
          CASE 
            WHEN LENGTH(n.subobject_name) < 11 
            THEN n.subobject_name 
            ELSE SUBSTR(n.subobject_name,LENGTH(n.subobject_name)-9) 
          END subobject_name, 
          n.object_type, 
          (r.PHYSICAL_READS_DELTA + r.PHYSICAL_WRITES_DELTA) as physical_rw,
          r.LOGICAL_READS_DELTA,             
          r.BUFFER_BUSY_WAITS_DELTA,         
          r.DB_BLOCK_CHANGES_DELTA,          
          r.PHYSICAL_READS_DELTA,            
          r.PHYSICAL_WRITES_DELTA,           
          r.PHYSICAL_READS_DIRECT_DELTA,     
          r.PHYSICAL_WRITES_DIRECT_DELTA,    
          r.ITL_WAITS_DELTA,                 
          r.ROW_LOCK_WAITS_DELTA,            
          r.GC_CR_BLOCKS_SERVED_DELTA,       
          r.GC_CU_BLOCKS_SERVED_DELTA,       
          r.GC_BUFFER_BUSY_DELTA,            
          r.GC_CR_BLOCKS_RECEIVED_DELTA,     
          r.GC_CU_BLOCKS_RECEIVED_DELTA,     
          r.SPACE_USED_DELTA,                
          r.SPACE_ALLOCATED_DELTA,           
          r.TABLE_SCANS_DELTA,               
          r.CHAIN_ROW_EXCESS_DELTA,          
          r.PHYSICAL_READ_REQUESTS_DELTA,    
          r.PHYSICAL_WRITE_REQUESTS_DELTA,   
          r.OPTIMIZED_PHYSICAL_READS_DELTA,
          DENSE_RANK() OVER (PARTITION BY r.snap_id ORDER BY r.PHYSICAL_READS_DELTA + r.PHYSICAL_WRITES_DELTA DESC) seg_rank
        FROM  
              dba_hist_seg_stat_obj n, 
              ( 
                SELECT  
                  s0.snap_id snap_id, 
                  s0.END_INTERVAL_TIME tm, 
                  s0.instance_number inst, 
                  b.dataobj#, 
                  b.obj#, 
                  b.dbid,                 
                  sum(b.LOGICAL_READS_DELTA) LOGICAL_READS_DELTA,            
                  sum(b.BUFFER_BUSY_WAITS_DELTA) BUFFER_BUSY_WAITS_DELTA,        
                  sum(b.DB_BLOCK_CHANGES_DELTA) DB_BLOCK_CHANGES_DELTA,         
                  sum(b.PHYSICAL_READS_DELTA) PHYSICAL_READS_DELTA,           
                  sum(b.PHYSICAL_WRITES_DELTA) PHYSICAL_WRITES_DELTA,         
                  sum(b.PHYSICAL_READS_DIRECT_DELTA) PHYSICAL_READS_DIRECT_DELTA,    
                  sum(b.PHYSICAL_WRITES_DIRECT_DELTA) PHYSICAL_WRITES_DIRECT_DELTA,  
                  sum(b.ITL_WAITS_DELTA) ITL_WAITS_DELTA,               
                  sum(b.ROW_LOCK_WAITS_DELTA) ROW_LOCK_WAITS_DELTA,           
                  sum(b.GC_CR_BLOCKS_SERVED_DELTA) GC_CR_BLOCKS_SERVED_DELTA,      
                  sum(b.GC_CU_BLOCKS_SERVED_DELTA) GC_CU_BLOCKS_SERVED_DELTA,     
                  sum(b.GC_BUFFER_BUSY_DELTA) GC_BUFFER_BUSY_DELTA,           
                  sum(b.GC_CR_BLOCKS_RECEIVED_DELTA) GC_CR_BLOCKS_RECEIVED_DELTA,    
                  sum(b.GC_CU_BLOCKS_RECEIVED_DELTA) GC_CU_BLOCKS_RECEIVED_DELTA,    
                  sum(b.SPACE_USED_DELTA) SPACE_USED_DELTA,              
                  sum(b.SPACE_ALLOCATED_DELTA) SPACE_ALLOCATED_DELTA,         
                  sum(b.TABLE_SCANS_DELTA) TABLE_SCANS_DELTA,              
                  sum(b.CHAIN_ROW_EXCESS_DELTA) CHAIN_ROW_EXCESS_DELTA,         
                  sum(b.PHYSICAL_READ_REQUESTS_DELTA) PHYSICAL_READ_REQUESTS_DELTA,   
                  sum(b.PHYSICAL_WRITE_REQUESTS_DELTA) PHYSICAL_WRITE_REQUESTS_DELTA,  
                  sum(b.OPTIMIZED_PHYSICAL_READS_DELTA) OPTIMIZED_PHYSICAL_READS_DELTA
                FROM  
                    dba_hist_snapshot s0, 
                    dba_hist_snapshot s1, 
                    dba_hist_seg_stat b 
                WHERE  
                    s0.dbid                  = &_dbid     
                    AND s1.dbid              = s0.dbid 
                    AND b.dbid               = s0.dbid 
                    --AND s0.instance_number   = &_instancenumber   
                    AND s1.instance_number   = s0.instance_number 
                    AND b.instance_number    = s0.instance_number 
                    AND s1.snap_id           = s0.snap_id + 1 
                    AND b.snap_id            = s0.snap_id + 1 
                    --AND s0.snap_id = 35547 
                GROUP BY  
                  s0.snap_id, s0.END_INTERVAL_TIME, s0.instance_number, b.dataobj#, b.obj#, b.dbid 
              ) r 
        WHERE n.dataobj#     = r.dataobj# 
        AND n.obj#           = r.obj# 
        AND n.dbid           = r.dbid 
        AND r.PHYSICAL_READS_DELTA + r.PHYSICAL_WRITES_DELTA > 0 
        ORDER BY physical_rw DESC, 
          object_name, 
          owner, 
          subobject_name 
    ) 
WHERE 
seg_rank <= 10
--and snap_id in (35547,35548,35549)
order by inst, snap_id, seg_rank asc;

}}}
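seg_rank above comes from DENSE_RANK() partitioned by snap_id, so segments tied on physical_rw share a rank and no rank numbers are skipped. A small Python equivalent of that ranking (a hypothetical helper, just to illustrate the semantics):

```python
def dense_rank_desc(values):
    """DENSE_RANK() ... ORDER BY value DESC: 1-based, ties share a rank, no gaps."""
    ranking = {v: i + 1 for i, v in enumerate(sorted(set(values), reverse=True))}
    return [ranking[v] for v in values]

# physical_rw per segment within one snap_id; two segments tie for rank 1
print(dense_rank_desc([500, 500, 300, 100]))  # [1, 1, 2, 3]
```

Because ties share a rank, `seg_rank <= 10` can return more than ten rows per snapshot; use ROW_NUMBER() instead if you need exactly ten.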


''References:''
SPACE_ALLOCATED_DELTA http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:7867887875624

This script is very handy for characterizing resource-hogging SQLs. Once the data is pulled into Tableau, it is easy to sort by the dimensions below and you'll be done in a couple of minutes.
<<<
''Per instance''	
	 top elap / exec
	 top disk reads
	 top buffer gets
	 top executes
	 top app wait
	 top concurrency wait
	 top cluster wait
	
''Per App Module (parsing schema)''

''or by other dimensions (inst, sql_id, module)''
<<<
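Those "top N per dimension" cuts are a simple group-and-sort over the exported rows. A sketch in plain Python; the dictionary keys mimic the column aliases of the top-SQL script below (inst, sql_id, elapexec, dskr), and the helper name and sample data are mine:

```python
from collections import defaultdict

def top_n_per_inst(rows, metric, n=5):
    """Return the top-n rows per instance, ordered by the given metric descending."""
    by_inst = defaultdict(list)
    for row in rows:
        by_inst[row['inst']].append(row)
    return {inst: sorted(grp, key=lambda r: r[metric], reverse=True)[:n]
            for inst, grp in by_inst.items()}

rows = [
    {'inst': 1, 'sql_id': 'abc', 'elapexec': 4.2, 'dskr': 900},
    {'inst': 1, 'sql_id': 'def', 'elapexec': 9.1, 'dskr': 100},
    {'inst': 2, 'sql_id': 'ghi', 'elapexec': 0.3, 'dskr': 5000},
]
top = top_n_per_inst(rows, 'elapexec', n=1)
print(top[1][0]['sql_id'])  # def
```

Swap the metric argument (dskr, bget, exec, appwait, concurwait, clwait) to get each of the cuts listed above.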

{{{
set arraysize 5000

COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

-- ttitle center 'AWR Top SQL Report' skip 2
set pagesize 50000
set linesize 550

col snap_id             format 99999            heading "Snap|ID"
col tm                  format a15              heading "Snap|Start|Time"
col inst                format 90               heading "i|n|s|t|#"
col dur                 format 990.00           heading "Snap|Dur|(m)"
col sql_id              format a15              heading "SQL|ID"
col phv                 format 99999999999      heading "Plan|Hash|Value"
col module              format a50
col elap                format 999990.00        heading "Ela|Time|(s)"
col elapexec            format 999990.00        heading "Ela|Time|per|exec|(s)"
col cput                format 999990.00        heading "CPU|Time|(s)"
col iowait              format 999990.00        heading "IO|Wait|(s)"
col appwait             format 999990.00        heading "App|Wait|(s)"
col concurwait          format 999990.00        heading "Ccr|Wait|(s)"
col clwait              format 999990.00        heading "Cluster|Wait|(s)"
col bget                format 99999999990      heading "LIO"
col dskr                format 99999999990      heading "PIO"
col dpath               format 99999999990      heading "Direct|Writes"
col rowp                format 99999999990      heading "Rows"
col exec                format 9999990          heading "Exec"
col prsc                format 999999990        heading "Parse|Count"
col pxexec              format 9999990          heading "PX|Server|Exec"
col icbytes             format 99999990         heading "IC|MB"
col offloadbytes        format 99999990         heading "Offload|MB"
col offloadreturnbytes  format 99999990         heading "Offload|return|MB"
col flashcachereads     format 99999990         heading "Flash|Cache|MB"
col uncompbytes         format 99999990         heading "Uncomp|MB"
col pctdbt              format 990              heading "DB Time|%"
col aas                 format 990.00           heading "A|A|S"
col time_rank           format 90               heading "Time|Rank"
col sql_text            format a6               heading "SQL|Text"

     select *
       from (
             select
                  trim('&_instname') instname, 
                  trim('&_dbid') db_id, 
                  trim('&_hostname') hostname, 
                  sqt.snap_id snap_id,
                  TO_CHAR(sqt.tm,'MM/DD/YY HH24:MI:SS') tm,
                  sqt.inst inst,
                  sqt.dur dur,
                  sqt.aas aas,
                  nvl((sqt.elap), to_number(null)) elap,
                  nvl((sqt.elapexec), 0) elapexec,
                  nvl((sqt.cput), to_number(null)) cput,
                  sqt.iowait iowait,
                  sqt.appwait appwait,
                  sqt.concurwait concurwait,
                  sqt.clwait clwait,
                  sqt.bget bget, 
                  sqt.dskr dskr, 
                  sqt.dpath dpath,
                  sqt.rowp rowp,
                  sqt.exec exec, 
                  sqt.prsc prsc, 
                  sqt.pxexec pxexec,
                  sqt.icbytes, 
                  sqt.offloadbytes, 
                  sqt.offloadreturnbytes, 
                  sqt.flashcachereads, 
                  sqt.uncompbytes,
                  sqt.time_rank time_rank,
                  sqt.sql_id sql_id,   
                  sqt.phv phv,     
                  sqt.parse_schema parse_schema,
                  substr(to_clob(decode(sqt.module, null, null, sqt.module)),1,50) module, 
                  st.sql_text sql_text     -- PUT/REMOVE COMMENT TO HIDE/SHOW THE SQL_TEXT
             from        (
                          select snap_id, tm, inst, dur, sql_id, phv, parse_schema, module, elap, elapexec, cput, iowait, appwait, concurwait, clwait, bget, dskr, dpath, rowp, exec, prsc, pxexec, icbytes, offloadbytes, offloadreturnbytes, flashcachereads, uncompbytes, aas, time_rank
                          from
                                             (
                                               select 
                                                      s0.snap_id snap_id,
                                                      s0.END_INTERVAL_TIME tm,
                                                      s0.instance_number inst,
                                                      round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
                                                      e.sql_id sql_id, 
                                                      e.plan_hash_value phv, 
                                                      e.parsing_schema_name parse_schema,
                                                      max(e.module) module,
                                                      sum(e.elapsed_time_delta)/1000000 elap,
                                                      decode((sum(e.executions_delta)), 0, to_number(null), ((sum(e.elapsed_time_delta)) / (sum(e.executions_delta)) / 1000000)) elapexec,
                                                      sum(e.cpu_time_delta)/1000000     cput, 
                                                      sum(e.iowait_delta)/1000000 iowait,
                                                      sum(e.apwait_delta)/1000000 appwait,
                                                      sum(e.ccwait_delta)/1000000 concurwait,
                                                      sum(e.clwait_delta)/1000000 clwait,
                                                      sum(e.buffer_gets_delta) bget,
                                                      sum(e.disk_reads_delta) dskr, 
                                                      sum(e.direct_writes_delta) dpath,
                                                      sum(e.rows_processed_delta) rowp,
                                                      sum(e.executions_delta)   exec,
                                                      sum(e.parse_calls_delta) prsc,
                                                      sum(e.px_servers_execs_delta) pxexec,
                                                      sum(e.io_interconnect_bytes_delta)/1024/1024 icbytes,  
                                                      sum(e.io_offload_elig_bytes_delta)/1024/1024 offloadbytes,  
                                                      sum(e.io_offload_return_bytes_delta)/1024/1024 offloadreturnbytes,   
                                                      (sum(e.optimized_physical_reads_delta)* &_blocksize)/1024/1024 flashcachereads,   
                                                      sum(e.cell_uncompressed_bytes_delta)/1024/1024 uncompbytes, 
                                                      (sum(e.elapsed_time_delta)/1000000) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                            + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                            + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                            + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60) aas,
                                                      DENSE_RANK() OVER (
                                                      PARTITION BY s0.snap_id ORDER BY e.elapsed_time_delta DESC) time_rank
                                               from 
                                                   dba_hist_snapshot s0,
                                                   dba_hist_snapshot s1,
                                                   dba_hist_sqlstat e
                                                   where 
                                                    s0.dbid                   = &_dbid                -- CHANGE THE DBID HERE!
                                                    AND s1.dbid               = s0.dbid
                                                    and e.dbid                = s0.dbid                                                
                                                    --AND s0.instance_number    = &_instancenumber      -- CHANGE THE INSTANCE_NUMBER HERE!
                                                    AND s1.instance_number    = s0.instance_number
                                                    and e.instance_number     = s0.instance_number                                                 
                                                    AND s1.snap_id            = s0.snap_id + 1
                                                    and e.snap_id             = s0.snap_id + 1                                              
                                               group by 
                                                    s0.snap_id, s0.END_INTERVAL_TIME, s0.instance_number, e.sql_id, e.plan_hash_value, e.parsing_schema_name, e.elapsed_time_delta, s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME
                                             )
                          where 
                          time_rank <= 5                                     -- GET TOP 5 SQL ACROSS SNAP_IDs... YOU CAN ALTER THIS TO HAVE MORE DATA POINTS
                         ) 
                        sqt,
                        (select sql_id, dbid, nvl(b.name, a.command_type) sql_text from dba_hist_sqltext a, audit_actions b where a.command_type =  b.action(+)) st
             where st.sql_id(+)             = sqt.sql_id
             and st.dbid(+)                 = &_dbid
-- AND TO_CHAR(tm,'D') >= '1'                                                -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(tm,'D') <= '7'
-- AND TO_CHAR(tm,'HH24MI') >= '0900'                                        -- Hour
-- AND TO_CHAR(tm,'HH24MI') <= '1800'
-- AND tm >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss')     -- Data range
-- AND tm <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
-- AND snap_id in (338,339)
-- AND snap_id = 338
-- AND snap_id >= 335 and snap_id <= 339
-- AND lower(st.sql_text) like 'select%'
-- AND lower(st.sql_text) like 'insert%'
-- AND lower(st.sql_text) like 'update%'
-- AND lower(st.sql_text) like 'merge%'
-- AND pxexec > 0
-- AND aas > .5
             order by 
             snap_id                             -- TO GET SQL OUTPUT ACROSS SNAP_IDs SEQUENTIALLY AND ASC
             -- nvl(sqt.elap, -1) desc, sqt.sql_id     -- TO GET SQL OUTPUT BY ELAPSED TIME
             )
-- where rownum <= 20
;
}}}
Sample storage forecast here https://www.evernote.com/shard/s48/sh/9594b0d9-cf51-4bea-b0e1-ce68915c0357/a7626bde5789e0964b25d79bbcf1f6ca

Use the first two SQLs below and extract each of them to a CSV file as input to Tableau; the "per day" query gives the high-level monthly number, while the "per snap_id" query provides the detail.
{{{

-- per day
  SELECT a.month,
    used_size_mb ,
    used_size_mb - LAG (used_size_mb,1) OVER (PARTITION BY a.name ORDER BY a.name,a.month) inc_used_size_mb
  FROM
    (SELECT TO_CHAR(sp.begin_interval_time,'YYYY-MM-DD') month ,     -- ISO date format so the string sort in LAG's ORDER BY stays chronological across years
      ts.name ,
      MAX(ROUND((tsu.tablespace_usedsize* dt.block_size )/(1024*1024),2)) used_size_mb
    FROM DBA_HIST_TBSPC_SPACE_USAGE tsu,
      v$tablespace ts ,
      DBA_HIST_SNAPSHOT sp,
      DBA_TABLESPACES dt
    WHERE tsu.tablespace_id    = ts.ts#
    AND tsu.snap_id            = sp.snap_id
    AND ts.name                = dt.tablespace_name
    GROUP BY TO_CHAR(sp.begin_interval_time,'YYYY-MM-DD'),
      ts.name
    ORDER BY 
      month
    ) A;

-- detail per snap_id
select *  from 
(
SELECT 
  s0.snap_id id,
  TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
  ts.ts#, 
  ts.name ,
  MAX(ROUND((tsu0.tablespace_maxsize    * dt.block_size )/(1024*1024),2) ) max_size_MB ,
  MAX(ROUND((tsu0.tablespace_size       * dt.block_size )/(1024*1024),2) ) cur_size_MB ,
  MAX(ROUND((tsu0.tablespace_usedsize   * dt.block_size )/(1024*1024),2)) used_size_MB,
  MAX(ROUND(( (tsu1.tablespace_usedsize - tsu0.tablespace_usedsize)  * dt.block_size )/(1024*1024),2)) diff_used_size_MB
FROM 
  dba_hist_snapshot s0,
  dba_hist_snapshot s1,
  DBA_HIST_TBSPC_SPACE_USAGE tsu0,
  DBA_HIST_TBSPC_SPACE_USAGE tsu1,
  v$tablespace ts,
  DBA_TABLESPACES dt
WHERE s1.snap_id        = s0.snap_id + 1
AND tsu0.snap_id        = s0.snap_id
AND tsu1.snap_id        = s0.snap_id + 1
AND tsu0.tablespace_id   = ts.ts#
AND tsu1.tablespace_id   = ts.ts#
AND ts.name             = dt.tablespace_name
GROUP BY s0.snap_id, TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS'), ts.ts#, ts.name
)
--where 
--tm > to_char(sysdate - 7, 'MM/DD/YY HH24:MI')
--and name = 'SYSAUX'
--and id in (224,225,226)
--and id = 225
ORDER BY id asc;



-- without dba_hist_snapshot
SELECT 
  sp.snap_id, 
  TO_CHAR (sp.begin_interval_time,'DD-MM-YYYY') days ,
  ts.name ,
  MAX(ROUND((tsu.tablespace_maxsize    * dt.block_size )/(1024*1024),2) ) max_size_MB ,
  MAX(ROUND((tsu.tablespace_size       * dt.block_size )/(1024*1024),2) ) cur_size_MB ,
  MAX(ROUND((tsu.tablespace_usedsize   * dt.block_size )/(1024*1024),2)) used_size_MB
FROM 
  DBA_HIST_TBSPC_SPACE_USAGE tsu,
  v$tablespace ts,
  DBA_HIST_SNAPSHOT sp,
  DBA_TABLESPACES dt
WHERE tsu.tablespace_id = ts.ts#
AND tsu.snap_id         = sp.snap_id
AND ts.name             = dt.tablespace_name
--and sp.snap_id = 225
GROUP BY sp.snap_id, TO_CHAR (sp.begin_interval_time,'DD-MM-YYYY'), ts.name
ORDER BY ts.name,
  days;


}}}
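The LAG(used_size_mb) in the "per day" query is just "current used size minus the previous day's used size for the same tablespace". The same delta computed in Python over an exported-CSV-style list (helper name and sample figures are made up):

```python
def growth_deltas(samples):
    """samples: (tablespace_name, day, used_size_mb) tuples.
    Mirrors LAG(used_size_mb) OVER (PARTITION BY name ORDER BY day):
    the first row per tablespace has no previous value, hence None."""
    prev = {}
    out = []
    for name, day, used in sorted(samples, key=lambda s: (s[0], s[1])):
        delta = None if name not in prev else round(used - prev[name], 2)
        out.append((name, day, used, delta))
        prev[name] = used
    return out

data = [('USERS', '2024-01-01', 100.0), ('USERS', '2024-01-02', 120.5)]
print(growth_deltas(data)[1][3])  # 20.5
```

Note the sort key relies on ISO-formatted (YYYY-MM-DD) day strings so lexical order matches chronological order, just as the SQL's ORDER BY does.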
http://docs.oracle.com/cd/E50790_01/doc/doc.121/e50471/views.htm#SAGUG21166

The 12.1 Exadata storage software release introduced a number of new DBA_HIST views:
DBA_HIST_ASM_BAD_DISK
DBA_HIST_ASM_DISKGROUP
DBA_HIST_ASM_DISKGROUP_STAT
DBA_HIST_CELL_CONFIG
DBA_HIST_CELL_CONFIG_DETAIL
DBA_HIST_CELL_DB
DBA_HIST_CELL_DISKTYPE
DBA_HIST_CELL_DISK_NAME
DBA_HIST_CELL_DISK_SUMMARY
DBA_HIST_CELL_IOREASON
DBA_HIST_CELL_IOREASON_NAME
DBA_HIST_CELL_METRIC_DESC
DBA_HIST_CELL_NAME
DBA_HIST_CELL_OPEN_ALERTS

{{{
19:42:37 SYS@dbfs1> desc V$CELL_DB
 Name                                                                                                                                                  Null?    Type
 ----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
 CELL_NAME                                                                                                                                                      VARCHAR2(400)
 CELL_HASH                                                                                                                                                      NUMBER
 INCARNATION_NUM                                                                                                                                                NUMBER
 TIMESTAMP                                                                                                                                                      DATE
 SRC_DBNAME                                                                                                                                                     VARCHAR2(256)
 SRC_DBID                                                                                                                                                       NUMBER
 METRIC_ID                                                                                                                                                      NUMBER
 METRIC_NAME                                                                                                                                                    VARCHAR2(257)
 METRIC_VALUE                                                                                                                                                   NUMBER
 METRIC_TYPE                                                                                                                                                    VARCHAR2(17)
 CON_ID                                                                                                                                                         NUMBER

19:42:39 SYS@dbfs1> desc dba_hist_cell_db
 Name                                                                                                                                                  Null?    Type
 ----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
 SNAP_ID                                                                                                                                               NOT NULL NUMBER
 DBID                                                                                                                                                  NOT NULL NUMBER
 CELL_HASH                                                                                                                                             NOT NULL NUMBER
 INCARNATION_NUM                                                                                                                                       NOT NULL NUMBER
 SRC_DBID                                                                                                                                              NOT NULL NUMBER
 SRC_DBNAME                                                                                                                                                     VARCHAR2(256)
 DISK_REQUESTS                                                                                                                                                  NUMBER
 DISK_BYTES                                                                                                                                                     NUMBER
 FLASH_REQUESTS                                                                                                                                                 NUMBER
 FLASH_BYTES                                                                                                                                                    NUMBER
 CON_DBID                                                                                                                                                       NUMBER
 CON_ID                                                                                                                                                         NUMBER
}}}
http://timurakhmadeev.wordpress.com/2012/02/15/ruoug-in-saint-petersburg/
http://iusoltsev.wordpress.com/2012/02/12/awr-snapshot-suspend-oracle-11g/

http://arjudba.blogspot.com/2010/08/ora-13516-awr-operation-failed-swrf.html
Mythbusters: do AWR retention days drive SYSAUX tablespace usage?
http://goo.gl/jTjsk

{{{
-- TO VIEW RETENTION INFORMATION
select * from dba_hist_wr_control;
set lines 300
select b.name, a.DBID,
   ((TRUNC(SYSDATE) + a.SNAP_INTERVAL - TRUNC(SYSDATE)) * 86400)/60 AS SNAP_INTERVAL_MINS,
   ((TRUNC(SYSDATE) + a.RETENTION - TRUNC(SYSDATE)) * 86400)/60 AS RETENTION_MINS,
   ((TRUNC(SYSDATE) + a.RETENTION - TRUNC(SYSDATE)) * 86400)/60/60/24 AS RETENTION_DAYS,
   TOPNSQL
from dba_hist_wr_control a, v$database b
where a.dbid = b.dbid;

/*
-- SET RETENTION PERIOD TO 30 DAYS (UNIT IS MINUTES)
execute dbms_workload_repository.modify_snapshot_settings (interval => 30, retention => 43200);
-- SET RETENTION PERIOD TO 6 MONTHS (UNIT IS MINUTES)
exec dbms_workload_repository.modify_snapshot_settings (interval => 30, retention => 262800);
-- SET RETENTION PERIOD TO 365 DAYS (UNIT IS MINUTES)
exec dbms_workload_repository.modify_snapshot_settings (interval => 30, retention => 525600);
*/
}}}
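The TRUNC(SYSDATE) arithmetic above is one way to convert the INTERVAL DAY TO SECOND columns to minutes; EXTRACT does the same thing more directly:

{{{
-- equivalent sketch using EXTRACT on the INTERVAL columns
select extract(day from snap_interval)*24*60
       + extract(hour from snap_interval)*60
       + extract(minute from snap_interval) snap_interval_mins,
       extract(day from retention) retention_days,
       topnsql
  from dba_hist_wr_control;
}}}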

AWR snap difference (15mins and 60mins) effect on CPU sizing
''Consolidate 4 instances with different snap intervals - link'' http://www.evernote.com/shard/s48/sh/b7147378-ebb4-4eec-afb5-61222259ce2d/f94d5d98afea81c3ab10af8016775048
Master Note on AWR Warehouse (Doc ID 1907335.1)

Oracle® Database 2 Day + Performance Tuning Guide 12c Release 1 (12.1)
http://docs.oracle.com/database/121/TDPPT/tdppt_awr_warehouse.htm#TDPPT145

Analyze Long-term Performance Across Enterprise Databases Using AWR Warehouse
https://apex.oracle.com/pls/apex/f?p=44785:24:0::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:9887,29

blogs about it 
http://www.dbi-services.com/index.php/blog/entry/oracle-oem-cloud-control-12104-awr-warehouse
http://dbakevlar.com/2014/07/awr-warehouse-status/
http://www.slideshare.net/kellynpotvin/odtug-webinar-awr-warehouse
good stuff - https://www.doag.org/formes/pubfiles/7371313/docs/Konferenz/2015/vortraege/Oracle%20Datenbanken/2015-K-DB-Kellyn_PotVin-Gorman-The_Power_of_the_AWR_Warehouse_and_Beyond-Manuskript.pdf
good stuff - https://jhdba.wordpress.com/2016/01/04/oem-12-awr-warehouse-not-quite-the-finished-product-yet/
good stuff - http://www.slideshare.net/fullscreen/krishna.setwin/con8449-athreya-awr-warehouse


! the workflow
''source''
when the AWR warehouse is deployed on a target it deploys extraction jobs, and this is an agent push only
the source exports the AWR dump as a job to its local filesystem; this doesn't have to be the same directory path as the warehouse
if you have a 12-month retention it will not export it all at once; it exports 2GB every 3 hours
it exports the oldest snap_id first
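The 2GB-per-3-hours throttle makes the initial catch-up time easy to estimate. A back-of-the-envelope in plain Python (the 160GB figure is just the example from the chat below; numbers are illustrative):

```python
import math

def awr_catchup_hours(total_gb, chunk_gb=2.0, interval_hours=3.0):
    """Estimate AWR Warehouse initial catch-up time given the
    2GB-per-3-hours export throttle (oldest snapshots first)."""
    loads = math.ceil(total_gb / chunk_gb)   # number of dump-file loads
    return loads * interval_hours

# e.g. a source carrying ~160GB of AWR history
hours = awr_catchup_hours(160)
print(hours, hours / 24)   # 240.0 10.0  -> roughly 10 days of throttled ETL
```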

''oms''
the OMS does an agent-to-agent transfer when the source ships the dump file, and puts it in the "warehouse stage directory"
the warehouse-side agent checks that directory every 5 mins; if there's a new file, that file gets imported into the warehouse

''warehouse''
the warehouse runs an ETL job, and databases are partitioned per DBID
there's a mapping table to map each database to its DBID
ideally, if you want to put other data in the warehouse, put it in another schema (views as well), because a warehouse upgrade or patch may wipe it out
as long as you have a Diagnostics Pack license on the source, you are covered for the warehouse license; this is pretty much like the licensing scheme of the OMS


<<<
Karl Arao
kellyn, question on awr warehouse, so it is recommended to be on a separate database. and on that separate database it requires a separate diag pack license?
so let's say you have a diag and tuning pack on the source... and then you've got to have diag pack on the target awr warehouse?
thanks!
Kellyn Pot'vin
11:09am
Kellyn Pot'vin
No,  you're diag pack from source db grants the limited ee license for the awrw
Does that make sense?
Karl Arao
11:09am
Karl Arao
i see
essentially it's free
because source would have diag and tuning pack anyway
Kellyn Pot'vin
11:10am
Kellyn Pot'vin
It took them while to come up with that,  but same for em12c omr
Now, if you add anything, rac or data guard,  then you have to license that
Karl Arao
11:11am
Karl Arao
yeap which is also the case on omr
Kellyn Pot'vin
11:12am
Kellyn Pot'vin
Exactly
Karl Arao
11:12am
Karl Arao
now, can we add anymore tables on the awr warehouse
let's say i want that to be my longer retention data for my metric extension as well
Kellyn Pot'vin
11:13am
Kellyn Pot'vin
Yes, but no additional partitioning and it may impact patches/upgrades
Karl Arao
11:13am
Karl Arao
sure
Kellyn Pot'vin
11:14am
Kellyn Pot'vin
I wouldn't do triggers or partitions
Views are cool
Karl Arao
11:16am
Karl Arao
we are going to evaluate this soon, just upgraded to r4
Kellyn Pot'vin
11:16am
Kellyn Pot'vin
Otn will post a new article of advance awr usage next week from me
Karl Arao
11:16am
Karl Arao
another question,
on my source i have 12months of data
Kellyn Pot'vin
11:17am
Kellyn Pot'vin
And know exadata team is asking how to incorporate it as part of healthcheck design
Karl Arao
11:17am
Karl Arao
will it ETL that to the warehouse
like 1 shot
Kellyn Pot'vin
11:17am
Kellyn Pot'vin
That is my focus in dev right now
Karl Arao
11:17am
Karl Arao
that's going to be 160GB of data
and with exp warehouse
there's going to be an impact for sure
Kellyn Pot'vin
11:18am
Kellyn Pot'vin
No, it has throttle and will take 2gb file loads in 3hr intervals, oldest snapshots first
Karl Arao
11:18am
Karl Arao
I'm just curious on the etl
what do you mean 3hours intervals ?
2GB to finish in 3hours
Kellyn Pot'vin
11:19am
Kellyn Pot'vin
Then go back to 24 hr interval auto after any catchup, same on downtime catchup
<<<















! articles 
https://hemantoracledba.blogspot.com/2016/12/122-new-features-4-awr-for-pluggable.html
https://blog.dbi-services.com/oracle-12cr2-awr-views-in-multitenant/
http://oracledbpro.blogspot.com/2017/03/awr-differences-between-12c-release-1.html
https://www.google.com/search?q=awr+retention+pdb+cdb&oq=awr+retention+pdb+cdb&aqs=chrome..69i57.4803j0j0&sourceid=chrome&ie=UTF-8

AWR_SNAPSHOT_TIME_OFFSET


! setup 
{{{
-- if set on CDB level, it will take effect on all PDBs
-- if set on PDB, it will take effect only on that PDB

select * from cdb_hist_wr_control;

alter session set container = CDB$ROOT;
alter system set awr_pdb_autoflush_enabled=true;
alter system set AWR_SNAPSHOT_TIME_OFFSET=1000000 scope=both;
-- 1000000 is a special value: the offset is derived automatically from the PDB name so all PDBs don't flush at the same time

exec dbms_workload_repository.modify_snapshot_settings(interval => 30, dbid => 4182556862);
select con_id, instance_number, snap_id, begin_interval_time, end_interval_time from cdb_hist_snapshot order by 1,2,3;
}}}
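AWR_SNAPSHOT_TIME_OFFSET=1000000 means each PDB derives its own sub-hour offset from its name. A toy Python sketch of the idea only (Oracle's actual hash is internal; crc32 and the window size here are purely illustrative):

```python
import zlib

def flush_offset_seconds(pdb_name, window=3600):
    """Toy illustration: hash the PDB name to a deterministic offset
    inside the hour so snapshot flushes don't pile up at :00.
    NOT Oracle's real algorithm, just the staggering idea."""
    return zlib.crc32(pdb_name.encode()) % window

for pdb in ("PDB1", "PDB2", "HRPDB"):
    print(pdb, flush_offset_seconds(pdb))
```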


! MOS 
AWR Snapshots and Reports from Oracle Multitenant Database (CDB, PDB) (Doc ID 2295998.1)
How to Create an AWR Report at the PDB level on 12.2 or later (Doc ID 2334006.1)
ORA-20200 Error When Generating AWR or ADDM Report as a PDB DBA User From a 12.2.0.1 CDB Database (Doc ID 2267849.1)
Bug 25941188 : ORA-20200 WHEN ADDM REPORT GENERATED FROM A PDB DATABASE USING AWR_PDB OPTION
AWR Report run from a Pluggable Database (PDB) Runs Much Slower than from a Container Database (CDB) on 12c (Doc ID 1995938.1)
How to Modify Statistics collection by MMON for AWR repository (Doc ID 308450.1)


https://docs.oracle.com/en/database/oracle/oracle-database/12.2/refrn/AWR_SNAPSHOT_TIME_OFFSET.html#GUID-90CD8379-DCB2-4681-BB90-0A32C6029C4E
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/refrn/AWR_PDB_AUTOFLUSH_ENABLED.html#GUID-08FA21BC-8FB1-4C51-BEEA-139C734D17A7











.
<<<


Oracle on RDS 

There are Oracle native features that do not work with RDS, such as RAC, Data Guard, and RMAN.
Instead of Data Guard, RDS uses replicas, which are basically block copies from the primary to the replica copy.
Backups via RMAN are not possible; AWS performs storage volume snapshots instead.
Hot backups are possible only if there is a replica in play. In this case, the backup is taken from the secondary instead of the primary. If there is only a primary, the storage snapshot will cause a temporary I/O suspension; so no hot backup.
No access to sys/system; some normal DBA tasks will need to be done via the AWS API.
No access to the underlying file system
 
If they really want Oracle on AWS, I would recommend putting Oracle on EC2 rather than RDS; that said, the performance and cost are better on OCI.
 

Oracle RDS database max size is up to 6TB
Getting data into RDS is also another challenge: you are limited to Data Pump and can't use a lift-and-shift approach, as RMAN backup is not supported.
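Since RMAN restore is off the table, Data Pump over a database link is one commonly used path in (it avoids staging dump files). This is a hedged sketch only: all host, user, and service names are placeholders, and whether the RDS instance can open the outbound link depends on your VPC networking.

{{{
-- on the RDS target, as the master user: a link pointing back at the source
CREATE DATABASE LINK onprem_link
  CONNECT TO system IDENTIFIED BY "password"
  USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=onprem-host)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCL)))';

-- then from a client/EC2 host, import directly over the link (no dump file)
-- impdp admin@rds-endpoint:1521/ORCL schemas=APP network_link=onprem_link
}}}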




<<<
https://www.google.com/search?q=AWS+S3+merge+SCD&oq=AWS+S3+merge+SCD&aqs=chrome..69i57j33l2.6636j0j0&sourceid=chrome&ie=UTF-8

https://cloudbasic.net/white-papers/data-warehousing-scd/
https://www.google.com/search?q=AWS+S3+SCD+type+2&ei=Oa4CXY2SBc-f_Qb9gqugDg&start=20&sa=N&ved=0ahUKEwjNkrq3oufiAhXPT98KHX3BCuQ4ChDy0wMIdw&biw=1334&bih=798
http://resources.pythian.com/hubfs/Framework-For-Migrate-Your-Data-Warehouse-Google-BigQuery-WhitePaper.pdf
https://stackoverflow.com/questions/52919985/incremental-updates-of-data-in-an-s3-data-lake
https://sonra.io/2009/02/01/one-pass-scd2-load-how-to-load-a-slowly-changing-dimension-type-2-with-one-sql-merge-statement-in-oracle/



! discussion 

<<<
Need your help on one of the scenario.
 
We are copying the simple txt/parquet  files from (Hadoop Cluster)HDFS to simple s3 bucket.
Base copy is okay, we are good with that but after that whatever changes that are happening on source(HDFS)  just want to copy the incremental changes on S3.
 
I am going through the DataSync and other options in the meantime.
 
Not looking for any tool option(like Attunity, etc.), looking for some free option. 
<<<

<<<
Not sure whether your incremental changes happen on the source base file or in a different new file. Usually on HDFS you want the incremental changes to land in new files. Then, to get the complete view of the data, you need both the base and the new incremental files together. After some time you need to merge these two kinds of files into a new base file; this is usually referred to as a compaction operation.
<<<

<<<
I'm thinking of handling the SCD logic in Hadoop (with ACID on) and then appending the new data to S3
<<<

<<<
You can also look into NiFi, which guarantees delivery of packets and is an open-source package. It can also track the last record processed, but it comes with some challenges of its own. Another option would be to land your daily feeds into daily_feed_tables, process the data to S3, and run the compact operation as suggested.
<<<
https://www.udemy.com/aws-certified-solutions-architect-associate/
https://www.udemy.com/aws-certified-developer-associate/
https://www.udemy.com/aws-certified-sysops-administrator-associate/
https://www.udemy.com/aws-codedeploy/
https://www.udemy.com/get-oracle-flying-in-aws-cloud/
https://www.udemy.com/architecting-amazon-web-services-an-introduction/


! exam
https://aws.amazon.com/certification/

! practice questions 
the ones from the LA or Cloud Guru courses + the AWS ones (https://aws.psiexams.com/#/dashboard, about $20) + the free questions (available for SysOps, not SA)




<<showtoc>>


! CPU
https://david-codes.hatanian.com/2019/06/09/aws-costs-every-programmer-should-now.html
https://github.com/dhatanian/aws-ec2-costs	


! network
<<<
TIL what EC2's "Up to" means. I used to think it simply indicates best effort bandwidth, but apparently there's a hard baseline bottleneck for most EC2 instance types (those with an "up to"). It's significantly smaller than the rating, and it can be reached in just a few minutes.
<<<
https://twitter.com/dvassallo/status/1120171727399448576
Mmm.. It's a long story, just check out this blog post..  http://karlarao.wordpress.com/2010/04/10/my-personal-wiki-karlarao-tiddlyspot-com/   :)

Also check out my Google profile here https://plus.google.com/102472804060828276067/about to know more about my web/social media presence

check here [[.TiddlyWiki]] to get started on setting up and configuring your own wiki











https://www.slideshare.net/Enkitec/presentations

<<<
https://connectedlearning.accenture.com/curator/chanea-heard
Golden Gate Admin https://connectedlearning.accenture.com/learningboard/goldengate-administration
APEX https://connectedlearning.accenture.com/leaning-list-view/16597
ZFS storage appliance https://connectedlearning.accenture.com/learningboard/16600-zfs-storage-appliance
SPARC Supercluster Admin https://connectedlearning.accenture.com/leaning-list-view/16596
Exadata Admin https://connectedlearning.accenture.com/leaning-list-view/12954
Exadata Optimizations https://connectedlearning.accenture.com/leaning-list-view/13051
All AEG https://connectedlearning.accenture.com/learningactivities
SQL Tuning with SQLTXPLAIN https://connectedlearning.accenture.com/leaning-list-view/13097
E4 2015 https://connectedlearning.accenture.com/leaning-list-view/13512
https://mediaexchange.accenture.com/tag/tagid/hadoop
AEG webinars https://connectedlearning.accenture.com/leaning-list-view/110872
media exchange tag "enkitec" https://mediaexchange.accenture.com/tag/tagid/enkitec

<<<


! Oracle Unlimited Learning Subscription 

<<<
Your Unlimited Learning Subscription provides you with:
 
- Unlimited access to all courses in the Oracle University Training-on-Demand (ToD) catalog – over 450 titles of in-depth training courses for Database, Applications and Middleware
- Unlimited access to all Oracle University Learning Subscriptions, including the latest in Oracle's Cloud Solutions, Product Solutions and Industry Solutions
- Unlimited access to all Oracle University Learning Streams for continuous learning around Oracle's Database, Middleware, EBS and PSFT products
- Access to public live virtual classroom training sessions offered by Oracle University in the case that a Training on Demand course is not available


https://urldefense.proofpoint.com/v2/url?u=http-3A__launch.oracle.com_-3Faglp&d=CwMFAg&c=eIGjsITfXP_y-DLLX0uEHXJvU8nOHrUK8IrwNKOtkVU&r=uuYKy3Gs1_0JIUEV5KRRHtJRajKnrRi8D07dW2RkXus&m=2h0zMlY_aYkRW_DfSm0TQNCIUJvQ5Ym10XaIfNNla8M&s=ku1WbuFdaZky-ezYS3fnO2V0R9RXG75rJctvZg-ztY0&e=

Digital Training Learning Portal https://isdportal.oracle.com/pls/portal/tsr_admin.page.main?pageid=33,986&dad=portal&schema=PORTAL&p_k=hCwPEObICeHNWFJNdHCxsnXIyaWOpibldVWGShuxqGCGEmtoGCkVshGgcTdu1191413973

Program Overview http://link.brightcove.com/services/player/bcpid1799411699001?bckey=AQ~~,AAABmsB_z2k~,HvNx0XQhsPxXu5er5IYkstkCq_O9j5dg&bctid=4731151798001

Learning Paths https://isdportal.oracle.com/pls/portal/tsr_admin.page.main?pageid=33,976&dad=portal&schema=PORTAL&p_k=hCwPEObICeHNWFJNdHCxsnXIyaWOpibldVWGShuxqGCGEmtoGCkVshGgcTdu1191413973


<<<


http://www.ardentperf.com/2011/08/19/developer-access-to-10046-trace-files/
http://dioncho.wordpress.com/2009/03/19/another-way-to-use-trace-file/
http://kb.acronis.com/content/2788
http://kb.acronis.com/search/apachesolr_search/true%20image%202012%20slow%20backup?filters=%20type%3Aarticle
http://forum.acronis.com/forum/5399
http://kb.acronis.com/content/2293


''Amanda'' http://www.amanda.org/   <-- but this requires a client agent
http://gjilevski.wordpress.com/2010/03/14/creating-oracle-11g-active-standby-database-from-physical-standby-database/
''Oracle Active Data Guard: What’s Really Under the Hood?'' http://www.oracle.com/technetwork/database/features/availability/s316924-1-175932.pdf


''Read only and vice versa''
http://www.adp-gmbh.ch/ora/data_guard/standby_read_only.html
http://juliandyke.wordpress.com/2010/10/14/oracle-11gr2-active-data-guard/
http://www.oracle-base.com/articles/11g/data-guard-setup-11gr2.php#read_only_active_data_guard


! to be in Active DG, open read only before starting recovery (remove the "read only" step for normal managed recovery)
{{{
startup mount
alter database open read only;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE disconnect;
}}}
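Once recovery is running on the open standby, confirm Active Data Guard is actually in effect:

{{{
-- OPEN_MODE should show READ ONLY WITH APPLY on an active standby
select open_mode, database_role from v$database;
-- and the MRP background should be running
select process, status from v$managed_standby where process like 'MRP%';
}}}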


! ''Snapper on Standby Database''
<<<
On the standby site, if the database is open read only with apply, you should be able to run snapper on it or run ASH queries as well.
Check out some commands here http://karlarao.tiddlyspot.com/#snapper
And if you want to loop it and leave it running and check the data the next day you can do this http://karlarao.tiddlyspot.com/#snapperloop (sections “snapper loop showing activity across all instances (must use snapper v4)” and “process the snap.txt file as csv input”)

Some commands you can use and things to check are attached as well. But I would start with
@snapper ash 5 1 all@*
Just to see what’s going on during the slow period
<<<

! triggers on ADG
Using Active Data Guard Reporting with Oracle E-Business Suite Release 12.1 and Oracle Database 11g (Doc ID 1070491.1)
<<<
Section 7: Database Triggers

ADG support delivers three schema level database triggers as follows:

Logon and Logoff
These triggers are a key component of the simulation testing. The logon trigger enables the read-only violation trace, whereas the logoff trigger records the actual number of violations. If these triggers are not enabled, the trace errors and V$ data are not recorded, in other words, the simulations are treated as having no errors. 
 
Servererror
The error trigger is only executed if an ORA-16000 is raised, which is read-only violation (the trigger does nothing on the primary). The error count for the concurrent program is incremented only if standby_error_checking has been enabled as described in 4.2 General Options. If the error trigger is not enabled, report failures will not be recorded and failures will not lead to run_on_standby being disabled. 
<<<
http://www.toadworld.com/platforms/oracle/b/weblog/archive/2014/04/27/oracle-apps-r12-offloading-reporting-workload-with-active-data-guard.aspx

! resource management on ADG
Configuring Resource Manager for Oracle Active Data Guard (Doc ID 1930540.1)
<<<
Configuring a resource plan on a physical standby database requires the plan to be created on primary database.

I/O Resource Manager helps multiple databases and workloads within the databases share the I/O resources on the Exadata storage. In a data guard environment, IORM can help protect the I/O latency for the redo apply I/Os from the standby database.
Critical I/Os from standby database backgrounds such as Managed Recovery Process (MRP) or Logical Standby Process (LSP) are automatically prioritized by enabling IORM on the Exadata storage. Database resource plans enabled on the standby databases are automatically pushed to the Exadata storage. Enabling IORM enforces database resource plans on the storage cells to minimize the latency for the critical redo-apply I/Os.  To enable IORM, set the IORM objective to 'auto' on the Exadata storage cells.

Bug 12601274: Updates to consumer group mappings on the primary database are not reflected on the standby database. This bug is fixed in 11.2.0.4 and 12.1.0.2. On older releases, the updates are only reflected on the standby upon a restart of the standby database.
<<<







http://www.oracle-base.com/articles/11g/AwrBaselineEnhancements_11gR1.php <-- a good HOWTO
http://neerajbhatia.files.wordpress.com/2010/10/adaptive-thresholds.pdf
http://oracledoug.com/serendipity/index.php?/archives/1496-Adaptive-Thresholds-in-10g-Part-1-Metric-Baselines.html
http://oracledoug.com/serendipity/index.php?/archives/1497-Adaptive-Thresholds-in-10g-Part-2-Time-Grouping.html
http://oracledoug.com/serendipity/index.php?/archives/1498-Adaptive-Thresholds-in-10g-Part-3-Setting-Thresholds.html
http://oracledoug.com/metric_baselines_10g.pdf  <-- ''GOOD STUFF''
http://oracledoug.com/adaptive_thresholds_faq.pdf <-- ''GOOD STUFF''
http://www.cmg.org/conference/cmg2007/awards/7122.pdf
http://optimaldba.com/papers/IEDBMgmt.pdf
http://www.oracle-base.com/articles/11g/AwrBaselineEnhancements_11gR1.php
Strategies for Monitoring Large Data Centers with Oracle Enterprise Manager http://gavinsoorma.com/wp-content/uploads/2011/03/monitoring_large_data_centers_with_OEM.pdf
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1525205200346930663
<<<
{{{
You Asked

In version 11g of Oracle database, there is a new feature whereby current performance data (obtained from AWR snapshots) can be compared against an AWR baseline and an alarm triggered if a given metric exceeds a certain threshold. From what I understand, there are 3 types of thresholds : fixed value, percent of maximum and significance level. The first type (fixed value) is very easy to understand - alarms are triggered whenever the metric in question exceeds certain fixed values specified for the warning and critical alerts (without reference to the baseline). The 2nd type (percent of maximum) presumably means that an alert is triggered whenever the current value of the metric exceeds the specified percent of the maximum value of the metric that was observed in the whole baseline period (if I understood this correctly - correct me if I'm wrong). 
However, the 3rd type (significance level) is not at all easy to understand. The Oracle documentation is not at all clear on that point, nor could I find any Metalink notes on the subject. I also tried searching the OTN forums, to no avail. Could you please explain, in very simple terms, when exactly an alarm would be triggered if "significance level" is specified for the threshold type, if possible by giving a simple example. There are apparently 4 levels of such thresholds (high, very high, severe and extreme). 
and we said...

I asked Graham Wood and John Beresniewicz for their input on this, they are the experts in this particular area 

they said: 

Graham Wood wrote: 
> Sure, 
> Copying JB as this is his specialty area, in case I don't get it right. :-) 
> 
> The basic idea of using significance level thresholds for alerting is that we are trying to detect outliers in the distribution of metric values, rather than setting a simple threshold value. 
> 
> By looking at the historical metric data from AWR we can identify values for 25th, 50th (median), 75th, 90th, 95th and 99th percentiles. Using a curve fitting algorithm we also extrapolate the 99.9th and 99.99th percentiles. We derive these percentiles based on time grouping, such as day, night, and hour of day. 
> 
> In the adaptive baselines feature in 11g we allow the user to specify the alert level, which equates to one of these percentile values: 
> High 95th percentile 
> Very High 99th percentile 
> Severe 99.9th percentile 
> Extreme 99.99th percentile 
> 
> Using the AWR history (actually the SYSTEM_MOVING_WINDOW baseline) the database will automatically determine the threshold level for a metric that corresponds to the selected significance level for the current time period. 
> 
> Setting a significance level of Extreme means that we would only alert on values that we would expect to see once in every 10,000 observations (approximately once every 10,000 hours, a little over a year, for hourly thresholds). 
> 
> Cheers, Graham 

JB wrote: 
Shorter answer: 
--------------- 
The significance level thresholds are intended to produce alert threshold values for key performance metrics that represent the following: 

"Automatically set threshold such that values observed above the threshold are statistically unusual (i.e. significant) at the Nth percentile based on actual data observed for this metric over the SYSTEM_MOVING_WINDOW baseline." 

The premise here is that systems with relatively stable performance characteristics should show statistical stability in core performance metric values, and when unusual but high-impact performance events occur we expect these will be reflected in highly unusual observations in one or more (normally statistically stable) metrics. The significance level thresholds give users a way to specify alerting in terms of "how unusual" rather than "how much". 


Longer (original) reply: 
----------------------------- 
Hi Tom - 
Graham did a pretty good job, but I'll add some stuff. 

Fixed thresholds are set explicitly by user, and change only when user unsets or sets a different threshold. They are based entirely on user understanding of the underlying metrics in relation to the underlying application and workload. This is the commonly understood paradigm for detecting performance issues: trigger an alert when metric threshold is crossed. There are numerous issues we perceived with this basic mechanism: 

1) "Performance" expectations, and thus alert thresholds, often vary by application, workload, database size, etc. This results in what I call the MxN problem, which is that M metrics over N systems becomes MxN threshold decisions each of which can be very specific (i.e. threshold decisions not transferable.) This is potentially very manually intensive for users with many databases. 

2) Workload may vary predictably on system (e.g. online day vs. batch night) and different performance expectations (and thus alert thresholds) may pertain to different workloads, so one threshold for all workloads is inappropriate. 

3) Systems evolve over time and thresholds applicable for the system supporting 1,000 users may need to be altered when system supports 10,000 users. 

The adaptive thresholds feature tries to address these issues as follows: 

A) Thresholds are computed by the system based on a context of prior observations of this metric on this system. System-and-metric-specific thresholds are developed without obliging user to understand the specifics (helps relieve the MxN problem.) 

B) Thresholds are periodically recomputed using statistical characterizations of metric values over the SYSTEM_MOVING_WINDOW baseline. Thus the thresholds adapt to slowly evolving workload or demand, as the moving window moves forward. 

C) Metric statistics for adaptive thresholds are computed over grouping buckets (which we call "time groups") that can accommodate the common workload periodicities (day/night, weekday/weekend, etc.) Thresholds resets can happen as frequently as every hour. 

So the net-net is that metric alert thresholds are determined and set automatically by the system using actual metric observations as their basis and using metric-and-system-independent semantics (significance level or pct of max.) 

JB 

From Tom - Thanks both! 
}}}
<<<
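The significance-level mapping above (High = 95th percentile, Very High = 99th, etc.) is easy to mimic against your own metric history. A minimal nearest-rank sketch in plain Python; note Oracle additionally curve-fits the 99.9th/99.99th points and groups by time buckets, which this does not:

```python
import math

def nearest_rank_pct(values, p):
    """Nearest-rank percentile: smallest observed value such that
    at least p% of observations are at or below it."""
    s = sorted(values)
    k = max(0, math.ceil(p / 100.0 * len(s)) - 1)
    return s[k]

# hourly metric observations (illustrative numbers)
history = list(range(1, 101))          # 1..100
levels = {"High": 95, "Very High": 99}
thresholds = {name: nearest_rank_pct(history, p) for name, p in levels.items()}
print(thresholds)   # {'High': 95, 'Very High': 99}
```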


''Oracle By Example:''
http://docs.oracle.com/cd/E11882_01/server.112/e16638/autostat.htm#CHDHBGJD
metric baseline http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/10g/r2/metric_baselines.viewlet/metric_baselines_viewlet_swf.html
create baseline http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/11g/r2/11gr2_baseline/11gr2_baseline_viewlet_swf.html
OEM system monitoring http://www.oracle.com/webfolder/technetwork/tutorials/obe/em/emgc10gr2/quick_start/system_monitoring/system_monitoring.htm
**Creating the Monitoring Template
**Creating the User-Defined Metrics
**Setting the Metric Baseline
SQL baseline http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/10g/r2/sql_baseline.viewlet/sql_baseline_viewlet_swf.html





''Proactive Database Monitoring'' http://docs.oracle.com/cd/B28359_01/server.111/b28301/montune001.htm
''15 User-Defined Metrics'' http://docs.oracle.com/cd/B16240_01/doc/em.102/e10954/udm2.htm
''3 Cluster Database'' http://docs.oracle.com/cd/B19306_01/em.102/b25986/rac_database.htm
http://oracledoug.com/serendipity/index.php?/archives/1302-Oracle-Workload-Metrics.html
http://oracledoug.com/serendipity/index.php?/archives/1470-Time-Matters-Throughput-vs.-Response-Time.html
http://docs.oracle.com/cd/B14099_19/manage.1012/b16241/Monitoring.htm#sthref333
http://carymillsap.blogspot.com/2008/12/performance-as-service-part-2.html





-- notes and ideas about R2 and adaptive thresholds
[img[ https://lh5.googleusercontent.com/-JNRShrEzpiQ/T4W3xfgGzII/AAAAAAAABiY/VBPKaiA-zus/s800/AdaptiveThresholds.JPG ]]



http://en.wikipedia.org/wiki/Control_chart
http://en.wikipedia.org/wiki/Exponential_smoothing
http://www.sciencedirect.com/science/article/pii/S0169207003001134

http://prodlife.wordpress.com/2013/10/14/control-charts/
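The control-chart and exponential-smoothing links above combine naturally into an EWMA control chart, which is the same "flag the statistically unusual" idea as adaptive thresholds. A small sketch using the asymptotic control limits (lambda, L, and the data are illustrative):

```python
import math

def ewma_flags(xs, mean, sigma, lam=0.2, L=3.0):
    """EWMA control chart: z_t = lam*x_t + (1-lam)*z_{t-1},
    flag when z_t leaves mean +/- L*sigma*sqrt(lam/(2-lam))
    (asymptotic limits; exact limits also have a (1-(1-lam)^2t) term)."""
    limit = L * sigma * math.sqrt(lam / (2.0 - lam))
    z, flags = mean, []
    for x in xs:
        z = lam * x + (1.0 - lam) * z
        flags.append(abs(z - mean) > limit)
    return flags

# stable readings, then a spike: only the spike is flagged
print(ewma_flags([10, 10, 10, 10, 10, 20], mean=10, sigma=1))
# [False, False, False, False, False, True]
```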

















! ACS doesn't work with parallel execution 


http://kerryosborne.oracle-guy.com/2009/06/oracle-11g-adaptive-cursor-sharing-acs/
http://aychin.wordpress.com/2011/04/04/adaptive-cursor-sharing-and-spm/

Adaptive Cursor Sharing: Worked Example [ID 836256.1]


https://blogs.oracle.com/optimizer/explain-adaptive-cursor-sharing-behavior-with-cursorsharing-similar-and-force
https://oracle.readthedocs.io/en/latest/plsql/bind/adaptive-cursor-sharing.html
<<showtoc>>

! 11gR2 
http://jarneil.wordpress.com/2010/11/05/11gr2-database-services-and-instance-shutdown/   <-- 11gR2 version.. 

http://pat98.tistory.com/531 <-- good stuff, well-explained difference between admin-managed and policy-managed services

{{{
srvctl add service -d RACDB -s <SERVICE NAME HERE> -preferred RACDB1,RACDB2 -clbgoal short -rlbgoal SERVICE_TIME 
}}}
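
A quick sanity check after adding the service (a sketch, keeping the placeholder service name from above; {{{-preferred}}} must name instances that actually exist):
{{{
srvctl start service -d RACDB -s <SERVICE NAME HERE>
srvctl status service -d RACDB -s <SERVICE NAME HERE>
srvctl config service -d RACDB -s <SERVICE NAME HERE>
}}}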


! Create Service PDB 

!! Adding a Service to a PDB in RAC
{{{
    srvctl add service -db RAC -service MYSVC -preferred RAC1,RAC2 -tafpolicy BASIC -clbgoal SHORT -rlbgoal SERVICE_TIME -pdb PDB
}}}
https://hemantoracledba.blogspot.com/2017/04/12cr1-rac-posts-9-adding-service-to-pdb.html?m=1
https://docs.oracle.com/database/121/RACAD/GUID-15576271-E204-4ABD-961B-09876762EBF4.htm#RACAD5047
https://github.com/karlarao/OracleScheduledNodeAllocationTAF
https://karlarao.github.io/karlaraowiki/index.html#%5B%5BSRVCTL%20useful%20commands%5D%5D
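
Same sanity check for the PDB service (sketch; the RAC/MYSVC names come from the add command above -- the service only registers once started):
{{{
srvctl start service -db RAC -service MYSVC
srvctl status service -db RAC -service MYSVC
srvctl config service -db RAC -service MYSVC
}}}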



..
run this to list the trace files by modification time and spot where the most recent occurrence of the error landed
{{{
find . -type f -printf '%TY-%Tm-%Td %TT %p\n' | sort
}}}
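
The same timestamp-sort pattern can be tried against any scratch directory first; a minimal demo (file names and dates below are made up for illustration):
{{{
# build a scratch dir with two files carrying known mtimes,
# then list them oldest-first with GNU find's -printf
dir=$(mktemp -d)
touch -d '2011-06-28 10:26:00' "$dir/emctl.log"
touch -d '2011-06-28 10:34:28' "$dir/emagent.trc"
find "$dir" -type f -printf '%TY-%Tm-%Td %TT %p\n' | sort
}}}
The last line printed is the most recently modified file, which is why {{{| sort}}} plus scrolling to the bottom finds the freshest trace.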

files to look out for:
{{{

Agent Log and Trace files

Note: if there are multiple Agents experiencing problems, the files must be uploaded for each Agent.

From $ORACLE_HOME/sysman/log/*.* directory for a single agent. 
From $ORACLE_HOME/host/sysman/log/*.* for a RAC agent. 
The files are: 
emagent.nohup: Agent watchdog log file, Startup errors are recorded in this file. 
emagent.log: Main agent log file 
emagent.trc: Main agent trace file 
emagentfetchlet.log: Log file for Java Fetchlets 
emagentfetchlet.trc: Trace file for Java Fetchlets

<OMS_HOME>/sysman/log/emoms.trc
<OMS_HOME>/sysman/log/emoms.log
}}}

output below (timestamp-sorted file listing first, then the emagent.trc errors):
{{{

2011-06-28 10:20:11 ./sysman/emd/state/0005.dlt
2011-06-28 10:20:11 ./sysman/emd/state/snapshot
2011-06-28 10:26:00 ./sysman/emd/cputrack/emagent_11747_2011-06-28_10-26-00_cpudiag.trc
2011-06-28 10:26:00 ./sysman/log/emctl.log
2011-06-28 10:28:43 ./sysman/emd/upload/EM_adaptive_thresholds.dat
2011-06-28 10:30:32 ./sysman/emd/state/parse-log-3CBBC0C79ED9B7E65B93EAC0D7457308
2011-06-28 10:30:39 ./sysman/emd/upload/mgmt_db_hdm_metric_helper.dat
2011-06-28 10:30:54 ./sysman/emd/upload/rawdata8.dat
2011-06-28 10:31:05 ./sysman/emd/state/adr/141DB5270B29BDF93743E123C2DF1231.alert.log.xml.state
2011-06-28 10:32:13 ./sysman/emd/state/adr/C12313AF3162E92001DE7952A752106A.alert.log.xml.state
2011-06-28 10:32:37 ./sysman/emd/upload/mgmt_ha_mttr.dat
2011-06-28 10:32:51 ./sysman/emd/upload/rawdata3.dat
2011-06-28 10:33:08 ./sysman/emd/state/adr/5A9DF4683EEF44F8898ABA391E70D194.alert.log.xml.state
2011-06-28 10:33:08 ./sysman/emd/upload/rawdata5.dat
2011-06-28 10:33:54 ./sysman/emd/agntstmp.txt
2011-06-28 10:33:55 ./sysman/emd/upload/rawdata0.dat
2011-06-28 10:34:06 ./sysman/emd/upload/rawdata9.dat
2011-06-28 10:34:09 ./sysman/emd/upload/rawdata2.dat
2011-06-28 10:34:21 ./sysman/emd/upload/rawdata4.dat
2011-06-28 10:34:25 ./sysman/emd/state/3CBBC0C79ED9B7E65B93EAC0D7457308.alerttd01db01.log
2011-06-28 10:34:25 ./sysman/emd/state/progResUtil.log
2011-06-28 10:34:27 ./sysman/emd/upload/rawdata7.dat
2011-06-28 10:34:28 ./sysman/log/emagent.trc


2011-06-28 09:51:42,537 Thread-1118013760 ERROR recvlets.aq: duplicate registration of metric Recovery_Area for target dbm rac_database
2011-06-28 09:51:42,537 Thread-1118013760 ERROR recvlets.aq: Unable to add metric Recovery_Area to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:42,537 Thread-1118013760 ERROR recvlets: Error adding metric Recovery_Area, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:42,538 Thread-1118013760 ERROR recvlets.aq: duplicate registration of metric Snap_Shot_Too_Old for target dbm rac_database
2011-06-28 09:51:42,538 Thread-1118013760 ERROR recvlets.aq: Unable to add metric Snap_Shot_Too_Old to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:42,538 Thread-1118013760 ERROR recvlets: Error adding metric Snap_Shot_Too_Old, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:42,538 Thread-1118013760 ERROR recvlets.aq: duplicate registration of metric WCR for target dbm rac_database
2011-06-28 09:51:42,538 Thread-1118013760 ERROR recvlets.aq: Unable to add metric WCR to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:42,538 Thread-1118013760 ERROR recvlets: Error adding metric WCR, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:42,539 Thread-1118013760 ERROR recvlets.aq: duplicate registration of metric wrc_client for target dbm rac_database
2011-06-28 09:51:42,539 Thread-1118013760 ERROR recvlets.aq: Unable to add metric wrc_client to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:42,539 Thread-1118013760 ERROR recvlets: Error adding metric wrc_client, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:42,540 Thread-1118013760 WARN  recvlets.aq: [oracle_database dbm_dbm1] deferred nmevqd_refreshState for dbm rac_database
2011-06-28 09:51:42,540 Thread-1118013760 WARN  upload: Upload manager has no Failure script: disabled
2011-06-28 09:51:48,569 Thread-1136912704 WARN  collector: the column name first_message_age in this condition does not exist in metric aq_msgs_persistentq_per_subscriber
2011-06-28 09:51:48,571 Thread-1136912704 WARN  collector: the column name first_message_age in this condition does not exist in metric aq_msgs_persistentq_per_subscriber
2011-06-28 09:51:48,575 Thread-1136912704 ERROR recvlets.aq: duplicate registration of metric problemTbsp for target dbm rac_database
2011-06-28 09:51:48,575 Thread-1136912704 ERROR recvlets.aq: Unable to add metric problemTbsp to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:48,575 Thread-1136912704 ERROR recvlets: Error adding metric problemTbsp, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets.aq: duplicate registration of metric Suspended_Session for target dbm rac_database
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets.aq: Unable to add metric Suspended_Session to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets: Error adding metric Suspended_Session, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets.aq: duplicate registration of metric Recovery_Area for target dbm rac_database
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets.aq: Unable to add metric Recovery_Area to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets: Error adding metric Recovery_Area, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets.aq: duplicate registration of metric Snap_Shot_Too_Old for target dbm rac_database
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets.aq: Unable to add metric Snap_Shot_Too_Old to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:48,576 Thread-1136912704 ERROR recvlets: Error adding metric Snap_Shot_Too_Old, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:48,577 Thread-1136912704 ERROR recvlets.aq: duplicate registration of metric WCR for target dbm rac_database
2011-06-28 09:51:48,577 Thread-1136912704 ERROR recvlets.aq: Unable to add metric WCR to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:48,577 Thread-1136912704 ERROR recvlets: Error adding metric WCR, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:48,577 Thread-1136912704 ERROR recvlets.aq: duplicate registration of metric wrc_client for target dbm rac_database
2011-06-28 09:51:48,577 Thread-1136912704 ERROR recvlets.aq: Unable to add metric wrc_client to AQDatabase [oracle_database dbm_dbm1] for rac_database dbm
2011-06-28 09:51:48,577 Thread-1136912704 ERROR recvlets: Error adding metric wrc_client, target dbm rac_database, to recvlet AQMetrics
2011-06-28 09:51:48,578 Thread-1136912704 WARN  recvlets.aq: [oracle_database dbm_dbm1] deferred nmevqd_refreshState for dbm rac_database
2011-06-28 09:51:48,579 Thread-1136912704 WARN  upload: Upload manager has no Failure script: disabled
2011-06-28 09:51:48,615 Thread-1136912704 WARN  collector: the column name first_message_age in this condition does not exist in metric aq_msgs_persistentq_per_subscriber
2011-06-28 09:51:48,617 Thread-1136912704 WARN  recvlets.aq: [oracle_database dbm_dbm1] deferred nmevqd_refreshState for dbm rac_database
2011-06-28 09:52:03,663 Thread-1130613056 WARN  collector: the column name first_message_age in this condition does not exist in metric aq_msgs_persistentq_per_subscriber
2011-06-28 09:52:03,669 Thread-1130613056 WARN  collector: the column name first_message_age in this condition does not exist in metric aq_msgs_persistentq_per_subscriber
2011-06-28 09:52:03,675 Thread-1130613056 WARN  collector: the column name first_message_age in this condition does not exist in metric aq_msgs_persistentq_per_subscriber
2011-06-28 09:52:03,678 Thread-1130613056 WARN  collector: the column name first_message_age in this condition does not exist in metric aq_msgs_persistentq_per_subscriber
2011-06-28 09:52:03,690 Thread-1130613056 WARN  recvlets.aq: [rac_database dbm] deferred nmevqd_refreshState for dbm rac_database
2011-06-28 09:52:03,691 Thread-1130613056 WARN  upload: Upload manager has no Failure script: disabled
2011-06-28 09:54:21,234 Thread-1136912704 ERROR vpxoci: ORA-03113: end-of-file on communication channel
2011-06-28 09:59:52,513 Thread-1146362176 ERROR util.fileops: error: file /u01/app/oracle/product/grid/agent11g/bin/nmo is not a setuid file
2011-06-28 09:59:52,513 Thread-1146362176 WARN  Authentication: nmo binary in current oraHome doesn't have setuid privileges !!!
2011-06-28 09:59:52,513 Thread-1146362176 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo
2011-06-28 10:00:41,308 Thread-1084578112 ERROR util.fileops: error: file /u01/app/oracle/product/grid/agent11g/bin/nmo is not a setuid file
2011-06-28 10:00:41,309 Thread-1084578112 WARN  Authentication: nmo binary in current oraHome doesn't have setuid privileges !!!
2011-06-28 10:00:41,309 Thread-1084578112 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo
2011-06-28 10:00:54,251 Thread-1146362176 ERROR util.fileops: error: file /u01/app/oracle/product/grid/agent11g/bin/nmo is not a setuid file
2011-06-28 10:00:54,251 Thread-1146362176 WARN  Authentication: nmo binary in current oraHome doesn't have setuid privileges !!!
2011-06-28 10:00:54,252 Thread-1146362176 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo
2011-06-28 10:01:11,931 Thread-1121163584 ERROR util.fileops: error: file /u01/app/oracle/product/grid/agent11g/bin/nmo is not a setuid file
2011-06-28 10:01:11,931 Thread-1121163584 WARN  Authentication: nmo binary in current oraHome doesn't have setuid privileges !!!
2011-06-28 10:01:11,932 Thread-1121163584 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo
2011-06-28 10:01:53,036 Thread-1130613056 ERROR util.fileops: error: file /u01/app/oracle/product/grid/agent11g/bin/nmo is not a setuid file
2011-06-28 10:01:53,036 Thread-1130613056 WARN  Authentication: nmo binary in current oraHome doesn't have setuid privileges !!!
2011-06-28 10:01:53,036 Thread-1130613056 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo
2011-06-28 10:34:28,828 Thread-1130613056 ERROR util.fileops: error: file /u01/app/oracle/product/grid/agent11g/bin/nmo is not a setuid file
2011-06-28 10:34:28,828 Thread-1130613056 WARN  Authentication: nmo binary in current oraHome doesn't have setuid privileges !!!
2011-06-28 10:34:28,828 Thread-1130613056 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo

2011-06-28 10:20:11 ./sysman/emd/state/snapshot
2011-06-28 10:26:00 ./sysman/emd/cputrack/emagent_11747_2011-06-28_10-26-00_cpudiag.trc
2011-06-28 10:26:00 ./sysman/log/emctl.log
2011-06-28 10:28:43 ./sysman/emd/upload/EM_adaptive_thresholds.dat
2011-06-28 10:30:32 ./sysman/emd/state/parse-log-3CBBC0C79ED9B7E65B93EAC0D7457308
2011-06-28 10:30:54 ./sysman/emd/upload/rawdata8.dat
2011-06-28 10:32:13 ./sysman/emd/state/adr/C12313AF3162E92001DE7952A752106A.alert.log.xml.state
2011-06-28 10:32:51 ./sysman/emd/upload/rawdata3.dat
2011-06-28 10:33:08 ./sysman/emd/state/adr/5A9DF4683EEF44F8898ABA391E70D194.alert.log.xml.state
2011-06-28 10:34:06 ./sysman/emd/upload/rawdata9.dat
2011-06-28 10:34:25 ./sysman/emd/state/3CBBC0C79ED9B7E65B93EAC0D7457308.alerttd01db01.log
2011-06-28 10:34:25 ./sysman/emd/state/progResUtil.log
2011-06-28 10:34:27 ./sysman/emd/upload/rawdata7.dat
2011-06-28 10:35:17 ./sysman/emd/upload/mgmt_ha_mttr.dat
2011-06-28 10:35:21 ./sysman/emd/upload/rawdata5.dat
2011-06-28 10:35:58 ./sysman/emd/upload/rawdata2.dat
2011-06-28 10:36:05 ./sysman/emd/state/adr/141DB5270B29BDF93743E123C2DF1231.alert.log.xml.state
2011-06-28 10:36:28 ./sysman/emd/upload/mgmt_db_hdm_metric_helper.dat
2011-06-28 10:36:32 ./sysman/emd/upload/rawdata4.dat
2011-06-28 10:36:37 ./sysman/log/emagent.trc
2011-06-28 10:36:51 ./sysman/emd/upload/rawdata0.dat
2011-06-28 10:36:54 ./sysman/emd/agntstmp.txt

[td01db01:oracle:dbm1] /home/oracle
> dcli -l oracle -g dbs_group id oracle
td01db01: uid=500(oracle) gid=500(oinstall) groups=500(oinstall),101(fuse),501(dba)
td01db02: uid=500(oracle) gid=500(oinstall) groups=500(oinstall),101(fuse),501(dba)
td01db03: uid=500(oracle) gid=500(oinstall) groups=500(oinstall),101(fuse),501(dba)
td01db04: uid=500(oracle) gid=500(oinstall) groups=500(oinstall),101(fuse),501(dba)

[td01db01:oracle:dbm1] /home/oracle
>

[td01db01:oracle:dbm1] /home/oracle
> dcli -l oracle -g dbs_group ls -l /u01/app/oracle/product/grid/agent11g/bin/nmo
td01db01: -rwxr-xr-x 1 oracle oinstall 32872 Jun 22 17:02 /u01/app/oracle/product/grid/agent11g/bin/nmo
td01db02: -rws--x--- 1 root oinstall 32872 Jun 22 16:07 /u01/app/oracle/product/grid/agent11g/bin/nmo
td01db03: -rws--x--- 1 root oinstall 32872 Jun 22 16:14 /u01/app/oracle/product/grid/agent11g/bin/nmo
td01db04: -rws--x--- 1 root oinstall 32872 Jun 22 16:14 /u01/app/oracle/product/grid/agent11g/bin/nmo


2011-06-28 10:01:53,036 Thread-1130613056 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo
2011-06-28 10:34:28,828 Thread-1130613056 ERROR util.fileops: error: file /u01/app/oracle/product/grid/agent11g/bin/nmo is not a setuid file
2011-06-28 10:34:28,828 Thread-1130613056 WARN  Authentication: nmo binary in current oraHome doesn't have setuid privileges !!!
2011-06-28 10:34:28,828 Thread-1130613056 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo
2011-06-28 10:36:37,862 Thread-1130613056 ERROR util.fileops: error: file /u01/app/oracle/product/grid/agent11g/bin/nmo is not a setuid file
2011-06-28 10:36:37,862 Thread-1130613056 WARN  Authentication: nmo binary in current oraHome doesn't have setuid privileges !!!
2011-06-28 10:36:37,863 Thread-1130613056 ERROR Authentication: altNmo binary doesn't exist ... reverting back to nmo


-rwxr-xr-x 1 oracle oinstall     5985 Jun 22 16:14 owm        | -rwxr-xr-x 1 oracle oinstall     5985 Jun 22 17:02 owm
-rwxr-xr-x 1 oracle oinstall     2994 Jun 22 16:14 orapki     | -rwxr-xr-x 1 oracle oinstall     2994 Jun 22 17:02 orapki
-rwxr-xr-x 1 oracle oinstall     2680 Jun 22 16:14 mkstore    | -rwxr-xr-x 1 oracle oinstall     2680 Jun 22 17:02 mkstore
-rwxr-xr-x 1 oracle oinstall     2326 Jun 22 16:14 bndlchk    | -rwxr-xr-x 1 oracle oinstall     2326 Jun 22 17:02 bndlchk
-rwxr-xr-x 1 oracle oinstall     3602 Jun 22 16:14 umu        | -rwxr-xr-x 1 oracle oinstall     3602 Jun 22 17:02 umu
-rwxr-xr-x 1 oracle oinstall     1641 Jun 22 16:14 eusm       | -rwxr-xr-x 1 oracle oinstall     1641 Jun 22 17:02 eusm
-rwxr-xr-x 1 oracle oinstall    60783 Jun 22 16:14 chronos_se | -rwxr-xr-x 1 oracle oinstall    60783 Jun 22 17:02 chronos_se
-rwxr-xr-x 1 oracle oinstall     1551 Jun 22 16:14 chronos_se | -rwxr-xr-x 1 oracle oinstall     1551 Jun 22 17:02 chronos_se
-rwxr-x--x 1 oracle oinstall    19217 Jun 22 16:14 tnsping    | -rwxr-x--x 1 oracle oinstall    19217 Jun 22 17:02 tnsping
-rwxr-x--x 1 oracle oinstall   418787 Jun 22 16:14 wrc        | -rwxr-x--x 1 oracle oinstall   418787 Jun 22 17:02 wrc
-rwxr-x--x 1 oracle oinstall    25297 Jun 22 16:14 adrci      | -rwxr-x--x 1 oracle oinstall    25297 Jun 22 17:02 adrci
-rwxr-x--x 1 oracle oinstall 16793110 Jun 22 16:14 rmanO      | -rwxr-x--x 1 oracle oinstall 16793110 Jun 22 17:02 rmanO
-rwxr-xr-x 1 oracle oinstall   227069 Jun 22 16:14 ojmxtool   | -rwxr-xr-x 1 oracle oinstall   227069 Jun 22 17:02 ojmxtool
-rwxr-xr-x 1 oracle oinstall    26061 Jun 22 16:14 nmupm      | -rwxr-xr-x 1 oracle oinstall    26061 Jun 22 17:02 nmupm
-rwxr-xr-x 1 oracle oinstall    84093 Jun 22 16:14 nmei       | -rwxr-xr-x 1 oracle oinstall    84093 Jun 22 17:02 nmei
-rwx------ 1 oracle oinstall   112352 Jun 22 16:14 emdctl     | -rwx------ 1 oracle oinstall   112352 Jun 22 17:02 emdctl
-rwxr-xr-x 1 oracle oinstall    37130 Jun 22 16:14 emagtmc    | -rwxr-xr-x 1 oracle oinstall    54596 Jun 22 17:02 emagtm
-rwxr-xr-x 1 oracle oinstall    54596 Jun 22 16:14 emagtm     | -rwx------ 1 oracle oinstall    15461 Jun 22 17:02 emagent
-rwx------ 1 oracle oinstall    15461 Jun 22 16:14 emagent    | -rwxr-xr-x 1 oracle oinstall      656 Jun 22 17:02 commonenv.
-rwxr-xr-x 1 oracle oinstall      656 Jun 22 16:14 commonenv. | -rwx------ 1 oracle oinstall      347 Jun 22 17:02 opmnassoci
-rwx------ 1 oracle oinstall      347 Jun 22 16:14 opmnassoci | -rwxr-xr-x 1 oracle oinstall     2934 Jun 22 17:02 onsctl.opm
-rwxr-xr-x 1 oracle oinstall     2934 Jun 22 16:14 onsctl.opm | -rwxr-xr-x 1 oracle oinstall   484287 Jun 22 17:02 nmosudo
-rwxr-xr-x 1 oracle oinstall   484287 Jun 22 16:14 nmosudo    | -rwxr-xr-x 1 oracle oinstall    24725 Jun 22 17:02 nmocat
-rwxr-xr-x 1 oracle oinstall    24725 Jun 22 16:14 nmocat     | -rwxr-xr-x 1 oracle oinstall    32872 Jun 22 17:02 nmo.0
-rwxr-xr-x 1 oracle oinstall    32872 Jun 22 16:14 nmo.0      | -rwxr-xr-x 1 oracle oinstall    32872 Jun 22 17:02 nmo
-rws--x--- 1 root   oinstall    32872 Jun 22 16:14 nmo        | -rwxr-xr-x 1 oracle oinstall    58483 Jun 22 17:02 nmhs.0
-rwxr-xr-x 1 oracle oinstall    58483 Jun 22 16:14 nmhs.0     | -rwxr-xr-x 1 oracle oinstall    58483 Jun 22 17:02 nmhs
-rws--x--- 1 root   oinstall    58483 Jun 22 16:14 nmhs       | -rwxr-xr-x 1 oracle oinstall    22746 Jun 22 17:02 nmb.0
-rwxr-xr-x 1 oracle oinstall    22746 Jun 22 16:14 nmb.0      | -rwxr-xr-x 1 oracle oinstall    22746 Jun 22 17:02 nmb
-rws--x--- 1 root   oinstall    22746 Jun 22 16:14 nmb        | -rwsr-s--- 1 oracle oinstall    76234 Jun 22 17:02 emtgtctl2
-rwsr-s--- 1 oracle oinstall    76234 Jun 22 16:14 emtgtctl2  | -rwxr-xr-x 1 oracle oinstall  3895446 Jun 22 17:02 emsubagent
-rwxr-xr-x 1 oracle oinstall  3895446 Jun 22 16:14 emsubagent | -rwxr-xr-x 1 oracle oinstall    37130 Jun 22 17:02 emagtmc
-rwxr-xr-x 1 oracle oinstall  3031365 Jun 22 16:14 e2eme      | -rwxr-xr-x 1 oracle oinstall  3031365 Jun 22 17:02 e2eme
-rwx------ 1 oracle oinstall     1634 Jun 22 16:14 dmstool    | -rwx------ 1 oracle oinstall     1634 Jun 22 17:02 dmstool
-rwxr-xr-x 1 oracle oinstall     2639 Jun 22 16:14 db2gc      | -rwxr-xr-x 1 oracle oinstall     2639 Jun 22 17:02 db2gc
-rwxr-xr-x 1 oracle oinstall     5258 Jun 22 16:14 emutil     | -rwxr-xr-x 1 oracle oinstall     5258 Jun 22 17:02 emutil
-rwxr-xr-x 1 oracle oinstall     1516 Jun 22 16:14 emtgtctl   | -rwxr-xr-x 1 oracle oinstall     1516 Jun 22 17:02 emtgtctl
-rwx------ 1 oracle oinstall    19063 Jun 22 16:14 emctl.pl   | -rwx------ 1 oracle oinstall    19063 Jun 22 17:02 emctl.pl
-rwxr--r-- 1 oracle oinstall    14476 Jun 22 16:14 emctl      | -rwxr--r-- 1 oracle oinstall    14476 Jun 22 17:02 emctl
-rwxr-xr-x 1 oracle oinstall      641 Jun 22 16:14 commonenv  | -rwxr-xr-x 1 oracle oinstall      641 Jun 22 17:02 commonenv
-rwxr-xr-x 1 oracle oinstall      701 Jun 22 16:14 agentca    | -rwxr-xr-x 1 oracle oinstall      701 Jun 22 17:02 agentca
-rwxr-x--x 1 oracle oinstall 16792553 Jun 22 16:14 rman       | -rwxr-x--x 1 oracle oinstall 16792553 Jun 22 17:03 rman

}}}
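Per the dcli listings above, nmo (and nmhs/nmb) on td01db01 came back owned by oracle with no setuid bit, while the other three nodes show root-owned {{{-rws--x---}}} (4710) binaries -- consistent with root.sh not having been rerun on that node after the Jun 22 17:02 relink. A remediation sketch (the supported route is rerunning root.sh from the agent home as root; the chown/chmod below just mirrors what the healthy nodes show):
{{{
# as root on td01db01 only
cd /u01/app/oracle/product/grid/agent11g/bin
chown root:oinstall nmo nmhs nmb
chmod 4710 nmo nmhs nmb     # -rws--x---, matching td01db02-04
}}}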
Grid Control Target Maintenance: Steps to Diagnose Issues Related to "Agent Unreachable" Status [ID 271126.1]
In Grid Control Receiving Agent Unreachable Notification Emails Very Often After 10.2.0.4 Agent Upgrade [ID 752296.1]
https://blogs.oracle.com/db/entry/oracle_support_master_note_for_10g_grid_control_enterprise_manager_communication_and_upload_issues_d
* Tagging search solution design Advanced edition https://www.slideshare.net/AlexanderTokarev4/tagging-search-solution-design-advanced-edition   <- GOOD STUFF
* Faceted search with Oracle InMemory option https://www.slideshare.net/AlexanderTokarev4/faceted-search-with-oracle-inmemory-option
* P9 speed of-light faceted search via oracle in-memory option by alexander tokarev https://www.slideshare.net/AlexanderTokarev4/p9-speed-oflight-faceted-search-via-oracle-inmemory-option-by-alexander-tokarev


Oracle json caveats https://www.slideshare.net/AlexanderTokarev4/oracle-json-caveats



...
http://wikis.sun.com/display/Performance/Aligning+Flash+Modules+for+Optimal+Performance
http://blogs.oracle.com/lisan/entry/io_sizes_and_alignments_with

! console 
console.aws.amazon.com



! documentation 
https://docs.aws.amazon.com/index.html






http://guyharrison.squarespace.com/blog/2011/6/8/a-first-look-at-oracle-on-amazon-rds.html

High perf IOPS on AWS http://aws.typepad.com/aws/2012/09/new-high-performance-provisioned-iops-amazon-rds.html

service dashboard status http://status.aws.amazon.com/

''a Systematic Look at EC2 I/O'' http://blog.scalyr.com/2012/10/16/a-systematic-look-at-ec2-io/

''EC2 compute units'' http://gevaperry.typepad.com/main/2009/03/figuring-out-the-roi-of-infrastructureasaservice.html, http://stackoverflow.com/questions/4849723/a-question-about-amazon-ec2-compute-units

! official doc 
data warehousing guide - 19 SQL for Analysis and Reporting 
https://docs.oracle.com/en/database/oracle/oracle-database/19/dwhsg/sql-analysis-reporting-data-warehouses.html#GUID-20EFBF1E-F79D-4E4A-906C-6E496EECA684
https://docs.oracle.com/en/database/oracle/oracle-database/19/dwhsg/sql-analysis-reporting-data-warehouses.html#GUID-D6AC065D-670A-40E8-8DA0-E90A7307CFC2

SQL Language Reference - Analytic Functions 
https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/Analytic-Functions.html#GUID-527832F7-63C0-4445-8C16-307FA5084056
https://docs.oracle.com/en/database/oracle/oracle-database/18/sqlrf/Analytic-Functions.html#GUID-527832F7-63C0-4445-8C16-307FA5084056


{{{
Analytic functions are commonly used in data warehousing environments. In the list of analytic functions that follows, 
functions followed by an asterisk (*) allow the full syntax, including the windowing_clause.

    AVG *
    CLUSTER_DETAILS
    CLUSTER_DISTANCE
    CLUSTER_ID
    CLUSTER_PROBABILITY
    CLUSTER_SET
    CORR *
    COUNT *
    COVAR_POP *
    COVAR_SAMP *
    CUME_DIST
    DENSE_RANK
    FEATURE_DETAILS
    FEATURE_ID
    FEATURE_SET
    FEATURE_VALUE
    FIRST
    FIRST_VALUE *
    LAG
    LAST
    LAST_VALUE *
    LEAD
    LISTAGG
    MAX *
    MIN *
    NTH_VALUE *
    NTILE
    PERCENT_RANK
    PERCENTILE_CONT
    PERCENTILE_DISC
    PREDICTION
    PREDICTION_COST
    PREDICTION_DETAILS
    PREDICTION_PROBABILITY
    PREDICTION_SET
    RANK
    RATIO_TO_REPORT
    REGR_ (Linear Regression) Functions *
    ROW_NUMBER
    STDDEV *
    STDDEV_POP *
    STDDEV_SAMP *
    SUM *
    VAR_POP *
    VAR_SAMP *
    VARIANCE *

}}}
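
A quick sketch of the distinction the asterisks mark (assumes the classic SCOTT.EMP demo table): starred functions take the full windowing_clause, the rest stop at PARTITION BY / ORDER BY.
{{{
-- AVG * : full windowing_clause allowed (3-row moving average)
SELECT ename, hiredate, sal,
       AVG(sal) OVER (ORDER BY hiredate
                      ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS mov_avg
FROM   emp;

-- RANK : no windowing_clause, just partition/order
SELECT deptno, ename, sal,
       RANK() OVER (PARTITION BY deptno ORDER BY sal DESC) AS dept_rank
FROM   emp;
}}}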





https://oracle-base.com/articles/sql/articles-sql#analytic-functions
https://oracle-base.com/articles/misc/avg-and-median-analytic-functions
https://leetcode.com/problems/median-employee-salary/
[img(80%,80%)[ https://user-images.githubusercontent.com/3683046/124168274-ea7b2d00-da72-11eb-9645-8246cfebf122.png ]]



Find Answers Faster
By Jonathan Gennick and Anthony Molinaro 
http://www.oracle.com/technology/oramag/oracle/05-mar/o25dba.html

LAG
http://www.appsdba.com/blog/?p=383

CAST function
http://www.oracle.com/technetwork/database/focus-areas/manageability/diag-pack-ow08-131537.pdf
http://psoug.org/reference/cast.html

SQL – RANK, MAX Analytical Functions, DECODE, SIGN
http://hoopercharles.wordpress.com/2009/12/26/sql-–-rank-max-analytical-functions-decode-sign/

RANK - first and last 
https://oracle-base.com/articles/misc/rank-dense-rank-first-last-analytic-functions#first_and_last
https://stackoverflow.com/questions/40404497/select-latest-row-for-each-group-from-oracle

CONNECT BY - hierarchical queries 
https://www.linkedin.com/pulse/step-by-step-guide-creating-sql-hierarchical-queries-bibhas-mitra/


http://www.slideshare.net/hamcdc/sep13-analytics
http://www.odtug.com/p/cm/ld/fid=65&tid=35&sid=972
http://www.amazon.com/Window-Functions-SQL-Jonathan-Gennick-ebook/dp/B006YITKJO/ref=sr_1_2?ie=UTF8&qid=1385753351&sr=8-2&keywords=window+functions+in+sql
http://gennick.com/database/?tag=WindowSS

Analytic Functions in Oracle 8i Srikanth Bellamkonda  http://infolab.stanford.edu/infoseminar/archive/SpringY2000/speakers/agupta/paper.pdf
Enhanced subquery optimizations in Oracle http://www.vldb.org/pvldb/2/vldb09-423.pdf
Analytic SQL in 12c http://www.oracle.com/technetwork/database/bi-datawarehousing/wp-in-database-analytics-12c-2132656.pdf
Adaptive and big data scale parallel execution in oracle http://dl.acm.org/citation.cfm?id=2536235



..


https://forums.oracle.com/forums/thread.jspa?threadID=2220970

''analyze table sysadm.PSOPRDEFN                   validate structure cascade online ; ''
''andrew ng''
publications http://cs.stanford.edu/people/ang/?page_id=414
http://en.wikipedia.org/wiki/Andrew_Ng
http://cs.stanford.edu/people/ang/
http://creiley.wordpress.com/
https://www.coursera.org/course/ml

Oracle Clusterware and Application Failover Management [ID 790189.1]

Application Management http://www.oracle.com/technetwork/oem/app-mgmt/app-mgmt-084358.html


http://onlineappsdba.com/index.php/2010/08/30/time-out-while-waiting-for-a-managed-process-to-stop-http_server/

cman http://arup.blogspot.com/2011/08/setting-up-oracle-connection-manager.html
Database Resident Connection Pool (drcp) http://www.oracle-base.com/articles/11g/database-resident-connection-pool-11gr1.php	

[img(50%,50%)[ https://lh6.googleusercontent.com/-TEaGT5fnFH0/UZpDd8TgAaI/AAAAAAAAB7A/EqsT3qE_WLg/w599-h798-no/timfoxconnectionpool.JPG ]]

[img(50%,50%)[ https://lh3.googleusercontent.com/-7PfskV3MC1o/UZpHKolfKeI/AAAAAAAAB7w/wvj7c22xHWk/w458-h610-no/timfoxconnectionpool2.JPG ]]
{{{
http://www.oracle.com/technology/software/products/ias/files/ha-certification.html

How to Obtain Pre-Requisites for Oracle Application Server 10g Installation
  	Doc ID: 	Note:433077.1

Oracle Application Server 10g Release 3 (10.1.3) Support Status and Alerts
  	Doc ID: 	Note:397022.1

How to Find Certification Details for Oracle Application Server 10g
  	Doc ID: 	Note:431578.1


How to Verify 9iAS Release 2 (9.0.2) Components
  	Doc ID: 	Note:226187.1 	

What is a 9iAS (9.0.2) Farm
  	Doc ID: 	Note:218038.1

What is a 9iAS (9.0.2) Cluster
  	Doc ID: 	Note:218039.1



Steps to Maintain Oracle Application Server 10g Release 2 (10.1.2)
  	Doc ID: 	Note:415222.1




Subject: 	Installing Oracle Application Server 10g with Oracle E-Business Suite Release 11i
  	Doc ID: 	Note:233436.1
  	
Oracle Application Server 10g Release 2 (10.1.2) Support Status and Alerts
  	Doc ID: 	Note:329361.1 	
  	
Oracle Application Server 10g Examples for Critical Patch Updates
  	Doc ID: 	Note:405972.1
  	
Using Oracle Applications with a Split Configuration Database Tier on Oracle 10g Release 2
  	Doc ID: 	Note:369693.1
  	
Using Oracle Applications with a Split Configuration Database Tier on Oracle 10g Release 1
  	Doc ID: 	Note:356839.1
  	
How to Obtain Pre-Requisites for Oracle Application Server 10g Installation
  	Doc ID: 	Note:433077.1
  	
Oracle Application Server with Oracle E-Business Suite Release 11i FAQ
  	Doc ID: 	Note:186981.1
  	
How to Find Certification Details for Oracle Application Server 10g
  	Doc ID: 	Note:431578.1
  	









Oracle Server - Export Data Pump and Import DataPump FAQ
  	Doc ID: 	Note:556636.1
  	
Oracle E-Business Suite Release 11i Technology Stack Documentation Roadmap
  	Doc ID: 	Note:207159.1
  	
Using Oracle Applications with a Split Configuration Database Tier on Oracle 10g Release 1
  	Doc ID: 	Note:356839.1
  	
10g Release 2 Export/Import Process for Oracle Applications Release 11i
  	Doc ID: 	Note:362205.1
  	
Oracle Application Server with Oracle E-Business Suite Release 12 FAQ
  	Doc ID: 	Note:415007.1
  	
About Oracle E-Business Suite Applied Technology Family Pack ATG_PF.H
  	Doc ID: 	Note:284086.1
  	
Installing Oracle Application Server 10g with Oracle E-Business Suite Release 11i
  	Doc ID: 	Note:233436.1
  	
Oracle Applications Documentation Resources, Release 12
  	Doc ID: 	Note:394692.1
  	
https://metalink.oracle.com/metalink/plsql/f?p=130:14:491566816839019350::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,461709.1,1,1,1,helvetica
Implement, Upgrade and Optimize > Upgrade Guide > Oracle E-Business Suite Upgrade Resource > Oracle E-Business Suite Upgrade Resource Plan

Globalization Guide for Oracle Applications Release 12
  	Doc ID: 	Note:393861.1
  	
Oracle Applications Release 12 Technology Stack Documentation Resources
  	Doc ID: 	Note:396957.1
  	
Oracle E-Business Suite Release 12 Technology Stack Documentation Roadmap
  	Doc ID: 	Note:380482.1
  	 	
How to Migrate OAS 4.x Applications to 9iAS Release 1 (1.0.2)
  	Doc ID: 	Note:122826.1 	
  	
  	
Disaster Recovery Setup: Middle Tier and Collocated Infrastructure on the Same Server
  	Doc ID: 	Note:420824.1

Understanding OracleAS 10g High Availability - A Roadmap
  	Doc ID: 	Note:412159.1

What make and version of Cluster Managers are supported by Oracle in an OracleAS Cold Failover Cluster setup?
  	Doc ID: 	Note:303161.1

Examples of Building Highly Available, Highly Secure, Scalable OracleAS 10g Solutions
  	Doc ID: 	Note:435025.1

Storage Solutions for OracleAS 10g R2 and OracleAS 10g R3
  	Doc ID: 	Note:371251.1



9.0.2.0.1 documentation
http://download-uk.oracle.com/docs/cd/B10202_07/index.htm


Oracle9iAS Release 2 (9.0.3) Support Status and Alerts
  	Doc ID: 	Note:248328.1

Installation and Connection Issues with 9iAS 1.0.2.2 and 9i
  	Doc ID: 	Note:162843.1

9iAS Release 1 and Release 2 Install Options
  	Doc ID: 	Note:203509.1

Explanation of 9iAS Release 1 Installation Prompts
  	Doc ID: 	Note:158688.1

9iAS 1.0.2.2.2A Installation Hangs at 100% on Windows
  	Doc ID: 	Note:180418.1

Installing 9iAS Release 1 (1.0.2.2) and RDBMS 8.1.7 on the Same Windows Server
  	Doc ID: 	Note:170756.1

9iAS Release 1 (1.0.2.2) Installation Requirements Checklist for Linux
  	Doc ID: 	Note:158856.1

9iAS Release 1 (1.0.2.2) EE Installation Requirements Checklist (Microsoft Windows NT/2000)
  	Doc ID: 	Note:158863.1

ALERT: Windows NT/2000 - 9iAS v.1.0.2.2.1 Unsupported on Pentium 4
  	Doc ID: 	Note:136038.1

Checking 9iAS Release 1 Installation Requirements
  	Doc ID: 	Note:158634.1

Oracle9i Application Server (9iAS) 9.0.3.1 FAQ
  	Doc ID: 	Note:251781.1








--########## FORMS  	

The History and Methods of Running Oracle Forms Over The Web
  	Doc ID: 	Note:166640.1 	

Overview of Oracle Forms and Using the Oracle Forms Builder
  	Doc ID: 	Note:358712.1

Note 166640.1 - The History and Methods of Running Oracle Forms Over The Web
Note 2056834.6 - Does Oracle Support the Use of Emulators to Run Oracle Products?
Note 266541.1 - Patching Lifecycle / Strategy of Oracle Developer (Forms and Reports)
Note 299938.1 - Moving Forms Applications From One Platform To Another
Note 340215.1 - Required Support Files (RSF) in Oracle Forms and Reports
Note 68047.1 - Support of Terminal Emulators, Terminal Server ( e.g. Citrix) with Developer Tools
Note 73736.1 - Installing Developer on a LAN - Is This Supported?
Note 74145.1 - Developer Production and Patchset Version Numbers on MS Windows

How to Web Deploy Oracle Forms Using The Static HTML File Method?
  	Doc ID: 	Note:232371.1

Are Unix Clients Supported for Deploying Oracle Forms over the Web?
  	Doc ID: 	Note:266439.1

Changing the Oracle Password in Oracle Forms
  	Doc ID: 	Note:16365.1

Failed To Detect Change Window Password Of Oracle Forms 6
  	Doc ID: 	Note:563955.1



--########## JINITIATOR VERSIONS

oracle 9iR1	-	1.1.8.7
oracle10gr2 AS 	- 	1.3.1.22




  	
--########## PORTAL

Overview of the Portal Export-Import Process
  	Doc ID: 	Note:306785.1
  	




Note 456456.1 How to Find the Oracle Application Server 10g Upgrade and Compatibility Guide
     
Note 433077.1 How to Obtain Pre-Requisites for Oracle Application Server 10g Installation
     
Note 431028.1 Oracle Fusion Middleware Support of IPv6
     
Note 429995.1 Is it Supported to Run OracleAS Components on Different Operating Systems and Versions?
     
Note 420210.1 What User Can Be Used to Perform the IAS Patches/Upgrades?
     
Note 412439.1 Can A Manually Managed Cluster Be Installed Across Windows And Unix/Linux?
     
Note 394525.1 How to Know If a New Patch is Released ?
     
Note 400134.1 How to force Oracle Installer to use Virtual Hostname When Installing an OracleAS Instance?
     
Note 302535.1 Can Oracle AS 10g Release 2 (10.1.2) Be Installed to Upgrade Forms, Reports and Portal 10g (9.0.4)?

Note 317085.1 OracleAS 10g (10.1.2) Installation Requirements for Linux Red Hat 4.0 / Oracle Enterprise Linux





-- 9.0.3

Oracle9iAS Release 2 (9.0.3) Support Status and Alerts
  	Doc ID: 	Note:248328.1

Installation and Connection Issues with 9iAS 1.0.2.2 and 9i
  	Doc ID: 	Note:162843.1

9iAS Release 1 and Release 2 Install Options
  	Doc ID: 	Note:203509.1

Explanation of 9iAS Release 1 Installation Prompts
  	Doc ID: 	Note:158688.1

9iAS Release 1 (1.0.2.2) Installation Requirements Checklist for Linux
  	Doc ID: 	Note:158856.1

9iAS Release 1 (1.0.2.2) EE Installation Requirements Checklist (Microsoft Windows NT/2000)
  	Doc ID: 	Note:158863.1

ALERT: Windows NT/2000 - 9iAS v.1.0.2.2.1 Unsupported on Pentium 4
  	Doc ID: 	Note:136038.1

Checking 9iAS Release 1 Installation Requirements
  	Doc ID: 	Note:158634.1

Oracle9i Application Server (9iAS) 9.0.3.1 FAQ
  	Doc ID: 	Note:251781.1

Unable to Bind to Server Machine After Install of Discoverer 4.1.37
  	Doc ID: 	Note:149678.1




-- HTTP SERVER

HTTP Server Intermittently Restarted By OPMN
  	Doc ID: 	469720.1

Linux OS Service 'httpd'
  	Doc ID: 	550870.1

Is There a Way to Increase the Maximum Value of ThreadsperChild on Windows?
  	Doc ID: 	460443.1

Unable to Increase Value of Maxclients Above 256 in httpd.conf File
  	Doc ID: 	149874.1

How Apache Works
  	Doc ID: 	334763.1

OC4J_SECURITY Is Failing To Start After Problems With Database
  	Doc ID: 	Note:550631.1







-- TUNING / TROUBLESHOOTING

Troubleshooting Web Deployed Oracle Forms Performance Issues
  	Doc ID: 	363285.1

Configurable Connection Limits in Application Server Components
  	Doc ID: 	289908.1





-- AIX

Does OracleAS 10g Support AIX VIO Logical Partitioning (LPAR)?
  	Doc ID: 	Note:470083.1



-- EBUSINESS SUITE

Oracle Application Server with Oracle E-Business Suite Release 11i FAQ
  	Doc ID: 	Note:186981.1

}}}
{{{
col dest_name format a30
select inst_id, dest_name, status, error, gap_status from gV$ARCHIVE_DEST_STATUS;

SELECT name, free_mb, total_mb, free_mb/total_mb*100 "%" FROM v$asm_diskgroup;

set lines 100
col name format a60
select name, floor(space_limit / 1024 / 1024) "Size MB", ceil(space_used / 1024 / 1024) "Used MB"
from v$recovery_file_dest
order by name;

}}}

{{{

alter system set db_recovery_file_dest_size=<bigger size>;
archive log all;


crosscheck archivelog all;
list expired archivelog all; 
delete expired archivelog all;
OR
delete archivelog all completed before 'sysdate-1';

}}}
----------------------------------------------------
Archivelog Mode On RAC 10G, 11g
----------------------------------------------------

1) In Oracle 10.1, you cannot directly enable archive logging in a RAC database. Instead, you must temporarily convert your RAC database to a single-instance database to issue the command. First, change the CLUSTER_DATABASE parameter in the SPFILE to FALSE:

      ALTER SYSTEM SET CLUSTER_DATABASE = FALSE SCOPE = SPFILE;

  In Oracle 10.2 and 11g, you can run the ALTER DATABASE SQL statement to change the archiving mode in RAC as long as the database is mounted by the local instance but not open in any instance. You do not need to modify parameter settings to run this statement.
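Before making any changes, the current mode can be confirmed from any instance; a quick check using only standard views:

{{{
SQL> archive log list
SQL> select log_mode from v$database;
SQL> select instance_name, archiver from gv$instance;
}}}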

2) Set parameters

  If you are using a filesystem do this:
      alter system set log_archive_format='orcl_%t_%s_%r.arc' scope=spfile;
      alter system set log_archive_dest_1 = 'LOCATION=/u03/flash_recovery_area/ORCL/archivelog' scope=both;

  If you are using ASM do this:
      alter system set log_archive_format='orcl_%t_%s_%r.arc' scope=spfile;
      alter system set db_recovery_file_dest_size=800G scope=both;
      alter system set db_recovery_file_dest='+RECOVERY_1' scope=both;
      alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';

3) shutdown the database

      srvctl stop database -d RAC

4) Start a single instance using the following:

      srvctl start instance -d RAC -i RAC1 -o mount

5) Enable archiving as follows:

      ALTER DATABASE ARCHIVELOG;

6) In Oracle 10.1, change the CLUSTER_DATABASE parameter in the SPFILE back to TRUE:

      ALTER SYSTEM SET CLUSTER_DATABASE = TRUE SCOPE = SPFILE;

7) The next time the database is stopped and started, it will be a RAC database. Use the following command to stop the instance:

      srvctl stop instance -d RAC -i RAC1

8) start the database

      srvctl start database -d RAC

9) do other stuff: 

    -- Edit related parameters 
    alter system set control_file_record_keep_time=14; 
    alter database enable block change tracking using file '+RECOVERY_1/ORCL/orcl.bct';

    -- Configure RMAN settings and related directories
    on +RECOVERY_1... mkdir AUTOBACKUP BACKUPSET

    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    CONFIGURE BACKUP OPTIMIZATION OFF;
    CONFIGURE DEFAULT DEVICE TYPE TO DISK;
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '+RECOVERY_1/ORCL/AUTOBACKUP/%d-%F';
    CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
    CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 2 G FORMAT   '+RECOVERY_1/ORCL/BACKUPSET/%d-%T-%U';
    CONFIGURE MAXSETSIZE TO UNLIMITED;
    CONFIGURE ENCRYPTION FOR DATABASE OFF;
    CONFIGURE ENCRYPTION ALGORITHM 'AES128';
    CONFIGURE COMPRESSION ALGORITHM 'BZIP2'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+RECOVERY_1/ORCL/sncf_orcl.f';

      List of directories on +RECOVERY_1:
	Y    ARCHIVELOG/
	N    AUTOBACKUP/
	N    BACKUPSET/
	Y    CHANGETRACKING/
	Y    CONTROLFILE/





----------------------------------------------------
Archivelog Mode On RAC 9i by ORACLE-BASE
----------------------------------------------------

This article highlights the differences between resetting the archive log mode on a single-node instance and on a Real Application Clusters (RAC) database.

On a single node instance the archive log mode is reset as follows:
	ALTER SYSTEM SET log_archive_start=TRUE SCOPE=spfile;
	ALTER SYSTEM SET log_archive_dest_1='location=/u01/oradata/MYSID/archive/' SCOPE=spfile;
	ALTER SYSTEM SET log_archive_format='arch_%t_%s.arc' SCOPE=spfile;
	
	SHUTDOWN IMMEDIATE;
	STARTUP MOUNT;
	ARCHIVE LOG START;
	ALTER DATABASE ARCHIVELOG;
	ALTER DATABASE OPEN;


The ALTER DATABASE ARCHIVELOG command can only be performed if the database is mounted in exclusive mode. This means the whole clustered database must be stopped before the operation can be performed. First we set the relevant archive parameters:
	ALTER SYSTEM SET log_archive_start=TRUE SCOPE=spfile;
	ALTER SYSTEM SET log_archive_dest_1='location=/u01/oradata/MYSID/archive/' SCOPE=spfile;
	ALTER SYSTEM SET log_archive_format='arch_%t_%s.arc' SCOPE=spfile;
Since we need to mount the database in exclusive mode we must also alter the following parameter:
	ALTER SYSTEM SET cluster_database=FALSE SCOPE=spfile;
From the command line we can stop the entire cluster using:
	srvctl stop database -d MYSID
With the cluster down we can connect to a single node and issue the following commands:
	STARTUP MOUNT;
	ARCHIVE LOG START;
	ALTER DATABASE ARCHIVELOG;
	ALTER SYSTEM SET cluster_database=TRUE SCOPE=spfile;
	SHUTDOWN IMMEDIATE;
Notice that the CLUSTER_DATABASE parameter has been reset to its original value. Since the datafiles and spfile are shared between all instances, this operation only has to be done from a single node.

From the command line we can now start the cluster again using:
	srvctl start database -d MYSID
The current settings place all archive logs in the same directory. This is acceptable since the thread (%t) is part of the archive format, preventing any name conflicts between instances. If node-specific locations are required, the LOG_ARCHIVE_DEST_1 parameter can be repeated for each instance with the relevant SID prefix.
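For example, a sketch of SID-scoped destinations (paths and instance names MYSID1/MYSID2 are illustrative; the SID clause limits each setting to one instance):

{{{
-- illustrative paths; adjust per node
ALTER SYSTEM SET log_archive_dest_1='location=/u01/oradata/MYSID/arch1/' SCOPE=spfile SID='MYSID1';
ALTER SYSTEM SET log_archive_dest_1='location=/u01/oradata/MYSID/arch2/' SCOPE=spfile SID='MYSID2';
}}}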
Archiver Best Practices
  	Doc ID: 	Note:45042.1
http://www.linuxjournal.com/content/arduino-open-hardware-and-ide-combo

Python Meets the Arduino http://www.youtube.com/watch?v=54XwSUC8klI
http://makeprojects.com/Project/Arduino+and+Python%3A+Learn+Serial+Programming/667/1#.UKSnQYc70hU
http://www.arduino.cc/playground/interfacing/python
https://python.sys-con.com/node/2386200


http://designcodelearn.com/blog/2012/12/01/how-to-make-$10m-in-one-night/
https://levels.io/korea-4g/
https://www.arqbackup.com/features/
Amazon Glacier https://aws.amazon.com/glacier/
<<showtoc>>


Logical I/O(consistent get) and Arraysize relation with SQL*PLUS
http://tonguc.wordpress.com/2007/01/04/logical-ioconsistent-get-and-arraysize-relation-with-sqlplus/
{{{


Master Note for Automatic Storage Management (ASM) [ID 1187723.1]


-- HOMEs COMPATIBILITY MATRIX

Note 337737.1 Oracle Clusterware - ASM - Database Version Compatibility
Note 363254.1 Applying one-off Oracle Clusterware patches in a mixed version home environment


-- BEST PRACTICE

ASM Technical Best Practices (Doc ID 265633.1)


-- SETUP 
How To Setup ASM on Linux Using ASMLIB Disks, Raw Devices or Block Devices? [ID 580153.1] <-- mentions 10gR2 and 11gR2 configuration
Device Persistence and Oracle Linux ASMLib [ID 394959.1]


MOVING ORACLE_HOME
  	Doc ID: 	Note:28433.1

Recover database after disk loss
  	Doc ID: 	Note:230829.1

Doing Incomplete Recovery and Moving Redo Logs From Corrupted Disk
  	Doc ID: 	Note:77643.1 	

Cross-Platform Migration Using Rman Convert Database on Destination Host ( Windows 32-bit to Linux 32-bit )
  	Doc ID: 	Note:414878.1

How to recover and open the database if the archivelog required for recovery is either missing, lost or corrupted?
  	Doc ID: 	Note:465478.1

Recovering From A Lost Control File
  	Doc ID: 	Note:1014504.6

ORACLE V6 INSTALLATION PROCEDURES
  	Doc ID: 	Note:11196.1




-- TROUBLESHOOTING

How To Gather/Backup ASM Metadata In A Formatted Manner?
  	Doc ID: 	470211.1

Troubleshooting a multi-node ASMLib installation (Doc ID 811457.1)
ASM is Unable to Detect ASMLIB Disks/Devices. (Doc ID 457369.1)


HOW TO MAP ASM FILES WITH ONLINE DATABASE FILES
  	Doc ID: 	552082.1



-- TRACE DEVICES
How to identify exactly which disks on a SAN have been allocated to an ASM Diskgroup (Doc ID 398435.1)
How to map device name to ASMLIB disk (Doc ID 1098682.1)



-- IMBALANCE

Script to Report the Percentage of Imbalance in all Mounted Diskgroups (Doc ID 367445.1)



-- PERFORMANCE
Comparing ASM to Filesystem in benchmarks [ID 1153664.1]
File System's Buffer Cache versus Direct I/O [ID 462072.1]
question regarding "ASM Performance", version 10.2.0 http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:2109833600346625821
http://kevinclosson.wordpress.com/2007/02/11/what-performs-better-direct-io-or-direct-io-there-is-no-such-thing-as-a-stupid-question/
http://www.freelists.org/post/oracle-l/filesystemio-options-setting,4
http://www.freelists.org/post/oracle-l/split-block-torn-page-problem,6
ASM Inherently Performs Asynchronous I/O Regardless of filesystemio_options Parameter [ID 751463.1]







--======================
-- ASM
--======================

Problems with ASM in 10gR2
  	Doc ID: 	Note:353065.1

Deployment of very large databases (10TB to PB range) with Automatic Storage Management (ASM)
  	Doc ID: 	Note:368055.1

ASMIOSTAT Script to collect iostats for ASM disks
  	Doc ID: 	Note:437996.1

How to copy a datafile from ASM to a file system not using RMAN
  	Doc ID: 	Note:428893.1

How to upgrade ASM instance from 10.1 to 10.2 (Single Instance)
  	Doc ID: 	Note:329987.1

Unable to startup ASM instance after OS kernel upgrade
  	Doc ID: 	Note:313833.1

How To Extract Datapump File From ASM Diskgroup To Local Filesystem?
  	Doc ID: 	Note:566941.1

How To Determinate If An EMCPOWER Partition Is Valid For ASMLIB?
  	Doc ID: 	Note:566676.1

HOW TO MAP ASM FILES WITH ONLINE DATABASE FILES
  	Doc ID: 	Note:552082.1

How To Add a New Disk(s) to An Existing Diskgroup on RAC (Best Practices).
  	Doc ID: 	Note:557348.1

Diagnosing Disk not getting discovered in ASM
  	Doc ID: 	Note:311926.1

How To Gather/Backup ASM Metadata In A Formatted Manner?
  	Doc ID: 	Note:470211.1

How To Move The Database To Different Diskgroup (Change Diskgroup Redundancy)
  	Doc ID: 	Note:438580.1

Tips On Installing and Using ASMLib on Linux
  	Doc ID: 	Note:394953.1

RHEL5 and ASMLib
  	Doc ID: 	Note:434775.1

Oracle Linux ASMLib README Documentation
  	Doc ID: 	Note:454035.1

ASM Using Files Instead of Real Devices on Linux
  	Doc ID: 	Note:266028.1

CHECKSUMS DIFFER FOR ASM DATAFILES WHEN COPIED USING XDB/FTP
  	Doc ID: 	Note:459819.1

How to rename/move a datafile in the same ASM diskgroup
  	Doc ID: 	Note:564993.1

How To Remove An Empty ASM System Directory
  	Doc ID: 	Note:444812.1

Database Instance Crashes In Case Of Path Offlined In Multipath Storage
  	Doc ID: 	Note:555371.1

How To Change ASM SYS PASSWORD ?
  	Doc ID: 	Note:452076.1

ASM Instances Are Not Mounted Consistently
  	Doc ID: 	Note:351114.1

How To Delete Archive Log Files Out Of +Asm?
  	Doc ID: 	Note:300472.1

ENABLE/DISABLE ARCHIVELOG MODE AND FLASH RECOVERY AREA IN A DATABASE USING ASM
  	Doc ID: 	Note:468984.1

Unable To Make Disks Available From Asmlib Using SAN
  	Doc ID: 	Note:302020.1

Oracle ASM and Multi-Pathing Technologies
  	Doc ID: 	Note:294869.1

How to rename ASM disks?
  	Doc ID: 	Note:418542.1

Does Asm Survive Change Of Disc Path?
  	Doc ID: 	Note:466231.1

Steps To Migrate/Move a Database From Non-ASM to ASM And Vice-Versa
  	Doc ID: 	Note:252219.1

Raw Devices and Cluster Filesystems With Real Application Clusters
  	Doc ID: 	Note:183408.1

How To Resize An ASM Disk On Release 10.2.0.X?
  	Doc ID: 	Note:470209.1

ASM Fast Mirror Resync - Example To Simulate Transient Disk Failure And Restore Disk
  	Doc ID: 	Note:443835.1


----------------------------------------------------------------------------------




Note:294869.1 Oracle ASM and Multi-Pathing Technologies
Note:461079.1 ASM does not discover disk(s) on AIX platform
Note:353761.1 Assigning a PVID To An Existing ASM Disk Corrupts the ASM Disk Header
Note:279353.1 Multiple 10g Oracle Home installation - ASM
Note:265633.1 ASM Technical Best Practices
Note:243245.1 10G New Storage Features and Enhancements
Note:282036.1 Minimum Software Versions and Patches Required to Support Oracle Products on IBM pSeries
Note:249992.1 New Feature on ASM (Automatic Storage Manager)
Note:252219.1 Steps To Migrate Database From Non-ASM to ASM And Vice-Versa
Note:303760.1 ASM & ASMlib Using Files Instead of Real Devices on Linux
Note:266028.1 ASM Using Files Instead of Real Devices on Linux
Note:471877.1 Raw Slice Not Showing Up When Trying To Add In Existing ASM Diskgroup
Note:551205.1 11g ASM New Features Technical White Paper
Note:402526.1 Asm Devices Are Still Held Open After Dismount or Drop
Note:452076.1 How To Change ASM SYS PASSWORD 
Note:340277.1 How to connect to ASM instance from a remote client (SQL*NET)
Note:351866.1 How To Reclaim Asm Disk Space
Note:470573.1 How To Delete SPFILE in +ASM DISKGROUP And Recreate in $ORACLE_HOME Directory
Note:458419.1 How to Bind RAW devices to Physical Partitions on Linux to be used by ASM
Note:469082.1 How To Setup ASM (10.2) on Windows Platforms
Note:471055.1 OUI Complains That ASM Is Not Release 2 While Installing 10g Database
Note:390274.1 How to move a datafile from a file system to ASM
Note:460909.1 Asm Can'T See Disks After Upgrade to 10.2.0.3 on Itanium
Note:382669.1 Duplicate database from non ASM to ASM (vise versa) to a different host
Note:413389.1 Asynchronous I/O not reported in /proc/slabinfo KIOCB slabdata
Note:437555.1 Created ASM Stamped Disks But Unable To Create Diskgroup
Note:370355.1 How to upgrade an ASM Instance From 10.2.0 lower version To higher version
Note:452924.1 How to Prepare Storage for ASM
Note:313387.1 HOWTO Which Disks Are Handled by ASMLib Kernel Driver
Note:331661.1 How to Re-configure Asm Disk Group
Note:428893.1 How to copy a datafile from ASM to a file system not using RMAN
Note:416046.1 ASM - Internal Handling of Block Corruptions
Note:340848.1 Performing duplicate database with ASM-OMF-RMAN
Note:342234.1 How to relocate an spfile from one ASM diskgroup to another on a RAC environment
Note:330084.1 Install: How To Migrate Oracle10g R1 ASM Database To 10g R2
Note:209850.1 RAC Survival Kit ORA-29702
Note:467354.1 ASM Crashes When Rebooting a Server With ORA-29702 Error
Note:334726.1 Cannot configure ASM because CSS Does Not Start on AIX 5L



STARTUP
Note:404728.1 Automatic Database Startup Does not Work With ASM through DBSTART.
Note:264235.1 ORA-29701 On Reboot When Instance Uses Automatic Storage Management (ASM)



DBMS_FILE_TRANSFER
Note:330103.1 How to Move Asm Database Files From one Diskgroup To Another


Async IO
Note:432854.1 Asynchronous IO Support on OCFS-OCFS2 and Related Settings filesystemio_options, disk_asynch_io
Note 237299.1 HOW TO CHECK IF ASYNCHRONOUS IO IS WORKING ON LINUX


Windows
Note 331796.1 How to setup ASM on Windows


11g
Note:429098.1 11g ASM New Feature
Note:443835.1 ASM Fast Mirror Resync - Example To Simulate Transient Disk Failure And Restore Disk
Note:445037.1 ASM Fast Rebalance





Note 199457.1 Step-By-Step Installation of RAC on IBM AIX (RS/6000)
Note:240575.1 RAC on Linux Best Practices
Note:245356.1 Oracle9i - AIX5L Installation Tips


Note:29676.1 Making the decision to use raw devices
Note:38281.1 RAID and Oracle - 20 Common Questions and Answers
ASM & ASMlib Using Files Instead of Real Devices on Linux
  	Doc ID: 	Note:303760.1
Configuring Oracle ASMLib on Multipath Disks
  	Doc ID: 	Note:309815.1
Tips On Installing and Using ASMLib on Linux
  	Doc ID: 	Note:394953.1
Raw Devices on Linux
  	Doc ID: 	Note:224302.1









-- PERFORMANCE 

File System's Buffer Cache versus Direct I/O
  	Doc ID: 	Note:462072.1

ASMIOSTAT Script to collect iostats for ASM disks
  	Doc ID: 	437996.1










Note:341782.1 Linux Quick Reference
Note:264736.1 How to Create a Filesystem inside of a Linux File (loop device)




-- 11gR2 BUG DETECT ASM ON OCR
Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path [ID 1210863.1]
FAQ ASMLIB CONFIGURE,VERIFY, TROUBLESHOOT [ID 359266.1]
http://oraclue.com/2010/11/09/grid-11-2-0-2-install-nightmare/
http://gjilevski.wordpress.com/2010/10/03/fresh-oracle-11-2-0-2-grid-infrastructure-installation-prvf-5150-prvf-5184/
PRVF-5449 : Check of Voting Disk location "ORCL:(ORCL:)" failed [ID 1267569.1]



-- DROP DISK ISSUE, BUG

ORA-15041 V$ASM_DISK Shows HUNG State for Dropped Disks
  	Doc ID: 	Note:419014.1

ORA-15041 IN A DISKGROUP ALTHOUGH FREE_MB REPORTS SUFFICIENT SPACE
  	Doc ID: 	Note:460155.1



-- DROP/CREATE

How To Add Back An ASM Disk or Failgroup (Normal or High Redundancy) After A Transient Failure Occurred (On Release 10.2. or 10.1)? (Doc ID 946213.1)




-- BUG FIXES ON AIX 64bit 10.2.0.2

Note 433399.1-Could not add datafile due to ORA-01119, ORA-17502 and ORA-15041

	1. Apply fix for Patch 4691191.
	OR
	2. Apply 10.2.0.3.




-- AIX

Subject: 	ASM does not discover disk(s) on AIX platform
  	Doc ID: 	Note:461079.1 	Type: 	PROBLEM
  	Last Revision Date: 	24-JAN-2008 	Status: 	PUBLISHED


-- UPGRADE ASM

How to upgrade ASM instance from 10.1 to 10.2 (Single Instance)
  	Doc ID: 	Note:329987.1
  	
How To Upgrade ASM from 10.2 to 11.1 (single Instance configuration / Non-RAC)?
 	Doc ID:	Note:736121.1
 	
How To Upgrade ASM from 10.2 to 11.1 (RAC)?
 	Doc ID:	Note:736127.1
 	
How to upgrade an ASM Instance From 10.2.0 lower version To higher version? (e.g. from 10.2.0.1 to patchset 10.2.0.2)
 	Doc ID:	Note:370355.1
 	
Install: How To Migrate Oracle10g R1 ASM Database To 10g R2
 	Doc ID:	Note:330084.1
  	
Asm Can'T See Disks After Upgrade to 10.2.0.3 on Itanium
 	Doc ID:	Note:460909.1
 	


-- UNINSTALL

How to cleanup ASM installation (RAC and Non-RAC)
  	Doc ID: 	Note:311350.1




-- QUERY

ASM Extent Size 
  Doc ID:  Note:465039.1 

How To Identify If A Disk/Partition Is Still Used By ASM, Has Been Used by ASM Or Has Not Been Used by ASM (Unix/Linux)?
  	Doc ID: 	603210.1



-- DEBUG

Information to gather when diagnosing ASM space issues
 	Doc ID:	Note:351117.1

How To Gather/Backup ASM Metadata In A Formatted Manner? 
  Doc ID:  Note:470211.1 
 	
 	
 	
-- COMPATIBLE.ASM

Bug 7173616 - CREATE DISKGROUP with compatible.asm=10.2 fails (OERI:kfdAllocateAu_00)
 	Doc ID:	Note:7173616.8
 	
 	
-- 11g NEW FEATURE

11g ASM New Feature
 	Doc ID:	Note:429098.1
 	
 	


-- RESIZE

How to resize a physical disk or LUN and an ASM DISKGROUP
  	Doc ID: 	311619.1



-- RAC ASM

How to Convert a Single-Instance ASM to Cluster ASM
  	Doc ID: 	452758.1




-- LABEL

Adding The Label To ASMLIB Disk Using 'oracleasm renamedisk' Command
  	Doc ID: 	280650.1 	


-- REMOVE INSTANCE

How to remove an ASM instance and its corresponding database(s) on WINDOWS?
  	Doc ID: 	342530.1 	



-- ADD DISK

How To Add a New Disk(s) to An Existing Diskgroup on RAC (Best Practices). (Doc ID 557348.1)



-- ADD DISK WINDOWS

RAC Assurance Support Team: RAC Starter Kit and Best Practices (Windows) [ID 811271.1]
How To Setup ASM (10.2) on Windows Platforms [ID 469082.1]
ORA-17502 and ORA-15081 when creating a datafile on a ASM diskgroup [ID 369898.1]
New Partitions in Windows 2003 RAC Environments Not Visible on Remote Nodes [ID 454607.1]
RAC: Frequently Asked Questions [ID 220970.1]
Oracle Tools Available for Working With RAW Partitions on Windows Platforms [ID 555645.1]  	 
How to Extend A Raw Logical Volume in Windows [ID 555273.1]  	 
OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE), including moving from RAW Devices to Block Devices. [ID 428681.1]   <-- helpful
Asmtoolg Generates An Access Violation When Stamping Disks [ID 443635.1]
Disk Is not Discovered in ASM, Diskgroup Creation Fails with Ora-15018 Ora-15031 Ora-15014 [ID 431013.1]  	 



-- REMOVE DISK

How to Dynamically Add and Remove SCSI Devices on Linux
  	Doc ID: 	603868.1


-- RESYNC

ASM 11g New Features - How ASM Disk Resync Works. (Doc ID 466326.1)



-- RENAME DISK

How to rename ASM disks? (Doc ID 418542.1)
Adding The Label To ASMLIB Disk Using 'oracleasm renamedisk' Command (Doc ID 280650.1)
Oracleasm Createdisk Fails: Device '/dev/emcpoweraxx Is Not A Partition [Failed] (Doc ID 469163.1)
New ASMLib / oracleasm Disk Gets "header_status=Unknown" - Cannot be Added to Diskgroup (Doc ID 391136.1)



-- PASSWORD

How To Change ASM SYS PASSWORD ?
  	Doc ID: 	452076.1


-- CSS MISCOUNT

How to Increase CSS Misscount in single instance ASM installations
 	Doc ID:	Note:729878.1
 	
10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout
 	Doc ID:	Note:284752.1



-- CLEAN UP ASM INSTALL, UNINSTALL

How to cleanup ASM installation (RAC and Non-RAC)
  	Doc ID: 	311350.1



-- RECREATE ASM DISKGROUPS

Steps to Re-Create ASM Diskgroups
  	Doc ID: 	Note:268481.1



-- DUPLICATE CONTROLFILE

Note 345180.1 - How to duplicate a controlfile when ASM is involved



-- MULTIPLE ASM HOME

Multiple 10g Oracle Home installation - ASM
  	Doc ID: 	279353.1



-- 11g CP command

ASMCMD cp command fails with ORA-15046
  	Doc ID: 	452158.1

ASMCMD - New commands in 11g
  	Doc ID: 	451900.1

Copying File Using ASMCMD Copy Command Failed With ASMCMD-08010
  	Doc ID: 	786364.1

Unable To Copy Directory Using ASMCMD Cp -r Command
  	Doc ID: 	829040.1

Asmcmd CP Command Can Not Copy Files Larger Than 2 GB
  	Doc ID: 	786258.1



-- EXPDP

Creating dumpsets in ASM
  	Doc ID: 	559878.1

How To Extract Datapump File From ASM Diskgroup To Local Filesystem?
  	Doc ID: 	566941.1



-- MIGRATION

How to Prepare Storage for ASM
  	Doc ID: 	452924.1

Exact Steps To Migrate ASM Diskgroups To Another SAN Without Downtime.
  	Doc ID: 	837308.1

Steps To Migrate/Move a Database From Non-ASM to ASM And Vice-Versa
  	Doc ID: 	252219.1

How To Migrate From OCFS To ASM
  	Doc ID: 	579468.1

Install: How To Migrate Oracle10g R1 ASM Database To 10g R2
  	Doc ID: 	330084.1

Migrating Raw Devices to ASMLib on Linux
  	Doc ID: 	394955.1

How To Migrate ASMLIB devices to Block Devices (non-ASMLIB)?
  	Doc ID: 	567508.1



-- FAILOVER

Does Oracle Support Failover Of Asm Based Instance
  	Doc ID: 	762674.1



-- MOVE FILES IN ASM

How to move a datafile from a file system to ASM [ID 390274.1]
How to Copy Archivelog Files From ASM to Filesystem and vice versa [ID 944831.1]
How to transfer backups from ASM to filesystem when restoring to a new host [ID 345134.1]
How To Move Controlfile To ASM [ID 468458.1]
Can RMAN duplex backups to Flash Recovery Area and a Disk location [ID 434222.1]
How to restore archive logs to an alternative location when they already reside on disk [ID 399894.1]
How To Backup Database When Files Are On Raw Devices/File System [ID 469716.1]
RMAN10g: backup copy of database [ID 266980.1]
How To Move The Database To Different Diskgroup (Change Diskgroup Redundancy) [ID 438580.1]  	 



-- 11gR2, Grid Infra
ASM 11.2 Configuration KIT (ASM 11gR2 Installation & Configuration, Deinstallation, Upgrade, ASM Job Role Separation. [ID 1092213.1]
11gR2 Clusterware and Grid Home - What You Need to Know [ID 1053147.1]
Pre 11.2 Database Issues in 11gR2 Grid Infrastructure Environment [ID 948456.1]
Database Creation on 11.2 Grid Infrastructure with Role Separation ( ORA-15025, KFSG-00312, ORA-15081 ) [ID 1084186.1]




-- ACFS - backup and recovery, rman acfs
https://forums.oracle.com/forums/thread.jspa?threadID=2175933
http://download.oracle.com/docs/cd/E11882_01/server.112/e16102/asmfiles.htm#g1030822   <-- supported files on acfs



-- Backing Up an ASM Instance [ID 333257.1]




-- RAW DEVICES

Raw Devices and Cluster Filesystems With Real Application Clusters
  	Doc ID: 	183408.1



-- ASM SEPARATE HOME

DBCA Rejects Asm Password When Creating a New Database
  	Doc ID: 	431312.1

DBCA Is Unable To Connect To +ASM Instance With Error : Invalid Credentials
  	Doc ID: 	277223.1



Diskgroup Mount with Long ASMLib Labels Fails with ORA-15040 ORA-15042
  	Doc ID: 	787082.1

Placeholder for AMDU binaries and using with ASM 10g
  	Doc ID: 	553639.1

How To Migrate ASMLIB devices to Block Devices (non-ASMLIB)?
  	Doc ID: 	567508.1

Bug 5039964 - ASM disks show as provisioned although kfed shows valid disk header
  	Doc ID: 	5039964.8

ORA-15063 When Mounting a Diskgroup After Storage Cloning
  	Doc ID: 	784776.1

ORA-15036 When Starting An ASM Instance
  	Doc ID: 	553319.1

CASE STUDY - WHAT CAUSED ERROR ora-1186 ora-1122 on RAC with ASM
  	Doc ID: 	333816.1

ASM Using Files Instead of Real Devices on Linux
  	Doc ID: 	266028.1





}}}


http://www.oaktable.net/content/auto-dop-and-direct-path-inserts
http://www.pythian.com/news/27867/secrets-of-oracles-automatic-degree-of-parallelism/
http://uhesse.wordpress.com/2011/10/12/auto-dop-differences-of-parallel_degree_policyautolimited/
http://uhesse.wordpress.com/2009/11/24/automatic-dop-in-11gr2/
http://www.rittmanmead.com/2010/01/in-memory-parallel-execution-in-oracle-database-11gr2/


! AUTO DOP
{{{
delete from resource_io_calibrate$;
insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);
commit;
alter system set parallel_degree_policy=AUTO scope=both sid='*';
alter system flush shared_pool;
select 'alter table '||owner||'.'||table_name||' parallel (degree default);' from dba_tables where owner='<app schema>'
}}}

! AUTO DOP + PX queueing, with no in-mem PX
{{{
delete from resource_io_calibrate$;
insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);
commit;
alter system set parallel_degree_policy=LIMITED scope=both sid='*';
alter system set "_parallel_statement_queuing"=TRUE scope=both sid='*';
}}}

''and some other config variations....''
<<<
!AUTO DOP PATH AND IGNORE HINTS
{{{
1) Calibrate the IO
 
delete from resource_io_calibrate$;
insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);
commit;
 
2) Parallel_Degree_policy=limited
3) _parallel_statement_queueing=true
4) alter session set "_optimizer_ignore_hints" = TRUE ;
5) set the table and index to “default” degree
}}}
 
! NO AUTO DOP PATH AND IGNORE HINTS
{{{
1) Calibrate the IO
 
delete from resource_io_calibrate$;
insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);
commit;
 
2) Resource manager directive to limit the PX per session  = per session 4
3) alter session set "_optimizer_ignore_hints" = TRUE ;
4) _parallel_statement_queueing=true
}}}
 
! NO AUTO DOP PATH WITHOUT IGNORING HINTS
{{{
1) Calibrate the IO
 
delete from resource_io_calibrate$;
insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);
commit;
 
2) Resource manager directive to limit the PX per session  = per session 4
3) _parallel_statement_queueing=true
}}}
<<<


! Monitoring
<<<
! determine if PX underscore params are set
{{{
select a.ksppinm name, b.ksppstvl value
from x$ksppi a, x$ksppsv b
where a.indx = b.indx
and a.ksppinm in ('_parallel_cluster_cache_pct','_parallel_cluster_cache_policy','_parallel_statement_queuing','_optimizer_ignore_hints')
order by 1,2
/
}}}

! list if SQLs are using in-mem PX
{{{
The fourth column indicates whether the cursor was satisfied using In-Memory PX; if the 
number of parallel servers is greater than zero but the bytes eligible for predicate offload is
zero, it’s a good indication that In-Memory PX was in use.

select ss.sql_id,
sum(ss.PX_SERVERS_EXECS_total) px_servers,
decode(sum(ss.io_offload_elig_bytes_total),0,'No','Yes') offloadelig,
decode(sum(ss.io_offload_elig_bytes_total),0,'Yes','No') impx,
sum(ss.io_offload_elig_bytes_total)/1024/1024 offloadbytes,
sum(ss.elapsed_time_total)/1000000/sum(ss.px_servers_execs_total) elps,
dbms_lob.substr(st.sql_text,60,1) st
from dba_hist_sqlstat ss, dba_hist_sqltext st
where ss.px_servers_execs_total > 0
and ss.sql_id=st.sql_id
and upper(st.sql_text) like '%IN-MEMORY PX T1%'
group by ss.sql_id,dbms_lob.substr(st.sql_text,60,1)
order by 5
/
}}}
<<<



! Quick PX test case
{{{

-- check the degree and row count of the test table
select degree,num_rows from dba_tables
where owner='&owner' and table_name='&table_name';

#!/bin/sh
# driver script: fires off five concurrent sqlplus sessions running px_test.sql
for i in 1 2 3 4 5
do
nohup sqlplus oracle/oracle @px_test.sql $i &
done

-- px_test.sql: times a count(*) on the test table
set serveroutput on size 20000
variable n number
exec :n := dbms_utility.get_time;
spool autodop_&1..lst
select /* queue test 0 */ count(*) from big_table;
begin
dbms_output.put_line
( (round((dbms_utility.get_time - :n)/100,2)) || ' seconds' );
end;
/
spool off
exit

}}}
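The driver loop above depends on sqlplus and the px_test.sql script, but the fan-out-and-wait shell pattern itself can be exercised anywhere; in this sketch a placeholder job (writing a marker file) stands in for the sqlplus call, and the file names are made up:

```shell
# Same fan-out pattern as the px_test driver: launch N background jobs,
# then "wait" blocks until every backgrounded session has finished.
rm -f px_demo_*.lst
for i in 1 2 3 4 5
do
  ( echo "session $i done" > px_demo_$i.lst ) &
done
wait   # without this, the script would exit before the sessions complete

ls px_demo_*.lst | wc -l
```

In the real driver there is no `wait`, so the shell returns immediately and the sessions run detached via nohup; add `wait` when a later step needs all spool files present.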

Related articles:
http://jamesmorle.wordpress.com/2010/06/02/log-file-sync-and-awr-not-good-bedfellows/
http://rnm1978.wordpress.com/2010/09/14/the-danger-of-averages-measuring-io-throughput/


Investigate the metric tables, especially the file IO metric, which comes in 10-minute deltas.

{{{
-- note: average_read_time is in centiseconds; multiply by 10 to convert to ms
alter session set nls_date_format='dd-mm-yyyy hh24:mi';
select begin_time, end_time, file_id, 
physical_reads reads, 
nvl(physical_reads,0)/603 rps, 
average_read_time*10 atpr, 
nvl(physical_block_reads,0) / decode(nvl(physical_reads,0),0,to_number(NULL),physical_reads) bpr, 
physical_writes writes,
nvl(physical_writes,0)/603 wps,
average_write_time*10 atpwt,  
nvl(physical_block_writes,0)/ decode(nvl(physical_writes,0),0,to_number(NULL),physical_writes) bpw,
physical_reads + physical_writes ios,
nvl((physical_reads + physical_writes),0) / 600 iops
from v$filemetric_history order by 1 asc;
}}}


{{{
sys@IVRS> set lines 300
drop table ioms;
create table ioms as select 
                                      file#
                                      , nvl(b.phyrds,0)  phyrds
                                      , nvl(b.readtim,0)  readtim
                                      , nvl(b.phywrts,0)  phywrts
                                      , nvl(b.phyblkrd,0) phyblkrd
from v$filestat b;

exec dbms_lock.sleep(seconds => 600);

select 
                                        e.file#
                                      , nvl(e.phyrds,0)  ephyrds
                                      , nvl(e.readtim,0)  ereadtim
                                      , nvl(e.phywrts,0)  ephywrts
                                      , nvl(e.phyblkrd,0) ephyblkrd
                                      , e.phyrds - i.phyrds                       reads
                                      , (e.phyrds - nvl(i.phyrds,0))/ 603                rps
                                      , decode ((e.phyrds - nvl(i.phyrds, 0)), 0, to_number(NULL), ((e.readtim  - nvl(i.readtim,0)) / (e.phyrds   - nvl(i.phyrds,0)))*10)         atpr_ms
                                      , decode ((e.phyrds - nvl(i.phyrds, 0)), 0, to_number(NULL), (e.phyblkrd - nvl(i.phyblkrd,0)) / (e.phyrds   - nvl(i.phyrds,0)) )             bpr
                                      , e.phywrts - nvl(i.phywrts,0)                    writes
                                      , (e.phywrts - nvl(i.phywrts,0))/ 603             wps
                                      , (e.phyrds  - nvl(i.phyrds,0)) + (e.phywrts - nvl(i.phywrts,0))                     ios,
                                     ((e.phyrds  - nvl(i.phyrds,0)) + (e.phywrts - nvl(i.phywrts,0))) / 600 iops     
from v$filestat e, ioms i
where e.file# = i.file#;sys@IVRS> 
Table dropped.

sys@IVRS>   2    3    4    5    6    7  
Table created.

sys@IVRS> sys@IVRS> 

PL/SQL procedure successfully completed.

sys@IVRS> sys@IVRS>   2    3    4    5    6    7    8    9   10   11   12   13   14   15   16  
     FILE#    EPHYRDS	EREADTIM   EPHYWRTS  EPHYBLKRD	    READS	 RPS	ATPR_MS        BPR     WRITES	     WPS	IOS	  IOPS
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
	 1	 7374	   12818	446	 10365		1 .001658375	      0 	 1	   26 .043117745	 27	  .045
	 2	   62	     144	472	    62		0	   0				   26 .043117745	 26 .043333333
	 3	 2990	    4699	907	  9525		0	   0				   10 .016583748	 10 .016666667
	 4	 8803	    4715       1104	 37702		9 .014925373 6.66666667 	 1	   78 .129353234	 87	  .145
	 5	   66	     115	  9	    93		0	   0				    0	       0	  0	     0
	 6	    5	       6	  1	     5		0	   0				    0	       0	  0	     0
	 7	    5	       1	  1	     5		0	   0				    0	       0	  0	     0
	 8	    5	       2	  1	     5		0	   0				    0	       0	  0	     0
	 9	    5	       2	  1	     5		0	   0				    0	       0	  0	     0
	10	    5	       2	  1	     5		0	   0				    0	       0	  0	     0
	11	    5	       2	  1	     5		0	   0				    0	       0	  0	     0
	12	    5	      15	  1	     5		0	   0				    0	       0	  0	     0
	13	 2341	    2333       1297	 10584	       16 .026533997	  5.625 	 1	   76 .126036484	 92 .153333333

13 rows selected.
}}}


{{{
BEGIN_TIME	 END_TIME	     FILE_ID AVERAGE_READ_TIME*10 AVERAGE_WRITE_TIME*10 PHYSICAL_READS PHYSICAL_WRITES PHYSICAL_BLOCK_READS PHYSICAL_BLOCK_WRITES
---------------- ---------------- ---------- -------------------- --------------------- -------------- --------------- -------------------- ---------------------
17-06-2010 01:28 17-06-2010 01:38	  12			0		      0 	     0		     0			  0			0
17-06-2010 01:28 17-06-2010 01:38	  11			0		      0 	     0		     0			  0			0
17-06-2010 01:28 17-06-2010 01:38	  10			0		      0 	     0		     0			  0			0
17-06-2010 01:28 17-06-2010 01:38	   9			0		      0 	     0		     0			  0			0
17-06-2010 01:28 17-06-2010 01:38	   8			0		      0 	     0		     0			  0			0
17-06-2010 01:28 17-06-2010 01:38	   7			0		      0 	     0		     0			  0			0
17-06-2010 01:28 17-06-2010 01:38	  13		    5.625		      0 	    16		    76			 16		      179
17-06-2010 01:28 17-06-2010 01:38	   2			0		      0 	     0		    26			  0		       83
17-06-2010 01:28 17-06-2010 01:38	   3			0		      0 	     0		    10			  0		       10
17-06-2010 01:28 17-06-2010 01:38	   4	       6.66666667		      0 	     9		    78			  9		       93
17-06-2010 01:28 17-06-2010 01:38	   5			0		      0 	     0		     0			  0			0
17-06-2010 01:28 17-06-2010 01:38	   6			0		      0 	     0		     0			  0			0
17-06-2010 01:28 17-06-2010 01:38	   1			0		      0 	     1		    28			  1		       30
}}}
{{{
Karl@Karl-LaptopDell /cygdrive/c/Users/Karl/Desktop
$ cat awk.txt
11 12 13
21 22 23
31 32 33

Karl@Karl-LaptopDell /cygdrive/c/Users/Karl/Desktop
$ cat awk.txt | awk 'FNR == 2 {print $2}'				<-- output line 2 column 2
22

$ cat awk.txt | awk '$2=="12"||$2=="32" {print $0}'		<-- filter rows on column 2 with "12" or "32"
11 12 13
31 32 33

$ cat awr_genwl.txt | awk '{print $0}'						<-- output all rows and columns

$ cat awr_genwl.txt | awk '{print $2}'						<-- output only column 2

$ cat awr_genwl.txt | awk '$4=="1" {print $0}'			<-- filter on column 4 (instance number) with value "1" and output all rows with that value

$ cat awr_genwl.txt | awk '$4=="1" &&  $12>10 {print $0}'		<-- filter on column 4 and 12 (AAS) with values "1" and "AAS greater than 10" and output all rows with that value
}}}

{{{
cat awr_iowlexa.txt | awk '$6>1 {print $1,$2,$3,$6}' | less    <-- will show snap_id, tm, aas > 1
cat awr_topsqlexa2.txt | awk '$6>1 {print $1,$6,$27,$28,$29,$30}' | less     <-- will show snap_id, sql, aas > 1 
}}}
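The awk filter patterns above can be sanity-checked against a tiny fixture; the file names and values below are made up for illustration:

```shell
# Fixture shaped like the awk.txt example earlier in this section.
printf '11 12 13\n21 22 23\n31 32 33\n' > awk_demo.txt

# FNR == 2 matches only line 2; $2 picks column 2 of that line.
awk 'FNR == 2 {print $2}' awk_demo.txt > awk_demo_cell.txt

# Keep only rows whose column 2 is "12" or "32".
awk '$2=="12" || $2=="32"' awk_demo.txt > awk_demo_rows.txt

cat awk_demo_cell.txt
cat awk_demo_rows.txt
```

The same `$4=="1" && $12>10` style conditions from the awr examples compose the two ideas: equality filters on one column, numeric comparison on another.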


''awk dcli vmstat output''
dcli -l root -g cell_group vmstat 1 > oslogs.txt
{{{
-- add the usr and sys columns
 cat osload.txt | grep cx02db01 | awk '{print $14 + $15}' > cx02db01.txt

-- add the usr and sys columns for storage cells
cat osload.txt | egrep "cx02cel01|cx02cel02|cx02cel03" | awk '{print $14 + $15}' > storagecells.txt


-- discover the bad lines in vmstat output
cat snap_724-725_1057-1107.txt | grep cx02db01 | grep -v memory | grep -v buff | perl -p -e "s|  | |g" -| perl -p -e "s|  | |g" - | perl -p -e "s|  | |g" - | awk 'BEGIN {x=1}; {print x++ " " $1 " " $2 " " $3 " " $4 " " $5 " " $6 " " $7 " " $8 " " $9 " " $10 " " $11 " " $12 " " $13 " " $14 " " $15 " " $16;}' | column -t | less


-- gives you bad vmstat lines where cs and us columns are not aligning
cat snap_724-725_1057-1107.txt | grep cx02db01 | grep -v memory | grep -v buff | perl -p -e "s|  | |g" -| perl -p -e "s|  | |g" - | perl -p -e "s|  | |g" - | awk 'BEGIN {x=1}; {print x++ " " $1 " " $2 " " $3 " " $4 " " $5 " " $6 " " $7 " " $8 " " $9 " " $10 " " $11 " " $12 " " $13 " " $14 " " $15 " " $16;}' | column -t | grep '\:[0-9]' | less


-- will show the data bug !!!!
cat oslogs.txt | awk 'BEGIN{buf=""} /[0-9]:[0-9][0-9]:[0-9]/{buf=$0} /cx02db01/{print $0,buf}' | column -t | grep '[a-zA-Z0-9]\{8\}:[0-9]\{2\}' | wc -l


-- prints the usr sys columns
cat fixed.txt | awk '{print $14, $15, $16}'

-- prints usr without the blank lines and the "us" header
cat start.txt | awk '{ print $1 }' | grep . | grep -v us | less


-- discard zero 
cat finalout.txt | awk ' $0>0 { print $0 }' | less
}}}
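The `buf` trick used above to stitch the last-seen timestamp onto each host line can be demonstrated on synthetic input; the host names and figures below are invented:

```shell
# Synthetic dcli-style log: timestamp lines interleaved with per-host rows.
cat > dcli_demo.txt <<'EOF'
10:15:01
cx02db01: 1 0 0 99
cx02cel01: 2 1 0 97
10:15:02
cx02db01: 3 2 0 95
EOF

# Remember the last timestamp line in buf, then append it to every
# line belonging to the host we care about.
awk 'BEGIN{buf=""} /[0-9]:[0-9][0-9]:[0-9]/{buf=$0} /cx02db01/{print $0, buf}' \
  dcli_demo.txt > dcli_joined.txt
cat dcli_joined.txt
```

Each output row now carries its own timestamp, which is what makes the later per-interval CPU analysis possible.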

''Final''
{{{
# vmstat.sh
# usage:
# sh vmstat.sh <text file output> <hostname filter>
# sh vmstat.sh oslogs.txt cx02db01

# cleanup
rm $2_datapoints.txt &> /dev/null

# regex stuff
foo=`echo "$2"| wc -c`; count=$((${foo}-1))
regexp=[a-zA-Z0-9]'\'{$count'\'}:[0-9]'\'{2'\'}

cat > $2_execvmstat.sh << EOF

# fix the vmstat data bug
cat $1 | awk 'BEGIN{buf=""} /[0-9]:[0-9][0-9]:[0-9]/{buf=\$0} /$2/{print \$0,buf}' | column -t | grep -v '$regexp' > $2_good.txt
cat $1 | awk 'BEGIN{buf=""} /[0-9]:[0-9][0-9]:[0-9]/{buf=\$0} /$2/{print \$0,buf}' | column -t | grep '$regexp' > $2_bad.txt
cat $2_good.txt | awk '{print \$19, "$2", \$14, \$15, \$16, \$17, \$18, \$14+\$15+\$17+\$18 }' | column -t >> $2_datapoints.txt
cat $2_bad.txt | awk '{print \$18, "$2", \$13, \$14, \$15, \$16, \$17, \$13+\$14+\$16+\$17}' | column -t >> $2_datapoints.txt

# create files for statistical analysis
sort -k1 $2_datapoints.txt | awk '\$8>0 {print \$8}' | sort -n > $2_graph_totcpu.txt
sort -k1 $2_datapoints.txt | awk '\$3+\$4>0 {print \$3+\$4}' | sort -n > $2_graph_usrsys.txt
sort -k1 $2_datapoints.txt | awk '\$6>0 {print \$6}' | sort -n > $2_graph_wa.txt

# show data points above 70pct total cpu 
sort -k1 $2_datapoints.txt | awk '\$8>70 {print \$0}' > $2_data_totcpu_gt70pct.txt

# show data points above 70pct usr and sys
sort -k1 $2_datapoints.txt | awk '\$3+\$4>70 {print \$0}' > $2_data_usrsys_gt70pct.txt

# show data points above 0pct wait io
sort -k1 $2_datapoints.txt | awk '\$6>0 {print \$0}' > $2_data_wa_gt0pct.txt

# wc on all output files
wc -l $2_graph_totcpu.txt
wc -l $2_graph_usrsys.txt
wc -l $2_graph_wa.txt
wc -l $2_data_totcpu_gt70pct.txt
wc -l $2_data_usrsys_gt70pct.txt
wc -l $2_data_wa_gt0pct.txt

EOF

sh $2_execvmstat.sh
}}}
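Once vmstat.sh has produced the sorted `*_graph_*.txt` files, a quick max / average / percentile summary can be pulled with awk. This is a sketch on made-up data, not part of the original script; nearest-rank is used for the percentile:

```shell
# One numeric data point per line, as in the *_graph_*.txt files.
printf '%s\n' 10 20 30 40 50 60 70 80 90 100 > cpu_demo.txt

# Max, average, and nearest-rank 90th percentile of the sorted values.
sort -n cpu_demo.txt | awk '
  { v[NR] = $1; sum += $1 }
  END {
    p = int(0.9 * NR); if (p < 1) p = 1
    printf "max=%s avg=%.1f p90=%s\n", v[NR], sum / NR, v[p]
  }' > cpu_summary.txt
cat cpu_summary.txt
```

Swap `cpu_demo.txt` for one of the generated graph files to summarize real data points.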

http://stackoverflow.com/questions/3600170/how-to-cat-two-files-after-each-other-but-omit-the-last-first-line-respectively
http://www.google.com.ph/search?q=cat+filter+2+lines+before&hl=tl&prmd=ivns&ei=yR7MTZO4E9Sutweo_8XrBw&start=20&sa=N
http://www.ibm.com/developerworks/aix/library/au-badunixhabits.html
http://www.linuxquestions.org/questions/linux-software-2/cat-output-specific-number-of-lines-130360/
http://stackoverflow.com/questions/4643022/awk-and-cat-how-to-ignore-multiple-lines
http://tldp.org/LDP/abs/html/textproc.html <-- GOOD STUFF
http://www.ibm.com/developerworks/linux/library/l-lpic1-v3-103-2/ <-- GOOD STUFF
http://www.google.com.ph/search?sourceid=chrome&ie=UTF-8&q=grep+2+lines+before
http://www.dbforums.com/unix-shell-scripts/1069858-printing-2-2-line-numbers-including-grep-word-line.html 
http://www.google.com.ph/search?q=how+to+grep+specific+lines&hl=tl&prmd=ivnsfd&ei=nyDMTemaMcOgtgejksT5Bw&start=10&sa=N <-- GOOD STUFF
http://www.computing.net/answers/unix/grep-to-find-a-specific-line-number/6484.html
http://www.unix.com/shell-programming-scripting/67045-grep-specific-line-file.html  <-- grep specific line
http://stackoverflow.com/questions/2914197/how-to-grep-out-specific-line-ranges-of-a-file
http://forums.devshed.com/unix-help-35/displaying-lines-above-grep-ed-line-173427.html
http://vim.1045645.n5.nabble.com/How-to-go-a-file-at-a-specific-line-number-using-the-output-from-grep-td1186768.html
http://www.google.com.ph/search?q=grep+usr+column+vmstat&hl=tl&prmd=ivns&ei=MyjMTdy0OoOutwfjqqDjBw&start=10&sa=N  <-- vmstat cs usr
http://www.tuschy.com/nagios/plugins/check_cpu_usage
http://ytrudeau.wordpress.com/2007/11/20/generating-graphs-from-vmstat-output/  <-- GOOD STUFF
http://sourceforge.net/projects/gnuplot/files/gnuplot/4.4.3/ <-- gnuplot
http://www.google.com.ph/search?q=vmstat+aligning&hl=tl&prmd=ivns&ei=qi7MTauxJ8jj0gH4p4n4Bg&start=20&sa=N <-- vmstat aligning
http://stackoverflow.com/questions/3259776/vmstat-and-column <-- GOOD STUFF column -t
http://www.robelle.com/smugbook/regexpr.html <-- GOOD STUFF searching files on linux
http://unix.ittoolbox.com/groups/technical-functional/ibm-aix-l/vmstat-and-ps-ef-columns-not-aligning-786668 <-- GOOD STUFF cs and usr column issue
http://www.issociate.de/board/post/235976/sed_and_newline_(x0a).html
http://www.commandlinefu.com/commands/view/2942/remove-newlines-from-output  <-- GOOD STUFF remove newline
http://linux.dsplabs.com.au/rmnl-remove-new-line-characters-tr-awk-perl-sed-c-cpp-bash-python-xargs-ghc-ghci-haskell-sam-ssam-p65/ <-- GOOD STUFF remove new line
http://www.tek-tips.com/viewthread.cfm?qid=1211423&page=1
http://www.computing.net/answers/unix/sed-newline/5640.html
## additional links on the script creation
http://3spoken.wordpress.com/2006/12/10/cpu-steal-time-the-new-statistic/   <-- GOOD STUFF "Steal Time" cpu statistic
http://goo.gl/OPukP  <-- put in one line
http://compgroups.net/comp.unix.solaris/Adding-date-time-to-line-in-vmstat  <-- GOOD STUFF vmstat with time output 
             vmstat 2 | while read line; do echo "`date +%T/%m/%d/%y`" "$line" ; done
http://bytes.com/topic/db2/answers/644323-include-date-time-vmstat
http://mishmashmoo.com/blog/?p=65   <-- GOOD STUFF from a performance tester.. reformating vmstat/sar/iostat logs :: loadrunner analysis
http://www.regexbuddy.com/create.html <-- GOOD STUFF regex buddy
http://gskinner.com/RegExr/ <-- regex tool
http://www.fileformat.info/tool/regex.htm <-- regex tool
http://www.txt2re.com/index-perl.php3?s=cx02db01:25&-24&-1 <-- regex tool 
http://www.regular-expressions.info/reference.html <-- GOOD STUFF reference for REGEX!!!
http://stackoverflow.com/questions/304864/how-do-i-use-regular-expressions-in-bash-scripts
http://work.lauralemay.com/samples/perl.html <-- GOOD STUFF chapter on regex
http://ask.metafilter.com/80862/how-split-a-string-in-bash  <-- split a string in bash
http://www.linuxforums.org/forum/programming-scripting/136269-bash-scripting-can-i-split-word-letter.html <-- split word
http://www.google.com.ph/search?sourceid=chrome&ie=UTF-8&q=grep+%5C%3A%5B0-9%5D <-- search grep \:[0-9]
http://www.unix.com/unix-dummies-questions-answers/128913-finding-files-numbers-file-name.html <-- find file numbers name
http://www.cyberciti.biz/faq/grep-regular-expressions/ <-- GOOD STUFF on regex CYBERCITI
http://www.robelle.com/smugbook/regexpr.html <-- REALLY GOOD STUFF.. got the idea of awk a-zA-Z0-9
http://goo.gl/KV47w <-- search bash word count
http://linux.byexamples.com/archives/57/word-count/#comments <-- GOOD STUFF word count
http://www.softpanorama.org/Scripting/Shellorama/arithmetic_expressions.shtml 
http://stackoverflow.com/questions/673016/bash-how-to-do-a-variable-expansion-within-an-arithmetic-expression <-- REALLY GOOD STUFF word count
http://goo.gl/dyYsf   <-- search on grep: Invalid content of \{\} variable
http://www.mail-archive.com/debian-bugs-dist@lists.debian.org/msg53360.html
http://marc.info/?l=logcheck-devel&m=114076370027762
http://us.generation-nt.com/answer/bug-575204-initscripts-grep-complains-about-invalid-back-reference-umountfs-help-196593881.html
http://www.unix.com/unix-dummies-questions-answers/158405-modifying-shell-script-without-using-editor.html <-- REALLY GOOD STUFF create script w/o editor
http://goo.gl/LLaPG <-- search sed + put space
http://www.unix.com/shell-programming-scripting/41417-add-white-space-end-line-sed.html
http://www.unix.com/shell-programming-scripting/150966-help-sed-insert-space-between-string-form-xxxaxxbcx-without-replacing-pattern.html
http://goo.gl/LeDay <-- search bash sort column
http://www.skorks.com/2010/05/sort-files-like-a-master-with-the-linux-sort-command-bash/ <-- REALLY GOOD STUFF BASH SORTING
http://www.linuxquestions.org/questions/linux-newbie-8/sorting-columns-in-bash-664705/ <-- REALLY GOOD STUFF sort -k1















''References:''

Filter records in a file with sed or awk (UNIX)
http://p2p.wrox.com/other-programming-languages/70727-filter-records-file-sed-awk-unix.html  <-- GOOD STUFF

how to get 2 row 2 column
http://studentwebsite.blogspot.com/2010/11/how-to-get-2-row-2-column-using-awk.html  <-- GOOD STUFF

Filtering rows for first two instances of a value
http://www.unix.com/shell-programming-scripting/135452-filtering-rows-first-two-instances-value.html

Deleting specific rows in large files having rows greater than 100000
http://www.unix.com/shell-programming-scripting/125807-deleting-specific-rows-large-files-having-rows-greater-than-100000-a.html

awk notes
http://www.i-justblog.com/2009/07/awk-notes.html

awk to select a column from particular line number
http://www.unix.com/shell-programming-scripting/27255-awk-select-column-particular-line-number.html

select row and element in awk
http://stackoverflow.com/questions/1506521/select-row-and-element-in-awk

Filter and migrate data from row to column
http://www.unix.com/shell-programming-scripting/137404-filter-migrate-data-row-column.html

Extracting particular column name values using sed/ awk/ perl
http://stackoverflow.com/questions/1630710/extracting-particular-column-name-values-using-sed-awk-perl

AWK SELECTION
https://www.prodigyone.com/in/doc/docs.php?view=1&nid=317


AWK one liners
http://www.krazyworks.com/useful-awk-one-liners/   <-- AWK NR==




http://www.idevelopment.info/data/Oracle/DBA_tips/LOBs/LOBS_1.shtml
http://www.idevelopment.info/data/Oracle/DBA_tips/LOBs/LOBS_5.shtml

-- quick example
{{{

mkdir -p /home/oracle/oralobfiles
grant create any directory to hr;


DROP TABLE test_lob CASCADE CONSTRAINTS
/

CREATE TABLE test_lob (
      id           NUMBER(15)
    , clob_field   CLOB
    , blob_field   BLOB
    , bfile_field  BFILE
)
/

CREATE OR REPLACE DIRECTORY
    EXAMPLE_LOB_DIR
    AS
    '/home/oracle/oralobfiles'
/

INSERT INTO test_lob
    VALUES (  1001
            , 'Some data for record 1001'
            , '48656C6C6F' || UTL_RAW.CAST_TO_RAW(' there!') 
            , BFILENAME('EXAMPLE_LOB_DIR', 'file1.txt')
    );

COMMIT;

col clob format a30
col blob format a30
SELECT
      id
    , clob_field "Clob"
    , UTL_RAW.CAST_TO_VARCHAR2(blob_field) "Blob"
FROM test_lob;

  Id Clob                      Blob
---- ------------------------- -------------
1001 Some data for record 1001 Hello there!
}}}
Damn, these kids are smart! 

https://github.com/awreece <- he has a lot of good stuff 
https://github.com/davidgomes



https://blog.memsql.com/bpf-linux-performance/
https://blog.memsql.com/linux-off-cpu-investigation/
http://codearcana.com/posts/2015/12/20/using-off-cpu-flame-graphs-on-linux.html
http://codearcana.com/posts/2013/05/18/achieving-maximum-memory-bandwidth.html


<<showtoc>>

! the backbone.js environment 
https://jsfiddle.net/karlarao/uf3njwe8/


{{{

// ######################################################################
// MODELS
// ######################################################################

// ----------------------------------------------------------------------
// extend

var Vehicle = Backbone.Model.extend({		// extend the backbone model
	prop1: '1'		// default property
});

var v = new Vehicle();		// instantiate new model of Vehicle type
var v2 = new Vehicle();

v.prop1 = 'one';	// assign value

console.log(v);
console.log(v.prop1);
console.log(v2.prop1);

// ----------------------------------------------------------------------
// class properties - 2nd argument as class properties
// 1st argument is where we usually configure our model object

var Vehicle = Backbone.Model.extend({},		// possible to have class properties
	{																				// by providing a 2nd argument to extend
		summary : function () {
			return 'Vehicles are for travelling';
		}
	}
);

Vehicle.summary();	// you can call the function even w/o instantiating a new Vehicle type

// ----------------------------------------------------------------------
// instantiating models
// models are constructor functions, call it with "new"

var model = new Backbone.Model();

// this works
var model = new Backbone.Model({
	name : 'Karl',
	age : 100
});
console.log(model);

// or use custom types
var Vehicle = Backbone.Model.extend({});		// empty model definition
var ford = new Vehicle();

// initialize w/ a function
var Vehicle = Backbone.Model.extend({
	initialize : function () {
		console.log('new car created');
	}
});
var newCar = new Vehicle();

// ----------------------------------------------------------------------
// inheritance
// models can inherit from other models

var Vehicle = Backbone.Model.extend({});	// Vehicle is a model type that extends backbone.model

var Car = Vehicle.extend({});		// Car is a model type that extends vehicle

// example A and B inheritance
var A = Backbone.Model.extend({
	initialize : function () {
		console.log('initialize A');
	},
	asString : function () {
		return JSON.stringify(this.toJSON());
	}
});

var a = new A({		// create the object a
	one : '1',
	two : '2'
});

console.log(a.asString());		// test the asString function
var B = A.extend({});		// create a new type B, will extend A
var b = new B({			// create the object b
	three : '3'
});
console.log(b.asString());	// test state of b

console.log(typeof b);
console.log(b instanceof B);	// true			instanceof will test the type of object
console.log(b instanceof A);	// true
console.log(b instanceof Backbone.Model);	// true
console.log(a instanceof B);	// false

// ----------------------------------------------------------------------
// attributes
// model attributes hold your data
// set, get, escape (html escaped), has

var ford =  new Vehicle();
ford.set('type', 'car');
console.log(ford);

// many properties at once, append maxSpeed and color
ford.set({
	'maxSpeed' : '99',
	'color' : 'blue'
});
console.log(ford);

// get
ford.get('type');
ford.get('color');

// let's try this
var Vehicle = Backbone.Model.extend({
	dump: function () {
		console.log(JSON.stringify(this.toJSON()));
	}
});

var v = new Vehicle({
	type : 'car'
});

v.dump();
v.set('color','blue');
v.set({
	description: "<script>alert('this is injection') </script>",
	weight: 1000
});
v.dump();

$('body').append(v.escape('description'));		// before returning it, it will html encode it

v.has('type'); // true

// ----------------------------------------------------------------------
// model events
// by wrapping attributes to get and set methods backbone can raise events when their state changes
// model "listen" to "changes"

// "on" method bind to "event" and function to execute
ford.on('change', function () {});

// or listen to a change to a property
// "event" bind to "property name" and function to execute
ford.on('change:color', function () {});

// let's try this
var Vehicle = Backbone.Model.extend({			// backbone model w/ one attribute
	color : 'blue'
});

var ford = new Vehicle();		// instantiate

ford.on('change', function () {						// bind an event handler to the "change" event by using on
	console.log('something has changed');		// the callback
});

ford.set('color','red');		// this will "trigger" the event and should say "something has changed"
console.log(ford.get('color'));		// this should be red

ford.on('change:color', function () {			// listen event
	console.log('color has changed');
});
ford.set('color','orange');		// this should output 'something has changed' and 'color has changed'

// custom model events - possible to define "triggers" to events
ford.on('retired', function() {});
// then trigger method to fire the event
ford.trigger('retired');

// let's try this - example 1
var volcano = _.extend({}, Backbone.Events);

volcano.on('disaster:eruption', function () {			// namespace convention separated by :
	console.log('duck and go');
});

volcano.trigger('disaster:eruption');

// let's try this - example 2
var volcano = _.extend({}, Backbone.Events);

volcano.on('disaster:eruption', function (options) {
	console.log('duck and go - ' + options.plan);
});

volcano.trigger('disaster:eruption', {plan : 'run'});

volcano.off('disaster:eruption');		// this will turn off the event handlers
volcano.trigger('disaster:eruption', {plan : 'run'});

// ----------------------------------------------------------------------
// Model Identity

}}}
http://en.wikipedia.org/wiki/Backplane
''passive backplane'' http://www.webopedia.com/TERM/B/backplane.html
http://electronicstechnician.tpub.com/14091/css/14091_36.htm
''back plane vs mother board'' http://www.freelists.org/post/si-list/back-plane-vs-mother-board,1
<<<
Basically a backplane has nothing but connectors, and maybe passive
terminating networks for transmission lines, on it.  The cards that do
the real work plug into the backplane.

A motherboard, such as those used in personal computers (PC's), usually
has a processor, logic, memory, DC-DC converters, etc. along with a
backplane-like section for adapter/daughter cards.  I designed a couple
of motherboards for my previous employer.  Layout can really be a bear,
because you keep finding yourself blocked by these big connectors whose
locations, orientations, and pinouts have been fixed in advance for
mechanical and electrical compatibility reasons.
<<<
RPO vs RTO 
<<<
RPO: Recovery Point Objective

Recovery Point Objective (RPO) describes the interval of time that might pass during a disruption before the quantity of data lost during that period exceeds the Business Continuity Plan’s maximum allowable threshold or “tolerance.”

Example: If the last available good copy of data upon an outage is from 18 hours ago, and the RPO for this business is 20 hours, then we are still within the parameters of the Business Continuity Plan’s RPO. In other words, it answers the question – “Up to what point in time could the Business Process’s recovery proceed tolerably given the volume of data lost during that interval?”

RTO: Recovery Time Objective

The Recovery Time Objective (RTO) is the duration of time and a service level within which a business process must be restored after a disaster in order to avoid unacceptable consequences associated with a break in continuity. In other words, the RTO is the answer to the question: “How much time did it take to recover after notification of business process disruption?“

RPO designates the variable amount of data that will be lost or will have to be re-entered during network downtime. RTO designates the amount of “real time” that can pass before the disruption begins to seriously and unacceptably impede the flow of normal business operations.

There is always a gap between the actuals (RTA/RPA) and objectives introduced by various manual and automated steps to bring the business application up. These actuals can only be exposed by disaster and business disruption rehearsals.
<<<
https://www.druva.com/blog/understanding-rpo-and-rto/
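The 18-hours-versus-20-hours example above reduces to a simple comparison; a minimal sketch with made-up variable names:

```shell
# Worked RPO check: last good copy is 18 hours old, the plan's RPO is
# 20 hours, so the outage is still within the plan's tolerance.
last_copy_age_hours=18
rpo_hours=20

if [ "$last_copy_age_hours" -le "$rpo_hours" ]; then
  status="within RPO"
else
  status="RPO breached"
fi
echo "$status" > rpo_status.txt
cat rpo_status.txt
```

The RTO check is the same shape, but compares elapsed recovery time against the allowed restoration window instead of data age.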













.
Recovery Manager RMAN Documentation Index
 	Doc ID:	Note:286589.1

RMAN Myths Dispelled: Common RMAN Performance Misconceptions
  	Doc ID: 	134214.1



-- RMAN COMPATIBILITY

RMAN Compatibility Oracle8i 8.1.7.4 - Oracle10g 10.1.0.4
  	Doc ID: 	Note:307022.1

RMAN Compatibility Matrix
  	Doc ID: 	Note:73431.1

RMAN Standard and Enterprise Edition Compatibility (Doc ID 730193.1)

Answers To FAQ For Restoring Or Duplicating Between Different Versions And Platforms (Doc ID 369644.1)
<<<
It is possible to use the 10.2 RMAN executable to restore a 9.2 database (same for 11.2 to 11.1 or 11.1 to 10.2, etc) even if the restored datafiles will be stored in ASM. 
<<<



-- SCENARIOS

List of Database Outages
  	Doc ID: 	Note:76449.1

Backup and Recovery Scenarios
 	Doc ID:	Note:94114.1




-- BEST PRACTICES

Top 10 Backup and Recovery best practices.
  	Doc ID: 	Note:388422.1

Oracle 9i Media Recovery Best Practices
  	Doc ID: 	Note:240875.1

Oracle Suggested Strategy & Backup Retention 
  Doc ID:  Note:351455.1 



-- SAMPLE SCRIPTS

RMAN Backup Shell Script Example
  	Doc ID: 	Note:137181.1



-- NOLOGGING

Note 290161.1 The Gains and Pains of Nologging Operations



-- 32bit to 64bit

RMAN Restoring A 32 bit Database to 64 bit - An Example
  	Doc ID: 	Note:467676.1

How I Solved a Problem During a Migration of 32 bit to 64 bit on 10.2.0.2
  	Doc ID: 	452416.1


-- RMAN BUG

Successful backups are not shown in the list backup.Not able to restore them also.
  	Doc ID: 	284002.1




-- 9iR2 stuff
RMAN Restore/Recovery When the Recovery Catalog and Controlfile are Lost in 9i (Doc ID 174623.1)
How To Catalog Backups / Archivelogs / Datafile Copies / Controlfile Copies (Doc ID 470463.1)
Create Standby Database using RMAN changing backuppiece location (Doc ID 753902.1)
Rolling a Standby Forward using an RMAN Incremental Backup in 9i (Doc ID 290817.1)
RMAN : Block-Level Media Recovery - Concept & Example (Doc ID 144911.1)
Persistent Controlfile configurations for RMAN in 9i and 10g. (Doc ID 305565.1)
Using RMAN to Restore and Recover a Database When the Repository and Spfile/Init.ora Files Are Also Lost (Doc ID 372996.1)
How To Restore Controlfile From A Backupset Without A Catalog Or Autobackup (Doc ID 403883.1)
https://docs.google.com/viewer?url=http://www.nyoug.org/Presentations/2005/20050929rman.pdf
http://www.orafaq.com/wiki/Oracle_database_Backup_and_Recovery_FAQ#Can_one_restore_RMAN_backups_without_a_CONTROLFILE_and_RECOVERY_CATALOG.3F


-- RMAN PERFORMANCE

Advise On How To Improve Rman Performance
  	Doc ID: 	Note:579158.1

RMAN Backup Performance 
  Doc ID:  Note:360443.1 

Known RMAN Performance Problems 
  Doc ID:  Note:247611.1 

TROUBLESHOOTING GUIDE: Common Performance Tuning Issues 
  Doc ID:  Note:106285.1 

RMAN Myths Dispelled: Common RMAN Performance Misconceptions 
  Doc ID:  Note:134214.1 




-- FRA, Flash Recovery Area, Fast Recovery Area
Flash Recovery Area - FAQ [ID 833663.1]





-- SHARED DISK ERROR

RAC BACKUP FAILS WITH ORA-00245: CONTROL FILE BACKUP OPERATION FAILED [ID 1268725.1]






-- DUPLICATE CONTROLFILE

Note 345180.1 - How to duplicate a controlfile when ASM is involved



-- RECREATE CONTROLFILE

How to Recover Having Lost Controlfiles and Online Redo Logs
  	Doc ID: 	103176.1

http://www.orafaq.com/wiki/Control_file_recovery

http://www.databasejournal.com/features/oracle/article.php/3738736/Recovering-from-Loss-of-All-Control-Files.htm

Recreating the Controlfile in RAC and OPS
  	Doc ID: 	118931.1

How to Recreate a Controlfile for Locally Managed Tablespaces
  	Doc ID: 	221656.1

How to Recreate a Controlfile
  	Doc ID: 	735106.1

Step By Step Guide On How To Recreate Standby Control File When Datafiles Are On ASM And Using Oracle Managed Files
  	Doc ID: 	734862.1

RECREATE CONTROLFILE, USERS ACCEPT SYS LOSE THEIR SYSDBA/SYSOPER PRIVS
  	Doc ID: 	335971.1

Steps to recreate a Physical Standby Controlfile
  	Doc ID: 	459411.1

http://www.freelists.org/post/oracle-l/Recreate-standby-controlfile-for-DB-that-uses-OMF-and-ASM

Steps to recreate a Physical Standby Controlfile (Doc ID 459411.1)

Steps to perform for Rolling forward a standby database using RMAN incremental backup when primary and standby are in ASM filesystem (Doc ID 836986.1)




-- DISK LOSS

Recover database after disk loss
  	Doc ID: 	Note:230829.1

Disk Lost in External Redundancy FLASH Diskgroup Having Controlfile and Redo Member
  	Doc ID: 	Note:387103.1



-- LOST DATAFILE

Note 1060605.6 Recover A Lost Datafile With No Backup
Note 1029252.6 How to resize a datafile
Note 30910.1   Recreating database objects
Note 1013221.6 Recovering from a lost datafile in a ROLLBACK tablespace
Note 198640.1  How to Recover from a Lost Datafile with Different Scenarios
How to 'DROP' a Datafile from a Tablespace
  	Doc ID: 	111316.1
Common Causes and Solutions on ORA-1157 Error Found in Backup & Recovery
  	Doc ID: 	184327.1



-- REDO LOG 
	
How To Recover Using The Online Redo Log (Doc ID 186137.1)
Loss Of Online Redo Log And ORA-312 And ORA-313 (Doc ID 117481.1)



-- RESETLOGS

Recovering READONLY tablespace backups made before a RESETLOGS Open
  	Doc ID: 	Note:266991.1 	


-- INCARNATION

RMAN RESTORE fails with RMAN-06023 or ORA-19505 or RMAN-06100 inspite of proper backups (Doc ID 457769.1)
RMAN RESTORE FAILS WITH RMAN-06023 BUT THERE ARE BACKUPS AVAILABLE [ID 965122.1]
RMAN-06023 when Duplicating a Database [ID 108883.1]
Rman Restore Fails With 'RMAN-06023: no backup ...of datafile .. to restore' Although Backup is Available [ID 793401.1]
RMAN-06023 DURING RMAN DUPLICATE [ID 414384.1]

ORA-19909 datafile 1 belongs to an orphan incarnation - http://www.the-playground.de/joomla//index.php?option=com_content&task=view&id=216&Itemid=29
Impact of Partial Recovery and subsequent resetlogs on daily Incrementally Updated Backups [ID 455543.1]
Recovery through resetlogs using User Managed Online Backups [ID 431816.1]
How to duplicate a database to a previous Incarnation [ID 293717.1]
RMAN restore of database fails with ORA-01180: Cannot create datafile 1 [ID 392237.1]
How to Recover Through a Resetlogs Command Using RMAN [ID 237232.1]
RMAN: Point-in-Time Recovery of a Backup From Before Last Resetlogs [ID 1070453.6]
How to recover an older incarnation without a controlfile from that time [ID 284510.1]
RMAN-6054 report during recover database [ID 880536.1]
RMAN-06054 While Recovering a Database in NOARCHIVELOG mode [ID 577939.1]
http://oraware.blogspot.com/2008/05/recovery-with-old-controlfilerecover.html
http://hemantoracledba.blogspot.com/2009/09/rman-can-identify-and-catalog-use.html
http://oracle.ittoolbox.com/groups/technical-functional/oracle-db-l/ora01190-controlfile-or-data-file-1-is-from-before-the-last-resetlogs-870241
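
Most of the RMAN-06023 / ORA-19909 / ORA-01190 cases above come down to the controlfile sitting on a different incarnation than the backup. A minimal RMAN sketch, assuming the incarnation key (2 here is hypothetical) was read off the LIST INCARNATION output:
{{{
RMAN> LIST INCARNATION OF DATABASE;     -- note the Inc Key the backup belongs to
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> RESET DATABASE TO INCARNATION 2;  -- hypothetical key from LIST INCARNATION
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
}}}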





-- READ ONLY

RMAN Backup With Skip Read Only Takes More Time 
  Doc ID:  Note:561071.1 




-- RESTORE

How To Restore From An Old Backupset Using RMAN?
  	Doc ID: 	209214.1

RMAN : Consistent Backup, Restore and Recovery using RMAN
  	Doc ID: 	162855.1

RMAN: Restoring an RMAN Backup to Another Node
  	Doc ID: 	Note:73974.1




-- RESTORE HIGHER PATCHSET

Restoring a database to a higher patchset
      Doc ID:     558408.1

Oracle Database Upgrade Path Reference List 
  Doc ID:  Note:730365.1 

Database Server Upgrade/Downgrade Compatibility Matrix 
  Doc ID:  Note:551141.1 



-- CATALOG

RMAN: How to Query the RMAN Recovery Catalog
  	Doc ID: 	98342.1

RMAN Troubleshooting Catalog Performance Issues 
  Doc ID:  Note:748257.1 

How To Catalog Multiple Archivelogs in Unix and Windows
  	Doc ID: 	Note:404515.1
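
For reference, the 10g+ shorthand that covers most of the multi-file cataloging cases in Note 404515.1 (paths below are placeholders):
{{{
RMAN> CATALOG ARCHIVELOG '/arch/arc_123_1.arc';  -- a single file
RMAN> CATALOG START WITH '/arch/';               -- everything under a prefix
}}}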





-- FLASH RECOVERY AREA

Flash Recovery area - Space management Warning & Alerts
  	Doc ID: 	Note:305812.1

ENABLE/DISABLE ARCHIVELOG MODE AND FLASH RECOVERY AREA IN A DATABASE USING ASM
  	Doc ID: 	468984.1

How To Delete Archive Log Files Out Of +Asm?
  	Doc ID: 	300472.1

How do you prevent extra archivelog files from being created in the flash recovery area?
  	Doc ID: 	Note:353106.1
  	
  	
  	
-- ORA-1157

Common Causes and Solutions on ORA-1157 Error Found in Backup & Recovery
 	Doc ID:	Note:184327.1



-- Ora-19660

Restore Validate Database Always Fails Ora-19660
  	Doc ID: 	353614.1

OERR: ORA 19660 some files in the backup set could not be verified
  	Doc ID: 	49356.1

Corrupted Blocks Found During Restore of Backup with RMAN and TIVOLI ORA-19612
  	Doc ID: 	181080.1




-- 8i RMAN

Note 50875.1 Getting Started with Server-Managed Recovery (SMR) and RMAN 8.0-8i 

RMAN 8.0 to 8i - Getting Started
 	Doc ID:	Note:120084.1

How To Show Rman Configuration Parameters on Oracle 8.1.7 ?
 	Doc ID:	Note:725922.1

Maintaining V8.0 and V8.1 RMAN Repository
 	Doc ID:	Note:125303.1

RMAN: How to Recover a Database from a Total Failure Using RMAN 8i
 	Doc ID:	Note:121227.1

How To Use RMAN to Backup Archive Logs
 	Doc ID:	Note:237407.1




-- INCREMENTAL, CUMULATIVE

How To Determine If A RMAN Backup Is Differential Or Cumulative 
  Doc ID:  Note:356349.1 

Does RMAN Oracle10g Db support Incremental Level 2 backups? 
  Doc ID:  Note:733535.1 

Incrementally Updated Backup In 10G 
  Doc ID:  Note:303861.1 

RMAN versus EXPORT Incremental backups 
  Doc ID:  Note:123146.1 

Merged Incremental Strategy creates backups larger than expected 
  Doc ID:  Note:413265.1 

Merged Incremental Backup Strategies
  	Doc ID: 	745798.1



-- RETENTION POLICY

Rman backup retention policy 
  Doc ID:  Note:462978.1 

How to ensure that backup metadata is retained in the controlfile when setting a retention policy and an RMAN catalog is NOT used. 
  Doc ID:  Note:461125.1 

RMAN Delete Obsolete Command Deletes Archivelog Backups Inside Retention Policy 
  Doc ID:  Note:734323.1 
 	
 	
 	
-- OBSOLETE

Delete Obsolete Does Not Delete Obsolete Backups 
  Doc ID:  Note:314217.1 



-- BACKUP OPTIMIZATION

RMAN 9i: Backup Optimization 
  Doc ID:  Note:142962.1 



-- LIST, REPORT

LIST and REPORT Commands in RMAN 
  Doc ID:  Note:114284.1 




-- FORMAT

What are the various % format code used during RMAN backups 
  Doc ID:  Note:553927.1 




-- CONTROL_FILE_RECORD_KEEP_TIME

Setting CONTROL_FILE_RECORD_KEEP_TIME For Incrementally Updated Backups 
  Doc ID:  Note:728471.1 



-- TAPE, MEDIA LIBRARY, SBT_LIBRARY=oracle.disksbt

RMAN Tape Simulation - virtual tape
http://www.appsdba.com/blog/?p=205
http://groups.google.com/group/oracle_dba_experts/browse_thread/thread/6990d83752256e20?pli=1

RMAN and Specific Media Managers Environment Variables. 
  Doc ID:  Note:312737.1 

Does Unused Block Compression Work With Tape?
  	Doc ID: 	565237.1

RMAN 10gR2 Tape vs Disk Backup Performance When Database is 99% Empty
  	Doc ID: 	428344.1	

How to Configure RMAN to Work with Netbackup for Oracle 
  Doc ID:  Note:162355.1 




-- COMPRESSION

A Complete Understanding of RMAN Compression
  	Doc ID: 	563427.1


-- MEMORY CORRUPTION
FAQ Memory Corruption [ID 429380.1]


-- BLOCK CORRUPTIONS

Handling Oracle Block Corruptions in Oracle7/8/8i/9i/10g 
  Doc ID:  Note:28814.1 

CAUSES OF BLOCK CORRUPTIONS
  	Doc ID: 	77589.1

BLOCK CORRUPTIONS ON ORACLE AND UNIX
  	Doc ID: 	77587.1

TECH: Database Block Checking Features
  	Doc ID: 	32969.1

DBMS_REPAIR example
  	Doc ID: 	Note:68013.1

FAQ: Physical Corruption
  	Doc ID: 	Note:403747.1

V$Database_Block_Corruption Does not clear after Block Recover Command 
  Doc ID:  Note:422889.1 

How to Format Corrupted Block Not Part of Any Segment 
  Doc ID:  Note:336133.1 

V$DATABASE_BLOCK_CORRUPTION Shows a File Which Does not Exist 
  Doc ID:  Note:298137.1 

RMAN 9i: Block-Level Media Recovery - Concept & Example
  	Doc ID: 	144911.1

Does Block Recovery use Incremental Backups?? -- BLOCKRECOVER command will ONLY use archivelog backups to complete its recovery
  	Doc ID: 	727706.1

HOW TO PERFORM BLOCK MEDIA RECOVERY (BMR) WHEN BACKUPS ARE NOT TAKEN BY RMAN.
  	Doc ID: 	342972.1

How to Find All the Corrupted Objects in Your Database.
  	Doc ID: 	472231.1

RMAN Does not Report a Corrupt Block if it is not Part of Any Segment
  	Doc ID: 	463821.1

Note 336133.1  -  How to Format Corrupted Block Not Part of Any Segment. 

Note 269028.1 - DBV Reports Corruption Even After Drop/Recreate Object

Note 209691.1 - V$BACKUP_CORRUPTION Contains Information About Corrupt Blocks

How to Check Archivelogs for Corruption using RMAN
  	Doc ID: 	377146.1

Warnings : Recovery is repairing media corrupt block 
  Doc ID:  213311.1 

Is it possible to use RMAN Block Media Recovery to recover LOGICALLY corrupt blocks? <-- NO
  Doc ID:  391120.1 

TECH: Database Block Checking Features	<-- with 11g
  Doc ID:  32969.1 

DBVerify Reports Blocks as 'influx - most likely media corrupt' 
  Doc ID:  468995.1 

Meaning of the message "Block found already corrupt" when running dbverify 
  Doc ID:  139425.1 

How To Check For Corrupt Or Invalid Archived Log Files 
  Doc ID:  177559.1 

CORRUPT BLOCK INFO NOT REPORTED TO ALERT.LOG 
  Doc ID:  114357.1 

Best Practices for Avoiding and Detecting Corruption 
  Doc ID:  428570.1 

-----
Block Corruption FAQ
  	Doc ID: 	47955.1

Physical and Logical Block Corruptions. All you wanted to know about it.
  	Doc ID: 	840978.1

ORA-1578 Main Reference Index for Solutions
  	Doc ID: 	830997.1

How to identify the corrupt Object reported by ORA-1578 / RMAN / DBVERIFY
  	Doc ID: 	819533.1

Frequently Encountered Corruption Errors, Diagnostics and Resolution - Reference
  	Doc ID: 	463479.1

Data Recovery Advisor -Reference Guide.
  	Doc ID: 	466682.1

Extracting Data from a Corrupt Table using ROWID Range Scans in Oracle8 and higher
  	Doc ID: 	61685.1

Some Statements Referencing a Table with WHERE Clause Fails with ORA-01578
  	Doc ID: 	146851.1

Extracting Data from a Corrupt Table using SKIP_CORRUPT_BLOCKS or Event 10231
  	Doc ID: 	33405.1

How to identify all the Corrupted Objects in the Database reported by RMAN
  	Doc ID: 	472231.1

ORA-1578 / ORA-26040 Corrupt blocks by NOLOGGING - Error explanation and solution
  	Doc ID: 	794505.1

ORA-1578 ORA-26040 in a LOB segment - Script to solve the errors
  	Doc ID: 	293515.1

OERR: ORA-1578 "ORACLE data block corrupted (file # %s, block # %s)"
  	Doc ID: 	18976.1

Diagnosing and Resolving 1578 reported on a Local Index of a Partitioned table
  	Doc ID: 	432923.1

HOW TO TROUBLESHOOT AND RESOLVE an ORA-1110
  	Doc ID: 	434013.1

Cannot Reuse a Corrupt Block in Flashback Mode, ORA-1578
  	Doc ID: 	729433.1

ORA-01578, ORA-01122, ORA-01204: On Startup
  	Doc ID: 	1041424.6

ORA-01578 After Recovering Database Running In NOARCHIVELOG Mode
  	Doc ID: 	122266.1

Identify the corruption extension using RMAN/DBV/ANALYZE etc
  	Doc ID: 	836658.1

"hcheck.sql" script to check for known problems in Oracle8i, Oracle9i, Oracle10g and Oracle 11g
  	Doc ID: 	136697.1

Introduction to the "H*" Helper Scripts
  	Doc ID: 	101466.1

"hout.sql" script to install the "hOut" helper package
  	Doc ID: 	101468.1

ASM - Internal Handling of Block Corruptions
  	Doc ID: 	416046.1

Introduction to the Corruption Category
  	Doc ID: 	68117.1

Note 33405.1   Extracting Data from a Corrupt Table using SKIP_CORRUPT_BLOCKS or Event 10231
	Note 34371.1   Extracting Data from a Corrupt Table using ROWID or Index Scans in Oracle7
	Note 61685.1   Extracting Data from a Corrupt Table using ROWID Range Scans in Oracle8/8i 
	Note 1029883.6 Extracting Data from a Corrupt Table using SALVAGE Scripts / Programs 
	Note 97357.1   SALVAGE.PC  - Oracle8i Pro*C Code to Extract Data from a Corrupt Table
	Note 2077307.6 SALVAGE.PC  - Oracle7 Pro*C Code to Extract Data from a Corrupt Table
	Note 2064553.4 SALVAGE.SQL - PL/SQL Code to Extract Data from a Corrupt Table

ORA-1578, ORA-1110, ORA-26040 on Standby Database Using Index Subpartitions
  	Doc ID: 	431435.1

FAQ: Physical Corruption
  	Doc ID: 	403747.1

Note 250968.1 Block Corruption Error Messages in Alert Log File 

How we identified and fixed the workflow tables corruption errors after the database restore
  	Doc ID: 	736033.1

ORA-01578 'ORACLE data block corrupted' When Attempting to Drop a Materialized View
  	Doc ID: 	454955.1

Cloned Olap Database Gets ORA-01578 Nologging
  	Doc ID: 	374036.1 	

ORA-01578: AGAINST A NEW DATAFILE
  	Doc ID: 	1068001.6

Query of Table Using Index Fails With ORA-01578
  	Doc ID: 	153888.1

Extracting Datafile Blocks From ASM
  	Doc ID: 	294727.1

ORA-1578: Oracle Data Block Corrupted (File # 148, Block # 237913)
  	Doc ID: 	103845.1

Data Corruption fixes in Red Hat AS 2.1 e.24 kernel
  	Doc ID: 	241820.1

TECH: Database Block Checking Features
  	Doc ID: 	32969.1

Analyze Table Validate Structure Cascade Online Is Slow
  	Doc ID: 	434857.1

ANALYZE INDEX VALIDATE STRUCTURE ONLINE DOES NOT POPULATE INDEX_STATS
  	Doc ID: 	283974.1

Meaning of the message "Block found already corrupt" when running dbverify
  	Doc ID: 	139425.1

RMAN Does not Report a Corrupt Block if it is not Part of Any Segment
  	Doc ID: 	463821.1

How to Format Corrupted Block Not Part of Any Segment
  	Doc ID: 	336133.1

DBV Reports Corruption Even After Drop/Recreate Object
  	Doc ID: 	269028.1

TFTS: Converting DBA's (Database Addresses) to File # and Block #
  	Doc ID: 	113005.1

Bug 7329252 - ORA-8102/ORA-1499/OERI[kdsgrp1] Index corruption after rebuild index ONLINE
  	Doc ID: 	7329252.8

ORA-600 [qertbfetchbyrowid]
  	Doc ID: 	300637.1

ORA-600 [qertbfetchbyuserrowid]
  	Doc ID: 	809259.1

ORA-600 [kdsgrp1]
  	Doc ID: 	285586.1

ORA-1499. Table/Index row count mismatch
  	Doc ID: 	563070.1





-- BLOCK CORRUPTION PREVENTION

How To Use RMAN To Check For Logical & Physical Database Corruption
  	Doc ID: 	283053.1

How to check for physical and logical database corruption using "backup validate check logical database" command for database on a non-archivelog mode
  	Doc ID: 	466875.1

How To Check (Validate) If RMAN Backup(s) Are Good
  	Doc ID: 	338607.1

SCHEMA VALIDATION UTILITY
  	Doc ID: 	286619.1

11g New Feature V$Database_block_corruption Enhancements and Rman Validate Command
  	Doc ID: 	471716.1

How to Check/Validate That RMAN Backups Are Good
  	Doc ID: 	466221.1

Which Blocks Will RMAN Check For Corruption Or Include In A Backupset?
  	Doc ID: 	561010.1

Best Practices for Avoiding and Detecting Corruption
  	Doc ID: 	428570.1

v$DATABASE_BLOCK_CORRUPTION Reports Corruption Even After Tablespace is Dropped
  	Doc ID: 	454431.1

V$DATABASE_BLOCK_CORRUPTION Shows a File Which Does not Exist
  	Doc ID: 	298137.1

Performing a Test Backup (VALIDATE BACKUP) Using RMAN
  	Doc ID: 	121109.1
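
The checks described in these notes mostly reduce to one scanning command plus one view — a sketch (reads everything, writes nothing):
{{{
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;
-- any physically or logically corrupt blocks found land in:
SQL> SELECT file#, block#, blocks, corruption_type
     FROM v$database_block_corruption;
}}}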






-- DBV

DBVERIFY - Database file Verification Utility (7.3.2 - 10.2) 
  Doc ID:  Note:35512.1 



DBVERIFY enhancement - How to scan an object/segment 
  Doc ID:  Note:139962.1 



Extract rows from a CORRUPT table creating ROWID from DBA_EXTENTS
  	Doc ID: 	Note:422547.1

ORA-8103 Diagnostics and Solution
  	Doc ID: 	Note:268302.1

Init.ora Parameter "DB_BLOCK_CHECKING" Reference Note
  	Doc ID: 	Note:68483.1

ORA-00600 [510] and ORA-1578 Reported with DB_BLOCK_CHECKING Set to True
  	Doc ID: 	Note:456439.1

New Parameter DB_ULTRA_SAFE introduce In 11g
  	Doc ID: 	Note:465130.1

ORA-600s and possible corruptions using the RAC TCPIP Interconnect.
  	Doc ID: 	Note:244940.1

TECH: Database Block Checking Features
  	Doc ID: 	Note:32969.1

[8.1.5] (14) Initialization Parameters
  	Doc ID: 	Note:68895.1

Export/Import DataPump Parameter ACCESS_METHOD - How to Enforce a Method of Loading and Unloading Data ?
  	Doc ID: 	Note:552424.1




-- RMAN ERRORS

RMAN-20020 Error after Registering Database Twice in a Session 
  Doc ID:  Note:102776.1 

Main Index of Common Causes for ORA-19511
  	Doc ID: 	227517.1



-- STUCK RECOVERY

ORA-600 [3020] "Stuck Recovery" 
  Doc ID:  Note:30866.1 


Resolving ORA-600[3020] Raised During Recovery 
  Doc ID:  Note:361172.1 


Resolving ORA-00600 [3020] Against A Data Guard Database. 
  Doc ID:  Note:470220.1 



Stuck recovery of database ORA-00600[3020] 
  Doc ID:  Note:283269.1 


Trial Recovery 
  Doc ID:  Note:283262.1 

Bug 4594917 - Write IO error can cause incorrect file header checkpoint information 
  Doc ID:  Note:4594917.8 


ORA-00313 During RMAN Recovery 
  Doc ID:  Note:437319.1 


RMAN Tablespace Recovery Fails With ORA-00283 RMAN-11003 ORA-01579 
  Doc ID:  Note:419692.1 


RMAN Recovery Until Time Failed When Redo-Logs Missed - ORA-00313, ORA-00312 AND ORA-27037 
  Doc ID:  Note:550077.1 


RMAN-11003 and ORA-01153 When Doing Recovery through RMAN 
  Doc ID:  Note:264113.1 


ORA-600 [kccocx_01] Reported During Primary Database Shutdown 
  Doc ID:  Note:466571.1 


ORA-1122, ORA-1110, ORA-120X 
  Doc ID:  Note:1011557.6 


OERR: ORA 1205 not a datafile - type number in header is  
  Doc ID:  Note:18777.1 

Rman/Nsr Restore Fails. Attempt to Recover Results in ora-01205 
  Doc ID:  Note:260150.1 




-- CLONE / DUPLICATE

Database Cloning Process in case of Shutdown Abort
  	Doc ID: 	428623.1

How to clone/duplicate a database with added datafile with no backup.
  	Doc ID: 	Note:292947.1

Answers To FAQ For Restoring Or Duplicating Between Different Versions And Platforms
  	Doc ID: 	Note:369644.1

RMAN Duplicate Database From RAC ASM To RAC ASM
  	Doc ID: 	Note:461479.1

Subject: 	RMAN-06023 DURING RMAN DUPLICATE
  	Doc ID: 	Note:414384.1 

How To Make A Copy Of An Open Database For Duplication To A Different Machine
  	Doc ID: 	224274.1

How to Make a Copy of a Database on the Same Unix Machine
  	Doc ID: 	18070.1

Duplicate Database Without Connecting To Target And Without Using RMAN
  	Doc ID: 	732625.1

Performing duplicate database with ASM/OMF/RMAN [ID 340848.1]

Article on How to do Rman Duplicate on ASM/RAC/OMF/Single Instance
  	Doc ID: 	840647.1

Creating a physical standby from ASM primary
  	Doc ID: 	787793.1


How To Create A Production (Full or Partial) Duplicate On The Same Host
  	Doc ID: 	388424.1
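
The notes above share one core command. A minimal sketch, assuming the auxiliary instance is already started NOMOUNT with a matching password file and TNS entry (the name dupdb is a placeholder):
{{{
$ rman TARGET sys/***@PROD AUXILIARY /
RMAN> DUPLICATE TARGET DATABASE TO dupdb NOFILENAMECHECK;
}}}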




-- DUPLICATE ERRORS

Rman Duplicate fails with ORA-19870 ORA-19587 ORA-17507
  	Doc ID: 	469132.1

ORA-19870 Control File Not Found When Creating Standby Database With RMAN
  	Doc ID: 	430621.1

ORA-19505 ORA-27037 FAILED TO IDENTIFY FILE
  	Doc ID: 	444610.1

Database Instance Will Not Mount. Ora-19808
  	Doc ID: 	391828.1

RMAN-06136 On Duplicate Database for Standby with OMF and ASM
  	Doc ID: 	341591.1





-- RMAN RAC backup

RMAN: RAC Backup and Recovery using RMAN 
  Doc ID:  Note:243760.1 

HowTo Restore RMAN Disk backups of RAC Database to Single Instance On Another Node
  	Doc ID: 	415579.1




-- MISSING ARCHIVELOG 

NT: Online Backups 
  Doc ID:  41946.1 

Which System Privileges are required for a User to Perform Backup Operator Tasks 
  Doc ID:  180019.1 

Scripts To Perform Dynamic Hot/Online Backups 
  Doc ID:  152111.1 

EVENT: 10231 "skip corrupted blocks on _table_scans_" 
  Doc ID:  21205.1 

RECOVER A DATAFILE WITH MISSING ARCHIVELOGS 
  Doc ID:  418476.1 

How to recover and open the database if the archivelog required for recovery is either missing, lost or corrupted? 	<--- FUJI
  Doc ID:  465478.1 

Incomplete Recover Fails with ORA-01194, ORA-01110 and Warning "Recovering from Fuzzy File". 
  Doc ID:  165671.1 

Fuzzy File Warning When Recovering From Cold Backup 
  Doc ID:  103100.1 

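
When the required archivelog is truly gone, the usual fallback is incomplete recovery up to the last available log followed by OPEN RESETLOGS (all changes past the missing sequence are lost) — a user-managed sketch:
{{{
SQL> RECOVER DATABASE UNTIL CANCEL;
-- apply logs as prompted; type CANCEL when the missing sequence is requested
SQL> ALTER DATABASE OPEN RESETLOGS;
}}}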



-- MISSING

RECREATE MISSING TABLESPACE AND DATAFILE
  	Doc ID: 	Note:2072805.6

DATAFILES ARE MISSING AFTER DATABASE IS OPEN IN RESETLOGS
  	Doc ID: 	Note:420730.1



-- CROSS PLATFORM

Migration of Oracle Instances Across OS Platforms
  	Doc ID: 	Note:733205.1

How To Use RMAN CONVERT DATABASE for Cross Platform Migration
  	Doc ID: 	Note:413586.1

Export/Import DataPump Parameter VERSION - Compatibility of Data Pump Between Different Oracle Versions
  	Doc ID: 	Note:553337.1

10g : Transportable Tablespaces Across Different Platforms
  	Doc ID: 	Note:243304.1



-- TAG

How to use RMAN TAG name with different attributes or variables.
  	Doc ID: 	580283.1

How to use Substitution Variables in RMAN commands
  	Doc ID: 	427229.1


-- DATA GUARD ROLL FORWARD

Rolling a Standby Forward using an RMAN Incremental Backup in 10g
  	Doc ID: 	290814.1
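
The Note 290814.1 procedure in outline — the SCN below is hypothetical; take it from the standby before backing up:
{{{
-- on the standby (mounted):
SQL> SELECT MIN(checkpoint_change#) FROM v$datafile_header;

-- on the primary (1234567 is a placeholder for the SCN above):
RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/tmp/roll_%U';

-- ship the pieces to the standby, then:
RMAN> CATALOG START WITH '/tmp/roll_';
RMAN> RECOVER DATABASE NOREDO;
}}}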


-- BACKUP ON RAW DEVICE

How To Backup Database When Files Are On Raw Devices/File System
  	Doc ID: 	469716.1


-- BACKUP COPY OF DATABASE

RMAN10g: backup copy of database
  	Doc ID: 	266980.1




-- ORACLE SECURE BACKUP

OSB Cloud Module - FAQ (Doc ID 740226.1)
How To Determine The Free Space On A Tape? (Doc ID 415026.1)





-- REDO LOG

How To Multiplex Redo Logs So That One Copy Will Be In FRA?
  	Doc ID: 	833553.1



-- TSPITR

Limitations of RMAN TSPITR
  	Doc ID: 	304305.1

What Checks Oracle Does during Tablespace Point-In-Time Recovery (TSPITR)
  	Doc ID: 	153981.1

Perform Tablespace Point-In-Time Recovery Using Transportable Tablespace
  	Doc ID: 	100698.1

TSPITR:How to check dependency of the objects and identifying objects that will be lost after TSPITR
  	Doc ID: 	304308.1

How to Recover a Drop Tablespace with RMAN
  	Doc ID: 	455865.1

RMAN: Tablespace Point In Time Recovery (TSPITR) Procedure.
  	Doc ID: 	109979.1

Automatic TSPITR in 10G RMAN -A walk Through
  	Doc ID: 	335851.1
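
Automatic TSPITR from the notes above is a single command — tablespace name, target time, and auxiliary destination below are placeholders:
{{{
RMAN> RECOVER TABLESPACE users
        UNTIL TIME "TO_DATE('2010-06-01 12:00:00','YYYY-MM-DD HH24:MI:SS')"
        AUXILIARY DESTINATION '/u01/aux';
}}}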




-- TRANSPORTABLE TABLESPACE

Transportable Tablespaces -- An Example to setup and use
  	Doc ID: 	77523.1



-- RMAN TEMPFILES

Recovery Manager and Tempfiles
  	Doc ID: 	305993.1











In that case, use RMAN to take the backup to the filesystem. Here is an example; note that RMAN does not copy online redo log members, so if a restore is ever needed the database will have to be opened with RESETLOGS:

rman nocatalog target /
shutdown immediate
startup mount
backup as copy database format '/oracle/bkp/Df_%U';
copy current controlfile to '/oracle/bkp/%d_controlfile.ctl';
backup spfile format '/oracle/bkp/%d_spfile.ora';
shutdown immediate;


mkdir /u04/oradata/RAC/backup/
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u04/oradata/RAC/backup/%F';
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u04/oradata/RAC/backup/snapcf_RAC.f';
BACKUP FORMAT '/u04/oradata/RAC/backup/%d_D_%T_%u_s%s_p%p' DATABASE;
-- BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/u04/oradata/RAC/backup/%d_C_%U'; -- if creating standby database
BACKUP CURRENT CONTROLFILE FORMAT '/u04/oradata/RAC/backup/%d_C_%U';
SQL "ALTER SYSTEM ARCHIVE LOG CURRENT";
BACKUP FILESPERSET 10 ARCHIVELOG ALL FORMAT '/u04/oradata/RAC/backup/%d_A_%T_%u_s%s_p%p';
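
Before trusting either backup above, a validation pass is cheap — it reads the backup pieces without restoring anything:
{{{
RMAN> RESTORE DATABASE VALIDATE;
RMAN> RESTORE CONTROLFILE VALIDATE;
RMAN> RESTORE ARCHIVELOG ALL VALIDATE;
}}}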



Alternatively, an existing backupset can be copied to another location using BACKUP BACKUPSET ...
http://www.freelists.org/post/oracle-l/Experiencesthoughts-about-hardware-recommendations    <-- this is the BIG question

http://structureddata.org/2009/12/22/the-core-performance-fundamentals-of-oracle-data-warehousing-balanced-hardware-configuration/

http://dsstos.blogspot.com/2009/09/download-link-for-storage-design-for.html


{{{
The Core Performance Fundamentals Of Oracle Data Warehousing – Balanced Hardware Configuration
http://structureddata.org/?p=716

Balanced Hardware Configuration
http://download.oracle.com/docs/cd/E11882_01/server.112/e10578/tdpdw_system.htm#CFHFJEDD

General Performance and I/O Topics
http://kevinclosson.wordpress.com/kevin-closson-index/general-performance-and-io-topics/

Oracle Real Application Clusters: Sizing and Capacity Planning Then and Now
http://www.oracleracsig.org/pls/apex/Z?p_url=RAC_SIG.download_my_file?p_file=1001042&p_id=1001042&p_cat=documents&p_user=KARAO&p_company=994323795175833

RAC Performance Experts Reveal All  http://www.scribd.com/doc/6850001/RAC-Performance-Experts-Reveal-All

“Storage Design for Datawarehousing”
http://dsstos.blogspot.com/2009/09/download-link-for-storage-design-for.html

Oracle Database Capacity Planning
http://dsstos.blogspot.com/2008/08/oracle-database-capacity-planning.html

Simple Userland tools on Unix to help analyze application impact as a non-root user – Storage Subsystem
http://dsstos.blogspot.com/2008/07/simple-userland-tools-on-unix-to-help.html
}}}
''Docs'' http://wiki.bash-hackers.org/doku.php , ''FAQ'' http://mywiki.wooledge.org/BashFAQ


Sorting data by dates, numbers and much much more
http://prefetch.net/blog/index.php/2010/06/24/sorting-data-by-dates-numbers-and-much-much-more/

{{{
This is crazy useful, and I didn’t realize sort could be used to sort by date. I put this to use today, when I had to sort a slew of data that looked similar to this:
Jun 10 05:17:47 some_data_string
May 20 05:17:48 some_data_string2
Jun 17 05:17:49 some_data_string0
I was able to first sort by the month, and then by the day of the month:
$ awk '{printf "%-3s %-2s %-8s %-50s\n", $1, $2, $3, $4 }' data | sort -k1M -k2n
May 20 05:17:48 some_data_string2
Jun 10 05:17:47 some_data_string
Jun 17 05:17:49 some_data_string0
}}}

http://www.linuxconfig.org/Bash_scripting_Tutorial
http://www.oracle.com/technetwork/articles/servers-storage-dev/kornshell-1523970.html
* jmeter http://jakarta.apache.org/jmeter/
* httperf http://httperf.comlore.com/
* misc stuff http://www.idsia.ch/~andrea/sim/simvis.html
* geekbench http://browse.geekbench.ca/


http://blogs.netapp.com/virtualization/

SPEC - Standard Performance Evaluation Corporation http://www.spec.org/
spec sfs http://queue.acm.org/blogposting.cfm?id=11445

SPEC FAQ http://www.spec.org/spec/faq/

Ideas International - Benchmark Gateway
http://www.ideasinternational.com/benchmark/ben010.aspx

comp.benchmarks FAQ
http://pages.cs.wisc.edu/~thomas/comp.benchmarks.FAQ.html

PDS: The Performance Database Server
http://performance.netlib.org/performance/html/PDStop.html

Iozone Filesystem Benchmark
http://www.iozone.org/

How to measure I/O Performance on Linux (Doc ID 1931009.1)





File Format Benchmark Avro JSON ORC and Parquet
https://www.youtube.com/watch?v=tB28rPTvRiI

Hadoop Tutorial for Beginners - 32 Hive Storage File Formats: Sequence, RC, ORC, Avro, Parquet
https://www.youtube.com/watch?v=UXhyENkYokw

https://www.youtube.com/results?search_query=parquet+vs+orc
''What is big data?'' http://radar.oreilly.com/2012/01/what-is-big-data.html
http://www.slideshare.net/ksankar/the-art-of-big-data


''What is data science?'' http://radar.oreilly.com/2010/06/what-is-data-science.html


nutanix guy https://sites.google.com/site/mohitaron/research



''Big Data Videos'' 
http://www.zdnet.com/big-data-projects-is-the-hardware-infrastructure-overlooked-7000005940/
http://www.livestream.com/fbtechtalks/video?clipId=pla_a3d62538-1238-4202-a3be-e257cd866bb9
<<<
If you're a database guy you'll love this 2-hour video. Facebook engineers discuss performance focus, server provisioning, automatic server rebuilds, backup & recovery, online schema changes, sharding, and HBase and Hadoop. The Q&A part at the end is also interesting: at 1:28:46 Mark Callaghan answers why they chose MySQL over commercial databases that already have the features their engineers are hacking. Good stuff!
<<<




Index Is Not Used If Defined On a CHAR Column That Is TDE Encrypted And WHERE Clause Uses Binds (Doc ID 1470350.1)
{{{
The premises of this issue are as follows:
1. Create an encrypted column of datatype CHAR with encryption NO SALT.
2. Create an index on this encrypted column.
3. Run a query that uses a WHERE clause with bind variables on the encrypted column.
The query then accesses the table using a Full Table Scan access path.
The issue does not reproduce if using another datatype for the encrypted column or if using literals instead of bind variables.
A succinct example is given below:

conn / as sysdba
drop user test cascade;
create user test identified by xxxxx;
grant dba to test;

conn test/xxxxx

create table tbl1
(
col1 char(20) encrypt no salt,
col2 number
);

create index tbl1_col1_ix on tbl1(COL1);

begin
 for i in 1..10000 loop
 insert into tbl1 values('col1'||i,i);
 commit;
 end loop;
end;
/

execute dbms_stats.gather_schema_stats('TEST');

conn test/xxxxx

variable v_col1 char(19);
execute :v_col1:='col110';

--Then generate either the 10046 or 10053 and check the resulting trace:

alter session set events='10053 trace name context forever, level 1';
alter session set events='10046 trace name context forever, level 8';

select t1.*
from tbl1 t1
where t1.col1=:v_col1;

============
Plan Table
============
-------------------------------------+-----------------------------------+
| Id  | Operation          | Name    | Rows  | Bytes | Cost  | Time      |
-------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT   |         |       |       |    25 |           |
| 1   |  TABLE ACCESS FULL | TBL1    |   100 |  2500 |    25 |  00:00:01 |
-------------------------------------+-----------------------------------+
Predicate Information:
----------------------
1 - filter(INTERNAL_FUNCTION("T1"."COL1")=:V_COL1)

select t1.*
from tbl1 t1
where t1.col1='col110';

============
Plan Table
============
---------------------------------------------------+-----------------------------------+
| Id  | Operation                    | Name        | Rows  | Bytes | Cost  | Time      |
---------------------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT             |             |       |       |     1 |           |
| 1   |  TABLE ACCESS BY INDEX ROWID | TBL1        |     1 |   102 |     1 |  00:00:01 |
| 2   |   INDEX RANGE SCAN           | TBL1_COL1_IX|     1 |       |     1 |  00:00:01 |
---------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("T1"."COL1"='COL110')
}}}


<<<
CAUSE 

This issue has been investigated in bug:
Bug 13926287 - INDEXES ON TDE CHAR COLUMNS ARE NOT USED WITH CHAR BIND VARIABLES
The same problem has been investigated in: 17162592,  16197787, 14639274, 9672564.

This issue concerns the use of CHAR binds and columns with the decryptor to encryptor transformation.
The decryptor to encryptor transformation transforms a predicate of the form: decrypt(col) = expr to col = encrypt(expr). This transformation is not done if the literal (expr) size is longer than encrypted column size.
Equally, if a bind variable is used and the bind buffer length is longer than the encrypted length, the transformation will not occur.
If the column and the bind variables are both of type CHAR then these are both blank padded to the full extent of their current size.

With bind variable length >2000 , the above mentioned transformation does not take place, since encryption increases the length of the value. If the bind length is at the largest size and is encrypted then there is nowhere to go, hence this transformation cannot occur.
These restrictions can only be overcome by changing the used datatypes or encryption types or by manually enforcing the literals/bind variable length.
There will be no Oracle software patch addressing these limitations. 
<<<


<<<
SOLUTION

Workarounds
Instead of encrypting a column, place the whole table in an encrypted tablespace. This means that the decryption is not required in the predicate.
Use a VARCHAR2 bind variable – this may not be valid in all cases
Add a substr() function to the bind so that it stays within the maximum allowed limit (even with the added encryption length). Suggestion: SUBSTR(:B1,1,3900)

Enhancement request:
Bug 14236789 - INDEX USAGE ON TDE CHAR COLUMNS
has been raised to address this issue in the future Oracle releases. 
<<<


{{{
-- workaround sketch: cap the CHAR bind length so the decrypt-to-encrypt
-- transformation (and therefore the index) can still be used
select t1.* from tbl1 t1 where t1.col1 = SUBSTR(:B1,1,3900);
}}}



! tanel nonshared 	
https://github.com/PoderC/vconf2021/blob/main/slides/02-Cursor-Sharing.pdf
https://github.com/PoderC/vconf2021/tree/main/scripts/cursor_reuse



! other references
https://jonathanlewis.wordpress.com/2007/01/05/bind-variables/
https://topic.alibabacloud.com/a/a-good-memory-is-better-than-a-rotten-pen-oracle-font-colorredsqlfont-optimization-2_1_46_30060534.html
https://blog.toadworld.com/why-my-execution-plan-has-not-been-shared-part-iii
https://hourim.wordpress.com/?s=bind+variable
https://www.slideshare.net/MohamedHouri






! get value 
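The hex values decoded by the SQL below follow Oracle's internal date/timestamp layout: century+100, year+100, month, day, hour+1, minute+1, second+1, plus (for TIMESTAMP) four big-endian nanosecond bytes. A minimal Python cross-check of the same decoding:

```python
def decode_oracle_timestamp(hexval):
    # Oracle internal encoding: century+100, year+100, month, day,
    # hour+1, minute+1, second+1; TIMESTAMP adds 4 big-endian ns bytes.
    b = bytes.fromhex(hexval)
    year = (b[0] - 100) * 100 + (b[1] - 100)
    nanos = int.from_bytes(b[7:11], "big") if len(b) >= 11 else 0
    return (f"{year:04d}-{b[2]:02d}-{b[3]:02d} "
            f"{b[4]-1:02d}:{b[5]-1:02d}:{b[6]-1:02d}.{nanos:09d}")

print(decode_oracle_timestamp('787B06050E091938EC24C0'))
# → 2023-06-05 13:08:24.955000000
```

Note that the SQL version prints the second example's fraction as .62000000 because TO_CHAR on 0x03B20B80 = 62000000 ns drops the leading zero; the actual fraction is .062000000.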
{{{


SQL> select utl_raw.cast_to_varchar2('424f52524f574552') from dual;

UTL_RAW.CAST_TO_VARCHAR2('424F52524F574552')
--------------------------------------------------------------------------------
BORROWER

SQL> select to_char(utl_raw.cast_to_number('c103')) from dual;

TO_CHAR(UTL_RAW.CAST_TO_NUMBER('C103'))
----------------------------------------
2

select rtrim(
               to_char(100*(to_number(substr(timestamp_value,1,2),'XX')-100)
                      + (to_number(substr(timestamp_value,3,2),'XX')-100),'fm0000')||'-'||
               to_char(to_number(substr(timestamp_value,5,2),'XX'),'fm00')||'-'||
               to_char(to_number(substr(timestamp_value,7,2),'XX'),'fm00')||' '||
               to_char(to_number(substr(timestamp_value,9,2),'XX')-1,'fm00')||':'||
               to_char(to_number(substr(timestamp_value,11,2),'XX')-1,'fm00')||':'||
               to_char(to_number(substr(timestamp_value,13,2),'XX')-1,'fm00')
              ||'.'||to_char(to_number(substr(timestamp_value,15,8),'XXXXXXXX')))
from (select '787B06050E091938EC24C0' timestamp_value from dual);

2023-06-05 13:08:24.955000000


select rtrim(
               to_char(100*(to_number(substr(timestamp_value,1,2),'XX')-100)
                      + (to_number(substr(timestamp_value,3,2),'XX')-100),'fm0000')||'-'||
               to_char(to_number(substr(timestamp_value,5,2),'XX'),'fm00')||'-'||
               to_char(to_number(substr(timestamp_value,7,2),'XX'),'fm00')||' '||
               to_char(to_number(substr(timestamp_value,9,2),'XX')-1,'fm00')||':'||
               to_char(to_number(substr(timestamp_value,11,2),'XX')-1,'fm00')||':'||
               to_char(to_number(substr(timestamp_value,13,2),'XX')-1,'fm00')
              ||'.'||to_char(to_number(substr(timestamp_value,15,8),'XXXXXXXX')))
from (select '787b060d140a0503b20b80' timestamp_value from dual);

2023-06-13 19:09:04.62000000


select utl_raw.cast_to_varchar2('4650') from dual;

select to_char(utl_raw.cast_to_number('C40209394A')) from dual;

select rtrim(
               to_char(100*(to_number(substr(timestamp_value,1,2),'XX')-100)
                      + (to_number(substr(timestamp_value,3,2),'XX')-100),'fm0000')||'-'||
               to_char(to_number(substr(timestamp_value,5,2),'XX'),'fm00')||'-'||
               to_char(to_number(substr(timestamp_value,7,2),'XX'),'fm00')||' '||
               to_char(to_number(substr(timestamp_value,9,2),'XX')-1,'fm00')||':'||
               to_char(to_number(substr(timestamp_value,11,2),'XX')-1,'fm00')||':'||
               to_char(to_number(substr(timestamp_value,13,2),'XX')-1,'fm00')
              ||'.'||to_char(to_number(substr(timestamp_value,15,8),'XXXXXXXX')))
from (select '787b060d140a0503b20b80' timestamp_value from dual);
}}}
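The same decoding can be sketched outside the database. Below is a minimal Python translation of the SQL above; the raw-hex layouts are as decoded by those queries (character bytes for VARCHAR2, exponent+193 and base-100 digits+1 for a positive NUMBER, excess-100 century/year and hour/min/sec+1 plus a 4-byte nanosecond field for TIMESTAMP). The function names are my own.

```python
# Decode Oracle internal raw hex values without utl_raw, mirroring the SQL above.
# Layouts are inferred from the queries in this section -- a sketch, not an official API.

def decode_varchar2(hexstr: str) -> str:
    """VARCHAR2 dumps are just the character bytes."""
    return bytes.fromhex(hexstr).decode("ascii")

def decode_number(hexstr: str) -> int:
    """Positive NUMBER: first byte is exponent+193, then base-100 digits, each stored +1."""
    b = bytes.fromhex(hexstr)
    exponent = b[0] - 193
    value = 0
    for i, digit in enumerate(b[1:]):
        value += (digit - 1) * 100 ** (exponent - i)
    return value

def decode_timestamp(hexstr: str) -> str:
    """TIMESTAMP: century+100, year+100, month, day, hour+1, min+1, sec+1, 4-byte ns."""
    b = bytes.fromhex(hexstr)
    year = (b[0] - 100) * 100 + (b[1] - 100)
    frac = int.from_bytes(b[7:11], "big")
    return (f"{year:04d}-{b[2]:02d}-{b[3]:02d} "
            f"{b[4]-1:02d}:{b[5]-1:02d}:{b[6]-1:02d}.{frac:09d}")

print(decode_varchar2("424f52524f574552"))         # BORROWER
print(decode_number("c103"))                       # 2
print(decode_timestamp("787B06050E091938EC24C0"))  # 2023-06-05 13:08:24.955000000
```

The expected values above are the same ones the SQL queries in this section return.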



! other 
http://kerryosborne.oracle-guy.com/2009/03/bind-variable-peeking-drives-me-nuts/
http://www.pythian.com/news/867/stabilize-oracle-10gs-bind-peeking-behaviour-by-cutting-histograms/

https://oracle.readthedocs.io/en/latest/plsql/bind/bind-peeking.html
http://psoug.org/reference/bindvars.html
http://surachartopun.com/2008/12/todateoctmon-ora-01843-not-valid-month.html

http://www-03.ibm.com/systems/bladecenter/resources/benchmarks/whitepapers/
http://husnusensoy.wordpress.com/2008/07/28/readonly-tablespace-vs-block-change-tracking-file/

Data Loss on BCT
http://sai-oracle.blogspot.com/2010/09/beware-of-data-loss-in-bct-based-rman.html
<<<
"Reliability of BCT:
On 11.2.0.1 standby, I've seen managed standby recovery failing to start until BCT is reset at least while running the above tests. It doesn't seem like matured enough to be used on the physical standby. I'm working with Oracle support to get all these issues fixed.
As of 11.2.0.1, I don't recommend using BCT on the standby for running RMAN backups. I think it is pretty safe to use it on the primary database."
<<<

ORACLE 10G BLOCK CHANGE TRACKING INSIDE OUT (Doc ID 1528510.1)

{{{
You can enable change tracking with the following statement:

  SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

Alternatively, you can specify location of block change tracking file:

  SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/DB1/bct.ora';
  -- the USING FILE clause also accepts an ASM disk group, e.g. USING FILE '+MYDG';

To disable:

  SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

View V$BLOCK_CHANGE_TRACKING can be queried to find out the status of change tracking in
the database.
}}}

http://dsstos.blogspot.com/2009/07/map-disk-block-devices-on-linux-host.html
{{{

2014/12/04: <a href="http://karlarao.wordpress.com/2014/12/04/my-timetaskgoalhabit-ttgh-management/">my Time/Task/Goal/Habit (TTGH) management</a>
2013/09/22: <a href="http://karlarao.wordpress.com/2013/09/22/oow-and-oaktable-world-2013/">OOW and OakTable World 2013</a>
2013/05/23: <a href="http://karlarao.wordpress.com/2013/05/23/speaking-at-e4-2013-and-some-exadata-patents-good-stuff/">Speaking at E4 2013! … and some Exadata Patents good stuff</a>
2013/02/05: <a href="http://karlarao.wordpress.com/2013/02/05/rmoug-ioug-collaborate-kscope-and-e4-2013/">RMOUG, IOUG Collaborate, KSCOPE, and E4 2013</a>
2012/10/16: <a href="http://karlarao.wordpress.com/2012/10/16/oracle-big-data-appliance-first-boot/">Oracle Big Data Appliance First Boot</a>
2012/09/27: <a href="http://karlarao.wordpress.com/2012/09/27/oaktable-world-2012/">OakTable World 2012</a>
2012/06/29: <a href="http://karlarao.wordpress.com/2012/06/29/speaking-at-e4/">Speaking at E4!</a>
2012/06/29: <a href="http://karlarao.wordpress.com/2012/06/29/the-effect-of-asm-redundancyparity-on-readwrite-iops-slob-test-case-for-exadata-and-non-exa-environments/">The effect of ASM redundancy/parity on read/write IOPS – SLOB test case! for Exadata and non-Exa environments</a>
2012/05/14: <a href="http://karlarao.wordpress.com/2012/05/14/iosaturationtoolkit-v2-with-iorm-and-awesome-text-graph">IOsaturationtoolkit-v2 with IORM and AWESOME text graph</a>
2012/03/24: <a href="http://karlarao.wordpress.com/2012/03/24/fast-analytics-of-awr-top-events/">Fast Analytics of AWR Top Events</a>
2012/02/13: <a href="http://karlarao.wordpress.com/2012/02/13/rmoug-2012-training-days/">RMOUG 2012 training days</a>
2012/02/11: <a href="http://karlarao.wordpress.com/2012/02/11/sqltxplain-quick-tips-and-tricks-and-db-optimizer-vst/">SQLTXPLAIN quick tips and tricks and DB Optimizer VST</a>
2011/12/31: <a href="http://karlarao.wordpress.com/2011/12/31/easy-and-fast-environment-framework/">Easy and fast environment framework</a>
2011/12/06: <a href="http://karlarao.wordpress.com/2011/12/06/mining-emgc-notification-alerts">Mining EMGC Notification Alerts</a>
2011/09/21: <a href="http://karlarao.wordpress.com/2011/09/21/oracle-database-appliance-oda-installation-configuration/">Oracle Database Appliance (ODA) Installation / Configuration</a>
2011/07/18: <a href="http://karlarao.wordpress.com/2011/07/18/virtathon-mining-the-awr/">VirtaThon – Mining the AWR</a>
2011/07/14: <a href="http://karlarao.wordpress.com/2011/07/14/enkitec-university-exadata-courses-for-developers-and-dbas/">Enkitec University – Exadata Courses for Developers and DBAs</a>
2011/05/17: <a href="http://karlarao.wordpress.com/2011/05/17/nocoug-journal-ask-the-oracle-aces-why-is-my-database-slow/">NoCOUG Journal – Ask the Oracle ACEs – Why is my database slow?</a>
2011/03/23: <a href="http://karlarao.wordpress.com/2011/03/23/oracle-by-example-portal-now-shows-12g/">Oracle by Example portal now shows 12g</a>
2011/03/11: <a href="http://karlarao.wordpress.com/2011/03/11/hotsos-2011-mining-the-awr-repository-for-capacity-planning-visualization-and-other-real-world-stuff">Hotsos 2011 – Mining the AWR Repository for Capacity Planning, Visualization, and other Real World Stuff</a>
2011/01/30: <a href="http://karlarao.wordpress.com/2011/01/30/migrating-your-vms-from-vmware-to-virtualbox-on-a-netbook">Migrating your VMs from VMware to VirtualBox (on a Netbook)</a>
2010/12/21: <a href="http://karlarao.wordpress.com/2010/12/21/wheeew-i-am-now-a-redhat-certified-engineer">Wheeew, I am now a RedHat Certified Engineer!</a>
2010/11/07: <a href="http://karlarao.wordpress.com/2010/11/07/ill-be-speaking-at-hotsos-2011">I’ll be speaking at HOTSOS 2011!</a>
2010/10/07: <a href="http://karlarao.wordpress.com/2010/10/07/after-oow-my-laptop-broke-down-data-rescue-scenario">After OOW, my laptop broke down – data rescue scenario</a>
2010/09/24: <a href="http://karlarao.wordpress.com/2010/09/24/oracle-closed-world-and-unconference-presentations">Oracle Closed World and Unconference Presentations</a>
2010/09/20: <a href="http://karlarao.wordpress.com/2010/09/20/oow-2010-the-highlights">OOW 2010 - the highlights</a>
2010/09/12: <a href="http://karlarao.wordpress.com/2010/09/12/oow-2010-my-schedule">OOW 2010 - my schedule</a>
2010/08/31: <a href="http://karlarao.wordpress.com/2010/08/31/statistically-summarize-oracle-performance-data">Statistically summarize Oracle Performance data</a>
2010/07/27: <a href="http://karlarao.wordpress.com/2010/07/27/guesstimations">Guesstimations</a>
2010/07/25: <a href="http://karlarao.wordpress.com/2010/07/25/graphing-the-aas-with-perfsheet-a-la-enterprise-manager">Graphing the AAS with Perfsheet a la Enterprise Manager</a>
2010/07/05: <a href="http://karlarao.wordpress.com/2010/07/05/oracle-datafile-io-latency-part-1">Oracle datafile IO latency - Part 1</a>
2010/06/28: <a href="http://karlarao.wordpress.com/2010/06/28/the-not-a-problem-problem-and-other-related-stuff">The “Not a Problem” Problem and other related stuff</a>
2010/06/18: <a href="http://karlarao.wordpress.com/2010/06/18/oracle-mix-oow-2010-suggest-a-session">Oracle Mix - OOW 2010 Suggest-A-Session</a>
2010/05/30: <a href="http://karlarao.wordpress.com/2010/05/30/seeing-exadata-in-action">Seeing Exadata in action</a>
2010/04/10: <a href="http://karlarao.wordpress.com/2010/04/10/my-personal-wiki-karlarao-tiddlyspot-com">My Personal Wiki - karlarao.tiddlyspot.com</a>
2010/03/27: <a href="http://karlarao.wordpress.com/2010/03/27/ideas-build-off-ideas-making-use-of-social-networking-sites">“Ideas build off ideas”… making use of Social Networking sites</a>
2010/02/04: <a href="http://karlarao.wordpress.com/2010/02/04/devcon-luzon-2010">DEVCON Luzon 2010</a>
2010/02/01: <a href="http://karlarao.wordpress.com/2010/02/01/craig-shallahamer-is-now-blogging">Craig Shallahamer is now blogging!</a>
2010/01/31: <a href="http://karlarao.wordpress.com/2010/01/31/workload-characterization-using-dba_hist-tables-and-ksar">Workload characterization using DBA_HIST tables and kSar</a>
2009/12/31: <a href="http://karlarao.wordpress.com/2009/12/31/50-sql-performance-optimization-scenarios/">50+ SQL Performance Optimization scenarios</a>
2009/11/21: <a href="http://karlarao.wordpress.com/2009/11/21/rac-system-load-testing-and-test-plan/">RAC system load testing and test plan</a>
2009/11/03: <a href="http://karlarao.wordpress.com/2009/11/03/rhev-red-hat-enterprise-virtualization-is-out/">RHEV (Red Hat Enterprise Virtualization) is out!!!</a>
2009/08/15: <a href="http://karlarao.wordpress.com/2009/08/15/knowing-the-trend-of-deadlock-occurrences-from-the-alert-log">Knowing the trend of Deadlock occurrences from the Alert Log</a>
2009/07/30: <a href="http://karlarao.wordpress.com/2009/07/30/lucky-to-find-it">Lucky to find it..</a>
2009/06/07: <a href="http://karlarao.wordpress.com/2009/06/07/diagnosing-and-resolving-gc-block-lost">Diagnosing and Resolving “gc block lost”</a>
2009/05/08: <a href="http://karlarao.wordpress.com/2009/05/08/yast-on-oel">Yast on OEL</a>
2009/05/08: <a href="http://karlarao.wordpress.com/2009/05/08/understanding-the-scn">Understanding the SCN</a>
2009/04/20: <a href="http://karlarao.wordpress.com/2009/04/20/advanced-oracle-troubleshooting-by-tanel-poder-in-singapore">Advanced Oracle Troubleshooting by Tanel Poder in Singapore</a>
2009/04/06: <a href="http://karlarao.wordpress.com/2009/04/06/os-thread-startup">OS Thread Startup</a>
2009/04/04: <a href="http://karlarao.wordpress.com/2009/04/04/single-instance-and-rac-kernel-os-upgrade">Single Instance and RAC Kernel/OS upgrade</a>
2009/02/27: <a href="http://karlarao.wordpress.com/2009/02/27/security-forecasting-oracle-performance-and-some-stuff-to-post-soon">Security, Forecasting Oracle Performance and Some stuff to post… soon…</a>
2009/01/03: <a href="http://karlarao.wordpress.com/2009/01/03/migrate-from-windows-xp-64bit-to-ubuntu-intrepid-ibex-810-64bit">Migrate from Windows XP 64bit to Ubuntu Intrepid Ibex 8.10 64bit</a>
2008/11/07: <a href="http://karlarao.wordpress.com/2008/11/07/oraclevalidatedinstallationonoel45">Oracle-Validated RPM on OEL 4.5</a>

<span style="color:white;"> </span>
<span style="color:white;"> </span>
<h1>By Category</h1>

<h3><span style="text-decoration:underline;">Performance/Troubleshooting</span></h3>
<ul>
<li>Capacity Planning
    <ul>
    <li><a href="http://karlarao.wordpress.com/2010/01/31/workload-characterization-using-dba_hist-tables-and-ksar">Workload characterization using DBA_HIST tables and kSar</a></li>
	<li><a href="http://karlarao.wordpress.com/2010/08/31/statistically-summarize-oracle-performance-data">Statistically summarize Oracle Performance data</a></li>
    </ul>
</li>
<li>Database Tuning
    <ul>
    <li><a href="http://karlarao.wordpress.com/2010/07/05/oracle-datafile-io-latency-part-1">Oracle datafile IO latency - Part 1</a></li>
    </ul>
</li>
<li>Hardware and Operating System
    <ul>
	<li>Exadata
	       <ul>
	               <li><a href="http://karlarao.wordpress.com/2010/05/30/seeing-exadata-in-action">Seeing Exadata in action</a></li>
                   <li><a href="http://karlarao.wordpress.com/2012/05/14/iosaturationtoolkit-v2-with-iorm-and-awesome-text-graph">IOsaturationtoolkit-v2 with IORM and AWESOME text graph</a></li> 
                   <li><a href="http://karlarao.wordpress.com/2012/06/29/the-effect-of-asm-redundancyparity-on-readwrite-iops-slob-test-case-for-exadata-and-non-exa-environments/">The effect of ASM redundancy/parity on read/write IOPS – SLOB test case! for Exadata and non-Exa environments</a></li>
	       </ul>
	</li>
    <li>Oracle Database Appliance
	       <ul>
	               <li><a href="http://karlarao.wordpress.com/2011/09/21/oracle-database-appliance-oda-installation-configuration/">Oracle Database Appliance (ODA) Installation / Configuration</a></li>
	       </ul>
	</li>
    <li>Oracle Big Data Appliance
	       <ul>
	               <li><a href="http://karlarao.wordpress.com/2012/10/16/oracle-big-data-appliance-first-boot/">Oracle Big Data Appliance First Boot</a></li>
	       </ul>
	</li>
	<li>VirtualBox
	       <ul>
	               <li><a href="http://karlarao.wordpress.com/2011/01/30/migrating-your-vms-from-vmware-to-virtualbox-on-a-netbook">Migrating your VMs from VMware to VirtualBox (on a Netbook)</a></li>
	       </ul>
	</li>
    </ul>
</li>
<li>SQL Tuning
    <ul>
    <li><a href="http://karlarao.wordpress.com/2009/12/31/50-sql-performance-optimization-scenarios/">50+ SQL Performance Optimization scenarios</a></li>
    <li><a href="http://karlarao.wordpress.com/2012/02/11/sqltxplain-quick-tips-and-tricks-and-db-optimizer-vst/">SQLTXPLAIN quick tips and tricks and DB Optimizer VST</a> </li>
    </ul>
</li>
<li>Troubleshooting & Internals
    <ul>
	<li>Wait Events
	       <ul>
	               <li><a href="http://karlarao.wordpress.com/2009/04/06/os-thread-startup">OS Thread Startup</a></li>
	               <li><a href="http://karlarao.wordpress.com/2009/06/07/diagnosing-and-resolving-gc-block-lost">Diagnosing and Resolving “gc block lost”</a></li>
	       </ul>
	</li>
	<li>Deadlock
	       <ul>
	               <li><a href="http://karlarao.wordpress.com/2009/08/15/knowing-the-trend-of-deadlock-occurrences-from-the-alert-log">Knowing the trend of Deadlock occurrences from the Alert Log</a></li>
	       </ul>
	</li>
	<li>Systematic Approach and Method
	       <ul>
	               <li><a href="http://karlarao.wordpress.com/2010/06/28/the-not-a-problem-problem-and-other-related-stuff">The “Not a Problem” Problem and other related stuff</a></li>
	               <li><a href="http://karlarao.wordpress.com/2010/07/25/graphing-the-aas-with-perfsheet-a-la-enterprise-manager">Graphing the AAS with Perfsheet a la Enterprise Manager</a></li>
	               <li><a href="http://karlarao.wordpress.com/2010/07/27/guesstimations">Guesstimations</a></li>
                   <li><a href="http://karlarao.wordpress.com/2011/05/17/nocoug-journal-ask-the-oracle-aces-why-is-my-database-slow/">NoCOUG Journal – Ask the Oracle ACEs – Why is my database slow?</a></li>
                   <li><a href="http://karlarao.wordpress.com/2012/03/24/fast-analytics-of-awr-top-events/">Fast Analytics of AWR Top Events</a></li>
	       </ul>
	</li>
    </ul>
</li>
</ul>

<h3><span style="text-decoration:underline;">RAC</span></h3>
<ul>
<li>Upgrade
    <ul>
    <li><a href="http://karlarao.wordpress.com/2009/04/04/single-instance-and-rac-kernel-os-upgrade">Single Instance and RAC Kernel/OS upgrade</a></li>
    </ul>
</li>
<li>Benchmark and Testing
    <ul>
    <li><a href="http://karlarao.wordpress.com/2009/11/21/rac-system-load-testing-and-test-plan/">RAC system load testing and test plan</a></li>
    </ul>
</li>
<li>Performance
    <ul>
    <li><a href="http://karlarao.wordpress.com/2009/06/07/diagnosing-and-resolving-gc-block-lost">Diagnosing and Resolving “gc block lost”</a></li>
    </ul>
</li>
</ul>

<h3><span style="text-decoration:underline;">Enterprise Manager</span></h3>
<ul>
<li>EM troubleshooting
    <ul>
    <li><a href="http://karlarao.wordpress.com/2011/12/06/mining-emgc-notification-alerts">Mining EMGC Notification Alerts</a></li>
    </ul>
</li>
</ul>

<h3><span style="text-decoration:underline;">Linux</span></h3>
<ul>
<li>RedHat
    <ul>
    <li>RHEV
<ul>
<li><a href="http://karlarao.wordpress.com/2009/11/03/rhev-red-hat-enterprise-virtualization-is-out/">RHEV (Red Hat Enterprise Virtualization) is out!!!</a></li>
</ul>
    </li>
<li>RHCE
<ul>
<li><a href="http://karlarao.wordpress.com/2010/12/21/wheeew-i-am-now-a-redhat-certified-engineer">Wheeew, I am now a RedHat Certified Engineer!</a></li>
</ul>
    </li>
    </ul>
</li>
<li>OEL
    <ul>
    <li><a href="http://karlarao.wordpress.com/2008/11/07/oraclevalidatedinstallationonoel45">Oracle-Validated RPM on OEL 4.5</a></li>
    <li><a href="http://karlarao.wordpress.com/2009/05/08/yast-on-oel">Yast on OEL</a></li>
    </ul>
</li>
<li>Ubuntu
    <ul>
    <li><a href="http://karlarao.wordpress.com/2009/01/03/migrate-from-windows-xp-64bit-to-ubuntu-intrepid-ibex-810-64bit">Migrate from Windows XP 64bit to Ubuntu Intrepid Ibex 8.10 64bit</a></li>
    </ul>
</li>
<li>Fedora
    <ul>
    <li><a href="http://karlarao.wordpress.com/2010/10/07/after-oow-my-laptop-broke-down-data-rescue-scenario">After OOW, my laptop broke down – data rescue scenario</a></li>
    </ul>
</li>
</ul>

<h3><span style="text-decoration:underline;">Reviews</span></h3>
<ul>
<li><a href="http://karlarao.wordpress.com/2009/02/27/security-forecasting-oracle-performance-and-some-stuff-to-post-soon">Security, Forecasting Oracle Performance and Some stuff to post… soon…</a></li>
<li><a href="http://karlarao.wordpress.com/2009/04/20/advanced-oracle-troubleshooting-by-tanel-poder-in-singapore">Advanced Oracle Troubleshooting by Tanel Poder in Singapore</a></li>
<li><a href="http://karlarao.wordpress.com/2009/07/30/lucky-to-find-it">Lucky to find it..</a></li>
<li><a href="http://karlarao.wordpress.com/2010/02/01/craig-shallahamer-is-now-blogging">Craig Shallahamer is now blogging!</a></li>
</ul>

<h3><span style="text-decoration:underline;">Backup and Recovery</span></h3>
<ul>
<li><a href="http://karlarao.wordpress.com/2009/05/08/understanding-the-scn">Understanding the SCN</a></li>
</ul>

<h3><span style="text-decoration:underline;">Community</span></h3>
<ul>
<li><a href="http://karlarao.wordpress.com/2010/02/04/devcon-luzon-2010">DEVCON Luzon 2010</a></li>
<li><a href="http://karlarao.wordpress.com/2010/03/27/ideas-build-off-ideas-making-use-of-social-networking-sites">“Ideas build off ideas”… making use of Social Networking sites</a></li>
<li><a href="http://karlarao.wordpress.com/2010/04/10/my-personal-wiki-karlarao-tiddlyspot-com">My Personal Wiki - karlarao.tiddlyspot.com</a></li>
<li><a href="http://karlarao.wordpress.com/2010/06/18/oracle-mix-oow-2010-suggest-a-session">Oracle Mix - OOW 2010 Suggest-A-Session</a></li>
<li><a href="http://karlarao.wordpress.com/2010/09/12/oow-2010-my-schedule">OOW 2010 - my schedule</a></li>
<li><a href="http://karlarao.wordpress.com/2010/09/20/oow-2010-the-highlights">OOW 2010 - the highlights</a></li>
<li><a href="http://karlarao.wordpress.com/2010/09/24/oracle-closed-world-and-unconference-presentations">Oracle Closed World and Unconference Presentations</a></li>
<li><a href="http://karlarao.wordpress.com/2010/11/07/ill-be-speaking-at-hotsos-2011">I’ll be speaking at HOTSOS 2011!</a></li>
<li><a href="http://karlarao.wordpress.com/2011/03/11/hotsos-2011-mining-the-awr-repository-for-capacity-planning-visualization-and-other-real-world-stuff">Hotsos 2011 – Mining the AWR Repository for Capacity Planning, Visualization, and other Real World Stuff</a></li>
<li><a href="http://karlarao.wordpress.com/2011/03/23/oracle-by-example-portal-now-shows-12g/">Oracle by Example portal now shows 12g</a></li>
<li><a href="http://karlarao.wordpress.com/2011/07/14/enkitec-university-exadata-courses-for-developers-and-dbas/">Enkitec University – Exadata Courses for Developers and DBAs</a></li>
<li><a href="http://karlarao.wordpress.com/2011/07/18/virtathon-mining-the-awr/">VirtaThon – Mining the AWR</a></li>
<li><a href="http://karlarao.wordpress.com/2011/12/31/easy-and-fast-environment-framework/">Easy and fast environment framework</a></li> 
<li><a href="http://karlarao.wordpress.com/2012/02/13/rmoug-2012-training-days/">RMOUG 2012 training days</a></li>
<li><a href="http://karlarao.wordpress.com/2012/06/29/speaking-at-e4/">Speaking at E4!</a></li>
<li><a href="http://karlarao.wordpress.com/2012/09/27/oaktable-world-2012/">OakTable World 2012</a></li>
<li><a href="http://karlarao.wordpress.com/2013/02/05/rmoug-ioug-collaborate-kscope-and-e4-2013/">RMOUG, IOUG Collaborate, KSCOPE, and E4 2013</a></li>
<li><a href="http://karlarao.wordpress.com/2013/05/23/speaking-at-e4-2013-and-some-exadata-patents-good-stuff/">Speaking at E4 2013! … and some Exadata Patents good stuff</a></li>
<li><a href="http://karlarao.wordpress.com/2013/09/22/oow-and-oaktable-world-2013/">OOW and OakTable World 2013</a></li>
<li><a href="http://karlarao.wordpress.com/2014/12/04/my-timetaskgoalhabit-ttgh-management/">my Time/Task/Goal/Habit (TTGH) management</a></li>
</ul>
<span style="color:white;"> </span>
<span style="color:white;"> </span>
<span style="color:white;"> </span>
<span style="color:white;"> </span>
<span style="color:white;"> </span>

}}}
http://thecomingstorm.us/smf/index.php?topic=323.0
http://golanzakai.blogspot.com/2012/01/openvswitch-with-virtualbox.html
http://www.evernote.com/shard/s48/sh/f5866bf1-97c9-46b1-8830-205d7fa4cde6/ba217e6d8c137d6e30917e1bf375a519
http://www.brendangregg.com/dtrace.html

http://www.brendangregg.com/DTrace/dtrace_oneliners.txt
the paper
http://queue.acm.org/detail.cfm?id=2413037

here's the video of the USE method
http://dtrace.org/blogs/brendan/2012/09/21/fisl13-the-use-method/
Rappler's Mood Navigator
https://www.evernote.com/shard/s48/sh/2a8e6b17-e499-49cb-a1b7-2944be0eb88e/967899d7670f3e470380b8206bb184c5
POWERLINK - buffer io error
http://knowledgebase.emc.com/emcice/documentDisplay.do;jsessionid=E5086F44F54525E1C3E2930AD5ABB7D9?docType=1006&clusterName=DefaultCluster&resultType=5002&groupId=1&page=&docProp=$solution_id&docPropValue=emc187631&passedTitle=null
http://knowledgebase.emc.com/emcice/documentDisplay.do?docType=1006&clusterName=DefaultCluster&resultType=5002&groupId=1&page=&docProp=$solution_id&docPropValue=emc199974&passedTitle=null
http://knowledgebase.emc.com/emcice/documentDisplay.do?docType=1006&clusterName=DefaultCluster&resultType=5002&groupId=1&page=&docProp=$solution_id&docPropValue=emc157139&passedTitle=null
http://knowledgebase.emc.com/emcice/documentDisplay.do?docType=1006&clusterName=DefaultCluster&resultType=5002&groupId=1&page=&docProp=$solution_id&docPropValue=emc203991&passedTitle=null

{{{
"Linux host devices log I/O errors during server reboot"
ID:	emc157139

URL:
http://knowledgebase.emc.com/emcice/documentDisplay.do?docType=1006&clusterName=DefaultCluster&resultType=5002&groupId=1&page=&docProp=$solution_id&docPropValue=emc157139&passedTitle=null

Knowledgebase Solution	 

Environment:	OS: Red Hat Linux
Environment:	Product: CLARiiON CX-series
Environment:	Product: CLARiiON CX3-series
Environment:	EMC SW: PowerPath
Problem:	Linux host devices log I/O errors during server reboot.
Problem:	Dmesg log or messages log have:
Buffer I/O error on device sdm, logical block 2
Buffer I/O error on device sdm, logical block 3
Buffer I/O error on device sdm, logical block 4
Buffer I/O error on device sdm, logical block 5
Device sdm not ready.
end_request: I/O error, dev sdm, sector 16
Buffer I/O error on device sdm, logical block 2
Device sdm not ready.
end_request: I/O error, dev sdm, sector 128

Problem:	Output of powermt display dev=all shows:
Pseudo name=emcpowera
CLARiiON ID=CK200063301081 [SG2]
Logical device ID=600601604EE419004A308F0C5AD0DB11 [LUN 61]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
### HW Path                 I/O Paths    Interf.   Mode    State  Q-IOs Errors
==============================================================================
   0 qla2xxx                   sdm       SP B0     active  alive      0      0

Change:	Server rebooted for maintenance
Root Cause:	The devices logging the I/O error are assigned to SP-B, but currently "owned" by SP-A.  The CLARiiON array is an active-passive array so this is normal behavior when multiple paths are utilized.
Fix:	These messages logged at boot up may be ignored.
}}}
-- HASH GROUP BY

_GBY_HASH_AGGREGATION_ENABLED=FALSE
_UNNEST_SUBQUERY = FALSE

Per MetaLink3, even in patch set 10.2.0.4 PeopleSoft works around the bug by using the hidden parameters above:

Wrong Results Possible on 10.2 When New "HASH GROUP BY" Feature is Used
  	Doc ID: 	Note:387958.1

Bug 4604970 - Wrong results with 'hash group by' aggregation enabled
  	Doc ID: 	Note:4604970.8

ORA-00600 [32695] [hash aggregation can't be done] During Insert.
  	Doc ID: 	Note:729447.1

Bug 6471770 - OERI [32695] [hash aggregation can't be done] from Hash GROUP BY
  	Doc ID: 	Note:6471770.8

10.2.0.3 Patch Set - List of Bug Fixes by Problem Type [ID 391116.1]





-- running out of OS kernel I/O resources

WARNING:1 Oracle process running out of OS kernelI/O resources
  	Doc ID: 	748607.1

Bug 6687381 - "WARNING: Oracle process running out of OS kernel I/O resources" messages
  	Doc ID: 	6687381.8
http://www.devx.com/dbzone/10MinuteSolution/22191/1954
http://www.dba-oracle.com/t_delete_performance_speed.htm

-- speedup delete
http://dbaforums.org/oracle/index.php?showtopic=534
https://forums.oracle.com/forums/thread.jspa?threadID=987536
http://www.mail-archive.com/oracle-l@fatcity.com/msg15356.html

CAP_SYS_ADMIN: the new root https://lwn.net/Articles/486306/
https://stackoverflow.com/questions/51911368/what-restriction-is-perf-event-paranoid-1-actually-putting-on-x86-perf
https://www.stigviewer.com/stig/red_hat_enterprise_linux_8/2023-12-01/finding/V-230270
https://unix.stackexchange.com/questions/454708/how-do-you-add-cap-sys-admin-permissions-to-user-in-centos-7
https://stackoverflow.com/questions/26504457/how-to-use-cap-sys-admin
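
As background for the perf_event_paranoid links above, the sysctl levels roughly mean the following for unprivileged users (summarized from the perf_event_open(2) man page; the helper name is my own):

```python
import pathlib

# Rough meaning of /proc/sys/kernel/perf_event_paranoid for unprivileged users,
# summarized from perf_event_open(2); helper name is my own.
def perf_paranoid_meaning(level: int) -> str:
    if level >= 2:
        return "user-space measurements only"
    if level == 1:
        return "kernel and user measurements"
    if level == 0:
        return "CPU events and kernel profiling, but no raw tracepoints"
    return "no restrictions (raw tracepoints allowed)"

p = pathlib.Path("/proc/sys/kernel/perf_event_paranoid")
if p.exists():  # Linux only
    level = int(p.read_text())
    print(level, "->", perf_paranoid_meaning(level))
```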


! more context behind 
https://man7.org/linux/man-pages/man7/capabilities.7.html
https://forums.grsecurity.net/viewtopic.php?f=7&t=2522
* Oracle Utilities Customer Care and Billing (CC&B) - utilities companies like DLC

* Oracle Utilities (CC&B, MDM, ...) - Develop, Deploy and Debug with Eclipse and the SDK
** https://www.youtube.com/watch?v=fuPzFCBEEWg


! batch 
Oracle Utilities Customer Care And Billing Batch Operations and Configuration Guide https://docs.oracle.com/cd/E18733_01/pdf/E18372_01.pdf
Cloudera Data Platform — the industry’s first enterprise data cloud
https://www.cloudera.com/campaign/try-cdp-public-cloud.html
! before - 6K seconds 
{{{
SELECT COUNT (DISTINCT AIR.STORE_ID) STR_COUNT_STYLE
FROM
    ALGO_INPUT_FOR_REVIEW AIR,
    ALLOC_BATCH_LI_DETAILS LI_DETAILS
WHERE
    COALESCE (AIR.ALLOCATED_UNIT_QTY,
              AIR.SUGGESTED_ALLOCATED_QTY) > 0 AND
    AIR.BATCH_ID = LI_DETAILS.BATCH_ID AND
    AIR.BATCH_LINE_NO = LI_DETAILS.BATCH_LINE_NO AND
    AIR.ITEM_ID = LI_DETAILS.ITEM_ID AND
    AIR.IMMINENT_RELEASE = 'Y' AND
    AIR.BATCH_ID = 1426 AND
    ALLOCATION_SEQ_NO = (SELECT MAX (AIR2.ALLOCATION_SEQ_NO)
                         FROM ALGO_INPUT_FOR_REVIEW AIR2
                         WHERE AIR2.BATCH_ID = AIR.BATCH_ID)
}}}

! after - below 1 sec 
{{{
WITH max_seq AS
    (SELECT /*+ MATERIALIZE */
            batch_id,
            MAX(allocation_seq_no) max_allocation_seq_no
       FROM algo_input_for_review
      WHERE batch_id = :b_batch_id
   GROUP BY batch_id),
algo_slice AS
    (SELECT /*+ MATERIALIZE */
           aifr.*
      FROM algo_input_for_review aifr
     WHERE (aifr.batch_id, aifr.allocation_seq_no) IN (SELECT batch_id, max_allocation_seq_no FROM max_seq)
       AND aifr.imminent_release = 'Y'
     )
SELECT COUNT (DISTINCT air.store_id) str_count_style
  FROM algo_slice air,
       alloc_batch_li_details li_details
WHERE COALESCE (air.allocated_unit_qty, air.suggested_allocated_qty) > 0
   AND air.batch_id = li_details.batch_id
   AND air.batch_line_no = li_details.batch_line_no
   AND air.item_id = li_details.item_id;
}}}
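The core of the rewrite - evaluate MAX(allocation_seq_no) once per batch instead of re-running the correlated subquery for every candidate row - can be shown in miniature. The data and column subset below are made up for illustration:

```python
# The tuning idea in miniature: precompute the per-batch MAX once, then filter,
# instead of recomputing the correlated MAX for every candidate row.
# Data and columns are invented for illustration.

rows = [  # (batch_id, allocation_seq_no, store_id, qty)
    (1426, 1, "S1", 5), (1426, 2, "S1", 3),
    (1426, 2, "S2", 0), (1426, 2, "S3", 7),
]

# Shape of the original query: correlated max per row -> O(n^2) scans.
naive = {r[2] for r in rows
         if r[3] > 0
         and r[1] == max(x[1] for x in rows if x[0] == r[0])}

# Shape of the rewrite: one pass to get the max, one pass to filter -> O(n).
max_seq = {}
for b, seq, _, _ in rows:
    max_seq[b] = max(max_seq.get(b, seq), seq)
fast = {r[2] for r in rows if r[3] > 0 and r[1] == max_seq[r[0]]}

print(sorted(naive), sorted(fast))  # same stores either way
```

The MATERIALIZE hints in the SQL above play the role of the precomputed `max_seq` dictionary: the max is computed once and reused.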
Lawrence To - COE List of database outages
https://docs.google.com/fileview?id=0B5H46jS7ZPdJNGUxNmNiYWQtZGYxZC00OWFhLWEzMmMtYThlYTlhNjQzNjU3&hl=en

Lawrence To - COE Outage Prevention, Detection, And Repair
https://docs.google.com/fileview?id=0B5H46jS7ZPdJNjIzMDNlZjQtYjgyZi00M2M4LWE4OTUtNDFkMDUwYzQ2MjA4&hl=en
http://hackingexpose.blogspot.com/2012/05/oracle-wont-patch-four-year-old-zero.html
http://www.freelists.org/post/oracle-l/Oracle-Security-Alert-for-CVE20121675-10g-extended-support,3
https://blogs.oracle.com/security/entry/security_alert_for_cve_2012
http://asanga-pradeep.blogspot.com/2012/05/using-class-of-secure-transport-cost-to.html
http://seclists.org/fulldisclosure/2012/Apr/343
http://seclists.org/fulldisclosure/2012/Apr/204

Using Class of Secure Transport (COST) to Restrict Instance Registration in Oracle RAC (Doc ID 1340831.1)
2s8c16t - 2 sockets, 8 cores, 16 threads
1s4c8t - 1 socket, 4 cores, 8 threads
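
The shorthand above can be parsed mechanically; the notation is as defined in the two lines above, and the helper is my own sketch:

```python
import re

# Parse the "2s8c16t" shorthand used above: sockets, total cores, total threads.
def parse_topology(s: str) -> dict:
    m = re.fullmatch(r"(\d+)s(\d+)c(\d+)t", s)
    sockets, cores, threads = map(int, m.groups())
    return {"sockets": sockets, "cores": cores, "threads": threads,
            "cores_per_socket": cores // sockets,
            "smt": threads // cores}

print(parse_topology("2s8c16t"))  # 4 cores/socket, SMT 2
print(parse_topology("1s4c8t"))  # 4 cores/socket, SMT 2
```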

-- list of Xeon CPUs
http://en.wikipedia.org/wiki/List_of_Intel_Xeon_microprocessors

CPU CPI 
Intel Xeon Phi Coprocessor High Performance Programming http://goo.gl/Ycri3u
Optimization and Performance Tuning for Intel® Xeon Phi™ Coprocessors, Part 2: Understanding and Using Hardware Events http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-2-understanding
Simultaneous Multi-Threading - CPI http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.genprogc/doc/genprogc/smt.htm
HOWTO processor numbers http://www.intel.com/content/www/us/en/processors/processor-numbers.html
E7 family http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-e7-family-performance-model-numbers-paper.pdf
SKUs bin example http://www.anandtech.com/show/5475/intel-releases-seven-sandy-bridge-cpus

''Intel price list''
Intel® Xeon® Processor 5500 Series price list http://ark.intel.com/products/series/39565/Intel-Xeon-Processor-5500-Series

''AMD price list''
http://www.amd.com/us/products/pricing/Pages/server-opteron.aspx


{{{
Exadata updating to Xeon E5 (Fall 2012)... (X3-2, X3-2L, X2-4)
QPI replaces the FSB
memory is still on a parallel bus, which doesn't mix well with the serial bus

Intel's two-level-memory Nehalem is the death of the FSB
}}}
http://en.wikipedia.org/wiki/Uncore
<<<
The uncore is a term used by Intel to describe the functions of a microprocessor that are not in the Core, but which are essential for Core performance.[1] The Core contains the components of the processor involved in executing instructions, including the ALU, FPU, L1 and L2 cache. Uncore functions include QPI controllers, L3 cache, snoop agent pipeline, on-die memory controller, and Thunderbolt controller.[2] Other bus controllers such as PCI Express and SPI are part of the chipset.[3]
The Intel Uncore design stems from its origin as the Northbridge. The design of the Intel Uncore reorganizes the functions critical to the Core, making them physically closer to the Core on-die, thereby reducing their access latency. Functions from the Northbridge which are less essential for the Core, such as PCI Express or the Power Control Unit (PCU), are not integrated into the Uncore -- they remain as part of the Chipset.[4]
''Specifically, the micro-architecture of the Intel Uncore is broken down into a number of modular units. The main Uncore interface to the Core is the Cache Box (CBox), which interfaces with the Last Level Cache (LLC) and is responsible for managing cache coherency. Multiple internal and external QPI links are managed by Physical Layer units, referred to as PBox. Connections between the PBox, CBox, and one or more iMC's (MBox) are managed by System Config Controller (UBox) and a Router (RBox). [5]''
Removal of serial bus controllers from the Intel Uncore further enables increased performance by allowing the Uncore clock (UCLK) to run at a base of 2.66 GHz, with upwards overclocking limits in excess of 3.44 GHz.[6] This increased clock rate allows the Core to access critical functions (such as the iMC) with significantly less latency (typically reducing Core access to DRAM by 10ns or more).
<<<

''CPU Core'' - contains the components of the processor involved in executing instructions
* ALU
* FPU
* L1 and L2 cache

''CPU Uncore''
* QPI controllers
* L3 cache
* snoop agent pipeline
* on-die memory controller
* Thunderbolt

''Chipset'' http://en.wikipedia.org/wiki/Chipset
* PCI Express 
* SPI


http://www.evernote.com/shard/s48/sh/3ca3db4e-6cc9-4139-9548-716d22a9ec32/ab43be72457b9ff412efd509f58ca1e6
The Xeon E5520: Popular for VMWare http://h30507.www3.hp.com/t5/Eye-on-Blades-Blog-Trends-in/The-Xeon-E5520-Popular-for-VMWare/ba-p/79934#.Uvmtn0JdWig
https://software.intel.com/en-us/videos/what-is-persistent-memory-persistent-memory-programming-series
http://www.pcgamer.com/rumor-intel-may-release-3d-xpoint-system-memory-in-2018/
https://itpeernetwork.intel.com/new-breakthrough-persistent-memory-first-public-demo/


<<<
this could come as a new switch/parameter on new Exadata versions to enable the hardware feature, just like what they did before with HW flash compression on the flash devices. Or it could come enabled by default.
But for sure this is a big boost in performance; look at that microsecond difference in latency.
<<<

<<<
it’s my belief that the new memory cache in Exadata is actually a development based on 3D XPoint, i.e. memory that can be written to and persists.
<<<

<<<
3D XPoint (cross point) memory, which will be sold under the name Optane
<<<

<<<
Looks like the application/DB vendor should make code changes to take advantage of the new memory layer. Without code changes in the DB software's kernel to use the persistent memory structures (like what was done in SQL Server 2016 for very fast transaction/redo log writes), it looks like we cannot easily add this super-fast layer.
 
So, I think the question is the adoption of this new persistent memory layer within the software.
 
Some examples I see are
 
https://channel9.msdn.com/Shows/Data-Exposed/SQL-Server-2016-and-Windows-Server-2016-SCM--FAST ( check details from 8:00 Min)
 
https://software.intel.com/en-us/videos/a-c-example-persistent-memory-programming-series
 
<<<
{{{
top - 12:14:35 up 10 days, 10:42, 24 users,  load average: 20.15, 19.97, 19.14
Tasks: 351 total,   1 running, 350 sleeping,   0 stopped,   0 zombie
Cpu(s):  2.3%us, 27.7%sy,  1.7%ni, 40.7%id, 27.1%wa,  0.2%hi,  0.2%si,  0.0%st
Mem:  16344352k total,  6098504k used, 10245848k free,     1912k buffers
Swap: 20021240k total,   988764k used, 19032476k free,    83860k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
12442 root      15   0 1135m 4704 3068 S 36.1  0.0 162:59.73 /usr/lib/virtualbox/VirtualBox --comment x_x3 --startvm 94756484-d2d5-4bdb
12413 root      15   0 1196m 5004 3196 S 30.3  0.0  67:55.23 /usr/lib/virtualbox/VirtualBox --comment x_x2 --startvm cc54fb4c-170b-430a
12384 root      15   0 1195m 7660 3248 S 26.1  0.0 162:04.52 /usr/lib/virtualbox/VirtualBox --comment x_x1 --startvm e266cad2-403f-4d98
 3972 root      15   0 60376 4588 1524 S  7.7  0.0 583:04.20 /usr/bin/ssh -x -oForwardAgent no -oPermitLocalCommand no -oClearAllForwardings
 1053 root      15   0 1526m 5496 3048 S  4.9  0.0 386:27.97 /usr/lib/virtualbox/VirtualBox --comment windows7 --startvm 3da776bd-1d5e-4eec-
 3971 root      18   0 54300 1020  848 D  1.6  0.0  49:15.64 scp -rpv 20111015-backup 192.168.0.100 /DataVolume/shares/Public/Backup
12226 root      15   0  251m 9876 2268 S  0.6  0.1 686:00.02 /usr/lib/nspluginwrapper/npviewer.bin --plugin /usr/lib/mozilla/plugins/libflas
12786 root      15   0 1476m 4624 3072 S  0.6  0.0   6:38.29 /usr/lib/virtualbox/VirtualBox --comment x_db1 --startvm 1c3b929d-bdbd-40da-8
12947 root      15   0 1478m 4360 2904 S  0.5  0.0   7:36.47 /usr/lib/virtualbox/VirtualBox --comment x_db2 --startvm f3e1060d-28f5-4a72-8
 4620 root      15   0 76600 5500 1252 S  0.2  0.0  78:13.20 Xvnc :1 -desktop desktopserver.localdomain:1 (root) -httpd /usr/share/vnc/class
 4729 root      18   0  543m  13m 3668 D  0.2  0.1   9:41.13 nautilus --no-default-window --sm-client-id default3
 5808 root      15   0  348m 2576 1324 S  0.2  0.0  39:54.52 /usr/lib/virtualbox/VBoxSVC --auto-shutdown
 5754 oracle    16   0  260m 1220  988 S  0.1  0.0   0:25.12 gnome-terminal
 5800 root      15   0  112m  856  756 S  0.1  0.0  18:51.71 /usr/lib/virtualbox/VBoxXPCOMIPCD
 5899 root      15   0  303m 6264 1860 S  0.1  0.0  15:42.25 gnome-terminal
13941 root      15   0 12892 1216  768 R  0.1  0.0   0:05.06 top -c
29960 root      16   0  109m 6716 1176 S  0.1  0.0  82:39.70 /usr/bin/perl -w /usr/bin/collectl --all -o T -o D
30089 root      15   0  109m 4336 1072 S  0.1  0.0  33:21.60 /usr/bin/perl -w /usr/bin/collectl -sD --verbose -o T -o D
    1 root      15   0 10368   88   60 S  0.0  0.0   0:04.05 init [5]
    2 root      RT  -5     0    0    0 S  0.0  0.0   0:00.02 [migration/0]
    3 root      34  19     0    0    0 S  0.0  0.0  11:11.08 [ksoftirqd/0]
    4 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 [watchdog/0]
    5 root      RT  -5     0    0    0 S  0.0  0.0   0:00.35 [migration/1]
    6 root      34  19     0    0    0 S  0.0  0.0   0:00.62 [ksoftirqd/1]
    7 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 [watchdog/1]
    8 root      RT  -5     0    0    0 S  0.0  0.0   0:01.26 [migration/2]
    9 root      34  19     0    0    0 S  0.0  0.0   0:00.72 [ksoftirqd/2]
   10 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 [watchdog/2]
   11 root      RT  -5     0    0    0 S  0.0  0.0   0:04.89 [migration/3]
   12 root      34  19     0    0    0 S  0.0  0.0   0:00.63 [ksoftirqd/3]
   13 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 [watchdog/3]
   14 root      RT  -5     0    0    0 S  0.0  0.0   0:02.66 [migration/4]
   15 root      34  19     0    0    0 S  0.0  0.0   0:41.68 [ksoftirqd/4]
   16 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 [watchdog/4]
   17 root      RT  -5     0    0    0 S  0.0  0.0   0:01.27 [migration/5]
   18 root      34  19     0    0    0 S  0.0  0.0   0:12.36 [ksoftirqd/5]
   19 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 [watchdog/5]

root@192.168.0.101's password:
Last login: Fri Oct 21 10:43:53 2011 from desktopserver.localdomain
[root@desktopserver ~]# vmstat 1 100000 | while read line; do echo "`date +%T`" "$line" ; done
12:13:36 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
12:13:36 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
12:13:36 4  3 988016 10329664   1916  70672    4    1   502   322    5    4  2  4 91  2  0
12:13:36 4  3 988016 10316972   1964  79892 4592    0 15488     4 4689 19386  4 28 47 22  0
12:13:37 5  3 987760 10309344   2000  82368 5500    0 16196    52 4555 16869  4 28 48 20  0
12:13:38 3  3 987636 10314420   1776  62736 3456    0 57144     8 2967 13880  3 28 51 17  0
12:13:44 3  5 987636 10424500   1416  18448 5032    0 35004  3060 5930 24333  4 32 42 22  0
12:13:47 2 27 987820 10446888   1508  17316 4632 14944 14096 16088 6657 13410  3 21 31 46  0
12:13:47 0 34 988132 10471332   1540  14196 3460 10840 13916 10892 4853 9797  2 15 47 37  0
12:13:47 2 31 988132 10458716   1616  21952 8076 1768 17948  1768 3684 9470  2  6 62 30  0
12:13:47 3 27 988372 10466144   1588  15752 6540 2916 10832  2968 2154 6603  1 21 57 21  0
12:13:47 1 27 988372 10466100   1588  16508 9128   16 18148    20 1970 6116  2 23 56 18  0
12:13:47 4 25 988368 10440804   1636  28680 13864    0 27728    32 3783 10942  2 24 53 21  0
12:13:47 3 19 988368 10410040   1784  44260 14644    0 33052     0 5570 18623  3 20 45 32  0
12:13:48 4  6 988328 10381236   1848  56596 16680    0 29676     0 4648 23356  4 23 39 34  0
12:13:49 3  6 988328 10356104   1872  67584 14004    0 27300    40 4966 23598  4 30 42 24  0
12:13:50 4  5 987876 10332532   1928  77784 13932    0 25780     0 4908 19443  4 28 47 21  0
12:13:51 5  6 987876 10336848   1760  68392 9140    0 82780     0 3112 13723  3 29 45 23  0
12:13:52 5  5 987876 10356808   1752  47148 5464    0 18388  1440 4893 17567  5 29 48 19  0
12:13:53 4  4 987820 10353080   1880  45368 6124    0 18932     0 4608 17808  4 30 48 18  0
12:13:54 6  2 987696 10344684   1900  56192 4452    0 15092    44 4694 22556  4 29 48 18  0
12:14:01 3 23 988248 10454908   1472  14144 5692 15532 26980 23396 10868 28611  4 22 33 41  0
12:14:02 2 36 989052 10478864   1552  11596 6976 14452 15120 14924 4457 9650  2 25 41 32  0
12:14:02 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
12:14:02 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
12:14:02 2 34 989076 10468140   1536  16912 16380  608 42292   696 7441 19332  3 19 48 30  0
12:14:02 3 33 989076 10449392   1620  23468 13628    0 21540     0 2065 5908  1 17 49 32  0
12:14:02 1 22 989072 10427880   1736  28984 16260    0 24148    40 2129 7560  1 11 44 45  0
12:14:02 4 19 989072 10409308   1764  34976 13268    0 20720     0 2642 11492  2 16 54 28  0
12:14:02 4 10 989072 10316360   1884  54384 13684    0 35704     0 4805 22982  4 27 43 26  0
12:14:03 3  5 989072 10362300   1776  57220 12468    0 84288    16 4556 16173  4 30 44 23  0
12:14:04 3  6 988756 10359512   1824  43856 17520    0 34060   340 5176 23298  4 26 46 24  0
12:14:05 4  7 988748 10345436   1832  55780 8660    0 21276     0 4631 19355  4 28 38 30  0
12:14:06 4  5 988700 10331388   1844  62976 12920    0 25012     0 5087 19602  4 29 49 18  0
12:14:07 5  3 988520 10325812   1900  69584 4964    0 16312     4 4738 17187  4 32 45 19  0
12:14:08 3  4 988520 10368964   1792  38452 5004    0 17408  2020 5246 18092  5 29 50 16  0
12:14:16 1 37 988864 10450680   1448  14028 3684 13940 18928 17804 8262 20373  3 24 29 44  0
12:14:16 0 37 989068 10476748   1500  10940 7156 14072 13328 14112 4870 6545  0  4 56 39  0
12:14:16 0 49 989080 10404760   1556  23028 17332 3652 41824  3652 4791 9224  1  2 64 33  0
12:14:16 1 37 989076 10409020   1680  24412 12204    0 32676    16 2168 5833  1 14 49 35  0
12:14:16 2 24 989076 10381452   1784  42528 14900    0 35860     0 2040 6524  1 18 43 38  0
12:14:16 2 20 989060 10370832   1788  76880 14364    0 59880    24 3810 13617  3 11 54 33  0
12:14:16 3  8 989060 10358380   1848  78412 14528    0 32416     0 4995 28265  3 21 41 34  0
12:14:17 5  5 989040 10358372   1916  72364 6144    0 18348     0 4715 16760  4 29 39 27  0
12:14:18 4  7 989004 10354776   1980  61908 13700    0 26612   204 4820 16973  4 29 43 24  0
12:14:19 3  4 988660 10345956   1992  60504 12496    0 27176   108 5166 21841  4 32 46 18  0



on /vbox... not blocked
------------------------------------------------------------------------------------------------------------------------------

[root@desktopserver ~]# vmstat 1 100000 | while read line; do echo "`date +%T`" "$line" ; done
12:28:04 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
12:28:04 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
12:28:04 5  5 967424 10157364   1728 125884    4    1   505   322    6    1  2  4 91  2  0
12:28:05 5  5 967424 10113688   1816 167580   36    0 39196    20 11806 58927 12 48 25 15  0
12:28:06 6  3 967424 10101692   1764 178888  104    0 33756    92 10547 55163 11 44 26 19  0
12:28:07 6  2 967424 9993612   1896 284384   12    0 120744     0 8168 36067  8 38 33 21  0
12:28:08 4  3 967424 9989976   1836 289252   36    0 28224  2084 9152 46527  9 41 29 20  0
12:28:09 4  3 967424 10186920   1592  94984   20    0 21336 96740 8117 44754  8 39 38 16  0
12:28:10 3  5 967424 10080256   1572 140400  236    0 48252 24100 11487 57187 12 43 28 17  0
12:28:11 6  3 967424 9993084   1772 285304   28    0 108504    68 7980 34287  8 37 40 15  0
12:28:12 6  2 967424 9986940   1776 290368   64    0 38116     0 11581 56771 11 44 31 13  0
12:28:13 5  3 967424 9895152   1828 342476   48    0 60348     0 10379 45136 10 42 30 19  0
12:28:15 5  4 967424 9838808   1952 436660  284    0 94468     0 8581 41748  9 37 38 16  0
12:28:15 6  5 967424 10095720   1840 181592   80    0 34988 124936 11126 53636 11 44 25 20  0
12:28:16 4  2 967424 10077976   1612 165864   48    0 60840    32 9095 46720  9 40 30 22  0
12:28:17 4  3 967424 10053744   1720 226140   84    0 69404     0 11419 54791 11 44 27 18  0
12:28:18 6  1 967424 10165980   1652 114008    0    0 72700 60800 12525 59365 12 46 29 13  0
12:28:19 6  5 967424 10079536   1552 201004   12    0 91700     0 8773 41036  9 37 36 18  0
12:28:20 5  5 967424 10010352   1640 267340    0    0 36484     0 11164 49569 11 41 35 13  0
12:28:21 5  2 967424 9946732   1720 278668    0    0 81480    40 10572 49893 10 43 34 13  0
12:28:22 6  3 967424 9941164   1812 336472   92    0 73448   820 8063 35725  8 34 38 20  0
12:28:23 7  2 967424 10185156   1708  95096    0    0 37880 116020 11820 56886 11 45 30 14  0
12:28:24 4  4 967424 10158376   1532 122232   16    0 29000 14540 9389 47092  9 41 33 17  0
12:28:25 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
12:28:25 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
12:28:25 9  3 967424 9995728   1720 282168    0    0 109168     0 5062 24050  5 32 50 14  0
12:28:26 6  3 967424 9975200   1764 302068    8    0 35616     0 10937 48511 11 43 32 15  0
12:28:27 6  3 967424 9898512   1844 376736   44    0 37016    40 11216 47423 11 42 33 14  0
12:28:28 5  2 967424 9777288   1944 464060   28    0 89372    32 7675 44210  9 38 39 14  0
12:28:29 7  4 967424 10187020   1508  93940   52    0 49944 188124 7343 41070  9 40 32 19  0
12:28:30 6  2 967424 10167236   1580 114732   36    0 40932    64 12227 56547 12 39 35 13  0
12:28:31 7  3 967424 10081364   1664 200636   56    0 91096     0 8624 34942  9 34 40 17  0
12:28:32 8  2 967424 10022828   1788 231756   20    0 40728    36 10431 51118 11 39 33 17  0
12:28:33 8  3 967424 10182900   1696  97704    0    0 58244  1756 10384 56330 10 45 31 14  0
12:28:34 2  4 967424 10162252   1548 117156   20    0 83572 83360 7049 33628  6 37 39 17  0
12:28:35 3  3 967424 10191104   1544  89604  100    0  2944 31372 2563 13785  3 28 48 21  0
12:28:36 11  4 967424 10173104   1600 108220   36    0 33268     0 10110 52401 10 41 34 14  0
12:28:37 4  3 967424 10116640   1648 134780    0    0 40816    40 11156 54719 11 44 32 13  0
12:28:38 5  4 967424 10185980   1640  95592   12    0 116940    24 8661 35493  8 36 38 18  0
12:28:39 6  2 967424 10133704   1520 145244    0    0 38928   232 11766 61009 12 45 27 16  0
12:28:40 8  3 967424 10188060   1544  92580   68    0 24604 76548 8546 41263  9 41 32 19  0
12:28:41 8  3 967424 10126928   1512 155672   12    0 119696     0 8124 36208  8 39 39 15  0
12:28:42 4  3 967424 10183592   1492  97764    0    0 43424    48 12891 52804 12 48 29 10  0
12:28:43 5  3 967424 10160868   1512 118668   20    0 34336 10932 10721 54931 10 46 31 13  0
12:28:44 8  2 967424 9993748   1708 283296   60    0 115504     0 6847 32959  6 36 39 19  0
12:28:45 7  3 967424 9973180   1752 304788    0    0 39256    12 11889 48368 12 43 32 12  0
12:28:46 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
12:28:46 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
12:28:46 8  2 967424 9871024   1792 342204   44    0 45900    36 12519 51742 12 43 32 13  0
12:28:47 8  0 967268 9869416   1900 407064  136    0 70860     0 4509 25553  8 34 41 16  0

top - 12:29:22 up 10 days, 10:57, 24 users,  load average: 7.40, 10.08, 14.93
Tasks: 353 total,   2 running, 351 sleeping,   0 stopped,   0 zombie
Cpu(s):  8.4%us, 39.8%sy,  1.9%ni, 33.2%id, 15.1%wa,  0.6%hi,  0.9%si,  0.0%st
Mem:  16344352k total,  6264212k used, 10080140k free,     1468k buffers
Swap: 20021240k total,   967256k used, 19053984k free,   170788k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
12786 root      15   0 1495m  31m 3424 S 84.1  0.2   9:43.24 /usr/lib/virtualbox/VirtualBox --comment x_db1 --startvm 1c3b929d-bdbd-40da-8
12413 root      15   0 1196m 5904 3352 S 81.7  0.0  77:33.67 /usr/lib/virtualbox/VirtualBox --comment x_x2 --startvm cc54fb4c-170b-430a
12442 root      15   0 1136m 6028 3324 S 81.1  0.0 172:35.79 /usr/lib/virtualbox/VirtualBox --comment x_x3 --startvm 94756484-d2d5-4bdb
12384 root      15   0 1195m 8712 3360 S 68.9  0.1 172:34.70 /usr/lib/virtualbox/VirtualBox --comment x_x1 --startvm e266cad2-403f-4d98
14504 root      15   0 60440 7376 2540 S 43.8  0.0   0:33.65 /usr/bin/ssh -x -oForwardAgent no -oPermitLocalCommand no -oClearAllForwardings
 3972 root      15   0 60376 4596 1524 R 15.8  0.0 584:54.46 /usr/bin/ssh -x -oForwardAgent no -oPermitLocalCommand no -oClearAllForwardings
 1053 root      15   0 1527m  12m 3500 S 12.2  0.1 387:51.13 /usr/lib/virtualbox/VirtualBox --comment windows7 --startvm 3da776bd-1d5e-4eec-
14503 root      18   0 53884 1904 1452 D  9.2  0.0   0:06.67 scp 1122.tar.bz2 oracle@db1 ~oracle
 3971 root      18   0 54300 1056  864 D  3.6  0.0  49:38.29 scp -rpv 20111015-backup 192.168.0.100 /DataVolume/shares/Public/Backup
12226 root      16   0  251m  75m 2408 S  3.6  0.5 686:16.31 /usr/lib/nspluginwrapper/npviewer.bin --plugin /usr/lib/mozilla/plugins/libflas
  486 root      10  -5     0    0    0 D  1.3  0.0  25:52.02 [kswapd0]
12947 root      15   0 1478m 9692 3372 S  1.3  0.1   7:47.68 /usr/lib/virtualbox/VirtualBox --comment x_db2 --startvm f3e1060d-28f5-4a72-8
 4620 root      15   0 73236  12m 2380 S  1.0  0.1  78:17.59 Xvnc :1 -desktop desktopserver.localdomain:1 (root) -httpd /usr/share/vnc/class
14428 root      18   0  109m  17m 1964 S  0.7  0.1   0:01.79 /usr/bin/perl -w /usr/bin/collectl --all -o T -o D
    8 root      RT  -5     0    0    0 S  0.3  0.0   0:01.33 [migration/2]
   20 root      RT  -5     0    0    0 S  0.3  0.0   0:01.00 [migration/6]
  180 root      10  -5     0    0    0 S  0.3  0.0   0:05.36 [kblockd/0]
 5014 root      15   0  348m 1216  956 S  0.3  0.0   2:08.20 /usr/libexec/mixer_applet2 --oaf-activate-iid=OAFIID:GNOME_MixerApplet_Factory
14559 root      15   0 12892 1320  824 R  0.3  0.0   0:00.03 top -c
    1 root      15   0 10368   88   60 S  0.0  0.0   0:04.08 init [5]
    2 root      RT  -5     0    0    0 S  0.0  0.0   0:00.02 [migration/0]
    3 root      34  19     0    0    0 S  0.0  0.0  11:11.08 [ksoftirqd/0]
    4 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 [watchdog/0]
    5 root      RT  -5     0    0    0 S  0.0  0.0   0:00.44 [migration/1]
    6 root      34  19     0    0    0 S  0.0  0.0   0:00.62 [ksoftirqd/1]
    7 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 [watchdog/1]

[root@desktopserver stage]# scp 1122.tar.bz2 oracle@db1:~oracle
oracle@db1's password:
1122.tar.bz2                                                                                               68% 2116MB  23.4MB/s   00:40 ETA

}}}
! Setup
<<<
1) download the cputoolkit at http://karlarao.wordpress.com/scripts-resources/

2) untar, then modify the orion_3_fts.sh under the aas30 folder

{{{
oracle@desktopserver.local:/home/oracle/dba/benchmark/cputoolkit/aas30:dw
$ ls -ltr
total 16
-rwxr-xr-x 1 oracle dba 315 Sep 27 22:32 saturate
-rwxr-xr-x 1 oracle dba 159 Sep 27 23:10 orion_3_ftsall.sh
-rwxr-xr-x 1 oracle dba 236 Sep 27 23:10 orion_3_ftsallmulti.sh
-rwxr-xr-x 1 oracle dba 976 Nov 26 15:47 orion_3_fts.sh

oracle@desktopserver.local:/home/oracle/dba/benchmark/cputoolkit/aas30:dw
$ cat orion_3_fts.sh
# This is the main script
export DATE=$(date +%Y%m%d%H%M%S%N)

sqlplus -s /NOLOG <<! &
connect / as sysdba

declare
        rcount number;
begin
        -- 3 loops x 60s sleep = ~3 minutes of workload
        for j in 1..3 loop

        -- lotslios by Tanel Poder
        select /*+ cputoolkit ordered
                                use_nl(b) use_nl(c) use_nl(d)
                                full(a) full(b) full(c) full(d) */
                            count(*)
                            into rcount
                        from
                            sys.obj$ a,
                            sys.obj$ b,
                            sys.obj$ c,
                            sys.obj$ d
                        where
                            a.owner# = b.owner#
                        and b.owner# = c.owner#
                        and c.owner# = d.owner#
                        and rownum <= 10000000;
        dbms_lock.sleep(60);
        end loop;
        end;
/

exit;
!
}}}


3) run the workload 
{{{
oracle@desktopserver.local:/home/oracle/dba/benchmark/cputoolkit/aas30:dw
$ ./saturate 16 dw

}}}
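A hedged sketch of what a saturate-style driver like `./saturate 16 dw` might do, assuming it simply forks N background copies of the workload script and waits. The argument handling and the `WORKLOAD` stand-in are my assumptions, not the actual cputoolkit source; on a real run `WORKLOAD` would be `./orion_3_fts.sh`:

```shell
#!/bin/sh
# Sketch of a saturate-style driver: launch N background copies of a
# CPU-burning workload and wait for them all. Assumptions, not toolkit code.
SESSIONS=${1:-2}                               # e.g. ./saturate 16
WORKLOAD=${WORKLOAD:-"echo session started"}   # stand-in for ./orion_3_fts.sh

i=1
while [ "$i" -le "$SESSIONS" ]; do
  sh -c "$WORKLOAD" &      # one sqlplus-driven FTS loop per session
  i=$((i + 1))
done
wait                       # block until every background session exits
```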
<<<

! Instrumentation 
<<<
this will show you that roughly 8+ CPUs are being used
{{{
spool snapper.txt
@snapper out 1 120 "select sid from v$session where status = 'ACTIVE'"
spool off

less snapper.txt  | grep -B6 "CPU"
}}}


of course, before every run do a begin snap, then run the test case, then do an end snap... then compare the output with snapper's
you'll see that snapper is able to catch the fly-by CPU load
{{{
exec dbms_workload_repository.create_snapshot;
execute statspack.snap;

@?/rdbms/admin/awrrpt
@?/rdbms/admin/spreport
}}}
<<<









awr_topevents_v2.sql - added "CPU wait" (new in 11g) to include "unaccounted DB Time" on high run queue workloads http://goo.gl/trwKp, http://twitpic.com/89hp4p

this script was pretty useful, because the usual AWR reports don't show this CPU wait

''What could possibly cause CPU wait?''
From my performance work and the workloads I've seen, here are the possible causes so far


! if you are asking for more CPU work than the CPUs can service
{{{
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
12  0 1054352 377816 767340 1990420    0    0   532   260 15924 11174 92  8  0  0  0
13  2 1054352 373904 767400 1990432    0    0   524     0 21159 10178 93  7  0  0  0
12  1 1054352 373284 767544 1990476    0    0   768    78 17628 11605 92  7  0  0  0
12  1 1054352 373904 767552 1990480    0    0   736    80 16470 12939 95  4  0  0  0
14  1 1054352 372532 767756 1990408    0    0   876     0 17323 13067 92  7  0  0  0
12  1 1054352 324776 767768 2017136    0    0 26957   206 24215 12566 95  5  0  0  0
14  0 1054352 320924 767788 2017168    0    0   796   136 21818 12009 94  5  0  0  0
14  1 1054352 324900 767944 2017180    0    0   836    40 17699 12674 95  5  0  0  0
}}}
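The condition above can be spotted mechanically: flag any vmstat sample whose run queue (column r) exceeds the CPU count. NCPU and the sample lines below are illustrative; normally the input would be piped straight from `vmstat 1`:

```shell
#!/bin/sh
# Flag vmstat samples where runnable processes (r) exceed the CPU count.
# NCPU and the here-doc samples are examples, not live output.
NCPU=8
awk -v ncpu="$NCPU" '$1 ~ /^[0-9]+$/ && $1 + 0 > ncpu + 0 {
    print "saturated: r=" $1 " with only " ncpu " CPUs"
}' <<'EOF'
12  0 1054352 377816 767340 1990420    0    0   532   260 15924 11174 92  8  0  0  0
 2  1 1054352 373904 767400 1990432    0    0   524     0 21159 10178 93  7  0  0  0
EOF
```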

! from the AAS investigation: hundreds of users forked at the same time doing select * from a table, without enough CPU to service that surge of processes, causing a high "b" (blocked on IO) count in vmstat
{{{
$ vmstat 1 1000
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0 277 1729904 467564 603852 5965768    0    0  1165    67    0    0  5 22 71  1  0
 2 275 1729904 465944 603856 5965772    0    0 521776  2676 15712 17116  6  5 15 74  0
 3 274 1729904 495324 603868 5965400    0    0 543848    48 10366 47365 14  5  1 80  0
 2 275 1729904 478732 603880 5966036    0    0 616776   248 10361 40782 13  5  0 82  0
 3 276 1729904 473764 603880 5966296    0    0 538416   816 10809 16695  7  3  0 90  0
 0 276 1729904 473136 603880 5966300    0    0 620120    16 15006 15223 13  3 10 74  0
 1 275 1729904 485808 603880 5966300    0    0 552696     0 8953 16632  5  3 12 80  0
 0 275 1729904 486204 603880 5966308    0    0 536784    52 11397 15096  5  3  1 90  0
 3 274 1729904 492916 603880 5966312    0    0 556352    56 10594 15988  7  5  2 86  0
}}}
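vmstat's "b" column is the count of processes in uninterruptible sleep (state D), so the same surge can be attributed to specific processes. Normally the input is `ps -eo stat,comm`; the here-doc below is sample input so the filter itself is easy to see:

```shell
#!/bin/sh
# Count processes in uninterruptible sleep (state D) -- what vmstat's "b"
# column reports. Sample input stands in for `ps -eo stat,comm`.
awk 'NR > 1 && $1 ~ /^D/ { n++; print "blocked:", $2 }
     END { print n + 0, "process(es) in D state" }' <<'EOF'
STAT COMMAND
Ss   init
D    scp
R+   top
Dl   kswapd0
EOF
```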

! PGA usage maxing out the memory, so kswapd kicks in and the server starts to swap heavily, causing high CPU wait IO
<<<
{{{
top - 12:58:20 up 132 days, 42 min,  2 users,  load average: 13.68, 10.22, 9.07
Tasks: 995 total,  42 running, 919 sleeping,   0 stopped,  34 zombie
Cpu(s): 48.5%us, 28.4%sy,  0.0%ni, 10.5%id, 11.2%wa,  0.0%hi,  1.3%si,  0.0%st
Mem:  98848968k total, 98407164k used,   441804k free,      852k buffers
Swap: 25165816k total,  2455968k used, 22709848k free,   383132k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
13483 oracle    25   0 12.9g 509m  43m R 80.1  0.5 214:25.43 oraclemtaprd111 (LOCAL=NO)
24308 oracle    25   0 13.4g 1.0g  97m R 77.1  1.1  15:58.80 oraclemtaprd111 (LOCAL=NO)
16227 oracle    25   0 13.4g 1.0g  95m R 74.1  1.1   1312:47 oraclemtaprd111 (LOCAL=NO)
1401 root      11  -5     0    0    0 R 67.8  0.0 113:21.15 [kswapd0]
--
top - 12:59:48 up 132 days, 44 min,  2 users,  load average: 116.16, 43.81, 20.96
Tasks: 985 total,  73 running, 879 sleeping,   0 stopped,  33 zombie
Cpu(s):  8.6%us, 90.1%sy,  0.0%ni,  0.6%id,  0.7%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:  98848968k total, 98407396k used,   441572k free,     2248k buffers
Swap: 25165816k total,  2645544k used, 22520272k free,   370780k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
32349 oracle    18   0 9797m 1.1g  33m S 493.3  1.2   0:36.76 oraclebiprd2 (LOCAL=NO)
29495 oracle    15   0  216m  26m  11m S 466.6  0.0   3:01.86 /u01/app/11.2.0/grid/bin/diskmon.bin -d -f
32726 oracle    16   0 8788m 169m  37m R 447.6  0.2   0:24.86 oraclebiprd2 (LOCAL=NO)
32338 oracle    18   0 9525m 905m  42m R 407.0  0.9   0:33.20 oraclebiprd2 (LOCAL=NO)
--
top - 12:59:54 up 132 days, 44 min,  2 users,  load average: 107.27, 44.31, 21.37
Tasks: 991 total,  16 running, 942 sleeping,   0 stopped,  33 zombie
Cpu(s): 30.3%us,  3.6%sy,  0.0%ni, 14.3%id, 51.6%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:  98848968k total, 98167188k used,   681780k free,     5264k buffers
Swap: 25165816k total,  2745440k used, 22420376k free,   369676k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
1401 root      10  -5     0    0    0 S 77.8  0.0 114:36.03 [kswapd0]                     <-- KSWAPD kicked in
19163 oracle    15   0 2152m  72m  16m S 74.9  0.1   9:42.45 /u01/app/11.2.0/grid/bin/oraagent.bin
3394 oracle    15   0  436m  23m  14m S 33.8  0.0  12:23.44 /u01/app/11.2.0/grid/bin/oraagent.bin
2171 root      16   0  349m  28m  12m S 28.6  0.0   1:50.29 /u01/app/11.2.0/grid/bin/orarootagent.bin


> vmstat 1 5000
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2 69 9893340 442000   9032 346328 1160 2760  1264  2904 2083 13656 10  0 34 56  0
 3 67 9894700 443524   9036 345768 1204 3236  1268  3300 1930 13332  7  0 47 46  0
 1 72 9895936 446228   9052 346484 1052 3156  1220  3648 1819 13674  5  0 47 48  0
 2 74 9897156 448732   9064 346616 1724 3432  2128  3436 1936 14598  7  0 44 49  0
 3 73 9897724 446904   9068 347580 1524 2468  1636  2480 1730 13363  6  0 32 61  0
 7 65 9898208 448312   9080 347472 1328 1944  1660  1952 2496 14019 16  0 32 52  0
 8 61 9898500 444836   9092 347904 2128 2004  2464  2208 3381 16093 29  1 23 47  0
 1 79 9899372 441588   9104 348048 1236 2684  1424  3300 2774 14103 23  0 24 53  0
13 54 9909828 551780   9224 349588 36124 63296 37608 64800 126067 443473 18  0 23 59  0
16 40 9910136 536048   9260 350004 4208  988  5076  2044 5434 17055 51  3 10 36  0

}}}
<<<
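The swap storm above shows up in vmstat as non-zero si/so (fields 7 and 8 in the default layout), so it can be flagged the same way. The sample lines are illustrative, not live output:

```shell
#!/bin/sh
# Flag vmstat samples that are actively swapping (si or so > 0) -- the
# pattern when kswapd kicks in. si = field 7, so = field 8 in vmstat's
# default layout. Sample lines below are examples.
awk '$1 ~ /^[0-9]+$/ && ($7 + 0 > 0 || $8 + 0 > 0) {
    print "swapping: si=" $7 " so=" $8
}' <<'EOF'
 2 69 1729904 467564 603852 5965768 1160 2760  1264  2904 2083 13656 10  0 34 56  0
 0  1  967424 442000   9032  346328    0    0   532   260 2083  1365 10  0 88  2  0
EOF
```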


''wait io on CPUIDs'' https://www.evernote.com/shard/s48/sh/0da5e22a-6a80-4a82-86a9-581d9203ed9c/8f5b5f4f63b789a9c3d4dc6a618128d0

http://www.ludovicocaldara.net/dba/how-to-collect-oracle-application-server-performance-data-with-dms-and-rrdtool/

http://allthingsmdw.blogspot.com/2012/02/analyzing-thread-dumps-in-middleware.html
Capacity Planning for LAMP
http://www.scribd.com/doc/43281/Slides-from-Capacity-Planning-for-LAMP-talk-at-MySQL-Conf-2007

http://perfwork.wordpress.com/2010/03/20/cpu-utilization-on-ec2/

IO tuning
http://communities.vmware.com/thread/268869
http://vpivot.com/2010/05/04/storage-io-control/
http://communities.vmware.com/docs/DOC-5490
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008205
http://book.soundonair.ru/hall2/ch06lev1sec1.html                             <-- COOL LVM Striping!!!
http://tldp.org/HOWTO/LVM-HOWTO/recipethreescsistripe.html
http://linux.derkeiler.com/Newsgroups/comp.os.linux.misc/2010-01/msg00325.html

http://www.goodwebpractices.com/other/wordpress-vs-joomla-vs-drupal.html
http://www.pcpro.co.uk/blogs/2011/02/02/joomla-1-6-vs-drupal-7-0/
http://www.pcpro.co.uk/reviews/software/364549/drupal-7
http://www.alledia.com/blog/general-cms-issues/joomla-and-drupal-which-one-is-right-for-you/ <-- nice comparison
Start/Stop CRS
http://www.dbaexpert.com/blog/2007/09/start-and-stop-crs/
https://forums.oracle.com/forums/thread.jspa?messageID=9817219&#9817219 <-- installation!

http://www.crisp.demon.co.uk/blog/2011-06.html  <-- his blog about his dtrace port
http://crtags.blogspot.com/  <-- the download page

{{{
cd /reco/installers/rpms/dtrace-20110718
make all
make load
build/dtrace -n 'syscall:::entry { @[execname] = count(); }'
build/dtrace -n 'syscall:::entry /execname == "VirtualBox"/ { @[probefunc] = count(); }'


[root@desktopserver dtrace-20110718]# build/dtrace -n 'syscall:::entry { @[execname] = count(); }'
dtrace: description 'syscall:::entry ' matched 633 probes
^C

  hpssd.py                                                          1
  VBoxNetDHCP                                                       2
  mapping-daemon                                                    2
  nmbd                                                              3
  init                                                              4
  gnome-panel                                                       6
  httpd                                                             6
  ntpd                                                              8
  tnslsnr                                                           8
  gpm                                                              14
  pam-panel-icon                                                   16
  perl                                                             17
  sshd                                                             17
  avahi-daemon                                                     22
  metacity                                                         30
  iscsid                                                           31
  nautilus                                                         38
  automount                                                        40
  ocssd.bin                                                        42
  gam_server                                                       45
  gdm-rh-security                                                  55
  gnome-screensav                                                  66
  emagent                                                          67
  gnome-power-man                                                  75
  tail                                                             82
  gnome-settings-                                                  86
  evmd.bin                                                        100
  mixer_applet2                                                   143
  escd                                                            165
  gnome-terminal                                                  205
  cssdagent                                                       221
  gconfd-2                                                        277
  pcscd                                                           372
  wnck-applet                                                     382
  TeamViewer.exe                                                  392
  pam_timestamp_c                                                 406
  wineserver                                                      412
  collectl                                                        525
  dtrace                                                          616
  ohasd.bin                                                      1063
  vncviewer                                                      1244
  oraagent.bin                                                   2046
  VBoxXPCOMIPCD                                                  2081
  Xvnc                                                           3348
  VBoxSVC                                                        8058
  oracle                                                        12601
  java                                                          39786
  firefox                                                       74345
  npviewer.bin                                                 204925
  VirtualBox                                                   415025


  
  
}}}


http://www.evernote.com/shard/s48/sh/1ccb0466-79b7-4090-9a5d-9371358ac54d/b8434e3e3b3130ce72422b9ae067e7b9
<<showtoc>>

! references 
https://css-tricks.com/the-difference-between-id-and-class/
http://stackoverflow.com/questions/12889362/difference-between-id-and-class-in-css-and-when-to-use-it
Some noteworthy tweets ... blog summary here http://www.oraclenerd.com/2011/03/fun-with-tuning.html
<<<
{{{
@DBAKevlar Shouldn't need that or any tricks. Defaults of CREATE TABLESPACE should work just fine. /cc @oraclenerd

@oraclenerd Likely because the writer slave set is slower than the reader slave set. Readers want to send more data, writers not ready.

Issue of a Balance HW config or too much PX? RT @GregRahn: @oraclenerd Likely because the writer slave set is slower than the reader slave set. Readers want to send more data, writers not ready
-- OR not proper PX 

@GregRahn can you dumb that down for me? slow disks? slow part of disks?

@oraclenerd Seems likely that the disk writes are the slow side of the execution. The read side probably faster. Got SQL Monitor report?

@GregRahn I have the Real Time SQL Monitoring report from SQL Dev. Didn't configure EM or anything else

observing @tomroachoracle run sar reports on my VM

@oraclenerd That should work. Email me that

OH: "Solutions are only useful when the problem is well understood"

@GregRahn could you improve @oraclenerd s parallel query?

@martinberx Indeed. Mr @oraclenerd did not have PARALLEL in the CTAS, only on the SELECT side. Many readers, 1 writer. He's much wiser now.

@GregRahn @oraclenerd top wait events (AWR): DB CPU (82%) - direct path read (15%) - direct path write (2%) can we avoid CPU work somehow?

replacing NULL with constant (-999) in DWH like env to avoid outer joins. Your ideas?

@martinberx Can use NOCOMPRESS. Better option - use more CPU cores. /cc @oraclenerd

@GregRahn good idea! trade CPU vs. IO @oraclenerd has to decide if he wants faster CTAS or query afterwards.

@martinberx If lots are null, you'll skew num_rows/NDV by using a constant instead. Histogram for col?
}}}
<<<


! likely because the writer slave set is slower than the reader slave set. readers want to send more data writers are not ready 
{{{
"Perhaps you have a parallel hint on the select but not on the table, like this"

CREATE TABLE claim
  COMPRESS BASIC
  NOLOGGING
  PARALLEL 8
AS
SELECT /*+ PARALLEL( c, 8 ) */
  date_of_service,
  date_of_payment,
  claim_count,
  units,
  amount,
  ...
  ...
}}}

Connect Time Failover & Transparent Application Failover for Data Guard
http://uhesse.wordpress.com/2009/08/19/connect-time-failover-transparent-application-failover-for-data-guard/
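The technique in that post can be sketched as a tnsnames.ora entry like the one below (hostnames, ports, and the service name are made-up placeholders). The client walks the ADDRESS_LIST at connect time, so after a switchover it transparently reaches whichever host currently offers the service, and FAILOVER_MODE adds TAF on top:
{{{
DG_APP =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = prmhost)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = sbyhost)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = app_service)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 10)(DELAY = 5))
    )
  )
}}}
For this to behave, the service must be running only on the database that currently holds the primary role (e.g. via a role-checking startup trigger or a clusterware resource).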

DataGuard Startup Service trigger
http://blog.dbvisit.com/the-power-of-oracle-services-with-standby-databases/
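The pattern from the dbvisit post is roughly this (the service name below is a made-up placeholder): register the service once, then let a startup trigger start it only when the database opens in the PRIMARY role, so clients connecting through that service always land on the primary.
{{{
-- run once: register the service (name is hypothetical)
exec dbms_service.create_service('app_service','app_service');

CREATE OR REPLACE TRIGGER manage_app_service
AFTER STARTUP ON DATABASE
DECLARE
  v_role v$database.database_role%TYPE;
BEGIN
  SELECT database_role INTO v_role FROM v$database;
  IF v_role = 'PRIMARY' THEN
    dbms_service.start_service('app_service');
  END IF;
END;
/
}}}
After a switchover the trigger fires on the new primary at startup and the service follows the primary role.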
1) Read the PDF here, which talks about capacity planning and sizing. It includes a tool and a sample scenario https://github.com/karlarao/sizing_worksheet
2) You can view my presentations and papers here http://www.slideshare.net/karlarao/presentations, http://www.slideshare.net/karlarao/documents
3) Some of my tools https://karlarao.wordpress.com/scripts-resources/, https://github.com/karlarao
4) And go through the entries under the following topics of my wiki 
http://karlarao.tiddlyspot.com/#OraclePerformance
http://karlarao.tiddlyspot.com/#Benchmark
http://karlarao.tiddlyspot.com/#%5B%5BCapacity%20Planning%5D%5D
http://karlarao.tiddlyspot.com/#%5B%5BHardware%20and%20OS%5D%5D
http://karlarao.tiddlyspot.com/#PerformanceTools
http://karlarao.tiddlyspot.com/#%5B%5BTroubleshooting%20%26%20Internals%5D%5D
http://karlarao.tiddlyspot.com/#CloudComputing
https://github.com/karlarao/forecast_examples

I recommend you read the books: 
•	FOP http://www.amazon.com/Forecasting-Oracle-Performance-Craig-Shallahamer/dp/1590598024/ref=sr_1_1?ie=UTF8&qid=1435948281&sr=8-1&keywords=forecasting+oracle+performance&pebp=1435948282498&perid=16H7PDMSDZYF4PJ4FWET
•	OPF http://www.amazon.com/Oracle-Performance-Firefighting-Craig-Shallahamer/dp/0984102302/ref=sr_1_3?ie=UTF8&qid=1435948281&sr=8-3&keywords=forecasting+oracle+performance
•	TAOS (headroom section) http://www.amazon.com/Art-Scalability-Architecture-Organizations-Enterprise/dp/0134032802/ref=sr_1_1?ie=UTF8&qid=1435948300&sr=8-1&keywords=the+art+of+scalability
•	TPOCSA (capacity planning section) http://www.amazon.com/Practice-Cloud-System-Administration-Distributed/dp/032194318X/ref=sr_1_1?ie=UTF8&qid=1435948307&sr=8-1&keywords=tom+limoncelli
•	"Cloud Capacity Management" http://www.apress.com/9781430249238  it goes through the end-to-end capacity service model, which is ideal for a big shop (with a lot of hierarchy and bureaucracy) or if you are thinking about implementing database as a service at large scale. Some points are not very detailed, but it covers all the relevant terms/topics. We have done a similar thing in the past for a Fortune 100 bank, but focused only on Oracle services.


Also join:
•	GCAP google groups https://groups.google.com/forum/#!forum/guerrilla-capacity-planning


I’ve been meaning to put together a 1-day workshop about sizing, capacity planning, and resource management. 
At Enkitec I’ve done 80+ sizing engagements, so I’ve got a ton of data to show and experiences to share. Hopefully by the end of this year I’ll be able to complete that workshop. 


! other
http://cyborginstitute.org/projects/administration/database-scaling/ , http://cyborginstitute.org/projects/administration/
book: Operating Systems: Concurrent and Distributed Software Design http://search.safaribooksonline.com/0-321-11789-1 












! chargeback , cost 
cloud capacity management https://learning.oreilly.com/library/view/cloud-capacity-management/9781430249238/9781430249238_Ch13.xhtml#Sec1
hybrid cloud management https://learning.oreilly.com/library/view/hybrid-cloud-management/9781785283574/ch08s04.html
cloud data centers cost modeling https://learning.oreilly.com/library/view/cloud-data-centers/9780128014134/xhtml/chp016.xhtml
cloud computing billing and chargeback https://learning.oreilly.com/library/view/cloud-computing-automating/9780132604000/ch09.html
cloud native architectures https://learning.oreilly.com/library/view/cloud-native-architectures/9781787280540/2840e1ee-fdf4-437d-84bf-22d2cda7892b.xhtml
oem chargeback https://www.oracle.com/enterprise-manager/downloads/chargeback-capacity-planning-downloads.html











http://www.oracle.com/us/products/engineered-systems/iaas/engineered-systems-iaas-ds-1897230.pdf
{{{

select /*+ OPT_PARAM('CONTAINER_DATA', 'CURRENT_DICTIONARY') */ * from dba_hist_sqlstat;
select /*+ OPT_PARAM('CONTAINER_DATA', 'CURRENT_DICTIONARY') */ * from dba_hist_sqltext where sql_text like '%incidentti0_1_.LIFECYCLE<>%';

SELECT /*+ OPT_PARAM('CONTAINER_DATA', 'CURRENT_DICTIONARY') */
    st.sql_id,
    st.snap_id,
    st.instance_number,
    sn.end_interval_time,
    tx.sql_text
FROM
    dba_hist_sqlstat st
JOIN
    dba_hist_sqltext tx
ON
    st.sql_id = tx.sql_id
JOIN
    dba_hist_snapshot sn
ON
    st.snap_id = sn.snap_id
    AND st.instance_number = sn.instance_number
WHERE
    sn.end_interval_time >= SYSDATE - 1
    AND tx.sql_text LIKE '%incidentti0_1_.LIFECYCLE<>%'
ORDER BY
    sn.end_interval_time DESC;



WITH filtered_sqls AS (
    SELECT /*+ OPT_PARAM('CONTAINER_DATA', 'CURRENT_DICTIONARY') */
        st.sql_id,
        st.snap_id,
        st.instance_number,
        sn.end_interval_time
    FROM
        dba_hist_sqlstat st
    JOIN
        dba_hist_sqltext tx
    ON
        st.sql_id = tx.sql_id
    JOIN
        dba_hist_snapshot sn
    ON
        st.snap_id = sn.snap_id
        AND st.instance_number = sn.instance_number
    WHERE
        sn.end_interval_time >= SYSDATE - 1/24
        AND tx.sql_text LIKE '%incidentti0_1_.LIFECYCLE<>%'
        and st.sql_id = '3216h5v2tp7k7'
)
SELECT /*+ OPT_PARAM('CONTAINER_DATA', 'CURRENT_DICTIONARY') */
    b.sql_id,
    b.name,
    b.value_string
FROM
    dba_hist_sqlbind b
JOIN
    filtered_sqls f
ON
    b.sql_id = f.sql_id
ORDER BY
    b.sql_id, b.name;

}}}



{{{
SELECT  * from table(
  select dbms_sqltune.extract_binds(bind_data) from v$sql
  where sql_id = '&sql_id'
  and child_number = &child_no)
/

select a.sql_id, a.name, a.value_string
from dba_hist_sqlbind a, dba_hist_snapshot b
where a.snap_id between b.snap_id - 1 and b.snap_id
and b.begin_interval_time <= to_date('&DATE_RUNNING', 'DD-MON-YYYY HH24:MI:SS')
and b.end_interval_time >= to_date('&DATE_RUNNING',  'DD-MON-YYYY HH24:MI:SS')
and sql_id = '&SQL_ID'
/
}}}
How do I know if the cardinality estimates in a plan are accurate?
http://blogs.oracle.com/optimizer/entry/how_do_i_know_if
http://blogs.oracle.com/optimizer/entry/cardinality_feedback
http://kerryosborne.oracle-guy.com/2011/07/cardinality-feedback/
http://kerryosborne.oracle-guy.com/2011/01/sql-profiles-disable-automatic-dynamic-sampling/
Martin -- Thanks for the question regarding "What does Buffer Sort mean", version 10.2 http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:3123216800346274434
9.2.0.4 buffer (sort) http://www.freelists.org/post/oracle-l/9204-buffer-sort
Buffer Sort explanation http://www.freelists.org/post/oracle-l/Buffer-Sort-explanation, http://www.orafaq.com/maillist/oracle-l/2005/08/07/0420.htm
Buffer Sorts http://jonathanlewis.wordpress.com/2006/12/17/buffer-sorts/
Buffer Sorts – 2 http://jonathanlewis.wordpress.com/2007/01/12/buffer-sorts-2/
Cartesian Merge Join http://jonathanlewis.wordpress.com/2006/12/13/cartesian-merge-join/
Optimizer Selects the Merge Join Cartesian Despite the Hints [ID 457058.1]   alter session set "_optimizer_mjc_enabled"=false ;
Scalar Subquery and Complex View Merging Disabled http://dioncho.wordpress.com/2009/04/17/scalar-subquery-and-complex-view-merging-disabled/
ZS -- Thanks for the question regarding "Why a Merge Join Cartesian?", version 8.1.7 http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4105951726381













<<showtoc>>


https://db-engines.com/en/system/Cassandra%3BOracle


! video learning 
* https://www.linkedin.com/learning/cassandra-data-modeling-essential-training/cassandra-and-relational-databases
* Cassandra for Developers https://app.pluralsight.com/library/courses/cassandra-developers/table-of-contents
* https://www.udemy.com/courses/search/?src=ukw&q=cassandra
FREE https://learning.oreilly.com/videos/cassandra-administration/9781782164203?autoplay=false
https://www.udemy.com/course/from-0-to-1-the-cassandra-distributed-database/
https://www.udemy.com/course/cassandra-administration/
https://learning.oreilly.com/videos/mastering-cassandra-essentials/9781491994122?autoplay=false
https://learning.oreilly.com/videos/distributed-systems-in/9781491924914?autoplay=false



! references 
https://www.google.com/search?q=cassandra+vs+mongodb&oq=cassandra+vs+m&aqs=chrome.1.69i57j0l7.3644j0j1&sourceid=chrome&ie=UTF-8
cassandra vs mongo vs redis https://db-engines.com/en/system/Cassandra%3BMongoDB%3BRedis
https://www.educba.com/cassandra-vs-redis/


! cassandra for architects 

!! Berglund and McCullough on Mastering Cassandra for Architects 
https://learning.oreilly.com/videos/berglund-and-mccullough/9781449327378/9781449327378-video153237?autoplay=false

!! Design a monitoring or analytics service like Datadog or SignalFx
https://leetcode.com/discuss/interview-question/system-design/287678/Design-a-monitoring-or-analytics-service-like-Datadog-or-SignalFx
<<<
What are the total events and the number of users? The storage requirement depends on total events * number of users.

It's a write-heavy service.
Data persists for 6 months.
Can use Cassandra for storing logs.
Row key = event + client id, column key is the timestamp, and the value stores the number of times the event happens during that timestamp.
365/2 * 86400 = 15768000 seconds, so if we need to store the number of times an event happens each second, we need 15768000 columns for each key. For the last 6 months we can store at minute resolution as well, which is 365/2 * 24 * 60 = 262800 minutes. We can also use hours for older data, such as 3 months ago, to save space.
The database will be sharded by event + client id.
<<<
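The row-key design above could be sketched in CQL roughly like this (table and column names are made up). The composite partition key shards by event + client id, the clustering column is the minute bucket, and a counter column holds the per-bucket tally:
{{{
-- hypothetical CQL sketch of the design above
CREATE TABLE event_counts (
    event      text,
    client_id  text,
    bucket     timestamp,   -- timestamp rounded down to the minute
    hits       counter,
    PRIMARY KEY ((event, client_id), bucket)
);

-- one write per observed event occurrence
UPDATE event_counts SET hits = hits + 1
WHERE event = 'page_load' AND client_id = 'c42'
  AND bucket = '2019-09-12 23:05:00';
}}}
Note that in a counter table the counter must be the only non-key column, which fits this write-heavy, append-only workload.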

!! Cassandra for mission critical data
https://www.slideshare.net/semLiveEnv/cassandra-for-mission-critical-data

!! NoSQL & HBase overview 
https://www.slideshare.net/VenkataNagaRavi/hbase-overview-41046280





! cassandra use cases 

https://blog.pythian.com/cassandra-use-cases/
{{{
Ideal Cassandra Use Cases
It turns out that Cassandra is really very good for some applications.

The ideal Cassandra application has the following characteristics:

Writes exceed reads by a large margin.
Data is rarely updated and when updates are made they are idempotent.
Read Access is by a known primary key.
Data can be partitioned via a key that allows the database to be spread evenly across multiple nodes.
There is no need for joins or aggregates.
Some of my favorite examples of good use cases for Cassandra are:

Transaction logging: Purchases, test scores, movies watched and movie latest location.
Storing time series data (as long as you do your own aggregates).
Tracking pretty much anything including order status, packages etc.
Storing health tracker data.
Weather service history.
Internet of things status and event history.
Telematics: IOT for cars and trucks.
Email envelopes—not the contents.
}}}


!! migrate oracle to cassandra (NETFLIX)
Global Netflix - Replacing Datacenter Oracle with Global Apache Cassandra on AWS
http://www.hpts.ws/papers/2011/sessions_2011/GlobalNetflixHPTS.pdf


!! hadoop on cassandra - datastax 
https://stackoverflow.com/questions/14827693/hadoop-on-cassandra-database
<<<
If you are interested in marrying Hadoop and Cassandra, the first link should be the DataStax company, which is built around this concept: http://www.datastax.com/ They built and support Hadoop with HDFS replaced by Cassandra. To the best of my understanding, they do have data locality: http://blog.octo.com/en/introduction-to-datastax-brisk-an-hadoop-and-cassandra-distribution/
<<<


!! hadoop vs cassandra 
Hadoop vs. Cassandra https://www.youtube.com/watch?v=ZzFCfH8e3QA




! cassandra flask python
https://www.google.com/search?sxsrf=ACYBGNRzZb4Is9CEjMPzBaMsrc7CkGWPHA%3A1568334590979&ei=_uJ6XZKeO9Dy5gKp9oSQDA&q=cassandra+flask+python&oq=cassandra+flask&gs_l=psy-ab.3.1.0j0i22i30l5j0i22i10i30.5510.5925..7882...0.3..0.82.381.5......0....1..gws-wiz.......0i71.CY3xBlZ92os

http://rmehan.com/2016/04/18/using-cassandra-with-flask/
Collect pageviews with Flask and Cassandra https://mmas.github.io/pageviews-flask-cassandra




! cassandra sample application code 
https://learning.oreilly.com/library/view/cassandra-the-definitive/9781449399764/ch04.html




















.


.

this feature is new in RHEL6

Documentation http://linux.oracle.com/documentation/EL6/Red_Hat_Enterprise_Linux-6-Resource_Management_Guide-en-US.pdf
How I Used CGroups to Manage System Resources In Oracle Linux 6 http://www.oracle.com/technetwork/articles/servers-storage-admin/resource-controllers-linux-1506602.html

IO https://fritshoogland.wordpress.com/2012/12/15/throttling-io-with-linux/
CPU http://manchev.org/2014/03/processor-group-integration-in-oracle-database-12c/

Using PROCESSOR_GROUP_NAME to bind a database instance to CPUs or NUMA nodes on Linux (Doc ID 1585184.1)
Using PROCESSOR_GROUP_NAME to bind a database instance to CPUs or NUMA nodes on Solaris (Doc ID 1928328.1)


Modern Linux Servers with cgroups - Brandon Philips, CoreOS https://www.youtube.com/watch?v=ZD7HDrtkZoI
Resource allocation using cgroups https://www.youtube.com/watch?v=JN2Ei7zn2S0
OEL 6 doc http://docs.oracle.com/cd/E37670_01/E37355/html/index.html, https://docs.oracle.com/cd/E37670_01/E37355/html/ol_getset_param_cgroups.html, http://docs.oracle.com/cd/E37670_01/E37355/html/ol_use_cases_cgroups.html
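As a rough sketch of the technique in the links above (the group name, CPU list, device numbers, and limit values are all made-up examples), an OL6/RHEL6 /etc/cgconfig.conf entry can pin CPUs and throttle I/O for a control group:
{{{
# /etc/cgconfig.conf -- hypothetical group named "oracledb"
group oracledb {
    cpuset {
        cpuset.cpus = "0-7";    # pin to the first 8 CPUs
        cpuset.mems = "0";      # memory from NUMA node 0
    }
    blkio {
        # throttle reads on device 8:16 to ~100 MB/s (bytes/sec)
        blkio.throttle.read_bps_device = "8:16 104857600";
    }
}
}}}
After `service cgconfig restart`, a 12c instance can be bound to the cpuset via the PROCESSOR_GROUP_NAME parameter (set to oracledb here), or arbitrary processes can be launched inside the group with cgexec.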





to get rid of ORA-28003: password verification for the specified password failed
{{{
ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION NULL;
alter profile DEFAULT limit PASSWORD_REUSE_MAX 6 PASSWORD_REUSE_TIME unlimited;
ALTER PROFILE DEFAULT LIMIT PASSWORD_LIFE_TIME UNLIMITED;
}}}

to change profile for a specific user
{{{
select username, account_status, PROFILE from dba_users;
ALTER PROFILE MONITORING_USER LIMIT PASSWORD_VERIFY_FUNCTION NULL;
alter profile MONITORING_USER limit PASSWORD_REUSE_MAX 6 PASSWORD_REUSE_TIME unlimited;
alter user HCMREADONLY identified by noentry;
ALTER PROFILE MONITORING_USER LIMIT PASSWORD_VERIFY_FUNCTION VERIFY_FUNCTION;
}}}

to get the old password hash and put it back afterwards
{{{
create user TEST identified by TEST;
grant create session to TEST;
select username, password from dba_users where username = 'TEST';

 select username, password from dba_users where username = 'TEST';

USERNAME                       PASSWORD
------------------------------ ------------------------------
TEST                           7A0F2B316C212D67

alter user TEST identified by TEST2;

Alter user TEST identified by values 'OLD HASH VALUE ';
Alter user TEST identified by values '7A0F2B316C212D67';
}}}


! expired and locked
{{{
select username, account_status from dba_users;
select 'ALTER USER ' || username || ' ACCOUNT UNLOCK;' from dba_users where account_status like '%LOCKED%';

set heading off
set echo off
set long 9999999
select dbms_metadata.get_ddl('USER', username) || ';' usercreate
from dba_users where username = 'SYSMAN';

If you are using case-sensitive passwords (sec_case_sensitive_logon = TRUE), then you can do this:
select 'alter user '||u.username||' identified by values '||chr(39)||s.spare4||chr(39)||';' from dba_users u, sys.user$ s where u.username = s.name and u.account_status like '%EXPIRED%';
If you're not using mixed-case passwords (sec_case_sensitive_logon = FALSE), then do:
select 'alter user '||username||' identified by values '||chr(39)||password||chr(39)||';' from dba_users where account_status like '%EXPIRED%';

select 'ALTER USER ' || username || ' identified by oracle1;' from dba_users where account_status like '%EXPIRED%';

http://laurentschneider.com/wordpress/2008/03/alter-user-identified-by-values-in-11g.html
http://coskan.wordpress.com/2009/03/11/alter-user-identified-by-values-on-11g-without-using-sysuser/

-- for sysman do this starting 10204
emctl setpasswd dbconsole
}}}



! example
{{{
21:06:35 SYS@cdb1> SET LONG 999999
21:06:45 SYS@cdb1> select dbms_metadata.get_ddl('USER','ALLOC_APP_USER') from dual;
 
DBMS_METADATA.GET_DDL('USER','ALLOC_APP_USER')
--------------------------------------------------------------------------------
 
   CREATE USER "ALLOC_APP_USER" IDENTIFIED BY VALUES 'S:135E0A81F4B08AD2EE81B3A0
E4B28DB3A08983E40524264C4764EDCEE856;H:9BCF43B8002C09A03D7C5B0C80D35B86;T:42BE3A
2EC61307ED9FF704DF80951D76986F47D43EA835799836001399562899F95FB95B171C9D58A16E45
F7459ADEE74901C7B4A9A9AEFDD92FD03278F3038B0EE0A03F14A3C7520FABCC386FA6A72A;E0E79
5741BD02FB0'
      DEFAULT TABLESPACE "USERS"
      TEMPORARY TABLESPACE "TEMP2"
 
 
21:06:49 SYS@cdb1>
21:06:51 SYS@cdb1> conn system/xxx
Connected.
21:07:15 SYSTEM@cdb1>
21:07:16 SYSTEM@cdb1>
21:07:16 SYSTEM@cdb1> alter user alloc_app_user identified by karlarao;
 
User altered.
 
21:07:37 SYSTEM@cdb1> conn alloc_app_user/karlarao
Connected.
21:07:45 ALLOC_APP_USER@cdb1>
21:07:46 ALLOC_APP_USER@cdb1>
21:07:46 ALLOC_APP_USER@cdb1> conn system/xxx
Connected.
21:07:51 SYSTEM@cdb1>
21:07:51 SYSTEM@cdb1>
21:07:51 SYSTEM@cdb1> alter user alloc_app_user identified by values 'S:135E0A81F4B08AD2EE81B3A0E4B28DB3A08983E40524264C4764EDCEE856;H:9BCF43B8002C09A03D7C5B0C80D35B86;T:42BE3A2EC61307ED9FF704DF80951D76986F47D43EA835799836001399562899F95FB95B171C9D58A16E45F7459ADEE74901C7B4A9A9AEFDD92FD03278F3038B0EE0A03F14A3C7520FABCC386FA6A72A;E0E795741BD02FB0';
 
User altered.
 
21:08:46 SYSTEM@cdb1> conn alloc_app_user/karlarao
ERROR:
ORA-01017: invalid username/password; logon denied
 
 
Warning: You are no longer connected to ORACLE.
21:08:55 @>
21:08:56 @> conn alloc_app_user/testalloc
Connected.
21:09:07 ALLOC_APP_USER@cdb1>
}}}


http://dbakevlar.blogspot.com/2010/08/simple-reporting-without-materialized.html
http://avdeo.com/2010/11/01/converting-migerating-database-character-set/


http://www.oracle-base.com/articles/9i/character-semantics-and-globalization-9i.php
https://forums.oracle.com/forums/thread.jspa?messageID=2371685
Modify NLS_LENGTH_SEMANTICS online http://gasparotto.blogspot.com/2009/03/modify-nlslengthsemantics-online.html


! 2022
https://www.kibeha.dk/2018/05/corrupting-characters-how-to-get.html
https://blogs.oracle.com/timesten/post/why-databasecharacterset-matters
''Chargeback Administration'' http://download.oracle.com/docs/cd/E24628_01/doc.121/e25179/chargeback_cloud_admin.htm#sthref232
''demo'' http://www.youtube.com/user/OracleLearning#start=0:00;end=6:18;autoreplay=false;showoptions=false <-- resources are managed like VM resources, VMWare has a similar tool 

http://www.oracle.com/technetwork/oem/cloud-mgmt/wp-em12c-chargeback-final-1585483.pdf
@@
Roadmap of Oracle Database Patchset Releases (Doc ID 1360790.1)
Release Schedule of Current Database Releases (Doc ID 742060.1)

Note 207303.1 Client Server Interoperability Support
Note 161818.1 RDBMS Releases Support Status Summary
Oracle Clusterware (CRS/GI) - ASM - Database Version Compatibility (Doc ID 337737.1)
Support of Linux and Oracle Products on Linux (Doc ID 266043.1)
ACFS Support On OS Platforms (Certification Matrix). (Doc ID 1369107.1)
Master Note For Database and Client Certification (Doc ID 1298096.1)
@@

On What Unix/Linux OS are Oracle ODBC Drivers Available ?
  	Doc ID: 	Note:396635.1

  	

Subject: 	Oracle - Compatibility Matrices and Release Information
  	Doc ID: 	Note:139580.1
  	
Subject: 	Statement of Direction - JDBC Driver Support within Oracle Application Server
  	Doc ID: 	Note:365120.1
  	
Subject: 	Oracle Database Server and Networking Patches for Microsoft Platforms
  	Doc ID: 	Note:161549.1
  	
Subject: 	Oracle Database Extensions for .Net support statement for 64-bit Windows
  	Doc ID: 	Note:414947.1
  	
Subject: 	Oracle Database Server support Matrix for Windows XP / 2003 64-Bit (Itanium)
  	Doc ID: 	Note:236183.1
  	
Subject: 	Oracle Database Server support Matrix for Windows XP / 2003 32-Bit
  	Doc ID: 	Note:161546.1
  	
Oracle Database Server product support Matrix for Windows 2000
  	Doc ID: 	Note:77627.1
  	
INTEL: Oracle Database Server Support Matrix for Windows NT
  	Doc ID: 	Note:45997.1
  	
Oracle Database Server support Matrix for Windows XP / 2003 64-Bit (x64)
  	Doc ID: 	Note:343737.1
  	
Are Unix Clients Supported for Deploying Oracle Forms over the Web?
  	Doc ID: 	Note:266439.1


Tru64 UNIX Statement of Direction for Oracle
  	Doc ID: 	Note:264137.1

Is Oracle10g Instant Client Certified With Oracle 9i or Oracle 8i Databases
  	Doc ID: 	Note:273972.1




ODBC and Oracle10g Supportability
  	Doc ID: 	Note:273215.1

Starting With Oracle JDBC Drivers
  	Doc ID: 	Note:401934.1

JDBC Features - classes12.jar , oracle.jdbc.driver, and OracleConnectionCacheImpl
  	Doc ID: 	Note:335754.1

ORA-12170 When Connecting Directly or Via Dblink From 10g To 8i
  	Doc ID: 	Note:363105.1


Which Oracle Client versions will connect to and work against which version of the Oracle Database?
  	Doc ID: 	Note:172179.1






How To Determine The C/C++ And COBOL Compiler Version / Release on LINUX/UNIX
  	Doc ID: 	Note:549826.1

Precompiler FAQ's About Migration / Upgrade
  	Doc ID: 	Note:377161.1




How To Upgrade The Oracle Database Client Software?
  	Doc ID: 	Note:428732.1



Certified Compilers
  	Doc ID: 	Note:43208.1




-- AIX

Note.273051.1 - How to configure Reports with IBM-DB2 Database using Pluggable Data Source
Note.239558.1 - How to Set Up Reports 9i Connecting to DB2 with JDBC using Merant Drivers
Note.246787.1 - How to Configure JDBC-ODBC Bridge for Reports 9i? 


-- JDBC

Example: Identifying Connection String Problems in JDBC Driver
  	Doc ID: 	Note:94091.1
http://srackham.wordpress.com/cloning-and-copying-virtualbox-virtual-machines/

<<<
easiest is to follow https://www.youtube.com/watch?v=lbVi2yJOiZo and run sethduuid on the current copied/renamed vdi
{{{
VBoxManage internalcommands sethduuid "/Volumes/vm/VirtualBox VMs/tableauserver/tableauserver.vdi"
}}}
<<<

or clone from the VirtualBox UI
/***
|Name:|CloseOnCancelPlugin|
|Description:|Closes the tiddler if you click new tiddler then cancel. Default behaviour is to leave it open|
|Version:|3.0.1 ($Rev: 3861 $)|
|Date:|$Date: 2008-03-08 10:53:09 +1000 (Sat, 08 Mar 2008) $|
|Source:|http://mptw.tiddlyspot.com/#CloseOnCancelPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
***/
//{{{
merge(config.commands.cancelTiddler,{

	handler_mptw_orig_closeUnsaved: config.commands.cancelTiddler.handler,

	handler: function(event,src,title) {
		this.handler_mptw_orig_closeUnsaved(event,src,title);
		if (!store.tiddlerExists(title) && !store.isShadowTiddler(title))
			story.closeTiddler(title,true);
	 	return false;
	}

});

//}}}
http://en.wikipedia.org/wiki/Cloud_computing
http://johnmathon.wordpress.com/2014/02/11/a-simple-guide-to-cloud-computing-iaas-paas-saas-baas-dbaas-ipaas-idaas-apimaas/

! The Art of Scalability 

[img[ https://lh3.googleusercontent.com/l_WZ2l_67mz-u1ouW0jsOnkiHP9cPbWfkXAcTiE8Yss=w2048-h2048-no ]]
<<<
This came from the book “The Art of Scalability” by Marty Abbott (http://akfpartners.com/about/marty-abbott) and Michael Fisher (http://akfpartners.com/about/michael-fisher). Both are ex-military West Point graduates who afterwards worked extensively on web-scale infrastructures (PayPal, eBay, etc.)
 
They formed this company called “AKF partners” which wrote 3 awesome books
·         The Art of Scalability
http://akfpartners.com/books/the-art-of-scalability, TOC here http://my.safaribooksonline.com/book/operating-systems-and-server-administration/9780137031436
·         Scalability Rules
http://akfpartners.com/books/scalability-rules, TOC here http://my.safaribooksonline.com/book/operating-systems-and-server-administration/9780132614016
·         The Power of Customer Misbehavior
http://akfpartners.com/books/the-power-of-customer-misbehavior, book preview here http://www.youtube.com/watch?v=w4twalWnfUg
 
and these are their clients http://akfpartners.com/clients
 
If you're an architect, engineer, or manager building or running a cloud service model (IaaS, PaaS, SaaS, BaaS, DBaaS, iPaaS, IDaaS, APIMaaS), the 3 books mentioned above are awesome, although you might just focus on the 1st and 2nd, because the 3rd is about viral growth of products.
 
I think the books' concepts are very well suited to the “Exadata as a Service” or “DBaaS” service model. It starts with staffing, then processes (incidents, escalations, headroom, perf testing, etc.), then architecture, then challenges.  
 
Check out the table of contents (TOC) links, you’ll love it.
<<<

''Pre-req readables: Introducing Cluster Health Monitor (IPD/OS) (Doc ID 736752.1)''

! On the Database Server side

''Oracle recommends not installing the UI on the database servers.'' 

''The OS Tool consists of three daemons: ologgerd, oproxyd and osysmond''
ologgerd - master daemon
osysmond - the collector on each node
oproxyd - public interface for external clients (like oclumon and crfgui)

__''Installation''__
1) Download CHM here 
Oracle Cluster Health Monitor - http://goo.gl/UZqS5

2) 
On all nodes ''(as root)''
{{{
useradd -d /opt/crfuser -s /bin/sh -g oinstall crfuser
echo "crfuser" | passwd --stdin crfuser
}}}

{{{
Create the following directories...
<directory>/oracrf            <--- the install directory
<directory>/oracrf_installer  <--- put the installer here
<directory>/oracrf_gui        <--- the GUI client goes here
<directory>/oracrf_dump       <--- this is where you will dump the diagnostic data
chown -R crfuser:root <directory>/oracrf*
}}}
As per the README, it should ideally be installed at /usr/lib/oracrf on Linux or C:\Program Files\oracrf on Windows.
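The directory layout above can be sketched in one go. This is a local sketch only: /tmp/chm_demo stands in for <directory>, and the chown is shown as a comment because it needs root.

```shell
# sketch: create the CHM working directories under a placeholder base
# (/tmp/chm_demo stands in for <directory>; run the chown as root in a real install)
BASE=/tmp/chm_demo
mkdir -p "$BASE"/oracrf "$BASE"/oracrf_installer "$BASE"/oracrf_gui "$BASE"/oracrf_dump
ls -d "$BASE"/oracrf*
# chown -R crfuser:root "$BASE"/oracrf*   # requires root, shown for reference
```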

On all nodes ''(as crfuser)''
{{{
# add /usr/lib/oracrf/bin to the PATH in .bash_profile
vi .bash_profile
#   e.g. append: export PATH=$PATH:/usr/lib/oracrf/bin
source .bash_profile
}}}

3) Setup passwordless ssh for the user created

''On all the nodes in the cluster'' create the RSA and DSA key pairs
{{{
1) su - crfuser
2) 
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
cd ~/.ssh
3) 
$ /usr/bin/ssh-keygen -t rsa
<then just hit ENTER all the way>

4) 
$ /usr/bin/ssh-keygen -t dsa
<then just hit ENTER all the way>

5) 
Repeat the above steps for each Oracle RAC node in the cluster.
}}}

''On the first node of the cluster'' Create an authorized key file on one of the nodes. 
An authorized key file is nothing more than a single file that contains a copy of everyone's (every node's) 
RSA and DSA public key. Once the authorized key file contains all of the public keys, 
it is then distributed to all other nodes in the RAC cluster.
{{{
1)  
$ cd ~/.ssh
$ ls -l *.pub
2) 
Use SSH to copy the content of the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub public key from each 
Oracle RAC node in the cluster to the authorized key file just created (~/.ssh/authorized_keys). This will be done from the first node
$ ssh vmlinux1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh vmlinux1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ ssh vmlinux2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh vmlinux2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
3) 
Copy the  ~/.ssh/authorized_keys on the other nodes
$ scp -p ~/.ssh/authorized_keys vmlinux2:.ssh/authorized_keys
4) 
Enable ssh-agent  <--------------------------------------------- this step is no longer needed
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
ssh vmlinux1 date; ssh vmlinux2 date
}}}
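The authorized_keys assembly above boils down to concatenating every node's public keys into one file. A local simulation of that step (the echoed strings are dummies standing in for real id_rsa.pub contents):

```shell
# simulate assembling authorized_keys from each node's public key
# (the echoed strings stand in for the real id_rsa.pub / id_dsa.pub contents)
d=$(mktemp -d)
echo "ssh-rsa AAAA...key-from-vmlinux1" > "$d/vmlinux1.pub"
echo "ssh-rsa AAAA...key-from-vmlinux2" > "$d/vmlinux2.pub"
cat "$d"/*.pub >> "$d/authorized_keys"
chmod 600 "$d/authorized_keys"     # same permissions as the real file
wc -l < "$d/authorized_keys"       # one line per node key
```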


4) If you have a previous install of this tool, delete it from all nodes. ''(as root)''

{{{
a. Disable the tool
"/etc/init.d/init.crfd disable"
"stopcrf" from a command prompt on Windows.
b. Uninstall
"/usr/lib/oracrf/install/crfinst.pl -d" on Linux
"perl C:\programm files\oracrf\install\crfinst.pl -d" on Windows
c. Make sure all BDB databases are deleted from all nodes.
d. Manually delete the install home if it still exists.
}}}

5) On the master node, Login as ''crfuser'' on Linux. Login as admin user on Windows.
Unzip the crfpack.zip file.
{{{
mv crfpack.zip <directory>/oracrf_installer
cd <directory>/oracrf_installer
unzip crfpack.zip
}}}

For the BDB directory: the location should be a path on a volume with at
least 5GB of space available per node, writable by a privileged user only.
It cannot be on the root filesystem on Linux. The location is required to be
the same on all hosts; if that cannot be done, specify a different location
during the finalize (-f) operation on each host, following the same size
requirements. The path MUST NOT be on a shared disk -- if a shared BDB path
is given to multiple hosts, BDB corruption will happen.

6) as ''crfuser'', run crfinst.pl from the <directory>/oracrf_installer/install directory;
this will copy the installer to the other nodes
{{{
$ ./crfinst.pl -i node1,node2,node3 -b <directory>/oracrf -m node1
}}}

7) as ''root'', once step 6 finishes, it will instruct you to run the crfinst.pl script
with -f and -b <bdb location> on each node to finalize the install on that node.
{{{
/home/oracle/oracrf_installer/install/crfinst.pl -f -b <directory>/oracrf
}}}
Don't be confused when it says "Installation completed successfully at /usr/lib/oracrf..." --
the /usr/lib/oracrf directory just contains the installation binaries (around 120MB),
and the BDB files will still be placed in the <directory>/oracrf directory

8) Enable the tool on all nodes ''(as root)''
{{{
# /etc/init.d/init.crfd enable, on Linux
> runcrf, on Windows
}}}


__''Using the tool''__
1) Start the daemons on all nodes ''(as root)'' (the install does not enable/run the daemons by default)
# /etc/init.d/init.crfd enable
On windows, type 'runcrf' from windows command prompt.

2) Run the GUI

-g : standalone UI installation on the current node. Oracle recommends not
installing the UI on the servers; use this option to install the UI-only
client on a separate machine outside the cluster.

-d : specifies hours (<hh>), minutes (<mm>) and seconds (<ss>) in the past
from the current time to start the GUI from, e.g. crfgui -d "05:10:00"
starts the GUI and displays information from the database from 5 hours and
10 minutes in the past.

{{{
$ crfgui                                       <-- invoke on the local node
$ crfgui -m <nodename>                         <-- from a client
$ crfgui -r 5 -m <nodename>                    <-- change the refresh rate to 5 seconds (default is 1)
$ crfgui -d "<hh>:<mm>:<ss>" -m <nodename>     <-- the -d option starts the GUI in historical mode
}}}

3) __''The oclumon''__ - A command line tool is included in the package
{{{
$ oclumon -h
$ oclumon dumpnodeview -v -allnodes -last "00:30:00"  <-- which will dump all stats for all nodes for last 30 minutes from the current time (includes process & device)
$ oclumon dumpnodeview -allnodes -s "2008-11-12 12:30:00" -e "2008-11-12 13:30:00"   <-- which will dump stats for all nodes from 12:30 to 13:30 on Nov 12th, 2008
$ oclumon dumpnodeview -allnodes   <-- To find the timezone on the servers in the cluster
$ oclumon dumpnodeview -v -n mynode -last "00:10:00"  <-- will dump all stats for 'mynode' for last 10 minutes
$ oclumon dumpnodeview -v -allnodes -alert -last "00:30:00" <-- To use oclumon to query for alerts only, use the '-alert' option which will dump all records for all 
                                                                                             nodes for last 30 minutes, which contains at least one alert.
}}}
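The relative windows passed to -last are just zero-padded "hh:mm:ss" strings counted back from now. A small helper to build one from a minute count (last_window is a made-up name, not part of the tool):

```shell
# build the -last "hh:mm:00" argument for oclumon from a minute count
# (last_window is a hypothetical helper, not part of the tool)
last_window() {
  printf -- '-last "%02d:%02d:00"\n' $(( $1 / 60 )) $(( $1 % 60 ))
}
last_window 30    # 30 minutes
last_window 90    # an hour and a half
```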

{{{
Some useful attributes that can be passed to oclumon are

   1. Showobjects
      /usr/lib/oracrf/bin/oclumon showobjects -n stadn59 -time "2008-06-03 16:10:00"

   2. Dumpnodeview
      /usr/lib/oracrf/bin/oclumon dumpnodeview -n halinux4

   3. Showgaps - The output of this command can be used to see if the collector was not scheduled.
                 This generally means a problem with CPU scheduling or very high load on the node.
                 Cluster Health Monitor should normally always be scheduled since it runs as an RT process.
      /usr/lib/oracrf/bin/oclumon showgaps -n celx32oe40d  \
      -s "2009-07-09 02:40:00"  -e "2009-07-09 03:59:00"  

      Number of gaps found = 0

   4. Showtrail
      $/usr/lib/oracrf/bin/oclumon showtrail -n celx32oe40d -diskid \
      sde qlen totalwaittime -s "2009-07-09 03:40:00" \
      -e "2009-07-09 03:50:00" -c "red" "yellow" "green"

      Parameter=QUEUE LENGTH
      2009-07-09 03:40:00     TO      2009-07-09 03:41:31     GREEN
      2009-07-09 03:41:31     TO      2009-07-09 03:45:21     GREEN
      2009-07-09 03:45:21     TO      2009-07-09 03:49:18     GREEN
      2009-07-09 03:49:18     TO      2009-07-09 03:50:00     GREEN
      Parameter=TOTAL WAIT TIME

      $/usr/lib/oracrf/bin/oclumon showtrail -n celx32oe40d -sys cpuqlen \
      -s "2009-07-09 03:40:00" -e "2009-07-09 03:50:00" \
      -c "red" "yellow" "green"

      Parameter=CPU QUEUELENGTH 

      2009-07-09 03:40:00     TO      2009-07-09 03:41:31     GREEN
      2009-07-09 03:41:31     TO      2009-07-09 03:45:21     GREEN
      2009-07-09 03:45:21     TO      2009-07-09 03:49:18     GREEN
      2009-07-09 03:49:18     TO      2009-07-09 03:50:00     GREEN

-- times for which the nicid eth1 has problems
      ./oclumon showtrail -n halinux4 -nicid eth1 effectivebw errors -c "red" "yellow" "orange" "green"
The above command shows the times for which the nicid eth1 has problems. The output is color coded: 
green means good, yellow means not good but not exactly bad, and red means problems 

Similarly we can use the showtrail option to show cpu load
      ./oclumon showtrail -n halinux4 -sys usagepc cpuqlen cpunumprocess, openfds, numrt, numofiosps, lowmem, memfree, -c "red" "yellow"

From the above output we can see that lowmem is in red all the time. We can get details of the lowmem usage using
      ./oclumon dumpnodeview -n halinux4 -s "2008-11-24 20:26:55" -e "2008-11-24 20:30:21" 
}}}


__''Other Utilities''__
ologdbg: This utility provides a debug mode loggerd daemon




__''The Metrics''__

1) CPU 
If a process consumes all of one CPU on a 4-CPU system, the value reported
for this process is 100%, aggregated system-wide.

2) Data Sample retention

How much history of OS metrics is kept in Berkeley DB?
By default the database retains the node views from all the nodes for the last
24 hours in a circular manner. This limit can be increased, e.g. to 72 hours,
with the oclumon command: 'oclumon manage -bdb resize 259200'.
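The resize argument works out in seconds, which is where the 259200 for 72 hours comes from:

```shell
# 'oclumon manage -bdb resize' takes the retention in seconds; 72 hours is:
hours=72
seconds=$((hours * 3600))
echo "$seconds"
```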

3) Process priority 
What does the PRIORITY of a process mean?
Linux nice values range from -20 to 19; there is a static priority and there
is the nice value. We report the dynamic nice value only, as a positive
priority in the range 0-39 for non-RealTime processes. Processes in the RT
class are reported with priorities from 41 to 139. This way a consistent
"higher number means higher priority" value is reported across platforms.
The math used is (19 - nice_val) for non-RT and (40 + rtprio) for RT
processes, where nice_val and rtprio are the corresponding fields in
/proc/<pid>/stat. This is consistent with the Unix utility 'ps'. Also note
that 'top' reports priority and nice as two different values, and both
differ from what IPD-OS reports.
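The priority math above can be sketched as shell arithmetic. reported_prio is a hypothetical helper; the inputs are the nice_val/rtprio values you would read from /proc/<pid>/stat:

```shell
# reported priority per the rules above:
#   non-RT: 19 - nice_val  -> range 0..39
#   RT:     40 + rtprio    -> range 41..139
reported_prio() {
  local class=$1 val=$2
  if [ "$class" = "rt" ]; then
    echo $((40 + val))
  else
    echo $((19 - val))
  fi
}
reported_prio nonrt -20   # highest non-RT priority
reported_prio nonrt 19    # lowest non-RT priority
reported_prio rt 99       # highest RT priority
```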

4) Disk devices

Some disk devices are missing from the device view.
This can happen for two reasons:
* Only the top 127 devices (ranked by wait time on the disk) are collected and shown; OCR/voting/ASM/swap devices are pinned permanently. A missing device may simply have fallen off this list if you have more than 127 devices (LUNs).
* The disks were added after Cluster Health Monitor was started. In this case, just restart the Cluster Health Monitor stack. Future versions of Cluster Health Monitor will handle this case without a restart.


! Data Collection 
For Oracle 11.2 RAC installations use the diagcollection script that comes with Cluster Health Monitor:
{{{
/usr/lib/oracrf/bin/diagcollection.pl --collect --ipd
}}}
For other versions run
{{{
/usr/lib/oracrf/bin/oclumon dumpnodeview -allnodes -v -last "23:59:59" > <your-directory>/<your-filename>
}}}
Make sure <your-directory> has more than 2GB of space to create the file <your-filename>.
Zip or compress <your-filename> before uploading it to the Service Request.
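A quick pre-check for that 2GB requirement (df -P reports 1K blocks, so 2GB = 2097152 KB; /tmp stands in for <your-directory>):

```shell
# check free space in the dump directory before running dumpnodeview
# (/tmp stands in for <your-directory>)
dir=/tmp
need_kb=$((2 * 1024 * 1024))                      # 2GB expressed in 1K blocks
avail_kb=$(df -P "$dir" | awk 'NR==2 {print $4}')
echo "need ${need_kb} KB, available ${avail_kb} KB"
[ "$avail_kb" -gt "$need_kb" ] && echo "enough space" || echo "not enough space"
```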

Also update the SR with the information when (date and time) you have observed a specific issue.



! On the Client side
The tool can be used by customers to monitor their nodes online or offline. Generally, when working with Oracle support, the data is viewed offline.

    Online mode can be used to detect problems live in the customer environment. The data can be viewed using the Cluster Health Monitor utility /usr/lib/oracrf/bin/crfgui. The GUI is not installed on the cluster nodes, but can be installed on any other client using:
{{{

-- Create the following directories...
<directory>/oracrf_gui        <--- the install directory
<directory>/oracrf_installer  <--- put the installer here
chown -R crfuser:root <directory>/oracrf*

-- GUI installation
crfinst.pl -g <Install_dir>
}}}

       ''1.'' For example, to look at the load on a node you can run:
{{{
          /usr/lib/oracrf/bin/crfgui.sh -m <Nodename>
}}}
          The default refresh rate for this GUI is 1 second. To change refresh rate to 5 seconds execute 
{{{
          /usr/lib/oracrf/bin/crfgui.sh -n <Node_to_be_monitored> -r 5
}}}
       ''2.'' Another attribute that can be passed to the tool is -d, which is used to view data in the past relative to the current time. So if there was a node reboot 4 hours ago and you need to look at the data from about 10 minutes before the reboot, you would pass -d "04:10:00"
{{{
          /usr/lib/oracrf/bin/crfgui.sh -d "04:10:05"
}}}
          All of the above usage scenarios require GUI access to the nodes. 
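The -d offset is just hours:minutes:seconds counted back from now. A helper to format one from a second count (offset_str is a made-up name, not part of the tool):

```shell
# format a look-back offset in seconds as the "hh:mm:ss" string crfgui -d expects
# (offset_str is a hypothetical helper, not part of the tool)
offset_str() {
  local s=$1
  printf '%02d:%02d:%02d\n' $((s / 3600)) $((s % 3600 / 60)) $((s % 60))
}
offset_str $((4 * 3600 + 10 * 60 + 5))   # 4h 10m 5s ago
```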


! Mining the dumps

{{{
[karao@karl Downloads]$ less dump_20110103.txt | grep topcpu | less
[karao@karl Downloads]$ less dump_20110103.txt | grep "#cpu" | less
[karao@karl Downloads]$ less dump_20110103.txt | grep "type:" | less
[karao@karl Downloads]$ less dump_20110103.txt | grep "spent too much time" | less
[karao@karl Downloads]$ less dump_20110103.txt | grep "eth" | less
[karao@karl Downloads]$ less dump_20110103.txt | grep "OCR" | less
}}}
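The same greps can be smoke-tested against a synthetic dump. The sample lines below are made up; only the patterns match the real nodeview output:

```shell
# mine a (synthetic) nodeview dump with the same patterns as above
dump=$(mktemp)
cat > "$dump" <<'EOF'
topcpu: 'ora_dbw0_db01(1234) 12.3'
#cpus: 4
device: sda type: SYS
device: sdb type: SWAP
EOF
grep -c "topcpu" "$dump"    # lines with the top CPU consumer
grep -c "#cpu" "$dump"      # CPU-count lines
grep -c "type:" "$dump"     # device-type lines
```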


! Installation troubleshooting
''Log file location ''
/usr/lib/oracrf/log/hostname/crfmond/crfmond.log

''Config file location''
/usr/lib/oracrf/admin/crf<hostname>.ora

''You can do strace''
{{{
/etc/init.d/init.crfd stop
/etc/init.d/init.crfd disable
/etc/init.d/init.crfd enable
strace -fo /tmp/crf_start.out /etc/init.d/init.crfd start
}}}
Then upload the generated crf_start.out file.

''The typical config file''
{{{
[root@racnode1 ~]# cat /usr/lib/oracrf/admin/crfracnode1.ora
HOSTS=racnode2,racnode1
CRFHOME=/usr/lib/oracrf
MYNAME=racnode1
BDBLOC=/u01/oracrf
USERNAME=crfuser
MASTERPUB=192.168.203.12
MASTER=racnode2
REPLICA=racnode1
DEAD=
ACTIVE=racnode2,racnode1

[root@racnode2 ~]# cat /usr/lib/oracrf/admin/crfracnode2.ora
HOSTS=racnode2,racnode1
CRFHOME=/usr/lib/oracrf
MYNAME=racnode2
BDBLOC=/u01/oracrf
USERNAME=crfuser
DEAD=
MASTERPUB=192.168.203.12
MASTER=racnode2
STATE=mutated
ACTIVE=racnode2,racnode1
REPLICA=racnode1
}}}
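When checking the topology, the master/replica assignment is the part worth grepping out of those config files. A local simulation (the sample contents mirror the racnode1 file above):

```shell
# pull the master/replica assignment out of a CHM config file
# (the sample contents mirror the racnode1 file shown above)
f=$(mktemp)
cat > "$f" <<'EOF'
HOSTS=racnode2,racnode1
MASTER=racnode2
REPLICA=racnode1
BDBLOC=/u01/oracrf
EOF
grep -E '^(MASTER|REPLICA)=' "$f"
```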
http://www.debian-administration.org/articles/551
https://www.shellcheck.net/
https://fiddles.io/  - there's a fiddle for that 

http://www.smashingapps.com/2014/05/19/10-code-playgrounds-for-developers.html

jsFiddle
Test your JavaScript, CSS, HTML or CoffeeScript online with JSFiddle code editor.

LiveGap Editor
Free Online Html Editor with Syntax highlighting, live preview, code folding, fullscreen mode, themes, matching tags, auto completion, finding tags, frameWork and closing tags.

Codepen
CodePen is an HTML, CSS, and JavaScript code editor in your browser with instant previews of the code you see and write.

Cssdesk

Google Code Playground
The AJAX Code Playground is an educational tool to show code examples for various Google Javascript APIs.

jsbin
HTML, CSS, JavaScript playground that you can host on your server.

Editr

Ideone
Ideone is something more than a pastebin; it’s an online compiler and debugging tool which allows you to compile and run code online in more than 40 programming languages.

Sqlfiddle
Application for testing and sharing SQL queries.

Chopapp
A little app from ZURB that lets people slice up bad code and share their feedback to help put it back together.

Gistboxapp
GistBox is the best interface to GitHub Gists. Organize your snippets with labels. Edit your code. Search by description. All in one speedy app.

D3-Playground


mongo , no-sql databases 
https://mongoplayground.net/

bash playground 
https://repl.it/repls/WorthyAbandonedDaemon

python 
https://pyfiddle.io/
http://pythonfiddle.com/


! jupyter notebooks online 
https://colab.research.google.com/notebooks/intro.ipynb  <- run beam code for free, free GPUs 
https://paiza.cloud/containers <- fast response time
https://notebooks.azure.com <- way slow but it works
cocalc.com <- meeh



CODING PRACTICE https://exercism.io/my/tracks/python


! cloud IDE
https://www.codeinwp.com/blog/best-cloud-ide/
<<<


If you just need to execute and share snippets of code, you should try JSFiddle or CodePen.

If you would like to create notebooks with a combination of Markdown and code outputs, you can give Azure Notebooks or Observable a try.

If you want an alternative to a local development environment, you should try out Google Cloud Shell.

If you would like a complete end-to-end solution, you should try Codeanywhere, Codenvy or Repl.it.

<<<


https://www.sonarlint.org/features/

https://www.castsoftware.com/products/code-analysis-tools
http://www.codecademy.com/en/tracks/python

! twitter globe sentiment analysis
http://challengepost.com/software/twitter-stream-globe
https://github.com/twitterdev/twitter-stream-globe
made by this guy http://joncipriano.com/#home
another platform you can use https://github.com/dataarts/webgl-globe

<<<
programming language framework 
  > data types
  > conditional statements
  > loops
  > functions
  > classes
<<<

''stories''

http://www.quora.com/What-do-full-time-software-developers-think-of-Codecademy-and-Code-School#

180 websites in 180 days http://jenniferdewalt.com/

http://irisclasson.com/2012/07/13/my-first-year-of-programming-july-11-2011-july-12-2012/

build your first iOS app http://www.lynda.com/articles/photographer-build-an-app, http://mikewong.me/how-to-build-your-first-ios-app/

http://rileyh.com/how-i-learned-to-code-in-under-10-months/

http://kodeaweso.me/is-full-stack-development-possible-in-windows/

knowledge to practice http://www.vit.vic.edu.au/prt/pages/3-applying-knowledge-to-practice-41.aspx
Theory and Research-based Principles of Learning http://www.cmu.edu/teaching/principles/learning.html

How do I learn to code? http://www.quora.com/How-do-I-learn-to-code-1/answer/Andrei-Soare?srid=Xff&share=1 , https://www.talentbuddy.co/blog/seven-villains-you-have-to-crush-when-learning-to-code/

write code every fucking day http://kaidez.com/write-code-every-f--king-day/

before you learn to code ask yourself why http://blog.underdog.io/post/129654418712/before-you-learn-to-code-ask-yourself-why

http://rob.conery.io/2015/10/06/how-to-learn-a-new-programming-language-while-maintaining-your-day-job-and-still-being-there-for-your-family/

https://medium.freecodecamp.com/being-a-developer-after-40-3c5dd112210c#.q2bocwagw

http://www.crashlearner.com/learn-to-code/

https://www.techinasia.com/talk/learn-to-learn-like-a-developer

(Programming|Computer) Language or Code https://gerardnico.com/wiki/language/start

Write Like A Programmer https://qntm.org/write

Programming Languages Don't Matter to Programmers https://github.com/t3rmin4t0r/notes/wiki/Language-Choice-and-Project-lifetimes

Things I wish I knew when I started Programming https://www.youtube.com/watch?v=GAgegNHVXxE  <- this is the dude
 






* getting real - 37 signals - 	the smarter, faster, easier way to build a successful web app 
* the phoenix project
* the power of customer misbehavior 
* tao te programming
* R data structures and algorithms
* data modeling by example series 
* the styles of database development 
* oracle sql perf tuning and optimization 
* pro active record https://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=pro+active+record
* Refactoring Databases: Evolutionary Database Design (2007)
* Refactoring to patterns (2008)
* @@Applied Rapid Development Techniques for Database Engineers file:///C:/Users/karl/Downloads/applied-rapid-development-techniques-for-database-engineers.pdf@@
<<<
nice writeup by Dominic on the overall workflow of database development. he brought up cool ideas like:
* social database development
* enabling audit trail for DDL to monitor progress of developers and changes across environments
<<<
! schools/camps
http://www.zappable.com/2012/11/chart-for-learning-a-programming-langauge/
http://www.codeacademy.com/
http://www.ocwconsortium.org/  <-- was originally popularized by MIT’s 2002 move to put its course materials online
https://www.coursera.org/  <-- find a wealth of computer science courses from schools not participating in the OCW program
https://www.khanacademy.org/cs  <-- includes science, economics, and yes, computer science.
Dash by General Assembly
udacity
codeschool
learnstreet
thinkful
http://venturebeat.com/2014/05/10/before-you-quit-your-job-to-become-a-developer-go-down-this-6-point-checklist/
http://venturebeat.com/2013/10/31/the-7-best-ways-to-learn-how-to-code/


! ''hadoop, big data''
{{{
Interesting Indian company -- there are not a lot of players in the area of (online) big data education; Lynda.com, where I'm subscribed, does not have big data courses.
And this one is estimated to make $3M in FY14.
 
http://www.edureka.in/company
http://www.edureka.in/company#media
http://www.edureka.in/hadoop-admin#CourseCurriculum
http://www.edureka.in/big-data-and-hadoop#CourseCurriculum
http://www.edureka.in/data-science#CourseCurriculum

}}}


! web dev
http://www.codengage.io/
{{{
Part 1 - Ruby and Object-Oriented Design
Part 2 - The Web, SQL, and Databases
Part 3 - ActiveRecord, Basic Rails, and Forms
Part 4 - Authentication and Advanced Rails
Part 5 - Javascript, jQuery, and AJAX
Part 6 - Opus Project, Useful Gems, and APIs
}}}

''HTML -> CSS -> JQuery -> Javascript programming'' path
{{{
https://www.khanacademy.org/cs
html and CSS http://www.codecademy.com/tracks/web
html dog - html,css,javascript http://htmldog.com/guides/
fundamentals of OOP and javascript http://codecombat.com/ 
ruby on rails http://railsforzombies.org/
http://www.w3fools.com/
https://www.codeschool.com
Dash https://dash.generalassemb.ly/ which is interactive..make a CSS robot! You can also check http://skillcrush.com/
}}}

see other paths here [[immersive code camps]]


! data
''Data Analysis Learning Path''
http://www.mysliderule.com/learning-paths/data-analysis/learn/
http://www.businessinsider.com/free-online-courses-for-professionals-2014-7
https://www.datacamp.com/   <-- R tutorials, some are paid



! vendor dev communities 	
https://developer.microsoft.com/en-us/collective/learning/courses?utm_campaign=DC19&utm_source=Instagram&utm_medium=Social&utm_content=CC36_videocard&utm_term=Grow




! meetup.com	
* get all previous meetups https://webapps.stackexchange.com/questions/47707/how-to-get-all-meetups-ive-been-to






http://www.rackspace.com/cloud/blog/2011/05/17/infographic-evolution-of-computer-languages/
https://mremoteng.atlassian.net/wiki/display/MR/List+of+Free+Tools+for+Open+Source+Projects

http://www.headfirstlabs.com/books/hfda/
http://www.headfirstlabs.com/books/hfhtml/
http://www.headfirstlabs.com/books/hfhtml5prog/
http://www.headfirstlabs.com/books/hfjs/
http://www.headfirstlabs.com/books/hfjquery/

HF C http://shop.oreilly.com/product/0636920015482.do
HF jQuery http://shop.oreilly.com/product/0636920012740.do
HF mobile web http://shop.oreilly.com/product/0636920018100.do
HF iPhone dev http://shop.oreilly.com/product/9780596803551.do

http://venturebeat.com/2012/09/17/why-everyone-should-code/
''Long term trends on programming language'' http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html
''Measuring programming popularity'' http://en.wikipedia.org/wiki/Measuring_programming_language_popularity
''10,000 hours'' http://norvig.com/21-days.html
http://venturebeat.com/2013/08/06/tynker-code-kids/
http://www.impactlab.net/2014/02/25/23-developer-skills-that-will-keep-you-employed-forever/







@@Try R http://tryr.codeschool.com/levels/1/challenges/1@@
file:///Volumes/T5_2TB/system/Users/kristofferson.a.arao/Dropbox2/Box%20Sync/bin/codeninja_comparison/codeninja_comparison.html (open with firefox)




https://github.com/andreis/interview



Cold failover for a single instance RAC database https://blogs.oracle.com/XPSONHA/entry/cold_failover_for_a_single_ins
Name: MptwSmoke
Background: #fff
Foreground: #000
PrimaryPale: #F5F5F5
PrimaryLight: #5C84A8
PrimaryMid: #111
PrimaryDark: #000
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
http://blogs.oracle.com/clive/entry/colour_dtrace
http://blogs.oracle.com/vlad/entry/coloring_dtrace_output
http://blogs.oracle.com/ahl/entry/open_sourcing_the_javaone_keynote
{{{
alter table credit_rating modify (person_id encrypt);
-- if you plan to create indexes on an encrypted column, you must create it with NO SALT
-- see if the columns in question are part of a foreign key relationship. 

ALTER TABLE orders MODIFY (credit_card_number ENCRYPT NO SALT);

-- rekey the master key
alter system set encryption key identified by "e3car61";

-- rekey the column keys without changing the encryption algorithm:
ALTER TABLE employee REKEY;



CREATE TABLE test_lob (
      id           NUMBER(15)
    , clob_field   CLOB
    , blob_field   BLOB
    , bfile_field  BFILE
)
/

alter table test_lob modify (clob_field encrypt no salt);


-- error on 11gR1
04:33:36 HR@db01> alter table test_lob modify (clob_field encrypt no salt);
alter table test_lob modify (clob_field encrypt no salt)
*
ERROR at line 1:
ORA-43854: use of a BASICFILE LOB where a SECUREFILE LOB was expected


-- error on 11gR2
00:06:54 HR@dbv_1> alter table test_lob modify (clob_field encrypt no salt);
alter table test_lob modify (clob_field encrypt no salt)
*
ERROR at line 1:
ORA-43856: Unsupported LOB type for SECUREFILE LOB operation



-- table should be altered to securefile first.. then encrypt
CREATE TABLE test1 (doc CLOB ENCRYPT USING 'AES128') 
	LOB(doc) STORE AS SECUREFILE 
(CACHE NOLOGGING ); 

this of course can be done with online redef http://gjilevski.com/2011/05/11/migration-to-securefiles-using-online-table-redefinition-in-oracle-11gr2/
http://www.oracle-base.com/articles/11g/secure-files-11gr1.php#migration_to_securefiles
see tiddler about dbms_redef

}}}

! migration to securefiles
{{{

-- query table info 

col column_name format a30
select table_name, column_name, securefile, encrypt from user_lobs;

TABLE_NAME                     COLUMN_NAME                    SEC
------------------------------ ------------------------------ ---
TEST_LOB                       CLOB_FIELD                     NO
TEST_LOB                       BLOB_FIELD                     NO


col clob format a30
col blob format a30
SELECT
      id
    , clob_field "Clob"
    , UTL_RAW.CAST_TO_VARCHAR2(blob_field) "Blob"
FROM hr.test_lob;


-- create interim table
	
CREATE TABLE hr.test_lob_tmp (
      id           NUMBER(15)
    , clob_field   CLOB 
    , blob_field   BLOB
    , bfile_field  BFILE
)
LOB(clob_field) STORE AS SECUREFILE (CACHE)
/
alter table hr.test_lob_tmp modify (clob_field encrypt no salt);


-- after encrypt and migration to securefiles

select table_name, column_name, securefile, encrypt from user_lobs;

TABLE_NAME                     COLUMN_NAME                    SEC ENCR
------------------------------ ------------------------------ --- ----
TEST_LOB                       CLOB_FIELD                     NO  NONE
TEST_LOB                       BLOB_FIELD                     NO  NONE
TEST_LOB_TMP                   CLOB_FIELD                     YES YES
TEST_LOB_TMP                   BLOB_FIELD                     NO  NONE


-- do the redefinition

  
begin
execute immediate 'ALTER SESSION ENABLE PARALLEL DML';
execute immediate 'ALTER SESSION FORCE PARALLEL DML PARALLEL 4';
execute immediate 'ALTER SESSION FORCE PARALLEL QUERY PARALLEL 4';
dbms_redefinition.start_redef_table
(
uname => 'HR',
orig_table => 'TEST_LOB',
int_table => 'TEST_LOB_TMP',
options_flag => dbms_redefinition.CONS_USE_ROWID
);
end start_redef;
/

ERROR at line 1:
ORA-12088: cannot online redefine table "HR"."TEST_LOB" with unsupported datatype
ORA-06512: at "SYS.DBMS_REDEFINITION", line 52
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1631
ORA-06512: at line 5

Do not attempt to online redefine a table containing a LONG column, an ADT column, or a FILE column. <-- of course!

}}}


! migration to securefiles.. 2nd take.. without the bfile

{{{
mkdir -p /home/oracle/oralobfiles
grant create any directory to hr;


DROP TABLE test_lob CASCADE CONSTRAINTS
/

CREATE TABLE test_lob (
      id           NUMBER(15)
    , clob_field   CLOB
    , blob_field   BLOB
)
/

CREATE OR REPLACE DIRECTORY
    EXAMPLE_LOB_DIR
    AS
    '/home/oracle/oralobfiles'
/

INSERT INTO test_lob
    VALUES (  1001
            , 'Some data for record 1001'
            , '48656C6C6F' || UTL_RAW.CAST_TO_RAW(' there!') 
    );

COMMIT;

col clob format a30
col blob format a30
SELECT
      id
    , clob_field "Clob"
    , UTL_RAW.CAST_TO_VARCHAR2(blob_field) "Blob"
FROM test_lob;

######

-- create interim table
	
CREATE TABLE hr.test_lob_tmp (
      id           NUMBER(15)
    , clob_field   CLOB 
    , blob_field   BLOB
)
LOB(clob_field) STORE AS SECUREFILE (CACHE)
/
alter table hr.test_lob_tmp modify (clob_field encrypt no salt);


-- after encrypt and migration to securefiles

select table_name, column_name, securefile, encrypt from user_lobs;

TABLE_NAME                     COLUMN_NAME                    SEC ENCR
------------------------------ ------------------------------ --- ----
TEST_LOB                       CLOB_FIELD                     NO  NONE
TEST_LOB                       BLOB_FIELD                     NO  NONE
TEST_LOB_TMP                   CLOB_FIELD                     YES YES
TEST_LOB_TMP                   BLOB_FIELD                     NO  NONE


-- do the redefinition

  
begin
dbms_redefinition.start_redef_table
(
uname => 'HR',
orig_table => 'TEST_LOB',
int_table => 'TEST_LOB_TMP',
options_flag => dbms_redefinition.CONS_USE_ROWID
);
end start_redef;
/



begin
dbms_redefinition.sync_interim_table(
uname => 'HR',
orig_table => 'TEST_LOB',int_table => 'TEST_LOB_TMP');
end;
/



begin
dbms_redefinition.finish_redef_table
(
uname => 'HR',
orig_table => 'TEST_LOB',
int_table => 'TEST_LOB_TMP'
);
end;
/

select table_name, column_name, securefile, encrypt from user_lobs;

TABLE_NAME                     COLUMN_NAME                    SEC ENCR
------------------------------ ------------------------------ --- ----
TEST_LOB_TMP                   CLOB_FIELD                     NO  NONE
TEST_LOB_TMP                   BLOB_FIELD                     NO  NONE
TEST_LOB                       CLOB_FIELD                     YES YES       <-- it works!!
TEST_LOB                       BLOB_FIELD                     NO  NONE

13:38:55 HR@db01> desc test_lob
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------------------
 ID                                                 NUMBER(15)
 CLOB_FIELD                                         CLOB ENCRYPT
 BLOB_FIELD                                         BLOB



}}}
http://documentation.commvault.com/dell/release_7_0_0/books_online_1/english_us/features/third_party_command_line/third_party_command_line.htm
http://documentation.commvault.com/dell/release_7_0_0/books_online_1/english_us/features/cli/rman_scripts.htm
http://www.streamreader.org/serverfault/questions/140055/commvault-oracle-rman-restore-to-new-host   <-- SAMPLE COMMAND
http://www.orafaq.com/wiki/Oracle_database_Backup_and_Recovery_FAQ
<<<
    IaaS - infrastructure as a service; you get VMs/storage/network and maintain the OS yourself
    PaaS - platform as a service; the OS is managed for you, just deploy the app/software package
    DBaaS - database as a service; a managed database, you don't need to maintain the underlying OS
    MLaaS - machine learning as a service
<<<
http://husnusensoy.wordpress.com/2008/02/01/using-oracle-table-compression/

Restrictions
http://oracle-randolf.blogspot.com/2010/07/compression-restrictions.html

''SOA 11G Database Growth Management Strategy'' http://www.oracle.com/technetwork/database/features/availability/soa11gstrategy-1508335.pdf

{{{
=CONCATENATE(G4,"-",C4)
}}}

concatenate percent 
http://answers.yahoo.com/question/index?qid=20080605090839AA6Dnxk
http://www.wikihow.com/Apply-Conditional-Formatting-in-Excel
http://www.podcast.tv/video-episodes/excel-2011-conditional-formatting-12937144.html
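For the concatenate-percent case linked above, a minimal sketch (cell references and the format mask are assumptions): CONCATENATE emits the raw decimal, so wrap the number in TEXT() to keep the percent formatting.
{{{
=CONCATENATE(TEXT(C4,"0.0%"),"-",G4)
=TEXT(C4,"0.0%")&" of total"
}}}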
http://www.cyberciti.biz/hardware/5-linux-unix-commands-for-connecting-to-the-serial-console/

Find out information about your serial ports
{{{
$ dmesg | egrep --color 'serial|ttyS'
$ setserial -g /dev/ttyS[0123]
}}}


{{{
#1 cu command
#2 screen command
#3 minicom command
#4 putty command
#5 tip command
}}}
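
Typical invocations for the commands listed above; the device name and baud rate here are assumptions, adjust to your setup:
{{{
$ cu -l /dev/ttyS0 -s 115200
$ screen /dev/ttyS0 115200
$ minicom -D /dev/ttyS0 -b 115200
}}}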



What is Connection Management Call Elapsed Wait and How to Improve It (Doc ID 1936329.1)	

Best Practices for Application Performance, Scalability, and Availability
https://support.oracle.com/epmos/main/downloadattachmentprocessor?attachid=1936329.1%3ABESTPRACTICE&action=inline
https://support.oracle.com/epmos/main/downloadattachmentprocessor?attachid=1380043.1%3A2014_JONES-20141002&action=inline
https://wiki.apache.org/hadoop/ConnectionRefused
https://stackoverflow.com/questions/28661285/hadoop-cluster-setup-java-net-connectexception-connection-refused
https://apple.stackexchange.com/questions/153589/trying-to-get-hadoop-to-work-connection-refused-in-hadoop-and-in-telnet
<<<
APPLIES TO:

Enterprise Manager for Oracle Database - Version 12.1.0.4.0 and later
Information in this document applies to any platform.
GOAL

 This document describes the restrictions on multi-level resource plan creation in the 12.1.0.4 DB plugin.

SOLUTION

 We are going to discourage users from using multi-level resource plans for two reasons:

(1)    Most customers misinterpret how these multi-level plans work.  Therefore, their multi-level plans do not work as they expect. 

(2)    Multi-level plans are not supported for PDBs or CDBs.

 

By default, SYS_GROUP is a consumer group that contains user sessions logged in as SYS.  Resource Manager will control the CPU usage of sessions in SYS_GROUP.  These SYS sessions include job scheduler slaves and automated maintenance tasks.  However, background processes such as LMS, PMON, DBWR, or LGWR are not managed in this consumer group.  These background processes use very little CPU and are hence not managed by Resource Manager.  The advantage of using Resource Manager is that these critical background processes do not have to compete with a heavy load of foreground processes to be scheduled by the O/S.

 

We have no plans for desupporting multi-level resource plans.  However, we have decided on the following:

-          Resource Plans for a PDB are required to be single-level, are limited to 8 consumer groups, and cannot contain subplans.

-          Enterprise Manager does not support the creation of new multi-level resource plans.  However, it will continue to support editing of existing multi-level resource plans.  In addition, the PL/SQL interface can be used to create multi-level resource plans.

-          We are actively encouraging customers not to use multi-level plans.  The misconception shown in the “Common Mistakes” slide deck seems to be very pervasive and we feel that the single-level plans are sufficiently powerful for most customers.

 

The “Resource Manager – 12c” slide deck contains an overview of all the Resource Manager features, as of 12.1.0.1.

The “Resource Manager – Common Mistakes” slide deck contains various subtle “gotchas” with Resource Manager.  There are a few slides on multi-level resource plans.
<<<
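Since single-level plans are the recommendation above, here's a minimal PL/SQL sketch of creating one; the plan, consumer group, and percentage values are made up for illustration:
{{{
begin
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_consumer_group('BATCH_GROUP', 'batch sessions');
  dbms_resource_manager.create_plan('SINGLE_LEVEL_PLAN', 'single-level plan');
  dbms_resource_manager.create_plan_directive(
    plan             => 'SINGLE_LEVEL_PLAN',
    group_or_subplan => 'SYS_GROUP',
    comment          => 'sys gets priority',
    mgmt_p1          => 75);
  dbms_resource_manager.create_plan_directive(
    plan             => 'SINGLE_LEVEL_PLAN',
    group_or_subplan => 'BATCH_GROUP',
    comment          => 'batch work',
    mgmt_p1          => 15);
  dbms_resource_manager.create_plan_directive(
    plan             => 'SINGLE_LEVEL_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'everything else',
    mgmt_p1          => 10);
  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
end;
/
}}}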
Using Consolidation Planner http://download.oracle.com/docs/cd/E24628_01/doc.121/e25179/consolid_plan.htm
Database as a Service using Oracle Enterprise Manager 12c Cookbook http://www.oracle.com/technetwork/oem/cloud-mgmt/em12c-dbaas-cookbook-1432364.pdf

''SPEC CPU2006'' http://www.spec.org/auto/cpu2006/Docs/result-fields.html
http://www.amd.com/us/products/server/benchmarks/Pages/specint-rate-base2006-four-socket.aspx
http://www.spec.org/cpu2006/results/res2010q1/cpu2006-20091218-09300.html
http://download.oracle.com/docs/cd/E24628_01/license.121/e24474/appendix_a.htm#BGBBAEDE <-- on the official doc


''This research is still in progress.. there will be more updates in the next few days.''

! Some things to validate/investigate here:
* are we doing the same thing on the CPUSPECRate? see what I'm doing here [[cpu - SPECint_rate2006]] vs the consolidation planner here [[em12c SPEC computation]]
** basically yes, but what I don't like about the em12c approach is that it takes the AVG(SPEC_RATE) across different hardware platforms with different configs.. although this will still serve the purpose of having a single currency system where you can compare how fast A is relative to Z. Normally I would find the closest hardware match for my source and use that SPEC number, but here it does an AVG over the filtered samples
** the SPEC rate that the consolidation planner uses is based on the ''SPEC Base number''.. whereas what I'm doing is ''Peak/Enabled Cores'' to get the ''SPECint_rate2006/core''
<<<
Here's the logic behind the SPEC search.. this is still pretty cool stuff, but I would start with the hardware platform first. The thing is, there's no way from the em12c side to get the server make and model from the MGMT_ECM_HW, MGMT$HW_CPU_DETAILS, and MGMT$OS_HW_SUMMARY views, so there's really no way to start the search with the hardware platform. BUT the consolidation planner allows you to override the SPEC values. Plus this tool is generic enough that you can use it on a non-database server, so they need standard ways to derive things in order to productize it. While doing my investigation I came across the tables being used, and the EMCT_* tables are tied not only to the consolidation planner but also to the chargeback plugin.
{{{
-- Match with CPU Vendor
    -- CPU Vendor not found, return AVG of current match
-- CPU Vendor matched, Now match with Cores
    -- Cores not found, return AVG of current match + closest Cores match
-- CPU Vendor, Cores matched, Now match with CPU Family
    -- Family not found, return AVG of current match
-- CPU Vendor, Family, Cores matched, Now match with Speed
    -- Speed not found, return AVG of current match + closest Speed match
-- CPU Vendor, Cores, Family, Speed matched, Now match with Threads 
    -- No threads found, return AVG of current match + closest threads match
-- CPU Vendor, Cores, Family, Speed, Threads matched, Now match with Chips
    -- Chips not found, return AVG of current match + closest chips match
-- CPU Vendor, Cores, Family, Speed, Threads, Chips matched, Now match with 1st Cache MB
    -- 1st Cache MB not found, return AVG of current match + closest 1st Cache match
-- CPU Vendor, Cores, Family, Speed, Threads, Chips, 1st Cache matched, Now match with Memory GB
    -- Memory GB not found, return AVG of current match + closest Memory GB match
-- CPU Vendor, Family, Cores, Speed, Threads, Chips, 1st Cache, Memory matched, Now match with System Vendor
    -- System Vendor not found, return AVG of current match
}}}
The data points used by Consolidation Planner are here https://www.dropbox.com/s/41hjihib5xyz0lp/em12c_spec.csv
<<<
* comparison of the rollups with the AWR data
* can you do a stacked viz across 30+ databases?
** they did a pretty cool treemap (with a different color for every 10% increase in utilization, up to 100%) of what the resource load on the destination server would be across 31 days
* on consolidation planner the IO collection part is just an average across 30 days (though the range can be adjusted). The thing here is, if you are consolidating 30+ databases you have to stack the data points across the time series, get their peaks, and check for any possible IO workload contention.. that's how you know which databases you'll be implementing IORM on.
** here they're just getting the AVG IOPS, and at the end, if you have a bunch of servers, they add the averages together, come up with a final number, and count it against the destination server's capacity. Take note that it only gets IOPS and takes no account of MB/s

! Things consolidation planner can/cannot do
* give you the end utilization of the consolidated servers
** the problem here is, in a multi-node environment it is also critical to see the utilization of each server when you have overlapping instances provisioned across different nodes
* the scenario module is the "what if" thing where you feed in a bunch of servers to consolidate and then see if they fit on a particular platform, say a half-rack Exadata
** on the prov worksheet, I can run scenarios to find out what the end utilization of the remaining servers would be if I shut down one of the nodes.
* the cool 31 days utilization treemap
** I can do this for each resource with a stacked graph in Tableau on a time dimension of AWR data.. what's also nice is I can tell which instance to watch out for (peaks & high resource usage)


! The SQL used by Consolidation Planner plugin

{{{
set colsep ',' lines 4000
SELECT g.target_guid,
    MAX(g.target_name) ServerName ,
    ROUND(emct_target.get_spec_rate(g.target_guid),2) CPUSPECRate,
    MAX(DECODE(b.item_id,8014,b.value,NULL)) cpuuserspecrate,
    MAX(ROUND((c.mem/1024),2)) MemoryGB,
    MAX(c.disk) DiskStorageGB,
    MAX(DECODE(a.metric_column_name,'cpuUtil',a.metric,NULL)) cpuutil,
    MAX(DECODE(a.metric_column_name,'memUsedPct',a.metric,NULL)) memutil,
    MAX(DECODE(a.metric_column_name,'totpercntused',a.metric,NULL)) diskutil,
    MAX(d.vendor_name) CpuVendor,
    MAX(d.impl) CpuName,
    MAX(d.freq_in_mhz) FreqInMhz,
    MAX(DECODE(b.item_id,8063,b.value,NULL)) userdiskiocps,
    MAX(DECODE(b.item_id,8062,b.value,NULL)) userdiskiombps,
    MAX(DECODE(b.item_id,8061,b.value,NULL)) usernetworkiombps,
    MAX(DECODE(a.metric_column_name,'totiosmade',a.metric,NULL)) diskiocpsvalue,
    MAX(DECODE(a.metric_column_name,'totiosmade',a.max_metric,NULL)) diskiocpsmaxvalue,
    NULL AS diskiombpsvalue,
    MAX(DECODE(a.metric_column_name,'totalNetworkThroughPutRate',a.metric,NULL)) networkiombpsvalue,
    MAX(DECODE(a.metric_column_name,'totalNetworkThroughPutRate',a.max_metric,NULL)) networkiombpsmaxvalue,
    MAX(g.target_type) Type,
    MAX(c.os_summary) os_summary
  FROM
    (SELECT entity_guid,
      metric_column_name,
      ROUND(AVG(avg_value),2) AS metric,
      ROUND(MAX(max_value),2) AS max_metric
    FROM gc_metric_values_daily
    WHERE entity_type           ='host'
      AND (entity_guid in (null) and 1=0)
      AND metric_group_name in ('Load', 'DiskActivitySummary', 'TotalDiskUsage', 'NetworkSummary')
      AND metric_column_name in ('cpuUtil','memUsedPct', 'totiosmade', 'totpercntused', 'totalNetworkThroughPutRate')
      AND collection_time > (sysdate - 30)
    GROUP BY entity_guid, metric_column_name
    ) a,
    emct$latest_user_attrs b,
    mgmt$os_hw_summary c ,
    mgmt$hw_cpu_details d,
    gc$target g
  WHERE a.entity_guid(+)       =g.target_guid
  AND b.original_target_guid(+)=g.target_guid
  AND c.target_guid(+)         =g.target_guid
  AND d.target_guid(+)         =g.target_guid
  AND g.target_type            ='host'
  GROUP BY g.target_guid
  ORDER BY 2;
  
TARGET_GUID                     ,SERVERNAME          ,CPUSPECRATE,CPUUSERSPECRATE,   MEMORYGB,DISKSTORAGEGB,   CPUUTIL,   MEMUTIL,  DISKUTIL,CPUVENDOR    ,CPUNAME                                                                                                                                         , FREQINMHZ,USERDISKIOCPS,USERDISKIOMBPS,USERNETWORKIOMBPS,DISKIOCPSVALUE,DISKIOCPSMAXVALUE,D,NETWORKIOMBPSVALUE,NETWORKIOMBPSMAXVALUE,TYPE                                                         ,OS_SUMMARY
--------------------------------,-----------------------------------------------------------------------------------------------------------------------------------,-----------,---------------,----------,-------------,----------,----------,----------,--------------------------------------------------------------------------------------------------------------------------------,--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------,----------,-------------,--------------,-----------------,--------------,-----------------,-,------------------,---------------------,----------------------------------------------------------------,----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
0C474BF51B89823AFE1040B6ADC7147C,desktopserver.local ,        -121,               ,     15.61,       4773.3,          ,          ,          ,GenuineIntel ,Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz                                                                                                        ,       3401,       100000,              ,              125,              ,                 , ,                  ,                     ,host                                                        ,Oracle Linux Server release 5.7 2.6.32 200.13.1.el5uek(64-bit)
0EE088EC2D56D4DF9A747BBE24DFB7D8,emgc12c.local       ,     -10.48,                ,      3.87,      2033.27,          ,          ,          ,GenuineIntel ,Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz                                                                                                        ,       3401,             ,              ,                 ,              ,                 , ,                  ,                     ,host                                                        ,Oracle Linux Server release 5.7 2.6.32 100.0.19.el5(64-bit)

05:25:50 SYS@emrep12c> select * from emct$latest_user_attrs;

DATA_SOURCE_ID,ORIGINAL_TARGET_GUID            ,   ITEM_ID,APP_TYPE        ,CAT_TARGET_GUID    ,STRING_VALUE            ,      VALUE,UPDATED_BY           ,START_DAT,END_DATE
--------------,--------------------------------,----------,----------------,--------------------------------,--------------------------------------,----------,----------------------------------------------------------------------------------------------------------------------------------------------------------------,---------,---------
             2,0C474BF51B89823AFE1040B6ADC7147C,      4001,cat_common_lib  ,0C474BF51B89823AFE1040B6ADC7147C,Estimated  ,        121,cat.target           ,19-OCT-12,
             2,0C474BF51B89823AFE1040B6ADC7147C,      4003,cat_cpa_lib     ,0C474BF51B89823AFE1040B6ADC7147C,Estimated  ,        121,cat.target           ,19-OCT-12,
             2,0C474BF51B89823AFE1040B6ADC7147C,      8061,cpa             ,0C474BF51B89823AFE1040B6ADC7147C,           ,        125,                     ,19-OCT-12,
             2,0C474BF51B89823AFE1040B6ADC7147C,      8063,cpa             ,0C474BF51B89823AFE1040B6ADC7147C,           ,     100000,                     ,19-OCT-12,


-- bwahaha it's a package!
select ROUND(emct_target.get_spec_rate(g.target_guid),2) CPUSPECRate from gc$target g where rownum < 11;

CPUSPECRATE
-----------
    -224.31
       -121
    -224.31
     -10.48
    -224.31
    -224.31
    -224.31
    -224.31
    -224.31
    -224.31

10 rows selected.


col ServerName format a20
SELECT entity_guid,
       MAX(g.target_name) ServerName,
      metric_column_name,
      ROUND(AVG(avg_value),2) AS metric,
      ROUND(MAX(max_value),2) AS max_metric
    FROM gc_metric_values_daily a, gc$target g
    where a.entity_guid(+)       =g.target_guid
and metric_column_name in ('cpuUtil','memUsedPct','totpercntused','totiosmade','totalNetworkThroughPutRate')
    group by entity_guid,metric_column_name
    order by 2,3;
    
ENTITY_GUID                     ,SERVERNAME          ,METRIC_COLUMN_NAME                                              ,    METRIC,MAX_METRIC
--------------------------------,--------------------,----------------------------------------------------------------,----------,----------
0C474BF51B89823AFE1040B6ADC7147C,desktopserver.local ,cpuUtil                                                         ,      11.2,      16.6
0C474BF51B89823AFE1040B6ADC7147C,desktopserver.local ,memUsedPct                                                      ,     28.93,     37.04
0C474BF51B89823AFE1040B6ADC7147C,desktopserver.local ,totalNetworkThroughPutRate                                      ,         0,       .02
0C474BF51B89823AFE1040B6ADC7147C,desktopserver.local ,totiosmade                                                      ,    813.13,   2725.41
0C474BF51B89823AFE1040B6ADC7147C,desktopserver.local ,totpercntused                                                   ,     38.27,     38.28
0EE088EC2D56D4DF9A747BBE24DFB7D8,emgc12c.local       ,cpuUtil                                                         ,     17.19,     90.86
0EE088EC2D56D4DF9A747BBE24DFB7D8,emgc12c.local       ,memUsedPct                                                      ,     70.19,     73.36
0EE088EC2D56D4DF9A747BBE24DFB7D8,emgc12c.local       ,totalNetworkThroughPutRate                                      ,       .01,       .31
0EE088EC2D56D4DF9A747BBE24DFB7D8,emgc12c.local       ,totiosmade                                                      ,     67.41,    473.91
0EE088EC2D56D4DF9A747BBE24DFB7D8,emgc12c.local       ,totpercntused                                                   ,      7.14,      7.19

10 rows selected.



06:09:16 SYS@emrep12c> select owner, object_name, object_type from dba_objects where object_name = 'GC_METRIC_VALUES_DAILY';

OWNER                         ,OBJECT_NAME                                                                                                        ,OBJECT_TYPE
------------------------------,--------------------------------------------------------------------------------------------------------------------------------,-------------------
SYSMAN                        ,GC_METRIC_VALUES_DAILY                                                                                             ,VIEW
SYSMAN_RO                     ,GC_METRIC_VALUES_DAILY                                                                                             ,SYNONYM




col ServerName format a20
SELECT entity_guid,
     TO_CHAR(COLLECTION_TIME,'MM/DD/YY HH24:MI:SS'),
      metric_column_name,
      ROUND(avg_value,2) AS metric,
      ROUND(max_value,2) AS max_metric
    FROM gc_metric_values_daily a
where metric_column_name in ('totiosmade')
    order by 2,3;
    
ENTITY_GUID                     ,TO_CHAR(COLLECTIO,METRIC_COLUMN_NAME                                              ,    METRIC,MAX_METRIC
--------------------------------,-----------------,----------------------------------------------------------------,----------,----------
0EE088EC2D56D4DF9A747BBE24DFB7D8,10/17/12 00:00:00,totiosmade                                                      ,     53.28,    385.44
0EE088EC2D56D4DF9A747BBE24DFB7D8,10/18/12 00:00:00,totiosmade                                                      ,     81.55,    473.91
0C474BF51B89823AFE1040B6ADC7147C,10/18/12 00:00:00,totiosmade                                                      ,    813.13,   2725.41


gc_metric_values_daily
gc_metric_values_hourly
gc_metric_values_latest


 ENTITY_TYPE                    NOT NULL VARCHAR2(64)
 ENTITY_NAME                    NOT NULL VARCHAR2(256)
 ENTITY_GUID                    NOT NULL RAW(16)
 PARENT_ME_TYPE                          VARCHAR2(64)
 PARENT_ME_NAME                          VARCHAR2(256)
 PARENT_ME_GUID                 NOT NULL RAW(16)
 TYPE_META_VER                  NOT NULL VARCHAR2(8)
 TIMEZONE_REGION                         VARCHAR2(64)
 METRIC_GROUP_NAME              NOT NULL VARCHAR2(64)
 METRIC_COLUMN_NAME             NOT NULL VARCHAR2(64)
 COLUMN_TYPE                    NOT NULL NUMBER(1)
 COLUMN_INDEX                   NOT NULL NUMBER(3)
 DATA_COLUMN_TYPE               NOT NULL NUMBER(2)
 METRIC_GROUP_ID                NOT NULL NUMBER(38)
 METRIC_GROUP_GUID              NOT NULL RAW(16)
 METRIC_GROUP_LABEL                      VARCHAR2(64)
 METRIC_GROUP_LABEL_NLSID                VARCHAR2(64)
 METRIC_COLUMN_ID               NOT NULL NUMBER(38)
 METRIC_COLUMN_GUID             NOT NULL RAW(16)
 METRIC_COLUMN_LABEL                     VARCHAR2(64)
 METRIC_COLUMN_LABEL_NLSID               VARCHAR2(64)
 DESCRIPTION                             VARCHAR2(128)
 SHORT_NAME                              VARCHAR2(40)
 UNIT                                    VARCHAR2(32)
 IS_FOR_SUMMARY                          NUMBER
 IS_STATEFUL                             NUMBER
 IS_TRANSPOSED                  NOT NULL NUMBER(1)
 NON_THRESHOLDED_ALERTS                  NUMBER
 METRIC_TYPE                    NOT NULL NUMBER(1)
 USAGE_TYPE                     NOT NULL NUMBER(1)
 METRIC_KEY_ID                  NOT NULL NUMBER(38)
 NUM_KEYS                       NOT NULL NUMBER(1)
 METRIC_KEY_VALUE                        VARCHAR2(256)
 KEY_PART_1                     NOT NULL VARCHAR2(256)
 KEY_PART_2                     NOT NULL VARCHAR2(256)
 KEY_PART_3                     NOT NULL VARCHAR2(256)
 KEY_PART_4                     NOT NULL VARCHAR2(256)
 KEY_PART_5                     NOT NULL VARCHAR2(256)
 KEY_PART_6                     NOT NULL VARCHAR2(256)
 KEY_PART_7                     NOT NULL VARCHAR2(256)
 COLLECTION_TIME                NOT NULL DATE
 COLLECTION_TIME_UTC                     DATE
 COUNT_OF_COLLECTIONS           NOT NULL NUMBER(38)
 AVG_VALUE                               NUMBER
 MIN_VALUE                               NUMBER
 MAX_VALUE                               NUMBER
 STDDEV_VALUE                            NUMBER
 AVG_VALUES_VARRAY              NOT NULL SYSMAN.EM_METRIC_VALUE_ARRAY
 MIN_VALUES_VARRAY              NOT NULL SYSMAN.EM_METRIC_VALUE_ARRAY
 MAX_VALUES_VARRAY              NOT NULL SYSMAN.EM_METRIC_VALUE_ARRAY
 STDDEV_VALUES_VARRAY           NOT NULL SYSMAN.EM_METRIC_VALUE_ARRAY

}}}
Where to find MAXxxxxxx control file parameters in Data Dictionary 
  Doc ID:  Note:104933.1 
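A quick way to see those limits without the note, assuming you have access to the V$ views; the MAXxxxxxx-style control file limits are exposed per record section:
{{{
select type, records_total, records_used
from v$controlfile_record_section
order by type;
}}}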
Nutanix, Cisco HyperFlex, and Dell/EMC VxRail, which is pretty much vSAN
https://www.reddit.com/r/sysadmin/comments/8oq5oi/nutanix_vs_vmware_vsan/
http://www.scribd.com/doc/19212001/To-convert-a-rac-node-using-asm-to-single-instance-node
''How to Convert a Single-Instance ASM to Cluster ASM [ID 452758.1]'' http://space.itpub.net/11134237/viewspace-687810
http://oracleinstance.blogspot.com/2010/07/converting-single-instance-to-rac.html
conference submission guidelines https://blogs.oracle.com/datawarehousing/entry/open_world_2015_call_for


Although this is very easy, handy notes are still helpful

http://oracle.ittoolbox.com/documents/how-to-copy-an-oracle-database-to-another-machine-18603
http://www.pgts.com.au/pgtsj/pgtsj0211b.html
http://www.adp-gmbh.ch/ora/admin/creatingdbmanually.html
intro to coreos
http://youtu.be/l4oaIW37tU4

SmartOS (ZFS, DTrace, Zones and KVM) vs CoreOS (kernel+containers(docker/lxc - full isolation, docker/nspawn - little isolation))
https://www.youtube.com/watch?v=TtseOQoGJtk

CoreOS howto at digitalocean, based on Gentoo (Portage package manager)
https://www.digitalocean.com/community/tutorial_series/getting-started-with-coreos-2
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-coreos-cluster-on-digitalocean
https://coreos.com/blog/digital-ocean-supports-coreos/
http://0pointer.net/blog/projects/stateless.html


STACKX User Guide
  	Doc ID: 	Note:362791.1

HOW TO HANDLE CORE DUMPS ON UNIX
  	Doc ID: 	Note:1007808.6

Segmentation Fault and Core Dump During Execution
  	Doc ID: 	Note:1012079.6

SOLARIS: SGA size, sgabeg attach address and Sun architectures
  	Doc ID: 	Note:61896.1

How To Debug a Core File
  	Doc ID: 	Note:559167.1

	
CoreUtils for Windows
http://gnuwin32.sourceforge.net/packages/coreutils.htm

Doc ID: 465714.1 "Count of Targets Not Uploading Data" Metric not Clearing Even if the Cause is Gone
{{{
1) Create a new DBFS file system for APAC Cutover.
2) Whereas the current dbfs file system is /dbfs/work,
    The new file system would be mounted under /dbfs as /dbfs/apac
3) The new file system would initially be created with a max size of 3TB.
4) The file for the new file system would be created on +RECO,
    where there is currently about 49TB of usable space on PD01.
    With the new 3TB dbfs file system, that would leave about 40TB.
5) This new file system would be temporary just for APAC cutover.

#################

To configure option #2 above, follow these steps:

Optionally create a second DBFS repository database.
Create a new tablespace and a DBFS repository owner account (database user) for the new DBFS filesystem as shown in step 4 above.
Create the new filesystem using the procedure shown in step 5 above, substituting the proper values for the tablespace name and desired filesystem name. 
If using a wallet, you must create a separate TNS_ADMIN directory and a separate wallet. Be sure to use the proper ORACLE_HOME, ORACLE_SID, username and password when setting up those components. 
Ensure you use the latest mount-dbfs.sh script attached to this note. Updates were made on 7-Oct-2010 to support multiple filesystems. If you are using a previous version of this script, download the new version, apply the necessary configuration modifications to it, and replace your current version.
To have Clusterware manage a second filesystem mount, use a second copy of the mount-dbfs.sh script. Rename it to a unique file name like mount-dbfs2.sh and place it in the proper directory as shown in step 16 above. Once mount-dbfs2.sh has been properly modified with proper configuration information, a second Clusterware resource (with a unique name) should be created. The procedure for this is outlined in step 17 above.

#################





###############
INSTALL
###############

create bigfile tablespace apac_tbs datafile '+DATA' size 500M autoextend on next 100M maxsize 1000M NOLOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE  SEGMENT SPACE MANAGEMENT AUTO ;

create user apac identified by welcome 
         default   tablespace apac_tbs 
         temporary tablespace temp;
  
grant create session, 
             create table, 
             create procedure, 
             dbfs_role 
          to apac;

alter user apac quota unlimited on apac_tbs;



cd $ORACLE_HOME/rdbms/admin
sqlplus apac/welcome
@dbfs_create_filesystem_advanced.sql apac_tbs apac nocompress nodeduplicate noencrypt non-partition


-- create the file
/home/oracle/dba/bin/mount-dbfs-apac.sh


$ scp mount-dbfs-apac.sh td01db02:/home/oracle/dba/bin/
mount-dbfs-apac.sh                                                                                        100% 8058     7.9KB/s   00:00
$ scp mount-dbfs-apac.sh td01db03:/home/oracle/dba/bin/
mount-dbfs-apac.sh                                                                                        100% 8058     7.9KB/s   00:00
$ scp mount-dbfs-apac.sh td01db04:/home/oracle/dba/bin/
mount-dbfs-apac.sh                                                                                        100% 8058     7.9KB/s   00:00
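The three per-node scp commands above can be collapsed into one loop; the host list is copied from the transcript, and the echo makes this a dry-run sketch (drop it to actually copy):

```shell
# Dry-run sketch: push mount-dbfs-apac.sh to the remaining nodes in one loop
# instead of repeating scp per host. Drop the echo to actually copy.
for node in td01db02 td01db03 td01db04; do
  echo scp /home/oracle/dba/bin/mount-dbfs-apac.sh "${node}:/home/oracle/dba/bin/"
done
```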



dcli -l root -g dbs_group mkdir /dbfs2
dcli -l root -g dbs_group chown oracle:oinstall /dbfs2


ACTION_SCRIPT=/home/oracle/dba/bin/mount-dbfs-apac.sh
RESNAME=ora.apac.filesystem
DBNAME=dbfs
DBNAMEL=`echo $DBNAME | tr A-Z a-z`
ORACLE_HOME=/u01/app/11.2.0.3/grid
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
  -type local_resource \
  -attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
         CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
         START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
         STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
         SCRIPT_TIMEOUT=300"



crsctl start res ora.apac.filesystem
crsctl stop res ora.apac.filesystem



###############
CLEANUP
###############


crsctl stop res ora.apac.filesystem

sqlplus apac/welcome
@$ORACLE_HOME/rdbms/admin/dbfs_drop_filesystem.sql apac

crsctl delete resource ora.apac.filesystem -f
crsstat | grep -i files


select /* usercheck */ 'alter system disconnect session '''||s.sid||','||s.serial#||''''||' immediate;'
from v$session s
where   s.username = 'APAC';

drop user apac cascade;

drop tablespace apac_tbs including contents and datafiles;

}}}
http://www.oracle.com/technetwork/articles/servers-storage-admin/howto-create-zones-ops-center-1737990.html
{{{

-- create user
create user alloc_app_perf identified by testalloc;

-- user sql
alter user alloc_app_perf default tablespace bas_data temporary tablespace temp account unlock;

-- quotas
alter user alloc_app_perf quota unlimited on bas_data;

-- roles
grant alloc_app_r to alloc_app_perf;
grant select_catalog_role to alloc_app_perf;
grant resource to alloc_app_perf;
grant select any dictionary to alloc_app_perf;
grant advisor to alloc_app_perf;
grant create job to alloc_app_perf;
grant oem_monitor to alloc_app_perf;
grant administer any sql tuning set to alloc_app_perf;   
grant administer sql management object to alloc_app_perf; 
grant create any sql_profile to alloc_app_perf;
grant drop any sql_profile to alloc_app_perf;
grant alter any sql_profile to alloc_app_perf;   

-- execute  
grant execute on dbms_monitor to alloc_app_perf;
grant execute on dbms_application_info to alloc_app_perf;
grant execute on dbms_workload_repository to alloc_app_perf;
grant execute on dbms_xplan to alloc_app_perf;     
grant execute on dbms_sqltune to alloc_app_perf;
grant execute on sys.dbms_lock to alloc_app_perf;


}}}


also create [[kill session procedure]]



http://structureddata.org/2011/09/25/critical-skills-for-performance-work/
http://www.integrigy.com/oracle-security-blog/archive/2010/10/14/oracle-cpu-oct-2010-monster
XTTS Migrating a Mission Critical 40 TB Oracle E-Business Suite from HP Superdomes to Cisco Unified Computing System
http://www.cisco.com/c/en/us/solutions/collateral/servers-unified-computing/ucs-5100-series-blade-server-chassis/Whitepaper_c11-707249.html

HOWTO: Oracle Cross-Platform Migration with Minimal Downtime
http://www.pythian.com/news/3653/howto-oracle-cross-platform-migration-with-minimal-downtime/


Using Transportable Tablespace In Oracle Database 10g
http://avdeo.com/2009/12/22/using-transportable-tablespace-in-oracle-database-10g/


Migrating an Oracle database Solaris to Linux
http://blog.nominet.org.uk/tech/2006/01/18/migrating-an-oracle-database-solaris-to-linux/


Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backups [ID 1389592.1]


Platform Migration Using Transportable Database Oracle Database 11g and 10g Release 2 
http://www.oracle.com/technetwork/database/features/availability/maa-wp-10gr2-platformmigrationtdb-131164.pdf



-- XTTS + RMAN 
12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2005729.1)
11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)




https://dba.stackexchange.com/questions/137762/explain-plan-gives-different-results-depending-on-schema
https://magnusjohanssontuning.wordpress.com/2012/08/01/cursor-not-shared-for-different-users/
https://hourim.wordpress.com/2015/04/06/bind_equiv_failure-or-when-you-will-regret-using-adaptive-cursor-sharing/
https://blog.toadworld.com/why-my-execution-plan-has-not-been-shared-part-i
http://oracleinaction.com/parent-child-curosr/
http://www.jcon.no/oracle/?p=1032  11gR2: “Unlucky” combination of a new feature, a fix, application design and code
https://gavinsoorma.com/2012/09/a-look-at-parsing-and-sharing-of-cursors/
https://www.google.com/search?ei=eO0DXbzZLvKe_QbWyJzACA&q=oracle+same+sql+plan+hash+value+across+different+schemas&oq=oracle+same+sql+plan+hash+value+across+different+schemas&gs_l=psy-ab.3...13632.16295..16476...0.0..0.135.1457.13j3......0....1..gws-wiz.......0i71j35i304i39.nH4q-OfkkJg




<<showtoc>>

! pre-req
d3vienno https://www.youtube.com/playlist?list=PL6il2r9i3BqH9PmbOf5wA5E1wOG3FT22p
http://www.pluralsight.com/courses/d3js-data-visualization-fundamentals
http://www.pluralsight.com/courses/interactive-data-visualization-d3js
http://www.lynda.com/D3js-tutorials/Data-Visualization-D3js/162449-2.html




d3.js
Visualizing Oracle Data - ApEx and Beyond http://ba6.us/book/export/html/268, http://ba6.us/d3js_application_express_basic_dynamic_action
Mike Bostock http://bost.ocks.org/mike/
https://github.com/mbostock/d3/wiki/Gallery
http://mbostock.github.com/d3/tutorial/bar-1.html
http://mbostock.github.com/d3/tutorial/bar-2.html
AJAX retrieval using Javascript Object Notation (JSON) http://anthonyrayner.blogspot.com/2007/06/ajax-retrieval-using-javascript-object.html
http://dboptimizer.com/2012/01/22/ash-visualizations-r-ggplot2-gephi-jit-highcharts-excel-svg/
Videos:
http://css.dzone.com/articles/d3js-way-way-more-just-another


''Video tutorials:''
http://www.youtube.com/user/d3Vienno/videos
http://www.quora.com/D3-JavaScript-library/Whats-the-best-way-for-someone-with-no-background-to-learn-D3-js
r to d3 https://github.com/hadley/r2d3
http://www.r-bloggers.com/basics-of-javascript-and-d3-for-r-users/


''Articles''
javascript viz without d3 http://dry.ly/data-visualization-with-javascript-without-d3

json - data
html - structure
JS+D3 - layout
CSS - pretty

web development toolkit:
Chrome Developer Tools (DevTools)

readables:
mbostock.github.com/d3/api
book: JavaScript: The Good Parts by Douglas Crockford
browse: https://developer.mozilla.org/en/SVG
watch: vimeo.com/29458354
clone: GraphDB https://github.com/sones/sones
clone: Cube http://square.github.com/cube
clone: d3py https://github.com/mikedewar/D3py
http://code.hazzens.com/d3tut/lesson_0.html

books:
Interactive Data Visualization for the Web: An Introduction to Designing with D3 http://shop.oreilly.com/product/0636920026938.do
http://www.slideshare.net/arnicas/interactive-data-visualization-with-d3js

! other libraries 
dc.js - Dimensional Charting Javascript Library https://dc-js.github.io/dc.js/









https://community.hortonworks.com/articles/56636/hive-understanding-concurrent-sessions-queue-alloc.html
https://hortonworks.com/blog/introducing-tez-sessions/
https://stackoverflow.com/questions/25521363/apache-tez-architecture-explanation?rq=1
http://yaping123.wordpress.com/2008/09/02/db-link/
http://marcel.vandewaters.nl/oracle/database-oracle/creating-database-links-for-another-schema

{{{
select username, profile from dba_users where username in ('HCMREADONLY');
ALTER PROFILE APPLICATION_USER LIMIT PASSWORD_VERIFY_FUNCTION NULL;
alter user HCMREADONLY identified by HCMREADONLY;
ALTER PROFILE APPLICATION_USER LIMIT PASSWORD_VERIFY_FUNCTION VERIFY_FUNCTION;
~oracle/rac11gr2_mon.pl -h "/u01/app/11.2.0.3/grid"  -d HCM2UAT
oradcli cat /u01/app/oracle/product/11.2.0.3/dbhome_1/network/admin/tnsnames.ora | grep -i HCM2UAT

conn sysadm/<password>

select * from global_name@ROHCM2UAT;

set linesize 121
col owner format a15
col db_link format a45
col username format a15
col password format a15
col host format a15
SELECT owner, db_link, username, host, created FROM dba_db_links;

col name format a20
select NAME,USERID,PASSWORD,PASSWORDX from link$;
}}}


{{{
SQL> CREATE DATABASE LINK systemoracle CONNECT TO system IDENTIFIED BY oracle USING 'dw';

Database link created.

SQL> select sysdate from dual@systemoracle;

SYSDATE
-----------------
20120410 15:07:57

SQL>
SQL> select * from v$instance@systemoracle;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME                                                        VERSION           STARTUP_TIME      STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT LOGINS     SHU DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO
--------------- ---------------- ---------------------------------------------------------------- ----------------- ----------------- ------------ --- ---------- ------- --------------- ---------- --- ----------------- ------------------ --------- ---
              1 dw               desktopserver.local                                              11.2.0.3.0        20120405 22:54:29 OPEN         NO           1 STOPPED                 ALLOWED    NO  ACTIVE            PRIMARY_INSTANCE   NORMAL    NO

SQL>
SQL> drop database link systemoracle;

Database link dropped.



set heading off
set echo off
set long 9999999

select dbms_metadata.get_ddl('USER', username) || ';' usercreate
from dba_users where username = 'SYSTEM';


06C70D7478FCFC00B4DBF384D2AF15886964CF872A2960378E4570ECFC0F1790089FF8275365309F74A257102E0041F7ADF4F15CFB6E87C2D7E0595E23E519939EF992402796F5850657B52496C109A164F090970A852CF163010DCC91750381FD832C59F63DBC990D88777E91E61D77DAEA09D347BE9E4C4D2C003FB53E243

   ALTER USER "SYSTEM" IDENTIFIED BY VALUES 'S:24BC4E96EFE7E21595038D261C75CFAAFC8BF2CF89C4EB867CA80C8C2850;2D594E86F93B17A1'
      TEMPORARY TABLESPACE "TEMP";

      
  CREATE DATABASE LINK "SYSTEMORACLE.LOCAL"
   CONNECT TO "SYSTEM" IDENTIFIED BY VALUES '06C70D7478FCFC00B4DBF384D2AF15886964CF872A2960378E4570ECFC0F1790089FF8275365309F74A257102E0041F7ADF4F15CFB6E87C2D7E0595E23E519939EF992402796F5850657B52496C109A164F090970A852CF163010DCC91750381FD832C59F63DBC990D88777E91E61D77DAEA09D347BE9E4C4D2C003FB53E243E'
   USING 'dw';

      
      

set heading off
set echo off
set long 9999999

select dbms_metadata.get_ddl('DB_LINK', DB_LINK) || ';' dblinkcreate
from dba_db_links;



-- if you change the password you'll get this 
	
	12:57:32 SYS@dw> select sysdate from dual@systemoracle;
	select sysdate from dual@systemoracle
	                         *
	ERROR at line 1:
	ORA-01017: invalid username/password; logon denied
	ORA-02063: preceding line from SYSTEMORACLE

	
-- new password 

   ALTER USER "SYSTEM" IDENTIFIED BY VALUES 'S:5039460190FA01698510988435D8B7E678432D4B4A0E4C5BF7C19D2BD7F4;DC391A4F3C7CC080'
      TEMPORARY TABLESPACE "TEMP";

      
-- this will not work!
  CREATE DATABASE LINK "SYSTEMORACLE.LOCAL"
   CONNECT TO "SYSTEM" IDENTIFIED BY VALUES 'S:5039460190FA01698510988435D8B7E678432D4B4A0E4C5BF7C19D2BD7F4;DC391A4F3C7CC080'
   USING 'dw';

   
-- the real fix is to put back the password
13:02:07 SYS@dw> select sysdate from dual@systemoracle;
select sysdate from dual@systemoracle
                         *
ERROR at line 1:
ORA-01017: invalid username/password; logon denied
ORA-02063: preceding line from SYSTEMORACLE


13:02:33 SYS@dw>
13:02:34 SYS@dw> alter user system identified by oracle;

User altered.

13:02:45 SYS@dw> select sysdate from dual@systemoracle;

SYSDATE
-----------------
20120910 13:02:50
}}}

! 2019 
https://edn.embarcadero.com/
https://www.embarcadero.com/support
https://supportforms.embarcadero.com   <- increase count 
https://supportforms.embarcadero.com/product/
https://members.embarcadero.com/login.aspx    <- download page 
https://cc.embarcadero.com/Default.aspx 	<- documentation videos 
https://community.idera.com/developer-tools/   <- URLs


''Documentation''
http://docs.embarcadero.com/products/db_optimizer/

''new feature''
3.0 http://docs.embarcadero.com/products/db_optimizer/3.0/ReadMe.htm
3.5 http://docs.embarcadero.com/products/db_optimizer/3.5/ReadMe.htm

''wiki''
http://docwiki.embarcadero.com/DBOptimizer/en/Main_Page

''DB Optimizer 3.0''
http://docs.embarcadero.com/products/db_optimizer/3.0/DBOptimizerQuickStartGuide.pdf
http://docs.embarcadero.com/products/db_optimizer/3.0/DBOptimizerUserGuide.pdf
http://docs.embarcadero.com/products/db_optimizer/3.0/ReadMe.htm

''Example usage - DB Optimizer example - 3mins to 10secs''  
https://www.evernote.com/shard/s48/sh/070796b4-673e-418f-9ff9-d362ae9941dd/9636928fbcf370e0dcf9fb940cc5a9c8    <-- after reading this check out the [[SQLT-tc (test case builder)]] tiddler on how to generate VST with SQLTXPLAIN

''Pricing''
''$1500'' http://store.embarcadero.com/store/embt/en_US/DisplayCategoryProductListPage/categoryID.52346400
''the $99 good deal'' http://www.freelists.org/post/oracle-l/Special-Offer-for-readers-of-OracleL

<<showtoc>>

!! intro
https://thomaswdinsmore.com/2017/02/01/year-in-sql-engines/
http://coding-geek.com/how-databases-work/



!! RDBMSGenealogy
https://hpi.de/fileadmin/user_upload/fachgebiete/naumann/projekte/RDBMSGenealogy/RDBMS_Genealogy_V6.pdf


!! ACID, CAP theorem, BASE, NRW notation, BigData 4Vs + 1
[[ACID, CAP theorem, BASE, NRW notation, BigData 4Vs + 1]]


!! quick references 
!!! Comparing Database Types: How Database Types Evolved to Meet Different Needs 
https://www.prisma.io/blog/comparison-of-database-models-1iz9u29nwn37
!!! 7 databases in 7 weeks 
2nd ed https://learning.oreilly.com/library/view/seven-databases-in/9781680505962/
1st ed https://learning.oreilly.com/library/view/seven-databases-in/9781941222829/
!!! Seven NoSQL Databases in a Week
https://learning.oreilly.com/library/view/seven-nosql-databases/9781787288867/









.




! DB2 Mainframe to Oracle Sizing
see discussions here https://www.evernote.com/l/ADBBWhmaiVJIpIL6imCf_Fi1OozqN9Usq08

modern day DBA vs Developer https://web.devopstopologies.com/index.html
https://www.teamblind.com/article/is-data-engineering-under-rated-5jtnitKv

http://tonyhasler.wordpress.com/2011/12/  FORCE_MATCH for Stored Outlines and/or SQL Baselines????? – follow up

How to use the Sql Tuning Advisor. [ID 262687.1]
<<<
SQL tuning information views, such as DBA_SQLTUNE_STATISTICS, DBA_SQLTUNE_BINDS, 
and DBA_SQLTUNE_PLANS views can also be queried to get this information.

Note: it is possible for the SQL Tuning Advisor to return no recommendations for
a particular SQL statement e.g. in cases where the plan is already optimal or the
Automatic Tuning Optimization mode cannot find a better plan.
<<<

* you can't change the encryption; all tablespaces need to be encrypted
* even if you change encrypt_new_tablespaces, it will complain when you create an unencrypted tablespace

{{{

22:06:00 SYS@kacdb> desc dba_tablespaces
 Name																		       Null?	Type
 ----------------------------------------------------------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------------------------------
 TABLESPACE_NAME																       NOT NULL VARCHAR2(30)
 BLOCK_SIZE																	       NOT NULL NUMBER
 INITIAL_EXTENT 																		NUMBER
 NEXT_EXTENT																			NUMBER
 MIN_EXTENTS																	       NOT NULL NUMBER
 MAX_EXTENTS																			NUMBER
 MAX_SIZE																			NUMBER
 PCT_INCREASE																			NUMBER
 MIN_EXTLEN																			NUMBER
 STATUS 																			VARCHAR2(9)
 CONTENTS																			VARCHAR2(21)
 LOGGING																			VARCHAR2(9)
 FORCE_LOGGING																			VARCHAR2(3)
 EXTENT_MANAGEMENT																		VARCHAR2(10)
 ALLOCATION_TYPE																		VARCHAR2(9)
 PLUGGED_IN																			VARCHAR2(3)
 SEGMENT_SPACE_MANAGEMENT																	VARCHAR2(6)
 DEF_TAB_COMPRESSION																		VARCHAR2(8)
 RETENTION																			VARCHAR2(11)
 BIGFILE																			VARCHAR2(3)
 PREDICATE_EVALUATION																		VARCHAR2(7)
 ENCRYPTED																			VARCHAR2(3)
 COMPRESS_FOR																			VARCHAR2(30)
 DEF_INMEMORY																			VARCHAR2(8)
 DEF_INMEMORY_PRIORITY																		VARCHAR2(8)
 DEF_INMEMORY_DISTRIBUTE																	VARCHAR2(15)
 DEF_INMEMORY_COMPRESSION																	VARCHAR2(17)
 DEF_INMEMORY_DUPLICATE 																	VARCHAR2(13)
 SHARED 																			VARCHAR2(13)
 DEF_INDEX_COMPRESSION																		VARCHAR2(8)
 INDEX_COMPRESS_FOR																		VARCHAR2(13)
 DEF_CELLMEMORY 																		VARCHAR2(14)
 DEF_INMEMORY_SERVICE																		VARCHAR2(12)
 DEF_INMEMORY_SERVICE_NAME																	VARCHAR2(1000)
 LOST_WRITE_PROTECT																		VARCHAR2(7)
 CHUNK_TABLESPACE																		VARCHAR2(1)

22:06:06 SYS@kacdb> select tablespace_name, encrypted, compress_for from dba_tablespaces;

TABLESPACE_NAME 	       ENC COMPRESS_FOR
------------------------------ --- ------------------------------
SYSTEM			       NO
SYSAUX			       NO
UNDOTBS1		       NO
TEMP			       NO
USERS			       YES

22:06:22 SYS@kacdb> select name from v$datafile;

NAME
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/o1_mf_system_k22cl5v6_.dbf
/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/o1_mf_sysaux_k22clf46_.dbf
/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/o1_mf_undotbs1_k22cj072_.dbf
/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/o1_mf_users_k22cqyrg_.dbf

22:06:41 SYS@kacdb> create tablespace ts1 datafile '/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/ts1.dbf' size 1M;

Tablespace created.

22:07:08 SYS@kacdb> select tablespace_name, encrypted, compress_for from dba_tablespaces;

TABLESPACE_NAME 	       ENC COMPRESS_FOR
------------------------------ --- ------------------------------
SYSTEM			       NO
SYSAUX			       NO
UNDOTBS1		       NO
TEMP			       NO
USERS			       YES
TS1			       YES

6 rows selected.

22:07:14 SYS@kacdb> alter tablespace ts1 encryption online decrypt;
alter tablespace ts1 encryption online decrypt
*
ERROR at line 1:
ORA-28427: cannot create, import or restore unencrypted tablespace: TS1 in Oracle Cloud


22:13:58 SYS@kacdb> create tablespace ts2 datafile '/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/ts2.dbf' decrypt size 1M;
create tablespace ts2 datafile '/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/ts2.dbf' decrypt size 1M
                                                                                                                                             *
ERROR at line 1:
ORA-02180: invalid option for CREATE TABLESPACE


22:15:45 SYS@kacdb> create tablespace ts2 datafile '/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/ts2.dbf' size 1M decrypt;

Tablespace created.

22:15:59 SYS@kacdb> select tablespace_name, encrypted, compress_for from dba_tablespaces;

TABLESPACE_NAME 	       ENC COMPRESS_FOR
------------------------------ --- ------------------------------
SYSTEM			       NO
SYSAUX			       NO
UNDOTBS1		       NO
TEMP			       NO
USERS			       YES
TS1			       YES
TS2			       YES

7 rows selected.

22:16:05 SYS@kacdb> show parameter encrypt

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
encrypt_new_tablespaces 	     string	 ALWAYS
22:16:26 SYS@kacdb> 
22:17:32 SYS@kacdb> alter system set encrypt_new_tablespaces='DDL';

System altered.

22:17:42 SYS@kacdb> 
22:17:43 SYS@kacdb> show parameter encrypt 

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
encrypt_new_tablespaces 	     string	 DDL
22:17:47 SYS@kacdb> create tablespace ts3 datafile '/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/ts3.dbf' size 1M;
create tablespace ts3 datafile '/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/ts3.dbf' size 1M
*
ERROR at line 1:
ORA-28427: cannot create, import or restore unencrypted tablespace: TS3 in Oracle Cloud


22:18:06 SYS@kacdb> show parameter encrypt

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
encrypt_new_tablespaces 	     string	 DDL
22:18:37 SYS@kacdb> alter system set encrypt_new_tablespaces='ALWAYS';

System altered.

22:18:46 SYS@kacdb> create tablespace ts3 datafile '/u02/app/oracle/oradata/kacdb_phx1q5/KACDB_PHX1Q5/D37EAE7206931EBAE053040A640AA08B/datafile/ts3.dbf' size 1M;

Tablespace created.

22:18:53 SYS@kacdb> select tablespace_name, encrypted, compress_for from dba_tablespaces;

TABLESPACE_NAME 	       ENC COMPRESS_FOR
------------------------------ --- ------------------------------
SYSTEM			       NO
SYSAUX			       NO
UNDOTBS1		       NO
TEMP			       NO
USERS			       YES
TS1			       YES
TS2			       YES
TS3			       YES

8 rows selected.


}}}
df -k on DBFS has a bug, which could be due to discrepancies between the FUSE layer and SecureFile LOB space accounting.

To estimate the whole space:

* ''expired_bytes + unexpired_bytes + the size from df -k'' should give you a rough number for the total space; then subtract the ''du -sm'' output on the /dbfs directory from that total
* if df says you only have 256GB of space but the calculation shows you actually have 800GB, then creating a big 400GB file will actually succeed
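The arithmetic above can be sketched as follows; the GB figures are illustrative, not from a live system:

```shell
# Illustrative sketch of the DBFS free-space estimate; the numbers are made up.
df_size=258        # GB reported by df -k for /dbfs
expired=554        # GB of expired_bytes from dbms_space.space_usage
unexpired=0        # GB of unexpired_bytes
du_used=212        # GB reported by du -sm /dbfs

total=$((df_size + expired + unexpired))   # rough total space
free=$((total - du_used))                  # space actually writable
echo "total=${total}GB free=${free}GB"
```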

How DBFS Reclaims Free Space After Files Are Deleted [ID 1438356.1]
Bug 12662040 : SECUREFILE LOB SEGMENT KEEPS GROWING IN CASE OF PLENTY OF FREE SPACE

! run the following:
{{{
col segment_name format a30
select segment_name, tablespace_name, segment_type, round(bytes/1024/1024/1024,2) dbfs_segment 
from dba_segments where owner='DBFS' and segment_type = 'LOBSEGMENT';

-- search for the lob segment
set serveroutput on
declare
v_segment_size_blocks number;
v_segment_size_bytes number;
v_used_blocks number;
v_used_bytes number;
v_expired_blocks number;
v_expired_bytes number;
v_unexpired_blocks number;
v_unexpired_bytes number;
begin
dbms_space.space_usage ('DBFS', '&LOBSEGMENT', 'LOB', 
v_segment_size_blocks, v_segment_size_bytes,
v_used_blocks, v_used_bytes, v_expired_blocks, v_expired_bytes, 
v_unexpired_blocks, v_unexpired_bytes );
dbms_output.put_line('Expired Blocks = '||v_expired_blocks);
dbms_output.put_line('Expired GB = '|| round(v_expired_bytes/1024/1024/1024,2) );
dbms_output.put_line('UNExpired Blocks = '||v_unexpired_blocks);
dbms_output.put_line('UNExpired GB = '|| round(v_unexpired_bytes/1024/1024/1024,2) );
end;
/

! echo "df output: `df -m /dbfs | grep dbfs | awk '{print $2/1024}'`"
! echo "du output: `du -sm /dbfs | awk '{print $1/1024}'`"

run this to check lob fragmentation http://www.idevelopment.info/data/Oracle/DBA_scripts/LOBs/lob_fragmentation_user.sql
}}}


! sample output
{{{
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       30G   17G   12G  60% /
/dev/sda1             124M   74M   44M  63% /boot
/dev/mapper/VGExaDb-LVDbOra1
                      148G   78G   65G  55% /u01
tmpfs                  81G     0   81G   0% /dev/shm
dbfs-dbfs@:/          258G  212G   47G  83% /dbfs

11:42:42 SYS@DBFS1> @unexpired

SEGMENT_NAME                                                                      SEGMENT_TYPE       BYTES/1024/1024
--------------------------------------------------------------------------------- ------------------ ---------------
T_WORK                                                                            TABLE                        .1875
IG_SFS$_FST_42745                                                                 INDEX                        .0625
SYS_IL0000117281C00007$$                                                          LOBINDEX                     .0625
LOB_SFS$_FST_42745                                                                LOBSEGMENT              771964.125
IP_SFS$_FST_42745                                                                 INDEX                        .0625
IPG_SFS$_FST_42745                                                                INDEX                        .0625

6 rows selected.

Expired Blocks = 72615478
Expired Bytes = 554.0121307373046875
UNExpired Blocks = 0
UNExpired Bytes = 0

PL/SQL procedure successfully completed.



$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       30G   17G   12G  60% /
/dev/sda1             124M   74M   44M  63% /boot
/dev/mapper/VGExaDb-LVDbOra1
                      148G   78G   65G  55% /u01
tmpfs                  81G     0   81G   0% /dev/shm
dbfs-dbfs@:/          258G  212G   47G  83% /dbfs

$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       30G   17G   12G  60% /
/dev/sda1             124M   74M   44M  63% /boot
/dev/mapper/VGExaDb-LVDbOra1
                      148G   78G   65G  55% /u01
tmpfs                  81G     0   81G   0% /dev/shm
dbfs-dbfs@:/          261G  215G   47G  83% /dbfs

$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       30G   17G   12G  60% /
/dev/sda1             124M   74M   44M  63% /boot
/dev/mapper/VGExaDb-LVDbOra1
                      148G   78G   65G  55% /u01
tmpfs                  81G     0   81G   0% /dev/shm
dbfs-dbfs@:/          262G  216G   47G  83% /dbfs

}}}



''The fix!'' The new dbfsfree script:
{{{
[pd01db01:oracle:dbm1] /home/oracle
> dbfsfree
                      Size          Used           Avail   Used%  Mounted on
Kilobytes      809,274,136    624,320,136    184,954,000   77.15 /dbfs
Megabytes          790,306        609,687        180,619   77.15 /dbfs
Gigabytes              771            595            176   77.15 /dbfs

[pd01db01:oracle:dbm1] /home/oracle
> df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
              ext3     30G   22G  6.8G  76% /
/dev/sda1     ext3    124M   36M   82M  31% /boot
/dev/mapper/VGExaDb-LVDbOra1
              ext3     99G   62G   33G  66% /u01
tmpfs        tmpfs     81G   39M   81G   1% /dev/shm
dbfs-dbfs@:/  fuse    684G  596G   89G  88% /dbfs

}}}
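A minimal sketch of what a dbfsfree-style report boils down to: start from the kilobyte totals (hard-coded here from the transcript above; the real script would derive them from the LOB segment, not from the fuse-reported df numbers) and print MB/GB rows:

```shell
# Sketch of a dbfsfree-style report. The kilobyte totals are hard-coded from
# the sample output above; a real script would compute them from dba_segments
# and dbms_space.space_usage rather than from the fuse-reported df.
size_kb=809274136
used_kb=624320136
avail_kb=$((size_kb - used_kb))
pct=$(awk -v u="$used_kb" -v s="$size_kb" 'BEGIN{printf "%.2f", u*100/s}')

printf "%-10s %14s %14s %14s %7s %s\n" "" "Size" "Used" "Avail" "Used%" "Mounted on"
printf "%-10s %14d %14d %14d %7s %s\n" "Kilobytes" "$size_kb" "$used_kb" "$avail_kb" "$pct" /dbfs
printf "%-10s %14d %14d %14d %7s %s\n" "Megabytes" $((size_kb/1024)) $((used_kb/1024)) $((avail_kb/1024)) "$pct" /dbfs
printf "%-10s %14d %14d %14d %7s %s\n" "Gigabytes" $((size_kb/1048576)) $((used_kb/1048576)) $((avail_kb/1048576)) "$pct" /dbfs
```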


http://www.evernote.com/shard/s48/sh/0545726e-b46b-4953-ad5e-f1d04fb38b1d/86dd3d261da9ab35c09d765157c1ac33
> what's your take on non-indexed FK constraints? is it safe to not have it?

is enq TM an issue? If yes then non-indexed is likely the first cause. If not then it might be a little less of an issue but still worth mentioning imho


> do we recommend the redundant indexes to be dropped on health checks?

usually yes (alternative is mark them as invisible for a while and after some time if no regression is experienced then drop them) 


> do we disable the "auto optimizer stats collection" ? let's say if they have already an app specific stats gathering in place

*IF* the app's way of gathering stats is solid then you can keep the automatic job just for Oracle-owned objects (the dictionary, for example).
Alternatively the client can lock stats (and use FORCE=>TRUE in their custom jobs) so the automatic job will only collect stats on the non-locked objects.


> how do we evaluate the "Sequences prone to contention” ?

is enq SQ an issue? Assuming yes then sequences are a big concern. If no then we usually just mention to increase cache size and NOT use ORDER in RAC (because it requires sync across nodes so it’s a big overhead)
Master Note for Query Rewrite (Doc ID 1215173.1)
Using DBMS_ADVANCED_REWRITE When Binds Are Present (Avoiding ORA-30353) (Doc ID 392214.1)
Master Note for Materialized View (MVIEW) (Doc ID 1353040.1)
Using Execution Plan to Verify Materialized View Query Rewrite (Doc ID 245635.1)
Advanced Query Rewrite https://docs.oracle.com/cd/B28359_01/server.111/b28313/qradv.htm  , http://pages.di.unipi.it/ghelli/didattica/bdldoc/B19306_01/server.102/b14223/qradv.htm
DBMS_ADVANCED_REWRITE https://docs.oracle.com/database/121/ARPLS/d_advrwr.htm#BEGIN
Oracle OLAP 11g and 12c: How to ensure use of Cube Materialized Views/Query Rewrite (Doc ID 577293.1)
Improving Performance using Query Rewrite in Oracle Database 10g http://www.oracle.com/technetwork/middleware/bi-foundation/twp-bi-dw-improve-perf-using-query--133436.pdf
How To Use DBMS_MVIEW.EXPLAIN_REWRITE and EXPLAIN_MVIEW To Diagnose Query Rewrite and Fast Refresh Problems (Doc ID 149815.1)
Manual Diagnosis & Troubleshooting for Query Rewrite Problems (Doc ID 236486.1)    <- good stuff
https://gavinsoorma.com/2011/06/using-dbms_advanced_rewrite-with-an-hint-to-change-the-execution-plan/



Oracle by example http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/10g/r2/prod/bidw/mv/mv_otn.htm
https://gerardnico.com/db/oracle/query_rewriting
https://blog.go-faster.co.uk/2016/09/dbmsapplicationinfo.html
http://www.java2s.com/Tutorial/Oracle/0601__System-Packages/ReadOriginalValuesandDisplay.htm
https://gist.githubusercontent.com/richardpascual/b8674881dac0280f606d/raw/c2537c5e1a4a8d93128632a803d02946b8a0fcbb/oracle-plsql-exception-handling.sql
{{{
-- This is an example framework of how to implement tracking of package/procedure/function plsql
-- procedural code execution (Oracle) with a built-in package: DBMS_APPLICATION_INFO; the intent
-- is to set this syntax layout up so that developers can have a flexible, customizable syntax
-- to accomplish this task.

-- Look for my project here on Github, the ORA-EXCEPTION-HANDLER. (richardpascual)

CREATE OR REPLACE PROCEDURE process_tweet_log IS
   c_client_info   CONSTANT V$SESSION.CLIENT_INFO%TYPE := 'DEV-DATABASE, OPS408 SCHEMA';
   c_module_name   CONSTANT V$SQLAREA.MODULE%TYPE      := 'PROCESS_TWEET_LOG'; -- the V$SQLAREA column is MODULE, not MODULE_NAME
   l_action        V$SQLAREA.ACTION%TYPE := NULL;
BEGIN
   DBMS_APPLICATION_INFO.SET_CLIENT_INFO (client_info => c_client_info);
   DBMS_APPLICATION_INFO.SET_MODULE (module_name => c_module_name, action_name => l_action);

   -- Initialize Twitterizer
   l_action := 'LoadTweetLog';
   DBMS_APPLICATION_INFO.SET_ACTION (action_name => l_action);

   -- ... begin Tweet Log Loading Process here.
   -- <more PL/SQL code here>

   -- Count Tweets by seven demographic dimensions
   l_action := 'CountBySeven';
   DBMS_APPLICATION_INFO.SET_ACTION (action_name => l_action);

   -- ... begin Tweet Log Counting Process here.
   -- <more PL/SQL code here>

EXCEPTION
   WHEN OTHERS THEN
      err_pkg.handle; -- the single exception call from the ORA-EXCEPTION-HANDLER project

END;
/
}}}
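With the procedure above running, the instrumentation can be read back from the session views; a minimal sketch (the MODULE value matches the constant in the example):

{{{
-- tags set via DBMS_APPLICATION_INFO are visible per session
SELECT sid, serial#, client_info, module, action
FROM   v$session
WHERE  module = 'PROCESS_TWEET_LOG';
}}}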


! quirks 

!! if the SQL is hard parsed, the initial tag becomes the permanent tag
DBMS_APPLICATION_INFO and V$SQL/V$SQLAREA: the MODULE/ACTION recorded for a statement are the values in effect at hard parse time, and they stick to that cursor for its lifetime even if later executions run under a different action.
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:10250504327860
<<<

Or use DBMS_PDB package to construct an XML file describing the non-CDB data files to
plug the non-CDB into the CDB as a PDB. This method presupposes that the non-CDB is
an Oracle 12c database
----------
Using the DBMS_PDB package is the easiest option.
If the DBMS_PDB package is not used, then using export/import is usually simpler than using
GoldenGate replication, but export/import might require more down time during the switch from
the non-CDB to the PDB.
If you choose to use export/import, and you are moving a whole non-CDB into the CDB, then
transportable databases (TDB) is usually the best option. If you choose to export and import
part of a non-CDB into a CDB, then transportable tablespaces (TTS) is the best option.
----------

DBMS_PDB step by step

The technique with DBMS_PDB package creates an unplugged PDB from an Oracle database
12c non-CDB. The unplugged PDB can then be plugged in to a CDB as a new PDB. Running
the DBMS_PDB.DESCRIBE procedure on the non-CDB generates an XML file that describes
the future PDB. You can plug in the unplugged PDB in the same way that you can plug in any
unplugged PDB, using the XML file and the non-CDB data files. The steps are the following:
1. Connect to the non-CDB ORCL, ensure that it is in a transactionally-consistent state, and place it in read-only mode.
2. Execute the DBMS_PDB.DESCRIBE procedure, providing the file name that will be
generated. The XML file contains the list of data files to be plugged.
The XML file and the data files described in the XML file comprise an unplugged PDB.
3. Connect to the target CDB to plug the unplugged ORCL as PDB2.
4. Before plugging the unplugged PDB, make sure it can be plugged into a CDB using the
DBMS_PDB.CHECK_PLUG_COMPATIBILITY procedure. Execute the CREATE
PLUGGABLE DATABASE statement with the clause USING 'XMLfile'. The list of data files
from ORCL is read from the XML file to locate and name the data files of PDB2.
5. Run the ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql script to delete
unnecessary metadata from PDB SYSTEM tablespace. This script must be run before the
PDB can be opened for the first time. This script is required for plugging non-CDBs only.
6. Open PDB2 to verify that the application tables are in PDB2.

<<<
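The six steps above can be sketched as follows (the names ORCL and PDB2 and the XML path come from the note; treat this as an outline, not a tested script):

{{{
-- 1-2. on the non-CDB ORCL: make it read-only and describe it
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE OPEN READ ONLY;
BEGIN
  DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/orcl.xml');
END;
/

-- 3-4. on the target CDB: check compatibility, then plug in
SET SERVEROUTPUT ON
DECLARE
  l_ok BOOLEAN;
BEGIN
  l_ok := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(pdb_descr_file => '/tmp/orcl.xml');
  DBMS_OUTPUT.PUT_LINE(CASE WHEN l_ok THEN 'compatible' ELSE 'NOT compatible' END);
END;
/
CREATE PLUGGABLE DATABASE pdb2 USING '/tmp/orcl.xml' NOCOPY;

-- 5-6. clean up non-CDB metadata, then open and verify
ALTER SESSION SET CONTAINER = pdb2;
@?/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE pdb2 OPEN;
}}}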
http://externaltable.blogspot.com/2014/04/a-closer-look-at-calibrateio.html
Thread: Answers to "Why are my jobs not running?"
http://forums.oracle.com/forums/thread.jspa?threadID=646581

http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/sqldev/r30/DBMSScheduler/DBMSScheduler.htm
http://www.oracle-base.com/articles/misc/SqlDeveloper31SchedulerSupport.php



-- other scheduling software
http://en.wikipedia.org/wiki/CA_Workload_Automation_AE



23 Managing Automatic System Tasks Using the Maintenance Window http://docs.oracle.com/cd/B19306_01/server.102/b14231/tasks.htm
CREATE_WINDOW (new 11g overload) http://psoug.org/reference/dbms_scheduler.html
Oracle Scheduling Resource Manager Plan http://www.dba-oracle.com/job_scheduling/resource_manager_plan.htm, http://www.dba-oracle.com/job_scheduling/windows.htm
Examples of Using the Scheduler http://docs.oracle.com/cd/B28359_01/server.111/b28310/schedadmin006.htm
Configuring Oracle Scheduler - Task 2B: Creating Windows http://docs.oracle.com/cd/B28359_01/server.111/b28310/schedadmin001.htm
http://www.oracle-base.com/articles/10g/Scheduler10g.php
http://www.oracle-base.com/articles/11g/SchedulerEnhancements_11gR2.php
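From the links above, a maintenance window is just a DBMS_SCHEDULER object tied to a resource plan. A minimal sketch (the window name, plan, and schedule are made up for illustration):

{{{
BEGIN
  DBMS_SCHEDULER.CREATE_WINDOW(
    window_name     => 'NIGHT_BATCH_WINDOW',           -- hypothetical name
    resource_plan   => 'DEFAULT_MAINTENANCE_PLAN',
    repeat_interval => 'FREQ=DAILY;BYHOUR=22;BYMINUTE=0;BYSECOND=0',
    duration        => NUMTODSINTERVAL(4, 'HOUR'),
    comments        => 'nightly batch window');
END;
/
}}}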
{{{

SQL> var start_time number;
SQL> exec :start_time:=DBMS_UTILITY.GET_TIME ;

PL/SQL procedure successfully completed.

SQL> var end_time number;
SQL> exec :end_time:=DBMS_UTILITY.GET_TIME ;

PL/SQL procedure successfully completed.

SQL> select (:end_time-:start_time)/100 diff_in_sec from dual;

SQL> select (:end_time-:start_time)*10 diff_in_ms from dual;

}}}
Procedure for renaming a database - Non-ASM - DBNEWID
http://www.evernote.com/shard/s48/sh/f00030b2-988c-4d9c-b4db-35dfd1bb6593/12702f51c6046d00cb2bff74c190c7e4
Init.ora Parameter "DB_WRITERS" [Port Specific] Reference Note 
  Doc ID:  Note:35268.1 

Top 8 init.ora Parameters Affecting Performance 
  Doc ID:  Note:100709.1 


DB_WRITER_PROCESSES or DBWR_IO_SLAVES? 
  Doc ID:  Note:97291.1 


Database Writer and Buffer Management 
  Doc ID:  Note:91062.1 


TROUBLESHOOTING GUIDE: Common Performance Tuning Issues 
  Doc ID:  Note:106285.1 


Systemwide Tuning using UTLESTAT Reports in Oracle7/8 
  Doc ID:  Note:62161.1 

DBWR in Oracle8i 
  Doc ID:  Note:105518.1 


DEC ALPHA: RAW DISK AND ASYNC_IO 
  Doc ID:  Note:1029511.6 


Understanding and Tuning Buffer Cache and DBWR 
  Doc ID:  Note:62172.1 


Asynchronous I/O and Multiple Database Writers 
  Doc ID:  Note:69560.1 


VIEW: "V$LOGFILE" Reference Note 
  Doc ID:  Note:43746.1 




CRITICAL BUGS LIST FOR V7.3.2.XX 
  Doc ID:  Note:1023229.6 


How to Resize a Datafile 
  Doc ID:  Note:1029252.6 


How to Resolve ORA-03297 When Resizing a Datafile by Finding the Table Highwatermark 
  Doc ID:  Note:130866.1 


Oracle8 and Oracle8i Database Limits 
  Doc ID:  Note:114019.1 


Oracle9i Database Limits 
  Doc ID:  Note:217143.1 


Database and File Size Limits in 10G release 2 
  Doc ID:  Note:336186.1 




ORA-00346: REDO LOG FILE HAS STATUS 'STALE' 
  Doc ID:  Note:1014824.6 


Archiver Best Practices 
  Doc ID:  Note:45042.1 


Shutdown Immediate Hangs 
  Doc ID:  Note:179192.1 


http://sarojkd.tripod.com/B001.html

http://www.riddle.ru/mirrors/oracledocs/server/sad73/ch505.html
https://blogs.oracle.com/oem/entry/database_as_a_service_on
!!!! THIS TIDDLER IS ONGOING... 


I've done a couple of tests lately on my Windows laptop (Intel i5) and also on a 13" MacBook Air.

To summarize the screenshots that you'll see below, it's divided into four test cases: 

''1) The effect of DD to /dev/null''
* /dev/null is a special file that acts like a black hole. This test shows that you must use it with caution when doing your IO tests, or you may end up with super-bloated numbers. One common misuse is doing DD from /dev/zero straight to /dev/null... see more in the screenshots below...

''2) DD Write, Read, and Read Write''
This shows how you can properly do Write, Read, and Read Write tests using DD
* Write - if=/dev/zero of=testfile.txt
* Read - if=testfile.txt of=/dev/null
* Read Write - if=testfile.txt of=testfile2.txt

{{{
time dd bs=16384 if=/Users/gaja/Data/Downloads/Software/"Rosetta Stone Version 3 Update.dmg" of=/dev/null
sync; time dd bs=1048576 count=4096 if=/dev/zero of=/tmp/testfile12.txt; sync;
}}}
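The write/read/read-write pattern from the list above can be sketched at small scale like this (paths and sizes are arbitrary for illustration; a real benchmark should use files larger than RAM and `sync` between runs):

```shell
# write test: stream zeros into a new file (1 MiB = 256 x 4 KiB blocks)
dd if=/dev/zero of=/tmp/dd_write_test bs=4096 count=256 2>/dev/null

# read test: stream the file into the /dev/null black hole
dd if=/tmp/dd_write_test of=/dev/null bs=4096 2>/dev/null

# read-write test: copy the file to a second file
dd if=/tmp/dd_write_test of=/tmp/dd_rw_test bs=4096 2>/dev/null

# both files should be exactly 1 MiB
wc -c < /tmp/dd_write_test
wc -c < /tmp/dd_rw_test
```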

''3) IOMeter tests''
I was never successful in doing a pure read operation using DD. To get a read-only test I had to use IOMeter.

''4) Actual MacBook Air test - part2''
Having my tests above in mind, I was able to get hold of a MacBook Air and did some tests on various block sizes. 





So here it goes... 

! The Effect of DD

!!!! So fast
<<<
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TZbUNMaLDNI/AAAAAAAABJ4/nlslGxySL34/s800/so%20fast.png]]
<<<

!!!! First Run 
<<<
[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TZY0yHjByUI/AAAAAAAABJQ/4mdJxRQ175c/s800/test10-the%20effect%20of%20dd.png]]

[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TZY2iptzAPI/AAAAAAAABJY/JYoZJBuASuc/s800/test10-the%20effect%20of%20dd-after%20cancel.png]]
<<<

!!!! Another Run
<<<
[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TZY1wan6kFI/AAAAAAAABJU/Omh7uM1RaXs/s800/test11-the%20effect%20of%20dd%2016k%20bs.png]]


[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZY4vB6IWOI/AAAAAAAABJc/etlhojzy-1U/s800/test11-the%20effect%20of%20dd%2016k%20bs-after%20cancel.png]]
<<<

! DD Write, Read, and Read Write

!!!! DD write
<<<
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZY8lKpct2I/AAAAAAAABJg/NVy9RbQLDhM/s800/x2.png]]

[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZY8lGdoyBI/AAAAAAAABJk/Qz_gQ7huEyw/s800/x3.png]]
<<<

!!!! DD read
<<<
[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZb_NiIu0eI/AAAAAAAABJ8/jh0RDLxjU8o/s800/ddread.png]]
<<<


!!!! DD read write
<<<
[img[picturename| https://lh4.googleusercontent.com/_F2x5WXOJ6Q8/TZY8lImV9CI/AAAAAAAABJo/oz8C7xwXgVE/s800/x5.png]]

[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZY8nLguFzI/AAAAAAAABJs/FYIpx3O6aRg/s800/x6.png]]
<<<

! IOMeter tests

!!!! Read
<<<

[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZY9nijeNlI/AAAAAAAABJ0/2I8e8KR-JX0/s800/test19-dynamo1M%2050outstanding%20all%20read-sequential.png]]
<<<

!!!! Write
<<<

[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZY9nRUTX5I/AAAAAAAABJw/1NWZeou2f7c/s800/test18-dynamo1M%2050outstanding%20all%20write-sequential.png]]
<<<

! Actual MacBook Air test 13"

1MB http://db.tt/uVWYLkt
736 IOPS peak
188.6 MB/s peak

512K http://db.tt/sHsg8RV
645 IOPS peak
182.9 MB/s peak

16K http://db.tt/Z8zf6SO
631 IOPS peak
167.5 MB/s peak

8K http://db.tt/8UCAOOV
147 IOPS peak
145 MB/s peak


! References

Apple's 2010 MacBook Air (11 & 13 inch) Thoroughly Reviewed
http://www.anandtech.com/Show/Index/3991?cPage=13&all=False&sort=0&page=4&slug=apples-2010-macbook-air-11-13inch-reviewed <- GOOD STUFF
Support and Q&A for Solid-State Drives
http://blogs.msdn.com/b/e7/archive/2009/05/05/support-and-q-a-for-solid-state-drives-and.aspx <- GOOD STUFF
http://www.anandtech.com/show/2738 <- GOOD STUFF REVIEW + TRIM + NICE EXPLANATIONS
http://www.usenix.org/event/usenix08/tech/full_papers/agrawal/agrawal_html/index.html <- GOOD STUFF PAPER


https://www.oracle.com/technetwork/testcontent/o26performance-096310.html
{{{
Faster Batch Processing
By Mark Rittman, Oracle ACE

LOG ERRORS handles errors quickly and simplifies batch loading.

When you need to load millions of rows of data into a table, the most efficient way is usually to use an INSERT, UPDATE, or MERGE statement to process your data in bulk. Similarly, if you want to delete thousands of rows, using a DELETE statement is usually faster than using procedural code. But what if the data you intend to load contains values that might cause an integrity or check constraint to be violated, or what if some values are too big for the column they are to be loaded into?

You may well have loaded 999,999 rows into your table, but that last row, which violates a check constraint, causes the whole statement to fail and roll back. In situations such as this, you have to use an alternative approach to loading your data.

For example, if your data is held in a file, you can use SQL*Loader to automatically handle data that raises an error, but then you have to put together a control file, run SQL*Loader from the command line, and check the output file and the bad datafile to detect any errors.

If, however, your data is held in a table or another object, you can write a procedure or an anonymous block to process your data row by row, loading the valid rows and using exception handling to process those rows that raise an error. You might even use BULK COLLECT and FORALL to handle data in your PL/SQL routine more efficiently, but even with these improvements, handling your data in this manner is still much slower than performing a bulk load by using a direct-path INSERT DML statement.

Until now, you could take advantage of the set-based performance of INSERT, UPDATE, MERGE, and DELETE statements only if you knew that your data was free from errors; in all other circumstances, you needed to resort to slower alternatives. All of this changes with the release of Oracle Database 10g Release 2, which introduces a new SQL feature called DML error logging.

Efficient Error Handling
DML error logging enables you to write INSERT, UPDATE, MERGE, or DELETE statements that automatically deal with certain constraint violations. With this new feature, you use the new LOG ERRORS clause in your DML statement and Oracle Database automatically handles exceptions, writing erroneous data and details of the error message to an error logging table you've created.

Before you can use the LOG ERRORS clause, you need to create an error logging table, either manually with DDL or automatically with the CREATE_ERROR_LOG procedure in the DBMS_ERRLOG package, whose specification is shown in Listing 1.

Code Listing 1: DBMS_ERRLOG.CREATE_ERROR_LOG parameters 

DBMS_ERRLOG.CREATE_ERROR_LOG (
        dml_table_name                  IN VARCHAR2,
        err_log_table_name              IN VARCHAR2 := NULL,
        err_log_table_owner             IN VARCHAR2 := NULL,
        err_log_table_space             IN VARCHAR2 := NULL,
        skip_unsupported                IN BOOLEAN  := FALSE);


All the parameters except DML_TABLE_NAME are optional, and if the optional details are omitted, the name of the error logging table will be ERR$_ together with the first 25 characters of the DML_TABLE_NAME. The SKIP_UNSUPPORTED parameter, if set to TRUE, instructs the error logging clause to skip over LONG, LOB, and object type columns that are not supported and omit them from the error logging table.

With the error logging table created, you can add the error logging clause to most DML statements, using the following syntax: 

LOG ERRORS [INTO [schema.]table] 
[ (simple_expression) ] 
[ REJECT LIMIT  {integer|UNLIMITED} ]


The INTO clause is optional; if you omit it, the error logging clause will put errors into a table with the same name format used by the CREATE_ERROR_LOG procedure. SIMPLE_EXPRESSION is any expression that would evaluate to a character string and is used for tagging rows in the error table to indicate the process that caused the error, the time of the data load, and so on. REJECT LIMIT can be set to any integer or UNLIMITED and specifies the number of errors that can occur before the statement fails. This value is optional, but if it is omitted, the default value is 0, which effectively disables the error logging feature.

The following types of errors are handled by the error logging clause: 

Column values that are too large

Constraint violations (NOT NULL, unique, referential, and check constraints), except in certain circumstances detailed below

Errors raised during trigger execution

Errors resulting from type conversion between a column in a subquery and the corresponding column of the table

Partition mapping errors

The following conditions cause the statement to fail and roll back without invoking the error logging capability: 

Violated deferred constraints

Out-of-space errors

Any direct-path INSERT operation (INSERT or MERGE) that raises a unique constraint or index violation

Any UPDATE operation (UPDATE or MERGE) that raises a unique constraint or index violation

To show how the error logging clause works in practice, consider the following scenario, in which data needs to be loaded in batch from one table to another:

You have heard of the new error logging feature in Oracle Database 10g Release 2 and want to compare this new approach with your previous method of writing a PL/SQL package. To do this, you will use data held in the SH sample schema to try out each approach.

Using DML Error Logging
In this example, you will use the data in the SALES table in the SH sample schema, together with values from a sequence, to create a source table for the error logging test. This example assumes that the test schema is called ERRLOG_TEST and that it has the SELECT object privilege for the SH.SALES table. Create the source data and a target table called SALES_TARGET, based on the definition of the SALES_SRC table, and add a check constraint to the AMOUNT_SOLD column to allow only values greater than 0. Listing 2 shows the DDL for creating the source and target tables.

Code Listing 2: Creating the SALES_SRC and SALES_TARGET tables 

SQL> CREATE SEQUENCE sales_id_seq;
Sequence created.

SQL> CREATE TABLE sales_src
  2    AS
  3    SELECT sales_id_seq.nextval AS "SALES_ID"
  4    ,         cust_id
  5    ,         prod_id
  6    ,         channel_id
  7    ,         time_id
  8    ,         promo_id
  9    ,         amount_sold
 10    ,        quantity_sold
 11   FROM   sh.sales
 12   ;
Table created.

SQL> SELECT count(*)
  2    ,         min(sales_id)
  3    ,         max(sales_id)
  4    FROM   sales_src
  5    ;

  COUNT(*) MIN(SALES_ID) MAX(SALES_ID)
---------- ------------- -------------
    918843             1        918843

SQL> CREATE TABLE sales_target
  2    AS
  3    SELECT *
  4    FROM   sales_src
  5    WHERE 1=0
  6    ;
Table created.

SQL> ALTER TABLE sales_target
  2    ADD CONSTRAINT amount_sold_chk
  3    CHECK (amount_sold > 0)
  4    ENABLE
  5    VALIDATE
  6    ;
Table altered.


Note from the descriptions of the tables in Listing 2 that the SALES_TARGET and SALES_SRC tables have automatically inherited the NOT NULL constraints that were present on the SH.SALES table because you created these tables by using a CREATE TABLE ... AS SELECT statement that copies across these column properties when you are creating a table.

You now introduce some errors into your source data, so that you can subsequently test the error logging feature. Note that because one of the errors you want to test for is a NOT NULL constraint violation on the PROMO_ID column, you need to remove this constraint from the SALES_SRC table before adding null values. The following shows the SQL used to create the data errors. 

SQL> ALTER TABLE sales_src
  2    MODIFY promo_id NULL
  3    ;
Table altered.

SQL> UPDATE sales_src
  2    SET      promo_id = null
  3    WHERE  sales_id BETWEEN 5000 and 5005
  4    ;
6 rows updated.

SQL> UPDATE sales_src
  2    SET      amount_sold = 0
  3    WHERE  sales_id IN (1000,2000,3000)
  4    ;
3 rows updated.

SQL>  COMMIT;
Commit complete.


Now that your source and target tables are prepared, you can use the DBMS_ERRLOG.CREATE_ERROR_LOG procedure to create the error logging table. Supply the name of the table on which the error logging table is based; the procedure will use default values for the rest of the parameters. Listing 3 shows the creation and description of the error logging table.

Code Listing 3: Creating the err$_sales_target error logging table 

SQL> BEGIN
  2       DBMS_ERRLOG.CREATE_ERROR_LOG('SALES_TARGET');
  3    END;
  4    /
PL/SQL procedure successfully completed.

SQL> DESCRIBE err$_sales_target;
 Name                    Null?   Type
 -------------------     ----    ------------- 
 ORA_ERR_NUMBER$                 NUMBER
 ORA_ERR_MESG$                   VARCHAR2(2000)
 ORA_ERR_ROWID$                  ROWID
 ORA_ERR_OPTYP$                  VARCHAR2(2)
 ORA_ERR_TAG$                    VARCHAR2(2000)
 SALES_ID                        VARCHAR2(4000)
 CUST_ID                         VARCHAR2(4000)
 PROD_ID                         VARCHAR2(4000)
 CHANNEL_ID                      VARCHAR2(4000)
 TIME_ID                         VARCHAR2(4000)
 PROMO_ID                        VARCHAR2(4000)
 AMOUNT_SOLD                     VARCHAR2(4000)
 QUANTITY_SOLD                   VARCHAR2(4000)


Note that the CREATE_ERROR_LOG procedure creates five ORA_ERR_% columns, to hold the error number, error message, ROWID, operation type, and tag you will supply when using the error logging clause. Datatypes have been automatically chosen for the table columns that will allow you to store numbers and characters.

The first approach is to load data into the SALES_TARGET table by using a direct-path INSERT statement. This is normally the most efficient way to load data into a table while still making the DML recoverable, but in the past, this INSERT would have failed, because the check constraints on the SALES_TARGET table would have been violated. Listing 4 shows this INSERT and the check constraint violation.

Code Listing 4: Violating the check constraint with direct-path INSERT 

SQL> SET SERVEROUTPUT ON
SQL> SET LINESIZE 150
SQL> SET TIMING ON
SQL> ALTER SESSION SET SQL_TRACE = TRUE;

Session altered.
Elapsed: 00:00:00.04

SQL> INSERT  /*+ APPEND */
  2    INTO     sales_target
  3    SELECT  *
  4    FROM    sales_src
  5    ;
INSERT /*+ APPEND */
*
ERROR at line 1:
ORA-02290: check constraint (ERRLOG_TEST.AMOUNT_SOLD_CHK) violated

Elapsed: 00:00:00.15


If you add the new LOG ERRORS clause to the INSERT statement, however, the statement will complete successfully and save any rows that violate the table constraints to the error logging table, as shown in Listing 5.

Code Listing 5: Violating the constraints and logging the errors with LOG ERRORS 

SQL> INSERT  /*+ APPEND */
  2    INTO     sales_target
  3    SELECT  *
  4    FROM    sales_src
  5    LOG ERRORS
  6    REJECT LIMIT UNLIMITED
  7    ;
918834 rows created.
Elapsed: 00:00:05.75

SQL> SELECT count(*)
  2    FROM   err$_sales_target
  3    ;

  COUNT(*)
----------
         9

Elapsed: 00:00:00.06

SQL> COLUMN ora_err_mesg$ FORMAT A50
SQL> SELECT   ora_err_number$
  2    ,           ora_err_mesg$
  3    FROM     err$_sales_target
  4    ;

ORA_ERR_NUMBER$         ORA_ERR_MESG$
---------------         ------------------------------

        2290            ORA-02290: check constraint (ERRLOG_TEST.AMOUNT_
                        SOLD_CHK) violated

        2290            ORA-02290: check constraint (ERRLOG_TEST.AMOUNT_
                        SOLD_CHK) violated

        2290            ORA-02290: check constraint (ERRLOG_TEST.AMOUNT_
                        SOLD_CHK) violated

        1400            ORA-01400: cannot insert NULL into ("ERRLOG_TEST".
                        "SALES_TARGET"."PROMO_ID")

        1400            ORA-01400: cannot insert NULL into ("ERRLOG_TEST".
                               "SALES_TARGET"."PROMO_ID")

        1400            ORA-01400: cannot insert NULL into ("ERRLOG_TEST".
                        "SALES_TARGET"."PROMO_ID")

        1400            ORA-01400: cannot insert NULL into ("ERRLOG_TEST".
                        "SALES_TARGET"."PROMO_ID")

        1400            ORA-01400: cannot insert NULL into ("ERRLOG_TEST".
                        "SALES_TARGET"."PROMO_ID")

        1400            ORA-01400: cannot insert NULL into ("ERRLOG_TEST".
                        "SALES_TARGET"."PROMO_ID")

9 rows selected.

Elapsed: 00:00:00.28


Listing 5 shows that when this INSERT statement uses direct path to insert rows above the table high-water mark, the process takes 5.75 seconds and adds nine rows to the error logging table. Try the same statement again, this time with a conventional-path INSERT, as shown in Listing 6.

Code Listing 6: Violating the check and NOT NULL constraints with conventional-path INSERT 

SQL> TRUNCATE TABLE sales_target;

Table truncated.

Elapsed: 00:00:06.07

SQL> TRUNCATE TABLE err$_sales_target;

Table truncated.

Elapsed: 00:00:00.25

SQL> INSERT INTO sales_target
  2  SELECT *
  3  FROM   sales_src
  4  LOG ERRORS
  5  REJECT LIMIT UNLIMITED
  6  ;

918834 rows created.

Elapsed: 00:00:30.65


As you might expect, the results in Listing 6 show that the direct-path load is much faster than the conventional-path load, because the former writes directly to disk whereas the latter writes to the buffer cache. The LOG ERRORS clause also causes kernel device table (KDT) buffering to be disabled when you're performing a conventional-path INSERT. One reason you might want to nevertheless use a conventional-path INSERT with error logging is that direct-path loads will fail when a unique constraint or index violation occurs, whereas a conventional-path load will log these errors to the error logging table and then continue. Oracle Database will also ignore the /*+ APPEND */ hint when the table you are inserting into contains foreign key constraints, because you cannot have these enabled when working in direct-path mode.

Now compare these direct- and conventional-path loading timings with the timing for using a PL/SQL anonymous block. You know that the traditional way of declaring a cursor against the source table—reading it row by row, inserting the contents into the target table, and dealing with exceptions as they occur—will be slow, but the column by Tom Kyte in the September/October 2003 issue of Oracle Magazine ("On HTML DB, Bulking Up, and Speeding") shows how BULK COLLECT, FORALL, and SAVE EXCEPTIONS could be used to process dirty data in a more efficient manner. How does Kyte's 2003 approach compare with using DML error logging? A version of Kyte's approach that, like the LOG ERRORS clause, writes error messages to an error logging table is shown in Listing 7.

Code Listing 7: PL/SQL anonymous block doing row-by-row INSERT 
 

SQL> CREATE TABLE sales_target_errors
  2  (sql_err_mesg varchar2(4000))
  3  /

Table created.
Elapsed: 00:00:00.28
SQL>  DECLARE
  2        TYPE array IS TABLE OF sales_target%ROWTYPE
  3           INDEX BY BINARY_INTEGER;
  4        sales_src_arr   ARRAY;
  5        errors          NUMBER;
  6        error_mesg     VARCHAR2(255);
  7        bulk_error      EXCEPTION;
  8        l_cnt           NUMBER := 0;
  9        PRAGMA exception_init
 10              (bulk_error, -24381);
 11        CURSOR c IS 
 12           SELECT * 
 13           FROM   sales_src;
 14        BEGIN
 15        OPEN c;
 16        LOOP
 17          FETCH c 
 18             BULK COLLECT 
 19             INTO sales_src_arr 
 20             LIMIT 100;
 21          BEGIN
 22             FORALL i IN 1 .. sales_src_arr.count 
 23                      SAVE EXCEPTIONS
 24               INSERT INTO sales_target VALUES sales_src_arr(i);
 25          EXCEPTION
 26          WHEN bulk_error THEN
 27            errors := 
 28               SQL%BULK_EXCEPTIONS.COUNT;
 29            l_cnt := l_cnt + errors;
 30            FOR i IN 1..errors LOOP
 31              error_mesg := SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE);
 32              INSERT INTO sales_target_errors 
 33              VALUES     (error_mesg);
 34       END LOOP;
 35          END;
 36          EXIT WHEN c%NOTFOUND;
 37      
 38       END LOOP;
 39       CLOSE c;
 40       DBMS_OUTPUT.PUT_LINE
 41        ( l_cnt || ' total errors' );
 42       END;
 43  /
9 total errors

PL/SQL procedure successfully completed.

Elapsed: 00:00:10.46
SQL> alter session set sql_trace = false;

Session altered.

Elapsed: 00:00:00.03
SQL> select * from sales_target_errors;

SQL_ERR_MESG

---------------------------------
ORA-02290: check constraint (.) violated
ORA-02290: check constraint (.) violated
ORA-02290: check constraint (.) violated
ORA-01400: cannot insert NULL into ()
ORA-01400: cannot insert NULL into ()
ORA-01400: cannot insert NULL into ()
ORA-01400: cannot insert NULL into ()
ORA-01400: cannot insert NULL into ()
ORA-01400: cannot insert NULL into ()

9 rows selected.

Elapsed: 00:00:00.21

Processing your data with this method takes 10.46 seconds, longer than the 5.75 seconds when using DML error logging and a direct-path INSERT but quicker than using a conventional-path INSERT. The results are conclusive: If you use DML error logging and you can insert your data with direct path, your batches can load an order of magnitude faster than if you processed your data row by row, using PL/SQL, even if you take advantage of features such as BULK COLLECT, FORALL, and SAVE EXCEPTIONS.

Finally, use TKPROF to format the SQL trace file you generated during your testing and check the explain plan and statistics for the direct-path insertion, shown in Listing 8. Note that the insertions into the error logging table are carried out after the INSERT has taken place and that these rows will stay in the error logging table even if the main statement fails and rolls back.

Code Listing 8: Using TKPROF to look at direct-path INSERT statistics 

INSERT /*+ APPEND */
INTO   sales_target
SELECT *
FROM   sales_src
LOG ERRORS
REJECT LIMIT UNLIMITED

call    count   cpu     elapsed disk    query   current    rows
---     ---     ----     ----   ----     ----   ----       ----
Parse   1       0.01    0.10       0       0       0           0
Execute 1       2.84    5.52    3460    5226    6659      918834
Fetch   0       0.00    0.00       0       0       0           0
---     ---     ----     ----   ----     ----   ----       ----
total   2       2.85    5.62    3460    5226    6659      918834

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 99  
Rows     Row Source Operation
-------  ---------------------------------------------------
      1  LOAD AS SELECT  (cr=5907 pr=3462 pw=5066 time=5539104 us)
 918843   ERROR LOGGING  (cr=5094 pr=3460 pw=0 time=92811603 us)
 918843   TABLE ACCESS FULL SALES_SRC (cr=5075 pr=3458 pw=0 time=16547710 us)

***************************************************************************
INSERT INTO ERR$_SALES_TARGET (ORA_ERR_NUMBER$, ORA_ERR_MESG$, 
ORA_ERR_ROWID$,   ORA_ERR_OPTYP$, ORA_ERR_TAG$, SALES_ID, PROD_ID, 
CUST_ID, CHANNEL_ID, TIME_ID, PROMO_ID, AMOUNT_SOLD, QUANTITY_SOLD) 
VALUES
 (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13)
call    count   cpu     elapsed  disk   query   current rows
---      ---    ----    ----     ----   ----    ----    ----
Parse     1     0.00    0.00     0      0        0      0
Execute   9     0.00    0.01     2      4       39      9
Fetch     0     0.00    0.00     0      0        0      0
---      ---    ----    ----     ----   ----    ----    ----
total    10     0.00    0.01     2      4       39      9
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 99     (recursive depth: 1)

                            

Next, locate the part of the formatted trace file that represents the PL/SQL approach and note how the execution of the anonymous block is split into four parts: (1) the anonymous block is parsed, (2) the source data is bulk-collected into an array, (3) the array is unloaded into the target table, and (4) the exceptions are written to the error logging table. Listing 9 shows that, together, these steps take more than twice as long to execute as a direct-path INSERT statement with DML error logging yet involve more coding and store less information about the rows that returned errors.

Code Listing 9: TKPROF output for the PL/SQL INSERT approach 

DECLARE
   TYPE array IS TABLE OF sales_target%ROWTYPE
      INDEX BY BINARY_INTEGER;
   sales_src_arr   array;
   errors          NUMBER;
   error_mesg      VARCHAR2(255);
   bulk_error      EXCEPTION;
   l_cnt           NUMBER := 0;
   PRAGMA exception_init (bulk_error, -24381);
   CURSOR c IS
      SELECT *
      FROM   sales_src;
BEGIN
   OPEN c;
   LOOP
      FETCH c
         BULK COLLECT
         INTO sales_src_arr
         LIMIT 100;
      BEGIN
         FORALL i IN 1 .. sales_src_arr.COUNT
            SAVE EXCEPTIONS
            INSERT INTO sales_target VALUES sales_src_arr(i);
      EXCEPTION
         WHEN bulk_error THEN
            errors := SQL%BULK_EXCEPTIONS.COUNT;
            l_cnt  := l_cnt + errors;
            FOR i IN 1 .. errors LOOP
               error_mesg := SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE);
               INSERT INTO sales_target_errors
               VALUES     (error_mesg);
            END LOOP;
      END;
      EXIT WHEN c%NOTFOUND;
   END LOOP;
   CLOSE c;
   DBMS_OUTPUT.PUT_LINE
      ( l_cnt || ' total errors' );
END;

call    count   cpu     elapsed disk    query   current rows
---     ---     ----    ----    ----    ----    ----    ----
Parse     1     0.03    0.02    0       0       0       0
Execute   1     1.14    2.71    0       0       0       1
Fetch     0     0.00    0.00    0       0       0       0
---     ---     ----    ----    ----    ----    ----    ----
total     2     1.17    2.73    0       0       0       1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 99  

********************************************************************
SELECT * 
FROM
 SALES_SRC

call    count   cpu     elapsed disk    query   current rows
---     ---     ----    ----    ----    ----    ----    ----
Parse      1    0.00    0.00      0         0      0         0
Execute    1    0.00    0.00      0         0      0         0
Fetch   9189    3.60    3.23      0     14219      0    918843
---     ---     ----    ----    ----    ----    ----    ----
total   9191    3.60    3.23      0     14219      0    918843
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 99     (recursive depth: 1)
Rows     Row Source Operation
-------  ---------------------------------------------------
 918843  TABLE ACCESS FULL SALES_SRC (cr=14219 pr=0 pw=0 time=33083496 us)
**************************************************************************
INSERT INTO SALES_TARGET 
VALUES
 (:B1 ,:B2 ,:B3 ,:B4 ,:B5 ,:B6 ,:B7 ,:B8 ) 
call    count   cpu     elapsed disk    query   current rows
---     ---     ----    ----    ----    ----    ----    ----
Parse      1    0.00    0.00       0       0        0        0
Execute 9189    4.39    4.30       2    6886    54411   918834
Fetch      0    0.00    0.00       0       0        0        0
---     ---     ----    ----    ----    ----    ----    ----
total   9190    4.39    4.30       2    6886    54411   918834
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 99     (recursive depth: 1)
************************************************************************
INSERT INTO SALES_TARGET_ERRORS 
VALUES (:B1 )
call    count   cpu     elapsed disk    query   current rows
---     ---     ----    ----    ----    ----    ----    ----
Parse     1     0.00    0.00      0        0      0       0
Execute   9     0.00    0.01      2        4     30       9
Fetch     0     0.00    0.00      0        0      0       0
---     ---     ----    ----    ----    ----    ----    ----
total    10     0.00    0.01      2        4     30       9
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 99     (recursive depth: 1)
Leftover from Listing 2***************

SQL> DESC sales_src
 Name                    Null?          Type
 -------------------     ----     -------------
 SALES_ID                               NUMBER
 CUST_ID                 NOT NULL       NUMBER
 PROD_ID                 NOT NULL       NUMBER
 CHANNEL_ID              NOT NULL       NUMBER
 TIME_ID                 NOT NULL       DATE
 PROMO_ID                NOT NULL       NUMBER
 AMOUNT_SOLD             NOT NULL       NUMBER(10,2)
 QUANTITY_SOLD           NOT NULL       NUMBER(10,2)

SQL> DESC sales_target
 Name                    Null?          Type
 -------------------     ----    -------------
 SALES_ID                               NUMBER
 CUST_ID                 NOT NULL       NUMBER
 PROD_ID                 NOT NULL       NUMBER
 CHANNEL_ID              NOT NULL       NUMBER
 TIME_ID                 NOT NULL       DATE
 PROMO_ID                NOT NULL       NUMBER
 AMOUNT_SOLD             NOT NULL       NUMBER(10,2)
 QUANTITY_SOLD           NOT NULL       NUMBER(10,2)


Conclusion
In the past, if you wanted to load data into a table and gracefully deal with constraint violations or other DML errors, you either had to use a utility such as SQL*Loader or write a PL/SQL procedure that processed each row on a row-by-row basis. The new DML error logging feature in Oracle Database 10g Release 2 enables you to add a new LOG ERRORS clause to most DML statements that allows the operation to continue, writing errors to an error logging table. By using the new DML error logging feature, you can load your batches faster, have errors handled automatically, and do away with the need for custom-written error handling routines in your data loading process. 
}}}
http://www.linux-magazine.com/Online/Features/Will-DNF-Replace-Yum
https://fedoraproject.org/wiki/Features/DNF#Owner
https://github.com/rpm-software-management/dnf/wiki
http://dnf.readthedocs.org/en/latest/cli_vs_yum.html
https://en.wikipedia.org/wiki/DNF_(software)
http://www.liquidweb.com/kb/dnf-dandified-yum-command-examples-install-remove-upgrade-and-downgrade/
http://www.maketecheasier.com/dnf-package-manager/
https://anup07.wordpress.com/tag/dandified-yum/
https://blogs.oracle.com/XPSONHA/entry/using_dnfs_for_test_purposes
http://oracleprof.blogspot.com/2011/11/dnfs-configuration-and-hybrid-column.html

http://www.pythian.com/news/34425/oracle-direct-nfs-how-to-start/


Direct NFS vs Kernel NFS http://glennfawcett.wordpress.com/2009/12/14/direct-nfs-vs-kernel-nfs-bake-off-with-oracle-11g-and-solaris-and-the-winner-is/



! references 
Mount Options for Oracle files when used with NFS on NAS devices (Doc ID 359515.1)
Step by Step - Configure Direct NFS Client (DNFS) on Linux (11g) (Doc ID 762374.1)
How To Setup DNFS (Direct NFS) On Oracle Release 11.2 (Doc ID 1452614.1)
Direct NFS: FAQ (Doc ID 954425.1)
Configuring Oracle Exadata Backup https://docs.oracle.com/cd/E28223_01/html/E27586/configappl.html
https://taliphakanozturken.wordpress.com/2013/01/22/what-is-oracle-direct-nfs-how-to-enable-it/



!! dnfs 
http://blog.oracle48.nl/direct-nfs-configuring-and-network-considerations-in-practise/
http://www.dba86.com/docs/oracle/12.2/CWWIN/creating-an-oranfstab-file-for-direct-nfs-client.htm
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/ladbi/enabling-and-disabling-direct-nfs-client-control-of-nfs.html#GUID-27DDB55B-F79E-4F40-8228-5D94456E620B
dnfs_workshop_ebernal.pdf
Step by Step - Configure Direct NFS Client (DNFS) on Linux (Doc ID 762374.1)
This Note Covers Some Frequently Asked Questions Related to Direct NFS (Doc ID 1496040.1)
How To Setup DNFS (Direct NFS) On Oracle Release 11.2 (Doc ID 1452614.1)
Direct NFS monitoring and v$views (Doc ID 1495739.1)
mondnfs.sql
mondnfs_pre11204.sql
TESTCASE Step by Step - Configure Direct NFS Client (DNFS) on Windows (Doc ID 1468114.1)
Collecting The Required Information For Support To Troubleshot DNFS (Direct NFS) Issues (11.1, 11.2 & 12c). (Doc ID 1464567.1)
https://kb.netapp.com/app/answers/answer_view/a_id/1001816/~/best-practices-to-configure-a-dnfs-client-
https://www.oracle.com/technetwork/server-storage/sun-unified-storage/documentation/oracle11gr2-zfssa-bestprac-2255303.pdf
https://blog.pythian.com/oracle-direct-nfs-how-to-start/
How to Disable DNFS in Oracle (Doc ID 2247243.1)
Oracle DNS configuration for SCAN
http://www.oracle-base.com/articles/linux/DnsConfigurationForSCAN.php

Configuring a small DNS server for SCAN
http://blog.ronnyegner-consulting.de/2009/10/15/configuring-a-small-dns-server-for-scan/

LinuxHomeNetworking - DNS
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch18_:_Configuring_DNS
http://www.oracle.com/technetwork/database/oracledrcp11g-1-133381.pdf
Master Note: Overview of Database Resident Connection Pooling (DRCP) (Doc ID 1501987.1)
Is Database Resident Connection Pooling (DRCP) Supported with JDBC-THIN / JDBC-OCI ? (Doc ID 1087381.1)
How To Setup and Trace Database Resident Connection Pooling (DRCP) (Doc ID 567854.1)
How to tune Database Resident Connection Pooling(DRCP) for scalability (Doc ID 1391004.1)
Connecting to an already started session (Doc ID 1524070.1)

Managing Processes http://docs.oracle.com/cd/E11882_01/server.112/e25494/manproc.htm#ADMIN11000 <-- HOWTO
Example 9-6 Database Resident Connection Pooling Application http://docs.oracle.com/cd/E11882_01/appdev.112/e10646/oci09adv.htm#LNOCI18203
When to Use Connection Pooling, Session Pooling, or Neither http://docs.oracle.com/cd/E11882_01/appdev.112/e10646/oci09adv.htm#LNOCI16652
Database Resident Connection Pooling and LOGON/LOGOFF Triggers http://docs.oracle.com/cd/E11882_01/server.112/e25494/manproc.htm#ADMIN13400
Example 9-7 Connect String to Use for a Deployment in Dedicated Server Mode with DRCP Not Enabled http://docs.oracle.com/cd/E11882_01/appdev.112/e10646/oci09adv.htm#LNOCI18204
http://progeeking.com/2013/10/15/database-resident-connection-pooling-drcp/

! DRCP with JDBC
supported starting 12c
12c New Features http://docs.oracle.com/database/121/NEWFT/chapter12101.htm#NEWFT182
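As a rough sketch of how the single default pool is started and targeted (host, port, and service names below are placeholders; DBMS_CONNECTION_POOL and the (SERVER=POOLED) connect option are the documented interfaces):
{{{
-- As SYSDBA: start, and optionally size, the default pool
EXEC DBMS_CONNECTION_POOL.START_POOL;
EXEC DBMS_CONNECTION_POOL.CONFIGURE_POOL(minsize => 10, maxsize => 100);

-- Clients then request a pooled server via the connect string:
--   EZConnect:  myhost:1521/myservice:POOLED
--   tnsnames:   (CONNECT_DATA=(SERVICE_NAME=myservice)(SERVER=POOLED))
}}}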

! my thoughts on this 
<<<
On one of our databases here, there are multiple app schemas (around six), each tied to a JVM program and each with its own connection pool set to a max of around 250. The problem with this setup is that there is no "central pool" of connections, so at times one schema overloads the box relative to the others. If we bring the pool down to a lower value (30-50 min/max) so that app users queue for an available connection on the app side rather than on the DB side, we have to do it across the board on all schemas: lowering it for just one or two apps would not help them, because the other applications would still be loading the database. Lowering the value for everyone would make things faster, but with so many development teams involved it is politically hard to sell, especially at a bank. So unless you have a parallel environment (clone) where you can mimic the load with the proposed changes, it is hard to make progress on this effort. This is a genuinely tricky issue to control. 

We explored a couple of things: 

1) DRCP was a promising option for us, but it is only supported starting with 12c (12c New Features http://docs.oracle.com/database/121/NEWFT/chapter12101.htm#NEWFT182), and the feature has its own limitations: DRCP provides only one (default) pool, and there are feature restrictions (ASO, etc.).
2) Connection Rate Limiter on the listener side. There's a white paper http://www.oracle.com/technetwork/database/enterprise-edition/oraclenetservices-connectionratelim-133050.pdf, and it seems it just queues the sessions rather than killing them. You can see a demo here http://jhdba.wordpress.com/2010/09/02/using-the-connection_rate-parameter-to-stop-dos-attacks/
<<<
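For reference, the rate limiter from that white paper is configured in listener.ora along these lines (a sketch; the host and port are placeholders, and the parameter name is CONNECTION_RATE_<listener_name>):
{{{
# Cap new connections at ~10/sec on this endpoint; excess connection
# attempts are queued, not refused
CONNECTION_RATE_LISTENER = 10

LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521)(RATE_LIMIT = YES)))
}}}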
http://orainternals.wordpress.com/2010/03/25/rac-object-remastering-dynamic-remastering/
How DRM works in RAC cluster http://goo.gl/FZPZI
11.2 RAC ==> Dynamic Resource Mastering(DRM) (Reconfiguration) , LMS / LMD / LMON / GCS/ GES concepts explained http://goo.gl/dVZ1P
http://www.ads1st.com/rac-dynamic-resource-management.html
http://www.ora-solutions.net/web/2009/05/12/your-experience-with-rac-dynamic-remastering-drm-in-10gr2/
oracle racsig paper http://goo.gl/PrHNW

-- GOOD INTRO ON DTRACE
http://groups.google.com/group/comp.unix.solaris/msg/73d6407711b38014%3Fdq%3D%26start%3D50%26hl%3Den%26lr%3D%26ie%3DUTF-8?pli=1

-- Kyle - Getting Started with DTrace
http://dboptimizer.com/2011/12/13/getting-started-with-dtrace/

How to use DTrace and mdb to Interpret vmstat Statistics (Doc ID 1009494.1)

-- SYSTEM PRIVS PREREQ
http://blogs.oracle.com/yunpu/entry/giving_a_user_privileges_to

-- DTRACE ON MAC
http://www.mactech.com/articles/mactech/Vol.23/23.11/ExploringLeopardwithDTrace/index.html	
''top 10 commands on mac'' http://dtrace.org/blogs/brendan/2011/10/10/top-10-dtrace-scripts-for-mac-os-x/

-- LOCKSTAT
A Primer On Lockstat [ID 1005868.1]

-- MEMORY LEAK 
http://blogs.oracle.com/openomics/entry/investigating_memory_leaks_with_dtrace

-- FAST DUMP
How to Use the Oracle Solaris Fast Crash Dump Feature [ID 1128738.1]

-- CLOUD ANALYTICS
http://www.ustream.tv/recorded/12123446
https://blogs.oracle.com/brendan/entry/dtrace_cheatsheet
https://blogs.oracle.com/brendan/resource/DTrace-cheatsheet.pdf

MDB cheatsheet https://blogs.oracle.com/jwadams/entry/an_mdb_1_cheat_sheet



-- DTrace TCP
http://blogs.oracle.com/amaguire/entry/dtracing_tcp_congestion_control
https://blogs.oracle.com/wim/entry/trying_out_dtrace
https://blogs.oracle.com/OTNGarage/entry/how_to_get_started_using
''Create the test case script'' - this script generates a sustained, CPU-centric load by running md5sum in a loop
{{{
root@solaris:/home/oracle# dd if=/dev/urandom of=testfile count=20 bs=1024k

root@solaris:/home/oracle# cat md5.sh
#!/bin/sh

i=0

while [ 1 ]
do
   md5sum testfile
   i=`expr $i + 1`
   echo "Iteration: $i"
done
}}}


''Execute the script''
{{{
root@solaris:/home/oracle# sh md5.sh

sample output: 
...
root@solaris:/home/oracle# sh md5.sh
a5238634023667d128026bbc3d77c1cd  testfile
Iteration: 1
a5238634023667d128026bbc3d77c1cd  testfile
Iteration: 2
a5238634023667d128026bbc3d77c1cd  testfile
Iteration: 3
a5238634023667d128026bbc3d77c1cd  testfile
Iteration: 4
a5238634023667d128026bbc3d77c1cd  testfile
Iteration: 5
a5238634023667d128026bbc3d77c1cd  testfile
Iteration: 6
a5238634023667d128026bbc3d77c1cd  testfile
Iteration: 7
a5238634023667d128026bbc3d77c1cd  testfile
Iteration: 8
a5238634023667d128026bbc3d77c1cd  testfile
Iteration: 9
a5238634023667d128026bbc3d77c1cd  testfile
Iteration: 10
...
}}}


''Profile the session''
{{{
### TOP
root@solaris:/home/oracle# top -c
last pid: 20528;  load avg:  0.84,  0.87,  0.78;  up 0+00:35:49                                                                    14:13:39
102 processes: 100 sleeping, 1 running, 1 on cpu
CPU states:  0.0% idle, 74.0% user, 26.0% kernel,  0.0% iowait,  0.0% swap
Kernel: 526 ctxsw, 6184 trap, 315 intr, 5714 syscall, 25 fork, 5143 flt
Memory: 1024M phys mem, 53M free mem, 977M total swap, 976M free swap

   PID USERNAME NLWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
   742 oracle      3  59    0   61M   43M sleep    0:24  2.45% /usr/bin/Xorg :0 -nolisten tcp -br -auth /tmp/gdm-auth-cookies-rQaOBb/auth-f
  1009 oracle      2  59    0   89M   19M sleep    0:19  2.36% gnome-terminal
 19296 root        1  10    0 8948K 2312K sleep    0:01  2.20% sh md5.sh
   954 oracle     20  59    0   71M   51M sleep    0:18  0.86% /usr/bin/java -client -jar /usr/share/vpanels/vpanels-client.jar sysmon
  1992 root        1  59    0 7544K 1668K sleep    0:07  0.61% mpstat 1 100000
 20372 root        1  59    0 3920K 2260K cpu      0:00  0.28% top -c
   348 root        1  59    0 3668K 2180K sleep    0:00  0.02% /usr/lib/hal/hald-addon-acpi


### PRSTAT
   PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/LWPID
 25076 root      63  10 0.3 0.0 0.0 0.0 0.0  26   0 394 13K   0 md5sum/1
 19296 root     0.5 2.0 0.0 0.0 0.0 0.0  84  13 114  76  2K  72 bash/1
  1009 oracle   0.4 0.6 0.0 0.0 0.0 0.0  95 3.6 264   2  1K   0 gnome-termin/1
     5 root     0.0 0.6 0.0 0.0 0.0 0.0  99 0.2 112 104   0   0 zpool-rpool/13
   954 oracle   0.1 0.3 0.0 0.0 0.0  97 0.0 2.7 202   1 303   0 java/20
 24399 root     0.0 0.4 0.0 0.0 0.0 0.0 100 0.0  44   2 525   0 prstat/1
   742 oracle   0.1 0.3 0.0 0.0 0.0 0.0  99 0.1  95   0 785   0 Xorg/1
   978 oracle   0.1 0.2 0.0 0.0 0.0 0.0  99 0.6 120   0 510   0 xscreensaver/1
 24899 root     0.0 0.2 0.0 0.0 0.0 0.0 100 0.0   1   1 364   0 top/1
   954 oracle   0.1 0.1 0.0 0.0 0.0 100 0.0 0.1 101   0 202   0 java/19
   954 oracle   0.0 0.1 0.0 0.0 0.0 0.0  98 2.4  82   0  82   0 java/9
 23106 oracle   0.1 0.0 0.0 0.0 0.0 0.0 100 0.0  30   0 167   0 xscreensaver/1
   555 root     0.0 0.0 0.0 0.0 0.0 0.0 100 0.0  43   0 258   0 nscd/17
     5 root     0.0 0.0 0.0 0.0 0.0 0.0 100 0.2  22   3   0   0 zpool-rpool/25
     5 root     0.0 0.0 0.0 0.0 0.0 0.0 100 0.2  21   1   0   0 zpool-rpool/22
 11204 oracle   0.0 0.0 0.0 0.0 0.0 0.0 100 0.1   1   3  23   0 sshd/1
   958 oracle   0.0 0.0 0.0 0.0 0.0 100 0.0 0.2  10   0  20   0 mixer_applet/3
   954 oracle   0.0 0.0 0.0 0.0 0.0 0.0 100 0.2   9   0  27   0 java/12
     5 root     0.0 0.0 0.0 0.0 0.0 0.0 100 0.2  22   1   0   0 zpool-rpool/26
     5 root     0.0 0.0 0.0 0.0 0.0 0.0 100 0.2  22   1   0   0 zpool-rpool/20
     5 root     0.0 0.0 0.0 0.0 0.0 0.0 100 0.2  21   0   0   0 zpool-rpool/24
   423 root     0.0 0.0 0.0 0.0 0.0 0.0  99 1.4   5   0  25   5 ntpd/1
     5 root     0.0 0.0 0.0 0.0 0.0 0.0 100 0.0   2   1   0   0 zpool-rpool/2
   510 root     0.0 0.0 0.0 0.0 0.0 0.0 100 0.3   9   0   9   0 VBoxService/7
     5 root     0.0 0.0 0.0 0.0 0.0 0.0 100 0.2  20   0   0   0 zpool-rpool/23
     5 root     0.0 0.0 0.0 0.0 0.0 0.0 100 0.2  20   0   0   0 zpool-rpool/19
   979 oracle   0.0 0.0 0.0 0.0 0.0 0.0 100 0.0   5   0  10   0 updatemanage/1
     5 root     0.0 0.0 0.0 0.0 0.0 0.0 100 0.2  20   0   0   0 zpool-rpool/21
   954 oracle   0.0 0.0 0.0 0.0 0.0 100 0.0 0.1   5   0   5   0 java/3
   134 root     0.0 0.0 0.0 0.0 0.0 0.0 100 0.0   3   0  10   0 dhcpagent/1
    97 root     0.0 0.0 0.0 0.0 0.0 100 0.0 0.0   2   0   2   0 nwamd/1
   969 oracle   0.0 0.0 0.0 0.0 0.0 0.0 100 0.0   1   0   3   0 gnome-power-/1
   855 root     0.0 0.0 0.0 0.0 0.0 0.0 100 0.4   1   0  10   0 sendmail/1
   510 root     0.0 0.0 0.0 0.0 0.0 100 0.0 0.0   1   0   2   0 VBoxService/6
   510 root     0.0 0.0 0.0 0.0 0.0 100 0.0 0.0   1   0   2   0 VBoxService/5
   510 root     0.0 0.0 0.0 0.0 0.0 100 0.0 0.0   1   0   3   0 VBoxService/3
   255 root     0.0 0.0 0.0 0.0 0.0 100 0.0 0.0   2   0   4   0 devfsadm/3
  1002 oracle   0.0 0.0 0.0 0.0 0.0 100 0.0 0.0   1   0   1   0 rad/3
   984 oracle   0.0 0.0 0.0 0.0 0.0 0.0 100 0.0   1   0   2   0 nwam-manager/1
   954 oracle   0.0 0.0 0.0 0.0 0.0 0.0 100 0.0   1   0   1   0 java/10
   655 root     0.0 0.0 0.0 0.0 0.0 0.0 100 0.0   1   0   1   0 fmd/2
   918 oracle   0.0 0.0 0.0 0.0 0.0 0.0 100 0.0   1   0   4   0 ssh-agent/1
   555 root     0.0 0.0 0.0 0.0 0.0 0.0 100 0.0   1   0   1   0 nscd/31
Total: 105 processes, 469 lwps, load averages: 1.33, 1.33, 1.20


### VMSTAT   
root@solaris:/home/oracle# vmstat 1 1000
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr cd s0 -- --   in   sy   cs us sy id
 0 0 0 1037596 187496 221 2144 0 0 3  0 306 17 -0 0  0  307 3362  688 29 17 54
 0 0 0 915384 54608 481 4851 0  0  0  0  0 24  0  0  0  520 5772  773 72 28  0
 0 0 0 915568 54832 520 5252 0  0  0  0  0  0  0  0  0  280 6024  458 74 26  0
 0 0 0 915304 54588 522 5252 0  0  0  0  0  0  0  0  0  283 6025  457 74 26  0
 2 0 0 915304 54612 521 5253 0  0  0  0  0  0  0  0  0  279 6068  465 75 25  0
 0 0 0 915264 54584 520 5258 0  0  0  0  0  0  0  0  0  292 6039  464 74 26  0
 0 0 0 915260 54580 487 4866 0  0  0  0  0 29  0  0  0  555 5730  792 72 28  0
 1 0 0 915228 54592 520 5253 0  0  0  0  0  0  0  0  0  276 6092  449 74 26  0
 0 0 0 915228 54612 522 5252 0  0  0  0  0  0  0  0  0  281 6010  469 74 26  0


### MPSTAT
root@solaris:/home/oracle# mpstat 1 10000
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0 2175   0    0   307  108  687   65    0    7    0  3387   29  17   0  53
  0 5304   0    0   286   84  480  113    0    0    0  6068   75  25   0   0
  0 5226   0    0   289   91  474  120    0    0    0  5943   74  26   0   0
  0 5346   0    0   294   92  480  113    0    0    0  6089   74  26   0   0
  0 4829   0    0   550  351  832  189    0   76    0  5634   71  29   0   0
  0 5279   0    0   285   85  480  116    0    0    0  6022   74  26   0   0
  0 5278   0    0   286   86  478  130    0    0    0  5949   75  25   0   0
  0 5278   0    0   280   84  463  110    0    0    0  6007   74  26   0   0
  0 5331   0    0   283   86  464  112    0    0    0  6071   74  26   0   0
  0 4893   0    0   454  253  689  158    0   43    0  5849   73  27   0   0
  0 5257   0    0   276   83  463  120    0    0    0  6019   74  26   0   0
  0 5278   0    0   279   83  461  107    0    0    0  6010   74  26   0   0
  0 5227   0    0   279   85  454  110    0    0    0  5960   74  26   0   0
  0 5292   0    0   282   87  468  110    0    1    0  6031   74  26   0   0
  0 4926   0    0   425  226  616  137    0   28    0  5854   74  26   0   0


### IOSTAT
root@solaris:/home/oracle# iostat -xcd 1 100000
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
cmdk0     2.6   15.8   94.9   81.8  0.0  0.0    0.8   0   1  46 21  0 33
sd0       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
cmdk0     0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0  74 26  0  0
sd0       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
cmdk0     0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0  75 25  0  0
sd0       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
cmdk0     0.0  156.9    0.0  772.1  0.0  0.1    0.4   1   4  69 31  0  0
sd0       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
}}}
  

''DTrace!!!'' 

* we want to know what is causing those system calls; this one-liner counts system calls by process name. Here the top process is ''md5sum''
{{{
dtrace -n 'syscall:::entry { @[execname] = count(); }'

root@solaris:/home/oracle# dtrace -n 'syscall:::entry { @[execname] = count(); }'
dtrace: description 'syscall:::entry ' matched 233 probes
^C

  fmd                                                               1
  in.routed                                                         1
  inetd                                                             1
  netcfgd                                                           1
  nwamd                                                             1
  svc.configd                                                       1
  iiimd                                                             2
  rad                                                               2
  utmpd                                                             2
  nwam-manager                                                      4
  ssh-agent                                                         4
  devfsadm                                                          8
  xscreensaver                                                     10
  gnome-power-mana                                                 12
  sshd                                                             16
  updatemanagernot                                                 16
  dhcpagent                                                        18
  hald                                                             20
  sendmail                                                         22
  VBoxService                                                      23
  hald-addon-acpi                                                  26
  mixer_applet2                                                    30
  ntpd                                                             40
  java                                                            965
  Xorg                                                           1797
  mpstat                                                         2115
  dtrace                                                         2357
  gnome-terminal                                                 3225
  expr                                                           5820
  bash                                                           7683
  md5sum                                                        21214
}}}


* match the syscall probe only when the execname matches our investigation target, ''md5sum'', and count by syscall name
{{{
dtrace -n 'syscall:::entry /execname == "md5sum"/ { @[probefunc] = count(); }'

root@solaris:/home/oracle# dtrace -n 'syscall:::entry /execname == "md5sum"/ { @[probefunc] = count(); }'
dtrace: description 'syscall:::entry ' matched 233 probes
^C

  llseek                                                          111
  rexit                                                           111
  write                                                           111
  getpid                                                          112
  getrlimit                                                       112
  ioctl                                                           112
  open64                                                          112
  sysi86                                                          112
  systeminfo                                                      112
  setcontext                                                      224
  sysconfig                                                       224
  mmapobj                                                         336
  fstat64                                                         448
  open                                                            448
  memcntl                                                         560
  resolvepath                                                     560
  stat64                                                          560
  close                                                           669
  brk                                                             672
  mmap                                                            784
  read                                                          17961
}}}


* see what is calling ''read'' by using the ustack() DTrace action
{{{
dtrace -n 'syscall::read:entry /execname == "md5sum"/ { @[ustack()] = count();}'

root@solaris:/home/oracle# dtrace -n 'syscall::read:entry /execname == "md5sum"/ { @[ustack()] = count();}'
dtrace: description 'syscall::read:entry ' matched 1 probe
^C


              0xfeef25b5
              0xfeebb91c
              0xfeec00b0
              0x80554f2
              0x805304d
              0x805382f
              0x8052a7d
              161

              0xfeef25b5
              0xfeebb91c
              0xfeec00b0
              0x80554f2
              0x805304d
              0x805382f
              0x8052a7d
              161
}}}


* show counts broken down by both process name and syscall
{{{
dtrace -n 'syscall:::entry { @num[execname,probefunc] = count(); }'


root@solaris:/home/oracle# dtrace -n 'syscall:::entry { @num[execname,probefunc] = count(); }'
dtrace: description 'syscall:::entry ' matched 233 probes
^C

  dtrace                                              fstat                                                             1
  dtrace                                              lwp_sigmask                                                       1
  dtrace                                              mmap                                                              1
  dtrace                                              schedctl                                                          1
  dtrace                                              setcontext                                                        1
  dtrace                                              sigpending                                                        1
  dtrace                                              write                                                             1
  fmd                                                 pollsys                                                           1
  gnome-power-mana                                    clock_gettime                                                     1
  gnome-power-mana                                    write                                                             1
  inetd                                               lwp_park                                                          1
  netcfgd                                             lwp_park                                                          1
  ntpd                                                getpid                                                            1
  ntpd                                                pollsys                                                           1
  nwam-manager                                        ioctl                                                             1
  nwam-manager                                        pollsys                                                           1
  sendmail                                            pollsys                                                           1
  ssh-agent                                           getpid                                                            1
  ssh-agent                                           pollsys                                                           1
  top                                                 pollsys                                                           1
  top                                                 sysconfig                                                         1
  top                                                 write                                                             1
  xscreensaver                                        write                                                             1
  VBoxService                                         ioctl                                                             2
  VBoxService                                         lwp_park                                                          2
  devfsadm                                            gtime                                                             2
  devfsadm                                            lwp_park                                                          2
  dhcpagent                                           pollsys                                                           2
  gnome-power-mana                                    ioctl                                                             2
  gnome-power-mana                                    read                                                              2
  gnome-terminal                                      fcntl                                                             2
  rad                                                 lwp_park                                                          2
  sendmail                                            lwp_sigmask                                                       2
  ssh-agent                                           gtime                                                             2
  sshd                                                read                                                              2
  sshd                                                write                                                             2
  top                                                 close                                                             2
  top                                                 getdents                                                          2
  top                                                 getuid                                                            2
  top                                                 lseek                                                             2
  top                                                 uadmin                                                            2
  top                                                 zone                                                              2
  xscreensaver                                        gtime                                                             2
  xscreensaver                                        ioctl                                                             2
  xscreensaver                                        read                                                              2
  Xorg                                                writev                                                            3
  dtrace                                              sysconfig                                                         3
  gnome-power-mana                                    pollsys                                                           3
  sendmail                                            pset                                                              3
  xscreensaver                                        pollsys                                                           3
  dhcpagent                                           lwp_sigmask                                                       4
  dtrace                                              sigaction                                                         4
  sendmail                                            gtime                                                             4
  sshd                                                pollsys                                                           4
  top                                                 open                                                              4
  dtrace                                              lwp_park                                                          5
  ntpd                                                setcontext                                                        5
  ntpd                                                sigsuspend                                                        5
  updatemanagernot                                    ioctl                                                             5
  dtrace                                              brk                                                               6
  top                                                 ioctl                                                             6
  updatemanagernot                                    pollsys                                                           6
  sshd                                                lwp_sigmask                                                       8
  VBoxService                                         nanosleep                                                        10
  mixer_applet2                                       ioctl                                                            10
  mixer_applet2                                       lwp_park                                                         10
  Xorg                                                pollsys                                                          12
  ntpd                                                lwp_sigmask                                                      15
  java                                                ioctl                                                            18
  top                                                 gtime                                                            20
  Xorg                                                setitimer                                                        24
  Xorg                                                read                                                             25
  Xorg                                                clock_gettime                                                    48
  bash                                                fcntl                                                            66
  bash                                                pipe                                                             66
  bash                                                write                                                            66
  expr                                                getpid                                                           66
  expr                                                getrlimit                                                        66
  expr                                                ioctl                                                            66
  expr                                                rexit                                                            66
  expr                                                sysi86                                                           66
  expr                                                systeminfo                                                       66
  expr                                                write                                                            66
  md5sum                                              getpid                                                           66
  md5sum                                              getrlimit                                                        66
  md5sum                                              ioctl                                                            66
  md5sum                                              llseek                                                           66
  md5sum                                              open64                                                           66
  md5sum                                              rexit                                                            66
  md5sum                                              sysi86                                                           66
  md5sum                                              systeminfo                                                       66
  md5sum                                              write                                                            66
  bash                                                brk                                                              67
  java                                                pollsys                                                          92
  top                                                 fstat                                                           110
  bash                                                exece                                                           132
  bash                                                forksys                                                         132
  bash                                                lwp_self                                                        132
  bash                                                schedctl                                                        132
  expr                                                fstat64                                                         132
  expr                                                setcontext                                                      132
  expr                                                sysconfig                                                       132
  md5sum                                              setcontext                                                      132
  md5sum                                              sysconfig                                                       132
  bash                                                setcontext                                                      137
  gnome-terminal                                      clock_gettime                                                   140
  bash                                                read                                                            182
  java                                                lwp_cond_signal                                                 198
  md5sum                                              mmapobj                                                         198
  bash                                                waitsys                                                         203
  top                                                 pread                                                           214
  dtrace                                              p_online                                                        256
  bash                                                getpid                                                          264
  bash                                                stat64                                                          264
  expr                                                brk                                                             264
  expr                                                mmapobj                                                         264
  md5sum                                              fstat64                                                         264
  md5sum                                              open                                                            264
  gnome-terminal                                      write                                                           274
  java                                                lwp_cond_wait                                                   302
  gnome-terminal                                      ioctl                                                           316
  gnome-terminal                                      pollsys                                                         317
  gnome-terminal                                      read                                                            322
  expr                                                open                                                            330
  md5sum                                              memcntl                                                         330
  md5sum                                              resolvepath                                                     330
  md5sum                                              stat64                                                          330
  bash                                                close                                                           396
  expr                                                close                                                           396
  expr                                                memcntl                                                         396
  md5sum                                              brk                                                             396
  md5sum                                              close                                                           396
  expr                                                mmap                                                            462
  expr                                                resolvepath                                                     462
  md5sum                                              mmap                                                            462
  gnome-terminal                                      lseek                                                           497
  expr                                                stat64                                                          528
  dtrace                                              ioctl                                                          1299
  bash                                                sigaction                                                      1452
  bash                                                lwp_sigmask                                                    1523
  md5sum                                              read                                                          10669
root@solaris:/home/oracle#
}}}


* Tanel Poder has a script called ''dstackprof'' that you can use to profile a session's stacks; in the samples below you will notice that the process is mostly looping: forking for command substitution and reading
{{{
root@solaris:/home/oracle# sh dstackprof.sh 19296

DStackProf v1.02 by Tanel Poder ( http://www.tanelpoder.com )
Sampling pid 19296 for 5 seconds with stack depth of 100 frames...

10 samples with stack below
__________________
libc.so.1`__close
bash`command_substitute
bash`0x80a08ba
bash`0x8097666
bash`expand_string_assignment
bash`0x8097378
bash`0x8096b7a
bash`do_word_assignment
bash`0x80a210d
bash`expand_words
bash`0x80813d5
bash`execute_command_internal
bash`0x807f031
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start

10 samples with stack below
__________________
libc.so.1`__sigaction
bash`set_signal_handler
bash`0x808f250
bash`wait_for
bash`command_substitute
bash`0x80a08ba
bash`0x8097666
bash`expand_string_assignment
bash`0x8097378
bash`0x8096b7a
bash`do_word_assignment
bash`0x80a210d
bash`expand_words
bash`0x80813d5
bash`execute_command_internal
bash`0x807f031
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start

10 samples with stack below
__________________
libc.so.1`__waitid
libc.so.1`waitpid
bash`0x80905f7
bash`0x8090554
libc.so.1`__sighndlr
libc.so.1`call_user_handler
libc.so.1`sigacthandler
libc.so.1`__read
bash`zread
bash`0x809a281
bash`command_substitute
bash`0x80a08ba
bash`0x8097666
bash`expand_string_assignment
bash`0x8097378
bash`0x8096b7a
bash`do_word_assignment
bash`0x80a210d
bash`expand_words
bash`0x80813d5
bash`execute_command_internal
bash`0x807f031
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start

10 samples with stack below
__________________
libc.so.1`__write
libc.so.1`_xflsbuf
libc.so.1`_flsbuf
libc.so.1`putc
libc.so.1`putchar
bash`echo_builtin
bash`0x8081dbf
bash`0x80827f0
bash`0x808192d
bash`execute_command_internal
bash`0x807f031
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start

10 samples with stack below
__________________
libc.so.1`lmutex_lock
libc.so.1`continue_fork
libc.so.1`forkx
libc.so.1`fork
bash`make_child
bash`command_substitute
bash`0x80a08ba
bash`0x8097666
bash`expand_string_assignment
bash`0x8097378
bash`0x8096b7a
bash`do_word_assignment
bash`0x80a210d
bash`expand_words
bash`0x80813d5
bash`execute_command_internal
bash`0x807f031
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start

10 samples with stack below
__________________
libc.so.1`mutex_unlock
libc.so.1`stdio_unlocks
libc.so.1`libc_parent_atfork
libc.so.1`_postfork_parent_handler
libc.so.1`forkx
libc.so.1`fork
bash`make_child
bash`command_substitute
bash`0x80a08ba
bash`0x8097666
bash`expand_string_assignment
bash`0x8097378
bash`0x8096b7a
bash`do_word_assignment
bash`0x80a210d
bash`expand_words
bash`0x80813d5
bash`execute_command_internal
bash`0x807f031
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start

10 samples with stack below
__________________
libc.so.1`syscall
libc.so.1`thr_sigsetmask
libc.so.1`sigprocmask
bash`0x8092043
bash`reap_dead_jobs
bash`0x80806aa
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start

10 samples with stack below
__________________
libc.so.1`syscall
libc.so.1`thr_sigsetmask
libc.so.1`sigprocmask
bash`stop_pipeline
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start

10 samples with stack below
__________________
libc.so.1`syscall
libc.so.1`thr_sigsetmask
libc.so.1`sigprocmask
bash`wait_for
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start

10 samples with stack below
__________________
unix`do_splx
genunix`disp_lock_exit
genunix`post_syscall
genunix`syscall_exit
unix`0xfffffffffb800ea9

10 samples with stack below
__________________
unix`splr
genunix`thread_lock
genunix`post_syscall
genunix`syscall_exit
unix`0xfffffffffb800ea9

10 samples with stack below
__________________
unix`splr
unix`lock_set_spl
genunix`disp_lock_enter
unix`disp
unix`swtch
unix`preempt
genunix`post_syscall
genunix`syscall_exit
unix`0xfffffffffb800ea9

10 samples with stack below
__________________
unix`tsc_read
genunix`gethrtime_unscaled
genunix`new_mstate
genunix`stop
genunix`pre_syscall
genunix`syscall_entry
unix`sys_syscall32

10 samples with stack below
__________________
unix`tsc_read
genunix`gethrtime_unscaled
unix`swtch
genunix`stop
genunix`pre_syscall
genunix`syscall_entry
unix`sys_syscall32

10 samples with stack below
__________________
unix`tsc_read
genunix`gethrtime
genunix`getproc
genunix`cfork
genunix`forksys
unix`sys_syscall32

11 samples with stack below
__________________
libc.so.1`lmutex_lock
libc.so.1`continue_fork
libc.so.1`forkx
libc.so.1`fork
bash`make_child
bash`0x8082a19
bash`0x8081a81
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start

11 samples with stack below
__________________
unix`hat_kpm_page2va
unix`ppcopy
genunix`anon_private
genunix`segvn_faultpage
genunix`segvn_fault
genunix`as_fault
unix`pagefault
unix`trap
unix`0xfffffffffb8001d6

20 samples with stack below
__________________
unix`tsc_read
genunix`gethrtime_unscaled
unix`page_get_freelist
unix`page_create_va
genunix`swap_getapage
genunix`swap_getpage
genunix`fop_getpage
genunix`anon_private
genunix`segvn_faultpage
genunix`segvn_fault
genunix`as_fault
unix`pagefault
unix`trap
unix`0xfffffffffb8001d6

40 samples with stack below
__________________
unix`tsc_read
genunix`gethrtime_unscaled
genunix`syscall_mstate
unix`0xfffffffffb800eb8

50 samples with stack below
__________________
libc.so.1`__forkx
libc.so.1`fork
bash`make_child
bash`command_substitute
bash`0x80a08ba
bash`0x8097666
bash`expand_string_assignment
bash`0x8097378
bash`0x8096b7a
bash`do_word_assignment
bash`0x80a210d
bash`expand_words
bash`0x80813d5
bash`execute_command_internal
bash`0x807f031
bash`execute_command_internal
bash`execute_command
bash`0x807eff2
bash`execute_command_internal
bash`execute_command
bash`0x8080700
bash`0x8080634
bash`execute_command_internal
bash`execute_command
bash`reader_loop
bash`main
bash`_start

302 Total samples captured
}}}
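The stacks above are dominated by `command_substitute` → `fork` and by md5sum reads (10669 `read` calls in the syscall counts), which suggests md5.sh is a shell loop that repeatedly command-substitutes `md5sum` and `expr`. The actual md5.sh is not shown anywhere in this trace, so the following is only a guessed reconstruction of that workload shape (the loop is capped at 3 iterations here so it terminates; the traced script likely runs much longer or forever):

```shell
# Hypothetical reconstruction of md5.sh -- the real script was not captured.
# Each iteration forks twice (one child per command substitution), which
# matches the command_substitute -> make_child -> fork stacks sampled above.
i=0
while [ "$i" -lt 3 ]; do
    SUM=$(md5sum /etc/hosts)     # fork + exec md5sum; its read() calls dominate
    i=$(expr "$i" + 1)           # fork + exec expr, matching the expr syscalls
    echo "$SUM"                  # bash's write/echo_builtin stack
done
```

Every `$(...)` forks a child, which is consistent with `forksys`/`exece` appearing 132 times each for bash in the syscall counts: this kind of tight command-substitution loop is expensive precisely because of the per-iteration fork/exec overhead the stacks expose.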


* Show the open files for process 19296
{{{
root@solaris:/home/oracle# pfiles  19296
19296:  sh md5.sh
  Current rlimit: 256 file descriptors
   0: S_IFCHR mode:0620 dev:551,0 ino:444541655 uid:54321 gid:7 rdev:243,1
      O_RDWR
      /dev/pts/1
      offset:2823322
   1: S_IFCHR mode:0620 dev:551,0 ino:444541655 uid:54321 gid:7 rdev:243,1
      O_RDWR
      /dev/pts/1
      offset:2823322
   2: S_IFCHR mode:0620 dev:551,0 ino:444541655 uid:54321 gid:7 rdev:243,1
      O_RDWR
      /dev/pts/1
      offset:2823322
 255: S_IFREG mode:0755 dev:174,65544 ino:204 uid:0 gid:0 size:100
      O_RDONLY|O_LARGEFILE FD_CLOEXEC
      /home/oracle/md5.sh
      offset:100
}}}
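`pfiles` is Solaris-specific. On Linux, a rough equivalent is reading `/proc/<pid>/fd` (symlinks to the open files) and `/proc/<pid>/fdinfo` (per-descriptor offset and open flags, like pfiles' `offset:` and `O_RDWR` lines). A minimal sketch, inspecting the current shell instead of pid 19296, which is purely for illustration:

```shell
# Linux stand-in for `pfiles <pid>`: inspect our own shell's descriptors.
pid=$$
ls -l /proc/"$pid"/fd        # one symlink per open fd (0, 1, 2, ...)
cat /proc/"$pid"/fdinfo/0    # "pos:" (offset) and "flags:" (open mode) for fd 0
```

Note how pfiles reports fd 255 open on the script itself with `FD_CLOEXEC`: bash keeps its script file on a high descriptor so child commands do not inherit or clobber it.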



http://www.solarisinternals.com/wiki/index.php/CPU/Processor
http://blog.tanelpoder.com/2008/09/02/oracle-hidden-costs-revealed-part2-using-dtrace-to-find-why-writes-in-system-tablespace-are-slower-than-in-others/
https://danischnider.wordpress.com/2015/12/01/foreign-key-constraints-in-an-oracle-data-warehouse/
https://blog.go-faster.co.uk/2018/11/data-warehouse-design-mistakes-1-lack.html
https://blog.go-faster.co.uk/2018/11/data-warehouse-design-mistakes-2.html
https://blog.go-faster.co.uk/2018/11/data-warehouse-design-mistakes-3-date.html
https://blog.go-faster.co.uk/2018/11/how-not-to-build-data-warehouse.html

''pulsar''
http://bensullins.com/data-warehousing-pulsar-method-introduction/

''Active Data Guard'' http://www.oracle.com/au/products/database/data-guard-hol-176005.html
this is better http://gavinsoorma.com/wp-content/uploads/2011/03/active_data_guard_hands_on_lab.pdf
<<<
Enabling Active Data Guard
Reading real-time data from an Active Data Guard Standby Database
Automatically managing potential apply lag using Active Data Guard query SLAs
Writing data when using an Active Data Guard standby.
Using schema redirection with Active Data Guard
Using Active Data Guard automatic block repair to detect and repair corrupt blocks on either the primary or standby, transparent to applications and users 
The Hands-On Lab requires that you have a system with the version of the Oracle Database (11.2 or 12.1) installed on a system and that you have created a database called SFO from the seed databases and a physical standby created with the name of NYC.  You can use any names you wish but the examples in the handout use SFO and NYC.   For complete instructions please refer to the "Setup and Configuration" section in the handout at the link above.  
<<<

''Data Guard'' http://www.oracle.com/au/products/database/data-guard-hol-basic-427660.html
<<<
Creating a Physical Standby Database
Verifying that Redo Transport has been configured correctly.
Configuring the Data Guard Broker
Changing the Transport mode using Broker Properties
Changing the Protection mode to Maximum Availability
Performing a switchover from the Primary to the Standby
Enabling Flashback database
Performing a Manual Failover from the Primary to the standby
Enabling and using Fast-Start Failover
The Hands-On Lab requires that you have a system with the version of the Oracle Database (11.2 or 12.1) installed on a system and that you have created a database called SFO from the seed databases.  You can use any name you wish but the examples in the handout use SFO as the Primary database name. 

The Hands-On Lab is provided as-is. We believe the documentation is sufficient for DBA's to be successful following this lab. We ask that you read the documentation carefully in order to avoid problems. However, if you have difficulties accessing the Hands-On Lab, or if you believe there is an error in the documentation that makes it impossible to complete the lab successfully, please send email to larry.carpenter@oracle.com and Larry will do his best to assist you. 
<<<


Set the Network Configuration and Highest Network Redo Rates http://docs.oracle.com/cd/E11882_01/server.112/e10803/config_dg.htm#HABPT4898
Data Guard Transport Considerations on Oracle Database Machine (Exadata) (Doc ID 960510.1)
also see [[coderepo data warehouse, data model]]


<<showtoc>>


! data model example index - databaseanswers
http://www.databaseanswers.org/data_models/index.htm
http://www.databaseanswers.org/index.htm
http://www.databaseanswers.org/site_map.htm
http://www.databaseanswers.org/tutorials.htm





Levels/Kinds of Data Model
[img[ http://i.imgur.com/L3H2ecK.png ]]


! Relational  

http://www.sqlfail.com/2015/02/27/database-design-resources-my-reading-list/
<<<
http://database-programmer.blogspot.com/2008/09/comprehensive-table-of-contents.html
http://fastanimals.com/melissa/WhitePapers/NormalizationDenormalizationWhitePaper.pdf
https://mwidlake.wordpress.com/tag/index-organized-tables/
https://iggyfernandez.wordpress.com/2013/07/28/no-to-sql-and-no-to-nosql/
https://blogs.oracle.com/datawarehousing/entry/optimizing_queries_with_attribute_clustering
http://use-the-index-luke.com/
https://richardfoote.wordpress.com/
<<<


Re-engineering Your Database Using Oracle SQL Developer Data Modeler 3.0
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/sqldevdm/r30/updatedb/updatedb.htm

Quest: Data Modeling for the Database Developer, Designer & Admin http://www.youtube.com/watch?v=gPCUAcbbQ-Q

A Layman’s Approach to Relational Database Normalization http://oracledba.ezpowell.com/oracle/papers/Normalization.htm
denormalization http://oracledba.ezpowell.com/oracle/papers/Denormalization.htm
The Very Basics of Data Warehouse Design http://oracledba.ezpowell.com/oracle/papers/TheVeryBasicsOfDataWarehouseDesign.htm
All About Indexes in Oracle http://oracledba.ezpowell.com/oracle/papers/AllAboutIndexes.htm
Why an Object Database and not a Relational Database? http://oracledba.ezpowell.com/odbms/ObjectsVSRelational.html


https://gerardnico.com/wiki/data_modeling/data_modeling
https://gerardnico.com/wiki/dw/start#what_s_a_data_warehouse
https://gerardnico.com/wiki/dit/dit
https://gerardnico.com/wiki/data_quality/data_quality



http://www.vtc.com/products/Data-Modeling-tutorials.htm
{{{
01 Welcome                    
0101 Welcome
0102 Prerequisites for this Course
0103 About this Course
0104 Where to Find Documentation
0105 Samples and Example Data Models Part pt. 1
0106 Samples and Example Data Models Part pt. 2
0107 A Relational Database Modeling Tool
0108 ERWin: Changing Physical Structure
0109 ERWin: Generating Scripts

02 The History of Data Modeling
0201 What is a Data Model?
0202 Types of Data Models
0203 The Evolution of Data Modeling
0204 File Systems
0205 Hierarchical Databases
0206 Network Databases
0207 Relational Databases
0208 Object Databases
0209 Object-Relational Databases
0210 The History of the Relational Database

03 Tools for Data Modeling
0301 Entity Relationship Diagrams
0302 Using ERWin Part pt. 1
0303 Using ERWin Part pt. 2
0304 Using ERWin Part pt. 3
0305 Modeling in Microsoft Access
0306 The Parts of an Object Data Model
0307 Basic UML for Object Databases
0308 What is a Class Diagram?
0309 Building Class Structures
0310 Other UML Diagrams

04 Introducing Data Modeling
0401 The Relational Data Model
0402 The Object Data Model
0403 The Object-Relational Data Model
0404 Data Warehouse Data Modeling
0405 Client-Server Versus OLTP Databases
0406 Available Database Engines

05 Relational Data Modeling
0501 What is Normalization?
0502 Normalization Made Simple
0503 Relational Terms and Jargon
0504 1st Normal Form
0505 Demonstrating 1st Normal Form
0506 2nd Normal Form
0507 Demonstrating 2nd Normal Form
0508 3rd Normal Form
0509 Demonstrating 3rd Normal Form
0510 4th and 5th Normal Forms
0511 Primary/Foreign Keys/Referential Integrity
0512 The Traditional Relational Database Model
0513 Surrogate Keys and the Relational Model
0514 Denormalization pt. 1
0515 Denormalization pt. 2

06 Object Data Modeling
0601 The Object-Relational Database Model
0602 Relational Versus Object Models
0603 What is the Object Data Model?
0604 What is a Class?
0605 Again - a Class and an Object
0606 What is an Attribute?
0607 What is a Method?
0608 The Simplicity of Objects
0609 What is Inheritance?
0610 What is Multiple Inheritance?
0611 Some Specifics of the Object Data Model

07 Data Warehouse Data Modeling
0701 The Origin of Data Warehouses
0702 Why the Relational Model Fails
0703 The Dimensional Data Model Part pt. 1
0704 The Dimensional Data Model Part pt. 2
0705 Star Schemas and Snowflake Schemas
0706 Data Warehouse Model Design Basics

08 Getting Data from a Database
0801 What is Structured Query Language (SQL)?
0802 The Roots of SQL
0803 Queries
0804 Changing Data
0805 Changing Metadata
0806 What is ODQL?

09 Tuning a Relational Data Model
0901 Normalization Versus Denormalization
0902 Referential Integrity Part pt. 1
0903 Referential Integrity Part pt. 2
0904 Alternate Keys
0905 What is an Index?
0906 Indexing Considerations
0907 Too Many Indexes
0908 Composite Indexing
0909 Which Columns to Index?
0910 Index Types
0911 Match Indexes to SQL Code
0912 Types of Indexing in Detail pt. 1
0913 Types of Indexing in Detail pt. 2
0914 Where Index Types Apply
0915 Undoing Normalization
0916 What to Look For?
0917 Undoing Normal Forms
0918 Some Good and Bad Tricks Part pt. 1
0919 Some Good and Bad Tricks Part pt. 2

10 Tuning a Data Warehouse Data Model
1001 Denormalization
1002 Star Versus Snowflake Schemas
1003 Dimensional Hierarchies
1004 Specialized Data Warehouse Toys

11 Other Tricks
1101 RAID Arrays and Striping
1102 Standby Databases
1103 Replication
1104 Clustering

12 Wrapping it Up
1201 Some Available Database Engines
1202 The Future: Relational or Object?
1203 What You Have Learned

13 Credits
1301 About the Author
}}}



! NoSQL
Making the Shift from Relational to NoSQL http://www.couchbase.com/sites/default/files/uploads/all/whitepapers/Couchbase_Whitepaper_Transitioning_Relational_to_NoSQL.pdf
https://www.quora.com/How-is-relational-data-stored-in-a-NoSQL-database
https://www.quora.com/NoSQL-What-are-the-best-practices-to-convert-sql-based-relational-data-model-into-no-sql-model
http://blog.cloudthat.com/migration-from-relational-database-to-nosql-database/

!! books 
NoSQL and SQL Data Modeling: Bringing Together Data, Semantics, and Software https://www.safaribooksonline.com/library/view/nosql-and-sql/9781634621113/
Next Generation Databases: NoSQL, NewSQL, and Big Data https://www.safaribooksonline.com/library/view/next-generation-databases/9781484213292/
An Overview of NoSQL Databases https://www.safaribooksonline.com/library/view/an-overview-of/9781634621649/
NoSQL for Mere Mortals https://www.safaribooksonline.com/library/view/nosql-for-mere/9780134029894/
Oracle NoSQL Database: Real-Time Big Data Management for the Enterprise https://www.safaribooksonline.com/library/view/oracle-nosql-database/9780071816533/
Data Modeling Explanation and Purpose https://www.safaribooksonline.com/library/view/data-modeling-explanation/9781634621632/97816346216321.html?autoStart=True
Data Modeling for MongoDB https://www.safaribooksonline.com/library/view/data-modeling-for/9781935504702/
Data Modeling Made Simple: A Practical Guide for Business and IT Professionals https://www.safaribooksonline.com/library/view/data-modeling-made/9780977140060/




! online tools
https://www.draw.io/
https://www.quora.com/Are-there-any-online-tools-for-database-model-design
https://www.modelio.org/about-modelio/license.html
https://kentgraziano.com/2012/02/20/the-best-free-data-modeling-tool-ever/
https://www.vertabelo.com/
https://en.wikipedia.org/wiki/Comparison_of_database_tools
https://en.wikipedia.org/wiki/Comparison_of_data_modeling_tools
https://www.simple-talk.com/sql/database-administration/five-online-database-modelling-services/


! data model books 
Oracle SQL Developer Data Modeler for Database Design Mastery (Oracle Press) 1st Edition, Kindle Edition https://www.safaribooksonline.com/library/view/oracle-sql-developer/9780071850100/
https://www.amazon.com/Oracle-Developer-Modeler-Database-Mastery-ebook/dp/B00VMMR9EA/



! topics 

!! composite primary key 
https://www.youtube.com/watch?v=5yifu5JwYxE Oracle SQL Tutorial 20 - How to Create Composite Primary Keys, more here https://www.youtube.com/playlist?list=PL_c9BZzLwBRJ8f9-pSPbxSSG6lNgxQ4m9
https://weblogs.sqlteam.com/jeffs/2007/08/23/composite_primary_keys/
Is a composite primary key a good idea https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:580828234131
https://dba.stackexchange.com/questions/101635/modeling-optional-foreign-key-from-composite-primary-key-field-oracle-data-mode
How to define a composite primary key https://asktom.oracle.com/pls/asktom/f%3Fp%3D100:11:0::::P11_QUESTION_ID:136812348065
https://searchoracle.techtarget.com/answer/Can-I-put-two-primary-keys-in-one-table


!! self join 
referred by example https://www.youtube.com/watch?v=W0p8KP0o8g4
manager_id example https://www.youtube.com/watch?v=G4vO83UUzek

!! information engineering notation
https://www.omg.org/retail-depository/arts-odm-73/data_modeling_methodology_and_.htm


!! data warehouse 

!!! time dimension 
https://www.youtube.com/results?search_query=data+warehouse+hourly+time+dimension
https://stackoverflow.com/questions/2507289/time-and-date-dimension-in-data-warehouse
https://blog.jamesbayley.com/2013/01/04/how-to-create-a-calendar-dimension-with-hourly-grain/
http://oracleolap.blogspot.com/2010/05/time-dimensions-with-hourly-time.html
https://www.google.com/search?q=hourly+time+dimension&oq=hourly+time+dimension&aqs=chrome..69i57.5208j1j4&sourceid=chrome&ie=UTF-8

!!! hierarchy 
https://gerardnico.com/olap/dimensional_modeling/hierarchy
region hierarchy https://www.google.com/search?q=data+warehouse+region+hierarchy&oq=data+warehouse+region+hierarchy&aqs=chrome..69i57j69i60l2j69i64.348j0j4&sourceid=chrome&ie=UTF-8
Dimensional Modelling Design Patterns: Beyond Basics https://www.youtube.com/watch?v=ppIoWzeFTrk


http://www.databaseanswers.org/data_models/corporate_hierarchy/index.htm
http://www.databaseanswers.org/data_models/user_defined_hierarchies/index.htm
http://www.databaseanswers.org/data_models/hierarchies/index.htm
http://www.databaseanswers.org/data_models/recipes_recursive/index.htm
https://smartbridge.com/remodeling-recursive-hierarchy-tables-for-business-intelligence/  <- nice 
https://www.informationweek.com/software/information-management/kimball-university-five-alternatives-for-better-employee-dimension-modeling/d/d-id/1082326
https://dwbi1.wordpress.com/2017/10/18/hierarchy-with-multiple-parents/
time dimension hierarchy https://www.nuwavesolutions.com/simple-hierarchical-dimensions-html/
https://www.nuwavesolutions.com/ragged_hierarchical_dimensions/  <- nice 
https://www.google.com/search?q=data+warehouse+data+model+Hierarchical+Queries&oq=data+warehouse+data+model+Hierarchical+Queries&aqs=chrome..69i57j69i64.11811j0j1&sourceid=chrome&ie=UTF-8
https://www.linkedin.com/pulse/step-by-step-guide-creating-sql-hierarchical-queries-bibhas-mitra/   <-- good stuff 
https://oracle-base.com/articles/misc/hierarchical-queries
Find parent record as well as child record https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:9526387800346066619
https://stackoverflow.com/questions/11559612/how-to-get-the-final-parent-id-column-in-oracle-connect-by-sql
https://stackoverflow.com/questions/935098/database-structure-for-tree-data-structure
https://dba.stackexchange.com/questions/46127/recursive-self-joins


!!! star schema 
!!!! convert relational to star schema
https://www.google.com/search?q=convert+relational+to+star+schema&oq=convert+relational+to+star+schema&aqs=chrome..69i57j69i60l2j0l2.3422j0j1&sourceid=chrome&ie=UTF-8
https://dba.stackexchange.com/questions/144826/star-schema-from-relational-database
http://cci.drexel.edu/faculty/song/courses/info%20607/tutorial_WESST/EXAMPLE.HTM
http://blog.bguiz.com/2010/03/28/how-to-transform-an-operational-database-into-a-data-warehouse/
https://www.youtube.com/results?search_query=convert+relational+to+warehouse+schema


!! EAV model (popular in NoSQL)
https://mikesmithers.wordpress.com/2013/12/22/the-anti-pattern-eavil-database-design/

Entity–attribute–value model
<<<
In an EAV data model, each attribute-value pair is a fact describing an entity, and a row in an EAV table stores a single fact. EAV tables are often described as "long and skinny": "long" refers to the number of rows, "skinny" to the few columns.
<<<
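The "long and skinny" shape is easy to see in a minimal sketch. The rows below are hypothetical; the point is that each row carries exactly one fact, and reassembling a conventional wide record means pivoting:

```python
# Hypothetical EAV rows: one (entity_id, attribute, value) fact per row.
eav_rows = [
    (1, "name", "Widget"),
    (1, "color", "red"),
    (2, "name", "Gadget"),
    (2, "weight_kg", "0.5"),
]

# Pivot the "long and skinny" rows back into one wide record per entity.
entities = {}
for entity_id, attribute, value in eav_rows:
    entities.setdefault(entity_id, {})[attribute] = value

print(entities[1])  # {'name': 'Widget', 'color': 'red'}
```

Note that every attribute is optional and the values are untyped strings, which is exactly the trade-off the anti-pattern article above warns about.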
https://www.google.com/search?q=eav+pattern&oq=EAV+pattern&aqs=chrome.0.0l7.1934j0j1&sourceid=chrome&ie=UTF-8

https://www.slideshare.net/stepanyuk/implementation-of-eav-pattern-for-activerecord-models-13263311

Installing and Using Standby Statspack in 11g (Doc ID 454848.1)
http://mujiang.blogspot.com/2010/03/setup-statspack-to-monitor-standby.html
{{{
col value format a30
select * from v$dataguard_stats;
}}}

http://jarneil.wordpress.com/2010/11/16/monitoring-the-progress-of-an-11gr2-standby-database/
https://sites.google.com/site/oraclepracticals/oracle-admin/oracle-data-guard-1/oracle-data-guard-tips
http://jhdba.wordpress.com/tag/vdataguard_stats/
{{{

* Data Guard Protection Modes
short and sweet: http://jarneil.wordpress.com/2007/10/25/dataguard-protection-levels/
8i, 9i, 10g: http://www.oracle.com/technology/deploy/availability/htdocs/DataGuardRedoShipping.htm

* Data Guard Mind Map
http://jarneil.wordpress.com/2008/10/12/the-dataguard-mind-map/



11g 

DEPRECATED: 
- no more standby_archive_dest

----------------------------------------------------------------------------------------

10g 

Log Transport Services 
    - the default is ARCn
	    - can only be ARCn SYNC (default)
    - for Log Writer Process (LGWR) ... the default is LGWR SYNC, could also be LGWR ASYNC (see REAL-TIME APPLY)
	    - If using LGWR, in 10.1 the LGWR sends data to a small buffer in the SGA and LNS transports it to the standby site
	    - If using LGWR, in 10.2 the LNS background process reads directly from the redo log and transports the redo to the standby site
	    - You can change between asynchronous and synchronous log transportation dynamically. However, any changes to the configuration parameters will not take effect until the next log switch operation on the primary database
    - default for VALID_FOR (start 10.1) attribute format is VALID_FOR=(redo_log_type,database_role) for role transition... default is (ALL_LOGFILES,ALL_ROLES)
    - default for LOG_ARCHIVE_CONFIG.. SEND RECEIVE
    - REOPEN.. the default is 300
    - LOG_ARCHIVE_DEST_n... the default is OPTIONAL
    - AFFIRM (for SYNC only).. the default is NOAFFIRM

    - REAL-TIME APPLY, In Oracle 10.1 and above, you can configure the standby database to be updated synchronously, as redo is written to the standby redo log
         To activate (using LGWR ASYNC on Maximum Performance):    alter database recover managed standby database using current logfile disconnect;

    - STANDBY REDO LOGS, Doc ID 219344.1 Usage, Benefits and Limitations of Standby Redo Logs (SRL)
      DIFFERENCE IN THE LOG APPLY SERVICES WHEN USING STANDBY REDO LOGS
	In case you do not have Standby Redo Logs, an Archived Redo Log is created
	by the RFS process and when it has completed, this Archived Redo Log is applied
	to the Standby Database by the MRP (Managed Recovery Process) or the Logical
	Apply in Oracle 10g when using Logical Standby. An open (not fully written)
	ArchiveLog file cannot be applied on the Standby Database and will not be used
	in a Failover situation. This causes a certain data loss.

	If you have Standby Redo Logs, the RFS process will write into the Standby Redo
	Log as mentioned above and when a log switch occurs, the Archiver Process of the
	Standby Database will archive this Standby Redo Log to an Archived Redo Log,
	while the MRP process applies the information to the Standby Database.  In a
	Failover situation, you will also have access to the information already
	written in the Standby Redo Logs, so the information will not be lost.

	Starting with Oracle 10g you have also the Option to use Real-Time Apply with
	Physical and Logical Standby Apply. When using Real-Time Apply we directly apply
	Redo Data from Standby RedoLogs. Real-Time Apply is also not able to apply Redo
	from partial filled ArchiveLogs if there are no Standby RedoLogs. So Standby
	RedoLogs are mandatory for Real-Time Apply.

- DB_UNIQUE_NAME, In 10.1
- LOG_ARCHIVE_CONFIG, In 10.1


DEPRECATED:
- no more LOG_ARCHIVE_START
- no more REMOTE_ARCHIVE_ENABLE, conflicts with LOG_ARCHIVE_CONFIG..

----------------------------------------------------------------------------------------

9i

- SWITCHOVER, In Oracle 9.0.1 and above, you can perform a switchover operation such that the primary database becomes a new standby database, and the old standby database becomes the new primary database. A successful switchover operation   
              should never result in any data loss, irrespective of the physical standby configuration.

- LGWR PROCESS, In 9.0.1 and above, LGWR can also transport redo to the standby database
- STANDBY REDO LOGS, In 9.0.1 and above, standby redo logs can be created. Requires LGWR. 

Doc ID 150584.1 Data Guard 9i Setup with Guaranteed Protection Mode

----------------------------------------------------------------------------------------

8i 

- READ ONLY MODE, In Oracle 8.1.5 and above, you can cancel managed recovery on the standby database and open the database in read-only mode for reporting purposes

----------------------------------------------------------------------------------------

7.3 

- FAILOVER, Since Oracle 7.3, performing a failover operation from the primary database to the standby database has been possible. A failover operation may result in data loss, 
            depending on the configuration of the log archive destinations on the primary database.





}}}
Build a Data Guard standby database using the following methods:
* from backupset 
* Active Duplicate 
* recover database from service <primary_service> (new in 12.1)


http://gavinsoorma.com/2009/06/trigger-to-use-with-data-guard-to-change-service-name/
{{{
CREATE OR REPLACE TRIGGER manage_OCIservice
after startup on database
DECLARE
role VARCHAR(30);
BEGIN
SELECT DATABASE_ROLE INTO role FROM V$DATABASE;
IF role = 'PRIMARY' THEN
DBMS_SERVICE.START_SERVICE('apex_dg');
ELSE
DBMS_SERVICE.STOP_SERVICE('apex_dg');
END IF;
END;
/
}}}
<<<
To get an end-to-end view of Data Guard transport and apply performance, here's how I would troubleshoot it (with the scripts/tools in the table below): 

1) get redo MB/s
•	This is the redo generation
2) get the bandwidth link  
•	The bandwidth capacity
3) get the transport lag   
•	This metric will tell if there's a problem with the transport of the logs between the sites
•	It's possible for redo to be generated at faster rates than what can be accommodated by the network
4) get apply lag           
•	This metric will tell if the managed recovery process is having a hard time reading the redo stream and applying it to the standby DB
•	This is the difference between the primary and standby SCNs that still needs to be applied 
5) get the IO breakdown/IO cell metrics 
•	Will tell if there is an IO capacity issue
•	I would also get the cell metrics of primary just to compare
6) Primary and Standby DB wait events
•	This will tell any obvious events causing the bottleneck
•	On the standby site AWR data is the same as the primary's, so we need to use ASH here because it's in-memory.
7) Archive per hour/day
•	output of archiving_per_day.sql to get the hourly/daily redo generation
8) Run the attached scripts from the following MOS notes
•	Data Guard Physical Standby - Configuration Health Check (Doc ID 1581388.1)
•	Script to Collect Data Guard Physical and Active Standby Diagnostic Information for Version 10g and above (Including RAC) (Doc ID 1577406.1)
•	Monitoring a Data Guard Configuration (Doc ID 2064281.1)

<<<

[img[ http://i.imgur.com/qhKPFxi.png ]]
also run archiving_per_day.sql on Primary

All the scripts can be downloaded here https://github.com/karlarao/scripts/tree/83c4681e796ccff8eb2001ba7d05d1eff4a543e6/data_guard



! references
https://docs.oracle.com/cd/E18283_01/server.112/e17110/dynviews_1103.htm
http://emrebaransel.blogspot.com/2013/07/data-guard-queries.html
http://blog.yannickjaquier.com/oracle/data-guard-apply-lag-gap-troubleshooting.html
http://yong321.freeshell.org/oranotes/DataGuardMonitoringScripts.txt

Presentation “Minimal Downtime Oracle 11g Upgrade” at DOAG Conference 2010
http://goo.gl/ZTQVD
''The netem Commands'' 
The examples below demonstrate a 10Mbps network transferring a file to another server; theoretically you have 1.25MB/s. If you want to play around with different WAN configs, here's the list http://en.wikipedia.org/wiki/List_of_device_bandwidths#Wide_area_networks; see the stats of my tests below:

tc qdisc show <-- to show

tc qdisc add dev eth0 root handle 1: tbf rate 10000kbit burst 10000kbit latency 10ms  <-- to set bandwidth
tc qdisc add dev eth0 parent 1: handle 10: netem delay 10ms  <-- to set delay

tc qdisc change dev eth0 parent 1: handle 10: netem delay 100ms    <-- to change delay

tc qdisc del dev eth0 root <-- to remove
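To sanity-check the runs below, the conversion from link rate to theoretical throughput (and the ideal transfer time for a 192MB tar) is just arithmetic; the measured scp rates fall short of this once latency is added, which is the point of the experiment:

```python
def mbps_to_mb_per_s(mbps):
    # 1 Mbit/s = 1,000,000 bits/s; divide by 8 to get MB/s (decimal MB)
    return mbps / 8.0

def transfer_seconds(file_mb, link_mbps):
    # Ideal transfer time, ignoring TCP, latency, and protocol overhead
    return file_mb / mbps_to_mb_per_s(link_mbps)

print(mbps_to_mb_per_s(10))              # 1.25 MB/s on a 10Mbps link
print(round(transfer_seconds(192, 10)))  # 154 seconds for a 192MB file, best case
```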


-- no tweaks
{{{
[oracle@dg10g2 flash_recovery_area]$ du -sm dg10g.tar 
192	dg10g.tar
[oracle@dg10g2 flash_recovery_area]$ 
[oracle@dg10g2 flash_recovery_area]$ 
[oracle@dg10g2 flash_recovery_area]$ scp dg10g.tar oracle@192.168.203.41:/u02/flash_recovery_area/
The authenticity of host '192.168.203.41 (192.168.203.41)' can't be established.
RSA key fingerprint is f2:ed:e1:43:a6:62:ee:b1:d0:70:39:cc:28:fb:9d:e8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.203.41' (RSA) to the list of known hosts.
oracle@192.168.203.41's password: 
dg10g.tar                                                                                                                                                                                                                100%  192MB  27.4MB/s   00:07    
[oracle@dg10g2 flash_recovery_area]$ 
[oracle@dg10g2 flash_recovery_area]$ ping 192.168.203.41
PING 192.168.203.41 (192.168.203.41) 56(84) bytes of data.
64 bytes from 192.168.203.41: icmp_seq=0 ttl=64 time=1.23 ms
64 bytes from 192.168.203.41: icmp_seq=1 ttl=64 time=0.198 ms
64 bytes from 192.168.203.41: icmp_seq=2 ttl=64 time=1.22 ms
64 bytes from 192.168.203.41: icmp_seq=3 ttl=64 time=0.311 ms
64 bytes from 192.168.203.41: icmp_seq=4 ttl=64 time=1.97 ms

--- 192.168.203.41 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4002ms
rtt min/avg/max/mdev = 0.198/0.989/1.977/0.660 ms, pipe 2
}}}


-- configured with 156.25 KB/s with 100ms latency (too slow so I cancelled it)
tc qdisc add dev eth0 root handle 1: tbf rate 1250kbit burst 1250kbit latency 10ms
{{{
[oracle@dg10g1 flash_recovery_area]$ ls -ltr
drwxr-xr-x  5 oracle oinstall      4096 Oct 20 09:37 dg10g
-rw-r--r--  1 oracle oinstall 207912960 Oct 21 11:45 flash_recovery_area.tar
[oracle@dg10g1 flash_recovery_area]$ date
Thu Oct 21 11:48:53 PHT 2010
[oracle@dg10g1 flash_recovery_area]$ scp flash_recovery_area.tar oracle@192.168.203.40:/u02/flash_recovery_area/
oracle@192.168.203.40's password: 
flash_recovery_area.tar                                                                                                                                                                                                   65%  129MB 145.7KB/s   08:07 ETAKilled by signal 2.
[oracle@dg10g1 flash_recovery_area]$ 
[oracle@dg10g1 flash_recovery_area]$ 
[oracle@dg10g1 flash_recovery_area]$ 
[oracle@dg10g1 flash_recovery_area]$ ls
dg10g  flash_recovery_area.tar
[oracle@dg10g1 flash_recovery_area]$ date
Thu Oct 21 12:04:23 PHT 2010
}}}


-- configured with 10Mbps with 100ms latency
tc qdisc change dev eth0 root handle 1: tbf rate 10000kbit burst 10000kbit latency 10ms
[root@dg10g1 ~]# tc qdisc show
qdisc tbf 1: dev eth0 rate 10Mbit burst 1250Kb lat 9.8ms 
qdisc netem 10: dev eth0 parent 1: limit 1000 delay 100.0ms

{{{
[oracle@dg10g1 flash_recovery_area]$ ping 192.168.203.40
PING 192.168.203.40 (192.168.203.40) 56(84) bytes of data.
64 bytes from 192.168.203.40: icmp_seq=0 ttl=64 time=201 ms
64 bytes from 192.168.203.40: icmp_seq=1 ttl=64 time=101 ms
64 bytes from 192.168.203.40: icmp_seq=2 ttl=64 time=100 ms
64 bytes from 192.168.203.40: icmp_seq=3 ttl=64 time=100 ms
64 bytes from 192.168.203.40: icmp_seq=4 ttl=64 time=100 ms

--- 192.168.203.40 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4003ms
rtt min/avg/max/mdev = 100.480/120.839/201.008/40.085 ms, pipe 2
[oracle@dg10g1 flash_recovery_area]$ date
Thu Oct 21 12:14:12 PHT 2010
[oracle@dg10g1 flash_recovery_area]$ scp flash_recovery_area.tar oracle@192.168.203.40:/u02/flash_recovery_area/
oracle@192.168.203.40's password: 
flash_recovery_area.tar                                                                                                                                                                                                  100%  198MB 620.9KB/s   05:27    
[oracle@dg10g1 flash_recovery_area]$ date
Thu Oct 21 12:20:58 PHT 2010
}}}


-- configured with 10Mbps with 10ms latency
tc qdisc change dev eth0 parent 1: handle 10: netem delay 10ms

[root@dg10g1 ~]# tc qdisc show
qdisc tbf 1: dev eth0 rate 10Mbit burst 1250Kb lat 9.8ms 
qdisc netem 10: dev eth0 parent 1: limit 1000 delay 10.0ms
{{{
[oracle@dg10g1 flash_recovery_area]$ ping 192.168.203.40
PING 192.168.203.40 (192.168.203.40) 56(84) bytes of data.
64 bytes from 192.168.203.40: icmp_seq=0 ttl=64 time=20.4 ms
64 bytes from 192.168.203.40: icmp_seq=1 ttl=64 time=9.58 ms
64 bytes from 192.168.203.40: icmp_seq=2 ttl=64 time=10.1 ms
64 bytes from 192.168.203.40: icmp_seq=3 ttl=64 time=10.1 ms
64 bytes from 192.168.203.40: icmp_seq=4 ttl=64 time=10.1 ms

--- 192.168.203.40 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4004ms
rtt min/avg/max/mdev = 9.586/12.093/20.428/4.174 ms, pipe 2
[oracle@dg10g1 flash_recovery_area]$ 
[oracle@dg10g1 flash_recovery_area]$ date ; scp flash_recovery_area.tar oracle@192.168.203.40:/u02/flash_recovery_area/ ; date
Thu Oct 21 12:26:22 PHT 2010
oracle@192.168.203.40's password: 
flash_recovery_area.tar                                                                                                                                                                                                  100%  198MB   1.2MB/s   02:53    
Thu Oct 21 12:29:19 PHT 2010
}}}


-- configured with 10Mbps with 1ms latency
tc qdisc change dev eth0 parent 1: handle 10: netem delay 1ms

[root@dg10g1 ~]# tc qdisc show
qdisc tbf 1: dev eth0 rate 10Mbit burst 1250Kb lat 9.8ms 
qdisc netem 10: dev eth0 parent 1: limit 1000 delay 999us
{{{
[root@dg10g1 ~]# ping 192.168.203.40
PING 192.168.203.40 (192.168.203.40) 56(84) bytes of data.
64 bytes from 192.168.203.40: icmp_seq=0 ttl=64 time=1.06 ms
64 bytes from 192.168.203.40: icmp_seq=1 ttl=64 time=1.20 ms
64 bytes from 192.168.203.40: icmp_seq=2 ttl=64 time=1.16 ms
64 bytes from 192.168.203.40: icmp_seq=3 ttl=64 time=1.71 ms
64 bytes from 192.168.203.40: icmp_seq=4 ttl=64 time=1.15 ms
64 bytes from 192.168.203.40: icmp_seq=5 ttl=64 time=1.55 ms
64 bytes from 192.168.203.40: icmp_seq=6 ttl=64 time=1.17 ms
64 bytes from 192.168.203.40: icmp_seq=7 ttl=64 time=1.37 ms
64 bytes from 192.168.203.40: icmp_seq=8 ttl=64 time=1.15 ms


[oracle@dg10g1 flash_recovery_area]$ date ; scp flash_recovery_area.tar oracle@192.168.203.40:/u02/flash_recovery_area/ ; date
Thu Oct 21 12:35:04 PHT 2010
oracle@192.168.203.40's password: 
flash_recovery_area.tar                                                                                                                                                                                                  100%  198MB   1.2MB/s   02:53    
Thu Oct 21 12:38:00 PHT 2010
}}}


-- configured with 10Mbps with 1ms latency (including main)
tc qdisc change dev eth0 root handle 1: tbf rate 10000kbit burst 10000kbit latency 1ms

[root@dg10g1 ~]# tc qdisc show
qdisc tbf 1: dev eth0 rate 10Mbit burst 1250Kb lat 978us 
qdisc netem 10: dev eth0 parent 1: limit 1000 delay 999us
{{{
[root@dg10g1 ~]# ping 192.168.203.40
PING 192.168.203.40 (192.168.203.40) 56(84) bytes of data.
64 bytes from 192.168.203.40: icmp_seq=0 ttl=64 time=1.05 ms
64 bytes from 192.168.203.40: icmp_seq=1 ttl=64 time=2.22 ms
64 bytes from 192.168.203.40: icmp_seq=2 ttl=64 time=1.14 ms
64 bytes from 192.168.203.40: icmp_seq=3 ttl=64 time=1.23 ms
64 bytes from 192.168.203.40: icmp_seq=4 ttl=64 time=2.46 ms
64 bytes from 192.168.203.40: icmp_seq=5 ttl=64 time=1.76 ms
64 bytes from 192.168.203.40: icmp_seq=6 ttl=64 time=2.81 ms
64 bytes from 192.168.203.40: icmp_seq=7 ttl=64 time=2.98 ms
64 bytes from 192.168.203.40: icmp_seq=8 ttl=64 time=2.98 ms


[oracle@dg10g1 flash_recovery_area]$ date ; scp flash_recovery_area.tar oracle@192.168.203.40:/u02/flash_recovery_area/ ; date
Thu Oct 21 12:40:21 PHT 2010
oracle@192.168.203.40's password: 
flash_recovery_area.tar                                                                                                                                                                                                  100%  198MB   1.2MB/s   02:53    
Thu Oct 21 12:43:16 PHT 2010
}}}
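One way to read the numbers above: a single TCP stream is bounded by window size divided by round-trip time, so latency, not just bandwidth, caps scp throughput. A rough sketch, assuming a hypothetical 64KB window:

```python
def tcp_throughput_bound_kb_s(window_kb, rtt_ms):
    # A single TCP stream can have at most one window of data in flight per RTT.
    return window_kb / (rtt_ms / 1000.0)

print(tcp_throughput_bound_kb_s(64, 100))  # 640.0 KB/s -- close to the 620.9KB/s measured at 100ms
print(tcp_throughput_bound_kb_s(64, 10))   # 6400.0 KB/s -- here the 10Mbps link (~1.25MB/s) is the cap instead
```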


References:
http://www.linuxfoundation.org/collaborate/workgroups/networking/netem
http://fedoraforum.org/forum/showthread.php?t=243272
http://henrydu.com/blog/how-to/simulate-a-slow-link-by-linux-bridge-123.html
http://mywiki.ncsa.uiuc.edu/wiki/Tips_and_Tricks#How_to_Simulate_a_Slow_Network
Peoplesoft MAA
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/database/features/availability/maa-peoplesoft-bestpractices-134154.pdf

Data Guard Implications of NOLOGGING operations from PeopleTools 8.48
http://blog.psftdba.com/2007/06/stuff-changes.html

PeopleSoft for the Oracle DBA
https://docs.google.com/viewer?url=http://www.atloaug.org/presentations/PeopleSoftDBARiley200504.ppt&pli=1

Reducing PeopleSoft Downtime Using a Local Standby Database
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/database/features/availability/maa-peoplesoft-local-standby-128609.pdf

Batch Processing in Disaster Recovery Configurations
https://docs.google.com/viewer?url=http://www.hitachi.co.jp/Prod/comp/soft1/oracle/pdf/OBtecinfo-08-008.pdf  <-- uses tc's Token Bucket Filter (TBF, from the iproute rpm) to limit output

A whitepaper on workload based performance management for PeopleSoft and DB2 on z/OS
https://docs.google.com/viewer?url=http://www.hewittandlarsen.com/_documents/WLM/WLM%2520for%2520PS.pdf

Securing Sensitive Data in PeopleSoft Applications
https://docs.google.com/viewer?url=http://www.ingrian.com/resources/sol_briefs/peoplesoft-sb.pdf

My PeopleSoft Disaster Recovery Adventure
http://www.erpassociates.com/peoplesoft-corner-weblog/peoplesoft/my-peoplesoft-disaster-recovery-adventure.html


Excessive redo
http://tech.groups.yahoo.com/group/psftdba/message/4030
http://tech.groups.yahoo.com/group/psftdba/message/4273
http://gasparotto.blogspot.com/2010/06/goldengate-database-for-peoplesoft.html
http://www.freelists.org/post/oracle-l/PeopleSoft-and-Logical-Standby
http://www.pythian.com/news/17127/redo-transport-compression/
http://el-caro.blogspot.com/2006/11/archivelog-compression.html
http://download.oracle.com/docs/cd/B14099_19/core.1012/b14003/sshpinfo.htm <-- 
http://www.oracle.com/technetwork/database/features/availability/dataguardnetwork-092224.html
http://jarneil.wordpress.com/2007/11/21/protecting-oracle-redo-transport/
Implementing SSH port forwarding with Data Guard 	Doc ID:	Note:225633.1
http://sdt.sumida.com.cn:8080/cs/blogs/wicky/archive/2006/10/30/448.aspx


Redo compression
Redo Transport Compression in a Data Guard Environment [ID 729551.1]
Enabling Encryption for Data Guard Redo Transport [ID 749947.1]
MAA - Data Guard Redo Transport and Network Best Practices [ID 387174.1]
Oracle 10g R2 and 11g R1 Database Feature Support Summary [ID 778861.1]
Changing the network used by the Data Guard Broker for redo transport [ID 730361.1]
Oracle Data Guard and SSH [ID 751528.1] <-- the announcement
Troubleshooting 9i Data Guard Network Issues [ID 241925.1] 
Manual Standby Database under Oracle Standard Edition
http://goo.gl/TvMO7
-- CERTIFICATION, PRE-REQ

Certification and Prerequisites for Oracle DataGuard
  	Doc ID: 	Note:234508.1



-- FAQ

Data Guard Knowledge Browser Product Page [ID 267955.1]

11gR1 Dataguard Content
  	Doc ID: 	798974.1

10gR2 Dataguard Content
  	Doc ID: 	739396.1



-- MIXED ENVIRONMENT

Data Guard Support for Heterogeneous Primary and Standby Systems in Same Data Guard Configuration
  	Doc ID: 	413484.1

Role Transitions for Data Guard Configurations Using Mixed Oracle Binaries
  	Doc ID: 	414043.1

http://www.freelists.org/post/oracle-l/Hetergenous-Dataguard




-- ARCHIVELOG MAINTENANCE
Maintenance Of Archivelogs On Standby Databases [ID 464668.1]
RMAN Best Practices - Log Maintenance, RMAN Configuration Best Practices Setup Backup Management Policies http://www.oracle.com/technetwork/database/features/availability/298772-132349.pdf
Configure RMAN to purge archivelogs after applied on standby [ID 728053.1]
http://martincarstenbach.wordpress.com/2009/10/08/archivelog-retention-policy-changes-in-rman-11g/
RMAN backups in Max Performance/Max Availability Data Guard Environment [ID 331924.1]
Configure RMAN to purge archivelogs after applied on standby [ID 728053.1]




-- MAINTENANCE
Using RMAN Effectively In A Dataguard Environment. [ID 848716.1]




-- RAC DATA GUARD

MAA - Creating a Single Instance Physical Standby for a RAC Primary [ID 387339.1]
MAA - Creating a RAC Physical Standby for a RAC Primary [ID 380449.1]
MAA - Creating a RAC Logical Standby for a RAC Primary 10gr2 [ID 387261.1]




Usage, Benefits and Limitations of Standby Redo Logs (SRL)
  	Doc ID: 	Note:219344.1 	
  	
Setup and maintenance of Data Guard Broker using DGMGRL
  	Doc ID: 	Note:201669.1
  	
9i Data Guard FAQ
  	Doc ID: 	Note:233509.1
  	
Migrating to RAC using Data Guard
  	Doc ID: 	Note:273015.1
  	
Data Guard 9i Creating a Logical Standby Database
  	Doc ID: 	Note:186150.1
  	
Reinstating a Logical Standby Using Backups Instead of Flashback Database
  	Doc ID: 	Note:416314.1
  	
WAITEVENT: "log file sync" Reference Note
  	Doc ID: 	Note:34592.1
  	
Standby Redo Logs are not Created when Creating a 9i Data Guard DB with RMAN
  	Doc ID: 	Note:185076.1
  	
Oracle10g: Data Guard Switchover and Failover Best Practices
  	Doc ID: 	Note:387266.1
  	
Script to Collect Data Guard Physical Standby Diagnostic Information
  	Doc ID: 	Note:241438.1
  	
Script to Collect Data Guard Primary Site Diagnostic Information
  	Doc ID: 	Note:241374.1
  	
Creating a 9i Data Guard Database with RMAN (Recovery Manager)
  	Doc ID: 	Note:183570.1
  	
Upgrading to 10g with a Physical Standby in Place
  	Doc ID: 	Note:278521.1
  	
Script to Collect Data Guard Logical Standby Table Information
  	Doc ID: 	Note:269954.1 	
  	
Comparitive Study between Oracle Streams and Oracle Data Guard
  	Doc ID: 	Note:300223.1
  	
Creating a 10g Data Guard Physical Standby on Linux
  	Doc ID: 	Note:248382.1
  	
9i Data Guard Primary Site and Network Configuration Best Practices
  	Doc ID: 	Note:240874.1
  	
The Gains and Pains of Nologging Operations
  	Doc ID: 	Note:290161.1
  	
How I make a standby database with Oracle Database Standard Edition
  	Doc ID: 	Note:432514.1 	
  	
Data Guard Gap Detection and Resolution
  	Doc ID: 	Note:232649.1
  	
Steps To Setup Replication Using Oracle Streams
  	Doc ID: 	Note:224255.1
  	
How To Setup Schema Level Streams Replication
  	Doc ID: 	Note:301431.1
  	
Installing and Using Standby Statspack in 11gR1
  	Doc ID: 	Note:454848.1
  	
Recovering After Loss of Redo Logs
  	Doc ID: 	Note:392582.1
  	
Hardware Assisted Resilient Data H.A.R.D
  	Doc ID: 	Note:227671.1
  	
A Study of Non-Partitioned NOLOGGING DML/DDL on Primary/Standby Data Dictionary
  	Doc ID: 	Note:150694.1
  	
Extracting Data from Redo Logs Is Not A Supported Interface
  	Doc ID: 	Note:97080.1 	
  	


-- RMAN - create physical standby

Step By Step Guide To Create Physical Standby Database Using RMAN
 	Doc ID:	Note:469493.1

Creating a Data Guard Database with RMAN (Recovery Manager) using Duplicate Command
  	Doc ID: 	Note:183570.1

Creating a Standby Database using RMAN (Recovery Manager)
  	Doc ID: 	Note:118409.1

Step By Step Guide To Create Physical Standby Database Using RMAN
  	Doc ID: 	Note:469493.1

Steps To Create Physical Standby Database
  	Doc ID: 	Note:736863.1



  	
  	
-- SWITCHOVER, FAILOVER
  	
Oracle10g: Data Guard Switchover and Failover Best Practices
  	Doc ID: 	Note:387266.1

Are Virtual IPs required for Data Guard?
http://blog.trivadis.com/blogs/yannneuhaus/archive/2008/02/06/are-virtual-ips-required-for-data-guard.aspx

Steps to workaround issue described in Alert 308698.1
  	Doc ID: 	368276.1

  	
  	
-- CASCADED STANDBY DATABASES

Cascaded Standby Databases
 	Doc ID:	Note:409013.1
 	
 	
-- LOG APPLY

Applied Archived Logs Not Getting Updated on the Standby Database
 	Doc ID:	Note:197032.1
 	
 	
-- RESIZE DATAFILE

Standby Database Behavior when a Datafile is Resized on the Primary Database
 	Doc ID:	Note:123883.1
 	
 	

-- UPGRADE WITH DATA GUARD

Upgrading to 10g with a Physical Standby in Place
 	Doc ID:	Note:278521.1
 	
Upgrading to 10g with a Logical Standby in Place
 	Doc ID:	Note:278108.1

Upgrading Oracle Applications 11i Database to 10g with Physical Standby in Place [ID 340859.1]



-- PATCH, PATCHSET

187242 "patch or patch set" to a dataguard systems

Applying Patchset with a 10g Physical Standby in Place (Doc ID 278641.1)



-- NETWORK PERFORMANCE

Network Bandwidth Implications of Oracle Data Guard
http://www.oracle.com/technology/deploy/availability/htdocs/dataguardnetwork.htm

High ARCH wait on SENDREQ wait events found in statspack report.
  	Doc ID: 	Note:418709.1

Refining Remote Archival Over a Slow Network with the ARCH Process
  	Doc ID: 	Note:260040.1

Troubleshooting 9i Data Guard Network Issues
  	Doc ID: 	Note:241925.1



-- REDO TRANSPORT

Redo Corruption Errors During Redo Transport
  	Doc ID: 	386417.1



-- LOGICAL STANDBY

Creating a Logical Standby with Minimal Production Downtime
  	Doc ID: 	278371.1



-- CLONE PHYSICAL STANDBY, RMAN PHYSICAL STANDBY

How I Created a Test Database with the RMAN Backup of the Physical Standby Database
  	Doc ID: 	428014.1

How to create a non ASM physical standby from an ASM primary [ID 790327.1]


-- DUPLICATE 

Creating a Data Guard Database with RMAN (Recovery Manager) using Duplicate Command [ID 183570.1]



-- MINIMAL DOWNTIME

How I Create a Physical Standby Database for a 24/7 Shop
  	Doc ID: 	580004.1


-- STARTUP

Data Guard 9i Data Guard Remote Process Startup Failed
  	Doc ID: 	Note:204848.1



-- DATA GUARD 8i

Data Guard 8i Setting up SSH using SSH-AGENT
  	Doc ID: 	Note:136377.1

How to Create a Oracle 8i Standby Database
  	Doc ID: 	Note:70233.1

Data Guard 8i Setup and Implementation
  	Doc ID: 	Note:132991.1



-- CREATE DATA GUARD CONFIGURATION

Creating a configuration using Data Guard Manager
  	Doc ID: 	Note:214071.1

Creating a Data Guard Configuration
  	Doc ID: 	Note:180031.1

Creating a Standby Database on a new host [ID 374069.1]





-- ROLLING FORWARD

Rolling a Standby Forward using an RMAN Incremental Backup in 10g
  	Doc ID: 	290814.1












How To Calculate The Required Network Bandwidth Transfer Of Archivelogs In Dataguard Environments

      Required bandwidth = ((Redo rate bytes per sec. / 0.7) * 8) / 1,000,000 = bandwidth in Mbps
      Note that if your primary database is a RAC database, you must run the Statspack snapshot on every RAC instance. Then, for each snapshot, sum the "Redo Size Per Second" values of all instances to obtain the net peak redo generation rate for the primary database. Each RAC node generates its own redo and independently sends it to the standby database, which is why the per-node redo rates must be summed.

  	Doc ID: 	736755.1
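As a quick sanity check, the rule of thumb above can be sketched in Python (the function name and sample redo rates are mine, not from the note):

```python
# Hedged sketch of the formula above:
#   required Mbps = ((peak redo bytes/sec / 0.7) * 8) / 1,000,000
# For a RAC primary, pass one peak redo rate per instance; they are
# summed because each node ships its own redo to the standby.

def required_bandwidth_mbps(redo_rates_bytes_per_sec):
    total = sum(redo_rates_bytes_per_sec)
    return ((total / 0.7) * 8) / 1_000_000

# Two RAC nodes peaking at 1 MB/s and 1.5 MB/s of redo:
print(round(required_bandwidth_mbps([1_000_000, 1_500_000]), 2))  # 28.57
```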

Creating physical standby using RMAN duplicate without shutting down the primary
  	Doc ID: 	789370.1

Effect of changing DBID using NID of Primary database when Physical standby in place - ORA-16012
  	Doc ID: 	829095.1

Note 219344.1 - Usage, Benefits and Limitations of Standby Redo Logs (SRL)

TRANSPORT: Data Guard Protection Modes
  	Doc ID: 	239100.1

Will a Standby Database in Read Only Mode Apply Archived Log Files?
  	Doc ID: 	136830.1

Note 330103.1 Ext/Mod How to Move Asm Database Files From one Diskgroup To Another

Moving Files Between Asm Disk Groups For Rac Primary/Standby Configuration
  	Doc ID: 	601643.1

How to Rename a Datafile in Primary Database When in Dataguard Configuration
  	Doc ID: 	733796.1

Hybrid Configurations using Data Guard and Remote-Mirroring
  	Doc ID: 	804623.1

http://www.oracle.com/technology/deploy/availability/htdocs/DataGuardRemoteMirroring.html

http://www.oracle.com/technology/deploy/availability/htdocs/dataguardprotection.html

What is the Database_role in Previous Version Equivalency for 9.2.X And 10g V$Database view
  	Doc ID: 	313130.1

Is using Transportable Tablespaces method supported in DataGuard?
  	Doc ID: 	471293.1

How to transport a Tablespace to Databases in a Physical Standby Configuration
  	Doc ID: 	467752.1

Note 343424.1 - Creating a 10gr2 Data Guard Physical Standby database with Real-Time apply
Note 388431.1 - Creating a Duplicate Database on a New Host.

Monitoring Physical Standby Progress
  	Doc ID: 	243709.1

Certification and Prerequisites for Oracle DataGuard
  	Doc ID: 	234508.1

Special Considerations About Physical Standby Databases
  	Doc ID: 	236659.1

V$ARCHIVED_LOG.APPLIED is Not Consistent With Standby Progress
  	Doc ID: 	263994.1

How to Use Standby Database in Read-Only Mode and Managed Recovery Mode at the Same Time
  	Doc ID: 	177859.1

Redo Transport Compression in a Data Guard Environment
  	Doc ID: 	729551.1

Data Guard and Network Disconnects
  	Doc ID: 	255959.1

Oracle Data Guard and SSH
  	Doc ID: 	751528.1

Developer and DBA Tips to Optimize SQL Apply
  	Doc ID: 	603361.1

Broker and SQL*Plus
  	Doc ID: 	744396.1

How To Open Physical Standby For Read Write Testing and Flashback
  	Doc ID: 	805438.1

Exporting Transportable Tablespace Fails from a Read-only Standby Database
  	Doc ID: 	252866.1

What Does Database in Limbo Mean When Seen in the Alert File?
  	Doc ID: 	165676.1

Standby Database Has Datafile In Recover Status
  	Doc ID: 	270043.1

Oracle Label Security Packages affect Data Guard usage of Switchover and connections to Primary Database
  	Doc ID: 	265192.1

Rman Backups On Standby Having Impact On Dataguard Max_availability Mode
  	Doc ID: 	259946.1

Dataguard-Automate Removal Of Archives Once Applied Against Physical Standby
  	Doc ID: 	260874.1

Alter Database Create Datafile
  	Doc ID: 	2103994.6

Is my Standby Database Working ?
  	Doc ID: 	136776.1




-- ORA-1031, HEARTBEAT FAILED TO CONNECT TO STANDBY

Transport : Remote Archival to Standby Site Fails with ORA-01031
  	Doc ID: 	353976.1

ORA-1031 for Remote Archive Destination on Primary
  	Doc ID: 	733793.1



-- ORA-16191 - PRIMARY LOG SHIPPING CLIENT NOT LOGGED ON STANDBY

Changing SYS password of PRIMARY database when STANDBY in place to avoid ORA-16191
  	Doc ID: 	806703.1

DATA GUARD TRANSPORT: ORA-01017 AND ORA-16191 WHEN SEC_CASE_SENSITIVE_LOGON=FALSE
  	Doc ID: 	815664.1

DATA GUARD LOG SHIPPING FAILS WITH ERROR ORA-16191 IN 11G
  	Doc ID: 	462219.1



-- ORA-1017 & ORA-2063, DATABASE LINK

Database Link from 10g to 11g fails with ORA-1017 & ORA-2063
  	Doc ID: 	473716.1 

ORA-1017 : Invalid Username/Password; Logon Denied. When Attempting to Change An Expired Password.
  	Doc ID: 	742961.1




-- EBUSINESS SUITE R12

Case Study : Configuring Standby Database(Dataguard) on R12 using RMAN Hot Backup
  	Doc ID: 	753241.1


-- REDO LOG REPOSITORY / PSEUDO STANDBY

Data Guard Archived Redo Log Repository Example
  	Doc ID: 	434164.1



-- RMAN ON STANDBY

Our Experience in Creating a clone database from RMAN backup of a physical standby database without using a recovery catalog
  	Doc ID: 	467525.1



-- FLASHBACK

How To Flashback Primary Database In Standby Configuration [ID 728374.1]



-- STANDBY REDO LOGS

Usage, Benefits and Limitations of Standby Redo Logs (SRL)
  	Doc ID: 	219344.1

Data Guard 9i Setup with Guaranteed Protection Mode	<-- not yet read.. but good stuff
  	Doc ID: 	150584.1

Online Redo Logs on Physical Standby	<-- add, drop, drop standby logfile
  	Doc ID: 	740675.1





-- DATA GUARD CONTROLFILE

CORRUPTION IN SNAPSHOT CONTROLFILE
  	Doc ID: 	268719.1

Steps to recreate a Physical Standby Controlfile
  	Doc ID: 	459411.1

Step By Step Guide On How To Recreate Standby Control File When Datafiles Are On ASM And Using Oracle Managed Files
  	Doc ID: 	734862.1




-- DATA GUARD TROUBLESHOOTING

Dataguard Information gathering to upload with the Service Requests
  	Doc ID: 	814417.1

10gR2 Dataguard Content			<-- ALL ABOUT ADMINISTRATION OF DATA GUARD
  	Doc ID: 	739396.1

Script to Collect Data Guard Logical Standby Table Information
  	Doc ID: 	269954.1

Creating a 10gr2 Data Guard Physical Standby database with Real-Time apply
  	Doc ID: 	343424.1

How to Add/Drop/Resize Redo Log with Physical Standby in place.
  	Doc ID: 	473442.1

Online Redo Logs on Physical Standby
  	Doc ID: 	740675.1



-- DATA GUARD REMOVE

How to Remove Standby Configuration from Primary Database
  	Doc ID: 	733794.1



-- BROKER

Setup and maintenance of Data Guard Broker using DGMGRL
  	Doc ID: 	201669.1

Creating a configuration using Data Guard Manager
  	Doc ID: 	214071.1

10g DGMGRL CLI Configuration
  	Doc ID: 	260112.1

Data Guard Broker and SQL*Plus
  	Doc ID: 	783445.1

Data Guard Switchover Not Completed Successfully	<-- 9i issue
  	Doc ID: 	308158.1



-- BROKER BUG

Broker shutdown can lead to ora-600 [kjcvg04] in RAC ENV.
  	Doc ID: 	840627.1




-- FAILSAFE

How to Use Oracle Failsafe With Oracle Data Guard for RDBMS versions 10g
  	Doc ID: 	373204.1


-- FAST START FAILOVER

IMPLEMENTING FAST-START FAILOVER IN 10GR2 DATAGUARD BROKER ENVIRONMENT
  	Doc ID: 	359555.1


-- DATA GUARD BEST PRACTICE

Oracle10g: Data Guard Switchover and Failover Best Practices
  	Doc ID: 	387266.1

Data Guard Broker High Availability
  	Doc ID: 	275977.1








''From "Oracle Data Guard 11g Handbook"''
<<<
If, however, you have chosen Maximum Availability or Maximum Protection mode, then that
latency is going to have a big effect on your production throughput. Several calculations can be
used to determine latency, most of which try to include the latency introduced by the various
hardware devices at each end. But since the devices used in the industry all differ, it is difficult to
determine how long the network has to be to maintain a 1 millisecond (ms) RTT. A good rule of
thumb (in a perfect world) is that a 1 ms RTT is about 33 miles (or 53 km). This means that if you
want to keep your production impact down to the 4 percent range, you will need to keep the
latency down to 10ms, or 300 miles (in a perfect world, of course). You will have to examine, test,
and evaluate your network to see if it actually matches up to these numbers. Remember that
latency depends on the size of the packet, so don’t just ping with 56 bytes, because the redo you
are generating is a lot bigger than that.
<<<

''Rule Of Thumb... taken from "Oracle Data Guard 11g Handbook"''
<<<
1 mile = 1.609 km
a normal "ping" sends 56-byte packets

In a perfect world ===> ''1ms (ping RTT) = 33 miles = ~53 km''
If you want to keep the production impact to ''4%'', then keep the latency down to ''10ms, or 300 miles''
<<<

''Tests taken from "Oracle Data Guard 11g Handbook"'':
<<<
Output from a ping going from Texas to New Hampshire (''about 1990 miles'') at night, when nothing else is going on using ''56 bytes'' and ''64,000 bytes''

''==> @56bytes ping''
ping -c 10 <hostname>
ping average = 49.122
= 1990/49.122
= ''1ms = 40miles''

''==> @64000bytes ping''
ping -c 10 -s 64000 <hostname>
ping average = 66.82
= 1990/66.82
= ''1ms = 29.8 miles'' (the book quotes 27 miles)

The small packet is getting about 40 miles to the millisecond,
but the larger packet is getting around only 27 miles per millisecond. Still not bad and right around
our guess of about 33 miles to the millisecond. So given this network, you could potentially go
270 miles and keep it within the 4 percent range, depending on the redo generation rate and the
bandwidth, which are not shown here. Of course, you would want to use a more reliable and
detailed tool to determine your network latency—something like traceroute. 

These examples are just that, examples. A lot of things affect your ability to ship redo across the
network. As we have shown, these include the overhead caused by network acknowledgments,
network latency, and other factors. All of these will be unique to your workload and need to
be tested.
<<<
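The arithmetic in the ping tests above reduces to a one-liner; here is a minimal sketch (the distance and RTT figures are the handbook's Texas-to-New-Hampshire example):

```python
def miles_per_ms(distance_miles, avg_rtt_ms):
    # Effective "miles per millisecond" implied by an average ping RTT.
    return distance_miles / avg_rtt_ms

print(round(miles_per_ms(1990, 49.122), 1))  # 56-byte ping:     40.5
print(round(miles_per_ms(1990, 66.82), 1))   # 64,000-byte ping: 29.8
```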

For Batch jobs 
“Batch Processing in Disaster Recovery Configurations - Best Practices for Oracle Data Guard” (http://goo.gl/hHhK) 


From Frits
<<<
You mentioned a 3.5T tablespace. If your storage connection is 1GbE (as an example), and you are able to use the entire bandwidth, restoring that tablespace should take at least:
3.5 TB * 1024 = 3,584 GB * 1024 = 3,670,016 MB
1 Gigabit / 8 = 125 MB/s
3,670,016 / 125 = 29,360 seconds to transport / 60 = 489 minutes / 60 = 8.15 hours
<<<
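Frits's back-of-the-envelope estimate generalizes easily; a sketch (assuming, as in the quote, that the full link bandwidth is usable):

```python
def restore_hours(size_tb, link_gbit=1.0):
    # Minimum transfer time for a datafile restore over a given link.
    size_mb = size_tb * 1024 * 1024      # TB -> MB
    mb_per_sec = link_gbit * 1000 / 8    # 1 Gbit/s ~= 125 MB/s
    return size_mb / mb_per_sec / 3600

print(round(restore_hours(3.5), 2))  # 8.16 hours for 3.5 TB over 1GbE
```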




http://gjilevski.wordpress.com/2010/07/24/managing-data-guard-11g-r2-with-oem-11g/
State of the Art in Database Replication
https://docs.google.com/viewer?url=http://gorda.di.uminho.pt/library/wp1/GORDA-D1.1-V1.2-p.pdf

Improving Performance in Replicated Databases through Relaxed Coherency
https://docs.google.com/viewer?url=http://reference.kfupm.edu.sa/content/i/m/improving_performance_in_replicated_data_60451.pdf
ETL Microservices using Kafka for Fast Big Data - DataTorrent AppFactory https://www.youtube.com/watch?v=4r11a65wY28

http://www.idevelopment.info/data/Oracle/DBA_tips/LOBs/
DDL commands for LOBs: http://www.idevelopment.info/data/Oracle/DBA_tips/LOBs/LOBS_2.shtml


-- ''LONG''

How to overcome a few restrictions of LONG data type [ID 205288.1]
How to Copy Data from a Table with a LONG Column into an Existing Table [ID 119489.1]
http://www.orafaq.com/wiki/SQL*Plus_FAQ
http://www.orafaq.net/wiki/LONG_RAW
http://www.orafaq.net/wiki/LONG
http://arjudba.blogspot.com/2008/07/char-varchar2-long-etc-datatype-limits.html
http://arjudba.blogspot.com/2008/06/how-to-convert-long-data-type-to-lob.html
http://www.orafaq.com/forum/t/119648/0/
http://articles.techrepublic.com.com/5100-10878_11-6177742.html# <----- nice explanation


-- ''BLOB''

Summary Note Index for BasicFiles(LOB's/BLOB's/CLOB's/NCLOB's,BFILES) and SecureFiles [ID 198160.1]
Export and Import of Table with LOB Columns (like CLOB and BLOB) has Slow Performance [ID 281461.1]
Troubleshooting Guide (TSG) - Large Objects (LOBs) [ID 846562.1]
LOBS - Storage, Redo and Performance Issues [ID 66431.1]
ORA-01555 And Other Errors while Exporting Table With LOBs, How To Detect Lob Corruption. [ID 452341.1]
LOBs and ORA-01555 troubleshooting [ID 846079.1]
How to determine the actual size of the LOB segments and how to free the deleted/unused space above/below the HWM [ID 386341.1]
How to move LOB Data to Another Tablespace [ID 130814.1]


--  ''NOT NULL INTERVAL DAY(5) TO SECOND(1)''
to convert to seconds http://www.dbforums.com/oracle/1044035-converting-interval-day-second-integer.html
to convert to days,hours,mins http://community.qlikview.com/thread/38211
example
{{{
-- TO VIEW RETENTION INFORMATION
set lines 300
col snap_interval format a30
col retention format a30
select DBID, SNAP_INTERVAL, 
EXTRACT(DAY FROM SNAP_INTERVAL) ||
      ' days, ' || EXTRACT (HOUR FROM SNAP_INTERVAL) ||
      ' hours, ' || EXTRACT (MINUTE FROM SNAP_INTERVAL) ||
      ' minutes' as snap_interval
,
((TRUNC(SYSDATE) + SNAP_INTERVAL - TRUNC(SYSDATE)) * 86400)/60 AS SNAP_INTERVAL_MINS
,
RETENTION,
((TRUNC(SYSDATE) + RETENTION - TRUNC(SYSDATE)) * 86400)/60 AS RETENTION_MINS
,TOPNSQL from dba_hist_wr_control
where dbid in (select dbid from v$database);
}}}
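The INTERVAL-to-minutes arithmetic above is easy to sanity-check outside the database; a Python sketch using timedelta as a stand-in for INTERVAL DAY TO SECOND (the sample values are illustrative, not from dba_hist_wr_control):

```python
from datetime import timedelta

# Stand-ins for SNAP_INTERVAL and RETENTION
snap_interval = timedelta(hours=1)
retention = timedelta(days=8)

# Equivalent of ((TRUNC(SYSDATE) + interval - TRUNC(SYSDATE)) * 86400) / 60
print(snap_interval.total_seconds() / 60)  # 60.0 minutes
print(retention.total_seconds() / 60)      # 11520.0 minutes
```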


''Timestamp data type''
{{{
DATE and TIMESTAMP Datatypes
http://www.databasejournal.com/features/oracle/article.php/2234501/A-Comparison-of-Oracles-DATE-and-TIMESTAMP-Datatypes.htm
http://psoug.org/reference/timestamp.html
}}}





[[cloud data warehouse, cloud dw, cloud datawarehouse]]

Data Warehouse page
http://www.oracle.com/us/solutions/datawarehousing/index.html

Database focus areas
http://www.oracle.com/technetwork/database/focus-areas/index.html

Parallelism and Scalability for Data Warehousing 
http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/dbbi-tech-info-sca-090608.html

DW and BI page - Oracle Database for Business Intelligence and Data Warehousing
http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/index.html

Data Warehousing - Best Practices page
http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/dbbi-tech-info-best-prac-092320.html

''Best Practices for Data Warehousing on the Oracle Database Machine X2-2 [ID 1297112.1]''

Best practices for a Data Warehouse on Oracle Database 11g http://www.uet.vnu.edu.vn/~thuyhq/Courses_PDF/$twp_dw_best_practies_11g11_2008_09.pdf
http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-dw-best-practies-11g11-2008-09-132076.pdf

2 day DW guide http://docs.oracle.com/cd/B28359_01/server.111/b28314.pdf

DATA WAREHOUSING BIG DATA "made" EASY https://www.youtube.com/watch?v=DeExbclijPg
1keydata tutorial http://www.1keydata.com/datawarehousing/datawarehouse.html

PX masterclass https://www.slideshare.net/iarsov/parallel-execution-with-oracle-database-12c-masterclass




http://docs.oracle.com/cd/B28359_01/server.111/b28314/tdpdw_bandr.htm
''Data Warehouse Best Practices''
<<<
http://blogs.oracle.com/datawarehousing/2010/05/data_warehouse_best_practices.html
http://structureddata.org/2011/06/15/real-world-performance-videos-on-youtube-oltp/                       <-- VIDEO
http://structureddata.org/2011/06/15/real-world-performance-videos-on-youtube-data-warehousing/        <-- VIDEO
http://www.oracle.com/technology/products/bi/db/11g/dbbi_tech_info_best_prac.html
<<<

''Parallelism and Scalability for Data Warehousing''
<<<
http://www.oracle.com/technology/products/bi/db/11g/dbbi_tech_info_sca.html
<<<

''Whitepapers''
{{{
http://www.oracle.com/technology/products/bi/db/11g/pdf/twp_dw_best_practies_11g11_2008_09.pdf
http://www.oracle.com/technology/products/bi/db/11g/pdf/twp_bidw_parallel_execution_11gr1.pdf
}}}

Dion Cho
{{{
http://dioncho.wordpress.com/2009/01/23/misunderstanding-on-top-sqls-of-awr-repository/
http://dioncho.wordpress.com/2009/02/20/how-was-my-parallel-query-executed-last-night-awr/
http://dioncho.wordpress.com/2009/02/16/the-most-poweful-way-to-monitor-parallel-execution-vpq_tqstat/
http://dioncho.wordpress.com/2009/03/12/automating-tkprof-on-parallel-slaves/


Following is a small test case to demonstrate how Oracle captures the top SQLs.

-- create objects
create table parallel_t1(c1 int, c2 char(100));
insert into parallel_t1
select level, 'x'
from dual
connect by level <= 1000000
;

commit;

-- generate one parallel query
select /*+ parallel(parallel_t1 4) */ count(*) from parallel_t1;

or 

-- generate many many TOP sqls. here we generate 100 top sqls which do full scan on table t1
set heading off
set timing off
set feedback off
spool select2.sql

select 'select /*+ top_sql_' || mod(level,100) || ' */ count(*) from parallel_t1;'
from dual
connect by level <= 10000;
spool off
ed select2

-- check the select2.sql

-- Now we capture the SQLs
exec dbms_workload_repository.create_snapshot;
@select2
exec dbms_workload_repository.create_snapshot;

-- AWR Report would show that more than 30 top sqls are captured
@?/rdbms/admin/awrrpt
}}}


Jonathan Lewis
{{{
http://jonathanlewis.wordpress.com/2010/01/03/pseudo-parallel/
http://jonathanlewis.wordpress.com/2008/11/05/px-buffer/
http://jonathanlewis.wordpress.com/2007/06/25/qb_name/
http://jonathanlewis.wordpress.com/2007/05/29/autoallocate-and-px/
http://jonathanlewis.wordpress.com/2007/03/14/how-parallel/
http://jonathanlewis.wordpress.com/2007/02/19/parallelism-and-cbo/
http://jonathanlewis.wordpress.com/2007/01/11/rescresp/
http://jonathanlewis.wordpress.com/2006/12/28/parallel-execution/
}}}

Doug
{{{
http://oracledoug.com/serendipity/index.php?/archives/774-Direct-Path-Reads.html
}}}

Greg Rahn
{{{
http://structureddata.org/category/oracle/parallel-execution/
}}}

Riyaj Shamsudeen 
{{{
RAC, parallel query and udpsnoop
http://orainternals.wordpress.com/2009/06/20/rac-parallel-query-and-udpsnoop/	
}}}

Sheeri Cabral 
{{{
Data Warehousing Best Practices: Comparing Oracle to MySQL
http://www.pythian.com/news/15157/data-warehousing-best-practices-comparing-oracle-to-mysql-part-1-introduction-and-power/
http://www.pythian.com/news/15167/data-warehousing-best-practices-comparing-oracle-to-mysql-part-2-partitioning/
}}}
-- Oracle Optimized Warehouse
Oracle Exadata Best Practices (Doc ID 757552.1)
Oracle Optimized Warehouse for HP (Doc ID 779222.1)
HP Oracle Exadata Performance Best Practices (Doc ID 759429.1)
Oracle Sun Database Machine Setup/Configuration Best Practices (Doc ID 1067527.1)
Oracle Sun Database Machine Performance Best Practices (Doc ID 1067520.1)
Oracle Sun Database Machine Application Best Practices for Data Warehousing (Doc ID 1094934.1)
HP Exadata Setup/Configuration Best Practices (Doc ID 757553.1)
http://www.emc.com/collateral/hardware/white-papers/h6015-oracle-data-warehouse-sizing-dmx-4-dell-wp.pdf
	



-- PARALLELISM
Tips to Reduce Waits for "PX DEQ CREDIT SEND BLKD" at Database Level (Doc ID 738464.1)
Parallel Direct Load Insert DML (Doc ID 146631.1)
Using Parallel Execution (Doc ID 203238.1)
Parallel Capabilities of Oracle Data Pump (Doc ID 365459.1)
How to Refresh a Materialized View in Parallel (Doc ID 577870.1)
FAQ's about Parallel/Noparallel Hints. (Doc ID 263153.1)
SQL statements that run in parallel with NO_PARALLEL hints (Doc ID 267330.1)
	



-- PX SETUP
Where to find Information about Parallel Execution in the Oracle Documentation (Doc ID 184417.1)
Fundamentals of the Large Pool (Doc ID 62140.1)
Health Check Alert: parallel_execution_message_size is not set greater than or equal to the recommended value (Doc ID 957436.1)
Disable Parallel Execution on Session/System Level (Doc ID 235400.1)
	



-- PARALLELISM ISSUES
Why didn't my parallel query use the expected number of slaves? (Doc ID 199272.1)
Why did my query go parallel? (Doc ID 196938.1)

	


-- PARALLELISM SCRIPT
Report for the Degree of Parallelism on Tables and Indexes (Doc ID 270837.1)
Old and new Syntax for setting Degree of Parallelism (Doc ID 260845.1)
Script to map Senderid in PX Wait Event to an Oracle Process (Doc ID 304317.1)
Procedure PqStat to monitor Current PX Queries (Doc ID 240762.1)
Script to map Parallel Execution Server to User Session (Doc ID 344196.1)
Script to map parallel query coordinators to slaves (Doc ID 202219.1)
Script to monitor PX limits from Resource Manager for active sessions (Doc ID 240877.1)
Script to monitor parallel queries (Doc ID 457857.1)                                                          <-------------- GOOD STUFF





-- PARALLELISM AND MEMORY
PX Slaves take sometimes a lot of memory (Doc ID 240883.1)
Parallel Execution the Large/Shared Pool and ORA-4031 (Doc ID 238680.1)


-- PX & TRIGGER
Can a PX Be Triggered by an User or an Event Can Trigger the PX (Doc ID 960694.1)


-- PARALLELISM WAIT EVENTS
Parallel Query Wait Events (Doc ID 191103.1)
Statspack Report has PX (Parallel Query) Idle Events shown in Top Waits (Doc ID 353603.1)
WAITEVENT: "PX Deq Credit: send blkd" (Doc ID 271767.1)
WAITEVENT: "PX Deq: Execute Reply" (Doc ID 270916.1)
WAITEVENT: "PX Deq: Execution Msg" Reference Note (Doc ID 69067.1)
WAITEVENT: "PX Deq: Table Q Normal" (Doc ID 270921.1)
WAITEVENT: "PX Deq Credit: need buffer" (Doc ID 253912.1)
Wait Event 'PX qref latch' (Doc ID 240145.1)
WAITEVENT: "PX Deq: Join ACK" (Doc ID 250960.1)
WAITEVENT: "PX Deq: Signal ACK" (Doc ID 257594.1)
WAITEVENT: "PX Deq: Parse Reply" (Doc ID 257596.1)
WAITEVENT: "PX Deq: reap credit" (Doc ID 250947.1)
WAITEVENT: "PX Deq: Msg Fragment" (Doc ID 254760.1)
WAITEVENT: "PX Idle Wait" (Doc ID 257595.1)
WAITEVENT: "PX server shutdown" (Doc ID 250357.1)
WAITEVENT: "PX create server" (Doc ID 69106.1)



-- 10046 TRACE ON PX
Tracing PX session with a 10046 event or sql_trace (Doc ID 242374.1)
Tracing Parallel Execution with _px_trace. Part I (Doc ID 444164.1)



-- PX ERRORS
OERR: ORA-12853 insufficient memory for PX buffers: current %sK, max needed %s (Doc ID 287751.1)
Bug 6981690 - Cursor not shared when running PX query on mounted RAC system (Doc ID 6981690.8)
Bug 4336528 - PQ may be slower than expected (timeouts on "PX Deq: Signal ACK") (Doc ID 4336528.8)
Bug 5023410 - QC can wait on "PX Deq: Join ACK" when slave is available (Doc ID 5023410.8)
Bug 5030215 - Excessive waits on PX Deq Signal ACK when RAC enabled (Doc ID 5030215.8)
Error With Create Session When Invoking PX (Doc ID 782073.1)
Creating Session Failed Within PX (Doc ID 781437.1)
5 minute Delay Observed In Message Processing after RAC reconfiguration (Doc ID 458898.1)



-- KILL PX
The simplest Solution to kill a PX Session at OS Level (Doc ID 738618.1)




-- WEBINARS
	Selected Webcasts in the Oracle Data Warehouse Global Leaders Webcast Series (Doc ID 1306350.1)



{{{
parallel_automatic_tuning=false                 <--- currently set to TRUE which is a deprecated parameter in 10g
parallel_max_servers=64                             <--- the current value is just too high, caused by parallel_automatic_tuning
parallel_adaptive_multi_user=false             <--- best practice recommends to set this to false to have predictable performance
db_file_multiblock_read_count=64              <--- 1024/16 (1MB max I/O size divided by your 16KB blocksize)
parallel_execution_message_size=16384    <--- best practice recommends to set this to this value
}}}
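The db_file_multiblock_read_count note above (1024/16) follows the usual max-I/O-size over block-size sizing rule; a trivial sketch (function name is mine):

```python
def mbrc(max_io_kb=1024, block_size_kb=16):
    # db_file_multiblock_read_count = max I/O size / database block size
    return max_io_kb // block_size_kb

print(mbrc())         # 64 for a 16KB block and 1MB max I/O
print(mbrc(1024, 8))  # 128 for an 8KB block
```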


http://www.freelists.org/post/oracle-l/PX-Deq-Credit-send-blkd
{{{
Christo Kutrovsky
http://www.freelists.org/post/oracle-l/PX-Deq-Credit-send-blkd,13

Note that only if  you have parallel_automatic_tuning=true then the
buffers are allocated from LARGE_POOL, otherwise (the default) they
come from the shared pool, which may be an issue when you try to
allocate 64kb chunks.
}}}

{{{
Craig Shallahamer
http://shallahamer-orapub.blogspot.com/2010/04/finding-parallelization-sweet-spot-part.html
http://shallahamer-orapub.blogspot.com/2010/04/parallelization-vs-duration-part-2.html
http://shallahamer-orapub.blogspot.com/2010/04/parallelism-introduces-limits-part-3.html
}}}

{{{
Christian Antognini
http://www.freelists.org/post/oracle-l/PX-Deq-Credit-send-blkd,22

> alter session force parallel ddl parallel 32;

This should not be necessary. The parallel DDL are enabled by default...
You can check that with the following query:

select pddl_status 
from v$session 
where sid = sys_context('userenv','sid')
}}}


{{{
PX Deq Credit: send blkd - wait for what?
http://www.asktherealtom.ch/?p=8

PX Deq Credit: send blkd caused by IDE (SQL Developer, Toad, PL/SQL Developer)
http://iamsys.wordpress.com/2010/03/24/px-deq-credit-send-blkd-caused-by-ide-sql-developer-toad-plsql-developer/

http://www.freelists.org/post/oracle-l/PX-Deq-Credit-send-blkd

How can I associate the parallel query slaves with the session that's running the query?
http://www.jlcomp.demon.co.uk/faq/pq_proc.html
}}}

{{{

What event are the consumer slaves waiting on?

set linesize 150
col "Wait Event" format a30

select s.sql_id,
       px.INST_ID "Inst",
       px.SERVER_GROUP "Group",
       px.SERVER_SET "Set",
       px.DEGREE "Degree",
       px.REQ_DEGREE "Req Degree",
       w.event "Wait Event"
from GV$SESSION s, GV$PX_SESSION px, GV$PROCESS p, GV$SESSION_WAIT w
where s.sid (+) = px.sid and
      s.inst_id (+) = px.inst_id and
      s.sid = w.sid (+) and
      s.inst_id = w.inst_id (+) and
      s.paddr = p.addr (+) and
      s.inst_id = p.inst_id (+)
ORDER BY decode(px.QCINST_ID,  NULL, px.INST_ID,  px.QCINST_ID),
         px.QCSID,
         decode(px.SERVER_GROUP, NULL, 0, px.SERVER_GROUP),
         px.SERVER_SET,
         px.INST_ID;
}}}


Installing Database Vault in a Data Guard Environment
  	Doc ID: 	754065.1
http://docs.oracle.com/cd/E11882_01/server.112/e23090/dba.htm
http://www.oracle.com/technetwork/database/security/twp-oracle-database-vault-sap-2009-128981.pdf
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r1/prod/security/datavault/datavault2.htm Restricting Command Execution Using Oracle Database Vault
[img[picturename| https://lh6.googleusercontent.com/-5ZeHtRGSEKI/TeVlmgna5XI/AAAAAAAABSo/N4hMYIhLLkc/s800/IMG_4070.JPG]]

Securing Linux Servers - https://www.pluralsight.com/courses/securing-linux-servers


Series: Project Lockdown - A phased approach to securing your database infrastructure
http://www.oracle.com/technetwork/articles/index-087388.html
http://blog.red-database-security.com/2010/09/10/update-of-project-lockdown-released/


http://www.cyberciti.biz/tips/tips-to-protect-linux-servers-physical-console-access.html


-- DoD files
http://www.disa.mil/About/Search-Results?q=oracle&col=iase&s=Search
http://iase.disa.mil/stigs/app_security/database/oracle.html
http://iase.disa.mil/stigs/app_security/database/general.html

http://www.cvedetails.com/vulnerability-list/vendor_id-93/product_id-13824/Oracle-Database-11g.html
http://www.cisecurity.org/
{{{
Pre-req reading materials
    Read Chapters 14, 15, and 10 of this book (in that particular order!): Beginning_11g_Admin_From_Novice_to_Professional.pdf
    to understand why we need to do database health checks and to have an idea of our value to our clients

Alignment to the IT Service Management
    There are 10 components of ITSM and these are as follows: 
	Service Level Management
	Financial Management
	Service Continuity Management
	Capacity Management
	Availability Management
	Incident Management
	Problem Management
	Change Management
	Configuration Management
	Release Management

    For simplicity, and to align them with the health check tasks, the 10 components are categorized as follows: 
	Performance and Availability
	Service Level Management
	Capacity Management
	Availability Management
	Backup and Recovery
	Service Continuity Management
	Incident/Problem Management
	Incident Management
	Problem Management
	Configuration Management
	Financial Management
	Change Management
	Configuration Management
	Release Management

The Health Check Checklist
    Gather information on the environment
    Database Maintenance
	Backups
	    Check the backup log
	Log file maintenance (see TrimLogs)
	    Trim the alert log
	    Trim the backup log
	    Trim/delete files at the user dump directories
	    Trim listener log file
	    Trim sqlnet log file
	Configuration Management
	    Check installed Oracle software
	    Gather RDA
	    Check the DBA_FEATURE_USAGE_STATISTICS
	Statistics
	Archive & Purge
	Rebuilding
	Auditing
	User Management
	Capacity Management
	Patching
    Database Monitoring
	Database Availability
	    Check the alert log (see GetAlertLog)
	    Check the backup log
	    Check the archive mode
	    Check nologging tables
	    Check the control files
	    Check Redo log files and sizes
	    Check database parameters
	      SGA size
	      Undo management
	      Memory management
	Database Changes
	    Check changes on the database parameters
	    Check on recent DDLs (if possible)
	Security
	    Check the audit logs
	Space and Growth
	    Check local and dictionary managed tablespace
	    Check tablespace usage
	    Check tablespace quotas
	    Check temporary tablespace
	    Check tablespace fragmentation
	    Check datafiles with autoextend
	    Check segment growth or top segments
	Workload and Capacity
	    Check the AAS
	    Check the CPU, IO, memory, network workload
	    Check the top timed events
	Performance
	    Check the top SQLs
	    Check unstable SQLs
	Database Objects
	    Check objects unable to extend
	    Check objects reaching max extents
	    Check sequences reaching max value
	    Check row migration and chaining
	    Check invalid objects
	    Check table statistics
	    Check index statistics
	    Check rollback segments (for 8i below)
	    Check resource contention (locks, enqueue)
    Analysis
    Documentation and recommendation of action plans
    Validation of action plans
    Execution of action plans
}}}


See also [[PerformanceTuningReport]] for possible report/analysis formats




Top DBA Shell Scripts for Monitoring the Database
http://communities.bmc.com/communities/docs/DOC-9942#tablespace

''Interesting scripts on this grid infra directory''
{{{
oracle@enkdb01.enkitec.com:/home/oracle/dba/etc:dbm1
$ locate "/pluggable/unix"
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/alert_log_file_size_analyzer.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/bdump_dest_trace_analyzer.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_default_gateway.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_disk_asynch_io_linking.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_e1000.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_jumbo_frames.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_network_packet_reassembly.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_network_param.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_non_routable_network_interconnect.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_rp_filter.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_tcp_packet_retransmit.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_vip_restart_attempt.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/check_vmm.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/checkcorefile.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/checkhugepage.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/checkmemlock.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/checkportavail.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/checkramfs.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/checksshd.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/common_include.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/core_dump_dest_analyzer.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/css_diagwait.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/css_disk_timeout.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/css_misscount.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/css_reboot_time.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/getNICSpeed.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/hangcheck_margin.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/hangcheck_reboot.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/hangcheck_tick.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/hangchecktimer.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/listener_naming_convention.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/ora_00600_errors_analyzer.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/ora_07445_errors_analyzer.sh
/u01/app/11.2.0.3/grid/cv/remenv/pluggable/unix/shutdown_hwclock_sync.sh
}}}

Bug No. 	1828368
SYS.LINK$ CONTAINS UNENCRYPTED PASSWORDS OF REMOTE LOGIN 


Duplicate table over db link
http://laurentschneider.com/wordpress/2011/09/duplicate-table-over-database-link.html

Tuning query with database link using USE_NL hint http://msutic.blogspot.com/2012/03/tuning-distributed-query-using-usenl.html
version.extensions.DcTableOfContentsPlugin= {
	major: 0, minor: 4, revision: 0,
	type: "macro",
	source: "http://devpad.tiddlyspot.com#DcTableOfContentsPlugin"
};

// Replace heading formatter with our own
for (var n=0; n<config.formatters.length; n++) {
	var format = config.formatters[n];
	if (format.name == 'heading') {
		format.handler = function(w) {
			// the following two lines are the default handler
			var e = createTiddlyElement(w.output, "h" + w.matchLength);
			w.subWikifyTerm(e, this.termRegExp); //updated for TW 2.2+

			// Only show [top] if current tiddler is using showtoc
			if (w.tiddler && w.tiddler.isTOCInTiddler == 1) {
				// Create a container for the default CSS values
				var c = createTiddlyElement(e, "div");
				c.setAttribute("style", "font-size: 0.5em; color: blue;");
				// Create the link to jump to the top
				createTiddlyButton(c, " [top]", "Go to top of tiddler", window.scrollToTop, "dcTOCTop", null, null);
			}
		}
		break;
	}
}

config.macros.showtoc = {
	handler: function(place, macroName, params, wikifier, paramString, tiddler) {
		var text = "";
		var title = "";
		var myTiddler = null;

		// Did they pass in a tiddler?
		if (params.length) {
			title = params[0];
			myTiddler = store.getTiddler(title);
		} else {
			myTiddler = tiddler;
		}

		if (myTiddler == null) {
			wikify("ERROR: Could not find " + title, place);
			return;
		}

		var lines = myTiddler.text.split("\n");
		myTiddler.isTOCInTiddler = 1;

		// Create a parent container so the TOC can be customized using CSS
		var r = createTiddlyElement(place, "div", null, "dcTOC");
		// create toggle button
		createTiddlyButton(r, "/* Table of Contents */", "show/collapse table of contents",
			function() { config.macros.showtoc.toggleElement(this.nextSibling); },
			"toggleButton");
		// Create a container so the TOC can be customized using CSS
		var c = createTiddlyElement(r, "div");

		if (lines != null) {
			for (var x=0; x<lines.length; x++) {
				var line = lines[x];
				if (line.substr(0,1) == "!") {
					// Find first non ! char
					for (var i=0; i<line.length; i++) {
						if (line.substr(i, 1) != "!") {
							break;
						}
					}
					var desc = line.substring(i);
					// Remove WikiLinks
					desc = desc.replace(/\[\[/g, "");
					desc = desc.replace(/\]\]/g, "");

					text += line.substr(0, i).replace(/[!]/g, '*');
					text += '<html><a href="javascript:;" onClick="window.scrollToHeading(\'' + title + '\', \'' + desc+ '\', event)">' + desc+ '</a></html>\n';
				}
			}
		}
		wikify(text, c);
	}
}

config.macros.showtoc.toggleElement = function(e) {
	if(e) {
		if(e.style.display != "none") {
			e.style.display = "none";
		} else {
			e.style.display = "";
		}
	}
};

window.scrollToTop = function(evt) {
	if (! evt)
		var evt = window.event;

	var target = resolveTarget(evt);
	var tiddler = story.findContainingTiddler(target);

	if (! tiddler)
		return false;

	window.scrollTo(0, ensureVisible(tiddler));

	return false;
};

window.scrollToHeading = function(title, anchorName, evt) {
	var tiddler = null;

	if (! evt)
		var evt = window.event;

	if (title) {
		story.displayTiddler(store.getTiddler(title), title, null, false);
		tiddler = document.getElementById(story.idPrefix + title);
	} else {
		var target = resolveTarget(evt);
		tiddler = story.findContainingTiddler(target);
	}

	if (tiddler == null)
		return false;
	
	var children1 = tiddler.getElementsByTagName("h1");
	var children2 = tiddler.getElementsByTagName("h2");
	var children3 = tiddler.getElementsByTagName("h3");
	var children4 = tiddler.getElementsByTagName("h4");
	var children5 = tiddler.getElementsByTagName("h5");

	var children = new Array();
	children = children.concat(children1, children2, children3, children4, children5);

	for (var i = 0; i < children.length; i++) {
		for (var j = 0; j < children[i].length; j++) {
			var heading = children[i][j].innerHTML;

			// Remove all HTML tags
			while (heading.indexOf("<") >= 0) {
				heading = heading.substring(0, heading.indexOf("<")) + heading.substring(heading.indexOf(">") + 1);
			}

			// Cut off the code added in showtoc for TOP
			heading = heading.substr(0, heading.length-6);

			if (heading == anchorName) {
				var y = findPosY(children[i][j]);
				window.scrollTo(0,y);
				return false;
			}
		}
	}
	return false;
};



Summary Of Bugs Which Could Cause Deadlock [ID 554616.1]

http://hemantoracledba.blogspot.com/2010/09/deadlocks.html

http://hoopercharles.wordpress.com/2010/01/07/deadlock-on-oracle-11g-but-not-on-10g/#comment-1793
http://markjbobak.wordpress.com/2008/06/09/11g-is-more-deadlock-sensitive-than-10g/
http://getfirebug.com/
http://jsonlint.com/

debugging book http://www.amazon.com/Debugging-David-J-Agans-ebook/dp/B002H5GSZ2/ref=tmm_kin_swatch_0
http://programmers.stackexchange.com/questions/93302/spending-too-much-time-debugging
Debugging with RStudio
https://support.rstudio.com/hc/en-us/articles/205612627-Debugging-with-RStudio
[[RSS & Search]] [[TagCloud]]
Document 1484775.1 Database Control To Be Desupported in DB Releases after 11.2
Document 1392280.1 Desupport of Oracle Cluster File System (OCFS) on Windows with Oracle DB 12
Document 1175293.1 Obsolescence Notice: Oracle COM Automation
Document 1175303.1 Obsolescence Notice: Oracle Objects for OLE
Document 1175297.1 Obsolescence Notice: Oracle Counters for Windows Performance Monitor
Document 1418321.1 CSSCAN and CSALTER To Be Desupported After DB 11.2
Document 1169017.1 Deprecating the cursor_sharing = 'SIMILAR' setting
Document 1469466.1: Deprecation of Oracle Net Connection Pooling feature in Oracle Database 11g Release 2
1) Mount the WD 3TB on linux server with virtual box installed

2) Install extension pack

http://www.oracle.com/technetwork/server-storage/virtualbox/downloads/index.html#extpack

-rwxr-xr-x 1 root   root    9566803 Oct 17 11:43 Oracle_VM_VirtualBox_Extension_Pack-4.1.4-74291.vbox-extpack
[root@desktopserver installers]# VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.1.4-74291.vbox-extpack
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Successfully installed "Oracle VM VirtualBox Extension Pack".

3) then mount on windows 7

http://blogs.oracle.com/wim/entry/oracle_vm_virtualbox_40_extens
http://www.cyberciti.biz/tips/fdisk-unable-to-create-partition-greater-2tb.html
http://plone.lucidsolutions.co.nz/linux/io/adding-a-xfs-filesystem-to-centos-5
http://blog.cloutier-vilhuber.net/?p=246
http://blogs.oracle.com/wim/entry/playing_with_btrfs
https://www.udemy.com/learn-devops-continuously-deliver-better-software/learn/v4/content
https://www.udemy.com/learn-devops-scaling-apps-on-premise-and-in-the-cloud/learn/v4/content

-- MICROSOFT

ODP.NET example code using password management with C#
  	Doc ID: 	Note:226759.1
http://www.dialogs.com/en/GetDialogs.html
http://www.dialogs.com/en/Downloads.html
http://www.dialogs.com/en/Manual.html
https://www.dialogs.com/en/cuf_req_thankyou.html
http://kdiff3.sourceforge.net/ <-- just like on linux
http://stackoverflow.com/questions/12625/best-diff-tool
http://intermediatesql.com/oracle/what-is-the-difference-between-sql-profile-and-spm-baseline/
http://yong321.freeshell.org/oranotes/DirectIO.txt  <-- ''good stuff'' - linux, solaris, tru64
{{{
$ uname -a
SunOS countfleet 5.6 Generic_105181-31 sun4u sparc SUNW,Ultra-2
$ mount | grep ^/f[12] #/f2 has DIO turned on
/f1 on /dev/dsk/c0t1d0s0 setuid/read/write/largefiles on Wed Jan 15 16:17:29 2003
/f2 on /dev/dsk/c0t1d0s1 forcedirectio/setuid/read/write/largefiles on Wed Jan 15 16:17:29 2003
$ grep maxphys /etc/system
set maxphys = 1048576

Database 9.0.1.3

create tablespace test datafile '/f1/oradata/tiny/test.dbf' size 400m extent management local 
uniform size 32k;

Three times it took 35,36,36 seconds, respectively. The same command except for f1 changed to f2 
took 25,27,26 seconds, respectively, about 9 seconds faster. /f1 is regular UFS and /f2 is DIO UFS.

When the tablespace is being created on /f1, truss is run against the shadow process and the second 
run shows:

$ truss -c -p 9704
^Csyscall      seconds   calls  errors
read             .00       1
write            .00       3
open             .00       2
close            .00      10
time             .00       2
lseek            .00       2
times            .03     282
semsys           .00      31
ioctl            .00       3      3
fdsync           .00       1
fcntl            .01      14
poll             .01     146
sigprocmask      .00      56
context          .00      14
fstatvfs         .00       3
writev           .00       2
getrlimit        .00       3
setitimer        .00      28
lwp_create       .00       2
lwp_self         .00       1
lwp_cond_wai     .03     427
lwp_cond_sig     .15     427
kaio            5.49     469    430 <-- More kernelized IO time
stat64           .00       3      1
fstat64          .00       3
pread64          .00      32
pwrite64         .35     432        <-- Each pwrite() call takes 350/432 = 0.8 ms
open64           .00       6
                ----     ---    ---
sys totals:     6.07    2405    434
usr time:       1.71
elapsed:       36.74

When the tablespace is created on /f2,

$ truss -c -p 9704
^Csyscall      seconds   calls  errors
read             .00       1
write            .00       3
open             .00       2
close            .00      10
time             .00       2
lseek            .00       2
times            .02     282
semsys           .00      31
ioctl            .00       3      3
fdsync           .00       1
fcntl            .00      14
poll             .01     146
sigprocmask      .00      56
context          .00      14
fstatvfs         .00       3
writev           .00       2
getrlimit        .00       3
setitimer        .00      28
lwp_cond_wai     .00     430
lwp_cond_sig     .03     430
kaio             .50     462    430 <-- Much less kernelized IO time
stat64           .00       3      1
fstat64          .00       3
pread64          .01      32
pwrite64         .00     432        <-- pwrite calls take practically no time.
open64           .00       6
                ----     ---    ---
sys totals:      .57    2401    434
usr time:       1.94
elapsed:       27.72

During the first run, the result on /f1 is even worse, but for a good benchmark I usually ignore the first run.
}}}


http://www.pythian.com/news/22727/how-to-confirm-direct-io-is-getting-used-on-solaris/
''on Linux''
{{{
On Linux it is easy: just read /proc/slabinfo :

cat /proc/slabinfo | grep kio

The kioctx and kiocb slab caches hold the async I/O data structures defined in the aio.h header file. Non-zero object counts for these caches mean async I/O is being used.
}}}

''on Solaris''
{{{
truss -f -t open,ioctl -u ':directio' sqlplus "/ as sysdba"

27819/1: open("/ora02/oradata/MYDB/undotbs101.dbf", O_RDWR|O_DSYNC) = 13
27819/1@1: -> libc:directio(0x100, 0x1, 0x0, 0x0, 0xfefefefeffffffff, 0xfefefefeff726574)
27819/1: ioctl(256, _ION('f', 76, 0), 0x00000001) = 0
27819/1@1: <- libc:directio() = 0
27819/1: open("/ora02/oradata/MYDB/system01.dbf", O_RDWR|O_DSYNC) = 13
27819/1@1: -> libc:directio(0x101, 0x1, 0x0, 0x0, 0xfefefefeffffffff, 0xfefefefeff726574)
27819/1: ioctl(257, _ION('f', 76, 0), 0x00000001) = 0
27819/1@1: <- libc:directio() = 0

Table created.

SQL> drop table test;

Table dropped.

See the line "ioctl(256, _ION('f', 76, 0), 0x00000001)" above.

The third parameter to the ioctl() call decides whether direct I/O is used: 0 means directio off, 1 means directio on. Here it is 1, i.e. the undo and system datafiles are opened with directio.
}}}
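Complementing the OS-level slabinfo and truss checks above, you can also verify from inside the database which I/O options Oracle is requesting (a quick sketch; values and availability vary by platform and version):
{{{
-- filesystemio_options: NONE | ASYNCH | DIRECTIO | SETALL
-- disk_asynch_io controls async I/O to raw/ASM devices
SQL> select name, value
  2  from v$parameter
  3  where name in ('filesystemio_options', 'disk_asynch_io');
}}}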



http://blogs.oracle.com/apatoki/entry/ensuring_that_directio_is_active
http://www.solarisinternals.com/si/tools/directiostat/index.php    <-- ''directiostat tool''




VxFS DirectIO
http://mailman.eng.auburn.edu/pipermail/veritas-vx/2006-February/025477.html
When direct I/O attacks! - A sample of VxFS mount options
{{{
$ mount | grep u02
/u02 on /dev/vx/dsk/oradg/oradgvol01 read/write/setuid/mincache=direct/convosync=direct/delaylog/largefiles/ioerror=mwdisable/dev=3bd4ff0 on Mon Dec  5 22:21:31 2005
}}}
http://blogs.sybase.com/dwein/?p=326
http://www.freelists.org/post/oracle-l/direct-reads-and-writes-on-Solaris,4
http://orafaq.com/node/27
Setting mincache=direct and convosync=direct for VxFS on Solaris 10 - http://www.symantec.com/connect/forums/setting-mincachedirect-and-convosyncdirect-vxfs-solaris-10
What are the differences between the direct, dsync, and unbuffered settings for the Veritas File System mount options mincache and convosync, and how do those options affect I/O? - http://www.symantec.com/business/support/index?page=content&id=TECH49211
Pros and Cons of Using Direct I/O for Databases [ID 1005087.1]
Oracle Import Takes Longer When Using Buffered VxFS Then Using Unbuffered VxFS [ID 1018755.1]
Performance impact of file system when mounted as Buffered and Unbuffered option [ID 151719.1]


http://antognini.ch/2010/09/parallel-full-table-scans-do-not-always-perform-direct-reads/
http://oracle-randolf.blogspot.com/2011/10/auto-dop-and-direct-path-inserts.html
http://blog.tanelpoder.com/2012/09/03/optimizer-statistics-driven-direct-path-read-decision-for-full-table-scans-_direct_read_decision_statistics_driven/
http://www.pythian.com/news/27867/secrets-of-oracles-automatic-degree-of-parallelism/
http://dioncho.wordpress.com/2009/07/21/disabling-direct-path-read-for-the-serial-full-table-scan-11g/
How Parallel Execution Works http://docs.oracle.com/cd/E11882_01/server.112/e25523/parallel002.htm
http://uhesse.com/2009/11/24/automatic-dop-in-11gr2/

http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-parallel-execution-fundamentals-133639.pdf


also see [[_small_table_threshold]]



! 2020 nigel 
https://github.com/oracle/oracle-db-examples/tree/master/optimizer/direct_path
<<<
SQL scripts to compare direct path load in Oracle Database 11g Release 2 with Oracle Database 12c (12.1.0.2 and above). They are primarily intended to demonstrate the new Hybrid TSM/HWMB load strategy in 12c - comparing this to the TSM strategy available in 11g. See the 11g and 12c "tsm_v_tsmhwmb.sql" scripts and their associated spool file "tsm_v_tsmhwmb.lst" to see the difference in behavior between these two database versions. In particular, compare the reduced number of table extents created in the 12c example than 11g by comparing the "tsm_v_tsmhwmb.lst" files.

The 12c directory contains a comprehensive set of examples demonstrating how the SQL execution plan is decorated with the chosen load strategy.

The 11g directory contains a couple of examples for comparative purposes.
<<<
http://www.windowsnetworking.com/articles_tutorials/authenticating-linux-active-directory.html
''Centrify'' http://www.cerberis.com/images/produits/livreblanc/Active%20Directory%20Solutions%20for%20Red%20Hat%20Enterprise%20Linux.pdf, http://www.centrify.com/express/comparing-free-active-directory-integration-tools.asp
http://en.wikipedia.org/wiki/Active_Directory
https://wiki.archlinux.org/index.php/Active_Directory_Integration
http://en.gentoo-wiki.com/wiki/Active_Directory_with_Samba_and_Winbind
http://en.gentoo-wiki.com/wiki/Active_Directory_Authentication_using_LDAP
http://serverfault.com/questions/23632/how-to-use-active-directory-to-authenticate-linux-users
http://serverfault.com/questions/12454/linux-clients-on-a-windows-domains
http://serverfault.com/questions/15626/how-practical-is-to-authenticate-a-linux-server-against-ad
http://wiki.samba.org/index.php/Samba_&_Active_Directory
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch31_:_Centralized_Logins_Using_LDAP_and_RADIUS
http://helpdeskgeek.com/how-to/windows-2003-active-directory-setupdcpromo/


How to Disable Automatic Statistics Collection in 11g [ID 1056968.1]
http://www.oracle-base.com/articles/11g/AutomatedDatabaseMaintenanceTaskManagement_11gR1.php
http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_job.htm#i1000521

''to check'' 
{{{
SQL> col status format a20
SQL> r
  1* select client_name,status from Dba_Autotask_Client

CLIENT_NAME                                                      STATUS
---------------------------------------------------------------- --------------------
auto optimizer stats collection                                  ENABLED
auto space advisor                                               ENABLED
sql tuning advisor                                               ENABLED

}}}

''-- disable a specific job or auto job''
{{{
EXEC DBMS_AUTO_TASK_ADMIN.DISABLE('auto optimizer stats collection', NULL, NULL);
exec dbms_scheduler.disable('gather_stats_job'); 
exec dbms_scheduler.disable( 'SYS.BSLN_MAINTAIN_STATS_JOB' );
EXEC DBMS_JOB.BROKEN(62,TRUE);
}}}

''-- disable all maintenance jobs altogether''
{{{
EXEC DBMS_AUTO_TASK_ADMIN.disable;
EXEC DBMS_AUTO_TASK_ADMIN.enable;
}}}

''-- to address the maintenance window on your newly created resource_plan''
* You can use a single-level plan: add the ORA$AUTOTASK_SUB_PLAN and ORA$DIAGNOSTICS consumer groups to it, then change the maintenance windows from DEFAULT_MAINTENANCE_PLAN to your newly created resource plan. That way, each time a window opens, all the maintenance jobs conform to the new plan and its allocations.
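A hedged sketch of that window change (MYDB_PLAN is a placeholder for your own plan; DBA_AUTOTASK_WINDOW_CLIENTS lists the 11g maintenance windows):
{{{
-- point each maintenance window at your own plan
-- instead of DEFAULT_MAINTENANCE_PLAN
BEGIN
  FOR w IN (SELECT window_name
            FROM dba_autotask_window_clients) LOOP
    DBMS_SCHEDULER.SET_ATTRIBUTE(
      name      => w.window_name,
      attribute => 'RESOURCE_PLAN',
      value     => 'MYDB_PLAN');   -- your newly created plan
  END LOOP;
END;
/
}}}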

Enabling Oracle Database Resource Manager and Switching Plans http://docs.oracle.com/cd/B28359_01/server.111/b28310/dbrm005.htm#ADMIN11890
{{{
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'FORCE:mydb_plan';
}}}
Windows http://docs.oracle.com/cd/B28359_01/server.111/b28310/schedover007.htm#i1106396
Configuring Resource Allocations for Automated Maintenance Tasks http://docs.oracle.com/cd/B28359_01/server.111/b28310/tasks005.htm#BABHEFEH
Automated Database Maintenance Task Management http://www.oracle-base.com/articles/11g/automated-database-maintenance-task-management-11gr1.php

{{{
There's a parameter in 11.2 with which you can force the PX executions to be local to a node:
PARALLEL_FORCE_LOCAL
 
If you are on 10gR2, you can set a hint:
 
Select /*+ PARALLEL(TAB, DEGREE, INSTANCES) */
 
Or set it on the table level
 
ALTER TABLE NODETEST1 PARALLEL(DEGREE 4 INSTANCES 2)

ALTER SESSION DISABLE PARALLEL DML|DDL|QUERY

SELECT /*+ NOPARALLEL(hr_emp) */ last_name FROM hr.employees hr_emp;
}}}
http://rogunix.com/docs/Reversing&Exploiting/The.IDA.Pro.Book.2nd.Edition.Jun.2011.pdf
https://www.hex-rays.com/index.shtml

The compiler, assembler, linker, loader and process address space tutorial - hacking the process of building programs using C language: notes and illustrations
http://www.tenouk.com/ModuleW.html
<<showtoc>>


! prereq 
* the user to be locked out should not have the ADMINISTER DATABASE TRIGGER privilege 
* the user to be locked out should not own the trigger 
* the lockout does not apply to SYS and SYSTEM



! the trigger to prevent users from logging in 
{{{
-- EXECUTE THIS SCRIPT AS ALLOC_APP_PERF

create or replace trigger alloc_app_perf.revoke_alloc_app_user
after logon on database
begin
  -- block the app schema ALLOC_APP_USER...
  if sys_context('USERENV','SESSION_USER') in ('ALLOC_APP_USER')
  -- ...unless it connects from this server (change the host name accordingly)
  and upper(sys_context('USERENV','HOST')) not in ('KARLDEVFEDORA')
  then
    raise_application_error(-20001,'<<< NIGHTLY BATCH RUNNING. PLEASE COME BACK LATER. >>>');
  end if;
end;
/


}}}

! the kill procedure executed in UC4 
{{{
-- EXECUTE THIS SCRIPT AS SYSDBA

grant alter system to system;
grant select on sys.gv_$session to system;

create or replace procedure system.uc4_kill_all_alloc_app_user
as 
    BEGIN
      FOR c IN (
          SELECT sid, serial#, inst_id
          FROM sys.gv_$session
          WHERE USERNAME = 'ALLOC_APP_USER'
          AND upper(MACHINE) NOT IN (select upper(sys_context ('userenv','HOST')) from dual)
          AND STATUS <> 'KILLED'
      )
      LOOP
          EXECUTE IMMEDIATE 'alter system kill session ''' || c.sid || ', ' || c.serial# || ', @' || c.inst_id || ''' immediate';
      END LOOP;
    END;
/

grant execute on system.uc4_kill_all_alloc_app_user to alloc_app_perf;


}}}


! the user that will execute the kill should have the following privs 
{{{

-- quotas
alter user alloc_app_perf quota unlimited on bas_data;

-- roles, privs
grant alloc_app_r to alloc_app_perf;
grant select_catalog_role to alloc_app_perf;
grant resource to alloc_app_perf;
grant select any dictionary to alloc_app_perf;
grant advisor to alloc_app_perf;
grant create job to alloc_app_perf;
grant oem_monitor to alloc_app_perf;
grant administer any sql tuning set to alloc_app_perf;   
grant administer sql management object to alloc_app_perf; 
grant create any sql_profile to alloc_app_perf;
grant drop any sql_profile to alloc_app_perf;
grant alter any sql_profile to alloc_app_perf;   
grant create any trigger to alloc_app_perf;
grant alter any trigger to alloc_app_perf;
grant administer database trigger to alloc_app_perf with admin option;

-- execute  
grant execute on dbms_monitor to alloc_app_perf;
grant execute on dbms_application_info to alloc_app_perf;
grant execute on dbms_workload_repository to alloc_app_perf;
grant execute on dbms_xplan to alloc_app_perf;     
grant execute on dbms_sqltune to alloc_app_perf;
grant execute on sys.dbms_lock to alloc_app_perf;


}}}




''references''
https://serverfault.com/questions/58856/disconnecting-an-oracle-session-from-a-logon-trigger
How do i prevent end users from connecting to the database other than my application https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:561622956788
Raise_application_error procedure in AFTER LOGON trigger https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3236035522926
http://oracle.ittoolbox.com/groups/technical-functional/oracle-db-l/after-logon-trigger-not-killing-the-user-session-5108880
https://www.freelists.org/post/oracle-l/Disconnecting-session-from-an-on-logon-trigger,3
https://stackoverflow.com/questions/55342/how-can-i-kill-all-sessions-connecting-to-my-oracle-database





http://kerryosborne.oracle-guy.com/2012/03/displaying-sql-baseline-plans/
https://12factor.net/
http://www.ec2instances.info/


Docker repo
https://registry.hub.docker.com/search?q=oracle&searchfield=
http://sve.to/2010/10/11/cannot-drop-the-first-disk-group-in-asm-11-2/
11gR2 (11.2.0.1) ORA-15027: active use of diskgroup precludes its dismount (With no database clients connected) [ID 1082876.1]

{{{
Dismounting DiskGroup DATA failed with the following message:
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "DATA" precludes its dismount
}}}
http://drupal.org/download
http://drupal.org/project/themes?solrsort=sis_project_release_usage%20desc
http://drupal.org/start


http://drupal.org/search/apachesolr_multisitesearch/blog%20aggregator <-- AGGREGATOR
http://groups.drupal.org/node/21325 <-- VIEWS
http://alexanderanokhin.wordpress.com/2012/03/19/dtrace-lio-new-features/
http://www.jlcomp.demon.co.uk/faq/duplicates.html
http://oracletoday.blogspot.com/2005/08/magic-exceptions-into.html
http://www.java2s.com/Code/Oracle/PL-SQL/handleexceptionofduplicatevalueonindex.htm


http://www.unix.com/programming/176214-eliminate-duplicate-rows-sqlloader.html
http://database.itags.org/oracle/243273/
http://boardreader.com/thread/How_to_avoid_Duplicate_Insertion_without_l8ddXffgc.html
http://homepages.inf.ed.ac.uk/wenfei/tdd/reading/cleaning.pdf
http://docs.oracle.com/cd/B31104_02/books/EIMAdm/EIMAdm_UsageScen16.html
http://momendba.blogspot.com/2008/06/hi-there-was-interesting-post-on-otn.html
http://www.freelists.org/post/oracle-l/Is-it-a-good-idea-to-have-primary-key-on-DW-table
http://www.etl-tools.com/loading-data-into-oracle.html
http://www.justskins.com/forums/eliminate-duplicates-using-sqlldr-148572.html
http://www.akadia.com/services/ora_exchange_partition.html
http://www.dbforums.com/oracle/1008995-avoid-duplicate-rows-error-sqlldr.html
http://database.itags.org/oracle/19023/
http://www.club-oracle.com/forums/how-to-avoid-duplicate-rows-from-being-inserted-in-table-t2101/
http://www.dbforums.com/oracle/979143-performance-issue-using-sql-loader.html



! row_number PARTITION BY 
https://www.sqlservercentral.com/articles/eliminating-duplicate-rows-using-the-partition-by-clause
{{{
select a.Emp_Name, a.Company, a.Join_Date, a.Resigned_Date, a.RowNumber
from
(select Emp_Name
 ,Company
 ,Join_Date
 ,Resigned_Date
 ,ROW_NUMBER() over (partition by Emp_Name, Company, Join_Date
 ,Resigned_Date
 order by Emp_Name, Company, Join_Date
 ,Resigned_Date) RowNumber 
from Emp_Details) a
where a.RowNumber > 1
}}}
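The same PARTITION BY pattern can be turned into a delete. An Oracle-flavored sketch (Emp_Details and its columns as above; ordering by rowid keeps one arbitrary copy of each duplicate group):
{{{
delete from Emp_Details
where rowid in (
  select rid from (
    select rowid rid,
           row_number() over (
             partition by Emp_Name, Company, Join_Date, Resigned_Date
             order by rowid) rn
    from Emp_Details)
  where rn > 1);
}}}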










http://forums.untangle.com/openvpn/14806-dyndns-openvpn.html
http://dyn.com/dns/dyndns-free/
-- DynamicSampling
http://blogs.oracle.com/optimizer/2010/08/dynamic_sampling_and_its_impact_on_the_optimizer.html


-- CursorSharing
http://db-optimizer.blogspot.com/2010/06/cursorsharing-picture-is-worth-1000.html
http://blogs.oracle.com/mt/mt-search.cgi?blog_id=3361&tag=cursor%20sharing&limit=20
http://blogs.oracle.com/optimizer/2009/05/whydo_i_have_hundreds_of_child_cursors_when_cursor_sharing_is_set_to_similar_in_10g.html
Formated V$SQL_SHARED_CURSOR Report by SQLID or Hash Value (Doc ID 438755.1)
Unsafe Literals or Peeked Bind Variables (Doc ID 377847.1)
Adaptive Cursor Sharing in 11G (Doc ID 836256.1)


-- HighVersionCount
High SQL version count and low executions from ADDM Report!!
http://forums.oracle.com/forums/thread.jspa?threadID=548770
Library Cache : Causes of Multiple Version Count for an SQL http://viveklsharma.wordpress.com/2009/09/12/ql/
http://viveklsharma.wordpress.com/2009/09/24/library-cache-latch-contention-due-to-multiple-version-count-day-2-of-aioug/
High Version Count with CURSOR_SHARING = SIMILAR or FORCE (Doc ID 261020.1)


-- PLAN_HASH_VALUE
http://oracle-randolf.blogspot.com/2009/07/planhashvalue-how-equal-and-stable-are.html
Thread: SQL with multiple plan hash value http://forums.oracle.com/forums/thread.jspa?threadID=897302
SQL PLAN_HASH_VALUE Changes for the Same SQL Statement http://hoopercharles.wordpress.com/2009/12/01/sql-plan_hash_value-changes-for-the-same-sql-statement/

-- LibraryCacheLatch
Higher Library Cache Latch contention in 10g than 9i (Doc ID 463860.1)
Understanding and Tuning the Shared Pool and Tuning Library Cache Latch Contention (Doc ID 62143.1)
Solutions for possible AWR Library Cache Latch Contention Issues in Oracle 10g (Doc ID 296765.1)


-- COE
TESTING SQL PERFORMANCE IMPACT OF AN ORACLE 9i TO ORACLE DATABASE 10g RELEASE 2 UPGRADE WITH SQL PERFORMANCE ANALYZER (Doc ID 562899.1)
Case Study: The Mysterious Performance Drop (Doc ID 369427.1)
http://office.microsoft.com/en-us/excel-help/using-named-ranges-to-create-dynamic-charts-in-excel-HA001109801.aspx
http://www.exceluser.com/explore/dynname1.htm
http://dmoffat.wordpress.com/2011/05/19/dynamic-range-names-and-charts-in-excel-2010/
http://www.eggheadcafe.com/software/aspnet/30309917/newbie-needs-translation-of-andy-popes-code.aspx
http://www.ozgrid.com/forum/showthread.php?t=56215&page=1
http://peltiertech.com/Excel/Charts/Dynamics.html
http://peltiertech.com/Excel/Charts/DynamicChartLinks.html
http://www.tushar-mehta.com/excel/newsgroups/dynamic_charts/index.html#BasicRange
http://www.tushar-mehta.com/excel/newsgroups/dynamic_charts/images/snapshot014.jpg
http://www.mrexcel.com/forum/showthread.php?p=1299121
http://www.eggheadcafe.com/software/aspnet/35201025/help-to-pick-constant-color-to-a-value-in-a-pie-chart.aspx
http://peltiertech.com/WordPress/vba-conditional-formatting-of-charts-by-category-label/
http://peltiertech.com/WordPress/using-colors-in-excel-charts/
http://peltiertech.com/WordPress/vba-conditional-formatting-of-charts-by-value/
http://peltiertech.com/WordPress/vba-conditional-formatting-of-charts-by-series-name/
http://peltiertech.com/WordPress/vba-conditional-formatting-of-charts-by-category-label/




Installing Oracle Apps 11i
http://avdeo.com/2010/11/01/installing-oracle-apps-11i/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+advait+(IN+ORACLE+MILIEU+...)

Virtualizing Oracle E-Business Suite through Oracle VM
http://kyuoracleblog.wordpress.com/2012/08/13/virtualizing-oracle-e-business-suite-through-oracle-vm/
Oracle E-Business Suite Release 11i with 9i RAC: Installation and Configuration using AutoConfig
  	Doc ID: 	Note:279956.1
  	


ALERT: Oracle 10g Release 2 (10.2) Support Status and Alerts 
  Doc ID:  Note:316900.1 

Oracle Applications Release 11i with Oracle 10g Release 2 (10.2.0) 
  Doc ID:  Note:362203.1 




Configuring Oracle Applications Release 11i with Oracle10g Release 2 Real Application Clusters and Automatic Storage Management 
  Doc ID:  Note:362135.1 

Oracle E-Business Suite Release 11i Technology Stack Documentation Roadmap 
  Doc ID:  Note:207159.1 

Patching Best Practices and Reducing Downtime 
  Doc ID:  Note:225165.1 

MAA Roadmap for the E-Business Suite 
  Doc ID:  Note:403347.1 

Oracle E-Business Suite Recommended Performance Patches 
  Doc ID:  Note:244040.1 


http://onlineappsdba.com

Upgrading Oracle Application 11i to E-Business Suite R12
http://advait.wordpress.com/2008/03/04/upgrading-oracle-application-11i-to-e-business-suite-r12/

Chapter 5. Patching - Part 1  by Elke Phelps and Paul Jackson
From Oracle Applications DBA Field Guide, Berkeley, Apress, March 2006.
http://www.dbazine.com/oracle/or-articles/phelps1

Oracle E-Business Suite Patching - Best Practices 
http://www.appshosting.com/pub_doc/patching.html

Types Of application Patch
http://oracleebusinesssuite.wordpress.com/2007/05/28/types-of-application-patch/

http://patchsets12.blogspot.com/

E-Business Suite Applications 11i on RAC/ASM
http://www.ardentperf.com/2007/04/18/e-business-suite-applications-11i-on-racasm/

RAC Listener Best Practices
http://www.ardentperf.com/2007/02/28/rac-listener-best-practices/#comment-1412

http://www.integrigy.com/security-resources/whitepapers/Integrigy_Oracle_Listener_TNS_Security.pdf



--------------------------------

Upgrade Oracle Database to 10.2.0.2 : SOA Suite Install Part II
http://onlineappsdba.com/index.php/2007/06/16/upgrade-oracle-database-to-10202-soa-suite-install-part-ii/

Good Metalink Notes or Documentation on Apps 11i/R12/12i Patching
http://onlineappsdba.com/index.php/2008/05/28/good-metalink-notes-or-documentation-on-apps-11ir1212i-patching/

http://teachmeoracle.com/healthcheck02.html

Practical Interview Question for Oracle Apps 11i DBA
http://onlineappsdba.com/index.php/2007/12/08/practical-interview-question-for-oracle-apps-11i-dba/

Oracle Apps 11i with Database 10g R2 10.2.0.2
http://onlineappsdba.com/index.php/2006/08/28/oracle-apps-11i-with-database-10g-r2-10202/







-- INSTALL

Oracle E-Business Suite 11i and Database FAQ
  	Doc ID: 	285267.1

Unbreakable Linux Environment check before R12 install
  	Doc ID: 	421409.1

RCONFIG : Frequently Asked Questions
  	Doc ID: 	387046.1

Using Oracle E-Business Suite Release 12 with a Database Tier Only Platform on Oracle 10g Release 2
  	Doc ID: 	456197.1 	Type: 	WHITE PAPER




-- ORACLE VM / VIRTUALIZATION

Using Oracle VM with Oracle E-Business Suite Release 11i or Release 12
(Doc ID 465915.1)

Certified Software on Oracle VM (Doc ID 464754.1)

Hardware Vendor Virtualization Technologies on non x86/x86-64 Architectures and Oracle E-Business Suite (Doc ID 794016.1)




-- CONCURRENT MANAGER

A Script We Use to Monitor Concurrent Jobs and Sessions that Hang (Doc ID 444611.1)




-- TUNING

http://blogs.oracle.com/stevenChan/2007/05/performance_tuning_the_apps_da.html

Troubleshooting Oracle Applications Performance Issues
 	Doc ID:	Note:169935.1

coe_stats.sql - Automates CBO Stats Gathering using FND_STATS and Table sizes
 	Doc ID:	Note:156968.1

bde_last_analyzed.sql - Verifies CBO Statistics
 	Doc ID:	Note:163208.1

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046
 	Doc ID:	Note:224270.1

Diagnostic Scripts: Data Collection Performance Management
 	Doc ID:	Note:183401.1

Tuning performance on eBusiness suite
 	Doc ID:	Note:744143.1

Does Gather Schema Statistics collect statistics for indexes?
 	Doc ID:	Note:170647.1

Which Method To Gather Statistics When On DB 10g
 	Doc ID:	Note:427878.1

Script to Automate Gathering Stats on Applications 11.5 Using FND_STATS
 	Doc ID:	Note:190177.1

Gather Schema Statistics program hangs or fails with ORA-54 errors
 	Doc ID:	Note:331017.1

Purging Strategy for eBusiness Suite 11i
 	Doc ID:	Note:732713.1

Gather Schema Statistics with LASTRUN Option does not Clean FND_STATS_HIST Table
 	Doc ID:	Note:745442.1

How to get a Trace for And Begin to Analyze a Performance Issue
 	Doc ID:	Note:117129.1

How to Troubleshoot Performance Issues
 	Doc ID:	Note:232419.1

How Often Should Gather Schema Statistics Program be Run?
 	Doc ID:	Note:168136.1

Using the FND_STATS Package for Gathering Statistics and 100% of Sample Data is Returned
 	Doc ID:	Note:197386.1

A Holistic Approach to Performance Tuning Oracle Applications Systems
 	Doc ID:	Note:69565.1

APS Performance TIPS
 	Doc ID:	Note:209996.1

GATHERING STATS FOR APPS 11i IN PARALLEL TAKES A LONG TIME
 	Doc ID:	Note:603144.1


How To Gather Statistics On Oracle Applications 11.5.10(and above) - Concurrent Process,Temp Tables, Manually
  	Doc ID: 	419728.1

How To Gather Statistics For Oracle Applications Prior to 11.5.10
  	Doc ID: 	122371.1

How to collect histograms in Apps Ebusiness Suite using FND_STATS
  	Doc ID: 	429002.1

11i: Setup of the Oracle 8i Cost-Based Optimizer (CBO)
  	Doc ID: 	101379.1

Gathering Statistics for the Cost Based Optimizer (Pre 10g)
  	Doc ID: 	114671.1
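The common thread in the FND_STATS notes above: on E-Business Suite, gather statistics with FND_STATS (or the Gather Schema Statistics concurrent program), not plain DBMS_STATS. A minimal sketch of building the call to feed to sqlplus; the schema name, estimate percent and degree here are illustrative defaults, not recommendations:

```shell
#!/bin/sh
# Sketch only: builds the FND_STATS call that the notes above describe.
# Schema name, estimate percent and degree are illustrative defaults.
build_fnd_stats_cmd() {
  schema="$1"
  pct="${2:-10}"
  degree="${3:-8}"
  printf "exec FND_STATS.GATHER_SCHEMA_STATS('%s', %s, %s);\n" "$schema" "$pct" "$degree"
}

# Typically fed to sqlplus as the APPS user, e.g.
#   sqlplus apps/<password> <<EOF
#   $(build_fnd_stats_cmd ALL)
#   EOF
build_fnd_stats_cmd ALL
```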



-- TRACE APPS

Note 296559.1 Tracing FAQ: Common Tracing Techniques within the Oracle Applications 11i

Note 100964.1 - Troubleshooting Performance Issues Relating to the Database and Core/MFG MRP
Note 117129.1 - How to get a Trace for And Begin to Analyze a Performance Issue
Note 130182.1 - HOW TO TRACE FROM FORM, REPORT, PROGRAM AND OTHERS IN ORACLE APPLICATIONS
Note 142898.1 - How To Use Tkprof and Trace With Applications
Note 161474.1 - Oracle Applications Remote Diagnostics Agent (APPS_RDA)
Note 179848.1 - bde_system_event_10046.sql - SQL Trace any transaction with Event 10046 8.1-9.2
Note 224270.1 - Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046
Note 245974.1 - FAQ - How to Use Debug Tools and Scripts for the APS Suite
Note 279132.1 - set_FND_INIT_SQL.sql - Tracing sessions, Forms and Concurrent Request, for SINGLE Applications User (Binds+Waits)
Note 301372.1 - How to Generate a SQLTrace Including Binds and Waits for a Concurrent Program for 11.5.10 and R12
Note 76338.1 - Tracing Tips for Oracle Applications

A practical guide in Troubleshooting Oracle ERP Applications Performance
    Issues can be found on Metalink under Note 169935.1

Trace 11i Bind Variables - Profile Option: Initialization SQL Statement - Custom
  	Doc ID: 	170223.1

set_FND_INIT_SQL.sql - Tracing sessions, Forms and Concurrent Request, for SINGLE Applications User (Binds+Waits)
  	Doc ID: 	279132.1
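Notes 170223.1 / 279132.1 above turn on a 10046 trace (binds + waits) for a single applications user via the profile option "Initialization SQL Statement - Custom". A hedged sketch of building that profile value; the FND_CTL.FND_SESS_CTL wrapper is what the notes use, but the exact argument string and the trace-file identifier here are illustrative, so check against the note before pasting:

```shell
#!/bin/sh
# Sketch: builds the profile option value described in Note 170223.1.
# The tracefile identifier passed as $1 is illustrative.
trace_init_sql() {
  printf "BEGIN FND_CTL.FND_SESS_CTL('','','','TRUE','','ALTER SESSION SET TRACEFILE_IDENTIFIER=''%s'' EVENTS=''10046 TRACE NAME CONTEXT FOREVER, LEVEL 12'''); END;\n" "$1"
}
trace_init_sql MYTRACE
```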







-- PLAN STABILITY

Best Practices for automatic statistics collection on Oracle 10g
  	Doc ID: 	377152.1

Restoring table statistics in 10G onwards
  	Doc ID: 	452011.1

Oracle Database Stats History Using dbms_stats.restore_table_stats
  	Doc ID: 	281793.1

Statistics Best Practices: How to Backup and Restore Statistics
  	Doc ID: 	464939.1

Tips for avoiding upgrade related query problems
  	Doc ID: 	167086.1

Recording Explain Plans before an upgrade to 10g or 11g
  	Doc ID: 	466350.1
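The plan-stability notes above revolve around snapshotting statistics before an upgrade and restoring them if plans regress. A sketch of that flow under assumed names (the APPS owner and STATS_BACKUP stat table are illustrative, and the 31-day default history retention is the 10g/11g default):

```shell
#!/bin/sh
# Sketch of the stats backup/restore flow the notes above describe.
# Owner and stat-table names are illustrative placeholders.
stats_backup_sql() {
cat <<'EOF'
-- Before an upgrade: snapshot current statistics into a stat table
exec DBMS_STATS.CREATE_STAT_TABLE('APPS', 'STATS_BACKUP');
exec DBMS_STATS.EXPORT_SCHEMA_STATS('APPS', 'STATS_BACKUP');

-- After a plan regression: restore from the snapshot ...
exec DBMS_STATS.IMPORT_SCHEMA_STATS('APPS', 'STATS_BACKUP');

-- ... or, on 10g and later, from the automatic history
exec DBMS_STATS.RESTORE_SCHEMA_STATS('APPS', SYSDATE - 1);
EOF
}
stats_backup_sql
```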





-- DBMS_STATS

SIZE Clause in METHOD_OPT Parameter of DBMS_STATS Package
  	Doc ID: 	338926.1

Recommendations for Gathering Optimizer Statistics on 10g
  	Doc ID: 	605439.1

Recommendations for Gathering Optimizer Statistics on 11g
  	Doc ID: 	749227.1






-- UPGRADE - MIGRATE

Consolidated Reference List For Migration / Upgrade Service Requests
  	Doc ID: 	762540.1




-- PERFORMANCE SCENARIO

A Holistic Approach to Performance Tuning Oracle Applications Systems
  	Doc ID: 	69565.1

When Conventional Thinking Fails: A Performance Case Study in Order Management Workflow customization
  	Doc ID: 	431619.1

Create Service Request Performance Issue
  	Doc ID: 	303150.1

EBPERF FAQ - Collecting Statistics with Oracle Apps 11i
  	Doc ID: 	368252.1





-- CBO

Managing CBO Stats during an upgrade to 10g or 11g
  	Doc ID: 	465787.1





-- APPLICATION SERVER

Oracle Application Server with Oracle E-Business Suite Release 11i FAQ
  	Doc ID: 	Note:186981.1



-- DEBUG

FAQ - How to Use Debug Tools and Scripts for the APS Suite 
  Doc ID:  245974.1

Debugging Platform Migration Issues in Oracle Applications 11i
  	Doc ID: 	567703.1



-- CLONE

FAQ: Cloning Oracle Applications Release 11i
  	Doc ID: 	216664.1

http://onlineappsdba.com/index.php/2008/02/07/cloning-in-oracle-apps-11i/
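The Rapid Clone flow from Note 230672.1 and the post above boils down to prepare-copy-configure. A sketch of the sequence; <CONTEXT_NAME> and the stage paths are placeholders that vary per install:

```shell
#!/bin/sh
# Sketch of the Rapid Clone sequence (Note 230672.1).
# <CONTEXT_NAME> is a placeholder for the <SID>_<hostname> context.
rapid_clone_steps() {
cat <<'EOF'
# 1. Source system: prepare both tiers
perl $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>/adpreclone.pl dbTier
perl $COMMON_TOP/admin/scripts/<CONTEXT_NAME>/adpreclone.pl appsTier
# 2. Copy the database and application file systems to the target
# 3. Target system: configure each tier
perl adcfgclone.pl dbTier
perl adcfgclone.pl appsTier
EOF
}
rapid_clone_steps
```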




-- PLATFORM MIGRATION 

Platform Migration with Oracle Applications Release 12
  	Doc ID: 	438086.1

Migrating to Linux with Oracle Applications Release 11i
  	Doc ID: 	238276.1

Oracle Applications R12 Migration from Solaris to Linux Platform
http://smartoracle.blogspot.com/2008/12/oracle-applications-r12-migration-from.html

http://forums.oracle.com/forums/thread.jspa?threadID=481742&start=0&tstart=0
Thread: 11i migration from solaris to linux 

http://www.dbspecialists.com/files/presentations/cloning.html






-- INTEROPERABILITY

Interoperability Notes Oracle Applications Release 10.7 with Release 8.1.7
  	Doc ID: 	148901.1

Interoperability Notes Oracle Applications Release 11.0 with Release 8.1.7
  	Doc ID: 	148902.1






-- X86-64 SUPPORT

Frequently Asked Questions: Oracle E-Business Suite Support on x86-64
  	Doc ID: 	343917.1




-- ITANIUM SUPPORT

Frequently Asked Questions: Oracle E-Business Suite Support on Itanium
  	Doc ID: 	311717.1





-- DATABASE VAULT

Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 10.2.0.4
  	Doc ID: 	428503.1 	Type: 	WHITE PAPER





-- EXPORT IMPORT

Export/Import Process for Oracle E-Business Suite Release 12 using 10gR2
  	Doc ID: 	454616.1

9i Export/Import Process for Oracle Applications Release 11i
  	Doc ID: 	230627.1



-- RAC

Oracle E-Business Suite Release 11i with 9i RAC: Installation and Configuration using AutoConfig
  	Doc ID: 	279956.1 	Type: 	WHITE PAPER



-- DATA GUARD

Case Study : Configuring Standby Database(Dataguard) on R12 using RMAN Hot Backup
  	Doc ID: 	753241.1




-- NETWORK 
Oracle E-Business Suite Network Utilities: Best Practices
  	Doc ID: 	Note:556738.1




Installation

Note: 452120.1 - How to locate the log files and troubleshoot RapidWiz for R12
Note: 329985.1 - How to locate the Rapid Wizard Installation log files for Oracle Applications 11.5.8 and higher
Note: 362135.1 - Configuring Oracle Applications Release 11i with Oracle10g Release 2 Real Application Clusters and Automatic Storage Management
Note: 312731.1 - Configuring Oracle Applications Release 11i with 10g RAC and 10g ASM
Note: 216550.1 - Oracle Applications Release 11i with Oracle9i Release 2 (9.2.0)
Note: 279956.1 - Oracle E-Business Suite Release 11i with 9i RAC: Installation and Configuration using AutoConfig
Note: 294932.1 - Recommendations to Install Oracle Applications 11i
Note: 403339.1 - Oracle 10gR2 Database Preparation Guidelines for an E-Business Suite Release 12.0.4 Upgrade
Note: 455398.1 - Using Oracle 11g Release 1 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 11i
Note: 402311.1 - Oracle Applications Installation and Upgrade Notes Release 12 (12.0.4) for Microsoft Windows
Note: 405565.1 - Oracle Applications Release 12 Installation Guidelines

AD Utilities

Note: 178722.1 - How to Generate a Specific Form Through AD utility ADADMIN
Note: 109667.1 - What is AD Administration on APPS 11.0.x ?
Note: 112327.1 - How Does ADADMIN Know Which Forms Files To Regenerate?
Note: 136342.1 - How To Apply a Patch in a Multi-Server Environment
Note: 109666.1 - Release 10.7 to 11.0.3 : What is adpatch ?
Note: 152306.1 - How to Restart Failed AutoInstall Job
Note: 356878.1 - How to relink an Applications Installation of Release 11i and Release 12
Note: 218089.1 - Autoconfig FAQ
Note: 125922.1 - How To Find Oracle Application File Versions

Cloning

Note: 419475.1 - Removing Credentials from a Cloned EBS Production Database
Note: 398619.1 - Clone Oracle Applications 11i using Oracle Application Manager (OAM Clone)
Note: 230672.1 - Cloning Oracle Applications Release 11i with Rapid Clone
Note: 406982.1 - Cloning Oracle Applications Release 12 with Rapid Clone
Note: 364565.1 - Troubleshooting RapidClone issues with Oracle Applications 11i
Note: 603104.1 - Troubleshooting RapidClone issues with Oracle Applications R12
Note: 435550.1 - R12 Login issue on target after cloning
Note: 559518.1 - Cloning Oracle E-Business Suite Release 12 RAC-Enabled Systems with Rapid Clone
Note: 216664.1 - FAQ: Cloning Oracle Applications Release 11i

Patching

Note: 225165.1 - Patching Best Practices and Reducing Downtime
Note: 62418.1 - PATCHING/PATCHSET FREQUENTLY ASKED QUESTIONS
Note: 181665.1 - Release 11i Adpatch Basics
Note: 443761.1 - How to check if a certain Patch was applied to Oracle Applications instance?
Note: 231701.1 - How to Find Patching History (10.7, 11.0, 11i)
Note: 60766.1 - 11.0.x : Patch Installation Frequently Asked Questions
Note: 459156.1 - Oracle Applications Patching FAQ for Release 12
Note: 130608.1 - AdPatch Basics
Note: 60766.1 - Patch Installation FAQ (Part 1)
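The adpatch notes above all end up at an invocation like the one below. A sketch of a typical non-interactive run; the patch number, defaults file path and worker count are illustrative, not canonical:

```shell
#!/bin/sh
# Sketch: a typical non-interactive adpatch invocation per the notes above.
# Patch number, paths and worker count are illustrative.
adpatch_cmd() {
  patch="$1"
  workers="${2:-8}"
  printf "adpatch defaultsfile=%s logfile=u%s.log patchtop=/stage/%s driver=u%s.drv workers=%s interactive=no\n" \
    "\$APPL_TOP/admin/\$TWO_TASK/defaults.txt" "$patch" "$patch" "$patch" "$workers"
}
adpatch_cmd 1234567
```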

Upgrade

Note: 461709.1 - Oracle E-Business Suite Upgrade Guide - Plan
Note: 293166.1 - Previous Versions of e-Business 11i Upgrade Assistant FAQ
Note: 224875.1 - Installation, Patching & Upgrade Frequently Asked Questions (FAQ’s)
Note: 224814.1 - Installation, Patching & Upgrade Current Issues
Note: 225088.1 - Installation, Patching & Upgrade Patches Guide
Note: 225813.1 - Installation, Patching & Upgrade Setup and Usage Guide
Note: 224816.1 - Installation, Patching & Upgrade Troubleshooting Guide
Note: 216550.1 - Oracle Applications Release 11i with Oracle9i Release 2 (9.2.0)
Note: 362203.1 - Oracle Applications Release 11i with Oracle 10g Release 2 (10.2.0)
Note: 423056.1 - Oracle Applications Release 11i with Oracle 10g Release 2 (10.2.0.2)
Note: 726982.1 - Oracle Applications Release 11i with Oracle 10g Release 2 (10.2.0.3)
Note: 452783.1 - Oracle Applications Release 11i with Oracle 11g Release 1 (11.1.0)
Note: 406652.1 - Upgrading Oracle Applications 11i DB to DB 10gR2 with Physical Standby in Place
Note: 316365.1 - Oracle Applications Release 11.5.10.2 Maintenance Pack Installation Instructions
Note: 418161.1 - Best Practices for Upgrading Oracle E-Business Suite

Printer

Note: 297522.1 - How to investigate printing issues and work towards their resolution?
Note: 110406.1 - Check Printing Frequently Asked Questions
Note: 264118.1 - Pasta Pasta Printing Setup Test
Note: 200359.1 - Oracle Application Object Library Printer Setup Test
Note: 234606.1 - Oracle Application Object Library Printer Initialization String Setup Test
Note: 1014599.102 - How to Test Printer Initialization Strings in Unix

Performance

Note: 390137.1 - FAQ for Collections Performance
Note: 216205.1 - Database Initialization Parameters for Oracle Applications Release 11i
Note: 169935.1 - Troubleshooting Oracle Applications Performance Issues
Note: 171647.1 - Tracing Oracle Applications using Event 10046
Note: 153507.1 - Oracle Applications and StatsPack
Note: 356501.1 - How to Setup Pasta Quickly and Effectively
Note: 333504.1 - How To Print Concurrent Requests in PDF Format
Note: 356972.1 - 11i How to troubleshoot issues with printers

Working with Support: Collaborate (OAUG) 2009 Conference Notes
  	Doc ID: 	820449.1
Tom's Handy SQL for the Oracle Applications
  	Doc ID: 	731190.1


Others

Note: 189367.1 - Best Practices for Securing the E-Business Suite
Note: 403537.1 - Best Practices For Securing Oracle E-Business Suite Release 12
Note: 454616.1 - Export/Import Process for Oracle E-Business Suite Release 12 using 10gR2
Note: 394692.1 - Oracle Applications Documentation Resources, Release 12
Note: 370274.1 - New Features in Oracle Application 11i
Note: 130183.1 - How to Get Log Files from Various Programs for Oracle Applications
Note: 285267.1 - Oracle E-Business Suite 11i and Database FAQ
Note: 453137.1 - Oracle Workflow Best Practices Release 12 and Release 11i
Note: 398942.1 - FNDCPASS Utility New Feature ALLORACLE
Note: 187735.1 - Workflow FAQ - All Versions




-- AUTOCONFIG

Running Autoconfig on RAC instance, Failed with ORA-12504: TNS:listener was not given the SID in CONNECT_DATA
  	Doc ID: 	577396.1

Troubleshooting Autoconfig issues with Oracle Applications RAC Databases
  	Doc ID: 	756050.1
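For context on the AutoConfig notes above, this is where adautocfg.sh lives on each tier (per Note 218089.1); the <CONTEXT_NAME> placeholder stands for the <SID>_<hostname> context of a given install:

```shell
#!/bin/sh
# Sketch: where AutoConfig is run on each tier (Note 218089.1).
# <CONTEXT_NAME> is a placeholder for your <SID>_<hostname> context.
autoconfig_cmds() {
cat <<'EOF'
# Database tier:
$ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>/adautocfg.sh
# Application tier (11i):
$COMMON_TOP/admin/scripts/<CONTEXT_NAME>/adautocfg.sh
# Application tier (R12):
$ADMIN_SCRIPTS_HOME/adautocfg.sh
EOF
}
autoconfig_cmds
```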



http://www.tomshardware.com/forum/221285-30-memory
https://db-engines.com/en/system/Elasticsearch%3BGoogle+BigQuery%3BSphinx
https://stackoverflow.com/questions/11264868/does-google-bigquery-support-full-text-search
https://medium.com/google-cloud/bigquery-performance-tips-searching-for-text-8x-faster-f9314927b8d2  <- nice
https://medium.com/inside-bizzabo/creating-an-elasticsearch-to-bigquery-data-pipeline-afe7c3f97369
https://www.youtube.com/results?search_query=elasticsearch+fulltext+bigquery
https://www.youtube.com/watch?v=WwN-vq67vBk
https://www.youtube.com/watch?v=H8f59-vxvn4 <- nice
https://www.google.com/search?client=firefox-b-1-d&q=elasticsearch+etl
https://discuss.elastic.co/t/etl-tool-for-elasticsearch/113803/14   <- logstash to etl 
https://qbox.io/blog/integrating-elasticsearch-into-node-js-application   <- nodejs 
https://www.alooma.com/integrations/elasticsearch






! 2020 
<<<
Haven't really played with the whole ELK or TIG stack, but I'm watching it and I have references and bought courses :p I just don't have time to catch up.


There are two peeps who have done this with Oracle in mind: see the blog posts by Bertrand and Robin.

Bertrand
https://bdrouvot.wordpress.com/2016/03/05/graphing-oracle-performance-metrics-with-telegraf-influxdb-and-grafana/

ELK:
Logstash to collect the information the way we want to.
Elasticsearch as an analytics engine.
Kibana to visualize the data.

TIG:
telegraf: to collect the Exadata metrics
InfluxDB: to store the time-series Exadata metrics
grafana: to visualise the Exadata metrics

Robin
https://www.elastic.co/blog/visualising-oracle-performance-data-with-the-elastic-stack



Here are some courses:

I like this one because it covers several different ways of integration:
https://www.udemy.com/course/grafana-graphite-and-statsd-visualize-metrics/
Integration with DataSources
Integration of Grafana with MySQL
Integration of Grafana with SQL Server (version 5.3 and above)
Integration of Grafana with Elasticsearch
Integration of Grafana with AWS Cloudwatch
Integration of Grafana with InfluxDB

Then there's also this thing called Prometheus:
https://prometheus.io/
https://www.udemy.com/course/monitoring-and-alerting-with-prometheus/





I think ELK and TIG are geared towards real-time dashboarding, and these tools compete with New Relic, Splunk, Dynatrace, and a lot more that are probably newer in the market.





 https://devconnected.com/how-to-setup-telegraf-influxdb-and-grafana-on-linux/
<<<
http://newappsdba.blogspot.com/2009/11/setting-em-blackouts-from-gui-and.html
http://dbakevlar.com/2012/01/getting-the-most-out-of-enterprise-manager-and-notifications/
How to Troubleshoot Process Control (start, stop, check status) the 10g Oracle Management Service(OMS) Component in 10g Enterprise Manager Grid Control [ID 730308.1]
Grid Control Performance: How to Troubleshoot OMS Crash / Restart Issues? [ID 964469.1]
11.1.0.1 emctl start oms gives the error message Unexpected error occurred. Check error and log files [ID 1331527.1]



Troubleshooting Why EM Express is not Working (Doc ID 1604062.1)
NOTE:1601454.1 - EM Express 12c Database Administration Page FAQ

http://www.oracle.com/technetwork/database/manageability/emx-intro-1970113.html

! EM Express https port
{{{
select dbms_xdb_config.gethttpsport() from dual; 
exec DBMS_XDB_CONFIG.SETHTTPSPORT(5500); 
https://localhost:5500/em/
}}}



{{{
How Reset the SYSMAN password in OEM 12c

-- Stop all the OMS:

$OMS_HOME/bin/emctl stop oms

-- Modify the SYSMAN password:

$OMS_HOME/bin/emctl config oms -change_repos_pwd -use_sys_pwd -sys_pwd <sys pwd> -new_pwd <new sysman pwd>

-- Re-start all the OMS:

$OMS_HOME/bin/emctl stop oms -all
$OMS_HOME/bin/emctl start oms
}}}
http://oraclepoint.com/oralife/2011/10/11/difference-between-oracle-enterprise-manager-10g-and-11g/
''installers''
{{{
Oracle Enterprise Manager Grid Control for Linux x86-64
http://download.oracle.com/otn/linux/oem/1110/GridControl_11.1.0.1.0_Linux_x86-64_1of3.zip
http://download.oracle.com/otn/linux/oem/1110/GridControl_11.1.0.1.0_Linux_x86-64_2of3.zip
http://download.oracle.com/otn/linux/oem/1110/GridControl_11.1.0.1.0_Linux_x86-64_3of3.zip


Agent Software for 64-bit Platforms
http://download.oracle.com/otn/linux/oem/1110/Linux_x86_64_Grid_Control_agent_download_11_1_0_1_0.zip


Oracle WebLogic Server 11gR1 (10.3.2) - Package Installer
http://download.oracle.com/otn/nt/middleware/11g/wls/wls1032_generic.jar
}}}


http://www.oracle.com/technetwork/oem/grid-control/downloads/index.html
http://www.oracle.com/technetwork/middleware/ias/downloads/wls-main-097127.html
http://www.oracle.com/technetwork/oem/grid-control/downloads/linuxx8664soft-085949.html
http://www.oracle.com/technetwork/oem/grid-control/downloads/agentsoft-090381.html
http://www.oracle-base.com/articles/11g/GridControl11gR1InstallationOnOEL5.php
http://ocpdba.wordpress.com/2010/05/28/enterprise-manager-11g-installation/
http://gavinsoorma.com/2010/04/11g-enterprise-manager-grid-control-installation-overview/
http://ivan.kartik.sk/oracle/install_ora11gR1_elinux.html
http://www.masterschema.com/2010/04/install-enterprise-manager-grid-control-11g-release-1/
http://blogs.griddba.com/2010/05/enterprise-manger-grid-control-11g.html


Also check out the [[EnterpriseManagerMetalink]]
http://oemgc.files.wordpress.com/2012/10/em12c-monitoring-best-practices.pdf


How to Deploy Oracle Management Agent 12c http://www.gokhanatil.com/2011/10/how-to-deploy-oracle-management-agent.html

Em12c:Silent Oracle Management agent Installation http://askdba.org/weblog/2012/02/em12c-silent-oracle-management-agent-installation

EM12c:Automated discovery of Targets http://askdba.org/weblog/2012/02/em12c-automated-discovery-of-targets/

Rapid deployment of Enterprise Manager Cloud Control 12c (12.1) Agent http://goo.gl/vqrtK

Auto Discovery of Targets in EM12c http://oemgc.wordpress.com/2012/02/01/auto-discovery-of-targets-in-em12c/  <-- this will discover targets from an IP range

''Official Doc''
 Installing Oracle Management Agent 12.1.0.2 http://docs.oracle.com/cd/E24628_01/install.121/e22624/install_agent.htm#CACJEFJI
 Installing Oracle Management Agent 12.1.0.1 http://docs.oracle.com/html/E22624_12/install_agent.htm#CACJEFJI
Download additional agent 12.1.0.2 software using Self Update http://docs.oracle.com/cd/E24628_01/doc.121/e24473/self_update.htm#BEHGDJGE
Applying bundle patches on Exadata using Enterprise Manager Grid Control https://blogs.oracle.com/XPSONHA/entry/applying_bundle_patches_on_exadata

http://www.oracle.com/technetwork/oem/exa-mgmt/em12c-exadata-discovery-cookbook-1662643.pdf
<<<
Introduction ......................................................................................... 2
Before You Begin................................................................................ 2
Exadata discovery prerequisite check script................................... 3
Launching Discovery........................................................................... 9
Installing the Agents on the Compute Nodes.................................. 9
Running Guided Discovery ........................................................... 13
Post Discovery Setups .................................................................. 23
KVM .............................................................................................. 27
Discovering the Cluster and Oracle Databases ................................ 29
Conclusion ........................................................................................ 36
<<<
http://download.oracle.com/docs/cd/E24628_01/em.121/e25160/oracle_exadata.htm#BABFDHBG
http://blogs.oracle.com/XPSONHA/entry/racle_enterprise_manager_cloud_control
http://www.pythian.com/news/33261/oem12c-discovery-of-exadata-cluster/
12cr4 http://docs.oracle.com/cd/E24628_01/doc.121/e27442/toc.htm
http://www.pythian.com/news/38901/setup-exadata-for-cloud-control-12-1-0-2/
http://www.oracle.com/technetwork/oem/em12c-screenwatches-512013.html
Failover capability for plugins Exadata & EMGC Rapid deployment https://blogs.oracle.com/XPSONHA/entry/failover_capability_for_plugins_exadata
Set OEM 12c Self Update to Offline mode
https://blogs.oracle.com/VDIpier/entry/set_oem_12c_self_update
! Cloud Control Install 
<<<
1) 11.2 RDBMS OS Prereqs
see [[11gR1 Install]]

2) Install RDBMS software
{{{
	-- disable AMM first then set the following
	ALTER SYSTEM SET pga_aggregate_target=1G SCOPE=SPFILE;
	ALTER SYSTEM SET shared_pool_size=600M SCOPE=SPFILE;
	ALTER SYSTEM SET job_queue_processes=20 SCOPE=SPFILE;
	ALTER SYSTEM SET log_buffer=10485760 SCOPE=SPFILE;
	ALTER SYSTEM SET open_cursors=300 SCOPE=SPFILE;
	ALTER SYSTEM SET processes=1000 SCOPE=SPFILE;
	ALTER SYSTEM SET session_cached_cursors=200 SCOPE=SPFILE;
	ALTER SYSTEM SET sga_target=2G SCOPE=SPFILE;
	
	EXEC dbms_auto_task_admin.disable('auto optimizer stats collection',null,null);
}}}
3) Deconfigure 11.2 DB control
{{{
oracle@emgc12c.local:/u01/installers/cloudcontrol:emrep12c
$ $ORACLE_HOME/bin/emca -deconfig dbcontrol db -repos drop
}}}
4) Install Cloud Control

5) Deploy stop/start scripts
{{{
oracle@emgc12c.local:/home/oracle/bin:emrep12c
$ cat start_grid.sh
export ORACLE_SID=emrep12c
export ORAENV_ASK=NO
. oraenv
lsnrctl start
sqlplus / as sysdba  << EOF
startup
EOF
cd /u01/middleware/oms/bin
./emctl start oms
cd /u01/agent/agent_inst/bin
./emctl start agent

cd /u01/agent/agent_inst/bin
./emctl stop agent
cd /u01/middleware/oms/bin
./emctl stop oms -all

cd /u01/middleware/oms/bin
./emctl start oms
cd /u01/agent/agent_inst/bin
./emctl start agent
}}}
{{{
oracle@emgc12c.local:/home/oracle/bin:emrep12c
$ cat stop_grid.sh
cd /u01/agent/agent_inst/bin
./emctl stop agent
cd /u01/middleware/oms/bin
./emctl stop oms -all
export ORACLE_SID=emrep12c
export ORAENV_ASK=NO
. oraenv
sqlplus / as sysdba << EOF
shutdown immediate
EOF
lsnrctl stop
}}}
<<<

! Install Agent
<<<
<<<
Enterprise Manager Agent Downloads Page http://www.oracle.com/technetwork/oem/grid-control/downloads/agentsoft-090381.html
	Enterprise Manager Agent 12.1.0.1 and 12.1.0.2 Binaries
	You can get the 12.1.0.1 / 12.1.0.2 agent binaries for the agent installation by using the Self Update feature. Refer to the Agent deployment section of the Advanced Install guide available here for more details.
	For information on using the Self Update feature, refer to the Oracle Enterprise Manager Cloud Control Administrator's Guide, available here.
<<<

1) Discover the local emrep database http://oemgc.wordpress.com/2012/01/09/discover-em12c-repository-database-after-installation/

2) Install agent using AgentDeploy from the OMS

edit the /etc/sudoers file on the target for the post install scripts (you can ignore this and run after install) http://www.gokhanatil.com/2011/10/how-to-deploy-oracle-management-agent.html
{{{
#Defaults    requiretty   <-- comment this
# Defaults   !visiblepw   <-- comment this
Defaults   visiblepw
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
oracle  ALL=(ALL)       ALL
}}}

3) Activate ASH analytics by deploying the Database Management PL/SQL Packages on target databases
<<<
Also see [[EM12c Agent]]


! Activate other Plug-ins (requires OMS shutdown/restart and will disconnect all targets)




! Errors

Exception: OperationFailedException: Below host metric patches are not applied to OMS.[13426571]
Re: where can i download agent 12c for all platform? https://forums.oracle.com/forums/thread.jspa?threadID=2315005

SEVERE: OUI-10053: Unable to generate temporary script, Unable to continue install    <-- corrupted inventory.xml file; debug the opatch issue with ''export OPATCH_DEBUG=TRUE'', then do a ''locate inventory.xml'' to find the backup copy of inventory.xml
http://www.gokhanatil.com/2012/03/emcli-session-expired-error-and-fqdn.html  <-- on manual agent install when getting the zip software on the OMS



! References
Release Schedule of Current Enterprise Manager Releases and Patch Sets (10g, 11g, 12c) [ID 793512.1]

How to Install Enterprise Manager Cloud Control 12.1.0.1 (12c) on Linux [ID 1359176.1]
EM 12c R2: How to Install Enterprise Manager Cloud Control 12.1.0.2 using GUI Mode [ID 1488154.1]
http://www.gokhanatil.com/2011/10/how-to-install-oracle-enterprise-manager-cloud-control-12c.html
http://docs.oracle.com/cd/E24628_01/install.121/e22624/preinstall_req_hw.htm#BACDDAAC
http://docs.oracle.com/cd/E24628_01/install.121/e22624/preinstall_req_packages.htm#CHDEHHCA
http://www.oracle.com/technetwork/oem/em12c-screenwatches-512013.html <-- includes agent install
http://blogs.oracle.com/VDIpier/entry/installing_oem_12c
http://www.dbspecialists.com/blog/database-monitoring/install-and-configure-oracle-enterprise-manager-cloud-control-12c/  <-- using manual agent install

EM 12c: How to Install EM 12c Agent using Silent Install Method with Response File [ID 1360083.1]
12c Cloud Control: How to Install Cloud Agent on Oracle RAC Nodes? [ID 1377434.1]  <-- In 12c, there is no option to install a 'cluster Agent' as in the earlier versions
EM 12c: How to Install Enterprise Manager 12.1.0.1 Using Silent Method [ID 1361643.1]
How To De-Install the Enterprise Manager 12c Cloud Control [ID 1363418.1]
How to De-install the Enterprise Manager Cloud Control 12c Agent [ID 1368088.1]
FAQ: Enterprise Manager Agent 12c Availability / Certification / Install / Upgrade Frequently Asked Questions [ID 1488133.1]
Note 1369575.1 EM 12c: Acquiring or Updating the Enterprise Manager Cloud Control 12.1.0.1 Management Agent Software Using the Self Update Feature
Note 406906.1 Understanding Enterprise Manager Certification in My Oracle Support
EM 12c: Troubleshooting 12c Management Agent Installation issues [ID 1396675.1]


oem12cR1 http://www.oracle-base.com/articles/12c/cloud-control-12cr1-installation-on-oracle-linux-5-and-6.php
oem12cR2 http://www.oracle-base.com/articles/12c/cloud-control-12cr2-installation-on-oracle-linux-5-and-6.php



-- display all devices
powermt display dev=all


''EMC VNX'' http://rogerluethy.wordpress.com/2011/01/18/emc-vnx-whats-in-the-box/
''EMC Symmetrix'' http://en.wikipedia.org/wiki/EMC_Symmetrix
''EMC xtremIO'' (the competitor to Nutanix) https://twitter.com/kevinclosson/status/420971534195232768


''Kevin's readables notes'' http://kevinclosson.wordpress.com/2012/01/30/emc-oracle-related-reading-material-of-interest/#comments





https://blogs.oracle.com/pankaj/entry/emcli_setup
http://archives.devshed.com/forums/linux-97/using-lvm-with-san-1988109.html
http://archives.devshed.com/forums/linux-97/using-lvm-with-emc-powerpath-1845854.html
http://lists.us.dell.com/pipermail/linux-poweredge/2006-October/028086.html
http://archives.free.net.ph/message/20060609.164110.a24b2220.en.html
http://www.mail-archive.com/centos@centos.org/msg19136.html
http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/en-US/RHEL510/DM_Multipath/multipath_logical_volumes.html
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Cluster_Logical_Volume_Manager/lvm_filters.html <-- You can control which devices LVM scans by setting up filters in the lvm.conf configuration file
http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/en-US/RHEL510/Cluster_Logical_Volume_Manager/lvmconf_file.html
http://kbase.redhat.com/faq/docs/DOC-1573


Support Info
http://www.emc.com/support-training/support/maintenance-tech-support/options/index.htm


Powerlink:

emc193050	"vgcreate against emcpower device fails on Linux server."
emc46848	"Duplicate PVIDS on multiple disks"
emc118890	"How to create a Linux Sistina LVM2 logical volume"
emc118561	"Sistina LVM2 is reporting duplicate PV on RHEL"
emc120281	"How to set up a Linux host to use emcpower devices in LVM"
emc93760	"Where can I find Linux Solutions?"
http://www.pythian.com/news/14721/environment-variables-in-grid-control-user-defined-metrics/
* emctl start agent
* emctl stop agent
* emctl status agent
* emctl upload agent
* emctl resetTZ agent
<<<
if you are getting OMS: AGENT_TZ_MISMATCH errors
<<<
* exec mgmt_admin.cleanup_agent('pd02db02.us.cbre.net:3872');              ''<-- this cleans up any info of that host, for De-commissioned Host''
<<<
Right After Install, the Grid Control Agent Generates ERROR-Agent is blocked. Blocked reason is: Agent is out-of-sync with repository [ID 1307816.1]
<<<
*/u01/app/oracle/product/12.1.0.4/middleware/oms/bin/@@emctl status oms -details@@   ''<-- to get status when deploying plugin''

''OEM Dashboard and Groups''
emctl status agent
emctl config agent listtargets



on repository first do this 
{{{
exec mgmt_admin.cleanup_agent('<hostname>:3872'); <— this cleans up any info of that host, for De-commissioned Host
}}}


{{{
oracle@desktopserver:/app/oracle/product/agent11g/oui/bin:AGENT
$ . ~oracle/.karlenv
<HOME_LIST>
<HOME NAME="OraDb10g_asmhome" LOC="/app/oracle/product/10.2.0/asm" TYPE="O" IDX="1"/>
<HOME NAME="OraDb10g_home2" LOC="/app/oracle/product/10.2.0/db" TYPE="O" IDX="2"/>
<HOME NAME="agent11g1" LOC="/app/oracle/product/agent11g" TYPE="O" IDX="3"/>
<HOME NAME="OraDb10g_asm_10205_home" LOC="/app/oracle/product/10.2.0.5/asm" TYPE="O" IDX="4"/>
<HOME NAME="OraDb10g_db_10205_home" LOC="/app/oracle/product/10.2.0.5/db" TYPE="O" IDX="5"/>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/app/oracle/product/11.2.0.3/grid" TYPE="O" IDX="6"/>
<HOME NAME="OraDb11g_home1" LOC="/app/oracle/product/11.2.0.3/db" TYPE="O" IDX="7"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>


 1-   epm10prd
 2-   cog10prd
 3-    statprd
 4-      AGENT
 5-       +ASM

Select the Oracle SID with given number [1]:

oracle@desktopserver:/app/oracle/product/agent11g/oui/bin:
$ ./runInstaller -deinstall ORACLE_HOME=/app/oracle/product/agent11g "REMOVE_HOMES={/app/oracle/product/agent11g}" -silent
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 30047 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-11-28_04-07-20PM. Please wait ...oracle@desktopserver:/app/oracle/product/agent11g/oui/bin:
$ Oracle Universal Installer, Version 11.1.0.8.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

Starting deinstall


Deinstall in progress (Wednesday, November 28, 2012 4:07:31 PM CST)
Configuration assistant "Agent Deinstall Assistant" succeeded
Configuration assistant "Oracle Configuration Manager Deinstall" succeeded
............................................................... 100% Done.

Deinstall successful

End of install phases.(Wednesday, November 28, 2012 4:08:13 PM CST)
End of deinstallations
Please check '/app/oraInventory/logs/silentInstall2012-11-28_04-07-20PM.log' for more details.

oracle@desktopserver:/app/oracle/product/agent11g/oui/bin:epm10prd
$ . ~oracle/.karlenv
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
<HOME_LIST>
<HOME NAME="OraDb10g_asmhome" LOC="/app/oracle/product/10.2.0/asm" TYPE="O" IDX="1"/>
<HOME NAME="OraDb10g_home2" LOC="/app/oracle/product/10.2.0/db" TYPE="O" IDX="2"/>
<HOME NAME="OraDb10g_asm_10205_home" LOC="/app/oracle/product/10.2.0.5/asm" TYPE="O" IDX="4"/>
<HOME NAME="OraDb10g_db_10205_home" LOC="/app/oracle/product/10.2.0.5/db" TYPE="O" IDX="5"/>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/app/oracle/product/11.2.0.3/grid" TYPE="O" IDX="6"/>
<HOME NAME="OraDb11g_home1" LOC="/app/oracle/product/11.2.0.3/db" TYPE="O" IDX="7"/>
<HOME NAME="agent11g1" LOC="/app/oracle/product/agent11g" TYPE="O" IDX="3" REMOVED="T"/>
</HOME_LIST>


 1-   epm10prd
 2-   cog10prd
 3-    statprd
 4-      AGENT        <-- it's still there!!!
 5-       +ASM

Select the Oracle SID with given number [1]:


Next, manually remove it from /etc/oratab and /app/oraInventory/ContentsXML/inventory.xml
}}}
http://www.evernote.com/shard/s48/sh/0c1c4419-cc71-43d1-b833-3158554a16dd/4202762f0bd31d3becafa02b760ae6fa
Creating a view only user in Enterprise Manager grid control http://dbastreet.com/blog/?p=395
http://boomslaang.wordpress.com/2008/05/27/securing-oracle-agents/

Right After Install, the Grid Control Agent Generates ERROR-Agent is blocked. Blocked reason is: Agent is out-of-sync with repository [ID 1307816.1]  <-- this fixed it
Communication: Agent to OMS Communication Fails if the Agent is 'Blocked' in the 10.2.0.5 Grid Console [ID 799618.1]
11.1 Agent Upload is Failing With "ERROR-Agent is blocked. Blocked reason is: Agent is out-of-sync with repository" [ID 1362430.1]
* ESCOM error when pressing Enter; run the following as root to change the key behavior

xmodmap -e 'keycode 104 = Return'
Oracle Identity Management Certification
http://www.oracle.com/technology/software/products/ias/files/idm_certification_101401.html#BABFFCJA



eSSO: Overview And Troubleshooting Of OIM Integration With Provisioning Gateway
  	Doc ID: 	Note:550639.1
  	
ESSO - debugging terminal emulator templates
  	Doc ID: 	Note:445012.1
  	
How to Upgrade eSSO
  	Doc ID: 	Note:471825.1

eSSO: Credentials Might Get Corrupted
  	Doc ID: 	Note:563523.1

Installation and Configuration of the ESSO-LM with Oracle Database
  	Doc ID: 	Note:456062.1

ESSO - Putty autologin to Unix server
  	Doc ID: 	Note:412967.1

eSSO: Overview And Troubleshooting Provisioning Gateway
  	Doc ID: 	Note:549189.1

eSSO: How To Integrate an Application Having Windows Based Login and Web Based Password Change
  	Doc ID: 	Note:470492.1

Does Oracle Single Sign-On have any Means to Provide Two Factor Authentication?
  	Doc ID: 	Note:559094.1

Installing eSSO Login Manager On Windows Vista Fails If User Is Not Administrator
  	Doc ID: 	Note:469501.1

Failed To Detect Change Window Password Of Oracle Forms 6
  	Doc ID: 	Note:563955.1

ESSO - Logon Manager Agent - enabling traces for intercepted windows
  	Doc ID: 	Note:412995.1
https://statsbot.co/blog/etl-vs-elt/


<<<
ETL vs ELT: running transformations in a data warehouse
What exactly happens when we switch “L” and “T”? With new, fast data warehouses some of the transformation can be done at query time. But there are still a lot of cases where it would take quite a long time to perform huge calculations. So instead of doing these transformations at query time you can perform them in the warehouse, but in the background, after loading data.
<<<
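The quoted idea (load raw data first, transform inside the warehouse afterward) can be sketched with Python's stdlib sqlite3 standing in for the warehouse. This is only an illustration of the ELT ordering; the table and column names are made up:

```python
import sqlite3

# stand-in "warehouse"; in ELT the raw extract lands first, untransformed
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw_events (user_id INT, amount_cents INT)")

# L: load the raw extract as-is
con.executemany("INSERT INTO raw_events VALUES (?, ?)",
                [(1, 250), (1, 750), (2, 100)])

# T: transform inside the warehouse, in the background, after loading
con.execute("""
    CREATE TABLE user_totals AS
    SELECT user_id, SUM(amount_cents) / 100.0 AS total_dollars
    FROM raw_events
    GROUP BY user_id
""")

print(dict(con.execute("SELECT user_id, total_dollars FROM user_totals")))
# {1: 10.0, 2: 1.0}
```

Query-time consumers then hit the pre-built user_totals instead of recomputing the aggregation on every query.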

<<showtoc>>


! home 

!! gen1 home 
https://docs.oracle.com/en/cloud/paas/exadata-cloud/

!! gen2 home 
https://www.oracle.com/database/exadata-cloud-customer.html


! documentation 

!! gen1 doc
Administering Oracle Database Exadata Cloud at Customer (Gen 1/OCI-C)
https://docs.oracle.com/en/cloud/cloud-at-customer/exadata-cloud-at-customer/exacc/service-instances.html#GUID-B34563D6-9581-4390-AE6E-3D2304E829EE

!! gen2 doc 
https://docs.oracle.com/en-us/iaas/exadata/doc/ecc-exadata-cloud-at-customer-overview.html


! datasheet 
!! gen1 ds
(until x7) https://www.oracle.com/technetwork/database/exadata/exacc-x7-ds-4126773.pdf
!! gen2 ds 
(x8 onwards) https://www.oracle.com/a/ocom/docs/engineered-systems/exadata/gen2-exacc-commercial-faqs.pdf
https://www.oracle.com/a/ocom/docs/engineered-systems/exadata/gen2-exacc-ds.pdf


! Architecture_Diagrams
Understanding the Exadata Cloud at Customer Technical Architecture (Gen 1/OCI-C) https://www.oracle.com/webfolder/technetwork/tutorials/Architecture_Diagrams/ecc_arch/ecc_arch.html
Understanding the Exadata Cloud Service Technical Architecture (Gen 1/OCI-C) https://www.oracle.com/webfolder/technetwork/tutorials/Architecture_Diagrams/ecs_arch/ecs_arch.html#



! other references 

!! gen2 
https://wikibon.com/oracle-ups-its-game-with-gen-2-cloud-at-customer/
https://www.ejgallego.com/2018/10/oracle-cloud-gen-2/
https://blog.dbi-services.com/oracle-open-world-2018-cloudgen2/larrykeynote008/
https://www.oracle.com/a/ocom/docs/constellation-on-gen-2-exacc-fr.pdf




! EXACLI 
https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmmn/exacli.html#GUID-6BF1E4F5-A63E-4A30-886A-2F3DB8A2830F
<<showtoc>>




oracle automated patching 
https://www.google.com/search?q=oracle+automated+patching&oq=oracle+automated+patching&aqs=chrome..69i57j0l2j69i64l3.3858j0j1&sourceid=chrome&ie=UTF-8

https://www.oracle.com/technical-resources/articles/enterprise-manager/havewala-patching-oem12c.html
https://www.doag.org/formes/pubfiles/9627420/2017-DB-Nicolas_Jardot-Automate_Patching_for_Oracle_Database_in_your_Private_Cloud-Praesentation.pdf

oracle dbaascli patch oracle cloud on-prem
https://www.google.com/search?sxsrf=ACYBGNT09LMBA3N-RkJoTvWrRClLS9bTKw%3A1568043638172&ei=dnJ2Xd6CCtHU5gLhpIPgCA&q=oracle+dbaascli+patch+oracle+cloud+on-prem&oq=oracle+dbaascli+patch+oracle+cloud+on-prem&gs_l=psy-ab.3..33i160l3.8173.10244..10526...0.2..0.123.834.3j5......0....1..gws-wiz.......0i71j33i299.o-i2Ht-IvTQ&ved=0ahUKEwjela7gicTkAhVRqlkKHWHSAIwQ4dUDCAs&uact=5

https://gokhanatil.com/2016/12/how-to-patch-oracle-database-on-the-oracle-cloud.html

https://docs.oracle.com/en/cloud/paas/database-dbaas-cloud/csdbi/patch-hybrid-dr-deployment.html

patching on-premise oracle exadata
https://www.google.com/search?q=patching+on-premise+oracle+exadata&oq=patching+on-premise+oracle+exadata&aqs=chrome..69i57j33.10698j0j4&sourceid=chrome&ie=UTF-8

! exadata cloud 
Patching Exadata Cloud Service https://docs.oracle.com/en/cloud/paas/exadata-cloud/csexa/patch.html
https://docs.oracle.com/en/cloud/paas/exadata-cloud/csexa/typical-workflow-using-service.html

! exadata cloud at customer 
Patching Exadata Cloud at Customer  https://docs.oracle.com/en/cloud/cloud-at-customer/exadata-cloud-at-customer/exacc/patch.html , https://docs.oracle.com/en/cloud/cloud-at-customer/index.html




https://www.oracle.com/database/exadata-cloud-service.html
<<showtoc>> 

! sqltext_to_signature example use
{{{
-- sqltext_to_signature example use
select dbms_sqltune.sqltext_to_signature('karlarao') from dual;   -- it's a function, so call it from SQL; default is exact matching
-- 1 is force matching
select dbms_sqltune.sqltext_to_signature('karlarao',1) from dual;
2777083410832069452
}}}
https://docs.oracle.com/database/121/ARPLS/d_sqltun.htm#ARPLS68464


! the two example SQLs
{{{
select * from karlarao.skew where skew=3;   --6fvyp18cvnzwa 375614277642158684 -- exact matching 4404474968209701751  -- force matching 1949605896 PHV
select  *   from karlarao.SKEW Where skew=3;   --1myj38m1m3g2u 375614277642158684 -- exact matching 4404474968209701751  -- force matching 1949605896 PHV 

set serveroutput on 
VARIABLE sql1 VARCHAR2(100)
VARIABLE sql2 VARCHAR2(100)
BEGIN
  :sql1 := q'[select * from karlarao.skew where skew=3]';
  :sql2 := q'[select  *   from karlarao.SKEW Where skew=3]';
END;
/
col signature format 999999999999999999999999
SELECT :sql1 sql_text, dbms_sqltune.sqltext_to_signature(:sql1,0) signature FROM dual
UNION ALL
SELECT :sql2 sql_text, dbms_sqltune.sqltext_to_signature(:sql2,0) signature FROM dual;
}}}
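The two statements above differ only in whitespace and keyword case, which is why they share one EXACT_MATCHING_SIGNATURE. A rough Python sketch of the idea behind the normalization (illustration only, not Oracle's actual algorithm; the real exact-matching normalization also leaves quoted literals un-uppercased, which is skipped here for brevity):

```python
import re

def normalize(sql, force_matching=False):
    # exact-matching style: collapse whitespace, uppercase (illustration only)
    s = re.sub(r"\s+", " ", sql.strip()).upper()
    if force_matching:
        # force-matching style: also replace literal values with placeholders,
        # the way CURSOR_SHARING=FORCE sees the statement
        s = re.sub(r"'[^']*'", ":S", s)
        s = re.sub(r"\b\d+\b", ":N", s)
    return s

a = normalize("select * from karlarao.skew where skew=3")
b = normalize("select  *   from karlarao.SKEW Where skew=3")
print(a == b)   # True: both normalize to the same exact-matching text
print(normalize("select * from t where x=1", force_matching=True) ==
      normalize("select * from t where x=2", force_matching=True))   # True
```

Different literals (skew=3 vs skew=4) would break the exact-matching equality but still collapse to the same force-matching text, mirroring EXACT_MATCHING_SIGNATURE vs FORCE_MATCHING_SIGNATURE.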


! EXACT_MATCHING_SIGNATURE vs FORCE_MATCHING_SIGNATURE vs dbms_sqltune.sqltext_to_signature across views 
{{{
select * from v$sql where sql_id in ('6fvyp18cvnzwa','1myj38m1m3g2u');
select * from gv$sqlstats where sql_id in ('6fvyp18cvnzwa','1myj38m1m3g2u'); -- same as v$sql but planx uses this
--EXACT_MATCHING_SIGNATURE (dbms_sqltune.sqltext_to_signature) - Signature calculated on the normalized SQL text. The normalization includes the removal of white space and the uppercasing of all non-literal strings.
--FORCE_MATCHING_SIGNATURE - Signature used when the CURSOR_SHARING parameter is set to FORCE

select * from dba_sql_plan_baselines;   -- only signature 375614277642158684 which is the EXACT_MATCHING_SIGNATURE
-- SQL_05367332076d025c SQL_HANDLE , SQL_PLAN_0admm683qu0kw08e93fe4 PLAN_NAME

select * from dba_sql_profiles;  -- here signature means 4404474968209701751 FORCE_MATCHING_SIGNATURE, with FORCE_MATCHING=yes/no
select name, force_matching, signature, created from dba_sql_profiles where signature in (select force_matching_signature from dba_hist_sqlstat where sql_id = '6fvyp18cvnzwa');

select * from dba_hist_sqlstat where sql_id in ('6fvyp18cvnzwa'); --4404474968209701751 only contains FORCE_MATCHING_SIGNATURE



-- generate explain plans of SQL handle 
SELECT sql_handle FROM dba_sql_plan_baselines WHERE signature = 375614277642158684 AND ROWNUM = 1; --SQL_05367332076d025c
SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE('SQL_05367332076d025c'));

}}}




! how it works - detailed example using SQL Profile and SPM 
{{{

--6fvyp18cvnzwa
-- start with full table scan plan
ALTER SESSION SET OPTIMIZER_USE_SQL_PLAN_BASELINES=false;
select * from karlarao.skew where skew=3;

DECLARE
my_plans pls_integer;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '6fvyp18cvnzwa',plan_hash_value=>'246648590', fixed =>'YES', enabled=>'YES');
END;
/

create index karlarao.skew_idx on skew(skew); 
exec dbms_stats.gather_index_stats(user,'SKEW_IDX', no_invalidate => false); 
exec dbms_stats.gather_table_stats(user,'SKEW', no_invalidate => false); 

--1myj38m1m3g2u
-- to parse and pickup the new index and create a new PHV
ALTER SESSION SET OPTIMIZER_USE_SQL_PLAN_BASELINES=false;  
select  *   from karlarao.SKEW Where skew=3;


-- even with different SQL_ID, what matters is the text matches the EXACT_MATCHING_SIGNATURE 375614277642158684 to be tied to SQL_HANDLE SQL_05367332076d025c as a new SQL PLAN_NAME
DECLARE
my_plans pls_integer;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '1myj38m1m3g2u',plan_hash_value=>'1949605896', fixed =>'YES', enabled=>'YES');
END;
/

SQL handle: SQL_05367332076d025c
SQL text: select * from karlarao.skew where skew=3
Plan name: SQL_PLAN_0admm683qu0kw08e93fe4 
Plan hash value: 1949605896
Plan name: SQL_PLAN_0admm683qu0kw950a48a8         
Plan hash value: 246648590


02:19:18 KARLARAO@cdb1> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE('SQL_05367332076d025c'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL handle: SQL_05367332076d025c
SQL text: select * from karlarao.skew where skew=3
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_0admm683qu0kw08e93fe4         Plan id: 149503972
Enabled: YES     Fixed: YES     Accepted: YES     Origin: MANUAL-LOAD
Plan rows: From dictionary
--------------------------------------------------------------------------------

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Plan hash value: 1949605896

------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| SKEW     |     1 |     7 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | SKEW_IDX |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------


PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("SKEW"=3)

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_0admm683qu0kw950a48a8         Plan id: 2500479144
Enabled: YES     Fixed: YES     Accepted: YES     Origin: MANUAL-LOAD
Plan rows: From dictionary
--------------------------------------------------------------------------------


PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 246648590

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     8 (100)|          |
|*  1 |  TABLE ACCESS FULL| SKEW |     1 |     7 |     8  (13)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

   1 - filter("SKEW"=3)

46 rows selected.




-- another way to add a new plan to an existing baseline is to create a test SQL with hints that produce the desired plan
-- once the profile is created, the explain plan will show that both the profile and the baseline are used, although the old plan is still in effect
-- the SQL profile's plan only takes effect once its PHV is added to the SPM baseline

select /* new */ * from karlarao.SKEW Where skew=3;


02:47:23 KARLARAO@cdb1> @copy_plan_hash_value.sql
Enter value for plan_hash_value to generate profile from (X0X0X0X0): 1949605896
Enter value for sql_id to attach profile to (X0X0X0X0): 6fvyp18cvnzwa
Enter value for child_no to attach profile to (0): 
Enter value for category (DEFAULT): 
Enter value for force_matching (false): true
old  18:             plan_hash_value = '&&plan_hash_value_from'
new  18:             plan_hash_value = '1949605896'
old  32:    sql_id = '&&sql_id_to'
new  32:    sql_id = '6fvyp18cvnzwa'
old  33:    and child_number = &&child_no_to;
new  33:    and child_number = 0;
old  37:    dbms_output.put_line ('SQL_ID ' || '&&sql_id_to' || ' not found in v$sql');
new  37:    dbms_output.put_line ('SQL_ID ' || '6fvyp18cvnzwa' || ' not found in v$sql');
old  49:       sql_id = '&&sql_id_to';
new  49:       sql_id = '6fvyp18cvnzwa';
old  53:       dbms_output.put_line ('SQL_ID ' || '&&sql_id_to' || ' not found in dba_hist_sqltext');
new  53:       dbms_output.put_line ('SQL_ID ' || '6fvyp18cvnzwa' || ' not found in dba_hist_sqltext');
old  60: name => 'SP_'||'&&sql_id_to'||'_'||'&&plan_hash_value_from',
new  60: name => 'SP_'||'6fvyp18cvnzwa'||'_'||'1949605896',
old  61: category => '&&category',
new  61: category => 'DEFAULT',
old  63: force_match => &&force_matching
new  63: force_match => true

PL/SQL procedure successfully completed.


DECLARE
my_plans pls_integer;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '6fvyp18cvnzwa',plan_hash_value=>'1949605896', fixed =>'YES', enabled=>'YES');
END;
/
375614277642158684	SQL_05367332076d025c	select * from karlarao.skew where skew=3	SQL_PLAN_0admm683qu0kw08e93fe4	HR	MANUAL-LOAD	KARLARAO
375614277642158684	SQL_05367332076d025c	select * from karlarao.skew where skew=3	SQL_PLAN_0admm683qu0kw950a48a8	SYS	MANUAL-LOAD	SYS


}}}




http://www.dbspecialists.com/files/presentations/semijoins.html
-- MATRIX 

Export/Import DataPump: The Minimum Requirements to Use Export DataPump and Import DataPump (System Privileges)
  	Doc ID: 	Note:351598.1

Export/Import DataPump Parameter VERSION - Compatibility of Data Pump Between Different Oracle Versions
  	Doc ID: 	Note:553337.1

Oracle Server - Export and Import FAQ
  	Doc ID: 	175624.1

Oracle Server - Export Data Pump and Import DataPump FAQ (Doc ID 556636.1)

Compatibility Matrix for Export And Import Between Different Oracle Versions
  	Doc ID: 	132904.1

Compatibility and New Features when Transporting Tablespaces with Export and Import
  	Doc ID: 	291024.1

How to Gather the Header Information and the Content of an Export Dumpfile ?
  	Doc ID: 	462488.1






Exporting to Tape on Unix System
  	Doc ID: 	Note:30428.1
  	
How to Estimate Export File Size Without Creating Dump File
  	Doc ID: 	Note:106465.1
  	
Exporting on Unix Systems
  	Doc ID: 	Note:1018477.6
  	
Exporting/Importing From Multiple Tapes
  	Doc ID: 	Note:2035.1
  	
Exporting to Tape Fails with Errors EXP-00002 and EXP-00000
  	Doc ID: 	Note:160764.1
  	
Large File Issues (2Gb+) when Using Export (EXP-2 EXP-15), Import (IMP-2 IMP-21), or SQL*Loader
  	Doc ID: 	Note:30528.1
  	
Export Using the Parameter VOLSIZE
  	Doc ID: 	Note:90620.1
  	
Parameter FILESIZE - Make Export Write to Multiple Export Files
  	Doc ID: 	Note:290810.1
  	

Compatibility Matrix for Export And Import Between Different Oracle Versions
  	Doc ID: 	Note:132904.1

How To Copy Database Schemas To A New Database With Same Login Password ?
  	Doc ID: 	Note:336012.1

How to Capture Table Constraints onto a SQL Script
  	Doc ID: 	Note:1016836.6

Using DBMS_METADATA To Get The DDL For Objects
  	Doc ID: 	Note:188838.1



-- DATA PUMP

Oracle DataPump Quick Start
 	Doc ID:	Note:413965.1

DataPump Export/Import Generate Messages "The Value (30) Of Maxtrans Parameter Ignored" in Alert Log 
  Doc ID:  Note:455021.1 

How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS ?
  	Doc ID: 	Note:336014.1



-- CANCEL, STOP, RESTART 

How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS ? 
  Doc ID:  Note:336014.1 

HOW TO CLEANUP ROWS IN DBA_DATAPUMP_JOBS FOR STOPPED EXP/IMP JOBS WHEN DUMPFILE IS NOT THERE OR CORRUPTED 
  Doc ID:  Note:294618.1 


-- 32bit 64bit

Note: 277650.1 - How to Use Export and Import when Transferring Data Across Platforms or Across 32-bit and 64-bit Servers
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=277650.1

Note: 553337.1 - Export/Import DataPump Parameter VERSION - Compatibility of Data Pump Between Different Oracle Versions
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=553337.1

Note: 132904.1 - Compatibility Matrix for Export And Import Between Different Oracle Versions
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=132904.1




-- EXP IMP PERFORMANCE

http://www.oracle.com/technology/products/database/utilities/htdocs/datapump_faq.html
http://www.freelists.org/post/oracle-l/import-tuning
http://www.dba-oracle.com/oracle_tips_load_speed.htm

IMPORT / EXPORT UTILITY RUNNING EXTREMELY SLOW
  	Doc ID: 	1012699.102

Tuning Considerations When Import Is Slow
  	Doc ID: 	93763.1

Parallel Capabilities of Oracle Data Pump
  	Doc ID: 	365459.1

Export/Import DataPump Parameter ACCESS_METHOD - How to Enforce a Method of Loading and Unloading Data ?
  	Doc ID: 	552424.1









-- DDL

Unix Script: IMPSHOW2SQL - Extracting SQL from an EXPORT file
  	Doc ID: 	29765.1

How to Gather the Header Information and the Content of an Export Dumpfile ?
  	Doc ID: 	462488.1





-- MIGRATION

How to Perform a Full Database Export Import during Upgrade, Migrate, Copy, or Move of a Database
  	Doc ID: 	286775.1



-- EXPDP ON ASM

Creating dumpsets in ASM
  	Doc ID: 	559878.1

How To Extract Datapump File From ASM Diskgroup To Local Filesystem?
  	Doc ID: 	566941.1
<<showtoc>>



! ash elap - shark fin 
* the idea behind the shark fin viz is simple: the biggest shark fin = the worst (longest-running) SQL

Tableau calculated field
{{{
DATEDIFF('second', MIN([TMS]), MAX([TM]))


The field above takes the diff of the two columns below to form a shark fin; a bigger fin means a longer-running query. You can then use the ASH dimensions to slice and dice the data.
min(SQL_EXEC_START), max(SAMPLE_TIME)
}}}
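The same per-execution elapsed calculation can be sketched outside Tableau. A small Python sketch over made-up ASH-style samples: the fin width for each (SQL_ID, SQL_EXEC_ID) is max(SAMPLE_TIME) - min(SQL_EXEC_START), and sorting descending puts the biggest fin (worst SQL) first:

```python
from datetime import datetime

# made-up ASH-style samples: (sql_id, sql_exec_id, sql_exec_start, sample_time)
T = datetime.fromisoformat
samples = [
    ("2j1z0b4ptkqtb", 1, T("2021-04-12 00:19:03"), T("2021-04-12 00:19:04")),
    ("2j1z0b4ptkqtb", 1, T("2021-04-12 00:19:03"), T("2021-04-12 00:19:10")),
    ("b9p45hkcx0pwh", 1, T("2021-04-12 22:58:07"), T("2021-04-12 22:58:12")),
]

# shark fin width per execution: max(SAMPLE_TIME) - min(SQL_EXEC_START)
elap = {}
for sql_id, exec_id, exec_start, sample_time in samples:
    key = (sql_id, exec_id)
    lo, hi = elap.get(key, (exec_start, sample_time))
    elap[key] = (min(lo, exec_start), max(hi, sample_time))

# biggest fin first = worst SQL first
for key, (lo, hi) in sorted(elap.items(),
                            key=lambda kv: kv[1][1] - kv[1][0], reverse=True):
    print(key, (hi - lo).total_seconds())
# ('2j1z0b4ptkqtb', 1) 7.0
# ('b9p45hkcx0pwh', 1) 5.0
```

This is the same aggregation the ash_elap scripts below run in SQL against DBA_HIST_ACTIVE_SESS_HISTORY.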

[img(100%,100%)[https://i.imgur.com/XqWgFxM.jpg]]
[img(100%,100%)[https://i.imgur.com/OzlGpU4.png]]




! ash elap scripts (script version of shark fin using SQL_EXEC_START and SAMPLE_TIME)


!! ash_elap_topsql.sql (top n by elapsed)
https://github.com/karlarao/scripts/blob/master/performance/ash_elap_topsql.sql
{{{

00:00:33 SYS@cdb1> @ash_elap_topsql

SQL_ID        SQL_TYPE                     MODULE                                             PARSING_SCHEMA       DISTINCT_PHV EXEC_COUNT   ELAP_AVG   ELAP_MIN   ELAP_MAX    PCT_CPU   PCT_WAIT     PCT_IO MAX_TEMP_MB MAX_PGA_MB MAX_READ_MB MAX_WRITE_MB  MAX_RIOPS  MAX_WIOPS
------------- ---------------------------- -------------------------------------------------- -------------------- ------------ ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------- ---------- ----------- ------------ ---------- ----------
2j1z0b4ptkqtb PL/SQL EXECUTE               sqlplus@karldevfedora (TNS V1-V3)                  SYS                             1          1       7.07       7.07       7.07        100          0          0           1       8.16         7.2                    682
b9p45hkcx0pwh SELECT                                                                          SYS                             1          1       5.54       5.54       5.54          0          0        100                   13.3        13.4          .08       675  83
cnphq355f5rah PL/SQL EXECUTE               DBMS_SCHEDULER                                     SYS                             1          1       4.35       4.35       4.35        100          0          0           1      14.23        4.93         2.26       281   5
4u5zq7r9y690a SELECT                       DBMS_SCHEDULER                                     SYS                             2          2       2.89       2.78          3        100          0          0                  44.17       40.93          .13      5157   7
acc988uzvjmmt DELETE                       MMON_SLAVE                                         SYS                             2          2       2.32       1.92       2.73         50          0         50                   1.42       35.33                   1255
6ajkhukk78nsr PL/SQL EXECUTE               MMON_SLAVE                                         SYS                            14         14       1.56         .5       2.63        100          0          0           2      26.73        1.38                    100
4d4gpy6vwqcyw SELECT                       SQL Developer                                      HR                              2          2       1.58        .66       2.49        100          0          0           2      12.09        2.24                    172
3wrrjm9qtr2my SELECT                       MMON_SLAVE                                         SYS                             4          4       1.79       1.14       2.26        100          0          0                  30.23       33.95                    320
0w26sk6t6gq98 SELECT                       MMON_SLAVE                                         SYS                             1          1       2.12       2.12       2.12        100          0          0           1       4.36           2                    249
6ajkhukk78nsr PL/SQL EXECUTE               SQL Developer                                      HR                              1          1       2.12       2.12       2.12        100          0          0           2      11.53        30.8          .27      1577  18
d2tvgg49y2ap6 SELECT                       MMON_SLAVE                                         SYS                             1          1       1.98       1.98       1.98        100          0          0           1      22.79        2.08                    217

SQL_ID        SQL_TYPE                     MODULE                                             PARSING_SCHEMA       DISTINCT_PHV EXEC_COUNT   ELAP_AVG   ELAP_MIN   ELAP_MAX    PCT_CPU   PCT_WAIT     PCT_IO MAX_TEMP_MB MAX_PGA_MB MAX_READ_MB MAX_WRITE_MB  MAX_RIOPS  MAX_WIOPS
------------- ---------------------------- -------------------------------------------------- -------------------- ------------ ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------- ---------- ----------- ------------ ---------- ----------
fuws5bqghb2qh SELECT                       MMON_SLAVE                                         SYS                             8          8       1.17        .27       1.96        100          0          0                   7.29         .11                      7
2n1wa7zpt48cg SELECT                       MMON_SLAVE                                         SYS                             1          1       1.91       1.91       1.91        100          0          0           1      52.23       16.51                    777
2afh4r7z1rfv6 INSERT                       MMON_SLAVE                                         SYS                             1          1       1.85       1.85       1.85        100          0          0                  11.17         .43                     35
1k5c5twx2xr01 INSERT                       DBMS_SCHEDULER                                     SYS                             1          1       1.85       1.85       1.85          0        100          0                   1.17
3xjw1ncw5vh27 SELECT                       DBMS_SCHEDULER                                     SYS                             4          4       1.28        .41       1.84        100          0          0                  12.73        1.46                    139
a6ygk0r9s5xuj SELECT                       MMON_SLAVE                                         SYS                             8          8       1.03         .3       1.83         75         13         13                   8.98         .27                     20
bkfnnm1unwz2b SELECT                       SQL Developer                                      HR                              1          1       1.83       1.83       1.83        100          0          0          25      26.23      189.17          .31      4607   8
15wvjr16nbyf9 SELECT                       MMON_SLAVE                                         SYS                             1          1       1.76       1.76       1.76        100          0          0           1      25.23        4.63                    363
3vg8wn9rtb8r6 SELECT                       DBMS_SCHEDULER                                     SYS                             1          1       1.73       1.73       1.73        100          0          0           2      27.73       34.94          .14      2437   6
c9umxngkc3byq SELECT                       MMON_SLAVE                                         SYS                             3          3       1.26        .89       1.73        100          0          0                    .86
gbb40ccatx69g SELECT                       sqlplus@karldevfedora (TNS V1-V3)                  SYS                             1          1       1.72       1.72       1.72        100          0          0                  31.11        1.13                     89

}}}





!! ash_elap_hist.sql (by elap > seconds filter)
https://github.com/karlarao/scripts/blob/master/performance/ash_elap_hist.sql
{{{

23:57:26 SYS@cdb1> @ash_elap_hist

DBA_HIST_ACTIVE_SESS_HISTORY - ash_elap by exec (recent)
~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for run_time_sec: 5
old  26: where run_time_sec > &run_time_sec
new  26: where run_time_sec > 5

SQL_ID        SQL_EXEC_ID SQL_PLAN_HASH_VALUE SQL_EXEC_START                 RUN_TIME_TIMESTAMP             RUN_TIME_SEC
------------- ----------- ------------------- ------------------------------ ------------------------------ ------------
2j1z0b4ptkqtb    16777216                   0 12-APR-21 12.19.03.000000 AM   +000000000 00:00:07.071               7.071
b9p45hkcx0pwh    16777216                   0 12-APR-21 10.58.07.000000 PM   +000000000 00:00:05.541               5.541


DBA_HIST_ACTIVE_SESS_HISTORY - ash_elap exec avg min max
~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for sql_id: 2j1z0b4ptkqtb

SQL_PLAN_HASH_VALUE   COUNT(*)        AVG        MIN        MAX
------------------- ---------- ---------- ---------- ----------
                  0          1       7.07       7.07       7.07
                             1       7.07       7.07       7.07

}}}




!! ash_elap_hist_user.sql (by parsing_schema)
https://github.com/karlarao/scripts/blob/master/performance/ash_elap_hist_user.sql
{{{

23:58:26 SYS@cdb1> @ash_elap_hist_user

DBA_HIST_ACTIVE_SESS_HISTORY - ash_elap by exec (recent)
~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for user: HR
old  21: and user_id in (select user_id from dba_users where upper(username) like upper('&&user'))
new  21: and user_id in (select user_id from dba_users where upper(username) like upper('HR'))

SQL_ID        SQL_EXEC_ID SQL_PLAN_HASH_VALUE SQL_EXEC_START                 RUN_TIME_TIMESTAMP             RUN_TIME_SEC
------------- ----------- ------------------- ------------------------------ ------------------------------ ------------
510myug0qnp5j    16777216                   0 06-APR-21 09.25.23.000000 PM   +000000000 00:00:00.331                .331
bkfnnm1unwz2b    16777216          2479479715 07-APR-21 01.19.13.000000 AM   +000000000 00:00:01.826               1.826
6ajkhukk78nsr    16777218                   0 11-APR-21 11.53.05.000000 PM   +000000000 00:00:02.124               2.124
5h7w8ykwtb2xt    16777433          4166561850 11-APR-21 11.53.36.000000 PM   +000000000 00:00:01.166               1.166
4d4gpy6vwqcyw    16777220          1820166347 12-APR-21 02.03.27.000000 AM   +000000000 00:00:02.494               2.494
4d4gpy6vwqcyw    16777223          1820166347 12-APR-21 02.51.13.000000 AM   +000000000 00:00:00.663                .663

6 rows selected.


DBA_HIST_ACTIVE_SESS_HISTORY - ash_elap exec avg min max
~~~~~~~~~~~~~~~~~~~~~~~~~

SQL_PLAN_HASH_VALUE   COUNT(*)        AVG        MIN        MAX
------------------- ---------- ---------- ---------- ----------
         2479479715          1       1.83       1.83       1.83
         1820166347          2       1.58        .66       2.49
         4166561850          1       1.17       1.17       1.17
                  0          2       1.23        .33       2.12
                             6       1.43        .33       2.49

}}}


! ash_elap as part of planx.sql (sqldb360)
https://github.com/karlarao/scripts/blob/master/performance/planx.sql
https://github.com/karlarao/sqldb360/blob/master/sql/planx.sql  (same as above)
* I customized planx.sql to include the ash_elap scripts for a better view of the recent elapsed time and performance of a SQL_ID
** temp, pga, read, write, riops, wiops by SQL_EXEC_ID
** avg, min, max by PHV (plan hash value)

{{{
@planx Y brfw9gfks2d37
}}}


{{{

SQL_ID: brfw9gfks2d37
SIGNATURE: 9196040709030148699
SIGNATUREF: 9196040709030148699
DB NAME: CDB1
INSTANCE NAME: cdb1
PDB NAME: cdb1

select min(hire_date), max(hire_date) from hr.employees


GV$SQL_PLAN_STATISTICS_ALL LAST (ordered by inst_id and child_number)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Inst: 1   Child: 0    Plan hash value: 1756381138

                      ---------------------------------------------------------------------------------
                      | Id  | Operation          | Name      | E-Rows |E-Bytes| Cost (%CPU)| E-Time   |
                      ---------------------------------------------------------------------------------
                      |   0 | SELECT STATEMENT   |           |        |       |     3 (100)|          |
                      |   1 |  SORT AGGREGATE    |           |      1 |     8 |            |          |
                      |   2 |   TABLE ACCESS FULL| EMPLOYEES |    107 |   856 |     3   (0)| 00:00:01 |
                      ---------------------------------------------------------------------------------

                      Query Block Name / Object Alias (identified by operation id):
                      -------------------------------------------------------------

                         1 - SEL$1
                         2 - SEL$1 / EMPLOYEES@SEL$1

                      Outline Data
                      -------------

                        /*+
                            BEGIN_OUTLINE_DATA
                            IGNORE_OPTIM_EMBEDDED_HINTS
                            OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
                            DB_VERSION('12.1.0.2')
                            OPT_PARAM('_optimizer_use_feedback' 'false')
                            ALL_ROWS
                            OUTLINE_LEAF(@"SEL$1")
                            FULL(@"SEL$1" "EMPLOYEES"@"SEL$1")
                            END_OUTLINE_DATA
                        */

                      Column Projection Information (identified by operation id):
                      -----------------------------------------------------------

                         1 - (#keys=0) MAX("HIRE_DATE")[7], MIN("HIRE_DATE")[7]
                         2 - (rowset=200) "HIRE_DATE"[DATE,7]

                      Note
                      -----
                         - Warning: basic plan statistics not available. These are only collected when:
                             * hint 'gather_plan_statistics' is used for the statement or
                             * parameter 'statistics_level' is set to 'ALL', at session or system level


GV$ACTIVE_SESSION_HISTORY - ash_elap by exec (recent)
~~~~~~~~~~~~~~~~~~~~~~~~~

SOURCE   SQL_ID        SQL_EXEC_ID SQL_PLAN_HASH_VALUE SQL_EXEC_START                 RUN_TIME_TIMESTAMP             RUN_TIME_SEC    TEMP_MB     PGA_MB    READ_MB   WRITE_MB      RIOPS      WIOPS
-------- ------------- ----------- ------------------- ------------------------------ ------------------------------ ------------ ---------- ---------- ---------- ---------- ---------- ----------
realtime brfw9gfks2d37    16777225          1756381138 12-APR-21 09.54.58.000000 PM   +000000000 00:00:00.483                .483          0       1.49         .2          0         19          0
realtime brfw9gfks2d37    16777285          1756381138 12-APR-21 09.55.08.000000 PM   +000000000 00:00:00.493                .493          0       1.49         .2          0         19          0

GV$ACTIVE_SESSION_HISTORY - ash_elap exec avg min max
~~~~~~~~~~~~~~~~~~~~~~~~~

SOURCE   SQL_PLAN_HASH_VALUE   COUNT(*)        AVG        MIN        MAX
-------- ------------------- ---------- ---------- ---------- ----------
realtime          1756381138          2        .49        .48        .49
                                      2        .49        .48        .49

SOURCE     SQL_PLAN_HASH_VALUE   COUNT(*)        AVG        MIN        MAX
---------- ------------------- ---------- ---------- ---------- ----------
                                        0

GV$ACTIVE_SESSION_HISTORY
~~~~~~~~~~~~~~~~~~~~~~~~~

SOURCE            SAMPLES  PERCENT TIMED_EVENT
-------- ---------------- -------- ----------------------------------------------------------------------
realtime               30    100.0 ON CPU

GV$ACTIVE_SESSION_HISTORY - by inst_id
~~~~~~~~~~~~~~~~~~~~~~~~~

SOURCE            SAMPLES  PERCENT    INST_ID TIMED_EVENT
-------- ---------------- -------- ---------- ----------------------------------------------------------------------
realtime               30    100.0          1 ON CPU

GV$ACTIVE_SESSION_HISTORY
~~~~~~~~~~~~~~~~~~~~~~~~~

         SAMPLES  PERCENT PLAN_HASH_VALUE  LINE_ID OPERATION                                          TIMED_EVENT
---------------- -------- --------------- -------- -------------------------------------------------- ----------------------------------------------------------------------
              13     43.3      1756381138        0                                                    ON CPU
              12     40.0      1475428744        0                                                    ON CPU
               2      6.7               0        0                                                    ON CPU
               1      3.3      1756381138        0 SELECT STATEMENT                                   ON CPU
               1      3.3      1756381138        1 SORT AGGREGATE                                     ON CPU
               1      3.3      1756381138        2 TABLE ACCESS FULL                                  ON CPU


GV$ACTIVE_SESSION_HISTORY
~~~~~~~~~~~~~~~~~~~~~~~~~

         SAMPLES  PERCENT PLAN_HASH_VALUE  LINE_ID OPERATION                                          CURRENT_OBJECT                                               TIMED_EVENT
---------------- -------- --------------- -------- -------------------------------------------------- ------------------------------------------------------------ ----------------------------------------------------------------------
              13     43.3      1756381138        0                                                    SERIAL -1                                                    ON CPU
              12     40.0      1475428744        0                                                    SERIAL -1                                                    ON CPU
               2      6.7               0        0                                                    SERIAL -1                                                    ON CPU
               1      3.3      1756381138        2 TABLE ACCESS FULL                                  SERIAL -1                                                    ON CPU
               1      3.3      1756381138        0 SELECT STATEMENT                                   SERIAL -1                                                    ON CPU
               1      3.3      1756381138        1 SORT AGGREGATE                                     SERIAL -1                                                    ON CPU


GV$ACTIVE_SESSION_HISTORY - px distribution
~~~~~~~~~~~~~~~~~~~~~~~~~

SQL_EXEC_START                 SQL_EXEC_ID SQL_PLAN_HASH_VALUE SQL_PLAN_LINE_ID DOP        PROGRAM                                                        COUNT(*)
------------------------------ ----------- ------------------- ---------------- ---------- ------------------------------------------------------------ ----------
12-APR-21 09.54.58.000000 PM      16777225          1756381138                1 SERIAL     sqlplus@karldevfedora (TNS V1-V3)                                     1
12-APR-21 09.55.08.000000 PM      16777285          1756381138                2 SERIAL     sqlplus@karldevfedora (TNS V1-V3)                                     1

}}}




! ash elap explained 


!! elapsed time (wall clock) using ASH



http://dboptimizer.com/2011/05/04/sql-execution-times-from-ash/
http://dboptimizer.com/2011/05/06/sql-timings-for-ash-ii/
http://dboptimizer.com/2011/05/06/sql-ash-timings-iii/

<<<
* this is a pretty awesome way of characterizing the response times of SQLs. Another way of doing this is through a 10046 trace and the Mr. Tools (Method R) utilities, and there are so many things you can do with both. Another thing I'm interested in (although not related to this tiddler) is getting the IO size distribution from the 10046 trace alongside the data coming from ASH, which basically pulls the p1,p2,p3 values of the IO events.
<<<
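The core idea behind these posts: ASH samples each active session roughly once per second (v$ / realtime) or keeps one-in-ten samples (dba_hist), so max(sample_time - sql_exec_start) per SQL_EXEC_ID approximates the wall-clock elapsed time of each execution, accurate to about one sample interval. A toy Python sketch of that approximation (not from the posts, just illustrative; the sample intervals are assumptions):

```python
from datetime import datetime, timedelta

def ash_elapsed_estimate(sql_exec_start, true_elapsed_sec, sample_interval_sec=1.0):
    """Simulate ASH sampling: return max(sample_time - sql_exec_start)
    over the samples that land while the execution is still active."""
    samples = []
    t = 0.0
    while t <= true_elapsed_sec:
        samples.append(sql_exec_start + timedelta(seconds=t))
        t += sample_interval_sec
    last_sample = max(samples)
    return (last_sample - sql_exec_start).total_seconds()

start = datetime(2011, 5, 5, 18, 12, 25)
# a 114.575s execution sampled every 1s is reported as ~114s;
# with a 10s interval (dba_hist-style) it drops to ~110s
est = ash_elapsed_estimate(start, 114.575)
```

This is why the ASH-derived numbers track the spooled `Elapsed:` times closely for long-running SQL, but lose precision (or miss executions entirely) for sub-second statements.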


{{{
[oracle@oel5-11g bin]$ cat ash_test.sh
export DATE=$(date +%Y%m%d%H%M%S%N)

sqlplus "/ as sysdba" <<EOF
set timing on
set echo on
spool all_nodes_full_table_scan_$DATE.log

select /* ash_elapsed */ * from
(select owner, object_name from karltest
where owner = 'SYSTEM'
and object_type = 'TABLE'
union
select owner, object_name from karltest
where owner = 'SYSTEM'
and object_type = 'INDEX')
order by object_name
/

spool off
exit
EOF
[oracle@oel5-11g bin]$
[oracle@oel5-11g bin]$ cat loadtest.sh
(( n=0 ))
while (( n<$1 ));do
(( n=n+1 ))
sh ash_test.sh &
done
}}}

{{{
[oracle@oel5-11g bin]$ ls -ltr
total 1468
-rwxr-xr-x 1 oracle oinstall    107 Apr 23 08:21 startdb.sh
-rwxr-xr-x 1 oracle oinstall    118 Apr 23 08:21 stopdb.sh
-rw-r--r-- 1 oracle oinstall 127675 May  5 18:12 all_nodes_full_table_scan_20110505181225583938000.log
-rw-r--r-- 1 oracle oinstall 127675 May  5 18:17 all_nodes_full_table_scan_20110505181508275739000.log
-rw-r--r-- 1 oracle oinstall 127675 May  5 18:17 all_nodes_full_table_scan_20110505181508273773000.log
-rw-r--r-- 1 oracle oinstall 127675 May  5 18:17 all_nodes_full_table_scan_20110505181508273060000.log
-rw-r--r-- 1 oracle oinstall 127675 May  5 18:17 all_nodes_full_table_scan_20110505181508269189000.log
-rw-r--r-- 1 oracle oinstall 127675 May  5 18:17 all_nodes_full_table_scan_20110505181508265790000.log
-rw-r--r-- 1 oracle oinstall 127675 May  5 18:17 all_nodes_full_table_scan_20110505181508262532000.log
-rw-r--r-- 1 oracle oinstall 127675 May  5 18:17 all_nodes_full_table_scan_20110505181508259253000.log
-rw-r--r-- 1 oracle oinstall 127675 May  5 18:17 all_nodes_full_table_scan_20110505181508256596000.log
-rw-r--r-- 1 oracle oinstall 127675 May  5 18:17 all_nodes_full_table_scan_20110505181508251337000.log
-rw-r--r-- 1 oracle oinstall 127675 May  5 18:17 all_nodes_full_table_scan_20110505181508245849000.log
-rw-r--r-- 1 oracle oinstall     64 May  5 19:23 loadtest.sh
-rw-r--r-- 1 oracle oinstall    397 May  5 19:23 ash_test.sh
}}}

{{{
[oracle@oel5-11g bin]$ cat *log | grep Elapsed
Elapsed: 00:00:15.00
Elapsed: 00:02:00.41
Elapsed: 00:02:00.10
Elapsed: 00:02:00.03
Elapsed: 00:02:00.15
Elapsed: 00:02:00.32
Elapsed: 00:02:00.08
Elapsed: 00:02:00.20
Elapsed: 00:01:59.99
Elapsed: 00:02:00.31
Elapsed: 00:02:00.11
}}}

{{{
 SELECT /* example */ substr(sql_text, 1, 80) sql_text,
           sql_id, 
	    hash_value, address, child_number, plan_hash_value, FIRST_LOAD_TIME
      FROM v$sql
     WHERE 
	--sql_id = '6wps6tju5b8tq'
	-- hash_value = 1481129178
	sql_text LIKE '%ash_elapsed%'
       AND sql_text NOT LIKE '%example%' 
      order by first_load_time; 

SQL_TEXT                                                                                                                                                            SQL_ID        HASH_VALUE ADDRESS           CHILD_NUMBER PLAN_HASH_VALUE FIRST_LOAD_TIME
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------- ---------- ---------------- ------------ --------------- ----------------------------------------------------------------------------
select /* ash_elapsed */ * from (select owner, object_name from karltest where o                                                                                    gy6j5kg641saa 3426804042 000000006C523480             0      1959977140 2011-05-05/18:12:25
}}}


{{{
select sql_id, 
      run_time run_time_timestamp, 
 (EXTRACT(HOUR FROM run_time) * 3600
                    + EXTRACT(MINUTE FROM run_time) * 60 
                    + EXTRACT(SECOND FROM run_time)) run_time_sec
from  (
select 
       sql_id,
       max(sample_time - sql_exec_start) run_time 
from 
       dba_hist_active_sess_history 
where
       sql_exec_start is not null 
group by sql_id,SQL_EXEC_ID
order by sql_id 
)
-- where rownum < 100
where sql_id = 'gy6j5kg641saa'
order by sql_id, run_time desc
/

SQL_ID        RUN_TIME_TIMESTAMP                                                          RUN_TIME_SEC
------------- --------------------------------------------------------------------------- ------------
gy6j5kg641saa +000000000 00:01:54.575                                                          114.575
gy6j5kg641saa +000000000 00:01:54.575                                                          114.575
gy6j5kg641saa +000000000 00:01:54.575                                                          114.575
gy6j5kg641saa +000000000 00:01:54.575                                                          114.575
gy6j5kg641saa +000000000 00:01:54.575                                                          114.575
gy6j5kg641saa +000000000 00:01:54.575                                                          114.575
gy6j5kg641saa +000000000 00:01:54.575                                                          114.575
gy6j5kg641saa +000000000 00:01:54.575                                                          114.575
gy6j5kg641saa +000000000 00:01:53.575                                                          113.575
gy6j5kg641saa +000000000 00:01:53.575                                                          113.575
gy6j5kg641saa +000000000 00:00:11.052                                                           11.052

11 rows selected.
}}}

{{{
select sql_id,  
		count(*),
        round(avg(EXTRACT(HOUR FROM run_time) * 3600
                    + EXTRACT(MINUTE FROM run_time) * 60 
                    + EXTRACT(SECOND FROM run_time)),2) avg , 
        round(min(EXTRACT(HOUR FROM run_time) * 3600
                    + EXTRACT(MINUTE FROM run_time) * 60 
                    + EXTRACT(SECOND FROM run_time)),2) min , 
        round(max(EXTRACT(HOUR FROM run_time) * 3600
                    + EXTRACT(MINUTE FROM run_time) * 60 
                    + EXTRACT(SECOND FROM run_time)),2) max 
from  (
        select 
               sql_id,
               max(sample_time - sql_exec_start) run_time
        from 
               dba_hist_active_sess_history 
        where
               sql_exec_start is not null 
               and sql_id = 'gy6j5kg641saa'
        group by sql_id,SQL_EXEC_ID
        order by sql_id 
       )
-- where rownum < 100
group by sql_id
order by avg desc
/

SQL_ID          COUNT(*)      AVG        MIN        MAX
------------- ---------- -------- ---------- ----------
gy6j5kg641saa         11  104.980      11.05     114.58

}}}


-- Also verify the data points and avg min max in Excel 

[img[picturename| https://lh6.googleusercontent.com/_F2x5WXOJ6Q8/TcKKM6OwQNI/AAAAAAAABQQ/6AunDw4VDvI/avgminmax.png]]
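The same cross-check can be done in a few lines of Python instead of Excel, using the 11 RUN_TIME_SEC data points returned above for gy6j5kg641saa:

```python
# RUN_TIME_SEC values from the dba_hist query above (one per SQL_EXEC_ID)
run_times = [114.575] * 8 + [113.575] * 2 + [11.052]

avg = round(sum(run_times) / len(run_times), 2)  # matches the AVG column: 104.98
mn, mx = min(run_times), max(run_times)          # 11.052 and 114.575
```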


{{{
SQL> select count(*) from karltest;

  COUNT(*)
----------
   2215968


SQL> insert into karltest select * from dba_objects;

69249 rows created.

Elapsed: 00:00:00.86
SQL> commit;

Commit complete.

SQL> select count(*) from karltest;

  COUNT(*)
----------
     69249



[oracle@oel5-11g bin]$ cat *log | grep Elapsed
Elapsed: 00:00:00.67
Elapsed: 00:00:00.35
Elapsed: 00:00:01.16
Elapsed: 00:00:00.33
Elapsed: 00:00:00.35
Elapsed: 00:00:00.31
Elapsed: 00:00:00.31
Elapsed: 00:00:01.32
Elapsed: 00:00:00.34
Elapsed: 00:00:00.31



 SELECT /* example */ substr(sql_text, 1, 80) sql_text,
           sql_id, 
	    hash_value, address, child_number, plan_hash_value, FIRST_LOAD_TIME
      FROM v$sql
     WHERE 
	--sql_id = '6wps6tju5b8tq'
	-- hash_value = 1481129178
	sql_text LIKE '%ash_elapsed2%'
       AND sql_text NOT LIKE '%example%' 
      order by first_load_time; 



SQL>  SELECT /* example */ substr(sql_text, 1, 80) sql_text,
  2             sql_id,
  3         hash_value, address, child_number, plan_hash_value, FIRST_LOAD_TIME
  4        FROM v$sql
  5       WHERE
  6     --sql_id = '6wps6tju5b8tq'
  7     -- hash_value = 1481129178
  8     sql_text LIKE '%ash_elapsed2%'
  9         AND sql_text NOT LIKE '%example%'
      order by first_load_time;  10

SQL_TEXT                                                                                                                                                            SQL_ID        HASH_VALUE ADDRESS           CHILD_NUMBER PLAN_HASH_VALUE FIRST_LOAD_TIME
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------- ---------- ---------------- ------------ --------------- ----------------------------------------------------------------------------
select /* ash_elapsed2 */ * from (select owner, object_name from karltest where                                                                                     4bkcftyvj2j6p 3071362261 000000006C776858             0      1959977140 2011-05-05/19:59:58


SQL> BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
END;
/  2    3    4

PL/SQL procedure successfully completed.



SQL> select sql_id,
  2        run_time run_time_timestamp,
  3   (EXTRACT(HOUR FROM run_time) * 3600
                    + EXTRACT(MINUTE FROM run_time) * 60
  4    5                      + EXTRACT(SECOND FROM run_time)) run_time_sec
  6  from  (
  7  select
  8         sql_id,
  9         max(sample_time - sql_exec_start) run_time
 10  from
 11         dba_hist_active_sess_history
 12  where
 13         sql_exec_start is not null
 14  group by sql_id,SQL_EXEC_ID
order by sql_id
 15   16  )
-- where rownum < 100
 17   18  where sql_id = '4bkcftyvj2j6p'
 19  order by sql_id, run_time desc
/ 20

no rows selected
}}}



!! Making use of STDDEV on elapsed time

This gets the avg, min, max, and stddev of elapsed time over a specific time window, then drills down further with a join on dba_hist_active_sess_history using particular filters (module, user, etc.)

{{{

-- CREATE A TEMP TABLE THAT SHOWS AVG,MIN,MAX,STDDEV RESPONSE TIME OF SQLS    
define begin='03/08/2012 14:40'
define end='03/08/2012 14:45'


SYS@fsprd2> create table karl_sql_id2 as
select sql_id,
  2    3                  count(*) count,
  4          round(avg(EXTRACT(HOUR FROM run_time) * 3600
  5                      + EXTRACT(MINUTE FROM run_time) * 60
  6                      + EXTRACT(SECOND FROM run_time)),2) avg ,
  7          round(min(EXTRACT(HOUR FROM run_time) * 3600
  8                      + EXTRACT(MINUTE FROM run_time) * 60
  9                      + EXTRACT(SECOND FROM run_time)),2) min ,
10          round(max(EXTRACT(HOUR FROM run_time) * 3600
11                      + EXTRACT(MINUTE FROM run_time) * 60
12                      + EXTRACT(SECOND FROM run_time)),2) max,
13          round(stddev(EXTRACT(HOUR FROM run_time) * 3600
14                      + EXTRACT(MINUTE FROM run_time) * 60
15                      + EXTRACT(SECOND FROM run_time)),2) stddev
16  from  (
17          select
18                 sql_id,
19                 max(sample_time - sql_exec_start) run_time
20          from
21                 dba_hist_active_sess_history
22          where
23                 sql_exec_start is not null
24                                         and sample_time
25                                         between to_date('&begin', 'MM/DD/YY HH24:MI:SS')
26                                         and to_date('&end', 'MM/DD/YY HH24:MI:SS')
27          group by sql_id,SQL_EXEC_ID
28          order by sql_id
29         )
30  group by sql_id
31  order by avg desc
32  /

Table created.


define _start_time='03/08/2012 14:40'
define _end_time='03/08/2012 14:45'


SYS@fsprd2> select * from karl_sql_id2
where sql_id in
  2    3        (select sql_id from
  4     dba_hist_active_sess_history
  5     where sample_time
  6                                            between to_date('&_start_time', 'MM/DD/YY HH24:MI')
  7                                            and to_date('&_end_time', 'MM/DD/YY HH24:MI')
  8     and lower(module) like 'ex_%')
  9  order by stddev asc;

SQL_ID             COUNT        AVG        MIN        MAX     STDDEV
------------- ---------- ---------- ---------- ---------- ----------
aadkvg74cknvc          1         .8         .8         .8          0
c96tdmv2wu0mb          1        .81        .81        .81          0
03zk40yazk2cj          1        .81        .81        .81          0
89s2kmgjcyg08          1       1.96       1.96       1.96          0
cb5gq5xu04sbb          3        2.6       1.92       3.93       1.15
991y15af5jxx9          5       2.07        .96       5.93       2.16
c2fn0swka653f          6      18.94       9.99      28.99       7.28

7 rows selected.

}}}
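As a sanity check on the STDDEV column: Oracle's STDDEV is the sample standard deviation (n-1 denominator, and 0 for a single row, which is why the one-execution SQLs above all show 0). Taking cb5gq5xu04sbb from the output (3 executions, avg 2.6, min 1.92, max 3.93), the unseen middle value is not shown, but the AVG implies it is about 1.95; under that assumption the stddev reproduces in Python:

```python
import statistics

# run times for cb5gq5xu04sbb: min 1.92, max 3.93; the third value is
# not in the output, but AVG 2.6 over 3 rows implies 7.8 - 5.85 = 1.95
run_times = [1.92, 1.95, 3.93]

avg = round(statistics.mean(run_times), 2)   # 2.6, matching AVG
# statistics.stdev uses the n-1 denominator, like Oracle's STDDEV
sd = round(statistics.stdev(run_times), 2)   # 1.15, matching STDDEV
```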




.

First I've set up my own mail server (where my DNS, NTP, and Samba are also hosted in one VM)... [[R&D Mail Server]]  http://www.evernote.com/shard/s48/sh/799368fe-07f0-4ebf-8a92-8b295e9bcf0d/61f0bb8e887507684925fad01d3f9245

Setup Email Notification
http://www.evernote.com/shard/s48/sh/a0869438-b44d-4b39-a280-c138dc21ac84/48be976fcc4fc894e8713d261cfc644a

tablespacealerts and repvfy install
http://www.evernote.com/shard/s48/sh/9568bb0c-c65b-482f-903b-b4b792e5f927/4745645ebf375d8abc950ca3f059dc3a

tablespacealerts-fixdbtimezone (I don't think you have to deal with this)
http://www.evernote.com/shard/s48/sh/9520da28-d89d-4b63-adbd-04b0cb4d819e/cfaa06dbdf41046a0597694180d66c43


''related notes''
RAC Metrics: Unable to get E-mail Notification for some metrics against Cluster Databases (Doc ID 403886.1)


! capture 
{{{
Kindly send the 3 files to the customer and run them on each of the databases that will be consolidated. The readme.txt shows the steps 
$ ls
esp_collect-master.zip  readme.txt  run_awr-quickextract-master.zip


Please execute the following for each database as sysdba; for RAC, just run the scripts on the 1st node

-- edb360-master
----------------------------------------
 $ unzip edb360-master.zip
 $ cd edb360-master
 $ sqlplus / as sysdba
 $ @edb360.sql T 31

-- esp_collect-master
----------------------------------------
 $ unzip esp_collect-master.zip
 $ cd esp_collect-master
 $ sqlplus / as sysdba
 $ @sql/resources_requirements.sql
 $ @sql/esp_collect_requirements.sql
 $ mv res_requirements_<hostname>.txt res_requirements_<hostname>_<databasename>.txt
 $ mv esp_requirements_<hostname>.csv esp_requirements_<hostname>_<databasename>.csv
 $ cat /proc/cpuinfo | grep -i name | sort | uniq >> cpuinfo_model_name.txt
 $ zip esp_output.zip res_requirements_*.txt esp_requirements_*.csv cpuinfo_model_name.txt

-- run_awr-quickextract
----------------------------------------
 $ unzip run_awr-quickextract-master.zip
 $ cd run_awr-quickextract-master
 $ sqlplus / as sysdba
 $ @run_all.sql
 $ zip run_awr_output.zip *tar

}}}

! quick howto 
{{{
0) pull the esp
1) concatenate the est files 
2) create the client 
3) load the esp file, associate the file to the client
4) check the summary
5) admin -> client -> click on pencil
6) check on host tab -> edit the specint
7) plan is implicitly created 
8) click on report
9) click config per plan
	stack
	edit the params
}}}


http://en.wikipedia.org/wiki/The_Open_Group_Architecture_Framework

1) Set up Yum and install the following RPMs
yum install curl compat-libstdc++-33 glibc nspluginwrapper

2) Download the Flash Player RPM
http://get.adobe.com/flashplayer/

rpm -ivh flash-plugin.rpm

3) Close Firefox and restart it


http://www.flashconf.com/how-to/how-to-install-flash-player-on-centosredhat-linux/


! on 6.5 
http://www.sysads.co.uk/2014/01/install-adobe-flash-player-11-2-centosrhel-6-5/

-- FAQ
Enterprise Manager Database Console FAQ (Doc ID 863631.1)
Master Note for Grid Control 11.1.0.1.0 Installation and Upgrade [ID 1067438.1]     <-- MASTER NOTE

Oracle Support Master Note for 10g Grid Control OMS Performance Issues (Doc ID 1161003.1)
http://blogs.oracle.com/db/2010/09/oracle_support_master_note_for_10g_grid_control_oms_performance_issues_doc_id_11610031_1.html


-- INSTALLATION 10gR2

Doc ID: 763351.1 Documentation Reference for Grid Control 10.2.0.5.0 Installation and Upgrade
Note 412431.1 - Oracle Enterprise Manager 10g Grid Control Certification Checker
Note 464674.1 - Checklist for EM 10g Grid Control 10.2.x to 10.2.0.4/10.2.0.5 OMS and Repository Upgrades
Note 784963.1 - How to Install Grid Control 10.2.0.5.0 on Enterprise Linux 5 Using the Existing Database (11g) Option
Note 793870.1 - How to Install Grid Control 10.2.0.5.0 on Enterprise Linux 4 Using the Existing Database (11g) Option
Note 604520.1 - How to Install Grid Control 10.2.0.4.0 with an Existing (10.2.X.X/11.1.0.6) Database using the Software-only Option
Doc ID: 467677.1 How to Install Grid Control 10.2.0.4.0 to use an 11g Database for the Repository
Doc ID: 780836.1 How to Install Grid Control 10.2.0.5.0 on Enterprise Linux 5 Using the New Database Option	<-- got from Jeff Hunter

-- INSTALLATION 11g

Enterprise Manager Grid Control and Database Control Certification with 11g R2 Database [ID 1266977.1]
11g Grid Control: 11.2.0.1 Database Containing Grid Control Repository Generates Core Dump with ORA-07445 Error [ID 1305569.1]
Checklist for EM 10g Grid Control 10.2.0.4/10.2.0.5 to 11.1.0.1.0 OMS and Repository Upgrades [ID 1073166.1]
Grid Control 11g: How to Install 11.1.0.1.0 on OEL5.3 x86_64 with a 11.1.0.7.0 Repository Database [ID 1064495.1]
Grid Control 11g install fails at OMS configuration stage - Wrong Weblogic Server version used. [ID 1135493.1]
http://kkempf.wordpress.com/2010/05/08/em-11g-grid-control-install/
http://www.ora-solutions.net/papers/HowTo_Installation_GridControl_11g_RHEL5.pdf
http://gavinsoorma.com/2010/10/11g-grid-control-installation-tips-and-solutions/
http://www.emarcel.com/myblog/44-oraclearticles/136-installingoem11gr1
http://www.oracle-wiki.net/startdocsgridcontrollinuxagentinstall11gmanual
http://www.oracle-wiki.net/startdocsgridcontrollinuxagentinstall11g
http://www.oracle-wiki.net/startdocsgridcontrolpostimplementation11g#toc9
http://www.oracle-wiki.net/startdocshowtobuildgridcontrol11101
http://download.oracle.com/docs/cd/E11857_01/install.111/e16847/install_agent_on_clstr.htm#CHDHEBFE   <-- official doc
https://forums.oracle.com/forums/thread.jspa?threadID=2244102
http://www.gokhanatil.com/2011/08/how-to-deploy-em-grid-control-11g-agent.html <-- on windows
Installing Enterprise Manager Grid Control Fails with Error 'OUI-10133 Invalid staging area' [ID 443513.1] <-- staging
11g Grid Control: Details of the Directory Structure and Commonly Used Locations in a 11g OMS Installation [ID 1276554.1]  <-- the detailed directory structure




-- OEM MAA
MAA home page http://www.oracle.com/technetwork/database/features/availability/em-maa-155389.html
http://docs.oracle.com/cd/E11857_01/em.111/e16790/part3.htm#sthref1164   <-- four levels of HA
http://docs.oracle.com/cd/E11857_01/em.111/e16790/ha_single_resource.htm#CHDEHBEG  <-- single resource config
http://docs.oracle.com/cd/E11857_01/em.111/e16790/ha_multi_resource.htm#BABDAJEE <-- multiple resource config
http://blogs.oracle.com/db/entry/oracle_support_master_note_for_configuring_10g_grid_control_components_for_high_availability  <-- collection of MOS notes for OEM HA
Enterprise Manager Community: Four Stages to MAA in Grid Control [ID 985082.1]
How To Configure Enterprise Manager for High Availability [ID 330072.1]






-- TROUBLESHOOTING
Files Needed for Troubleshooting an EM 10G Service Request if an RDA is not Available [ID 405755.1]
How to Run the RDA against a Grid Control Installation [ID 1057051.1]
Files to Upload for an Enterprise Manager Grid Control 10g Service Request [ID 377124.1]





-- CONSOLE, WEBSITE

Differences Between Oracle Enterprise Manager Console and Oracle Enterprise Manager Web Site
  	Doc ID: 	Note:222667.1


-- DATABASE CONTROL

278100.1 drop recreate dbconsole
Master Note for Enterprise Manager Configuration Assistant (EMCA) in Single Instance Database Environment [ID 1099271.1]



-- GRID CONTROL

Comparison Between the Database Healthcheck and Database Response Metrics
  	Doc ID: 	Note:469227.1
  	
Overview Comparison of EM 9i to EM10g Features
  	Doc ID: 	Note:277066.1
  	
EM 10gR2 GRID Control Release Notes (10.2.0.1.0)
  	Doc ID: 	Note:356236.1
  	
OCM: Software Configuration Manager (SCM formerly known as MCP): FAQ and Troubleshooting for Oracle Configuration Manager (OCM)
  	Doc ID: 	Note:369619.1
  	
Enterprise Manager Grid Control 10g (10.1.0) Frequently Asked Questions
  	Doc ID: 	Note:273579.1 	
  	
Differences Between Oracle Enterprise Manager Console and Oracle Enterprise Manager Web Site
  	Doc ID: 	Note:222667.1
  	
Where Are The Tuning Pack Advisors For A 9i DB Within 10g Gc Control?
  	Doc ID: 	Note:299729.1
  	
Grid Control Reports FAQ
  	Doc ID: 	Note:460894.1
  	
How do you display performance data for a period greater than 31 days in Enterprise Manager
  	Doc ID: 	Note:363880.1
  	
Enterprise Manager DST Quick Fix Guide
  	Doc ID: 	Note:418792.1
  	
What can you patch using Grid Control?
  	Doc ID: 	Note:457979.1
  	
How To Discover RAC Listeners Started On VIPs In Grid Control
  	Doc ID: 	Note:461420.1
  	
EM2GO (Enterprise Manager Grid Control 10G) Frequently Asked Questions
  	Doc ID: 	Note:400193.1



How To Access Advisor Central for 9i Target Databases in Grid Control
  	Doc ID: 	Note:332971.1

Frequently Asked Questions (FAQ) for EM Tuning Pack 9i
  	Doc ID: 	Note:169548.1

Where Are The Tuning Pack Advisors For A 9i DB Within 10g Gc Control?
  	Doc ID: 	Note:299729.1

Frequently Asked Questions (FAQ) for the EM Diagnostics Pack 9i
  	Doc ID: 	Note:169551.1




-- ISSUES/BUGS

Note 387212.1 - How to Locate the Installation Logs for Grid Control 10.2.0.x

10.2 Grid Agent Can Break RAID Mirroring and Cause Hard Disk To Go Offline
      Doc ID:     454647.1

Known Issues: When Installing Grid Control Using Existing Database Which Is Configured With ASM
  	Doc ID: 	738445.1

Doc ID: 787872.1 Grid Control 10.2.0.5.0 Known Issues

Files to Upload for an Enterprise Manager Grid Control 10g Service Request
  	Doc ID: 	377124.1

Database Control Status Of Db Instance Is Unmounted (Doc ID 550712.1)

Problem: Database Status Unavailable in Grid Control with Metric Collection Error (Doc ID 340158.1)

Database Control Showing Database Status as Currently Unavailable. Connect via sqlplus is successfull. (Doc ID 315299.1)

Grid Control shows Database Status as Unmounted on the db Homepage, but the Database is actually Open (Doc ID 1094524.1)

PROBLEM: Top Activity Page Fails With Error "Java.Sql.Sqlexception: Unknown Host Specified" In Grid Control 11.1 [ID 1183783.1]   <-- issue we had on exadata



-- METRICS

How to - Disable the Host Storage Metric on Multiple Hosts using an Enterprise Manager Job
      Doc ID:     560905.1

Troubleshooting guide to remove old warning and critical alerts from grid console
  	Doc ID: 	806052.1

Note 748630.1 - How to clear an Alert in Enterprise Manager Grid Control

Warning Alerts Still Reported for Metrics That Have Been Disabled
  	Doc ID: 	744115.1

Understanding Oracle 10G - Server Generated Alerts
  	Doc ID: 	266970.1



-- ''EMAIL NOTIFICATIONS''
Problem - RDBMS metrics, e.g. Tablespace Full (%), not clearing in Grid Control even though they are no longer present in dba_outstanding_alerts [ID 455222.1]
Understanding Oracle 10G - Server Generated Alerts [ID 266970.1]
How to set up dbconsole to send email notifications for a metric alert (eg. tablespace full) [ID 1266924.1]
Configuring Email Notification Method in EM - Steps and Troubleshooting [ID 429426.1]
New Features for Notifications in 10.2.0.5 Enterprise Manager Grid Control [ID 813399.1]
How to Add/Update Email Addresses and Configure a Notification Schedule in Grid Control ? [ID 438150.1]
How to Test an SMTP Mail Gateway From a Command Line Interface [ID 74269.1]
Grid Control and SMTP Authentication [ID 429836.1]
What are Short and Long Email Formats in Enterprise Manager Notifications? [ID 429292.1]
How To Configure Notification Rules in Enterprise Manager Grid Control? [ID 429422.1]
Configuring SNMP Trap Notification Method in EM - Steps and Troubleshooting [ID 434886.1]
Configuring Notifications for Job Executions in Enterprise Manager [ID 414409.1]
What are the Packs required for using the Notifications feature from Grid Control? [ID 552788.1]
How to Enable 'Repeat Notifications' from Enterprise Manager Grid Control? [ID 464847.1]
How to Troubleshoot Notifications That Are Hung / Stuck and Not Being Sent from EM 10g [ID 285093.1]
http://download.oracle.com/docs/cd/B16240_01/doc/em.102/b25986/oracle_database.htm#sthref1176
EMDIAG REPVFY Kit - Download, Install/De-Install and Upgrade [ID 421499.1]
All Email Notifications Arrive With 1 Hour Delay In Grid control [ID 413718.1]
EMDIAG Master Index [ID 421053.1]
EMDIAG REPVFY Kit - How to Use the Repository Diagnostics [ID 421563.1]
EMDIAG REPVFY Kit - Environment Variables [ID 421586.1]
EMDIAG REPVFY Kit - How to Configure the 'repvfy.cfg' File [ID 421600.1]
change dbtimezone https://forums.oracle.com/forums/thread.jspa?threadID=572038
Timestamps & time zones - Frequently Asked Questions [ID 340512.1]
http://practicaloracle.blogspot.com/2007/10/oracle-enterprise-manager-10g.html

Note: 271367.1 - Oracle Alert Alert Check Setup Test
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=271367.1
Note: 577392.1 - How To Check Oracle Alert Setup?
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=577392.1
Note: 75030.1 - Troubleshooting Oracle Alert on NT
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=75030.1
Note: 152687.1 - How to Troubleshoot E-mail and Alerts
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=152687.1
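A common first step for the notification problems above (per the approach in Note 74269.1) is testing the SMTP gateway from the command line. A minimal sketch of the raw SMTP dialogue; the gateway host and addresses below are hypothetical placeholders, and in practice you paste these lines interactively into a telnet session to port 25:

```shell
# Sketch only: the SMTP command sequence you'd type into "telnet smtp.example.com 25".
# smtp.example.com and the addresses are placeholders, not from these notes.
cat <<'EOF' > /tmp/smtp_test.txt
HELO myhost.example.com
MAIL FROM: <oracle@myhost.example.com>
RCPT TO: <dba@example.com>
DATA
Subject: EM notification test
test message sent from the command line
.
QUIT
EOF
# a 2xx/354-series reply after each command means the gateway accepts the relay
wc -l < /tmp/smtp_test.txt
```

If the gateway requires authentication (see Note 429836.1 above), this plain dialogue will typically be rejected at MAIL FROM.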





-- ENTERPRISE MANAGER
How to: Grant Access to non-DBA users to the Database Performance Tab
  	Doc ID: 	455191.1



-- DISCOVERY

How to Troubleshoot EM Discovery Problems Caused by the Intelligent Agent Setup
  	Doc ID: 	Note:166935.1



-- TUTORIAL

Centrally Managing Your Enterprise Environment With EM 10g Grid Control - Oracle by Example Lesson
  	Doc ID: 	277090.1



-- LINUX PACK

Un-Install / Rollback of RPM's on Linux OS from Enterprise Manager
  	Doc ID: 	436535.1

Patching Linux Hosts through Deployment Procedure from Enterprise Manager
  	Doc ID: 	436485.1



-- MIGRATION, DB2GC (starting 10.2.0.3)

Migrate targets from DB Control to Grid Control - db2gc
  	Doc ID: 	605578.1

How To Move the Grid Control Repository Using an Inconsistent (Hot) Database Backup
  	Doc ID: 	602955.1




-----------------------------------------------------------------------------------------------------------------------------------------------



Oracle Created Database Users: Password, Usage and Files References
  	Doc ID: 	Note:160861.1
  	
How to change the password of the 10g database user sysman
  	Doc ID: 	Note:259379.1
  	
How to change the password of the 10g database user dbsnmp
  	Doc ID: 	Note:259387.1 	
  	



How to Start the Central Management Agent on an AS Instance Host
  	Doc ID: 	Note:297727.1
  	
Understanding Network Address Translation in EM 10g Grid Control
  	Doc ID: 	Note:299595.1
  	
How To Find Agents with time-skew problems
  	Doc ID: 	Note:359524.1
  	
10gR2 - Where is the Management Services & Repository Monitoring Page in Grid Control
  	Doc ID: 	Note:356795.1
  	
Problem: Performance: Agent High CPU Consumption
  	Doc ID: 	Note:361612.1
  	
How To Perform Periodic Maintenance and Improve Performance of Grid Control Repository
  	Doc ID: 	Note:387957.1 	
  	
How to Start and Stop Enterprise Manager Components
  	Doc ID: 	Note:298991.1
  	
How to: Add The Domain Name Of The Host To Name Of The Agent
  	Doc ID: 	Note:295949.1
  	
Understanding the Enterprise Manager Management Agent 10g 'emd.properties' File
  	Doc ID: 	Note:235290.1 	
  	
Understanding Oracle Enterprise Manager 10g Agent Resource Consumption
  	Doc ID: 	Note:375509.1
  	
Understanding the Enterprise Manager 10g Grid Control Management Agent
  	Doc ID: 	Note:234872.1
  	
How To Install A Grid Management Agent On 10g Rac Cluster And On Single Node
  	Doc ID: 	Note:309635.1
  	
How do you display performance data for a period greater than 31 days in Enterprise Manager
  	Doc ID: 	Note:363880.1
  	
How to Restrict access for EM Database Control only from Specific Hosts / IPs
  	Doc ID: 	Note:438493.1
  	
How Do You Configure An Agent After Hostname Change?
  	Doc ID: 	Note:423565.1
  	
Problem: Config: Why Does The Em-Application.Log Grow So Large?
  	Doc ID: 	Note:403525.1
  	
Files Needed for Troubleshooting an EM 10G Service Request if an RDA is not Available
  	Doc ID: 	Note:405755.1
  	
Files to Upload for an Enterprise Manager Grid Control 10g Service Request
  	Doc ID: 	Note:377124.1
  	
HOW TO: give a user only read only access of Enterprise Manager Database Control
  	Doc ID: 	Note:465520.1
  	
The dbconsole fails to start after a change in the hostname.
  	Doc ID: 	Note:467598.1
  	
How to: Configure the DB Console to Use Dedicated Server Processes
  	Doc ID: 	Note:432972.1
  	
Basic Troubleshooting Guide For Grid Control Oracle Management Server (OMS) Midtier
  	Doc ID: 	Note:550395.1
  	
How to Point an Agent to a different Grid Control OMS and Repository?
  	Doc ID: 	Note:413228.1
  	
How to Log and Trace the EM 10g Management Agents
  	Doc ID: 	Note:229624.1
  	
How to Install The Downloadable Central Management Agent in EM 10g Grid Control
  	Doc ID: 	Note:235287.1
  	
How To Discover An AS Instance In EM 10g Grid Control
  	Doc ID: 	Note:297721.1
  	
How To Rename A Database Target in Grid Control
  	Doc ID: 	Note:295014.1
  	
EM 10g Target Discovery White Paper
  	Doc ID: 	Note:239224.1
  	
How to Cleanly De-Install the EM 10g Agent on Windows and Unix
  	Doc ID: 	Note:438158.1
  	
How To Discover a Standalone Webcache Installation In EM 10g Grid Control
  	Doc ID: 	Note:297734.1
  	
Problem: Database Upgraded, Now Database Home Page In Grid Control Still Shows Old Oracle Home
  	Doc ID: 	Note:290731.1
  	
Is it Possible to Manage a Standalone OC4J Target using Grid Control?
  	Doc ID: 	Note:414635.1
  	
How To Add a New OC4J Target To The Grid Control
  	Doc ID: 	Note:290261.1
  	
How To Find the Target_name Of a 10G Database Or Other Grid Target
  	Doc ID: 	Note:371643.1
  	
How to Remove a Target From The EM 10g Grid Control Console
  	Doc ID: 	Note:271691.1
  	
Problem: App Server Not Being Discovered By Grid Agent
  	Doc ID: 	Note:454600.1
  	
Problem: Not All Duplicate Database Target Names Can Be Discovered In Grid Control
  	Doc ID: 	Note:443520.1
  	
How To Manually Add A Target (Host) To Grid Control 10g
  	Doc ID: 	Note:279975.1
  	
Howto: How to remove a deleted agent from the GRID Control repository database?
  	Doc ID: 	Note:454081.1


How to Troubleshoot Grid Control Provisioning and Deployment Setup Issues.
  	Doc ID: 	Note:466798.1


  	
----- GRID CONTROL UPGRADE ------------------------------- 


Problem: Listener Referring Old Oracle Home After Upgrading From 10.1.0.4 to 10.2.0.1
  	Doc ID: 	Note:423439.1
  	
Problem: 10.1.0.4.0 Upgrade: Additional Management Service Patching Needs First Oms To Be Stopped
  	Doc ID: 	Note:377303.1
  	
How to Obtain Patch 4329444 for Upgrading Grid Control Repository to 10.2.0.3.0 / 10.2.0.4 on Windows
  	Doc ID: 	Note:456928.1
  	
How To Find RDBMS patchsets on Metalink
  	Doc ID: 	Note:438049.1
  	
How To Find and Download The Latest Patchset and Associated Patch Number For Oracle Database Release
  	Doc ID: 	Note:330374.1
  	
How to be notified for all ORA- Errors recorded in the alert.log file
  	Doc ID: 	Note:405396.1
  	
Different Upgrade Methods For Upgrading Your Database
  	Doc ID: 	Note:419550.1
  	
Procedure To Upgrade The Database From 8.1.7.4.0 In AIX 4.3.3 64-bit To 10.2.0.X.0 On AIX 5L 64-bit
  	Doc ID: 	Note:413968.1
  	
How to upgrade database control from 10gR1 to 10gR2 using emca upgrade
  	Doc ID: 	Note:465518.1
  	
Does The RMAN Catalog Need To Be Downgraded When The Database Is Downgraded?
  	Doc ID: 	Note:558364.1
  	
Complete checklist for manual upgrades of Oracle databases from any version to any version on any platform (documents only from 7.3.x>>8.0.x>>8.1.x>>9.0.x>>9.2.x>>10.1.x>>10.2.x>>11.1.x)
  	Doc ID: 	Note:421191.1
  	
Key RDBMS Install Differences in 11gR1
  	Doc ID: 	Note:431768.1
  	
Complete Checklist for Manual Upgrades to 10gR2
  	Doc ID: 	Note:316889.1
  	
COMPATIBLE Initialization Parameter While Upgrading To 10gR2
  	Doc ID: 	Note:413186.1 	
  	
RMAN Compatibility Matrix
  	Doc ID: 	Note:73431.1
  	
How to upgrade a 10.1.0.5.0 Repository Database for Grid Control to a 10.2.0.2.0 Repository Database
  	Doc ID: 	Note:399520.1
  	
Steps to upgrade 10.2.0.2.0 (or) higher Repository Database for EM Grid Control to 11.1.0.6.0
  	Doc ID: 	Note:467586.1
  	
EM2GO (Enterprise Manager Grid Control 10G) Frequently Asked Questions
  	Doc ID: 	Note:400193.1
  	
Quick Link to EM 10g Grid Control Installation Documentation
  	Doc ID: 	Note:414700.1
  	
Installation Checklist for EM 10g Grid Control 10.1.x.x to 10.2.0.1 OMS and Repository Upgrades
  	Doc ID: 	Note:401592.1
  	



-- RHEL 4.2 

Prerequisites and Install Information for EM 10g Grid Control Components on Red Hat EL 4.0 Update 2 Platforms
  	Doc ID: 	Note:343364.1



-- SYSMAN and DBSNMP PASSWORD

Enterprise Manager Database Console FAQ
  	Doc ID: 	863631.1

Oracle Created Database Users: Password, Usage and Files References
  	Doc ID: 	160861.1

How To Change The DBSNMP Password For Multiple Databases Using Batch Mode EMCLI
  	Doc ID: 	377357.1

Problem: dbsnmp password change for RAC database only updates one agent targets.xml
  	Doc ID: 	368925.1

Problem: Modifying The SYS, SYSMAN, and DBSNMP Passwords Using EMCLI Fails
  	Doc ID: 	369946.1

Dbsnmp Password Not Accepted
  	Doc ID: 	337260.1

How to change the DBSNMP passwords for a target database in Grid Console?
  	Doc ID: 	748668.1

Problem: The DBSNMP Account Becomes Locked And Database Shows A Status Of Down With A Metric Collection Error Of 'Ora-28000'
  	Doc ID: 	352585.1

Security Risk on DBSNMP User Password Due to catsnmp.sql Launched by catproc.sql
  	Doc ID: 	206870.1

How to Change the Monitoring Credentials for Database Targets in EM 10g
  	Doc ID: 	271627.1

How to change the password of the 10g database user sysman	<-- used for SR
  	Doc ID: 	259379.1

How to Change the Password of the 10g Database User Dbsnmp	<-- used for SR
  	Doc ID: 	259387.1




http://www.andrewcmaxwell.com/2009/11/100-different-evernote-uses/

''Evernote''
How are attachments stored on my local machine? https://www.evernote.com/shard/s2/note/4cab39c8-f700-4570-881d-bfd5dff2cf0f/ensupport/faq#b=c88dd0ac-32c1-4bc5-b3f4-50612072e0ad&n=4cab39c8-f700-4570-881d-bfd5dff2cf0f
http://forensicartifacts.com/2011/06/evernote-note-storage/
File Locations
On Windows 7: C:\Users\<username>\AppData\Local\Evernote\Evernote\Database\<database name>.exb
http://stackoverflow.com/questions/4471725/how-to-open-a-evernote-file-extension-exb
<<<
here are the features that I like:
# I heavily take notes on paper/mind maps/etc, then take a photo of them (iPhone) and send it to my Evernote email
# the text on the photos is searchable: CTRL-F across all notes/photos for a search string, and it even recognizes my handwriting
# you can password-protect notes
# I can embed Word/Excel/PDF files in each note
# I can have a shareable link and post it on my tiddlywiki
# there's a clip feature you can use for webpages and emails
and I'm sure there's more

so for my note-taking purposes Evernote alone would not suffice, so I also use tiddlywiki http://karlarao.tiddlyspot.com/#About to organize them in an ala-mind-map manner.. and both of them are in the cloud. If my laptop breaks down now, I can pull the tiddlywiki, save it in a folder, and just sync my Evernote again, as well as my Dropbox folder, and I'll be productive again.
<<<

! annoying - How to Disable Automatic Spell Check in Evernote on a Mac
{{{
To do this, simply:
Open Evernote
Open a file within your Notebooks (anything you can type on)
Find a typed word, and right-click on it
Go to Spelling and Grammar in the menu that pops up
In the sub-menu, uncheck "Correct Spelling Automatically"
Repeat step 3, go to Substitutions, and uncheck "Text Replacement" if necessary
}}}

! add fonts - Courier New
http://www.christopher-mayo.com/?p=1512

! evernote limits
http://www.christopher-mayo.com/?p=169




https://www.google.com/search?q=oracle+exa+cs&oq=oracle+exa+cs&aqs=chrome..69i57j69i60j69i65j0l3.2576j0j0&sourceid=chrome&ie=UTF-8

https://www.oracle.com/technetwork/database/exadata/exadataservice-ds-2574134.pdf


! cons
<<<
* every database is a CDB and is created with its own separate Oracle Home
* a consolidated environment is created as PDBs
<<<
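The CDB/PDB layout noted above can be confirmed from sqlplus. A minimal sketch, assuming a local 12c+ instance with OS authentication; the script path is made up:

```shell
# Sketch: write a quick check script, then run it as
#   sqlplus -s / as sysdba @/tmp/check_cdb.sql
# CDB=YES in v$database confirms the container architecture; SHOW PDBS lists the PDBs.
cat > /tmp/check_cdb.sql <<'EOF'
select name, cdb, con_id from v$database;
show pdbs
exit
EOF
grep -c 'cdb' /tmp/check_cdb.sql
```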
{{{
--V2

> cat /etc/fstab
/dev/md5           /                       ext3    defaults,usrquota,grpquota        1 1
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/md2              swap                    swap    defaults        0 0
/dev/md7                /opt/oracle             ext3    defaults,nodev  1 1
/dev/md4                /boot                   ext3    defaults,nodev  1 1
/dev/md11               /var/log/oracle         ext3    defaults,nodev  1 1

[enkcel01:root] /root
>

[enkcel01:root] /root
> df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/md5      ext3    9.9G  5.8G  3.6G  62% /
tmpfs        tmpfs     12G     0   12G   0% /dev/shm
/dev/md7      ext3    2.0G  684M  1.3G  36% /opt/oracle
/dev/md4      ext3    116M   42M   69M  38% /boot
/dev/md11     ext3    2.3G  149M  2.1G   7% /var/log/oracle

[enkcel01:root] /root
>

[enkcel01:root] /root
> mount
/dev/md5 on / type ext3 (rw,usrquota,grpquota)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/md7 on /opt/oracle type ext3 (rw,nodev)
/dev/md4 on /boot type ext3 (rw,nodev)
/dev/md11 on /var/log/oracle type ext3 (rw,nodev)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)



> cat /proc/mdstat
Personalities : [raid1]
md4 : active raid1 sdb1[1] sda1[0]
      120384 blocks [2/2] [UU]

md5 : active raid1 sdb5[1] sda5[0]
      10482304 blocks [2/2] [UU]

md6 : active raid1 sdb6[1] sda6[0]
      10482304 blocks [2/2] [UU]

md7 : active raid1 sdb7[1] sda7[0]
      2096384 blocks [2/2] [UU]

md8 : active raid1 sdb8[1] sda8[0]
      2096384 blocks [2/2] [UU]

md2 : active raid1 sdb9[1] sda9[0]
      2096384 blocks [2/2] [UU]

md11 : active raid1 sdb11[1] sda11[0]
      2433728 blocks [2/2] [UU]

md1 : active raid1 sdb10[1] sda10[0]
      714752 blocks [2/2] [UU]

unused devices: <none>

[enkcel01:root] /root
>

[enkcel01:root] /root
> mdadm --misc --detail /dev/md4
/dev/md4:
        Version : 0.90
  Creation Time : Sat May 15 13:47:10 2010
     Raid Level : raid1
     Array Size : 120384 (117.58 MiB 123.27 MB)
  Used Dev Size : 120384 (117.58 MiB 123.27 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 4
    Persistence : Superblock is persistent

    Update Time : Sun Oct 16 04:22:03 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : d529a7ad:ed5936bb:b0502716:e8114570
         Events : 0.88

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

[enkcel01:root] /root
> mdadm --misc --detail /dev/md5
/dev/md5:
        Version : 0.90
  Creation Time : Sat May 15 13:47:19 2010
     Raid Level : raid1
     Array Size : 10482304 (10.00 GiB 10.73 GB)
  Used Dev Size : 10482304 (10.00 GiB 10.73 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 5
    Persistence : Superblock is persistent

    Update Time : Sun Oct 23 03:47:34 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 11ba27c1:6d6fa21d:8fa278dc:2cb77a67
         Events : 0.70

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       1       8       21        1      active sync   /dev/sdb5

[enkcel01:root] /root
> mdadm --misc --detail /dev/md6
/dev/md6:
        Version : 0.90
  Creation Time : Sat May 15 13:47:34 2010
     Raid Level : raid1
     Array Size : 10482304 (10.00 GiB 10.73 GB)
  Used Dev Size : 10482304 (10.00 GiB 10.73 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 6
    Persistence : Superblock is persistent

    Update Time : Sun Oct 16 04:28:03 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : b9e70f92:9f86d4fd:e0cf405d:df6b60ef
         Events : 0.26

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8       22        1      active sync   /dev/sdb6

[enkcel01:root] /root
> mdadm --misc --detail /dev/md7
/dev/md7:
        Version : 0.90
  Creation Time : Sat May 15 13:48:06 2010
     Raid Level : raid1
     Array Size : 2096384 (2047.59 MiB 2146.70 MB)
  Used Dev Size : 2096384 (2047.59 MiB 2146.70 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 7
    Persistence : Superblock is persistent

    Update Time : Sun Oct 23 03:47:28 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 056ad8df:b649ca96:cc7d1691:c2e85879
         Events : 0.90

    Number   Major   Minor   RaidDevice State
       0       8        7        0      active sync   /dev/sda7
       1       8       23        1      active sync   /dev/sdb7

[enkcel01:root] /root
> mdadm --misc --detail /dev/md8
/dev/md8:
        Version : 0.90
  Creation Time : Sat May 15 13:48:55 2010
     Raid Level : raid1
     Array Size : 2096384 (2047.59 MiB 2146.70 MB)
  Used Dev Size : 2096384 (2047.59 MiB 2146.70 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 8
    Persistence : Superblock is persistent

    Update Time : Sun Oct 16 04:23:44 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 9650421a:fd228e8e:e2e291ce:f8970923
         Events : 0.84

    Number   Major   Minor   RaidDevice State
       0       8        8        0      active sync   /dev/sda8
       1       8       24        1      active sync   /dev/sdb8

[enkcel01:root] /root
> mdadm --misc --detail /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Sat May 15 13:46:43 2010
     Raid Level : raid1
     Array Size : 2096384 (2047.59 MiB 2146.70 MB)
  Used Dev Size : 2096384 (2047.59 MiB 2146.70 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Sun Oct 16 04:23:00 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 14811b0c:f3cf3622:03f81e8a:89b2d031
         Events : 0.78

    Number   Major   Minor   RaidDevice State
       0       8        9        0      active sync   /dev/sda9
       1       8       25        1      active sync   /dev/sdb9

[enkcel01:root] /root
> mdadm --misc --detail /dev/md11
/dev/md11:
        Version : 0.90
  Creation Time : Wed Sep  8 13:18:37 2010
     Raid Level : raid1
     Array Size : 2433728 (2.32 GiB 2.49 GB)
  Used Dev Size : 2433728 (2.32 GiB 2.49 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 11
    Persistence : Superblock is persistent

    Update Time : Sun Oct 23 03:47:52 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 55c92014:d351004a:539a77be:6c759d6e
         Events : 0.94

    Number   Major   Minor   RaidDevice State
       0       8       11        0      active sync   /dev/sda11
       1       8       27        1      active sync   /dev/sdb11

[enkcel01:root] /root
> mdadm --misc --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Sat May 15 13:46:44 2010
     Raid Level : raid1
     Array Size : 714752 (698.12 MiB 731.91 MB)
  Used Dev Size : 714752 (698.12 MiB 731.91 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sun Oct 16 04:22:18 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : bdfabe50:2c42a387:120614c4:2f682052
         Events : 0.78

    Number   Major   Minor   RaidDevice State
       0       8       10        0      active sync   /dev/sda10
       1       8       26        1      active sync   /dev/sdb10

       
> parted /dev/sda print

Model: LSI MR9261-8i (scsi)
Disk /dev/sda: 1999GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
 1      32.3kB  123MB   123MB   primary   ext3         boot, raid
 2      123MB   132MB   8225kB  primary   ext2
 3      132MB   1968GB  1968GB  primary
 4      1968GB  1999GB  31.1GB  extended               lba
 5      1968GB  1979GB  10.7GB  logical   ext3         raid
 6      1979GB  1989GB  10.7GB  logical   ext3         raid
 7      1989GB  1991GB  2147MB  logical   ext3         raid
 8      1991GB  1994GB  2147MB  logical   ext3         raid
 9      1994GB  1996GB  2147MB  logical   linux-swap   raid
10      1996GB  1997GB  732MB   logical                raid
11      1997GB  1999GB  2492MB  logical   ext3         raid

Information: Don't forget to update /etc/fstab, if necessary.


[enkcel01:root] /root
> parted /dev/sdb print

Model: LSI MR9261-8i (scsi)
Disk /dev/sdb: 1999GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
 1      32.3kB  123MB   123MB   primary   ext3         boot, raid
 2      123MB   132MB   8225kB  primary   ext2
 3      132MB   1968GB  1968GB  primary
 4      1968GB  1999GB  31.1GB  extended               lba
 5      1968GB  1979GB  10.7GB  logical   ext3         raid
 6      1979GB  1989GB  10.7GB  logical   ext3         raid
 7      1989GB  1991GB  2147MB  logical   ext3         raid
 8      1991GB  1994GB  2147MB  logical   ext3         raid
 9      1994GB  1996GB  2147MB  logical   linux-swap   raid
10      1996GB  1997GB  732MB   logical                raid
11      1997GB  1999GB  2492MB  logical   ext3         raid

Information: Don't forget to update /etc/fstab, if necessary.

       
       

> fdisk -l

Disk /dev/sda: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          15      120456   fd  Linux raid autodetect
/dev/sda2              16          16        8032+  83  Linux
/dev/sda3              17      239246  1921614975   83  Linux
/dev/sda4          239247      243031    30403012+   f  W95 Ext'd (LBA)
/dev/sda5          239247      240551    10482381   fd  Linux raid autodetect
/dev/sda6          240552      241856    10482381   fd  Linux raid autodetect
/dev/sda7          241857      242117     2096451   fd  Linux raid autodetect
/dev/sda8          242118      242378     2096451   fd  Linux raid autodetect
/dev/sda9          242379      242639     2096451   fd  Linux raid autodetect
/dev/sda10         242640      242728      714861   fd  Linux raid autodetect
/dev/sda11         242729      243031     2433816   fd  Linux raid autodetect

Disk /dev/sdb: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          15      120456   fd  Linux raid autodetect
/dev/sdb2              16          16        8032+  83  Linux
/dev/sdb3              17      239246  1921614975   83  Linux
/dev/sdb4          239247      243031    30403012+   f  W95 Ext'd (LBA)
/dev/sdb5          239247      240551    10482381   fd  Linux raid autodetect
/dev/sdb6          240552      241856    10482381   fd  Linux raid autodetect
/dev/sdb7          241857      242117     2096451   fd  Linux raid autodetect
/dev/sdb8          242118      242378     2096451   fd  Linux raid autodetect
/dev/sdb9          242379      242639     2096451   fd  Linux raid autodetect
/dev/sdb10         242640      242728      714861   fd  Linux raid autodetect
/dev/sdb11         242729      243031     2433816   fd  Linux raid autodetect

Disk /dev/sdc: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/sdf: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdf doesn't contain a valid partition table

Disk /dev/sdg: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdg doesn't contain a valid partition table

Disk /dev/sdh: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdh doesn't contain a valid partition table

Disk /dev/sdi: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdi doesn't contain a valid partition table

Disk /dev/sdj: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdj doesn't contain a valid partition table

Disk /dev/sdk: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdk doesn't contain a valid partition table

Disk /dev/sdl: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdl doesn't contain a valid partition table

Disk /dev/sdm: 4009 MB, 4009754624 bytes
124 heads, 62 sectors/track, 1018 cylinders
Units = cylinders of 7688 * 512 = 3936256 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdm1               1        1017     3909317   83  Linux

Disk /dev/md1: 731 MB, 731906048 bytes
2 heads, 4 sectors/track, 178688 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md11: 2492 MB, 2492137472 bytes
2 heads, 4 sectors/track, 608432 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md11 doesn't contain a valid partition table

Disk /dev/md2: 2146 MB, 2146697216 bytes
2 heads, 4 sectors/track, 524096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md2 doesn't contain a valid partition table

Disk /dev/md8: 2146 MB, 2146697216 bytes
2 heads, 4 sectors/track, 524096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md8 doesn't contain a valid partition table

Disk /dev/md7: 2146 MB, 2146697216 bytes
2 heads, 4 sectors/track, 524096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md7 doesn't contain a valid partition table

Disk /dev/md6: 10.7 GB, 10733879296 bytes
2 heads, 4 sectors/track, 2620576 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md6 doesn't contain a valid partition table

Disk /dev/md5: 10.7 GB, 10733879296 bytes
2 heads, 4 sectors/track, 2620576 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md5 doesn't contain a valid partition table

Disk /dev/md4: 123 MB, 123273216 bytes
2 heads, 4 sectors/track, 30096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md4 doesn't contain a valid partition table

Disk /dev/sdn: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdn doesn't contain a valid partition table

Disk /dev/sdo: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdo doesn't contain a valid partition table

Disk /dev/sdp: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdp doesn't contain a valid partition table

Disk /dev/sdq: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdq doesn't contain a valid partition table

Disk /dev/sdr: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdr doesn't contain a valid partition table

Disk /dev/sds: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sds doesn't contain a valid partition table

Disk /dev/sdt: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdt doesn't contain a valid partition table

Disk /dev/sdu: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdu doesn't contain a valid partition table

Disk /dev/sdv: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdv doesn't contain a valid partition table

Disk /dev/sdw: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdw doesn't contain a valid partition table

Disk /dev/sdx: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdx doesn't contain a valid partition table

Disk /dev/sdy: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdy doesn't contain a valid partition table

Disk /dev/sdz: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdz doesn't contain a valid partition table

Disk /dev/sdaa: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdaa doesn't contain a valid partition table

Disk /dev/sdab: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdab doesn't contain a valid partition table

Disk /dev/sdac: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdac doesn't contain a valid partition table

--V2

CellCLI> list physicaldisk attributes all
         35:0            23              HardDisk        35      0       0       false   0_0     "HITACHI H7220AA30SUN2.0T"      JKAOA28A                2010-05-15T21:10:45-05:00       sata                           JK11D1YAJB8GGZ  1862.6559999994934G     0                       normal
         35:1            24              HardDisk        35      0       0       false   0_1     "HITACHI H7220AA30SUN2.0T"      JKAOA28A                2010-05-15T21:10:46-05:00       sata                           JK11D1YAJB4V0Z  1862.6559999994934G     1                       normal
         35:2            25              HardDisk        35      0       0       false   0_2     "HITACHI H7220AA30SUN2.0T"      JKAOA28A                2010-05-15T21:10:47-05:00       sata                           JK11D1YAJAZMMZ  1862.6559999994934G     2                       normal
         35:3            26              HardDisk        35      0       0       false   0_3     "HITACHI H7220AA30SUN2.0T"      JKAOA28A                2010-05-15T21:10:49-05:00       sata                           JK11D1YAJ7JX2Z  1862.6559999994934G     3                       normal
         35:4            27              HardDisk        35      0       0       false   0_4     "HITACHI H7220AA30SUN2.0T"      JKAOA28A                2010-05-15T21:10:50-05:00       sata                           JK11D1YAJ60R8Z  1862.6559999994934G     4                       normal
         35:5            28              HardDisk        35      0       0       false   0_5     "HITACHI H7220AA30SUN2.0T"      JKAOA28A                2010-05-15T21:10:51-05:00       sata                           JK11D1YAJB4J8Z  1862.6559999994934G     5                       normal
         35:6            29              HardDisk        35      0       0       false   0_6     "HITACHI H7220AA30SUN2.0T"      JKAOA28A                2010-05-15T21:10:52-05:00       sata                           JK11D1YAJ7JXGZ  1862.6559999994934G     6                       normal
         35:7            30              HardDisk        35      0       0       false   0_7     "HITACHI H7220AA30SUN2.0T"      JKAOA28A                2010-05-15T21:10:54-05:00       sata                           JK11D1YAJB4E5Z  1862.6559999994934G     7                       normal
         35:8            31              HardDisk        35      4       0       false   0_8     "HITACHI H7220AA30SUN2.0T"      JKAOA28A                2010-05-15T21:10:55-05:00       sata                           JK11D1YAJ8TY3Z  1862.6559999994934G     8                       normal
         35:9            32              HardDisk        35      0       0       false   0_9     "HITACHI H7220AA30SUN2.0T"      JKAOA28A                2010-05-15T21:10:56-05:00       sata                           JK11D1YAJ8TXKZ  1862.6559999994934G     9                       normal
         35:10           33              HardDisk        35      0       0       false   0_10    "HITACHI H7220AA30SUN2.0T"      JKAOA28A                2010-05-15T21:10:58-05:00       sata                           JK11D1YAJ8TYLZ  1862.6559999994934G     10                      normal
         35:11           34              HardDisk        35      0       0       false   0_11    "HITACHI H7220AA30SUN2.0T"      JKAOA28A                2010-05-15T21:10:59-05:00       sata                           JK11D1YAJAZNKZ  1862.6559999994934G     11                      normal
         FLASH_1_0       FlashDisk       0               0       0       0       0       0       1_0                             "MARVELL SD88SA02"      D20Y                            2011-05-06T12:00:49-05:00      sas             1014M02JC3              22.8880615234375G       0       "PCI Slot: 1; FDOM: 0"  normal
         FLASH_1_1       FlashDisk       0               0       0       0       0       0       1_1                             "MARVELL SD88SA02"      D20Y                            2011-05-06T12:00:49-05:00      sas             1014M02JYG              22.8880615234375G       0       "PCI Slot: 1; FDOM: 1"  normal
         FLASH_1_2       FlashDisk       0               0       0       0       0       0       1_2                             "MARVELL SD88SA02"      D20Y                            2011-05-06T12:00:49-05:00      sas             1014M02JV9              22.8880615234375G       0       "PCI Slot: 1; FDOM: 2"  normal
         FLASH_1_3       FlashDisk       0               0       0       0       0       0       1_3                             "MARVELL SD88SA02"      D20Y                            2011-05-06T12:00:49-05:00      sas             1014M02J93              22.8880615234375G       0       "PCI Slot: 1; FDOM: 3"  normal
         FLASH_2_0       FlashDisk       0               0       0       0       0       0       2_0                             "MARVELL SD88SA02"      D20Y                            2011-05-06T12:00:49-05:00      sas             1014M02JFK              22.8880615234375G       0       "PCI Slot: 2; FDOM: 0"  normal
         FLASH_2_1       FlashDisk       0               0       0       0       0       0       2_1                             "MARVELL SD88SA02"      D20Y                            2011-05-06T12:00:49-05:00      sas             1014M02JFL              22.8880615234375G       0       "PCI Slot: 2; FDOM: 1"  normal
         FLASH_2_2       FlashDisk       0               0       0       0       0       0       2_2                             "MARVELL SD88SA02"      D20Y                            2011-05-06T12:00:49-05:00      sas             1014M02JF7              22.8880615234375G       0       "PCI Slot: 2; FDOM: 2"  normal
         FLASH_2_3       FlashDisk       0               0       0       0       0       0       2_3                             "MARVELL SD88SA02"      D20Y                            2011-05-06T12:00:49-05:00      sas             1014M02JF8              22.8880615234375G       0       "PCI Slot: 2; FDOM: 3"  normal
         FLASH_4_0       FlashDisk       0               0       0       0       0       0       4_0                             "MARVELL SD88SA02"      D20Y                            2011-05-06T12:00:49-05:00      sas             1014M02HP5              22.8880615234375G       0       "PCI Slot: 4; FDOM: 0"  normal
         FLASH_4_1       FlashDisk       0               0       0       0       0       0       4_1                             "MARVELL SD88SA02"      D20Y                            2011-05-06T12:00:49-05:00      sas             1014M02HNN              22.8880615234375G       0       "PCI Slot: 4; FDOM: 1"  normal
         FLASH_4_2       FlashDisk       0               0       0       0       0       0       4_2                             "MARVELL SD88SA02"      D20Y                            2011-05-06T12:00:49-05:00      sas             1014M02HP2              22.8880615234375G       0       "PCI Slot: 4; FDOM: 2"  normal
         FLASH_4_3       FlashDisk       0               0       0       0       0       0       4_3                             "MARVELL SD88SA02"      D20Y                            2011-05-06T12:00:49-05:00      sas             1014M02HP4              22.8880615234375G       0       "PCI Slot: 4; FDOM: 3"  normal
         FLASH_5_0       FlashDisk       0               0       0       0       0       0       5_0                             "MARVELL SD88SA02"      D20Y                            2011-05-06T12:00:49-05:00      sas             1014M02JUD              22.8880615234375G       0       "PCI Slot: 5; FDOM: 0"  normal
         FLASH_5_1       FlashDisk       0               0       0       0       0       0       5_1                             "MARVELL SD88SA02"      D20Y                            2011-05-06T12:00:49-05:00      sas             1014M02JVF              22.8880615234375G       0       "PCI Slot: 5; FDOM: 1"  normal
         FLASH_5_2       FlashDisk       0               0       0       0       0       0       5_2                             "MARVELL SD88SA02"      D20Y                            2011-05-06T12:00:49-05:00      sas             1014M02JAP              22.8880615234375G       0       "PCI Slot: 5; FDOM: 2"  normal
         FLASH_5_3       FlashDisk       0               0       0       0       0       0       5_3                             "MARVELL SD88SA02"      D20Y                            2011-05-06T12:00:49-05:00      sas             1014M02JVH              22.8880615234375G       0       "PCI Slot: 5; FDOM: 3"  normal

CellCLI>

CellCLI> list lun attributes all
         0_0     CD_00_cell01    /dev/sda        HardDisk        0_0     TRUE    FALSE   1861.712890625G         0_0     35:0            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_1     CD_01_cell01    /dev/sdb        HardDisk        0_1     TRUE    FALSE   1861.712890625G         0_1     35:1            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_2     CD_02_cell01    /dev/sdc        HardDisk        0_2     FALSE   FALSE   1861.712890625G         0_2     35:2            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_3     CD_03_cell01    /dev/sdd        HardDisk        0_3     FALSE   FALSE   1861.712890625G         0_3     35:3            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_4     CD_04_cell01    /dev/sde        HardDisk        0_4     FALSE   FALSE   1861.712890625G         0_4     35:4            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_5     CD_05_cell01    /dev/sdf        HardDisk        0_5     FALSE   FALSE   1861.712890625G         0_5     35:5            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_6     CD_06_cell01    /dev/sdg        HardDisk        0_6     FALSE   FALSE   1861.712890625G         0_6     35:6            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_7     CD_07_cell01    /dev/sdh        HardDisk        0_7     FALSE   FALSE   1861.712890625G         0_7     35:7            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_8     CD_08_cell01    /dev/sdi        HardDisk        0_8     FALSE   FALSE   1861.712890625G         0_8     35:8            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_9     CD_09_cell01    /dev/sdj        HardDisk        0_9     FALSE   FALSE   1861.712890625G         0_9     35:9            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_10    CD_10_cell01    /dev/sdk        HardDisk        0_10    FALSE   FALSE   1861.712890625G         0_10    35:10           0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_11    CD_11_cell01    /dev/sdl        HardDisk        0_11    FALSE   FALSE   1861.712890625G         0_11    35:11           0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         1_0     FD_00_enkcel01  /dev/sds        FlashDisk       1_0     FALSE   FALSE   22.8880615234375G       100.0   FLASH_1_0       normal
         1_1     FD_01_enkcel01  /dev/sdr        FlashDisk       1_1     FALSE   FALSE   22.8880615234375G       100.0   FLASH_1_1       normal
         1_2     FD_02_enkcel01  /dev/sdt        FlashDisk       1_2     FALSE   FALSE   22.8880615234375G       100.0   FLASH_1_2       normal
         1_3     FD_03_enkcel01  /dev/sdu        FlashDisk       1_3     FALSE   FALSE   22.8880615234375G       100.0   FLASH_1_3       normal
         2_0     FD_04_enkcel01  /dev/sdz        FlashDisk       2_0     FALSE   FALSE   22.8880615234375G       99.9    FLASH_2_0       normal
         2_1     FD_05_enkcel01  /dev/sdaa       FlashDisk       2_1     FALSE   FALSE   22.8880615234375G       100.0   FLASH_2_1       normal
         2_2     FD_06_enkcel01  /dev/sdab       FlashDisk       2_2     FALSE   FALSE   22.8880615234375G       100.0   FLASH_2_2       normal
         2_3     FD_07_enkcel01  /dev/sdac       FlashDisk       2_3     FALSE   FALSE   22.8880615234375G       100.0   FLASH_2_3       normal
         4_0     FD_08_enkcel01  /dev/sdn        FlashDisk       4_0     FALSE   FALSE   22.8880615234375G       100.0   FLASH_4_0       normal
         4_1     FD_09_enkcel01  /dev/sdo        FlashDisk       4_1     FALSE   FALSE   22.8880615234375G       100.0   FLASH_4_1       normal
         4_2     FD_10_enkcel01  /dev/sdp        FlashDisk       4_2     FALSE   FALSE   22.8880615234375G       100.0   FLASH_4_2       normal
         4_3     FD_11_enkcel01  /dev/sdq        FlashDisk       4_3     FALSE   FALSE   22.8880615234375G       100.0   FLASH_4_3       normal
         5_0     FD_12_enkcel01  /dev/sdv        FlashDisk       5_0     FALSE   FALSE   22.8880615234375G       100.0   FLASH_5_0       normal
         5_1     FD_13_enkcel01  /dev/sdw        FlashDisk       5_1     FALSE   FALSE   22.8880615234375G       100.0   FLASH_5_1       normal
         5_2     FD_14_enkcel01  /dev/sdx        FlashDisk       5_2     FALSE   FALSE   22.8880615234375G       100.0   FLASH_5_2       normal
         5_3     FD_15_enkcel01  /dev/sdy        FlashDisk       5_3     FALSE   FALSE   22.8880615234375G       100.0   FLASH_5_3       normal

CellCLI>

CellCLI> list celldisk attributes all
         CD_00_cell01            2010-05-28T13:09:11-05:00       /dev/sda        /dev/sda3       HardDisk        0       0               00000128-e01a-793d-0000-000000000000    none                                   0_0     0               1832.59375G     normal
         CD_01_cell01            2010-05-28T13:09:15-05:00       /dev/sdb        /dev/sdb3       HardDisk        0       0               00000128-e01a-8c16-0000-000000000000    none                                   0_1     0               1832.59375G     normal
         CD_02_cell01            2010-05-28T13:09:16-05:00       /dev/sdc        /dev/sdc        HardDisk        0       0               00000128-e01a-8e29-0000-000000000000    none                                   0_2     0               1861.703125G    normal
         CD_03_cell01            2010-05-28T13:09:16-05:00       /dev/sdd        /dev/sdd        HardDisk        0       0               00000128-e01a-904a-0000-000000000000    none                                   0_3     0               1861.703125G    normal
         CD_04_cell01            2010-05-28T13:09:17-05:00       /dev/sde        /dev/sde        HardDisk        0       0               00000128-e01a-9274-0000-000000000000    none                                   0_4     0               1861.703125G    normal
         CD_05_cell01            2010-05-28T13:09:18-05:00       /dev/sdf        /dev/sdf        HardDisk        0       1122.8125G      ((offset=738.890625G,size=1122.8125G))  00000128-e01a-948e-0000-000000000000   none    0_5             0               1861.703125G    normal
         CD_06_cell01            2010-05-28T13:09:18-05:00       /dev/sdg        /dev/sdg        HardDisk        0       0               00000128-e01a-96a9-0000-000000000000    none                                   0_6     0               1861.703125G    normal
         CD_07_cell01            2010-05-28T13:09:19-05:00       /dev/sdh        /dev/sdh        HardDisk        0       0               00000128-e01a-98ce-0000-000000000000    none                                   0_7     0               1861.703125G    normal
         CD_08_cell01            2010-05-28T13:09:19-05:00       /dev/sdi        /dev/sdi        HardDisk        0       0               00000128-e01a-9aec-0000-000000000000    none                                   0_8     0               1861.703125G    normal
         CD_09_cell01            2010-05-28T13:09:20-05:00       /dev/sdj        /dev/sdj        HardDisk        0       0               00000128-e01a-9cfe-0000-000000000000    none                                   0_9     0               1861.703125G    normal
         CD_10_cell01            2010-05-28T13:09:20-05:00       /dev/sdk        /dev/sdk        HardDisk        0       0               00000128-e01a-9f1b-0000-000000000000    none                                   0_10    0               1861.703125G    normal
         CD_11_cell01            2010-05-28T13:09:21-05:00       /dev/sdl        /dev/sdl        HardDisk        0       0               00000128-e01a-a13e-0000-000000000000    none                                   0_11    0               1861.703125G    normal
         FD_00_enkcel01          2011-09-22T20:51:10-05:00       /dev/sds        /dev/sds        FlashDisk       0       0               b8638e68-b436-48ab-9790-a53b6f188b53    none                                   1_0     22.875G         normal
         FD_01_enkcel01          2011-09-22T20:51:11-05:00       /dev/sdr        /dev/sdr        FlashDisk       0       0               7485b0c0-b6ef-4e8f-b4cb-ded2734dc424    none                                   1_1     22.875G         normal
         FD_02_enkcel01          2011-09-22T20:51:12-05:00       /dev/sdt        /dev/sdt        FlashDisk       0       0               2f0dee7e-3f0d-49af-9f10-865952a6362d    none                                   1_2     22.875G         normal
         FD_03_enkcel01          2011-09-22T20:51:13-05:00       /dev/sdu        /dev/sdu        FlashDisk       0       0               9a7586dd-4fad-431b-8459-4c8a3504ce51    none                                   1_3     22.875G         normal
         FD_04_enkcel01          2011-09-22T20:51:14-05:00       /dev/sdz        /dev/sdz        FlashDisk       0       0               65acb88c-b5b4-4768-a029-04de9238442f    none                                   2_0     22.875G         normal
         FD_05_enkcel01          2011-09-22T20:51:15-05:00       /dev/sdaa       /dev/sdaa       FlashDisk       0       0               f99d5e54-063f-423a-ad21-bb97fded6534    none                                   2_1     22.875G         normal
         FD_06_enkcel01          2011-09-22T20:51:15-05:00       /dev/sdab       /dev/sdab       FlashDisk       0       0               6d1af809-5f61-47cb-bdb5-3eceeb4804b4    none                                   2_2     22.875G         normal
         FD_07_enkcel01          2011-09-22T20:51:16-05:00       /dev/sdac       /dev/sdac       FlashDisk       0       0               d2c7735a-f646-4632-a063-bf9ce4093e10    none                                   2_3     22.875G         normal
         FD_08_enkcel01          2011-09-22T20:51:17-05:00       /dev/sdn        /dev/sdn        FlashDisk       0       0               ab088c83-e6bf-47e2-98e4-a45d67873a5b    none                                   4_0     22.875G         normal
         FD_09_enkcel01          2011-09-22T20:51:18-05:00       /dev/sdo        /dev/sdo        FlashDisk       0       0               7ba2b17a-bcb2-4084-ba88-c5d7415b18fb    none                                   4_1     22.875G         normal
         FD_10_enkcel01          2011-09-22T20:51:19-05:00       /dev/sdp        /dev/sdp        FlashDisk       0       0               b429e31e-cf38-412f-9c82-44a2d9ae346e    none                                   4_2     22.875G         normal
         FD_11_enkcel01          2011-09-22T20:51:20-05:00       /dev/sdq        /dev/sdq        FlashDisk       0       0               fd8af61f-1a16-4a97-b82c-d81f2031cf9a    none                                   4_3     22.875G         normal
         FD_12_enkcel01          2011-09-22T20:51:21-05:00       /dev/sdv        /dev/sdv        FlashDisk       0       0               8a6fa836-61b8-4718-b93f-bc22a5566182    none                                   5_0     22.875G         normal
         FD_13_enkcel01          2011-09-22T20:51:22-05:00       /dev/sdw        /dev/sdw        FlashDisk       0       0               1748c6a9-c24d-4324-bd85-5d5e9cbadcaf    none                                   5_1     22.875G         normal
         FD_14_enkcel01          2011-09-22T20:51:22-05:00       /dev/sdx        /dev/sdx        FlashDisk       0       0               98200a21-a687-4afb-9cd0-6911be8c5be5    none                                   5_2     22.875G         normal
         FD_15_enkcel01          2011-09-22T20:51:23-05:00       /dev/sdy        /dev/sdy        FlashDisk       0       0               793fba33-3d8f-425d-b261-42fbfa71bfcb    none                                   5_3     22.875G         normal

CellCLI> list griddisk attributes all
         AC10G_CD_05_cell01      AC10G           AC10G_CD_05_CELL01              CD_05_cell01            2011-10-11T14:34:52-05:00       HardDisk        0       ac5e9ec2-0269-45e0-b7fe-ea8c0974c6b1  708.890625G      30G             active
         DATA_CD_00_cell01       DATA            DATA_CD_00_CELL01               CD_00_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a070-0000-000000000000  32M              1282.8125G      active
         DATA_CD_01_cell01       DATA            DATA_CD_01_CELL01               CD_01_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a09e-0000-000000000000  32M              1282.8125G      active
         DATA_CD_02_cell01       DATA            DATA_CD_02_CELL01               CD_02_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a0d2-0000-000000000000  32M              1282.8125G      active
         DATA_CD_03_cell01       DATA            DATA_CD_03_CELL01               CD_03_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a0f0-0000-000000000000  32M              1282.8125G      active
         DATA_CD_04_cell01       DATA            DATA_CD_04_CELL01               CD_04_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a10e-0000-000000000000  32M              1282.8125G      active
         DATA_CD_06_cell01       DATA            DATA_CD_06_CELL01               CD_06_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a159-0000-000000000000  32M              1282.8125G      active
         DATA_CD_07_cell01       DATA            DATA_CD_07_CELL01               CD_07_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a176-0000-000000000000  32M              1282.8125G      active
         DATA_CD_08_cell01       DATA            DATA_CD_08_CELL01               CD_08_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a193-0000-000000000000  32M              1282.8125G      active
         DATA_CD_09_cell01       DATA            DATA_CD_09_CELL01               CD_09_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a1a9-0000-000000000000  32M              1282.8125G      active
         DATA_CD_10_cell01       DATA            DATA_CD_10_CELL01               CD_10_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a1c2-0000-000000000000  32M              1282.8125G      active
         DATA_CD_11_cell01       DATA            DATA_CD_11_CELL01               CD_11_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a1e0-0000-000000000000  32M              1282.8125G      active
         RECO_CD_00_cell01       RECO            RECO_CD_00_CELL01               CD_00_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a656-0000-000000000000  1741.328125G     91.265625G      active
         RECO_CD_01_cell01       RECO            RECO_CD_01_CELL01               CD_01_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a65b-0000-000000000000  1741.328125G     91.265625G      active
         RECO_CD_02_cell01       RECO            RECO_CD_02_CELL01               CD_02_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a65f-0000-000000000000  1741.328125G     120.375G        active
         RECO_CD_03_cell01       RECO            RECO_CD_03_CELL01               CD_03_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a664-0000-000000000000  1741.328125G     120.375G        active
         RECO_CD_04_cell01       RECO            RECO_CD_04_CELL01               CD_04_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a668-0000-000000000000  1741.328125G     120.375G        active
         RECO_CD_06_cell01       RECO            RECO_CD_06_CELL01               CD_06_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a672-0000-000000000000  1741.328125G     120.375G        active
         RECO_CD_07_cell01       RECO            RECO_CD_07_CELL01               CD_07_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a676-0000-000000000000  1741.328125G     120.375G        active
         RECO_CD_08_cell01       RECO            RECO_CD_08_CELL01               CD_08_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a67b-0000-000000000000  1741.328125G     120.375G        active
         RECO_CD_09_cell01       RECO            RECO_CD_09_CELL01               CD_09_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a680-0000-000000000000  1741.328125G     120.375G        active
         RECO_CD_10_cell01       RECO            RECO_CD_10_CELL01               CD_10_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a685-0000-000000000000  1741.328125G     120.375G        active
         RECO_CD_11_cell01       RECO            RECO_CD_11_CELL01               CD_11_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a689-0000-000000000000  1741.328125G     120.375G        active
         SCRATCH_CD_05_cell01    SCRATCH         SCRATCH_CD_05_CELL01            CD_05_cell01            2010-12-24T11:11:03-06:00       HardDisk        0       9fd44ab2-a674-40ba-aa4f-fb32d380c573  32M              578.84375G      active
         SMITHERS_CD_05_cell01   SMITHERS        SMITHERS_CD_05_CELL01           CD_05_cell01            2011-02-16T13:38:19-06:00       HardDisk        0       ee413b30-fe57-47a3-b1ad-815fa25b471c  578.890625G      100G            active
         STAGE_CD_00_cell01      STAGE           STAGE_CD_00_CELL01              CD_00_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a267-0000-000000000000  1282.859375G     458.140625G     active
         STAGE_CD_01_cell01      STAGE           STAGE_CD_01_CELL01              CD_01_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a26c-0000-000000000000  1282.859375G     458.140625G     active
         STAGE_CD_02_cell01      STAGE           STAGE_CD_02_CELL01              CD_02_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a271-0000-000000000000  1282.859375G     458.140625G     active
         STAGE_CD_03_cell01      STAGE           STAGE_CD_03_CELL01              CD_03_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a277-0000-000000000000  1282.859375G     458.140625G     active
         STAGE_CD_04_cell01      STAGE           STAGE_CD_04_CELL01              CD_04_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a27d-0000-000000000000  1282.859375G     458.140625G     active
         STAGE_CD_06_cell01      STAGE           STAGE_CD_06_CELL01              CD_06_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a288-0000-000000000000  1282.859375G     458.140625G     active
         STAGE_CD_07_cell01      STAGE           STAGE_CD_07_CELL01              CD_07_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a28d-0000-000000000000  1282.859375G     458.140625G     active
         STAGE_CD_08_cell01      STAGE           STAGE_CD_08_CELL01              CD_08_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a293-0000-000000000000  1282.859375G     458.140625G     active
         STAGE_CD_09_cell01      STAGE           STAGE_CD_09_CELL01              CD_09_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a299-0000-000000000000  1282.859375G     458.140625G     active
         STAGE_CD_10_cell01      STAGE           STAGE_CD_10_CELL01              CD_10_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a29e-0000-000000000000  1282.859375G     458.140625G     active
         STAGE_CD_11_cell01      STAGE           STAGE_CD_11_CELL01              CD_11_cell01            2010-06-14T17:41:12-05:00       HardDisk        0       00000129-389f-a2a4-0000-000000000000  1282.859375G     458.140625G     active
         SWING_CD_05_cell01      TENJEE          SWING_CD_05_CELL01              CD_05_cell01            2011-02-21T14:36:03-06:00       HardDisk        0       aaf8a3bc-7f81-45f2-b091-5bf73c93d972  678.890625G      30G             active
         SYSTEM_CD_00_cell01     SYSTEM          SYSTEM_CD_00_CELL01             CD_00_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a45f-0000-000000000000  1741G            336M            active
         SYSTEM_CD_01_cell01     SYSTEM          SYSTEM_CD_01_CELL01             CD_01_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a464-0000-000000000000  1741G            336M            active
         SYSTEM_CD_02_cell01     SYSTEM          SYSTEM_CD_02_CELL01             CD_02_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a468-0000-000000000000  1741G            336M            active
         SYSTEM_CD_03_cell01     SYSTEM          SYSTEM_CD_03_CELL01             CD_03_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a46c-0000-000000000000  1741G            336M            active
         SYSTEM_CD_04_cell01     SYSTEM          SYSTEM_CD_04_CELL01             CD_04_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a470-0000-000000000000  1741G            336M            active
         SYSTEM_CD_06_cell01     SYSTEM          SYSTEM_CD_06_CELL01             CD_06_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a479-0000-000000000000  1741G            336M            active
         SYSTEM_CD_07_cell01     SYSTEM          SYSTEM_CD_07_CELL01             CD_07_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a47e-0000-000000000000  1741G            336M            active
         SYSTEM_CD_08_cell01     SYSTEM          SYSTEM_CD_08_CELL01             CD_08_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a482-0000-000000000000  1741G            336M            active
         SYSTEM_CD_09_cell01     SYSTEM          SYSTEM_CD_09_CELL01             CD_09_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a486-0000-000000000000  1741G            336M            active
         SYSTEM_CD_10_cell01     SYSTEM          SYSTEM_CD_10_CELL01             CD_10_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a48b-0000-000000000000  1741G            336M            active
         SYSTEM_CD_11_cell01     SYSTEM          SYSTEM_CD_11_CELL01             CD_11_cell01            2010-06-14T17:41:13-05:00       HardDisk        0       00000129-389f-a48f-0000-000000000000  1741G            336M            active

}}}
{{{
--X2

[root@enkcel04 ~]# cat /etc/fstab
/dev/md5           /                       ext3    defaults,usrquota,grpquota        1 1
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/md2              swap                    swap    defaults        0 0
/dev/md7                /opt/oracle             ext3    defaults,nodev  1 1
/dev/md4                /boot                   ext3    defaults,nodev  1 1
/dev/md11               /var/log/oracle         ext3    defaults,nodev  1 1
[root@enkcel04 ~]#
[root@enkcel04 ~]# df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/md5      ext3    9.9G  3.4G  6.1G  36% /
tmpfs        tmpfs     12G     0   12G   0% /dev/shm
/dev/md7      ext3    2.0G  626M  1.3G  33% /opt/oracle
/dev/md4      ext3    116M   37M   74M  34% /boot
/dev/md11     ext3    2.3G  181M  2.0G   9% /var/log/oracle
[root@enkcel04 ~]#
[root@enkcel04 ~]# mount
/dev/md5 on / type ext3 (rw,usrquota,grpquota)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/md7 on /opt/oracle type ext3 (rw,nodev)
/dev/md4 on /boot type ext3 (rw,nodev)
/dev/md11 on /var/log/oracle type ext3 (rw,nodev)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)


[root@enkcel04 ~]# cat /proc/mdstat
Personalities : [raid1]
md4 : active raid1 sdb1[1] sda1[0]
      120384 blocks [2/2] [UU]

md5 : active raid1 sdb5[1] sda5[0]
      10482304 blocks [2/2] [UU]

md6 : active raid1 sdb6[1] sda6[0]
      10482304 blocks [2/2] [UU]

md7 : active raid1 sdb7[1] sda7[0]
      2096384 blocks [2/2] [UU]

md8 : active raid1 sdb8[1] sda8[0]
      2096384 blocks [2/2] [UU]

md1 : active raid1 sdb10[1] sda10[0]
      714752 blocks [2/2] [UU]

md11 : active raid1 sdb11[1] sda11[0]
      2433728 blocks [2/2] [UU]

md2 : active raid1 sdb9[1] sda9[0]
      2096384 blocks [2/2] [UU]

unused devices: <none>
[root@enkcel04 ~]#
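
```shell
# Side note, not part of the captured session: a quick way to confirm that
# every md array in an mdstat listing like the one above is fully mirrored.
# A healthy two-way RAID1 member line reads "[2/2] [UU]"; check_md prints
# any array whose mirror state differs and exits non-zero.
check_md() {
  awk '/^md/ {name = $1}
       /blocks/ && $0 !~ /\[UU\]/ {print "degraded: " name; bad = 1}
       END {exit bad + 0}'
}
# Real usage on a live cell would be:  check_md < /proc/mdstat
# Demonstration against a sample line in the same format:
printf 'md4 : active raid1 sdb1[1] sda1[0]\n      120384 blocks [2/2] [UU]\n' \
  | check_md && echo "all arrays mirrored"
```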
[root@enkcel04 ~]# mdadm --misc --detail /dev/md4
/dev/md4:
        Version : 0.90
  Creation Time : Sat Mar 12 13:39:08 2011
     Raid Level : raid1
     Array Size : 120384 (117.58 MiB 123.27 MB)
  Used Dev Size : 120384 (117.58 MiB 123.27 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 4
    Persistence : Superblock is persistent

    Update Time : Tue Aug  2 10:22:53 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 04a2efb0:05de7468:211366dd:d50b2c00
         Events : 0.4

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
[root@enkcel04 ~]#
[root@enkcel04 ~]# mdadm --misc --detail /dev/md5
/dev/md5:
        Version : 0.90
  Creation Time : Sat Mar 12 13:39:15 2011
     Raid Level : raid1
     Array Size : 10482304 (10.00 GiB 10.73 GB)
  Used Dev Size : 10482304 (10.00 GiB 10.73 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 5
    Persistence : Superblock is persistent

    Update Time : Sun Oct 23 03:49:07 2011
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : a9bee465:3ab8337f:8f1ef237:01bcbae0
         Events : 0.5

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       1       8       21        1      active sync   /dev/sdb5
[root@enkcel04 ~]#
[root@enkcel04 ~]# mdadm --misc --detail /dev/md6
/dev/md6:
        Version : 0.90
  Creation Time : Sat Mar 12 13:39:17 2011
     Raid Level : raid1
     Array Size : 10482304 (10.00 GiB 10.73 GB)
  Used Dev Size : 10482304 (10.00 GiB 10.73 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 6
    Persistence : Superblock is persistent

    Update Time : Thu Oct 13 10:31:39 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 94214ee2:ddb3cdb2:e6b53739:6b6e01df
         Events : 0.4

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8       22        1      active sync   /dev/sdb6
[root@enkcel04 ~]#
[root@enkcel04 ~]# mdadm --misc --detail /dev/md7
/dev/md7:
        Version : 0.90
  Creation Time : Sat Mar 12 13:39:18 2011
     Raid Level : raid1
     Array Size : 2096384 (2047.59 MiB 2146.70 MB)
  Used Dev Size : 2096384 (2047.59 MiB 2146.70 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 7
    Persistence : Superblock is persistent

    Update Time : Sun Oct 23 03:49:15 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : b95729e3:38d23a1f:22ee3182:4f2abebd
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0       8        7        0      active sync   /dev/sda7
       1       8       23        1      active sync   /dev/sdb7
[root@enkcel04 ~]#
[root@enkcel04 ~]# mdadm --misc --detail /dev/md8
/dev/md8:
        Version : 0.90
  Creation Time : Sat Mar 12 13:39:20 2011
     Raid Level : raid1
     Array Size : 2096384 (2047.59 MiB 2146.70 MB)
  Used Dev Size : 2096384 (2047.59 MiB 2146.70 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 8
    Persistence : Superblock is persistent

    Update Time : Thu May  5 17:32:35 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : d0d2b027:1731b4d8:8bd77b3a:4588996c
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0       8        8        0      active sync   /dev/sda8
       1       8       24        1      active sync   /dev/sdb8
[root@enkcel04 ~]#
[root@enkcel04 ~]# mdadm --misc --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Sat Mar 12 13:39:00 2011
     Raid Level : raid1
     Array Size : 714752 (698.12 MiB 731.91 MB)
  Used Dev Size : 714752 (698.12 MiB 731.91 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sat Mar 12 13:45:50 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 7929eb8f:e993f335:b6b1f5d6:0e21a218
         Events : 0.2

    Number   Major   Minor   RaidDevice State
       0       8       10        0      active sync   /dev/sda10
       1       8       26        1      active sync   /dev/sdb10
[root@enkcel04 ~]#
[root@enkcel04 ~]# mdadm --misc --detail /dev/md11
/dev/md11:
        Version : 0.90
  Creation Time : Sat Mar 12 13:39:21 2011
     Raid Level : raid1
     Array Size : 2433728 (2.32 GiB 2.49 GB)
  Used Dev Size : 2433728 (2.32 GiB 2.49 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 11
    Persistence : Superblock is persistent

    Update Time : Sun Oct 23 03:49:44 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : f6ce4c8e:98ed1e26:47116a89:70babf94
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0       8       11        0      active sync   /dev/sda11
       1       8       27        1      active sync   /dev/sdb11
[root@enkcel04 ~]#
[root@enkcel04 ~]# mdadm --misc --detail /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Sat Mar 12 13:39:00 2011
     Raid Level : raid1
     Array Size : 2096384 (2047.59 MiB 2146.70 MB)
  Used Dev Size : 2096384 (2047.59 MiB 2146.70 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Sat Mar 12 13:56:24 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 1582ff78:e1a1a2ae:ef5f5c1f:60d86130
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0       8        9        0      active sync   /dev/sda9
       1       8       25        1      active sync   /dev/sdb9

[root@enkcel04 ~]# parted /dev/sda print

Model: LSI MR9261-8i (scsi)
Disk /dev/sda: 1999GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
 1      32.3kB  123MB   123MB   primary   ext3         boot, raid
 2      123MB   132MB   8225kB  primary   ext2
 3      132MB   1968GB  1968GB  primary
 4      1968GB  1999GB  31.1GB  extended               lba
 5      1968GB  1979GB  10.7GB  logical   ext3         raid
 6      1979GB  1989GB  10.7GB  logical   ext3         raid
 7      1989GB  1991GB  2147MB  logical   ext3         raid
 8      1991GB  1994GB  2147MB  logical   ext3         raid
 9      1994GB  1996GB  2147MB  logical   linux-swap   raid
10      1996GB  1997GB  732MB   logical                raid
11      1997GB  1999GB  2492MB  logical   ext3         raid

Information: Don't forget to update /etc/fstab, if necessary.

[root@enkcel04 ~]#
[root@enkcel04 ~]# parted /dev/sdb print

Model: LSI MR9261-8i (scsi)
Disk /dev/sdb: 1999GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
 1      32.3kB  123MB   123MB   primary   ext3         boot, raid
 2      123MB   132MB   8225kB  primary   ext2
 3      132MB   1968GB  1968GB  primary
 4      1968GB  1999GB  31.1GB  extended               lba
 5      1968GB  1979GB  10.7GB  logical   ext3         raid
 6      1979GB  1989GB  10.7GB  logical   ext3         raid
 7      1989GB  1991GB  2147MB  logical   ext3         raid
 8      1991GB  1994GB  2147MB  logical   ext3         raid
 9      1994GB  1996GB  2147MB  logical   linux-swap   raid
10      1996GB  1997GB  732MB   logical                raid
11      1997GB  1999GB  2492MB  logical   ext3         raid

Information: Don't forget to update /etc/fstab, if necessary.

[root@enkcel04 ~]# fdisk -l

Disk /dev/sda: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          15      120456   fd  Linux raid autodetect
/dev/sda2              16          16        8032+  83  Linux
/dev/sda3              17      239246  1921614975   83  Linux
/dev/sda4          239247      243031    30403012+   f  W95 Ext'd (LBA)
/dev/sda5          239247      240551    10482381   fd  Linux raid autodetect
/dev/sda6          240552      241856    10482381   fd  Linux raid autodetect
/dev/sda7          241857      242117     2096451   fd  Linux raid autodetect
/dev/sda8          242118      242378     2096451   fd  Linux raid autodetect
/dev/sda9          242379      242639     2096451   fd  Linux raid autodetect
/dev/sda10         242640      242728      714861   fd  Linux raid autodetect
/dev/sda11         242729      243031     2433816   fd  Linux raid autodetect

Disk /dev/sdb: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          15      120456   fd  Linux raid autodetect
/dev/sdb2              16          16        8032+  83  Linux
/dev/sdb3              17      239246  1921614975   83  Linux
/dev/sdb4          239247      243031    30403012+   f  W95 Ext'd (LBA)
/dev/sdb5          239247      240551    10482381   fd  Linux raid autodetect
/dev/sdb6          240552      241856    10482381   fd  Linux raid autodetect
/dev/sdb7          241857      242117     2096451   fd  Linux raid autodetect
/dev/sdb8          242118      242378     2096451   fd  Linux raid autodetect
/dev/sdb9          242379      242639     2096451   fd  Linux raid autodetect
/dev/sdb10         242640      242728      714861   fd  Linux raid autodetect
/dev/sdb11         242729      243031     2433816   fd  Linux raid autodetect

Disk /dev/sdc: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/sdf: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdf doesn't contain a valid partition table

Disk /dev/sdg: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdg doesn't contain a valid partition table

Disk /dev/sdh: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdh doesn't contain a valid partition table

Disk /dev/sdi: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdi doesn't contain a valid partition table

Disk /dev/sdj: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdj doesn't contain a valid partition table

Disk /dev/sdk: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdk doesn't contain a valid partition table

Disk /dev/sdl: 1998.9 GB, 1998998994944 bytes
255 heads, 63 sectors/track, 243031 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdl doesn't contain a valid partition table

Disk /dev/sdm: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdm doesn't contain a valid partition table

Disk /dev/sdn: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdn doesn't contain a valid partition table

Disk /dev/sdo: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdo doesn't contain a valid partition table

Disk /dev/sdp: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdp doesn't contain a valid partition table

Disk /dev/sdq: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdq doesn't contain a valid partition table

Disk /dev/sdr: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdr doesn't contain a valid partition table

Disk /dev/sds: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sds doesn't contain a valid partition table

Disk /dev/sdt: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdt doesn't contain a valid partition table

Disk /dev/sdu: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdu doesn't contain a valid partition table

Disk /dev/sdv: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdv doesn't contain a valid partition table

Disk /dev/sdw: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdw doesn't contain a valid partition table

Disk /dev/sdx: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdx doesn't contain a valid partition table

Disk /dev/sdy: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdy doesn't contain a valid partition table

Disk /dev/sdz: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdz doesn't contain a valid partition table

Disk /dev/sdaa: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdaa doesn't contain a valid partition table

Disk /dev/sdab: 24.5 GB, 24575868928 bytes
255 heads, 63 sectors/track, 2987 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdab doesn't contain a valid partition table

Disk /dev/sdac: 4009 MB, 4009754624 bytes
126 heads, 22 sectors/track, 2825 cylinders
Units = cylinders of 2772 * 512 = 1419264 bytes

    Device Boot      Start         End      Blocks   Id  System
/dev/sdac1               1        2824     3914053   83  Linux

Disk /dev/md2: 2146 MB, 2146697216 bytes
2 heads, 4 sectors/track, 524096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md2 doesn't contain a valid partition table

Disk /dev/md11: 2492 MB, 2492137472 bytes
2 heads, 4 sectors/track, 608432 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md11 doesn't contain a valid partition table

Disk /dev/md1: 731 MB, 731906048 bytes
2 heads, 4 sectors/track, 178688 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md8: 2146 MB, 2146697216 bytes
2 heads, 4 sectors/track, 524096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md8 doesn't contain a valid partition table

Disk /dev/md7: 2146 MB, 2146697216 bytes
2 heads, 4 sectors/track, 524096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md7 doesn't contain a valid partition table

Disk /dev/md6: 10.7 GB, 10733879296 bytes
2 heads, 4 sectors/track, 2620576 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md6 doesn't contain a valid partition table

Disk /dev/md5: 10.7 GB, 10733879296 bytes
2 heads, 4 sectors/track, 2620576 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md5 doesn't contain a valid partition table

Disk /dev/md4: 123 MB, 123273216 bytes
2 heads, 4 sectors/track, 30096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md4 doesn't contain a valid partition table
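
```shell
# Side note, not part of the captured session: mdadm reports "Array Size"
# in 1 KiB blocks, so the mdadm and fdisk figures above line up exactly.
# For example, md7's 2096384 blocks:
echo $((2096384 * 1024))   # 2146697216 bytes -- matches fdisk's "2146 MB, 2146697216 bytes" for /dev/md7
```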







--X2

CellCLI> list physicaldisk attributes all
         20:0            19              HardDisk        20                      0       0                               false   0_0                     "SEAGATE ST32000SSSUN2.0T"      0514                           2011-03-12T14:02:16-06:00       sata    L3E5ZF  1862.6559999994934G     0       normal
         20:1            18              HardDisk        20                      0       0                               false   0_1                     "SEAGATE ST32000SSSUN2.0T"      0514                           2011-03-12T14:02:21-06:00       sata    L3E2MM  1862.6559999994934G     1       normal
         20:2            17              HardDisk        20                      0       0                               false   0_2                     "SEAGATE ST32000SSSUN2.0T"      0514                           2011-03-12T14:02:26-06:00       sata    L3GX6J  1862.6559999994934G     2       normal
         20:3            16              HardDisk        20                      0       0                               false   0_3                     "SEAGATE ST32000SSSUN2.0T"      0514                           2011-03-12T14:02:31-06:00       sata    L3G8QX  1862.6559999994934G     3       normal
         20:4            15              HardDisk        20                      0       0                               false   0_4                     "SEAGATE ST32000SSSUN2.0T"      0514                           2011-03-12T14:02:37-06:00       sata    L2CG8S  1862.6559999994934G     4       normal
         20:5            14              HardDisk        20                      0       0                               false   0_5                     "SEAGATE ST32000SSSUN2.0T"      0514                           2011-03-12T14:02:42-06:00       sata    L3H3TS  1862.6559999994934G     5       normal
         20:6            13              HardDisk        20                      0       0                               false   0_6                     "SEAGATE ST32000SSSUN2.0T"      0514                           2011-03-12T14:02:47-06:00       sata    L3GYH3  1862.6559999994934G     6       normal
         20:7            12              HardDisk        20                      0       0                               false   0_7                     "SEAGATE ST32000SSSUN2.0T"      0514                           2011-03-12T14:02:52-06:00       sata    L3G73C  1862.6559999994934G     7       normal
         20:8            11              HardDisk        20                      0       0                               false   0_8                     "SEAGATE ST32000SSSUN2.0T"      0514                           2011-03-12T14:02:57-06:00       sata    L3H3TJ  1862.6559999994934G     8       normal
         20:9            10              HardDisk        20                      0       0                               false   0_9                     "SEAGATE ST32000SSSUN2.0T"      0514                           2011-03-12T14:03:02-06:00       sata    L3GXVK  1862.6559999994934G     9       normal
         20:10           9               HardDisk        20                      0       0                               false   0_10                    "SEAGATE ST32000SSSUN2.0T"      0514                           2011-03-12T14:03:08-06:00       sata    L3G8N6  1862.6559999994934G     10      normal
         20:11           8               HardDisk        20                      0       0                               false   0_11                    "SEAGATE ST32000SSSUN2.0T"      0514                           2011-03-12T14:03:13-06:00       sata    L3HLN6  1862.6559999994934G     11      normal
         [1:0:0:0]       FlashDisk       4_0             "MARVELL SD88SA02"      D20Y    2011-03-12T14:03:13-06:00       sas     5080020000f2e2aFMOD0    22.8880615234375G               "PCI Slot: 4; FDOM: 0"         normal
         [1:0:1:0]       FlashDisk       4_1             "MARVELL SD88SA02"      D20Y    2011-03-12T14:03:13-06:00       sas     5080020000f2e2aFMOD1    22.8880615234375G               "PCI Slot: 4; FDOM: 1"         normal
         [1:0:2:0]       FlashDisk       4_2             "MARVELL SD88SA02"      D20Y    2011-03-12T14:03:13-06:00       sas     5080020000f2e2aFMOD2    22.8880615234375G               "PCI Slot: 4; FDOM: 2"         normal
         [1:0:3:0]       FlashDisk       4_3             "MARVELL SD88SA02"      D20Y    2011-03-12T14:03:13-06:00       sas     5080020000f2e2aFMOD3    22.8880615234375G               "PCI Slot: 4; FDOM: 3"         normal
         [2:0:0:0]       FlashDisk       1_0             "MARVELL SD88SA02"      D20Y    2011-03-12T14:03:13-06:00       sas     5080020000f27f0FMOD0    22.8880615234375G               "PCI Slot: 1; FDOM: 0"         normal
         [2:0:1:0]       FlashDisk       1_1             "MARVELL SD88SA02"      D20Y    2011-03-12T14:03:13-06:00       sas     5080020000f27f0FMOD1    22.8880615234375G               "PCI Slot: 1; FDOM: 1"         normal
         [2:0:2:0]       FlashDisk       1_2             "MARVELL SD88SA02"      D20Y    2011-03-12T14:03:13-06:00       sas     5080020000f27f0FMOD2    22.8880615234375G               "PCI Slot: 1; FDOM: 2"         normal
         [2:0:3:0]       FlashDisk       1_3             "MARVELL SD88SA02"      D20Y    2011-03-12T14:03:13-06:00       sas     5080020000f27f0FMOD3    22.8880615234375G               "PCI Slot: 1; FDOM: 3"         normal
         [3:0:0:0]       FlashDisk       5_0             "MARVELL SD88SA02"      D20Y    2011-03-12T14:03:13-06:00       sas     5080020000f2eb4FMOD0    22.8880615234375G               "PCI Slot: 5; FDOM: 0"         normal
         [3:0:1:0]       FlashDisk       5_1             "MARVELL SD88SA02"      D20Y    2011-03-12T14:03:13-06:00       sas     5080020000f2eb4FMOD1    22.8880615234375G               "PCI Slot: 5; FDOM: 1"         normal
         [3:0:2:0]       FlashDisk       5_2             "MARVELL SD88SA02"      D20Y    2011-03-12T14:03:13-06:00       sas     5080020000f2eb4FMOD2    22.8880615234375G               "PCI Slot: 5; FDOM: 2"         normal
         [3:0:3:0]       FlashDisk       5_3             "MARVELL SD88SA02"      D20Y    2011-03-12T14:03:13-06:00       sas     5080020000f2eb4FMOD3    22.8880615234375G               "PCI Slot: 5; FDOM: 3"         normal
         [4:0:0:0]       FlashDisk       2_0             "MARVELL SD88SA02"      D20Y    2011-03-12T14:03:13-06:00       sas     5080020000f2de6FMOD0    22.8880615234375G               "PCI Slot: 2; FDOM: 0"         normal
         [4:0:1:0]       FlashDisk       2_1             "MARVELL SD88SA02"      D20Y    2011-03-12T14:03:13-06:00       sas     5080020000f2de6FMOD1    22.8880615234375G               "PCI Slot: 2; FDOM: 1"         normal
         [4:0:2:0]       FlashDisk       2_2             "MARVELL SD88SA02"      D20Y    2011-03-12T14:03:13-06:00       sas     5080020000f2de6FMOD2    22.8880615234375G               "PCI Slot: 2; FDOM: 2"         normal
         [4:0:3:0]       FlashDisk       2_3             "MARVELL SD88SA02"      D20Y    2011-03-12T14:03:13-06:00       sas     5080020000f2de6FMOD3    22.8880615234375G               "PCI Slot: 2; FDOM: 3"         normal


CellCLI> list lun attributes all
         0_0     CD_00_enkcel04  /dev/sda        HardDisk        0_0     TRUE    FALSE   1861.712890625G         0_0     20:0            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_1     CD_01_enkcel04  /dev/sdb        HardDisk        0_1     TRUE    FALSE   1861.712890625G         0_1     20:1            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_2     CD_02_enkcel04  /dev/sdc        HardDisk        0_2     FALSE   FALSE   1861.712890625G         0_2     20:2            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_3     CD_03_enkcel04  /dev/sdd        HardDisk        0_3     FALSE   FALSE   1861.712890625G         0_3     20:3            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_4     CD_04_enkcel04  /dev/sde        HardDisk        0_4     FALSE   FALSE   1861.712890625G         0_4     20:4            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_5     CD_05_enkcel04  /dev/sdf        HardDisk        0_5     FALSE   FALSE   1861.712890625G         0_5     20:5            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_6     CD_06_enkcel04  /dev/sdg        HardDisk        0_6     FALSE   FALSE   1861.712890625G         0_6     20:6            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_7     CD_07_enkcel04  /dev/sdh        HardDisk        0_7     FALSE   FALSE   1861.712890625G         0_7     20:7            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_8     CD_08_enkcel04  /dev/sdi        HardDisk        0_8     FALSE   FALSE   1861.712890625G         0_8     20:8            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_9     CD_09_enkcel04  /dev/sdj        HardDisk        0_9     FALSE   FALSE   1861.712890625G         0_9     20:9            0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_10    CD_10_enkcel04  /dev/sdk        HardDisk        0_10    FALSE   FALSE   1861.712890625G         0_10    20:10           0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         0_11    CD_11_enkcel04  /dev/sdl        HardDisk        0_11    FALSE   FALSE   1861.712890625G         0_11    20:11           0       "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"  normal
         1_0     FD_00_enkcel04  /dev/sdq        FlashDisk       1_0     FALSE   FALSE   22.8880615234375G       100.0   [2:0:0:0]       normal
         1_1     FD_01_enkcel04  /dev/sdr        FlashDisk       1_1     FALSE   FALSE   22.8880615234375G       100.0   [2:0:1:0]       normal
         1_2     FD_02_enkcel04  /dev/sds        FlashDisk       1_2     FALSE   FALSE   22.8880615234375G       100.0   [2:0:2:0]       normal
         1_3     FD_03_enkcel04  /dev/sdt        FlashDisk       1_3     FALSE   FALSE   22.8880615234375G       100.0   [2:0:3:0]       normal
         2_0     FD_04_enkcel04  /dev/sdy        FlashDisk       2_0     FALSE   FALSE   22.8880615234375G       100.0   [4:0:0:0]       normal
         2_1     FD_05_enkcel04  /dev/sdz        FlashDisk       2_1     FALSE   FALSE   22.8880615234375G       100.0   [4:0:1:0]       normal
         2_2     FD_06_enkcel04  /dev/sdaa       FlashDisk       2_2     FALSE   FALSE   22.8880615234375G       100.0   [4:0:2:0]       normal
         2_3     FD_07_enkcel04  /dev/sdab       FlashDisk       2_3     FALSE   FALSE   22.8880615234375G       100.0   [4:0:3:0]       normal
         4_0     FD_08_enkcel04  /dev/sdm        FlashDisk       4_0     FALSE   FALSE   22.8880615234375G       100.0   [1:0:0:0]       normal
         4_1     FD_09_enkcel04  /dev/sdn        FlashDisk       4_1     FALSE   FALSE   22.8880615234375G       100.0   [1:0:1:0]       normal
         4_2     FD_10_enkcel04  /dev/sdo        FlashDisk       4_2     FALSE   FALSE   22.8880615234375G       100.0   [1:0:2:0]       normal
         4_3     FD_11_enkcel04  /dev/sdp        FlashDisk       4_3     FALSE   FALSE   22.8880615234375G       100.0   [1:0:3:0]       normal
         5_0     FD_12_enkcel04  /dev/sdu        FlashDisk       5_0     FALSE   FALSE   22.8880615234375G       100.0   [3:0:0:0]       normal
         5_1     FD_13_enkcel04  /dev/sdv        FlashDisk       5_1     FALSE   FALSE   22.8880615234375G       100.0   [3:0:1:0]       normal
         5_2     FD_14_enkcel04  /dev/sdw        FlashDisk       5_2     FALSE   FALSE   22.8880615234375G       100.0   [3:0:2:0]       normal
         5_3     FD_15_enkcel04  /dev/sdx        FlashDisk       5_3     FALSE   FALSE   22.8880615234375G       100.0   [3:0:3:0]       normal


CellCLI> list celldisk attributes all
         CD_00_enkcel04          2011-03-29T14:05:51-05:00       /dev/sda        /dev/sda3       HardDisk        0       0       7ebe749b-5f94-427c-a636-d793f691f795    none    0_0     0             1832.59375G      normal
         CD_01_enkcel04          2011-03-29T14:05:55-05:00       /dev/sdb        /dev/sdb3       HardDisk        0       0       ec5ca5d0-25a2-4f16-b8da-7ca87106f09b    none    0_1     0             1832.59375G      normal
         CD_02_enkcel04          2011-03-29T14:05:56-05:00       /dev/sdc        /dev/sdc        HardDisk        0       0       81d59e7b-795c-4c68-8151-3d1a1574cbd2    none    0_2     0             1861.703125G     normal
         CD_03_enkcel04          2011-03-29T14:05:56-05:00       /dev/sdd        /dev/sdd        HardDisk        0       0       27f3a507-cb13-43b3-ad87-a54d57984013    none    0_3     0             1861.703125G     normal
         CD_04_enkcel04          2011-03-29T14:05:57-05:00       /dev/sde        /dev/sde        HardDisk        0       0       3732d8ee-1cc4-4acd-a39d-4467668a2211    none    0_4     0             1861.703125G     normal
         CD_05_enkcel04          2011-03-29T14:05:58-05:00       /dev/sdf        /dev/sdf        HardDisk        0       0       601e610b-ec1a-4b8a-8ef9-0faa6d9c754a    none    0_5     0             1861.703125G     normal
         CD_06_enkcel04          2011-03-29T14:05:58-05:00       /dev/sdg        /dev/sdg        HardDisk        0       0       bf306119-c111-4538-b10e-d8279db6835a    none    0_6     0             1861.703125G     normal
         CD_07_enkcel04          2011-03-29T14:05:59-05:00       /dev/sdh        /dev/sdh        HardDisk        0       0       67d280a4-dce7-4139-9a19-2ff7b2d5aa45    none    0_7     0             1861.703125G     normal
         CD_08_enkcel04          2011-03-29T14:06:00-05:00       /dev/sdi        /dev/sdi        HardDisk        0       0       e348a4a5-cc49-448d-9b82-4dac64dddf8a    none    0_8     0             1861.703125G     normal
         CD_09_enkcel04          2011-03-29T14:06:01-05:00       /dev/sdj        /dev/sdj        HardDisk        0       0       ce155b98-d8c8-454d-8273-a8feb66546d9    none    0_9     0             1861.703125G     normal
         CD_10_enkcel04          2011-03-29T14:06:01-05:00       /dev/sdk        /dev/sdk        HardDisk        0       0       e4c88e9d-5d9d-4825-889e-0bce857bd85c    none    0_10    0             1861.703125G     normal
         CD_11_enkcel04          2011-03-29T14:06:02-05:00       /dev/sdl        /dev/sdl        HardDisk        0       0       3c5a73a8-7a04-4213-a7c8-8b2d0f63de7f    none    0_11    0             1861.703125G     normal
         FD_00_enkcel04          2011-03-25T14:05:33-05:00       /dev/sdq        /dev/sdq        FlashDisk       0       0       b3cf6d51-17ee-4269-a597-4af2d1e1f1ad    none    1_0     22.875G       normal
         FD_01_enkcel04          2011-03-25T14:05:34-05:00       /dev/sdr        /dev/sdr        FlashDisk       0       0       3ca528d8-de3b-4fa8-919a-7ef45f131a51    none    1_1     22.875G       normal
         FD_02_enkcel04          2011-03-25T14:05:35-05:00       /dev/sds        /dev/sds        FlashDisk       0       0       fb19081d-685e-4b48-867a-5b09529fd786    none    1_2     22.875G       normal
         FD_03_enkcel04          2011-03-25T14:05:35-05:00       /dev/sdt        /dev/sdt        FlashDisk       0       0       33c049fe-0f90-4b25-afa7-e41c5db4bb8d    none    1_3     22.875G       normal
         FD_04_enkcel04          2011-03-25T14:05:36-05:00       /dev/sdy        /dev/sdy        FlashDisk       0       0       0153e6d7-5116-4740-8b02-7b74d4b38aec    none    2_0     22.875G       normal
         FD_05_enkcel04          2011-03-25T14:05:37-05:00       /dev/sdz        /dev/sdz        FlashDisk       0       0       8b5452b1-5fb0-48e0-8887-416760f08301    none    2_1     22.875G       normal
         FD_06_enkcel04          2011-03-25T14:05:38-05:00       /dev/sdaa       /dev/sdaa       FlashDisk       0       0       2771ec81-04f3-4935-a5ac-d06f46c0fbe0    none    2_2     22.875G       normal
         FD_07_enkcel04          2011-03-25T14:05:38-05:00       /dev/sdab       /dev/sdab       FlashDisk       0       0       8aaaf99f-736a-4e01-80eb-88efebd4dcb3    none    2_3     22.875G       normal
         FD_08_enkcel04          2011-03-25T14:05:39-05:00       /dev/sdm        /dev/sdm        FlashDisk       0       0       25f72e72-a962-4b9a-92c5-b8666e83a118    none    4_0     22.875G       normal
         FD_09_enkcel04          2011-03-25T14:05:40-05:00       /dev/sdn        /dev/sdn        FlashDisk       0       0       c023fe18-e077-498f-99fa-1dd61cd83cb1    none    4_1     22.875G       normal
         FD_10_enkcel04          2011-03-25T14:05:40-05:00       /dev/sdo        /dev/sdo        FlashDisk       0       0       388d006b-4c26-427a-9bd2-6b2ada755f3d    none    4_2     22.875G       normal
         FD_11_enkcel04          2011-03-25T14:05:41-05:00       /dev/sdp        /dev/sdp        FlashDisk       0       0       c1a2f418-85d5-4fe2-bc67-9225e48c5184    none    4_3     22.875G       normal
         FD_12_enkcel04          2011-03-25T14:05:42-05:00       /dev/sdu        /dev/sdu        FlashDisk       0       0       039b1477-16ee-4d1e-aac9-b8e6ceefd6de    none    5_0     22.875G       normal
         FD_13_enkcel04          2011-03-25T14:05:43-05:00       /dev/sdv        /dev/sdv        FlashDisk       0       0       0bd3d890-36cc-4e66-b404-c16af237d6b5    none    5_1     22.875G       normal
         FD_14_enkcel04          2011-03-25T14:05:43-05:00       /dev/sdw        /dev/sdw        FlashDisk       0       0       ee31e0ca-1ff9-4ea8-9a61-d1fe9cf66a85    none    5_2     22.875G       normal
         FD_15_enkcel04          2011-03-25T14:05:44-05:00       /dev/sdx        /dev/sdx        FlashDisk       0       0       0a808b2f-ea08-48f0-abfc-8d08cffa7d72    none    5_3     22.875G       normal


CellCLI> list griddisk attributes all
         DATA_CD_00_enkcel04             CD_00_enkcel04          2011-03-29T14:07:35-05:00       HardDisk        0       cb535b02-e9bf-41d7-8e22-93009fff14fd    32M             1356G           active
         DATA_CD_01_enkcel04             CD_01_enkcel04          2011-03-29T14:07:35-05:00       HardDisk        0       c691998e-f6c3-4337-b35a-9f94076c996c    32M             1356G           active
         DATA_CD_02_enkcel04             CD_02_enkcel04          2011-03-29T14:07:35-05:00       HardDisk        0       57d84ced-040b-4446-96e7-b72d72c05534    32M             1356G           active
         DATA_CD_03_enkcel04             CD_03_enkcel04          2011-03-29T14:07:35-05:00       HardDisk        0       9420aaaf-71e5-4d82-94ff-fc4c0a73537a    32M             1356G           active
         DATA_CD_04_enkcel04             CD_04_enkcel04          2011-03-29T14:07:35-05:00       HardDisk        0       dbf36cae-e9e6-4cea-9cc8-3d04b97d91c7    32M             1356G           active
         DATA_CD_05_enkcel04             CD_05_enkcel04          2011-03-29T14:07:35-05:00       HardDisk        0       e94f2844-3055-4c12-af18-890e173b134d    32M             1356G           active
         DATA_CD_06_enkcel04             CD_06_enkcel04          2011-03-29T14:07:35-05:00       HardDisk        0       fe5db412-b695-493b-b3a2-6121cf5957ae    32M             1356G           active
         DATA_CD_07_enkcel04             CD_07_enkcel04          2011-03-29T14:07:35-05:00       HardDisk        0       9452bb5e-c11f-4fa6-9323-9afad0d1f164    32M             1356G           active
         DATA_CD_08_enkcel04             CD_08_enkcel04          2011-03-29T14:07:35-05:00       HardDisk        0       90655419-101c-4429-ac46-63eb4438692c    32M             1356G           active
         DATA_CD_09_enkcel04             CD_09_enkcel04          2011-03-29T14:07:35-05:00       HardDisk        0       4d642e65-5b3b-4f7b-818d-2503e4bf3982    32M             1356G           active
         DATA_CD_10_enkcel04             CD_10_enkcel04          2011-03-29T14:07:35-05:00       HardDisk        0       54768dd2-c63f-4d84-bfad-bd7d1e964ee6    32M             1356G           active
         DATA_CD_11_enkcel04             CD_11_enkcel04          2011-03-29T14:07:35-05:00       HardDisk        0       97aa7662-a126-44d6-b472-37c8d1ec7292    32M             1356G           active
         DBFS_DG_CD_02_enkcel04          CD_02_enkcel04          2011-03-29T14:06:46-05:00       HardDisk        0       de151b87-1eb2-48ae-976a-5e746d5a8580    1832.59375G     29.109375G      active
         DBFS_DG_CD_03_enkcel04          CD_03_enkcel04          2011-03-29T14:06:47-05:00       HardDisk        0       130e30fd-fba3-4edf-9870-c6b0a7241044    1832.59375G     29.109375G      active
         DBFS_DG_CD_04_enkcel04          CD_04_enkcel04          2011-03-29T14:06:48-05:00       HardDisk        0       935a39ea-9e4d-4979-83ff-b6fed9ecce48    1832.59375G     29.109375G      active
         DBFS_DG_CD_05_enkcel04          CD_05_enkcel04          2011-03-29T14:06:49-05:00       HardDisk        0       7da87467-7329-4f32-8667-73c22b8f2e05    1832.59375G     29.109375G      active
         DBFS_DG_CD_06_enkcel04          CD_06_enkcel04          2011-03-29T14:06:50-05:00       HardDisk        0       edc12d6b-66c2-4648-8605-162337e3c2cc    1832.59375G     29.109375G      active
         DBFS_DG_CD_07_enkcel04          CD_07_enkcel04          2011-03-29T14:06:50-05:00       HardDisk        0       b60a2162-ed3c-47df-9fd0-68868dc1df86    1832.59375G     29.109375G      active
         DBFS_DG_CD_08_enkcel04          CD_08_enkcel04          2011-03-29T14:06:51-05:00       HardDisk        0       035a0024-663b-4ac5-be35-5027c790c241    1832.59375G     29.109375G      active
         DBFS_DG_CD_09_enkcel04          CD_09_enkcel04          2011-03-29T14:06:52-05:00       HardDisk        0       c64080a5-22c8-46fa-81df-6175ce2a1066    1832.59375G     29.109375G      active
         DBFS_DG_CD_10_enkcel04          CD_10_enkcel04          2011-03-29T14:06:53-05:00       HardDisk        0       f0f34182-4751-4011-8496-d25a74192b09    1832.59375G     29.109375G      active
         DBFS_DG_CD_11_enkcel04          CD_11_enkcel04          2011-03-29T14:06:54-05:00       HardDisk        0       4e6c5015-d93b-4dab-a6d1-c09c850e542d    1832.59375G     29.109375G      active
         RECO_CD_00_enkcel04             CD_00_enkcel04          2011-03-29T14:07:40-05:00       HardDisk        0       0da3ed9b-35e1-40e1-801c-08a9d7a614bd    1356.046875G    476.546875G     active
         RECO_CD_01_enkcel04             CD_01_enkcel04          2011-03-29T14:07:40-05:00       HardDisk        0       229eac42-ee11-4752-96d0-1953f412e383    1356.046875G    476.546875G     active
         RECO_CD_02_enkcel04             CD_02_enkcel04          2011-03-29T14:07:40-05:00       HardDisk        0       3094f748-517c-4950-bf09-b7aeece47790    1356.046875G    476.546875G     active
         RECO_CD_03_enkcel04             CD_03_enkcel04          2011-03-29T14:07:40-05:00       HardDisk        0       d8340700-fe52-4afa-b837-17419bc4bfbf    1356.046875G    476.546875G     active
         RECO_CD_04_enkcel04             CD_04_enkcel04          2011-03-29T14:07:40-05:00       HardDisk        0       418020ea-e3df-418a-ad70-90bd09c1ec1b    1356.046875G    476.546875G     active
         RECO_CD_05_enkcel04             CD_05_enkcel04          2011-03-29T14:07:40-05:00       HardDisk        0       5ad78a48-ff99-4268-ae7e-fa50f909e9b2    1356.046875G    476.546875G     active
         RECO_CD_06_enkcel04             CD_06_enkcel04          2011-03-29T14:07:40-05:00       HardDisk        0       fa03466f-329d-4c31-9a61-ba2ceb6e67c1    1356.046875G    476.546875G     active
         RECO_CD_07_enkcel04             CD_07_enkcel04          2011-03-29T14:07:40-05:00       HardDisk        0       d6f247ed-6c97-4216-8c21-2f4fd92d58af    1356.046875G    476.546875G     active
         RECO_CD_08_enkcel04             CD_08_enkcel04          2011-03-29T14:07:40-05:00       HardDisk        0       42494e34-2e5a-4b17-a7bf-bcf23c5b18a1    1356.046875G    476.546875G     active
         RECO_CD_09_enkcel04             CD_09_enkcel04          2011-03-29T14:07:40-05:00       HardDisk        0       ca8fb645-f3c2-4dca-9224-d9181d23bb0f    1356.046875G    476.546875G     active
         RECO_CD_10_enkcel04             CD_10_enkcel04          2011-03-29T14:07:40-05:00       HardDisk        0       e13d011a-6fed-477f-a3c3-3792beee3184    1356.046875G    476.546875G     active
         RECO_CD_11_enkcel04             CD_11_enkcel04          2011-03-29T14:07:40-05:00       HardDisk        0       3cde9c29-3119-44e9-a9d9-bd3f03ca2829    1356.046875G    476.546875G     active


}}}

http://nnawaz.blogspot.com/2019/07/how-to-check-active-enabled-physical.html
{{{
[root@prod_node1 ~]# dbmcli
DBMCLI: Release  - Production on Wed Jul 24 00:06:08 GMT-00:00 2019 Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved. 
DBMCLI> LIST DBSERVER attributes coreCount         
16/48
[root@prod_node2 ~]# dbmcli
DBMCLI: Release  - Production on Wed Jul 24 00:32:34 GMT 2019 Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.
DBMCLI> LIST DBSERVER attributes coreCount         
16/48                          
[root@prod_node3 ~]# dbmcli
DBMCLI: Release  - Production on Wed Jul 24 00:35:20 GMT 2019 Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.
DBMCLI> LIST DBSERVER attributes coreCount         
16/48


 SQL> show parameter cpu_count

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cpu_count                            integer     32
}}}
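The two outputs above are consistent if hyperthreading is enabled (an assumption here): 16 active cores out of 48 (coreCount 16/48) times 2 threads per core gives the cpu_count of 32 seen by the database. A quick sanity check:

```shell
# assumption: hyperthreading enabled, 2 threads per physical core
active_cores=16        # from "LIST DBSERVER attributes coreCount" -> 16/48
threads_per_core=2
echo $((active_cores * threads_per_core))   # prints 32, matching cpu_count
```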

https://community.oracle.com/tech/apps-infra/discussion/4277611/exadata-data-active-core-count-and-license
http://www.freelists.org/post/oracle-l/Performance-metrics,3
''-- "Oracle Exadata Database Machine Best Practices Series"''
Oracle E-Business Suite on Exadata http://goo.gl/2Yc4d 1133355.1 1110648.1 741818.1 557738.1 1055938.1
Oracle Siebel on Exadata http://goo.gl/3R6Iy 1187674.1 744769.1
Oracle Peoplesoft on Exadata http://goo.gl/Sg1yX 744769.1
Oracle Exadata and OLTP Applications http://goo.gl/sDCKF  
* 757552.1 Exadata Best Practices
* 1269706.1 OLTP Best Practices
* 888828.1
Using Resource Manager on Exadata http://goo.gl/db5cx
* 1207483.1 CPU Resource Manager - Example: How to control CPU Resources using the Resource Manager [ID 471265.1]
* 1208064.1 Instance Caging
* 1208104.1 max_utilization_limit
* 1208133.1 Managing Runaway Queries
Migrating to Oracle Exadata http://goo.gl/DCxpg
* 785351.1 - Upgrade Companion
* 1055938.1 - Database Machine using Data Guard
* 413484.1 - Data Guard Heterogeneous Support
* 737460.1 - Changing Storage Characteristics on Logical Standby
* 1054431.1 - DBFS
* 888828.1 - Latest Exadata Software
Using DBFS on Exadata http://goo.gl/oOFs1
* 1191144.1 - Configuring a database for DBFS on Exadata
* 1054431.1 - Configuring DBFS on Exadata
Monitoring Oracle Exadata http://goo.gl/vYzpD
* 1110675.1 - Manageability Best Practices
* ASR installation guide at OTN
Oracle Exadata Backup and Recovery http://goo.gl/FBrIa
Oracle MAA and Oracle Exadata http://goo.gl/Q1a8d
* 888828.1 - Exadata recommended software
* 1262380.1 - Exadata testing and patching practices
* 757552.1 - Hub of MAA and Exadata best practices
* 1070954.1 - Exadata MAA HealthCheck (every 3months)
* 1110675.1 - Exadata Monitoring
* ASR (OTN)
* 565535.1 - Flashback MOS
* Data Guard
* 1206603.1
* 960510.1 
* 951152.1
* 1265700.1 - Data Guard Standby-First Patch Apply
* Patching
* 1262380.1
* 757552.1 - Hub of MAA and Exadata Best Practices
* Storage Grid High Redundancy and file placement (OTN)
Troubleshooting Oracle Exadata http://goo.gl/USRIX
* 1274324.1 - Exadata X2-2 Diagnosability & Troubleshooting Best Practices
* 1283341.1 - Exadata Hardware Alert: All logical drives are in writethrough caching mode
Patching and Upgrading Oracle Exadata http://goo.gl/B2ztC 
* metalink notes 888828.1 (11.2) 835032.1 (11.1) 1262380.1 1265998.1 1265700.1
Oracle Exadata Health Check http://goo.gl/Pyw4k 
* metalink notes 1070954.1 757552.1 888828.1 835032.1


https://www.evernote.com/shard/s48/sh/b59aa9c0-4df9-44ac-b81f-8b23ae4ce7ea/4b8573572afe79a983fb6979c443cf09

* related tiddlers
[[cpu - SPECint_rate2006]]
[[cpu core comparison]]
[[Exadata CPU comparative sizing]]
http://www.oracle.com/technetwork/articles/oem/exadata-commands-intro-402431.html
http://arup.blogspot.com/p/collection-of-some-of-my-very-popular.html
http://www.proligence.com/pres/nyoug/2012/nyoug_mar13_exadata_article.pdf
http://www.centroid.com/knowledgebase/blog/exadata-initial-installation-validation

http://www.unixarena.com/2014/11/exadata-storage-cell-commands-cheat-sheet.html






http://www.oracle.com/technetwork/oem/exadata-management/em12c-exadata-lcm-webcast-1721225.pdf

Guide to a create a performance monitoring dashboard report for DB Machine targets discovered by Enterprise Manager Cloud Control 12c (Doc ID 1458346.1)

note: the rep user is the os user 

http://docs.oracle.com/cd/E24628_01/doc.121/e27442/ch4_post_discovery.htm#EMXIG298
https://www.dropbox.com/sh/l8rrab8u8fli850/sXdr8PmWhG
https://www.dropbox.com/home/Documents/KnowledgeFiles/Books/Oracle/Exadata/DataSheets


Oracle System Options http://www.oracle.com/technetwork/documentation/oracle-system-options-190050.html#solid
https://twitter.com/karlarao/status/375289300360765440
http://www.evernote.com/shard/s48/sh/320a6b86-5203-499b-823c-577e9b641188/ec46229148b6b09478dbce95c27bc00b
sort this http://dbastreet.com/blog/?page_id=603
* the EM plugins for the ''db nodes'' work just like monitoring any database server.. 
* for the cells, the OMS server has to be able to log in passwordlessly to the ''cellmonitor'' account on the cell servers, and that's it. the OMS just executes SSH commands and runs cellcli on the cells to collect data points that are stored on the OMS server for graphing
* it actually executes a command similar to this ''ssh -l cellmonitor cell1 cellcli -e 'list cell detail' ''
* and cellmonitor only has access to cellcli
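The polling described above can be sketched as a small loop (cell1..cell3 are hypothetical host names; assumes passwordless SSH to cellmonitor is already set up). The command is only printed here, so the sketch is safe to dry-run:

```shell
# dry-run: print the OMS-style polling command for each cell
# (pipe each line to sh, or drop the echo, to actually execute it)
for cell in cell1 cell2 cell3; do
  echo "ssh -l cellmonitor $cell cellcli -e 'list cell detail'"
done
```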

{{{
[celladmin@cell1 ~]$ ssh -l root cell1 ls -ltra ~cellmonitor/
root@cell1's password:
total 48
-rw-r--r-- 1 cellmonitor cellmonitor  658 Jul 20 14:14 .zshrc
-r--r--r-- 1 root        root          49 Jul 20 14:14 .profile
drwxr-xr-x 4 cellmonitor cellmonitor 4096 Jul 20 14:14 .mozilla
drwxr-xr-x 3 cellmonitor cellmonitor 4096 Jul 20 14:14 .kde
-rw-r--r-- 1 cellmonitor cellmonitor  515 Jul 20 14:14 .emacs
-r-xr-xr-x 1 root        root        1760 Jul 20 14:14 cellcli
-r--r--r-- 1 root        cellmonitor  162 Jul 20 14:14 .bashrc
-r--r--r-- 1 root        cellmonitor  214 Jul 20 14:14 .bash_profile
drwxr-xr-x 4 root        root        4096 Jul 20 14:14 ..
drwx------ 4 cellmonitor cellmonitor 4096 Aug 18 12:45 .
-rw------- 1 cellmonitor cellmonitor  263 Aug 18 12:55 .bash_history
}}}

* see why below.. the cellmonitor account runs in a restricted shell (rbash), so it can only execute cellcli 

{{{
[cellmonitor@cell1 ~]$ ls -ltr
-rbash: ls: command not found
[cellmonitor@cell1 ~]$
[cellmonitor@cell1 ~]$ which
-rbash: /usr/bin/which: restricted: cannot specify `/' in command names
[cellmonitor@cell1 ~]$
[cellmonitor@cell1 ~]$ cellcli
CellCLI: Release 11.2.2.2.0 - Production on Thu Aug 18 13:47:58 CDT 2011

Copyright (c) 2007, 2009, Oracle.  All rights reserved.
Cell Efficiency Ratio: 22M

CellCLI>

}}}
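The errors above are the restricted shell (rbash) at work: PATH is locked down for cellmonitor and any command name containing a `/` is refused, leaving cellcli as effectively the only runnable command. The slash restriction can be reproduced locally with bash's restricted mode (a generic sketch, not cell-specific):

```shell
# bash -r starts a restricted shell, like the rbash used for cellmonitor;
# it refuses any command name that contains a slash
bash -r -c '/bin/ls' 2>&1 || true
```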


* follow the steps below to setup passwordless SSH 
{{{


## PASSWORDLESS SSH ORACLE TO CELLADMIN
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
cd ~/.ssh
/usr/bin/ssh-keygen -t dsa
<then just hit ENTER all the way>

Repeat the above steps for each node in the cluster

cd ~/.ssh
ls -l *.pub
ssh db1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh celladmin@cell1 cat ~celladmin/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh celladmin@cell2 cat ~celladmin/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh celladmin@cell3 cat ~celladmin/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

scp -p ~/.ssh/authorized_keys celladmin@cell1:.ssh/authorized_keys
scp -p ~/.ssh/authorized_keys celladmin@cell2:.ssh/authorized_keys
scp -p ~/.ssh/authorized_keys celladmin@cell3:.ssh/authorized_keys
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
ssh -l oracle db1 date;ssh -l celladmin cell1 date;ssh -l celladmin cell2 date;ssh -l celladmin cell3 date
Thu Aug 18 13:32:14 CDT 2011
Thu Aug 18 13:32:07 CDT 2011
Thu Aug 18 13:32:07 CDT 2011
Thu Aug 18 13:32:04 CDT 2011



## PASSWORDLESS SSH ORACLE TO CELLMONITOR
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
cd ~/.ssh
/usr/bin/ssh-keygen -t dsa
<then just hit ENTER all the way>

Repeat the above steps for each node in the cluster

cd ~/.ssh
ls -l *.pub
scp id_dsa.pub celladmin@cell1:~
ssh -l root cell1 mkdir -p ~cellmonitor/.ssh
ssh -l root cell1 chmod 700 ~cellmonitor/.ssh
ssh -l root cell1 touch ~cellmonitor/.ssh/authorized_keys
ssh -l root cell1 chown -R cellmonitor:cellmonitor ~cellmonitor/.ssh
ssh -l root cell1 ls -ltra ~cellmonitor
ssh -l root cell1 "cat ~celladmin/id_dsa.pub >> ~cellmonitor/.ssh/authorized_keys"
ssh -l root cell1 rm ~celladmin/id_dsa.pub

Repeat the above steps for each node in the cluster

cd ~/.ssh
ls -l *.pub
scp id_dsa.pub celladmin@cell2:~
ssh -l root cell2 mkdir -p ~cellmonitor/.ssh
ssh -l root cell2 chmod 700 ~cellmonitor/.ssh
ssh -l root cell2 touch ~cellmonitor/.ssh/authorized_keys
ssh -l root cell2 chown -R cellmonitor:cellmonitor ~cellmonitor/.ssh
ssh -l root cell2 ls -ltra ~cellmonitor
ssh -l root cell2 "cat ~celladmin/id_dsa.pub >> ~cellmonitor/.ssh/authorized_keys"
ssh -l root cell2 rm ~celladmin/id_dsa.pub

cd ~/.ssh
ls -l *.pub
scp id_dsa.pub celladmin@cell3:~
ssh -l root cell3 mkdir -p ~cellmonitor/.ssh
ssh -l root cell3 chmod 700 ~cellmonitor/.ssh
ssh -l root cell3 touch ~cellmonitor/.ssh/authorized_keys
ssh -l root cell3 chown -R cellmonitor:cellmonitor ~cellmonitor/.ssh
ssh -l root cell3 ls -ltra ~cellmonitor
ssh -l root cell3 "cat ~celladmin/id_dsa.pub >> ~cellmonitor/.ssh/authorized_keys"
ssh -l root cell3 rm ~celladmin/id_dsa.pub


log in on db1.. and execute the following commands
ssh -l cellmonitor cell1 cellcli -e 'list cell detail' 
ssh -l cellmonitor cell2 cellcli -e 'list cell detail' 
ssh -l cellmonitor cell3 cellcli -e 'list cell detail' 

}}}


* TO ADD ROOT ON DB1 TO PASSWORDLESS SSH
{{{
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
cd ~/.ssh
/usr/bin/ssh-keygen -t dsa

ssh db1 cat ~root/.ssh/id_dsa.pub >> ~root/.ssh/authorized_keys
ssh -l root cell1 cat ~root/.ssh/authorized_keys >> ~root/.ssh/authorized_keys
scp -p authorized_keys cell1:~root/.ssh/authorized_keys
scp -p authorized_keys cell2:~root/.ssh/authorized_keys
scp -p authorized_keys cell3:~root/.ssh/authorized_keys
ssh db1 date;ssh cell1 date;ssh cell2 date;ssh cell3 date
}}}



* TO ADD ROOT ON DB1 AND DB2 TO PASSWORDLESS SSH
{{{
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
cd ~/.ssh
/usr/bin/ssh-keygen -t dsa

ssh -l root db1 cat ~root/.ssh/id_dsa.pub >> ~root/.ssh/authorized_keys
ssh -l root db2 cat ~root/.ssh/id_dsa.pub >> ~root/.ssh/authorized_keys
ssh -l root cell1 cat ~root/.ssh/authorized_keys >> ~root/.ssh/authorized_keys

scp -p authorized_keys db2:~root/.ssh/authorized_keys
scp -p authorized_keys cell1:~root/.ssh/authorized_keys
scp -p authorized_keys cell2:~root/.ssh/authorized_keys
scp -p authorized_keys cell3:~root/.ssh/authorized_keys
ssh db1 date; ssh db2 date;ssh cell1 date;ssh cell2 date;ssh cell3 date
}}}



-- Passwordless SSH
{{{
To do this, first create an SSH keypair on the Grid Control server (one time only):
	ssh-keygen -t dsa -f id_dsa
	mv id_dsa.pub id_dsa ~oracle/.ssh/
	cd ~oracle/.ssh/
Next, perform each of these steps for every storage cell:
-- Passwordless SSH to cellmonitor
	scp id_dsa.pub celladmin@cell1:~
	ssh -l root cell1 "mkdir ~cellmonitor/.ssh; chmod 700 ~cellmonitor/.ssh; cat ~celladmin/id_dsa.pub >> ~cellmonitor/.ssh/authorized_keys; chown -Rf cellmonitor:cellmonitor ~cellmonitor/.ssh"
	ssh -l cellmonitor cell1 cellcli -e 'list cell detail'

-- Passwordless SSH to celladmin
	scp id_dsa.pub celladmin@cell1:~
	ssh -l root cell1 "mkdir ~celladmin/.ssh; chmod 700 ~celladmin/.ssh; cat ~celladmin/id_dsa.pub >> ~celladmin/.ssh/authorized_keys; chown -Rf celladmin:celladmin ~celladmin/.ssh"
	ssh -l celladmin cell1 cellcli -e 'list cell detail'
	
After all of these steps have been completed, the Exadata Storage Management Plug-In can be installed and deployed. 
}}}


''Agent Failover''
http://blogs.oracle.com/XPSONHA/entry/failover_capability_for_plugins_exadata
http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/exadata/exadatav2/38_DBM_EM_Plugin_HA/38_dbm_em_plugin_ha_viewlet_swf.html
''Monitoring Exadata database machine with Oracle Enterprise Manager 11g'' http://dbastreet.com/blog/?p=674
''“Plugging” in the Database Machine'' http://dbatrain.wordpress.com/2011/06/

''Oracle Enterprise Manager Grid Control Exadata Monitoring plug-in bundle'' http://www.oracle.com/technetwork/oem/grid-control/downloads/devlic-188770.html  <-- download link
PDU Threshold Settings for Oracle Exadata Database Machine using Enterprise Manager [ID 1299851.1]

* Install and Configure the Agent and the Plugins
Follow MOS Note  1110675.1 to install the agents and configure the exadata cell plugin
Oracle Exadata Avocent MergePoint Unity Switch http://download.oracle.com/docs/cd/E11857_01/install.111/e20086/toc.htm
Oracle Exadata Cisco Switch http://download.oracle.com/docs/cd/E11857_01/install.111/e20084/toc.htm
Oracle Exadata ILOM http://download.oracle.com/docs/cd/E11857_01/install.111/e20083/toc.htm
Oracle Exadata Infiniband Switch http://download.oracle.com/docs/cd/E11857_01/install.111/e20085/toc.htm
Oracle Exadata Power Distribution Unit http://download.oracle.com/docs/cd/E11857_01/install.111/e20087/toc.htm
Oracle Exadata Storage Server http://download.oracle.com/docs/cd/E11857_01/install.111/e14591/toc.htm
* Additional tutorials with screenshots on configuring the plugins can be found below
Monitor Exadata Database Machine: Agent Installation and Configuration http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5504,2
Monitor Exadata Database Machine: Configuring ASM and Database Targets http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5505,2
Monitor Exadata Database Machine: Configuring the Exadata Storage Server Plug-in http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5506,2
Monitor Exadata Database Machine: Configuring the ILOM Plug-in http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5507,2
Monitor Exadata Database Machine: Configuring the InfiniBand Switch Plug-in http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5508,2
Monitor Exadata Database Machine: Configuring the Cisco Ethernet Switch Plug-in http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5509,2
Monitor Exadata Database Machine: Configuring the Avocent KVM Switch Plug-in http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5510,2
Monitor Exadata Database Machine: Configuring User Defined Metrics for Additional Network Monitoring http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5511,2
Monitor Exadata Database Machine: Configuring Plug-ins for High Availability http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5512,2
Monitor Exadata Database Machine: Creating a Dashboard for Database Machine http://apex.oracle.com/pls/apex/f?p=44785:24:346990567800120::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5513,2


''Exadata Plugin names''
oracle_cell					oracle_cell_11.2.2.3.jar
cisco_switch				cisco_switch.jar
kvm							kvm.jar
oracle_x2ib					oracle_x2_ib.jar
							oracle_x2cn.jar
							oracle_exadata_hc.jar
							pdu.jar
							




Exadata X5-2: Extreme Flash and Elastic Configurations https://www.youtube.com/watch?v=xfnGIiFoSAE
https://docs.oracle.com/cd/E50790_01/doc/doc.121/e51953/app_whatsnew.htm#CEGEAGDH
How to Replace an Exadata X5-2 Storage Server NVMe drive (Doc ID 2003727.1)
Oracle® Exadata Storage Server X5-2 Extreme Flash Service Manual https://docs.oracle.com/cd/E41033_01/html/E55031/z4000419165586.html#scrolltoc
http://www.evernote.com/shard/s48/sh/1eb5b0c7-11c9-439c-a24f-4b8f8f6f3fae/f8eee4a52c650d87ec993039237237bb
https://blogs.oracle.com/AlejandroVargas/entry/exadata_parameter_auto_manage_exadata
Troubleshooting guide for Underperforming FlashDisks [ID 1348938.1]


http://www.oracle.com/us/products/servers-storage/storage/flash-storage/f20-data-sheet-403555.pdf
http://www.oracle.com/us/products/servers-storage/storage/flash-storage/f40-data-sheet-1733796.pdf
http://www.oracle.com/us/products/servers-storage/storage/flash-storage/f80-ds-2043658.pdf
http://www.oracle.com/technetwork/database/exadata/exadata-smart-flash-cache-366203.pdf
http://pages.cs.wisc.edu/~jignesh/publ/SmartSSD-slides.pdf







http://www.evernote.com/shard/s48/sh/bdaba4a6-f2f3-4a0f-bff0-d7daacc9252b/f29b87c951fbf58f175ffaf87a3a899e
explained to a customer the correlation of instance IO vs cellmetrics by (CG,DB - flash vs hard disk)
http://www.evernote.com/l/ADB0VbIOPs1Leb4s79Np5GvKmHPER93wW0g/



http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/11g/r1/exadata_perf/exadata_perf_viewlet_swf.html

''IO bi = KBread/s''

[img[picturename| https://lh4.googleusercontent.com/-r0rWQPyALcM/Tdsg7C9s_OI/AAAAAAAABRw/PdBeB9HPkxQ/throughput.png]]
[img[picturename| https://lh3.googleusercontent.com/-XlSllE-5cXY/Tdsg61gILdI/AAAAAAAABRo/y03hHMUpP8Y/throughput2.png]]
[img[picturename| https://lh5.googleusercontent.com/-gyDVbldZCFE/Tdsg7PdN3MI/AAAAAAAABRs/ZibKFYUiK7I/throughput3.png]]
[img[picturename| https://lh5.googleusercontent.com/-UNqgMcCmEtM/Tdsg7U04VGI/AAAAAAAABR4/gBTshCeU4x0/throughput4.png]]
[img[picturename| https://lh4.googleusercontent.com/-HpaTu09g_cA/Tdsg7aaL20I/AAAAAAAABR0/POykxlhuLUs/throughput5.png]]
[img[picturename| https://lh4.googleusercontent.com/-UC5gv5s3Icg/Tdsg7b8jQUI/AAAAAAAABR8/cd3OScz11Nw/throughput6.png]]
[img[picturename| https://lh3.googleusercontent.com/-RHkHb0v2Hwg/Tdsg7tt1LMI/AAAAAAAABSA/4mEraclNL8w/throughput7.png]]
[img[picturename| https://lh6.googleusercontent.com/-TX-cRRXCIZQ/Tdsg7saUh7I/AAAAAAAABSI/7Q0jptO8wIo/throughput8.png]]
[img[picturename| https://lh4.googleusercontent.com/-sDLBaNYUbng/Tdsg77vdqeI/AAAAAAAABSE/pQrSAsIeocY/throughput9.png]]
[img[picturename| https://lh5.googleusercontent.com/-SXinl7d3gA8/Tdsg75hmG7I/AAAAAAAABSM/w1_Je-hvv5Y/throughput10.png]]
[img[picturename| https://lh6.googleusercontent.com/-F81qnfIBUw0/Tdsg8BbGvuI/AAAAAAAABSQ/YcxLcF6rswA/throughput11.png]]


http://www.evernote.com/shard/s48/sh/af2c6e95-ebc3-4a03-9a54-ac1d36b82970/e7c1df5ecd2cb878f0c86e2fac019b79
surprising to know that the InfiniBand switches run on CentOS; the whole update process is just an rpm update
http://www.evernote.com/shard/s48/sh/fed1e421-7b10-4d19-92d0-c2538a3f3c7c/0862beef70fc133490e8ae4ffeac8a42
<<showtoc>>

''collectl -sX'' https://lists.sdsc.edu/pipermail/npaci-rocks-discussion/2009-April/038950.html
http://collectl.sourceforge.net/Infiniband.html
http://collectl-utils.sourceforge.net/colmux.html
http://collectl-utils.sourceforge.net/

search "exadata infiniband bidirectional"
http://www.infosysblogs.com/oracle/2011/05/oracle_exadata_and_datawarehou.html
http://www.hpcuserforum.com/presentations/April2009Roanoke/MellanoxTechnologies.ppt
http://en.wikipedia.org/wiki/InfiniBand
http://www.oreillynet.com/pub/a/network/2002/02/04/windows.html
http://www.oracle.com/technetwork/database/exadata/dbmachine-x2-2-datasheet-175280.pdf

http://gigaom.com/cloud/infiniband-back-from-the-dead/

https://blogs.oracle.com/miker/entry/how_to_monitor_the_bandwidth
http://docs.oracle.com/cd/E23824_01/html/821-1459/gjwwf.html
http://www.scribd.com/doc/232417505/20/Infiniband-Network-Monitoring
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/sec-Testing_Early_InfiniBand_RDMA_operation.html


''watch E4 presentation of KJ on infiniband''


! 2020
<<<

1- Are there any DBA_HIST or V$ views that show process usage of RDMA over IB?


here are the relevant DBA_HIST views... 

In 11g 
Interconnect Ping Latency Stats - DBA_HIST_INTERCONNECT_PINGS

In 11gR2
Interconnect Throughput by Client - DBA_HIST_IC_CLIENT_STATS
Interconnect Device Statistics - DBA_HIST_IC_DEVICE_STATS, DBA_HIST_CLUSTER_INTERCON
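
A minimal sketch of pulling the per-client numbers from AWR (column names as I recall them from the 11gR2 reference; verify against your version):
{{{
-- interconnect bytes by client (cache, ipq, dlm, ...) per snapshot
select snap_id, instance_number, name, bytes_sent, bytes_received
from   dba_hist_ic_client_stats
order  by snap_id, instance_number, name;
}}}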


2- How can we monitor the performance of RDMA, or detect when a server process reaches into remote node memory?

The only way to do this is to use OS networking tools 
https://weidongzhou.wordpress.com/2013/08/11/tools-to-check-out-network-traffic-on-exadata/
https://husnusensoy.wordpress.com/2009/08/28/full-coverage-in-infiniband-monitoring-with-oswatcher-3-0-part-1/


3- On Exadata systems 5x and 6x, are both ib0 and ib1 supposed to be working together, or is one just a backup for the other?


I think since X4 the InfiniBand is active-active 

<<<

<<<
So what do these system metrics mean?  
txn cache remote copy
txn cache remote copy misses 
txn cache remote etc...
<-- I don't know 

Also how can we tell if a session is utilizing RDMA from a remote node?  <-- probably do a pstack and see if a similar rdma function call exists, e.g. Bug 24326846 - RMAN channel using rdma dNFS may hang (Doc ID 24326846.8). But then, if this is Exadata, it is implied that you are using RDMA.

Have you encountered any issues related to IB/RDS and MTU size in Exadata and 18c?  <-- usually exachk would flag MTU issues. If there are findings for that specific version, then we change the values for that customer. If there are perf issues, and the perf data points to some MTU/SDU config issue, that's when we do the resizing. 


John Clarke has some useful infiniband commands here as well 
https://learning.oreilly.com/library/view/oracle-exadata-recipes/9781430249146/9781430249146_Ch13.xhtml

other resources 

https://www.slideshare.net/khailey/collaborate-nfs-kylefinal?next_slideshow=1
https://docs.oracle.com/cd/E23824_01/html/821-1459/gjwwf.html
OSB - Using RDS / RDMA over InfiniBand (Doc ID 1510603.1)

<<<
http://www.evernote.com/shard/s48/sh/0987f447-b24a-4a40-9f0a-2f7e19ad6bf0/f8bd7d1a1f948d9c162cd6ee88d8c8f4
http://www.evernote.com/shard/s48/sh/0ce1cfde-99b9-4e82-8e92-7be7dc5e60f9/02ae66088cc1509e580cab382d25a0f8

DR for Exalogic and Exadata + Oracle GoldenGate on Exadata https://blogs.oracle.com/XPSONHA/entry/dr_for_exalogic_and_exadata


''Oracle Sun Database Machine X2-2/X2-8 Backup and Recovery Best Practices'' [ID 1274202.1]
''Backup and Recovery Performance and Best Practices for Exadata Cell and Oracle Exadata Database Machine'' http://www.oracle.com/technetwork/database/features/availability/maa-tech-wp-sundbm-backup-11202-183503.pdf
''Oracle Data Guard: Disaster Recovery for Oracle Exadata Database'' Machine http://www.oracle.com/technetwork/database/features/availability/maa-wp-dr-dbm-130065.pdf
''Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration'' ID 1302539.1

http://vimeo.com/62754145 Exadata Maximum Availability Tests

Monitoring exadata health and resource usage white paper http://bit.ly/160dJrn 


http://www.pythian.com/news/29333/exadata-memory-expansion-kit/
Exadata MAA Best Practices Migrating Oracle Databases
http://www.oracle.com/au/products/database/xmigration-11-133466.pdf
''Exadata FAQ''
http://www.oracle.com/technology/products/bi/db/exadata/exadata-faq.html

''My Experiences''
http://karlarao.wordpress.com/2010/05/30/seeing-exadata-in-action/

''Exadata Links''
http://tech.e2sn.com/oracle/exadata/links
http://tech.e2sn.com/oracle/exadata/articles
A grand tour of Oracle Exadata
http://www.pythian.com/expertise/oracle/exadata
http://www.pythian.com/news/13569/exadata-part-1/
http://www.pythian.com/news/13967/exadata-part2/
http://www.pythian.com/news/15673/exadata-part3/
http://www.pythian.com/news/15425/making-the-most-of-exadata/
http://www.pythian.com/news/15531/designing-for-exadata-maximizing-storage-indexes-use/
http://dbastreet.com/blog/?page_id=603 <-- good collection of links

''Exadata Comparisons''
Comparing Exadata and Netezza TwinFin
http://www.business-intelligence-quotient.com/?p=1030

''Exadata adhoc reviews''
''* A nice comment by Tanel Poder'' http://www.linkedin.com/groupItem?view=&srchtype=discussedNews&gid=3156190&item=32433184&type=member&trk=EML_anet_qa_ttle-dnhOon0JumNFomgJt7dBpSBA
<<<
-- Question by Ron Batra
I was wondering if people had any experiences to share regarding RAC on Exadata?


-- Reply by Tanel Poder (http://tech.e2sn.com/team/tanel-poder)
Do you want good ones or bad ones? ;-)

As it's a general question, the answer will be quite general, too:

The "bad" thing is that RAC is still RAC on Exadata too. So, especially if you plan to use it for OLTP environments, there are things to consider.

Even the low-latency infiniband interconnect doesn't eliminate interconnect (and scheduling) latency and global cache wait events when you run write-write OLTP workload on the same dataset in multiple different instances. You should make sure (using services) that any serious write-write activity happens within the same physical server. But oh wait, Exadata v1 and v2 both consist of small 8-core DB nodes, so with serious OLTP workload it may not be possible to fit all the write-write activity into one 8-core node at all. So, got to be careful when planning heavy OLTP into Exadata. It's doable but needs more planning & testing if your workload is going to be significant. The new Exadata x2-8 would be better for heavy OLTP workloads as a single rack has only 2 physical DB layer servers (each with 64 cores) in it, so it'd be much easier to direct all write-write workload into one physical server.

For (a properly designed) DW workload with mostly no concurrent write-write activity on the same dataset, you shouldn't have GC bottleneck problem. However the DW should ideally be designed for (parallel) direct path full table scans (with proper partitioning design for partition pruning).

So, when you migrate your old reporting application to exadata (and it doesn't use good partitioning, indexes used everywhere and no parallel execution is used) then you might not end up getting much out of the smart scans. Or when the ETL job is a tight (PL/SQL) loop, performing single row fetches and inserts, then you won't get anywhere near the "promised" Exadata data load speeds etc.

What else... If anyone (even from Oracle) says, you don't need any indexes in Exadata, don't believe them. I have a client who didn't use any indexes even before they moved to Exadata (their schema was explicitly designed for partition pruning, full partition scans and "brute-force" hash joins). They were very happy when they moved to Exadata, because this is the kind of workload which allows smart scans to kick in.

Another client's applications relied on indexes in their old environments. They followed someone's (apparently from Oracle) recommendation to drop all indexes (to save storage space) and the performance on Exadata sucked. This is because their schema & application was not optimized for such brute-force processing. They started adding indexes back to get the performance back to acceptable levels.

Another surprise from the default Exadata configuration was related to the automatic parallel execution configuration. Some queries ended up allocating 512 slaves across the whole Exadata rack. The only way to limit this was to use resource manager (and this is what I always use). All the other magic automatic features failed in some circumstances (I'll blog about it some day).

Btw, don't hope to ever see these promised 5TB/hour load times in real life. In real life you probably want to use compression to save space in the limited Exadata storage and compression is done in the database nodes only (while the cells may be completely idle), so your real life load rate with compression is going to be much lower, depending on the compression options you use (well that's the same for all other vendors too, but marketing usually doesn't tell you that).


Phew, this wasn't just specific to RAC on Exadata, but just some experiences I've had to deal with. Exadata doesn't make everything always faster - out of the box. But if your application/schema design is right, then it will rock!
<<<
''* Kevin Closson interview'' http://www.pythian.com/news/1267/interview-kevin-closson-on-the-oracle-exadata-storage-server/

''Exadata Patches''
http://www.pythian.com/news/15477/exadata-bp5-patching-issues/
Potential data loss issue on Exadata http://goo.gl/8c1t3

''Exadata Presentations''
http://husnusensoy.wordpress.com/2010/10/16/exadata-v2-fast-track-session-slides-in-rac-sig-turkey/
Cool product on predictive performance management - BEZVision for Databases http://goo.gl/aQRvK + ExadataV2 presentation http://goo.gl/Zf6Pw
Tanel Poder - Performance stories from Exadata Migrations http://goo.gl/hQPdq

''Exadata Features''
* Hybrid Columnar Compression
http://blogs.oracle.com/databaseinsider/2010/11/exadata_hybrid_columnar_compre.html
http://oracle-randolf.blogspot.com/2010/10/112-new-features-subtle-restrictions.html

* Cell offload
http://dbatrain.wordpress.com/2009/06/23/measuring-exadata-offloads-efficiency/
http://dbatrain.wordpress.com/2010/11/05/dbms-for-dbas-offloads-are-for-you-too/

* Smart Scan
http://www.pythian.com/news/18077/exadata-smart-scans-and-the-flash-cache/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+PythianGroupBlog+(Pythian+Group+Blog)











''Exadata v2 InfiniBand Network 880 Gb/sec aggregate throughput''
{{{
Each machine has a 40 Gb/sec InfiniBand card (two HCA ports bonded together, but it's still only 40 Gb/sec per machine).
Exadata v2 has 8 DB servers and 14 storage servers, for a total of 22 servers.
So 22 X 40 Gb/sec is 880 Gb/sec. 
}}}

Enables storage predicates to be shown in the SQL execution plans of your session, even if you do not have Exadata:
alter session set CELL_OFFLOAD_PLAN_DISPLAY = ALWAYS;
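
A quick way to see the effect; the table name here is hypothetical, and the storage() predicates show up in the Predicate Information section of the plan:
{{{
alter session set cell_offload_plan_display = always;
explain plan for select * from sales where amount_sold > 100;  -- sales is a hypothetical table
select * from table(dbms_xplan.display);
-- look for "storage(...)" lines under Predicate Information
}}}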


You can use the following V$ views and corresponding statistics to monitor Exadata cells' activity from a database instance:
* V$CELL view provides identifying information extracted from the cellip.ora file.
* V$BACKUP_DATAFILE view contains various columns relevant to Exadata Cell during RMAN incremental backups. The BLOCKS_SKIPPED_IN_CELL column is a count of the number of blocks that were read and filtered at the Exadata Cell to optimize the RMAN incremental backup.
* You can query the V$SYSSTAT view for key statistics that can be used to compute Exadata Cell effectiveness:
<<<
physical IO disk bytes - Total amount of I/O bytes processed with physical disks (includes when processing was offloaded to the cell and when processing was not offloaded)
cell physical IO interconnect bytes - Number of I/O bytes exchanged over the interconnection (between the database host and cells)
cell physical IO bytes eligible for predicate offload - Total number of I/O bytes processed with physical disks when processing was offloaded to the cell

''The following statistics show the Exadata Cell benefit due to optimized file creation and optimized RMAN file restore operations:''
cell physical IO bytes saved during optimized file creation - Number of bytes of I/O saved by the database host by offloading the file creation operation to cells
cell physical IO bytes saved during optimized rman file restore - Number of bytes of I/O saved by the database host by offloading the RMAN file restore operation to cells
<<<
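
The statistics above can be read directly from V$SYSSTAT; a minimal sketch using the statistic names as listed:
{{{
select name, value
from   v$sysstat
where  name in ('physical IO disk bytes',
                'cell physical IO interconnect bytes',
                'cell physical IO bytes eligible for predicate offload');
}}}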
* Wait Events
<<<
cell single block physical read - Same as db file sequential read for a cell
cell multiblock physical read - Same as db file scattered read for a cell
cell smart table scan - DB waiting for table scans to complete
cell smart index scan - DB waiting for index or IOT fast full scans
cell smart file creation - waiting for file creation completion
cell smart incremental backup - waiting for incremental backup completion
cell smart restore from backup - waiting for file initialization completion for restore
cell statistics gather
<<<
The query below displays the cell path and disk name corresponding to cell wait events. Drill-down is also possible via ASH.
{{{
SELECT w.event, c.cell_path, d.name, w.p3
FROM   V$SESSION_WAIT w, V$EVENT_NAME e, V$ASM_DISK d, V$CELL c
WHERE  e.name LIKE 'cell%' 
AND       e.wait_class_id = w.wait_class_id 
AND       w.p1 = c.cell_hashval 
AND w.p2 = d.hash_value;
}}}
* Assess offload processing efficiency: this query calculates the percentage of I/O that was filtered by offloading to Exadata. 
{{{
select 100 - 100*s1.value/s2.value io_filtering_percentage
from   v$mystat s1
     , v$mystat s2
     , v$statname n1
     , v$statname n2
where  s1.statistic# = n1.statistic#
and    s2.statistic# = n2.statistic#
and    n1.name = 'cell physical IO interconnect bytes'
and    n2.name = 'cell physical IO bytes eligible for predicate offload';

IO_FILTERING_PERCENTAGE
-----------------------
             99.9872062
}}}
* It is also possible to use SQL Performance Analyzer to assess offload processing. You can use the tcellsim.sql script located in $ORACLE_HOME/rdbms/admin for that purpose. The comparison uses the IO_INTERCONNECT_BYTES statistic.
http://www.evernote.com/shard/s48/sh/b9a4437d-9444-4748-b4c4-6d0a84113fc2/ab3682c18ba5e3fe08478378ea3b5804

''Advisor Webcast Archived Recordings [ID 740964.1]''
Database https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=740964.1#data
OEM https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=740964.1#em
Exadata https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=740964.1#exadata



http://apex.oracle.com/pls/apex/f?p=44785:2:3562636332635165:FORCE_QUERY::2,CIR,RIR:P2_TAGS:Exadata

{{{
Exadata Smart Flash Log Self-Study Module       Tutorial        24-Nov-11       26 mins
Exadata Smart Flash Log Demonstration   Video   21-Nov-11       9 mins
Using Exadata Smart Scan Self-Study Module      Tutorial        09-Nov-11       45 mins
Using Exadata Smart Scan Demonstration  Demo    08-Nov-11       11 mins
Oracle Enterprise Manager 12c: Manage Oracle Exadata with Oracle Enterprise Manager     Video   02-Nov-11       4 mins
Oracle Enterprise Manager 12c: Monitor an Exadata Environment   Video   02-Oct-11       9 mins
Part 1 - Load the Data  Video   01-Jun-11       14 mins
Part 2 - Gather Optimizer Statistics on the Data        Video   01-Jun-11       8 mins
Part 3 - Validate and Transform the Data        Video   01-Jun-11       10 mins
Part 4 - Query the Data Video   01-Jun-11       11 mins
Oracle Real World Performance Video Series - Migrate a 1TB Datawarehouse in 20 Minutes  Video   01-Jun-11       40 mins
Administer Exadata Database Machine: Exadata Storage Server Patch Rollback      Demo    23-May-11
Administer Exadata Database Machine: Exadata Storage Server Rolling Patch Application   Demo    23-May-11
Exadata Database Machine: Using Quality of Service Management   Demo    23-May-11
Exadata Database Machine: Configuring Quality of Service Management     Demo    23-May-11
Monitor Exadata Database Machine: Agent Installation and Configuration  Demo    23-May-11
Monitor Exadata Database Machine: Configuring ASM and Database Targets  Demo    23-May-11
Monitor Exadata Database Machine: Configuring the Exadata Storage Server Plug-in        Demo    23-May-11
Monitor Exadata Database Machine: Configuring the ILOM Plug-in  Demo    23-May-11
Monitor Exadata Database Machine: Configuring the InfiniBand Switch Plug-in     Demo    23-May-11
Monitor Exadata Database Machine: Configuring the Cisco Ethernet Switch Plug-in Demo    23-May-11
Monitor Exadata Database Machine: Configuring the Avocent KVM Switch Plug-in    Demo    23-May-11
Monitor Exadata Database Machine: Configuring User Defined Metrics for Additional Network Monitoring    Demo    23-May-11
Monitor Exadata Database Machine: Configuring Plug-ins for High Availability    Demo    23-May-11
Monitor Exadata Database Machine: Creating a Dashboard for Database Machine     Demo    23-May-11
Monitor Exadata Database Machine: Monitoring Exadata Storage Servers using Enterprise Manager Grid Control and the System Monitoring Plug-in for Exadata Storage Server Demo    23-May-11
Monitor Exadata Database Machine: Managing Exadata Storage Server Alerts and Checking for Undelivered Alerts    Demo    23-May-11
Monitor Exadata Database Machine: Exadata Storage Server Monitoring and Management using Integrated Lights Out Manager (ILOM)   Demo    23-May-11
Monitor Exadata Database Machine: Monitoring the Database Machine InfiniBand network    Demo    23-May-11
Monitor Exadata Database Machine: Monitoring the Cisco Catalyst Ethernet switch and the Avocent MergePoint Unity KVM using Grid Control Demo    23-May-11
Monitor Exadata Database Machine: Using HealthCheck     Demo    23-May-11
Monitor Exadata Database Machine: Using DiagTools       Demo    23-May-11
Monitor Exadata Database Machine: Using ADRCI on an Exadata Storage Cell        Demo    23-May-11
Monitor Exadata Database Machine        Demo    23-May-11
Oracle Exadata Database Machine Best Practices Series   Tutorial        29-Mar-11
Managing Parallel Processing with the Database Resource Manager Demo    19-Nov-10       60 mins
Exadata and Database Machine Version 2 Series - 1 of 25: Introduction to Smart Scan     Demo    19-Sep-10       10 mins
Exadata and Database Machine Version 2 Series - 2 of 25: Introduction to Exadata Hybrid Columnar Compression    Demo    19-Sep-10       10 mins
Exadata and Database Machine Version 2 Series - 3 of 25: Introduction to Exadata Smart Flash Cache      Demo    19-Sep-10       12 mins
Exadata and Database Machine Version 2 Series - 4 of 25: Exadata Process Introduction   Demo    19-Sep-10       6 mins
Exadata and Database Machine Version 2 Series - 5 of 25: Hierarchy of Exadata Storage Objects   Demo    19-Sep-10       8 mins
Exadata and Database Machine Version 2 Series - 6 of 25: Creating Interleaved Grid Disks        Demo    19-Sep-10       8 mins
Exadata and Database Machine Version 2 Series - 7 of 25: Examining Exadata Smart Flash Cache    Demo    19-Sep-10       8 mins
Exadata and Database Machine Version 2 Series - 8 of 25: Exadata Cell Configuration     Demo    19-Sep-10       6 mins
Exadata and Database Machine Version 2 Series - 9 of 25: Exadata Storage Provisioning   Demo    19-Sep-10       7 mins
Exadata and Database Machine Version 2 Series - 10 of 25: Consuming Exadata Grid Disks Using ASM        Demo    19-Sep-10       10 mins
Exadata and Database Machine Version 2 Series - 11 of 25: Exadata Cell User Accounts    Demo    19-Sep-10       5 mins
Exadata and Database Machine Version 2 Series - 12 of 25: Monitoring Exadata Using Metrics, Alerts and Active Requests  Demo    19-Sep-10       10 mins
Exadata and Database Machine Version 2 Series - 13 of 25: Monitoring Exadata From Within Oracle Database        Demo    19-Sep-10       10 mins
Exadata and Database Machine Version 2 Series - 14 of 25: Exadata High Availability     Demo    19-Sep-10       10 mins
Exadata and Database Machine Version 2 Series - 15 of 25: Intradatabase I/O Resource Management Demo    19-Sep-10       10 mins
Exadata and Database Machine Version 2 Series - 16 of 25: Interdatabase I/O Resource Management Demo    19-Sep-10       12 mins
Exadata and Database Machine Version 2 Series - 17 of 25: Configuring Flash-Based Disk Groups   Demo    19-Sep-10       16 mins
Exadata and Database Machine Version 2 Series - 18 of 25: Examining Exadata Hybrid Columnar Compression Demo    19-Sep-10       14 mins
Exadata and Database Machine Version 2 Series - 19 of 25: Index Elimination with Exadata        Demo    19-Sep-10       8 mins
Exadata and Database Machine Version 2 Series - 20 of 25: Database Machine Configuration Example using Configuration Worksheet  Demo    19-Sep-10       14 mins
Exadata and Database Machine Version 2 Series - 21 of 25: Migrating to Database Machine Using Transportable Tablespaces Demo    19-Sep-10       14 mins
Exadata and Database Machine Version 2 Series - 22 of 25: Bulk Data Loading with Database Machine       Demo    19-Sep-10       20 mins
Exadata and Database Machine Version 2 Series - 23 of 25: Backup Optimization Using RMAN and Exadata    Demo    19-Sep-10       15 mins
Exadata and Database Machine Version 2 Series - 24 of 25: Recovery Optimization Using RMAN and Exadata  Demo    19-Sep-10       12 mins
Exadata and Database Machine Version 2 Series - 25 of 25: Using the distributed command line utility (dcli)     Demo    19-Sep-10       14 mins
Using Exadata Smart Scan        Video   19-Aug-10       4 mins
Storage Index in Exadata        Demo    01-Mar-10
Hybrid Columnar Compression     Demo    01-Oct-09       22 mins
Smart Flash Cache Architecture  Demo    01-Oct-09       8 mins
Cell First Boot Demo    01-Sep-09       5 mins
Cell Configuration      Demo    01-Sep-09       10 mins
Smart Scan Scale Out Example    Demo    01-Sep-09       10 mins
Smart Flash Cache Monitoring    Demo    01-Sep-09       25 mins
The Magic of Exadata    Demo    01-Jul-07
Configuring DCLI        Demo    01-Jul-07       5 mins
Installing and Configuring Enterprise Manager Exadata Plug-in (Part 1)  Demo    01-Jul-07       24 mins
Installing and Configuring Enterprise Manager Exadata Plug-in (Part 2)  Demo    01-Jul-07       30 mins
Exadata Cell First Boot Initialization  Demo    01-Jul-07       12 mins
Exadata Calibrate and Cell/Grid Disks Configuration     Demo    01-Jul-07       12 mins
IORM and Exadata        Demo    01-Jul-07       40 mins
Possible Execution Plans with Exadata Offloading        Demo    01-Jul-07
Real Performance Tests with Exadata     Demo    01-Jul-07       42 mins
Exadata Automatic Reconnect     Demo    01-Jul-07       12 mins
Exadata Cell Failure Scenario   Demo    01-Jul-07       10 mins

}}}
with screenshots:
http://netsoftmate.blogspot.com/2017/01/discover-exadata-database-machine-in.html
https://netsoftmate.com/discover-exadata-database-machine-in/


Enterprise Manager Oracle Exadata Database Machine Getting Started Guide
https://docs.oracle.com/cd/E63000_01/EMXIG/ch4_post_discovery.htm#EMXIG143
http://kevinclosson.wordpress.com/2012/02/27/modern-servers-are-better-than-you-think-for-oracle-database-part-i-what-problems-actually-need-fixed/
http://sqlblog.com/blogs/joe_chang/archive/2011/11/29/intel-server-strategy-shift-with-sandy-bridge-en-ep.aspx


http://kevinclosson.wordpress.com/2012/05/02/oracles-timeline-copious-benchmarks-and-internal-deployments-prove-exadata-is-the-worlds-first-best-oltp-machine/#comments
http://kevinclosson.wordpress.com/2011/11/01/flash-is-fast-provisioning-flash-for-oracle-database-redo-logging-emc-f-a-s-t-is-flash-and-fast-but-leaves-redo-where-it-belongs/
http://glennfawcett.wordpress.com/2011/05/10/exadata-drives-exceed-the-laws-of-physics-asm-with-intelligent-placement-improves-iops/


''Conversations with Kevin about OLTP and IOPS - FW: Fwd: IOPs from your scripts - Exadata - link'' - http://www.evernote.com/shard/s48/sh/c270db94-a167-4913-8676-024a7e2cdefa/9146389f651cc09202e1182d2c883b2c
On the cell nodes /usr/share/doc/oracle/Exadata/doc

On the edelivery zip file p18084575_121111_Linux-x86-64.zip go to directory 
/Users/karl/Downloads/software/database/iso_exadata_121111/V46534-01.zip Folder/dl180/boot/cellbits/doclib.zip


''112240''
Most of the things that were removed were put into the storage server owner's guide (multi-rack cabling is now an appendix; site planning has been broken out into relevant chapters in the owner's guide), etc.
<<<
''* Release Notes''
[[ e15589.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e15589.pdf	  ]]	<- Oracle® Exadata Storage Server Hardware Read This First 11g Release 2        ##
[[ e13875.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e13875.pdf	  ]]	<- Oracle Exadata Database Machine Release Notes 11g Release 2        ##
[[ e13862.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e13862.pdf	  ]]	<- Oracle® Exadata Storage Server Software Release Notes 11g Release 2        ## 
[[ e13106.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e13106.pdf	  ]]    <- Oracle® Enterprise Manager Release Notes for System Monitoring Plug-In for Oracle Exadata Storage Server ##
''* Site/Hardware Readiness''
[[ e17431.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e17431.pdf	  ]]	<- Sun Oracle Database Machine Site Planning Guide 	       
[[ e16099.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e16099.pdf	  ]]	<- Oracle® Exadata Database Machine Configuration Worksheets 11g Release 2        ##
[[ e10594.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e10594.pdf	  ]]    <- Oracle® Database Licensing Information 11g Release 2 ###
''* Installation''
[[ e17432.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e17432.pdf	  ]]	<- Sun Oracle Database Machine Installation Guide        
[[ e13874.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e13874.pdf	  ]]	<- Oracle® Exadata Database Machine Owner's Guide 11g Release 2         ##
[[ install.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\install.pdf  ]]    <- Oracle Exadata Quick-Installation Guide
[[ e14591.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e14591.pdf	  ]]    <- Oracle® Enterprise Manager System Monitoring Plug-In Installation Guide for Oracle Exadata Storage Server  ##
''* Administration''                                              112240
[[ e13861.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e13861.pdf	  ]]	<- Oracle® Exadata Storage Server Software User's Guide 11g Release 2 ##
''* Cabling/Monitoring''                                          112240
[[ e17435.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e17435.pdf	  ]]	<- SunOracle Database Machine Multi-Rack Cabling Guide        
[[ e13105.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112240\e13105.pdf	  ]]	<- Oracle® Enterprise Manager System Monitoring Plug-In Metric Reference Manual for Oracle Exadata Storage Server ##
<<<
''112232''
<<<
''* Release Notes''
[[ e15589.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e15589.pdf	  ]]	<- Oracle® Exadata Storage Server Hardware Read This First 11g Release 2        
[[ e13875.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e13875.pdf	  ]]	<- Oracle Exadata Database Machine Release Notes 11g Release 2        
[[ e13862.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e13862.pdf	  ]]	<- Oracle® Exadata Storage Server Software Release Notes 11g Release 2        
[[ e13106.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e13106.pdf	  ]]    <- Oracle® Enterprise Manager Release Notes for System Monitoring Plug-In for Oracle Exadata Storage Server
''* Site/Hardware Readiness''
[[ e17431.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e17431.pdf	  ]]	<- Sun Oracle Database Machine Site Planning Guide 	       
[[ e16099.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e16099.pdf	  ]]	<- Oracle® Exadata Database Machine Configuration Worksheets 11g Release 2        
[[ e10594.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e10594.pdf	  ]]    <- Oracle® Database Licensing Information 11g Release 2
''* Installation''
[[ e17432.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e17432.pdf	  ]]	<- Sun Oracle Database Machine Installation Guide        
[[ e13874.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e13874.pdf	  ]]	<- Oracle® Exadata Database Machine Owner's Guide 11g Release 2        
[[ install.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\install.pdf  ]]    <- Oracle Exadata Quick-Installation Guide
[[ e14591.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e14591.pdf	  ]]    <- Oracle® Enterprise Manager System Monitoring Plug-In Installation Guide for Oracle Exadata Storage Server 
''* Administration''                                              112232
[[ e13861.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e13861.pdf	  ]]	<- Oracle® Exadata Storage Server Software User's Guide 11g Release 2
''* Cabling/Monitoring''                                          112232
[[ e17435.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e17435.pdf	  ]]	<- SunOracle Database Machine Multi-Rack Cabling Guide        
[[ e13105.pdf	|	C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112232\e13105.pdf	  ]]	<- Oracle® Enterprise Manager System Monitoring Plug-In Metric Reference Manual for Oracle Exadata Storage Server
<<<
''112220''
<<<
''* Release Notes''
[[e15589.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e15589.pdf  ]]	<- Oracle® Exadata Storage Server Hardware Read This First 11g Release 2               
[[e13875.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e13875.pdf  ]]	<- Oracle Exadata Database Machine Release Notes 11g Release 2               
[[e13862.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e13862.pdf  ]]	<- Oracle® Exadata Storage Server Software Release Notes 11g Release 2               
[[e13106.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e13106.pdf  ]]    <- Oracle® Enterprise Manager Release Notes for System Monitoring Plug-In for Oracle Exadata Storage Server  
''* Site/Hardware Readiness''                                112220
[[e17431.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e17431.pdf  ]]	<- Sun Oracle Database Machine Site Planning Guide 	              
[[e16099.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e16099.pdf  ]]	<- Oracle® Exadata Database Machine Configuration Worksheets 11g Release 2               
[[e10594.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e10594.pdf  ]]    <- Oracle® Database Licensing Information 11g Release 2
''* Installation''                                           112220
[[e17432.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e17432.pdf  ]]	<- Sun Oracle Database Machine Installation Guide               
[[e13874.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e13874.pdf  ]]	<- Oracle® Exadata Database Machine Owner's Guide 11g Release 2               
[[install.pdf| C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\install.pdf ]]    <- Oracle Exadata Quick-Installation Guide  
[[e14591.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e14591.pdf  ]]    <- Oracle® Enterprise Manager System Monitoring Plug-In Installation Guide for Oracle Exadata Storage Server    
''* Administration''                                         112220
[[e13861.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e13861.pdf  ]]	<- Oracle® Exadata Storage Server Software User's Guide 11g Release 2       
''* Cabling/Monitoring''                                     112220
[[e17435.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e17435.pdf  ]]	<- SunOracle Database Machine Multi-Rack Cabling Guide               
[[e13105.pdf | C:\Dropbox\oracle\OfficialDocs\oracle-exadata-112220\e13105.pdf  ]]	<- Oracle® Enterprise Manager System Monitoring Plug-In Metric Reference Manual for Oracle Exadata Storage Server        
<<<





A nice diagram of the whole HW installation process
http://www.evernote.com/shard/s48/sh/3b3b70a8-b28e-48b7-bc99-141e8ca1b5ba/851bf62bc26c4de0e13b18e2f7b9a592
''The blueprint''
http://www.facebook.com/photo.php?pid=7017079&l=72efd9ea41&id=552113028   

''Treemap version''
http://www.facebook.com/photo.php?pid=6973769&l=9b4b053f64&id=552113028
http://www.facebook.com/photo.php?pid=7076816&l=beea222cd0&id=552113028

''Failure scenario''
http://www.facebook.com/photo.php?pid=7118589&l=cd58bfb8e4&id=552113028

''The Provisioning Worksheet''
http://www.facebook.com/photo.php?pid=7163444&l=9e30e54cea&id=552113028

some other notes, speeds and feeds, etc. http://www.evernote.com/shard/s48/sh/a8c75ac7-9019-43cc-8ada-fad80681a63a/fdf513512c3bef27d4ac00c1912a8b13


-- ''Papers''
http://www.linkedin.com/groupItem?view=&srchtype=discussedNews&gid=918317&item=63941267&type=member&trk=eml-anet_dig-b_pd-ttl-cn&ut=0pKCK5WPN524Y1  <-- kerry explains how we do it
<<<
Kerry Osborne
We've worked on a number of consolidation projects. The first step is always an analysis of the DBs that need to be migrated. This is not significantly different from a consolidation onto a non-Exadata platform. This step includes gathering a bunch of raw data including current memory usage (both SGA and sessions), type of CPUs (so a calculation can compare them to the relative speed of the CPUs on Exadata), number of CPUs and utilization at peak, storage usage, projected growth, etc… One key early step is to determine which (if any) databases can be combined into a single database. Determining which can live together is just standard analysis of whether they can play nicely together (downtime windows, backup requirements, and version constraints should match up fairly closely). This is usually done with databases that are not considered super critical from a performance standpoint, by the way. We're working on a project now that started with 90 instances going onto a half rack. In this case there is only one large system and a whole bunch of very small systems, so combining instances was a key part of the plan.

Once the mapping of source to destination instances has been determined, we work on defining the requirements for each new instance on Exadata. This includes HA considerations (RAC or not). This is where a little bit of art enters the picture. Since Exadata is capable of offloading work to the storage tier, some estimation as to how well each individual system will be able to take advantage of Exadata optimizations should be part of the process. Systems that can offload a lot of work don't need as much CPU on the compute nodes, for example, as on the original platforms.

The next step is to take those requirements and lay the instances out across the compute nodes and storage cells. Since we've done several of these projects, we've built some tools to help automate the process, including visualizing how resources are divided amongst the instances. This allows us to easily play with "what if" scenarios, to see what happens if you lose a node in a RAC cluster, for example.

Also, you might want to consider using Instance Caging and DBRM/IORM to limit resource usage. This will help avoid the situation where users of the first few migrated systems become disappointed as the system slows down when more and more systems are migrated onto the platform.

One final thing you might want to consider is that you can carve the storage up into independent clusters as well. We call this a "Split Config". If, for example, you want to make sure that work on your test environment is relatively isolated from your production environment, you can create two separate clusters, each with their own compute nodes and their own storage cells, inside a single rack. You'll still be sharing the IB network, but the rest will be separated. This can also provide a way to test patches on part of the system (dev/test for example), without affecting the production cluster. It's not as good as having a separate rack, but it's better than not having anywhere to vet out a patch before applying it to production.

For sizing, you won't have to go through as much detail, but you should consider the same issues and do some basic calculations. In practice, most of the sizing decisions we've observed have been dominated by storage requirements including projected growth over whatever time the business is intending to amortize the purchase. This includes throughput as well as volume considerations.

Hope that helps.
!
Kevin,

Yes, the process I described is certainly more involved than a one day exercise. It really depends on how accurate you want to be, but there is a fair amount of leg work that should be done to be confident about your sizing and capacity planning. The larger ones we've worked on have been a few weeks (2-4) depending on the number of environments. For the most part it is something that can be done by any experienced Oracle person. I would expect that someone who doesn't have experience with Exadata will tend to overestimate memory and CPU requirements on the DB tier based on current usage, but I could be wrong about that. OLTP systems won't see much reduction in those requirements, by the way, while DW type systems will.

On the issue of index usage, you should definitely allow time for testing to prove to the business whether dropping some will be beneficial or not. In some cases they will be absolutely necessary (OLTP oriented workloads). In others they will not. In my opinion, many systems are over indexed, and the process of moving to Exadata provides a good excuse to evaluate them and get rid of some that are not necessary. This is a hard sell in many shops, so Exadata can actually be a political help in some of these situations. As far as your comment about sizing and indexes, if you think that the business will not allow you to make changes to the application (including index usage), you should probably do your POC / POV with the app as it exists today. We do commonly find that a little bit of tweaking can pay huge dividends, though. So again, I would highly recommend allowing time for testing prior to doing a production cut over.
<<<
''Oracle Exadata Database Machine  Consolidation: Segregating Databases and  Roles'' http://www.oracle.com/technetwork/database/focus-areas/availability/maa-exadata-consolidated-roles-459605.pdf 
''Database Instance Caging: A Simple Approach to Server Consolidation''  http://www.oracle.com/technetwork/database/focus-areas/performance/instance-caging-wp-166854.pdf
''Boris - Capacity Management for Oracle Database Machine Exadata v2'' https://docs.google.com/viewer?url=http://www.nocoug.org/download/2010-05/DB_Machine_5_17_2010.pdf&pli=1
''Performance Stories from Exadata Migrations'' http://www.slideshare.net/tanelp/tanel-poder-performance-stories-from-exadata-migrations
''Workload Management for Operational Data Warehousing'' http://blogs.oracle.com/datawarehousing/entry/workload_management_for_operat
''Workload Management – Statement Queuing'' http://blogs.oracle.com/datawarehousing/entry/workload_management_statement
''Workload Management – A Simple (but real) Example'' http://blogs.oracle.com/datawarehousing/entry/workload_management_a_simple_b
''A fair bite of the CPU pie? Monitoring & Testing Oracle Resource Manager'' http://rnm1978.wordpress.com/2010/09/10/a-fair-bite-of-the-cpu-pie-monitoring-testing-oracle-resource-manager/
''Parallel Execution and workload management for an Operational DW environment'' http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/twp-bidw-parallel-execution-130766.pdf
http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/index.html


! Other cool stuff the prov worksheet can do:
''HOWTO: Update the worksheet from an existing environment'' http://www.evernote.com/shard/s48/sh/5c981ef3-f504-4c20-8f19-34ba57f4d0d6/c404ce1e552ce3d7440a2911573dde3e
''provisioning email to tanel'' RE: refreshing dev/testing databases from exadata - v2 - link - http://www.evernote.com/shard/s48/sh/2e1ca2e0-7bb5-4829-b18a-4bb8ac3d003e/54d7fc3956cd6cb1c5761b17c2055c6b
''free -m, hugepages, free memory'' http://www.evernote.com/shard/s48/sh/efec6f4e-da2a-464f-87d4-69a79d5339f0/f848c602817940e5015df8f6fae5437e
''diff on prov worksheet, configuration changes, instance mapping changes'' http://www.evernote.com/shard/s48/sh/47a62c47-c05c-4ac2-839c-17f6e6d2cae5/70b4c2021804eb4e86016e782dca6b73


this guy talks about workload placement 
https://www.linkedin.com/pulse/workload-placement-optimizing-capacity-prashant-wali






http://www.evernote.com/shard/s48/sh/0151d8f8-e00e-4aed-8e9a-9266e3a43e36/13be76ca387aa5d2130edba30672d9ff

Changing IP addresses on Exadata Database Machine [ID 1317159.1]
<<<
https://twitter.com/GavinAtHQ/status/1532075662684524545
Recently announced #Exadata System Software 22.1 introduces an exciting new monitoring capability, @OracleExadata
 Real-Time Insight. Check out the deep dive over on the Exadata PM blog site
<<<

https://blogs.oracle.com/exadata/post/exadata-real-time-insight


    What's New In Exadata 22.1 - link  https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmso/new-features-exadata-system-software-release-22.html#GUID-C0643E3C-ED50-45DB-8248-1B1A1D6C9F9A

    Using Real-Time Insight - link https://docs.oracle.com/en/engineered-systems/exadata-database-machine/sagug/exadata-storage-server-monitoring.html#GUID-8448C324-784E-44F5-9D44-9CB5C697E436

    Alter metricdefinition - cellcli link, dbmcli link https://docs.oracle.com/en/engineered-systems/exadata-database-machine/sagug/exadata-storage-server-cellcli.html#GUID-1D67C9CD-1077-43C5-9056-62EF4E42B3F0
https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmmn/exadata-dbmcli.html#GUID-1D67C9CD-1077-43C5-9056-62EF4E42B3F0

Code https://github.com/oracle-samples/oracle-db-examples/tree/main/exadata
http://www.evernote.com/shard/s48/sh/ce6b1dc4-1166-4135-ab97-4f5726c40680/3fb775712c4a6ce2ee128dece9deb5fc
http://www.evernote.com/shard/s48/sh/1eb5b0c7-11c9-439c-a24f-4b8f8f6f3fae/f8eee4a52c650d87ec993039237237bb

{{{
dcli -g ~/cell_group -l celladmin 'cellcli -e list flashlog detail'
}}}


{{{
Exadata Smart Flash Log - video demo --> http://j.mp/svbfrR
http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/exadata/exadatav2/Exadata_Smart_Flash_Log/player.html
}}}

http://guyharrison.squarespace.com/blog/2011/12/6/using-ssd-for-redo-on-exadata-pt-2.html

''enable the esfl''
http://minersoracleblog.wordpress.com/2013/03/19/improving-log-file-sync-times-with-exadata-smart-flash-logs/

http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/exadata/exadatav2/Exadata_Smart_Flash_Log/data/presentation.xml
There’s a white paper about mixed disks 
http://www.oracle.com/technetwork/database/availability/maa-exadata-upgrade-asm-2339100.pdf

Scenario 2: Add 4 TB storage servers to 3 TB storage servers and expand existing disk groups 
Understanding ASM Capacity and Reservation of Free Space in Exadata (Doc ID 1551288.1) <- contains a cool PL/SQL script
http://prutser.wordpress.com/2013/01/03/demystifying-asm-required_mirror_free_mb-and-usable_file_mb/
https://aprakash.wordpress.com/2014/09/17/asm-diskgroup-shows-usable_file_mb-value-in-negative/

<<<
This statement is correct: 
"If I have 1GB worth of data in my DB I should be using 2GB for Normal Redundancy and 3 GB for High Redundancy."
but then you also have to account for the "required mirror free", which is needed in the case of the loss of a failure group. 

So this is the output of the script I sent you; it already accounts for the redundancy level you are on. Just look at the columns with "REAL" in them. In your statement above, the 4869.56 is used and that already accounts for the normal redundancy.. you said you have 4605 GB (incl TEMP) so that's just about right. Now you have to add the 2538, which totals 7407.56, and if you subtract the total space requirement from the capacity (7614 - 7407.56) you'll get 206.44
{{{
                                                               REQUIRED     USABLE
                       RAW       REAL       REAL       REAL MIRROR_FREE       FILE
STATE    TYPE     TOTAL_GB   TOTAL_GB    USED_GB    FREE_GB          GB         GB PCT_USED PCT_FREE NAME
-------- ------ ---------- ---------- ---------- ---------- ----------- ---------- -------- -------- ----------
CONNECTE NORMAL      15228       7614    4869.56    2744.44        2538     206.44       64       36 DATA_AEX1
CONNECTE NORMAL    3804.75    1902.38     1192.5     709.87      634.13      75.75       63       37 RECO_AEX1
MOUNTED  NORMAL     873.75     436.88       1.23     435.64      145.63     290.02        0      100 DBFS_DG
                ---------- ---------- ---------- ---------- ----------- ----------
sum                19906.5    9953.26    6063.29    3889.95     3317.76     572.21
}}}
I hope that clears up the confusion on the space usage. 

I'm also referencing a very good blog post that discusses the required mirror free and usable file mb
http://prutser.wordpress.com/2013/01/03/demystifying-asm-required_mirror_free_mb-and-usable_file_mb/
<<<
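The arithmetic in the note above is easy to reproduce. A minimal sketch, using the DATA_AEX1 numbers from the script output (these are the sample values from the quote, not a general-purpose formula):

```python
# ASM normal-redundancy space math, using the DATA_AEX1 row above.
raw_total_gb = 15228.0            # RAW TOTAL_GB across all disks
real_total_gb = raw_total_gb / 2  # normal redundancy halves the real capacity
real_used_gb = 4869.56            # REAL USED_GB (already mirror-adjusted)
required_mirror_free_gb = 2538.0  # reserve needed to re-mirror after losing a failgroup

# what's actually left for new files
usable_file_gb = real_total_gb - real_used_gb - required_mirror_free_gb
print(round(usable_file_gb, 2))   # -> 206.44, matching USABLE FILE GB
```

The same subtraction applied to the RECO_AEX1 and DBFS_DG rows reproduces their USABLE FILE GB columns as well.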


{{{
-- WITH REDUNDANCY
set colsep ','
set lines 600
col state format a9
col dgname format a15
col sector format 999990
col block format 999990
col label format a25
col path format a40
col redundancy format a25
col pct_used format 990
col pct_free format 990
col voting format a6   
BREAK ON REPORT
COMPUTE SUM OF raw_gb ON REPORT 
COMPUTE SUM OF usable_total_gb ON REPORT 
COMPUTE SUM OF usable_used_gb ON REPORT 
COMPUTE SUM OF usable_free_gb ON REPORT 
COMPUTE SUM OF required_mirror_free_gb ON REPORT 
COMPUTE SUM OF usable_file_gb ON REPORT 
COL name NEW_V _hostname NOPRINT
select lower(host_name) name from v$instance;
select 
        trim('&_hostname') hostname,
        name as dgname,
        state,
        type,
        sector_size sector,
        block_size block,
        allocation_unit_size au,
        round(total_mb/1024,2) raw_gb,
        round((DECODE(TYPE, 'HIGH', 0.3333 * total_mb, 'NORMAL', .5 * total_mb, total_mb))/1024,2) usable_total_gb,
        round((DECODE(TYPE, 'HIGH', 0.3333 * (total_mb - free_mb), 'NORMAL', .5 * (total_mb - free_mb), (total_mb - free_mb)))/1024,2) usable_used_gb,
        round((DECODE(TYPE, 'HIGH', 0.3333 * free_mb, 'NORMAL', .5 * free_mb, free_mb))/1024,2) usable_free_gb,
        round((DECODE(TYPE, 'HIGH', 0.3333 * required_mirror_free_mb, 'NORMAL', .5 * required_mirror_free_mb, required_mirror_free_mb))/1024,2) required_mirror_free_gb,
        round(usable_file_mb/1024,2) usable_file_gb,
        round((total_mb - free_mb)/total_mb,2)*100 as "PCT_USED", 
        round(free_mb/total_mb,2)*100 as "PCT_FREE",
        offline_disks,
        voting_files voting
from v$asm_diskgroup
where total_mb != 0
order by 1;
}}}




{{{

-- count of datafiles for each disk group
 
select count(*), name
from
(select regexp_substr(name, '[^/]+', 1, 1) name from v$datafile
union all
select regexp_substr(name, '[^/]+', 1, 1) name from v$tempfile)
group by name
order by 1 desc;
 
478 +DATA
241 +DATAHC
213 +DATAEF
1   +DBFS
 
 
 
-- count datafile vs tempfile 
 
col name format a30
select count(*), name || ' - Datafile' name
from
(select regexp_substr(name, '[^/]+', 1, 1) name from v$datafile)
group by name
union all
select count(*), name || ' - Tempfile' name
from
(select regexp_substr(name, '[^/]+', 1, 1) name from v$tempfile)
group by name
order by 1 desc, 2 asc; 
 
  COUNT(*) NAME
---------- ------------------------------
        39 +DATA - Datafile
         4 +DATA - Tempfile
         4 +DATA2 - Tempfile
 
 
}}}
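The regexp_substr(name, '[^/]+', 1, 1) trick in both queries just grabs the first path component, i.e. the +DISKGROUP prefix of each file name. The same grouping can be sketched outside the database (the file names below are made up for illustration):

```python
import re
from collections import Counter

# hypothetical file names, shaped like what v$datafile/v$tempfile return
names = ['+DATA/db/datafile/users.263.1', '+DATA/db/datafile/sysaux.262.1',
         '+DATAHC/db/datafile/big.301.1', '+DBFS/db/datafile/dbfs.270.1']

# first run of non-'/' chars == regexp_substr(name, '[^/]+', 1, 1)
groups = Counter(re.match(r'[^/]+', n).group(0) for n in names)
for dg, cnt in groups.most_common():
    print(cnt, dg)   # count of files per disk group, highest first
```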

{{{

Exadata Internals - Data Processing and I/O Flow
Measuring Exadata - Troubleshooting at the Database Layer
Cell Metrics in V$SESSTAT - A storage cell is not a black box to the database session!
Exadata Snapper - Measure I/O Reduction and Offloading Efficiency
Measuring Exadata - Storage Cell Layer
Flash Cache 
	Write-back Flash Cache
	Flash Logging
Parallel Execution, Partitioning and Bloom Filters on Exadata
Hybrid Columnar Compression
Data Loading, DML


####################################################
1st
####################################################

-- dba registry history
@reg

-- cell config
@exadata/cellver

mpstat -P ALL

@desc SALES_ARCHIVE_HIGH_BIG

@sn 1 1 1364

@snapper all 5 1 1364


####################################################
internals
####################################################

strace -cp 31676
* the  -c is for system calls 
* select count(*) from dba_source;
* then do a CTRL-C

-- on linux non-exadata, to get the FD being read
strace -p 31676
ls -l /proc/31676/fd/	<-- then look for the device


iostat -xmd

-- io translation
@asmdg
@asmls data
@sgastat asm  			"ASM extent pointer array" ... maps the physical to logical block mapping

after the ASM metadata is cached then the database process itself will do the IO.. 
* ASM is a disk address translation layer, and the DB processes does the actual IO
* after it's cached you don't have to talk that much to ASM.. when you allocate datafile then that's when you talk..
* ASM does the mirroring

-- MPP layer
* cells don't talk to each other
* unlike RAC they don't synchronize any data between them
* it's the database layer who orchestrates what cells do independently 
* ASM only reads the primary allocation unit.. 

* storage cells are shared nothing
* the compute nodes sees everything that makes it a shared everything
@1:31 start working.. each cell.. advantage of this is each cell will do the work independently, also discusses 1MB block prefetch
the more cells you have the more workers and the faster the retrieval will go

-- IO request flow, levels of caching & buffering
@1:41 explains the disk, controller caching and flash
* flash cards don't have battery.. they have super capacitor
* cellsrv on critical path of all IO request


-- cell thread history
@2:22:58

strings cellsrv > /tmp/cellsrv_strings.txt

@desc v$cell_thread_history

@sqlid <sqlid> %
@sqlidx <sqlid> %

@awr/gen_awr_report
open <the html file>

@cellver.sql
@cth

-- on v2
select count(*)
from tanel.sales_archive_high
where prod_id+cust_id+channel_id+promo_id+quantity_sold+amount_sold < 10;

set heading off
set echo off
set long 9999999
select dbms_metadata.get_ddl('TABLE','SALES_ARCHIVE_HIGH','TANEL') from dual; 

  CREATE TABLE "TANEL"."SALES_ARCHIVE_HIGH"
   (    "PROD_ID" NUMBER,
        "CUST_ID" NUMBER,
        "TIME_ID" DATE,
        "CHANNEL_ID" NUMBER,
        "PROMO_ID" NUMBER,
        "QUANTITY_SOLD" NUMBER(10,2),
        "AMOUNT_SOLD" NUMBER(10,2)
   ) SEGMENT CREATION IMMEDIATE
  PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255
 COMPRESS FOR ARCHIVE HIGH LOGGING
  STORAGE(INITIAL 16777216 NEXT 16777216 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "TANEL_BIGFILE"


-- examining cell storage disks

iostat -xmd 5 | egrep "Device|^sd[a-l] "
** tanel has charbench with 50 users doing roughly 300 TPS

while : ; do iostat -xmd 5 | egrep "Device|^sd[a-l] " ; echo "--" ; sleep 5; done | while read line ; do echo "`date +%T`" "$line" ; done

select /*+ PARALLEL(8) */ count(*) from tanel.t4;


-- a simple test case to prove large and small IO size

cell_offload_processing=false
_serial_direct_read=always
_db_file_exec_read_count=128   <- parameter (starting in 10.2) that sets how many blocks to read per multiblock read... 128 blocks of 8KB each = 1MB
select count(*) from tanel.sales;
@mys "cell flash cache read hits"
_db_file_exec_read_count=17    <- 136KB, large   <- not aligned to the extent size, so it ends up reading just a couple of blocks at the end of extents... 17, 17, 17, then the few blocks left before the end of the extent are read as small IOs <128KB
_db_file_exec_read_count=16    <- 128KB, large
_db_file_exec_read_count=15    <- 120KB, small
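The alignment effect is plain arithmetic. A sketch, assuming 8KB blocks and 1MB (128-block) extents, with 128KB as the small/large IO cutoff (your extent sizes may differ):

```python
BLOCK_KB = 8
EXTENT_BLOCKS = 128   # 1MB extents assumed
LARGE_IO_KB = 128     # cutoff between small and large IOs

def reads_per_extent(read_count):
    """Split one extent into multiblock reads of read_count blocks each."""
    full, tail = divmod(EXTENT_BLOCKS, read_count)
    sizes = [read_count * BLOCK_KB] * full
    if tail:
        sizes.append(tail * BLOCK_KB)  # short read left at the end of the extent
    return sizes

print(reads_per_extent(16))  # divides evenly: every read exactly 128KB ("large")
print(reads_per_extent(17))  # 136KB reads plus a short tail read ("small")
print(reads_per_extent(15))  # every read under 128KB -> all "small" IOs
```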


-- io reasons 
v$iostat_function_detail

alter cell events = 'immediate cellsrv_dump(ioreasons,0)'



####################################################
networking
####################################################

rds-ping
rds-stress
rds-info

ibdump  <-- download this tool from the Mellanox website to dump the InfiniBand traffic, similar to tcpdump

cellcli -e 'list metriccurrent N_MB_SENT_SEC'         <- doesn't show the zero-copy
cellcli -e 'list metriccurrent N_HCA_MB_TRANS_SEC'    <- shows the zero-copy statistics, i.e. the total traffic that went through your InfiniBand card... netstat and tcpdump don't show all the low level traffic



####################################################
measuring exadata
####################################################


SELECT tablespace_name,status,contents
           ,logging,predicate_evaluation,compress_for
     FROM dba_tablespaces;


-- table
select avg(line) from tanel.t4 where owner like 'S%';    <-- "storage" in the row source means Oracle is using the cell storage-aware codepath
-- mview
select count(*) from tanel.mv1 where owner like 'S%';


col name format a50
col PARAMETER1 format a10
col PARAMETER2 format a10
col PARAMETER3 format a10
SELECT name,wait_class,parameter1,parameter2,parameter3 from v$event_name where name like 'cell%';
SELECT name,wait_class,parameter1,parameter2,parameter3 from v$event_name where name like '%flash%' and name not like '%flashback%';

NAME                                               WAIT_CLASS                                                       PARAMETER1 PARAMETER2 PARAMETER3
-------------------------------------------------- ---------------------------------------------------------------- ---------- ---------- ----------
cell smart table scan                              User I/O                                                         cellhash#
cell smart index scan                              User I/O                                                         cellhash#
cell statistics gather                             User I/O                                                         cellhash#
cell smart incremental backup                      System I/O                                                       cellhash#
cell smart file creation                           User I/O                                                         cellhash#
cell smart restore from backup                     System I/O                                                       cellhash#
cell single block physical read                    User I/O                                                         cellhash#  diskhash#  bytes
cell multiblock physical read                      User I/O                                                         cellhash#  diskhash#  bytes
cell list of blocks physical read                  User I/O                                                         cellhash#  diskhash#  blocks
cell manager opening cell                          System I/O                                                       cellhash#
cell manager closing cell                          System I/O                                                       cellhash#
cell manager discovering disks                     System I/O                                                       cellhash#
cell worker idle                                   Idle
cell smart flash unkeep                            Other                                                            cellhash#
cell worker online completion                      Other                                                            cellhash#
cell worker retry                                  Other                                                            cellhash#
cell manager cancel work request                   Other

17 rows selected.

NAME                                               WAIT_CLASS                                                       PARAMETER1 PARAMETER2 PARAMETER3
-------------------------------------------------- ---------------------------------------------------------------- ---------- ---------- ----------
write complete waits: flash cache                  Configuration                                                    file#      block#
db flash cache single block physical read          User I/O
db flash cache multiblock physical read            User I/O
db flash cache write                               User I/O
db flash cache invalidate wait                     Concurrency
db flash cache dynamic disabling wait              Administrative
cell smart flash unkeep                            Other                                                            cellhash#

7 rows selected.


@cellio.sql 

@xpa

-- X$KCBBES – breakdown of DBWR buffer write reasons and priorities
-- shows how the "Direct Path Read Ckpt buffers written" metric is insignificant compared to other CKPT activity
@kcbbs


select name, value from v$sysstat where name like 'cell%' and value > 0;


#########################
cell metrics in sesstat
#########################

		alter session set current_schema=tanel;
		select count(*) from sales where amount_sold > 3;


		19:26:12 SYS@DEMO1> select table_name from dba_tables where owner = 'TANEL' order by 1 asc;

		TABLE_NAME
		------------------------------
		BIG
		BLAH
		CUSTOMERS_WITH_RAW
		CUSTOMERS_WITH_RAW_HEX
		DBC1
		EX_SESSION
		EX_SESSTAT
		EX_SNAPSHOT
		FLASH_WRITE_TEST4
		FLASH_WRITE_TEST5
		NETWORK_DUMP
		OOW1
		SALES
		SALES2
		SALES3
		SALES_ARCHIVE_HIGH
		SALES_ARCHIVE_HIGH_BIG
		SALES_C
		SALES_CL
		SALES_COMPRESSED_OLTP
		SALES_FLASH_CACHED
		SALES_FLASH_CACHED2
		SALES_FLASH_CACHED3
		SALES_HACK
		SALES_M
		SALES_ORDERED
		SALES_Q
		SALES_QUERY_HIGH
		SALES_QUERY_LOW
		SALES_U
		SALES_UPD_VS_SEL
		SMALL_FLASH_TEST
		SUMMARY
		SUMMARY2
		T1
		T2
		T3
		T4
		T5
		T9
		TANEL_DW
		TANEL_TMP
		TBLAH2
		TEST_MERGE
		TF
		TF_SMALL
		TMP
		TMP1
		TTT
		T_BP1
		T_BP2
		T_BP3
		T_BP4
		T_BP5
		T_CHAINED_TEST
		T_CHAR
		T_GC
		T_GROUP_SEPARATOR
		T_INS
		T_SEQ_TMP
		T_TMP
		T_V
		UKOUG_EXA
		X


@sys "<search string for sysstat value>"

SELECT sql_id, physical_read_bytes
FROM V$SQLSTATS
WHERE io_cell_offload_eligible_bytes = 0 ORDER BY physical_read_bytes DESC

@xls


#########################
exadata snapper
#########################


SELECT * FROM TABLE(exasnap.display_sid(123));
SELECT * FROM TABLE(exasnap.display_snap(90, 91, 'BASIC'));


https://cloudcontrol.enkitec.com:7801/em

-- demo: big select on high OLTP
	select /*+ monitor noparallel */ sum(length(d)) from sales_c;
	@xpa <sid>
	select /*+ monitor noparallel */ sum(length(d)) from sales_c;
	select * from table(exasnap.display(2141, 5, '%'));				<-- monitor for 5 secs, and output all metrics
	select * from table(exasnap.display('2141@4', 5, '%'));				<-- monitor for 5 secs, and output all metrics, on sid 2141 on RAC node 4

-- demo: big update on high OLTP @2:27:20 -- you should see the txn layer drop on number vs cache + data layers
-- exec while true loop update t set a=-a; commit; end loop;    <-- not this

	update sales_u set quantity_sold = quantity_sold + quantity_sold + 1 where prod_id = 123;
	@trans sid=<sid>        <-- USED_UREC shows the undo rows for that transaction, if it has indexes then every index update is one undo record

	var begin_snap number
	var end_snap number 
	exec :begin_snap := exasnap.begin_snap;
	select /*+ monitor noparallel */ sum(quantity_sold) from sales_u;
	exec :end_snap := exasnap.end_snap;
	select * from table(exasnap.display_snap(:begin_snap, :end_snap, p_detail=>'%'));

-- but how about the IOPS for OLTP sessions?


#########################
storage cell layer
#########################

@ash/event_hist cell.*read
@ash/event_hist log.file

iostat -xmd  on storage cells!

@exadata/cellio
@exadata/cellio sysdate-1/24/60 sysdate

ls -ltr *txt
cellsrvstat_metrics.txt
exadata_cleanouts.txt
metricdefinition_detail.txt

-- to troubleshoot high disk latency
cellcli -e list metriccurrent CD_IO_TM_R_SM_RQ;		<-- output is in microseconds.. so divide by 1000
@exadata/exadisktopo2
iostat -xmd 5 | egrep "Device|sd[a-z] |^$"
iostat -xmd 5 | egrep "Device|^sd[a-l] "
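Since CD_IO_TM_R_SM_RQ is reported in microseconds, a tiny parser makes the latencies readable. A sketch; the sample line layout below is an assumption, so adjust the split to your actual cellcli output:

```python
# hypothetical "cellcli -e list metriccurrent CD_IO_TM_R_SM_RQ" output lines
lines = [
    "CD_IO_TM_R_SM_RQ  CD_00_cell01  12500 us/request",
    "CD_IO_TM_R_SM_RQ  CD_01_cell01    900 us/request",
]

rows = []
for line in lines:
    metric, disk, value, unit = line.split()
    ms = float(value) / 1000.0            # microseconds -> milliseconds
    rows.append((disk, ms))
    flag = "  <-- slow" if ms > 10 else ""  # 10ms threshold is arbitrary
    print(f"{disk}: {ms:.1f} ms{flag}")
```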

select table_name, cell_flash_cache from dba_tables where owner = 'SOE';

@exadata/default_flash_cache_for_user.sql SOE

cellsrvstat -interval=5 -count=2    	<-- statspack for storage cells, STORAGE CELLS ALSO HAVE AN SGA! 
                                         -- but it doesn't really behave like the DB SGA.. it just has buffers for sending things over the network before writing to disk, plus a bunch of metadata (storage index, flash cache, etc.)... 
cellsrvstat -help                                         
cellsrvstat -stat=exec_ntwork,exec_ntreswait,exec_ntmutexwait,exec_ntnetwait -interval=5 -count=999

*** HASH JOIN can filter based from the bitmap calculation of the driving table...

-- oswatcher and cellservstat
/opt/oracle.oswatcher/osw/archive/oswcellsrvstat
# ./oswextract.sh "Number of latency threshold warnings for redo log writes" \ enkcel03.enkitec.com_cellsrvstat_11.05.25.*.dat.bz2 


@exadata/cellver

when is the next battery learn cycle time? 
LIST CELL ATTRIBUTES bbuLearnCycleTime

/opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -a0

-- check the top user io SQL
@ashtop username,sqlid "wait_class='User I/O'" sysdate-1/24/60 sysdate
@sqlid <sql_id> % 
@xpia <sql_id> % 	<-- sql monitor report
@xpd <SID> 

-- top cell cpu consuming sql from cellcli (undocumented, where the top sql for storage cells is being pulled)
list topcpu 2, 16,6000 detail;

-- top sqls across the cells 
@ashtop cell_name,wait_state,sqlid 1=1 sysdate-1/24/60 sysdate


#########################
flash cache
#########################

-- if you specify KEEP on one partition then it will "NOT" be honored
@tabpart TEST_MERGE % 
select cell_flash_cache from dba_tables where table_name = 'TEST_MERGE';
select partition_name, cell_flash_cache from dba_tab_partitions where table_name = 'TEST_MERGE';
alter table test_merge modify partition P_20130103 storage (cell_flash_cache keep);
select partition_name, cell_flash_cache from dba_tab_partitions where table_name = 'TEST_MERGE';
select * from table(exasnap.display_sid(7, 10, '%'));

@desc test_merge
@descxx test_merge    <-- show num_distinct, density, etc.

select count(*) from test_merge where item_idnt = 1;
select count(*) from test_merge where item_idnt*loc_idnt = 1;  <-- you'll get no help from the storage index here because it only knows about individual columns
																	and not expressions.. but the scan will still be offloaded! 

alter table test_merge storage (cell_flash_cache keep);  <-- put the table and all partitions in flash cache

select count(*) from test_merge where item_idnt*loc_idnt = 1;    <-- now on the first exec there will be no flash IOs because the blocks are only being populated into the cache;
																	the 2nd exec will read from flash! 


-- flash cache metrics 

"the physical read IO request" should be the same with "cell flash cache read hits"

select sql_id, executions, physical_read_requests, optimized_physical_read_requests from v$sql where sql_id = '<sql_id>'; 	<-- in v$sql, optimized_physical_read_requests can be either flash cache hits or storage index savings, meaning you avoided going to the spinning disk and made use of flash

The AWR report has an "UnOptimized top SQL" section, which is computed as "physical_read_requests - optimized_physical_read_requests"
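That "unoptimized" figure is just the difference of the two v$sql columns. A sketch with made-up per-SQL numbers (the sql_ids and values are illustrative only):

```python
# per-SQL read requests shaped like the v$sql columns (numbers are made up)
stats = {
    "a1b2c3d4e5": {"physical_read_requests": 50_000, "optimized_physical_read_requests": 42_000},
    "f6g7h8i9j0": {"physical_read_requests": 8_000,  "optimized_physical_read_requests": 1_000},
}

unoptimized = {}
for sql_id, s in stats.items():
    # requests that were NOT satisfied by flash cache or storage index
    unopt = s["physical_read_requests"] - s["optimized_physical_read_requests"]
    pct = 100.0 * unopt / s["physical_read_requests"]
    unoptimized[sql_id] = unopt
    print(f"{sql_id}: {unopt} unoptimized requests ({pct:.0f}% hit spinning disk)")
```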


-- requirements
@reg

-- write back flash cache protection against failures
* ASM mirroring done at a higher level

	- by default large IOs do not get written to the cache... so a direct path load bypasses the write back cache and goes directly to disk.. because if you load many many MBs of data the sequential write speed of disk is also good enough, and terabytes upon terabytes would end up on disk anyway.
	- it decides on the caching based on the size of the IO.. just like read cache
	- Tanel said: by default an IO gets cached based on its size.. just like the read cache, large IOs end up straight on disk because it's a direct path load (sequential write speed of disk is also good enough) <-- but what if the table is on KEEP ? 
	- and this also means that your DBWR will benefit from flash 
			-- a simple write test case
			@snapper4 all 10 1 dbwr
			alter system checkpoint;

* A write I/O gets sent to 2 – 3 separate cells
* Depending on ASM disk redundancy
	Thus it will be mirrored in multiple cells
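The size-based caching decision described above can be sketched as follows; the 128 KB threshold here is an assumption for illustration only, the real cutoff is internal to cellsrv:

```python
# Illustrative admission policy only -- the real threshold and logic live inside
# cellsrv; 128 KB is an assumed cutoff, not a documented one.
LARGE_IO_BYTES = 128 * 1024

def goes_to_write_back_cache(io_size_bytes):
    """Small writes (e.g. DBWR single/multi-block writes) get cached; large
    direct path writes bypass flash and go straight to disk, where sequential
    write speed is good enough anyway."""
    return io_size_bytes < LARGE_IO_BYTES

print(goes_to_write_back_cache(8 * 1024))      # True: small DBWR write
print(goes_to_write_back_cache(1024 * 1024))   # False: 1 MB direct path write
```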

-- a simple write test case
			@snapper4 all 10 1 dbwr
			alter system checkpoint;

			"cell flash cache read hits" <-- around 1015...   there's no metric like "write hits"; read & write are both accumulated under the read metric
			"physical read requests optimized"	<-- also 1015.. means you avoided the spinning disks
			"physical writes total IO request"	<-- around 918

				<-- so if we take mirroring into account (let's say normal redundancy) it's going to be "physical writes total IO request" x 2,
					which is 1836... so here you hit the flash 55% of the time (1015/1836).. write hits to flash!
				-- DBWR doesn't have to read anyway.. DBWR only takes blocks from the buffer cache and writes them, that's why you only see "physical writes total IO request"
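The mirroring arithmetic above, as a small sketch (the x2 factor assumes ASM normal redundancy; with high redundancy it would be x3):

```python
# Hedged sketch of the write-hit-ratio arithmetic from the test case above.
# With ASM normal redundancy each database write is mirrored, so the cell-side
# write count is roughly the database-side count x 2.
def flash_write_hit_pct(flash_cache_hits, physical_write_requests, mirror_copies=2):
    total_cell_writes = physical_write_requests * mirror_copies
    return 100.0 * flash_cache_hits / total_cell_writes

# 1015 flash hits vs 918 x 2 = 1836 mirrored writes -> ~55%
print(round(flash_write_hit_pct(1015, 918)))  # 55
```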

-- write back cache behavior vs the other storage arrays
	so if you just keep on writing, and let's say you only have 1TB worth of writes and 5TB of flash, you may end up caching everything.. oracle doesn't have to immediately start writing it to disk at all (from cache)... flash media is persistent and mirrored (thanks to ASM).. it's not just a cache, it's sort of an extension of storage as well, which can be destaged to disk if flash runs short of space



#########################
flash logging
#########################

-- test case
must be done with smart scans going on too, otherwise the LSI cache is not overwhelmed and disk will still win

flash logging speeds up LGWR writes to reduce commit latency.. LGWR can complete its writes faster because it doesn't have to wait for the slow disk to acknowledge the write. 

The cells' LSI RAID cards do have a write cache (battery backed), but...
	•  It's just 512MB cache per cell, shared between all IO purposes
	•  There's a lot of other disk I/O going on: reads, loads, TEMP IO etc
	•  If the cache is full (as disks can't keep up) you'll end up waiting for disk
	•  100ms+, 1-2 second commit times if disks are busy & with long IO queues

smart flash logging IO flow: 
	* when the LGWR writes, it sends the write IO request to 2 or 3 cells depending on mirroring
	* inside the cell, cellsrv knows this write is coming from the LGWR.. internally it issues the IO to both the "disk and flash"
	* whichever IO completes first returns the "IO done" acknowledgement
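The flow above is essentially a race between two writes; a toy Python model of "whichever acknowledges first wins" (not cellsrv code, just an illustration, with sleep times standing in for device latencies):

```python
# Toy model of smart flash logging's dual-write behavior: the same redo write is
# issued to disk and flash, and the first completion triggers the acknowledgement.
import concurrent.futures
import time

def first_ack(latencies):
    """Submit the same write to every device; return whichever completes first."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(latencies)) as pool:
        futures = {pool.submit(time.sleep, lat): dev for dev, lat in latencies.items()}
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return futures[next(iter(done))]

print(first_ack({"disk": 0.05, "flash": 0.01}))   # flash (the normal case)
print(first_ack({"disk": 0.05, "flash": 0.30}))   # disk (simulated housekeeping hiccup)
```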

-- flash housekeeping issue
but every now and then flash devices have internal housekeeping happening.. you may get a hiccup of a few hundred ms while it runs; this is a known problem with flash.. usually it's fast but every now and then you get a short spike in latency. And this is when the disk wins! Normally you enable flash logging so your commits run faster and are not affected by all the smart scans hammering the disks.. but every now and then, during a flash housekeeping hiccup, the disk will win.. so hopefully you will not wait half a second for the commit to complete. 
Also if IORM is enabled it can prioritize IO; it knows that LGWR is more important than other IO, so it sends that request out first.. but of course the problem might be that at the OS level (iostat) you already have hundreds of IOs queued.

-- smart flash logging after crash
@1:36:52 after the crash, if there were IOs not yet written to disk, then at startup those IOs will be applied from flash to disk.. 
flash log size is 64MB x 16 = 1GB  <-- it caches log writes, it doesn't really keep the whole redo log


-- Smart Flash Logging Metrics: DB 		<-- you wait on "log file sync" when you commit, "log file parallel write" when you do large updates/deletes.. LGWR independently writes to disk even if you don't commit

@ash/shortmon log  			
@ash/shortmon "log file|cell.*read"


-- Smart Flash Logging Metrics: Cell
cellcli -e "LIST METRICHISTORY WHERE name LIKE 'FL_.*' AND collectionTime > '"`date --date \ 
 '1 day ago' "+%Y-%m-%dT%H:%M:%S%:z"`"'" | ./exastat FL_DISK_FIRST FL_FLASH_FIRST


-- saturated flash disks
* even if you have flash logs and IORM enabled, if the flash disks are very busy then it may still take time for the IO to complete because of long queues (avgqu-sz) on the flash disks.. essentially it still has to honor the long OS IO queue
* also, on the DISK_FIRST, FLASH_FIRST.. you could be seeing DISK_FIRST having more numbers because the IOs to disk are being helped by the controller cache and only when it spikes on latency that it hits the flash


./exastat  		<-- LIST METRICHISTORY convenience script.. parses the text file output or a pipe, then outputs just the columns you specify
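The real exastat script isn't reproduced in these notes; a rough sketch of what such a column-filtering parser might look like (the input format shown is assumed, not the script's actual one):

```python
# Hypothetical reimplementation sketch -- the actual exastat script and the exact
# LIST METRICHISTORY output layout are not shown in these notes.
def filter_metrics(lines, wanted):
    """Keep only rows whose first whitespace-separated field (the metric name)
    is in the `wanted` set."""
    out = []
    for line in lines:
        fields = line.split()
        if fields and fields[0] in wanted:
            out.append(line.strip())
    return out

sample = ["FL_DISK_FIRST   cel01   120",
          "FL_FLASH_FIRST  cel01   980",
          "FL_IO_W         cel01   1100"]
for row in filter_metrics(sample, {"FL_DISK_FIRST", "FL_FLASH_FIRST"}):
    print(row)
```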


-- IORM and flash
* you can specify that this database may not use flash cache at all.. 
* starting 11.2.3.2 IORM is always enabled.. the "list iormplan detail;" output is BASIC instead of OFF
* IORM allows high priority IO (redo writes, controlfile writes, etc.) to be submitted first to the OS queues before any IO




#########################
PX and bloom filters
#########################


* line 18, "table access storage full" and SYS_OP_BLOOM_FILTER... bloom filter is just a bitmap which describes what kind of data you have in the driving table
* hash join joins table A and table B.. we send in the bloom filter on table B
	* on the A table we compute a bitmap which tells us what join values we have on the join column.. then we ship this bitmap to the scan of table B... when we start the smart scan on table B we send the bloom filter in as well, so while scanning B we already know that a particular value is not on A anyway
	* it can do early filtering based on the join on the storage cell 
	* table A is driving table
* bloom filter is also used for partition elimination
* nested loop joins don't use bloom filters
* early filtering based on bloom filter and WHERE condition

-- simple example
select username, mod(ora_hash(username),8) bloom_bit from dba_users where rownum <= 10 order by bloom_bit;		<-- bit value from 0-7
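A toy bloom filter in the spirit of the mod(ora_hash(username),8) query above (Python's built-in hash() stands in for ora_hash; 8 bits as in the query):

```python
# Toy bloom filter: table A's join keys set bits in a bitmap; while scanning B,
# a clear bit proves the row can't join (early filtering on the cell), while a
# set bit can still be a false positive -- but never a false negative.
def build_bloom(join_keys, nbits=8):
    bitmap = 0
    for k in join_keys:
        bitmap |= 1 << (hash(k) % nbits)
    return bitmap

def might_join(bitmap, key, nbits=8):
    return bool((bitmap >> (hash(key) % nbits)) & 1)

a_keys = [10, 20, 30]                      # driving table A's join column values
bloom = build_bloom(a_keys)
print([k for k in [10, 11, 20] if might_join(bloom, k)])  # 11's bit is clear, filtered early
```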

@pd bloom%size
@hint join%filter

@jf <SID>


}}}
https://www.oracle.com/technetwork/database/availability/exadata-ovm-2795225.pdf
http://blog.umairmansoob.com/wp-content/uploads/2016/08/Exadata-Deployment-Life-Cycle-By-Umair-Mansoob.pdf
''MindMap: Exadata Workload Characterization'' http://www.evernote.com/shard/s48/sh/1a5fae96-fea1-42eb-8436-f1f27c98dc5a/286eb93d18749c845808749fb3590418
<<<
One Exadata XT Storage Server will include twelve 14 TB SAS disk drives with 168 TB total raw disk capacity. To achieve a lower cost, Flash is not included, and storage software is optional.

This lower-cost addition to the Exadata Storage Server lineup delivers Exadata class benefits:

• Efficient – The XT server offers the same high capacity as the HC Storage server, including Hybrid Columnar Compression
• Simple – The XT server adds capacity to Exadata while remaining transparent to applications, transparent to SQL, and retains the same operational model
• Secure – The XT server enables customers to extend the same security model and encryption used for online data to low-use data, because it is integrated within the same Exadata
• Fast and Scalable – Unlike other low-access data storage solutions, the XT server is integrated to the Exadata fabric, for fast access and easy scale-out
• Compatible – The XT server is just another flavor of Exadata Storage server – you can just add XT servers to any Exadata rack
<<<

https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmso/whats-new-oracle-exadata-database-machine-19.2.html
https://www.oracle.com/a/ocom/docs/engineered-systems/exadata/exadata-x8-2-ds.pdf
<<<
A new storage configuration is available starting with Oracle Exadata Database Machine X8-2. The XT model does not have Flash drives, only 14 TB hard drives with HCC compression. This is a lower cost storage option, with only one CPU, less memory, and with SQL Offload capability turned off by default. If used without the SQL Offload feature, then you are not required to purchase the Exadata Storage license for the servers.
<<<

Also per the price list, a minimum of two XT per rack is required
https://www.oracle.com/assets/exadata-pricelist-070598.pdf
[23] Minimum two Exadata Storage Server Extended (XT) required per rack. No mandatory Exadata Storage Software license required.

! the use case 
<<<
I haven’t been able to find documentation (haven’t looked too hard) on how the XT servers are used, but I would expect that they have to be in their own ASM diskgroup.  That diskgroup would then have the cell.smart_scan_capable attribute set to false, nullifying the ability to produce smart scans.
 
I think Oracle is trying to look for a way to provide “cheaper” storage solutions for stale data that don’t include running a big data appliance, gluent, or any other solution.  The one good thing that you get out of the XT servers is that everything still sits in ASM, rather than over NFS from a ZFSSA.  You could take the very stale data, compress it with HCC, and then just let it sit in those separate diskgroups in case anybody wants it.  Not the most elegant thing, but I think that’s the idea.
 
Another idea is to take 2, 3, 4 of those XT storage servers and build out a giant RECO diskgroup to hold RMAN backups if you don’t have any other solution.
<<<
 
http://drsalbertspijkers.blogspot.com/2019/08/oracle-exadata-hardware-x8-2-and-x8-8.html
https://technology.amis.nl/2019/04/20/newly-released-oracle-exadata-x8-2-bigger-disks-for-saving-money-expanding-capacity/
https://emilianofusaglia.net/2019/10/06/exadata-x8m-architectural-changes/?utm_campaign=58cf92e3d4dbac245c04c47c&utm_content=5d99be432e38dc00012249b4&utm_medium=smarpshare&utm_source=linkedin


https://www.oracle.com/sa/a/ocom/docs/engineered-systems/exadata/exadata-x8m-2-ds.pdf
https://www.oracle.com/technetwork/database/exadata/exadata-x8-2-ds-5444350.pdf


! RDMA 
https://zcopy.wordpress.com/2010/10/08/quick-concepts-part-1-%e2%80%93-introduction-to-rdma/

! NUMA 
https://www.morganslibrary.org/reference/numa.html

! RoCE
<<<
What is RDMA over Converged Ethernet (RoCE)? https://www.youtube.com/watch?v=dLw5bA5ziwU
https://www.electronicdesign.com/industrial-automation/11-myths-about-rdma-over-converged-ethernet-roce
http://www.mellanox.com/related-docs/whitepapers/WP_RoCE_vs_iWARP.pdf
https://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet
<<<

! Intel Optane 
<<<
Intel Optane DC Persistent Memory Fills the Gap between DRAM and SSDs https://www.youtube.com/watch?v=f9pIXw1ndRI
<<<

! extending exadata x8m
https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmmr/preparing-to-extend.html#GUID-EF7BA63C-D3CC-4FDC-8524-2709E4F85ED7

https://twitter.com/karlarao/status/1174436375682326531
<<<
for companies w/ on-premises multi-fullrack config let's say two X7-8. they can only extend their #Exadata cluster w/ the old X8-8 and not w/ the new X8M-8 
@OracleExadata
 , is this correct? just curious about the HW upgrade path for big multi-rack environments
<<<
<<<

up to X8-2 it is Xen. Then starting with X8M-2 it is KVM; at the same time the backend networking changed from IB to RoCE
our X3-2 was re-imaged from physical to Xen at some point
as far as i know if you want to change from physical to virtual or vice versa, you have to re-image

<<<
https://dl.dropboxusercontent.com/u/66720567/Exa_backup_recovery.pdf
http://www.evernote.com/shard/s48/sh/2f784775-a9c0-408d-9c8d-a03c4b82f37e/d1a0b87b148ef71ecf5ea300d1e952b9

{{{
Database Server:
root/welcome1
oracle/welcome1
grid/welcome1
grub/sos1Exadata

Exadata Storage Servers:
root/welcome1
celladmin/welcome1
cellmonitor/welcome1

InfiniBand switches:
root/welcome1
nm2user/changeme

Ethernet switches:
admin/welcome1

Power distribution units (PDUs):
admin/welcome1
root/welcome1

Database server ILOMs:
root/welcome1

Exadata Storage Server ILOMs:
root/welcome1

InfiniBand ILOMs:
ilom-admin/ilom-admin
ilom-operator/ilom-operator

Keyboard, video, mouse (KVM):
admin/welcome1
}}}
{{{
@bryangrenn try this select * from gv$cell; in mr. tools I do this mrskew --name='smart.*scan' --group='$p1' *trc
@bryangrenn and v$asm_disk.hash_value could be your diskhash# in exadata waits
}}}
https://technicalsanctuary.wordpress.com/2014/06/06/creating-an-infiniband-listener-on-supercluster/
http://ermanarslan.blogspot.com/2013/10/oracle-exadata-infiniband-ofed.html
http://vijaydumpa.blogspot.com/2012/05/configure-infiniband-listener-on.html

also check [[1GbE to 10GbE upgrade]]
http://allthingsoracle.com/method-for-huge-diagnostic-information-in-exadata/

Location of Different Logfiles in Exadata Environment [ID 1326382.1]
{{{
Location of Different Logfiles in Exadata Environment

On the cell nodes

================

1. Cell alert.log file
/opt/oracle/cell11.2.1.2.1_LINUX.X64_100131/log/diag/asm/cell/<node name>/trace/alert.log.
or 
if the CELLTRACE parameter is set just do cd $CELLTRACE

2. MS logfile
/opt/oracle/cell11.2.1.2.1_LINUX.X64_100131/log/diag/asm/cell/<node name>/trace/ms-odl.log.
or
if the CELLTRACE parameter is set just do cd $CELLTRACE

3. OS watcher output data
/opt/oracle.oswatcher/osw/archive/

To get OS watcher data of specific date :
cd /opt/oracle.oswatcher/osw/archive
find . -name '*11.04.11*' -print -exec zip /tmp/osw_`hostname`.zip {} \;

4. Os message logfile
/var/log/messages

5. VM Core files
/var/crash/

6. SunDiag output files.
/tmp/sundiag_.tar.bz2 

7. Imaging issues related logfiles:
    /var/log/cellos 

8. Disk controller firmware logs: 
     /opt/MegaRAID/MegaCli/Megacli64 -fwtermlog -dsply -a0 


On the Database nodes


=====================

1. Database alert.log 
$ORACLE_BASE/diag/rdbms/{sid}/{sid}/trace/alert_{sid}.log

2. ASM alert.log
/diag/asm/+asm/+ASM2/trace

3. Clusterware CRS alert.log 
$GRID_HOME/log/<node name>

4. Diskmon logfiles
$GRID_HOME/log/<node name>/diskmon

5. OS Watcher output files
/opt/oracle.oswatcher/osw/archive/

6. Os message logfile
/var/log/messages

7. VM Core files for Linux
/var/crash/ or /var/log/oracle/crashfiles

8. Imaging/patching issues related logfiles:
  /var/log/cellos 

9. Disk controller firmware logs: 
     /opt/MegaRAID/MegaCli/Megacli64 -fwtermlog -dsply -a0 
}}}
''how exadata is manufactured'' http://vimeo.com/46778003
check here https://www.evernote.com/shard/s48/sh/3b53d0f2-8bdd-47f1-928e-9d3a93750c07/6629a92562f2269417bf595cf2081dfe
Method for Huge Diagnostic Information in Exadata
http://allthingsoracle.com/method-for-huge-diagnostic-information-in-exadata/

* https://fritshoogland.wordpress.com/2013/10/21/exadata-and-the-passthrough-or-pushback-mode/
* https://www.oracle.com/webfolder/community/engineered_systems/4108865.html AWR shows 100% passthru reasons as "cell num smart IO sessions using passthru mode due to cellsrv"


https://www.google.com/search?source=hp&ei=CoGsX_2qCZKc_Qa44rqgDA&q=exadata+passthrough&oq=exadata+passthru&gs_lcp=CgZwc3ktYWIQAxgAMgsIABDJAxAWEAoQHjIICAAQFhAKEB46CwgAELEDEIMBEMkDOgUIABCxAzoICAAQsQMQgwE6AggAOgsILhCxAxDHARCjAjoICAAQsQMQyQM6BAgAEAo6CAguEMcBEK8BOgUIABDJAzoGCAAQFhAeOgkIABDJAxAWEB5QsgRY7xZg5iJoAHAAeACAAW-IAboJkgEEMTUuMZgBAKABAaoBB2d3cy13aXo&sclient=psy-ab
http://goo.gl/W7njY
Steps to shut down or reboot an Exadata storage cell without affecting ASM (Doc ID 1188080.1)
https://oracleracdba1.wordpress.com/2013/08/14/steps-to-shut-down-or-reboot-an-exadata-storage-cell-without-affecting-asm/
https://baioradba.wordpress.com/2012/02/03/steps-to-power-down-or-reboot-a-cell-without-affecting-asm/
http://www.oracle.com/us/products/database/exadata-vs-ibm-1870172.pdf
https://www.evernote.com/shard/s48/sh/7de6a930-08b6-47cf-812e-cab2b2a83b5b/ed7e27628608f801b8ba48d553e7c82e
What I did here is I compiled the "What's New?" sections of the official doc into groups of components and versioned them by hardware and software.
This way I can easily track the improvements in the storage software. So if you are on an older release you pretty much know which software features you are missing, which makes it easier to justify testing that patch level. 

The URLs below are the placeholders for the documents and I'll keep updating them moving forward. See my tweet here https://twitter.com/karlarao/status/558611482368573441 to get an idea of how these files look. 

Check out the files below: 

@@ ''Spreadsheet'' - ''Exadata-FeaturesAcrossVersions'' - https://db.tt/BZly5L13 @@
''MindMap version'' - 12cExaNewFeat.mm https://db.tt/xAwgzk6N


''Note:'' BTW the docs of the new release (12.1.2.1.0) are at patch #10386736, which is not really obvious if you look at the 888828.1 note





IMG_4319.JPG - X4270 cell server 
IMG_4325.JPG - X4170 db server
IMG_4330.JPG - SAS2 10K RPM 300GB

INTERNAL Exadata Database Machine Hardware Training and Knowledge [ID 1360358.1]
Oracle Sun Database Machine X2-2/X2-8 Diagnosability and Troubleshooting Best Practices [ID 1274324.1]

Oracle System Options http://www.oracle.com/technetwork/documentation/oracle-system-options-190050.html#solid
https://twitter.com/karlarao/status/375289300360765440




Information Center: Troubleshooting Oracle Exadata Database Machine [ID 1346612.2]
Exadata V2 Starter Kit [ID 1244344.1]
Master Note for Oracle Database Machine and Exadata Storage Server [ID 1187674.1]
http://blogs.oracle.com/db/2011/01/oracle_database_machine_and_exadata_storage_server.html

Database Machine and Exadata Storage Server 11g Release 2 (11.2) Supported Versions [ID 888828.1] <-- ALERTS ON NEW PATCH BUNDLES
Oracle Exadata Storage Server Software 11g Release 2 (11.2.1) Patch Set 2 (11.2.1.2.0) [ID 888834.1]  <-- UPGRADING THE EXADATA
Oracle Database Machine Monitoring Best Practices [ID 1110675.1]
OS Watcher User Guide [ID 301137.1] <-- Version 3.0.1 now supports Exadata


''Webinars''
Selected Webcasts in the Oracle Data Warehouse Global Leaders Webcast Series [ID 1306350.1]



''Exadata Best Practices''
Oracle Exadata Best Practices [ID 757552.1]
Engineered Systems Welcome Center [ID 1392174.1]
INTERNAL Master Note for Exadata Database Machine Hardware Support [ID 1354631.1]
Oracle Sun Database Machine X2-2 Diagnosability and Troubleshooting Best Practices (Doc ID 1274324.1)
Oracle Sun Database Machine Setup/Configuration Best Practices (Doc ID 1274318.1)


''TROUBLESHOOT INFORMATION CENTER''
TROUBLESHOOT INFORMATION CENTER: Exadata Database Machine - Storage Cell Issues (cellcli,celldisks,griddisks,processes rs,ms,cellsrv) and Offload Processing Issues (Doc ID 1531832.2)




''Exadata Maintenance''
Oracle Database Machine HealthCheck [ID 1070954.1]
Oracle Auto Service Request (Doc ID 1185493.1)
Oracle Database Machine and Exadata Storage Server Information Center (Doc ID 1306791.1)



''Resize /u01 on compute node'' 
Doc ID 1357457.1	How to Expand Exadata Compute Node File Systems
Doc ID 1359297.1	Unable To Resize filesystem on Exadata
tune2fs -l /dev/mapper/VGExaDb-LVDbOra1 | grep -i features
Filesystem features: has_journal filetype needs_recovery sparse_super large_file
The feature needed is: resize_inode
Without that feature the filesystem cannot be resized




''Exadata shutdown procedure''
Steps to shut down or reboot an Exadata storage cell without affecting ASM: [ID 1188080.1]
Steps To Shutdown/Startup The Exadata & RDBMS Services and Cell/Compute Nodes On An Exadata Configuration. [ID 1093890.1]



''Exadata Enterprise Manager''
Enterprise Manager for Oracle Exadata Database Machine (Doc ID 1308449.1)


''Exadata versions''
12c - Exadata - Exadata 12.1.1.1.0 release and patch (16980054 ) (Doc ID 1571789.1)


''Exadata Patching''
Oracle Support Lifecycle Advisors [ID 250.1] <-- new! it has a demo video on patching db and cell nodes
Patching & Maintenance Advisor: Database (DB) Oracle Database 11.2.0.x [ID 331.1]
Exadata Critical Issues [ID 1270094.1]
Database Machine and Exadata Storage Server 11g Release 2 (11.2) Supported Versions [ID 888828.1]
Exadata Patching Overview and Patch Testing Guidelines [ID 1262380.1]
Exadata Critical Issues [ID 1270094.1]  <-- MUST READ
List of Critical Patches Required For Oracle 11.2 DBFS and DBFS Client [ID 1150157.1]
Oracle Software Patching with OPLAN [ID 1306814.1]
Patch Oracle Exadata Database Machine via Oracle Enterprise Manager 11gR1 (11.1.0.1) [ID 1265998.1]
Oracle Patch Assurance - Data Guard Standby-First Patch Apply [ID 1265700.1]

Patch 12577723: EXADATA 11.2.2.3.2 (MOS NOTE 1323958.1)
Exadata 11.2.2.3.2 release and patch (12577723 ) for Exadata 11.1.3.3, 11.2.1.2.x, 11.2.2.2.x, 11.2.2.3.1 [ID 1323958.1]
Quarterly Cpu vs Patch bundle, Patch collide http://dbaforums.org/oracle/index.php?showtopic=18588

-- ''Major Release upgrade''
11.2.0.1 to 11.2.0.2 Database Upgrade on Exadata Database Machine [ID 1315926.1]
* Upgrade Advisor: Database (DB) Exadata from 11.2.0.1 to 11.2.0.2 [ID 336.1] https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=336.1#evaluate
* Advisor Webcast Archives - 2011 [ID 1400762.1] https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1400762.1#OraSSEXA
** Exadata Patching Cell Server Demo - 11.2.2.3.2 - https://oracleaw.webex.com/oracleaw/lsr.php?AT=pb&SP=EC&rID=64442222&rKey=3150b16a21d3dc69
** Exadata Patching Database Server Demo https://oracleaw.webex.com/oracleaw/lsr.php?AT=pb&SP=EC&rID=64455552&rKey=6b4faa15ff3cc250
** Exadata Patching Strategy https://oracleaw.webex.com/oracleaw/lsr.php?AT=pb&SP=EC&rID=63488107&rKey=422b763d21527597
11.2.0.1/11.2.0.2 to 11.2.0.3 Database Upgrade on Exadata Database Machine [ID 1373255.1]
* for 11201 BPs, there's a separate patch for GI and DB.. yet each patch patches both homes.. silly!
* the BPs should be staged on all DB nodes
* the DB & Grid patchsets are staged only on DB node 1, & will be pushed to all the DB nodes
* the CELL image software is staged only on DB node 1, & will be pushed to all the cells
* the CELL EX patches (critical issues) are staged only on DB node 1, & will be pushed to all the cells... they are fixed (cumulative) in the latest cell software version



''x2-8''
Exadata 11.2.2.2.0 release and patch (10356485) for Exadata 11.1.3.3.1, 11.2.1.2.3, 11.2.1.2.4, 11.2.1.2.6, 11.2.1.3.1, 11.2.2.1.0, 11.2.2.1.1 [ID 1270634.1] <-- mentions of UEK



''DB BP''
BP8 https://updates.oracle.com/Orion/Services/download?type=readme&aru=13789775



''Cell SW''
''11.2.2.3.2 patch 12577723 and My Oracle Support note 1323958.1'' https://updates.oracle.com/Orion/Services/download?type=readme&aru=13852123



''Exadata onecommand''
Ntpd Does not Use Defined NTP Server [ID 1178614.1]



''Exadata networking''
Changing IP addresses on Exadata Database Machine [ID 1317159.1]
Configuring Exadata Database Server Routing [ID 1306154.1]
How to Change Interconnect/Public Network (Interface or Subnet) in Oracle Clusterware [ID 283684.1]
How to update the IP address of the SCAN VIP resources (ora.scan.vip) [ID 952903.1]



''Exadata Bare Metal''
Bare Metal Restore Procedure for Compute Nodes on an Exadata Environment [ID 1084360.1]
BMR(bare metal restore) document. Doc ID 1084360.1



''Exadata bugs''
-- Exadata grid disks going offline.
<<<
The bug below is the software specific bug and it has now been closed:
Bug 12431721 - UNEXPECTED STATUS OF GRIDDISK DEVICE STATUS IS NOT 'ACTIVE' 
- provided a fix via setting _cell_io_hang_time = 30 on all cells
The fix to extend the IO hang timeout is merged into our next release of 11.2.2.3.2.

The root cause of the disks going offline in an unknown state is being further investigated in a hardware bug. We believe LSI to be causing this unknown disk state - the bug is still being investigated. 
<<<
Bug 10180307 - Dbrm dbms_resouce_manager.calibrate_io reports very high values for max_pmbps (Doc ID 10180307.8) <-- Automatic Degree of Parallelism in 11.2.0.2 (Doc ID 1269321.1)
memlock setting http://translate.google.com/translate?sl=auto&tl=en&u=http://www.oracledatabase12g.com/archives/warning-even-exadata-has-a-wrong-memlock-setting.html
Flashcache missing, in status critical after multiple "Flash disk removed" alerts [ID 1383267.1]



__''Exadata HW failure''__

-- STORAGE CELLS - FAILED DISK
How To Gather/Backup ASM Metadata In A Formatted Manner? [ID 470211.1]
Script to Report the Percentage of Imbalance in all Mounted Diskgroups [ID 367445.1]
Oracle Exadata Diagnostic Information required for Disk Failures (Doc ID 761868.1)
Things to Check in ASM When Replacing an ONLINE disk from Exadata Storage Cell [ID 1326611.1]
Steps to manually create cell/grid disks on Exadata V2 if auto-create fails during disk replacement [ID 1281395.1]
High Redundancy Disk Groups in an Exadata Environment [ID 1339373.1]

{{{

1)Upload Sundiag output from Exadata Storage server having disk problems.
# /opt/oracle.SupportTools/sundiag.sh
Oracle Exadata Diagnostic Information required for Disk Failures (Doc ID 761868.1)
1.1) Serial Numbers for System Components
#/opt/oracle.SupportTools/CheckHWnFWProfile -S
1.2)Using cellcli provide the following
# cellcli -e "list griddisk attributes name,asmmodestatus,asmdeactivationoutcome"
1.3)#/usr/sbin/sosreport { need file created in /tmp (LINUX)]
1.4) Please upload a ILOM snapshot. Follow this note Diagnostic information for ILOM, ILO , LO100 issues (Doc ID 1062544.1) - How To Create a Snapshot With the ILOM Web Interface
}}}

-- COMPUTE NODE - FAILED DISK
Dedicated and Global Hot Spares for Exadata Compute Nodes in 11.2.2.3.2 (Doc ID 1339647.1)
Removing HotSpare Flag on replaced disk in Exadata storage cell [ID 1300310.1]
Marking a replaced disk as Hot Spare in Exadata Compute Node [ID 1289684.1]



''Compute Node / DB node''
How to Expand Exadata Compute Node File Systems (Doc ID 1357457.1)


''Exadata Migration''
Migrating an Oracle E-Business Suite Database to Oracle Exadata Database Machine [ID 1133355.1]



''Exadata DBFS''
Configuring DBFS on Oracle Database Machine (Doc ID 1054431.1)
Configuring a Database for DBFS on Oracle Database Machine (Doc ID 1191144.1)



''MegaCli''
http://www.myoraclesupports.com/content/oracle-sun-database-machine-diagnosability-and-troubleshooting-best-practices




''Exadata Resource Management''
Tool for Gathering I/O Resource Manager Metrics: metric_iorm.pl (Doc ID 1337265.1)
Scripts and Tips for Monitoring CPU Resource Manager (Doc ID 1338988.1)
Configuring Resource Manager for Mixed Workloads in a Database (Doc ID 1358709.1)


''Exadata 3rd party software or tools on compute nodes''
Installing Third Party Monitoring Tools in Exadata Environment [ID 1157343.1]


''ASM redundancy''
Understanding ASM Capacity and Reservation of Free Space in Exadata (Doc ID 1551288.1)





''MindMap - EMGC Monitoring'' http://www.evernote.com/shard/s48/sh/67300d1c-00c0-4d25-b113-a644eb3ba58a/33abcafa4dda6538de9c19f65930d022

''Oracle Database Machine Monitoring Best Practices [ID 1110675.1]''   -> deployment documents are here https://www.dropbox.com/s/95qv1ejspkrzavf
<<<
fo_ext.sql
emudm_netif_state.sh
emudm_ibconnect.sh
Sun_Oracle_Database_Machine_Monitoring_v120.pdf
OEM_Exadata_Dashboard_Deployment_v104.pdf
OEM_Exadata_Dashboard_Prerequisites_and_Overview_v100.pdf
<<<
''Patch Requirements for Setting up Monitoring and Administration for Exadata [ID 1323298.1]''  <-- take note of this first
http://www.oracle.com/technetwork/oem/grid-control/downloads/devlic-188770.html  <-- ''exadata plugin bundle link''
http://www.oracle.com/technetwork/oem/grid-control/downloads/exadata-plugin-194085.html   <-- ''exadata plugin link''
http://www.oracle.com/technetwork/oem/extensions/index.html  <-- ''extensions exchange link''


''em11.1''
A script to deploy the agents and the plugins to the compute nodes is available as patch 11852882
A script to create a Grid Control 11 environment from scratch is available as patch 11852869

''em12c''
The script to deploy the agents to the compute nodes is available as patch 12960596
The script to create a Cloud Control 12c environment from scratch is available as 12960610
The documentation for Exadata target discovery is located in the Cloud Control Administration Guide (chapter 28)

''ASR''
http://www.oracle.com/technetwork/server-storage/asr/documentation/exadata-asr-quick-install-330086.pdf

''MIBs''
How to Obtain MIBs for Exadata Database Machine Components [ID 1315086.1]

''agent failover'' http://blogs.oracle.com/XPSONHA/entry/failover_capability_for_plugins_exadata

check the white paper here [[MAA - Exadata Health and Resource Usage Monitoring]]




! check this
[[check patch level]]



''Software Updates, Best Practices and Notes'' https://blogs.oracle.com/XPSONHA/entry/software_updates_best_practices_and
Database Machine and Exadata Storage Server 11g Release 2 (11.2) Supported Versions ''[ID 888828.1]''
https://support.oracle.com/epmos/faces/ui/km/DocContentDisplay.jspx?_afrLoop=908824894646555&id=888828.1


''How to determine BP Level?'' https://forums.oracle.com/forums/thread.jspa?threadID=2224966
{{{
opatch lsinv -bugs_fixed | egrep -i 'bp|exadata|bundle'
/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch lsinventory -bugs_fixed | egrep -i 'bp|exadata|bundle'
OR
registry$history or dba_registry_history

col action format a10
col namespace format a10
col action_time format a30
col version format a10
col comments format a30
select * from dba_registry_history;


and then.. go to 
MOS 888828.1 --> Patch Release History for Exadata Database Machine Components --> Exadata Storage Server software patches
}}}


<<<
I think you want this doc: 11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 to 12.1.0.2 Grid Infrastructure and Database Upgrade on Exadata Database Machine running Oracle Linux (Doc ID 1681467.1).

And all of the patches' readme files have detailed installation instructions. Read every one of them.

It's working out all of the dependencies that is most complicated. There's a long string of dependencies that's going to come into play for you:
1. Grid Infrastructure 12.1.0.x requires Exadata storage software (ESS) 12.1.1.1.1. (You can use ESS 11.2.3.3.1 but you will lose some Exadata features.)
2. ESS 12.1.1.1.1 requires Oracle Linux (OL) 5.5 (kernel 2.6.18-194) or later, so you'll likely have to upgrade the OS on the database nodes.
3. ESS updates are full OS images, so you'll get an OL6 upgrade on your storage servers with the update.
4. Because of #3, you should update your database servers to OL6 in #2 rather than the minimum 5.5, so that everything is on OL6.
Some more links you'll want:
The starting point for Exadata patching is Information Center: Upgrading Oracle Exadata Database Machine(1364356.2).

There's a patching overview at Exadata Patching Overview and Patch Testing Guidelines.

To update the database server OS, follow Document 1284070.1.

Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)
11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 to 12.1.0.2 Grid Infrastructure and Database Upgrade on Exadata Database Machine running Oracle Linux (Doc ID 1681467.1)
Exadata Patching Overview and Patch Testing Guidelines (Doc ID 1262380.1)
Updating key software components on database hosts to match those on the cells (Doc ID 1284070.1)
<<<
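The dependency chain above can be sketched as plain version comparisons. This is a hypothetical helper for illustration only (the thresholds come from the note; `ver` and `check_upgrade_plan` are not any Oracle tool's API):

```python
# Hedged sketch: encode the upgrade dependency chain from the note above as
# version-tuple comparisons. The thresholds (12.1.1.1.1, 11.2.3.3.1,
# 2.6.18-194) are quoted from the note; the function names are made up.

def ver(s):
    """Parse a dotted version string like '12.1.0.2' into a comparable tuple."""
    return tuple(int(p) for p in s.split("."))

def check_upgrade_plan(gi_version, ess_version, db_node_kernel):
    """Return a list of warnings for a planned Grid Infrastructure upgrade."""
    warnings = []
    if ver(gi_version) >= ver("12.1.0.1"):
        # GI 12.1.0.x wants ESS 12.1.1.1.1; 11.2.3.3.1 works with feature loss
        if ver(ess_version) < ver("12.1.1.1.1"):
            if ver(ess_version) >= ver("11.2.3.3.1"):
                warnings.append("ESS %s is supported but some Exadata features are lost" % ess_version)
            else:
                warnings.append("ESS %s is below the supported minimum" % ess_version)
    # ESS 12.1.1.1.1 needs OL 5.5 (kernel 2.6.18-194) or later on db nodes
    if ver(db_node_kernel.split("-")[0]) < ver("2.6.18"):
        warnings.append("database node kernel %s predates 2.6.18-194" % db_node_kernel)
    return warnings
```

Always confirm the actual supported combinations against Doc ID 888828.1 rather than a hard-coded check like this.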





{{{
Boris Erlikhman http://goo.gl/2LvXU
                smart scan http://goo.gl/chy2s
                flash cache http://goo.gl/YlCA7
                smart flash log http://goo.gl/TwyRx
                write back cache http://goo.gl/2WCmw

Roger Macnicol http://goo.gl/oxxu7
                hcc http://goo.gl/9ptFe, http://goo.gl/3IOSi

Sue Lee http://goo.gl/6WCFw, http://goo.gl/bI0pd
                iorm http://goo.gl/BHIc1
}}}


''phydisk, lun, celldisk, griddisk mapping''
{{{
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:0">
<Attribute NAME="deviceId" VALUE="23"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJB8GGZ"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975845146"></Attribute>
<Attribute NAME="errMediaCount" VALUE="53"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="0"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJB8GGZ"></Attribute>
<Attribute NAME="name" VALUE="35:0"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>

<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_00_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sda3"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-793d-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sda"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_0"></Attribute>
<Attribute NAME="name" VALUE="CD_00_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070151040"></Attribute>
<Attribute NAME="size" VALUE="1832.59375G"></Attribute>
</Target>


CellCLI> list physicaldisk 35:0 detail
         name:                   35:0
         deviceId:               23
         diskType:               HardDisk
         enclosureDeviceId:      35
         errMediaCount:          53
         errOtherCount:          0
         foreignState:           false
         luns:                   0_0
         makeModel:              "HITACHI H7220AA30SUN2.0T"
         physicalFirmware:       JKAOA28A
         physicalInsertTime:     2010-05-15T21:10:45-05:00
         physicalInterface:      sata
         physicalSerial:         JK11D1YAJB8GGZ
         physicalSize:           1862.6559999994934G
         slotNumber:             0
         status:                 normal
         
CellCLI> list lun 0_0 detail
         name:                   0_0
         cellDisk:               CD_00_cell01
         deviceName:             /dev/sda
         diskType:               HardDisk
         id:                     0_0
         isSystemLun:            TRUE
         lunAutoCreate:          FALSE
         lunSize:                1861.712890625G
         lunUID:                 0_0
         physicalDrives:         35:0
         raidLevel:              0
         lunWriteCacheMode:      "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"
         status:                 normal         

CellCLI> list celldisk where name = CD_00_cell01 detail
         name:                   CD_00_cell01
         comment:
         creationTime:           2010-05-28T13:09:11-05:00
         deviceName:             /dev/sda
         devicePartition:        /dev/sda3
         diskType:               HardDisk
         errorCount:             0
         freeSpace:              0
         id:                     00000128-e01a-793d-0000-000000000000
         interleaving:           none
         lun:                    0_0
         raidLevel:              0
         size:                   1832.59375G
         status:                 normal

CellCLI> list griddisk where name = DATA_CD_00_cell01 detail
         name:                   DATA_CD_00_cell01
         availableTo:
         cellDisk:               CD_00_cell01
         comment:
         creationTime:           2010-06-14T17:41:12-05:00
         diskType:               HardDisk
         errorCount:             0
         id:                     00000129-389f-a070-0000-000000000000
         offset:                 32M
         size:                   1282.8125G
         status:                 active

CellCLI> list griddisk where name = RECO_CD_00_cell01 detail
         name:                   RECO_CD_00_cell01
         availableTo:
         cellDisk:               CD_00_cell01
         comment:
         creationTime:           2010-06-14T17:41:13-05:00
         diskType:               HardDisk
         errorCount:             0
         id:                     00000129-389f-a656-0000-000000000000
         offset:                 1741.328125G
         size:                   91.265625G
         status:                 active

CellCLI> list griddisk where name = STAGE_CD_00_cell01 detail
         name:                   STAGE_CD_00_cell01
         availableTo:
         cellDisk:               CD_00_cell01
         comment:
         creationTime:           2010-06-14T17:41:12-05:00
         diskType:               HardDisk
         errorCount:             0
         id:                     00000129-389f-a267-0000-000000000000
         offset:                 1282.859375G
         size:                   458.140625G
         status:                 active

CellCLI> list griddisk where name = SYSTEM_CD_00_cell01 detail
         name:                   SYSTEM_CD_00_cell01
         availableTo:
         cellDisk:               CD_00_cell01
         comment:
         creationTime:           2010-06-14T17:41:13-05:00
         diskType:               HardDisk
         errorCount:             0
         id:                     00000129-389f-a45f-0000-000000000000
         offset:                 1741G
         size:                   336M
         status:                 active

         
<Target TYPE="oracle.ossmgmt.ms.core.MSCell" NAME="enkcel01">
<Target TYPE="oracle.ossmgmt.ms.core.MSIDBPlan" NAME="enkcel01_IORMPLAN">
---                  
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:0">
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_0">
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_00_cell01">
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_00_cell01">
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCache" NAME="enkcel01_FLASHCACHE">
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="35fff6cd-001e-4ebf-8a48-a53b36b22fbf">


         
$ cat enkcel01-collectl.txt  | grep -i "target type" | grep CD_00_cell01
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_00_cell01">
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_00_cell01">
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_00_cell01">
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_00_cell01">
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_00_cell01">


list celldisk where name = CD_00_cell01 detail 
list griddisk where name = SYSTEM_CD_00_cell01 detail
}}}
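The attribute-per-line `detail` output above is easy to consume programmatically. A minimal sketch (plain Python, not an Oracle tool) that turns one block into a dict, so the physicaldisk -> lun -> celldisk -> griddisk chain can be followed by chaining lookups:

```python
# Minimal parser for CellCLI "list ... detail" output, which prints one
# "attribute:   value" pair per line. Values may themselves contain colons
# (e.g. "35:0" or ISO timestamps), so split only on the first colon.

def parse_detail(text):
    """Parse one CellCLI detail block into an attribute dict."""
    attrs = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            attrs[key.strip()] = value.strip()
    return attrs

# Sample taken from the LUN listing above; running parse_detail over the
# physicaldisk, lun, celldisk, and griddisk blocks in turn recovers the
# full mapping (physicalDrives -> luns -> cellDisk -> griddisk names).
lun = parse_detail("""
         name:                   0_0
         cellDisk:               CD_00_cell01
         physicalDrives:         35:0
""")
```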




''/opt/oracle/cell/cellsrv/deploy/config/cell_disk_config.xml config file''
{{{
/opt/oracle/cell/cellsrv/deploy/config/cell_disk_config.xml
<?xml version="1.0" encoding="UTF-8"?>
<Targets version="0.0">
<Target TYPE="oracle.ossmgmt.ms.core.MSCell" NAME="enkcel01">
<Attribute NAME="interconnect1" VALUE="bondib0"></Attribute>
<Attribute NAME="hwRetentionDays" VALUE="0"></Attribute>
<Attribute NAME="metricHistoryDays" VALUE="14"></Attribute>
<Attribute NAME="locatorLEDStatus" VALUE="off"></Attribute>
<Attribute NAME="bbuLastLearnCycleTime" VALUE="1310886021911"></Attribute>
<Attribute NAME="smtpFrom" VALUE="Enkitec Exadata"></Attribute>
<Attribute NAME="bbuLearnCycleTime" VALUE="1318834800000"></Attribute>
<Attribute NAME="snmpSubscriber" VALUE="((host=server,port=3872,community=public))"></Attribute>
<Attribute NAME="smtpServer" VALUE="server"></Attribute>
<Attribute NAME="sellastcollection" VALUE="1312830889000"></Attribute>
<Attribute NAME="cellVersion" VALUE="OSS_11.2.0.3.0_LINUX.X64_110520"></Attribute>
<Attribute NAME="management_ip" VALUE="0.0.0.0"></Attribute>
<Attribute NAME="id" VALUE="1017XFG056"></Attribute>
<Attribute NAME="notificationMethod" VALUE="mail,snmp"></Attribute>
<Attribute NAME="notificationPolicy" VALUE="critical"></Attribute>
<Attribute NAME="adrLastMineTime" VALUE="1313845212042"></Attribute>
<Attribute NAME="makeModel" VALUE="SUN MICROSYSTEMS SUN FIRE X4275 SERVER SATA"></Attribute>
<Attribute NAME="OEHistory" VALUE="3112.791028881073 5706.6555216653005 4995.632752835751 4996.394891858101 5121.992709875107 3480.762350344658 4270.716751503945 5062.652316808701 4987.846175163984 5702.463975906372 5222.782039854262 5016.283513784409 5752.083408117294 5852.781406164169 5710.337441308157 5712.052912848337 4999.097516179085 5714.517756598337 5431.95593547821 6157.329520089285 5004.541987478733 5720.458814076015 5722.174355370657 5723.889757156372 "></Attribute>
<Attribute NAME="smtpFromAddr" VALUE="x@server"></Attribute>
<Attribute NAME="realmName" VALUE="enkitec_realm"></Attribute>
<Attribute NAME="iormBoost" VALUE="0.0"></Attribute>
<Attribute NAME="offloadEfficiency" VALUE="5213.871233422416"></Attribute>
<Attribute NAME="name" VALUE="enkcel01"></Attribute>
<Attribute NAME="smtpToAddr" VALUE="x@server"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSIDBPlan" NAME="enkcel01_IORMPLAN">
<Attribute NAME="objective" VALUE="high_throughput"></Attribute>
<Attribute NAME="catPlan"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="dbPlan"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_00_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sda3"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-793d-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sda"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_0"></Attribute>
<Attribute NAME="name" VALUE="CD_00_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070151040"></Attribute>
<Attribute NAME="size" VALUE="1832.59375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_01_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdb3"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-8c16-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdb"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_1"></Attribute>
<Attribute NAME="name" VALUE="CD_01_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070155868"></Attribute>
<Attribute NAME="size" VALUE="1832.59375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_02_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdc"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-8e29-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdc"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_2"></Attribute>
<Attribute NAME="name" VALUE="CD_02_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070156404"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_03_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdd"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-904a-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdd"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_3"></Attribute>
<Attribute NAME="name" VALUE="CD_03_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070156954"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_04_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sde"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-9274-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sde"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_4"></Attribute>
<Attribute NAME="name" VALUE="CD_04_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070157500"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_05_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdf"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-948e-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="1152.8125G"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdf"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_5"></Attribute>
<Attribute NAME="name" VALUE="CD_05_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070158041"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_06_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdg"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-96a9-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdg"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_6"></Attribute>
<Attribute NAME="name" VALUE="CD_06_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070158585"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_07_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdh"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-98ce-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdh"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_7"></Attribute>
<Attribute NAME="name" VALUE="CD_07_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070159129"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_08_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdi"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-9aec-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdi"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_8"></Attribute>
<Attribute NAME="name" VALUE="CD_08_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070159672"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_09_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdj"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-9cfe-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdj"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_9"></Attribute>
<Attribute NAME="name" VALUE="CD_09_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070160199"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_10_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdk"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-9f1b-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdk"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_10"></Attribute>
<Attribute NAME="name" VALUE="CD_10_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070160741"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="CD_11_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdl"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-a13e-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdl"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="0_11"></Attribute>
<Attribute NAME="name" VALUE="CD_11_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070161295"></Attribute>
<Attribute NAME="size" VALUE="1861.703125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_00_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdr"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-a3b6-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdr"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="1_0"></Attribute>
<Attribute NAME="name" VALUE="FD_00_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070161933"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_00_enkcel01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdaa"></Attribute>
<Attribute NAME="id" VALUE="1b0ee672-a892-4f58-9dd5-04f9f6aee3e9"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdaa"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="2_1"></Attribute>
<Attribute NAME="name" VALUE="FD_00_enkcel01"></Attribute>
<Attribute NAME="creationTime" VALUE="1313091948052"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_01_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sds"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-a633-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sds"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="1_1"></Attribute>
<Attribute NAME="name" VALUE="FD_01_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070162567"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_02_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdt"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-a8b1-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdt"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="1_2"></Attribute>
<Attribute NAME="name" VALUE="FD_02_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070163206"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_03_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdu"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-ab2d-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdu"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="1_3"></Attribute>
<Attribute NAME="name" VALUE="FD_03_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070163842"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_04_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdz"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-ada7-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdz"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="2_0"></Attribute>
<Attribute NAME="name" VALUE="FD_04_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070164476"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_06_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdab"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-b297-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdab"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="2_2"></Attribute>
<Attribute NAME="name" VALUE="FD_06_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070165741"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_07_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdac"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-b512-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdac"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="2_3"></Attribute>
<Attribute NAME="name" VALUE="FD_07_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070166377"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_08_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdn"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-b78f-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdn"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="4_0"></Attribute>
<Attribute NAME="name" VALUE="FD_08_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070167015"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_09_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdo"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-ba0e-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdo"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="4_1"></Attribute>
<Attribute NAME="name" VALUE="FD_09_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070167653"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_10_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdp"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-bc8b-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdp"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="4_2"></Attribute>
<Attribute NAME="name" VALUE="FD_10_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070168288"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_11_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdq"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-bf0a-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdq"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="4_3"></Attribute>
<Attribute NAME="name" VALUE="FD_11_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070168926"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_12_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdv"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-c182-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdv"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="5_0"></Attribute>
<Attribute NAME="name" VALUE="FD_12_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070169561"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_13_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdw"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-c3fe-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdw"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="5_1"></Attribute>
<Attribute NAME="name" VALUE="FD_13_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070170198"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_14_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdx"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-c677-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdx"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="5_2"></Attribute>
<Attribute NAME="name" VALUE="FD_14_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070170828"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSCellDisk" NAME="FD_15_cell01">
<Attribute NAME="interleaving" VALUE="none"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="devicePartition" VALUE="/dev/sdy"></Attribute>
<Attribute NAME="id" VALUE="00000128-e01a-c8ef-0000-000000000000"></Attribute>
<Attribute NAME="freeSpace" VALUE="0"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdy"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="lun" VALUE="5_3"></Attribute>
<Attribute NAME="name" VALUE="FD_15_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1275070171459"></Attribute>
<Attribute NAME="size" VALUE="22.875G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_00_cell01">
<Attribute NAME="cellDisk" VALUE="CD_00_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a070-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_00_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272349"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_01_cell01">
<Attribute NAME="cellDisk" VALUE="CD_01_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a09e-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_01_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272400"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_02_cell01">
<Attribute NAME="cellDisk" VALUE="CD_02_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a0d2-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_02_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272431"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_03_cell01">
<Attribute NAME="cellDisk" VALUE="CD_03_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a0f0-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_03_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272461"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_04_cell01">
<Attribute NAME="cellDisk" VALUE="CD_04_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a10e-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_04_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272503"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_06_cell01">
<Attribute NAME="cellDisk" VALUE="CD_06_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a159-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_06_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272565"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_07_cell01">
<Attribute NAME="cellDisk" VALUE="CD_07_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a176-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_07_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272594"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_08_cell01">
<Attribute NAME="cellDisk" VALUE="CD_08_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a193-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_08_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272616"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_09_cell01">
<Attribute NAME="cellDisk" VALUE="CD_09_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a1a9-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_09_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272640"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_10_cell01">
<Attribute NAME="cellDisk" VALUE="CD_10_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a1c2-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_10_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272671"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="DATA_CD_11_cell01">
<Attribute NAME="cellDisk" VALUE="CD_11_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a1e0-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="DATA_CD_11_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272700"></Attribute>
<Attribute NAME="size" VALUE="1282.8125G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_00_cell01">
<Attribute NAME="cellDisk" VALUE="CD_00_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a656-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_00_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273818"></Attribute>
<Attribute NAME="size" VALUE="91.265625G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_01_cell01">
<Attribute NAME="cellDisk" VALUE="CD_01_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a65b-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_01_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273822"></Attribute>
<Attribute NAME="size" VALUE="91.265625G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_02_cell01">
<Attribute NAME="cellDisk" VALUE="CD_02_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a65f-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_02_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273827"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_03_cell01">
<Attribute NAME="cellDisk" VALUE="CD_03_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a664-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_03_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273831"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_04_cell01">
<Attribute NAME="cellDisk" VALUE="CD_04_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a668-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_04_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273836"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_06_cell01">
<Attribute NAME="cellDisk" VALUE="CD_06_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a672-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_06_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273845"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_07_cell01">
<Attribute NAME="cellDisk" VALUE="CD_07_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a676-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_07_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273850"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_08_cell01">
<Attribute NAME="cellDisk" VALUE="CD_08_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a67b-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_08_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273855"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_09_cell01">
<Attribute NAME="cellDisk" VALUE="CD_09_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a680-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_09_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273860"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_10_cell01">
<Attribute NAME="cellDisk" VALUE="CD_10_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a685-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_10_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273864"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="RECO_CD_11_cell01">
<Attribute NAME="cellDisk" VALUE="CD_11_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a689-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="RECO_CD_11_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273869"></Attribute>
<Attribute NAME="size" VALUE="120.375G"></Attribute>
<Attribute NAME="offset" VALUE="1741.328125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SCRATCH_CD_05_cell01">
<Attribute NAME="cellDisk" VALUE="CD_05_cell01"></Attribute>
<Attribute NAME="id" VALUE="9fd44ab2-a674-40ba-aa4f-fb32d380c573"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SCRATCH_CD_05_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1293210663053"></Attribute>
<Attribute NAME="size" VALUE="578.84375G"></Attribute>
<Attribute NAME="offset" VALUE="32M"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SMITHERS_CD_05_cell01">
<Attribute NAME="cellDisk" VALUE="CD_05_cell01"></Attribute>
<Attribute NAME="id" VALUE="ee413b30-fe57-47a3-b1ad-815fa25b471c"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SMITHERS_CD_05_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1297885099027"></Attribute>
<Attribute NAME="size" VALUE="100G"></Attribute>
<Attribute NAME="offset" VALUE="578.890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_00_cell01">
<Attribute NAME="cellDisk" VALUE="CD_00_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a267-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_00_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272811"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_01_cell01">
<Attribute NAME="cellDisk" VALUE="CD_01_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a26c-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_01_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272816"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_02_cell01">
<Attribute NAME="cellDisk" VALUE="CD_02_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a271-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_02_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272822"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_03_cell01">
<Attribute NAME="cellDisk" VALUE="CD_03_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a277-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_03_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272828"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_04_cell01">
<Attribute NAME="cellDisk" VALUE="CD_04_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a27d-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_04_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272833"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_06_cell01">
<Attribute NAME="cellDisk" VALUE="CD_06_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a288-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_06_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272844"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_07_cell01">
<Attribute NAME="cellDisk" VALUE="CD_07_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a28d-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_07_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272850"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_08_cell01">
<Attribute NAME="cellDisk" VALUE="CD_08_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a293-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_08_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272856"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_09_cell01">
<Attribute NAME="cellDisk" VALUE="CD_09_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a299-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_09_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272861"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_10_cell01">
<Attribute NAME="cellDisk" VALUE="CD_10_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a29e-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_10_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272867"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="STAGE_CD_11_cell01">
<Attribute NAME="cellDisk" VALUE="CD_11_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a2a4-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="STAGE_CD_11_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555272872"></Attribute>
<Attribute NAME="size" VALUE="458.140625G"></Attribute>
<Attribute NAME="offset" VALUE="1282.859375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SWING_CD_05_cell01">
<Attribute NAME="cellDisk" VALUE="CD_05_cell01"></Attribute>
<Attribute NAME="id" VALUE="aaf8a3bc-7f81-45f2-b091-5bf73c93d972"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SWING_CD_05_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1298320563479"></Attribute>
<Attribute NAME="size" VALUE="30G"></Attribute>
<Attribute NAME="offset" VALUE="678.890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_00_cell01">
<Attribute NAME="cellDisk" VALUE="CD_00_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a45f-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_00_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273315"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_01_cell01">
<Attribute NAME="cellDisk" VALUE="CD_01_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a464-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_01_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273318"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_02_cell01">
<Attribute NAME="cellDisk" VALUE="CD_02_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a468-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_02_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273323"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_03_cell01">
<Attribute NAME="cellDisk" VALUE="CD_03_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a46c-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_03_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273327"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_04_cell01">
<Attribute NAME="cellDisk" VALUE="CD_04_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a470-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_04_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273332"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_06_cell01">
<Attribute NAME="cellDisk" VALUE="CD_06_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a479-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_06_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273341"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_07_cell01">
<Attribute NAME="cellDisk" VALUE="CD_07_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a47e-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_07_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273345"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_08_cell01">
<Attribute NAME="cellDisk" VALUE="CD_08_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a482-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_08_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273349"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_09_cell01">
<Attribute NAME="cellDisk" VALUE="CD_09_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a486-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_09_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273354"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_10_cell01">
<Attribute NAME="cellDisk" VALUE="CD_10_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a48b-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_10_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273358"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSNetDisk" NAME="SYSTEM_CD_11_cell01">
<Attribute NAME="cellDisk" VALUE="CD_11_cell01"></Attribute>
<Attribute NAME="id" VALUE="00000129-389f-a48f-0000-000000000000"></Attribute>
<Attribute NAME="resizeError"></Attribute>
<Attribute NAME="availableTo"></Attribute>
<Attribute NAME="comment"></Attribute>
<Attribute NAME="lastResizeStatus" VALUE="complete"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="errorCount" VALUE="0"></Attribute>
<Attribute NAME="name" VALUE="SYSTEM_CD_11_cell01"></Attribute>
<Attribute NAME="creationTime" VALUE="1276555273363"></Attribute>
<Attribute NAME="size" VALUE="336M"></Attribute>
<Attribute NAME="offset" VALUE="1741G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:0">
<Attribute NAME="deviceId" VALUE="23"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJB8GGZ"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975845146"></Attribute>
<Attribute NAME="errMediaCount" VALUE="61"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="0"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJB8GGZ"></Attribute>
<Attribute NAME="name" VALUE="35:0"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:1">
<Attribute NAME="deviceId" VALUE="24"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJB4V0Z"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975846476"></Attribute>
<Attribute NAME="errMediaCount" VALUE="8"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="1"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJB4V0Z"></Attribute>
<Attribute NAME="name" VALUE="35:1"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:2">
<Attribute NAME="deviceId" VALUE="25"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJAZMMZ"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975847789"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="2"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJAZMMZ"></Attribute>
<Attribute NAME="name" VALUE="35:2"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:3">
<Attribute NAME="deviceId" VALUE="26"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJ7JX2Z"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975849109"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="3"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJ7JX2Z"></Attribute>
<Attribute NAME="name" VALUE="35:3"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:4">
<Attribute NAME="deviceId" VALUE="27"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJ60R8Z"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975850399"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="4"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJ60R8Z"></Attribute>
<Attribute NAME="name" VALUE="35:4"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:5">
<Attribute NAME="deviceId" VALUE="28"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJB4J8Z"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975851693"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="5"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJB4J8Z"></Attribute>
<Attribute NAME="name" VALUE="35:5"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:6">
<Attribute NAME="deviceId" VALUE="29"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJ7JXGZ"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975852946"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="6"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJ7JXGZ"></Attribute>
<Attribute NAME="name" VALUE="35:6"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:7">
<Attribute NAME="deviceId" VALUE="30"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJB4E5Z"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975854177"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="7"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJB4E5Z"></Attribute>
<Attribute NAME="name" VALUE="35:7"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:8">
<Attribute NAME="deviceId" VALUE="31"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJ8TY3Z"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975855496"></Attribute>
<Attribute NAME="errMediaCount" VALUE="506"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="8"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJ8TY3Z"></Attribute>
<Attribute NAME="name" VALUE="35:8"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:9">
<Attribute NAME="deviceId" VALUE="32"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJ8TXKZ"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975856931"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="9"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJ8TXKZ"></Attribute>
<Attribute NAME="name" VALUE="35:9"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:10">
<Attribute NAME="deviceId" VALUE="33"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJ8TYLZ"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975858176"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="10"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJ8TYLZ"></Attribute>
<Attribute NAME="name" VALUE="35:10"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="35:11">
<Attribute NAME="deviceId" VALUE="34"></Attribute>
<Attribute NAME="physicalSize" VALUE="1862.6559999994934G"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="id" VALUE="JK11D1YAJAZNKZ"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1273975859476"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="slotNumber" VALUE="11"></Attribute>
<Attribute NAME="physicalSerial" VALUE="JK11D1YAJAZNKZ"></Attribute>
<Attribute NAME="name" VALUE="35:11"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="HITACHI H7220AA30SUN2.0T"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_1_0">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JC3"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249971"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 1; FDOM: 0"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JC3"></Attribute>
<Attribute NAME="name" VALUE="FLASH_1_0"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_1_1">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JYG"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249972"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 1; FDOM: 1"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JYG"></Attribute>
<Attribute NAME="name" VALUE="FLASH_1_1"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_1_2">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JV9"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249972"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 1; FDOM: 2"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JV9"></Attribute>
<Attribute NAME="name" VALUE="FLASH_1_2"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_1_3">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02J93"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249972"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 1; FDOM: 3"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02J93"></Attribute>
<Attribute NAME="name" VALUE="FLASH_1_3"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_2_0">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JFK"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249972"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 2; FDOM: 0"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JFK"></Attribute>
<Attribute NAME="name" VALUE="FLASH_2_0"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_2_1">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JFL"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249973"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 2; FDOM: 1"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JFL"></Attribute>
<Attribute NAME="name" VALUE="FLASH_2_1"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_2_2">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JF7"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249973"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 2; FDOM: 2"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JF7"></Attribute>
<Attribute NAME="name" VALUE="FLASH_2_2"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_2_3">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JF8"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249973"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 2; FDOM: 3"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JF8"></Attribute>
<Attribute NAME="name" VALUE="FLASH_2_3"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_4_0">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02HP5"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249973"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 4; FDOM: 0"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02HP5"></Attribute>
<Attribute NAME="name" VALUE="FLASH_4_0"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_4_1">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02HNN"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249973"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 4; FDOM: 1"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02HNN"></Attribute>
<Attribute NAME="name" VALUE="FLASH_4_1"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_4_2">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02HP2"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249974"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 4; FDOM: 2"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02HP2"></Attribute>
<Attribute NAME="name" VALUE="FLASH_4_2"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_4_3">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02HP4"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249974"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 4; FDOM: 3"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02HP4"></Attribute>
<Attribute NAME="name" VALUE="FLASH_4_3"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_5_0">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JUD"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249974"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 5; FDOM: 0"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JUD"></Attribute>
<Attribute NAME="name" VALUE="FLASH_5_0"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_5_1">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JVF"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249975"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 5; FDOM: 1"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JVF"></Attribute>
<Attribute NAME="name" VALUE="FLASH_5_1"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_5_2">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JAP"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249975"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 5; FDOM: 2"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JAP"></Attribute>
<Attribute NAME="name" VALUE="FLASH_5_2"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSPhysDiskImpl" NAME="FLASH_5_3">
<Attribute NAME="physicalSize" VALUE="22.8880615234375G"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="id" VALUE="1014M02JVH"></Attribute>
<Attribute NAME="errCmdTimeoutCount" VALUE="0"></Attribute>
<Attribute NAME="physicalInsertTime" VALUE="1304701249975"></Attribute>
<Attribute NAME="isReenableLunDone" VALUE="FALSE"></Attribute>
<Attribute NAME="errHardReadCount" VALUE="0"></Attribute>
<Attribute NAME="errMediaCount" VALUE="0"></Attribute>
<Attribute NAME="errHardWriteCount" VALUE="0"></Attribute>
<Attribute NAME="sectorRemapCount" VALUE="0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="doZap" VALUE="FALSE"></Attribute>
<Attribute NAME="slotNumber" VALUE="PCI Slot: 5; FDOM: 3"></Attribute>
<Attribute NAME="physicalSerial" VALUE="1014M02JVH"></Attribute>
<Attribute NAME="name" VALUE="FLASH_5_3"></Attribute>
<Attribute NAME="errOtherCount" VALUE="0"></Attribute>
<Attribute NAME="makeModel" VALUE="MARVELL SD88SA02"></Attribute>
<Attribute NAME="errSeekCount" VALUE="0"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_0">
<Attribute NAME="cellDisk" VALUE="CD_00_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_0"></Attribute>
<Attribute NAME="id" VALUE="0_0"></Attribute>
<Attribute NAME="isSystemLun" VALUE="TRUE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJB8GGZ"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sda"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_0"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_1">
<Attribute NAME="cellDisk" VALUE="CD_01_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_1"></Attribute>
<Attribute NAME="id" VALUE="0_1"></Attribute>
<Attribute NAME="isSystemLun" VALUE="TRUE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJB4V0Z"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdb"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_1"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_2">
<Attribute NAME="cellDisk" VALUE="CD_02_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_2"></Attribute>
<Attribute NAME="id" VALUE="0_2"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJAZMMZ"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdc"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_2"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_3">
<Attribute NAME="cellDisk" VALUE="CD_03_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_3"></Attribute>
<Attribute NAME="id" VALUE="0_3"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJ7JX2Z"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdd"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_3"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_4">
<Attribute NAME="cellDisk" VALUE="CD_04_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_4"></Attribute>
<Attribute NAME="id" VALUE="0_4"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJ60R8Z"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sde"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_4"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_5">
<Attribute NAME="cellDisk" VALUE="CD_05_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_5"></Attribute>
<Attribute NAME="id" VALUE="0_5"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJB4J8Z"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdf"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_5"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_6">
<Attribute NAME="cellDisk" VALUE="CD_06_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_6"></Attribute>
<Attribute NAME="id" VALUE="0_6"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJ7JXGZ"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdg"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_6"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_7">
<Attribute NAME="cellDisk" VALUE="CD_07_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_7"></Attribute>
<Attribute NAME="id" VALUE="0_7"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJB4E5Z"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdh"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_7"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_8">
<Attribute NAME="cellDisk" VALUE="CD_08_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_8"></Attribute>
<Attribute NAME="id" VALUE="0_8"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJ8TY3Z"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdi"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_8"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_9">
<Attribute NAME="cellDisk" VALUE="CD_09_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_9"></Attribute>
<Attribute NAME="id" VALUE="0_9"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJ8TXKZ"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdj"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_9"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_10">
<Attribute NAME="cellDisk" VALUE="CD_10_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_10"></Attribute>
<Attribute NAME="id" VALUE="0_10"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJ8TYLZ"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdk"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_10"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="0_11">
<Attribute NAME="cellDisk" VALUE="CD_11_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="HardDisk"></Attribute>
<Attribute NAME="lunUID" VALUE="0_11"></Attribute>
<Attribute NAME="id" VALUE="0_11"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="raidLevel" VALUE="0"></Attribute>
<Attribute NAME="physicalDrives" VALUE="JK11D1YAJAZNKZ"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdl"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="0_11"></Attribute>
<Attribute NAME="lunSize" VALUE="1861.712890625G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="1_0">
<Attribute NAME="physicalDrives" VALUE="1014M02JC3"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_00_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdr"></Attribute>
<Attribute NAME="id" VALUE="1_0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="1_0"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="1_1">
<Attribute NAME="physicalDrives" VALUE="1014M02JYG"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_01_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sds"></Attribute>
<Attribute NAME="id" VALUE="1_1"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="1_1"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="1_2">
<Attribute NAME="physicalDrives" VALUE="1014M02JV9"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_02_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdt"></Attribute>
<Attribute NAME="id" VALUE="1_2"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="1_2"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="1_3">
<Attribute NAME="physicalDrives" VALUE="1014M02J93"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_03_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdu"></Attribute>
<Attribute NAME="id" VALUE="1_3"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="1_3"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="2_0">
<Attribute NAME="physicalDrives" VALUE="1014M02JFK"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_04_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdz"></Attribute>
<Attribute NAME="id" VALUE="2_0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="2_0"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="2_1">
<Attribute NAME="physicalDrives" VALUE="1014M02JFL"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_00_enkcel01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdaa"></Attribute>
<Attribute NAME="id" VALUE="2_1"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="2_1"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="2_2">
<Attribute NAME="physicalDrives" VALUE="1014M02JF7"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_06_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdab"></Attribute>
<Attribute NAME="id" VALUE="2_2"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="2_2"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="2_3">
<Attribute NAME="physicalDrives" VALUE="1014M02JF8"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_07_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdac"></Attribute>
<Attribute NAME="id" VALUE="2_3"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="2_3"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="4_0">
<Attribute NAME="physicalDrives" VALUE="1014M02HP5"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_08_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdn"></Attribute>
<Attribute NAME="id" VALUE="4_0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="4_0"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="4_1">
<Attribute NAME="physicalDrives" VALUE="1014M02HNN"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_09_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdo"></Attribute>
<Attribute NAME="id" VALUE="4_1"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="4_1"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="4_2">
<Attribute NAME="physicalDrives" VALUE="1014M02HP2"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_10_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdp"></Attribute>
<Attribute NAME="id" VALUE="4_2"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="4_2"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="4_3">
<Attribute NAME="physicalDrives" VALUE="1014M02HP4"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_11_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdq"></Attribute>
<Attribute NAME="id" VALUE="4_3"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="4_3"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="5_0">
<Attribute NAME="physicalDrives" VALUE="1014M02JUD"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_12_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdv"></Attribute>
<Attribute NAME="id" VALUE="5_0"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="5_0"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="5_1">
<Attribute NAME="physicalDrives" VALUE="1014M02JVF"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_13_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdw"></Attribute>
<Attribute NAME="id" VALUE="5_1"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="5_1"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="5_2">
<Attribute NAME="physicalDrives" VALUE="1014M02JAP"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_14_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdx"></Attribute>
<Attribute NAME="id" VALUE="5_2"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="5_2"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.hwadapter.diskadp.MSLUNImpl" NAME="5_3">
<Attribute NAME="physicalDrives" VALUE="1014M02JVH"></Attribute>
<Attribute NAME="cellDisk" VALUE="FD_15_cell01"></Attribute>
<Attribute NAME="diskType" VALUE="FlashDisk"></Attribute>
<Attribute NAME="deviceName" VALUE="/dev/sdy"></Attribute>
<Attribute NAME="id" VALUE="5_3"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="lunAutoCreate" VALUE="FALSE"></Attribute>
<Attribute NAME="isSystemLun" VALUE="FALSE"></Attribute>
<Attribute NAME="name" VALUE="5_3"></Attribute>
<Attribute NAME="lunSize" VALUE="22.8880615234375G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCache" NAME="enkcel01_FLASHCACHE">
<Attribute NAME="cellDisk" VALUE="FD_10_cell01,FD_02_cell01,FD_06_cell01,FD_01_cell01,FD_12_cell01,FD_03_cell01,FD_15_cell01,FD_04_cell01,FD_09_cell01,FD_14_cell01,FD_00_enkcel01,FD_11_cell01,FD_08_cell01,FD_00_cell01,FD_07_cell01,FD_13_cell01"></Attribute>
<Attribute NAME="degradedCelldisks"></Attribute>
<Attribute NAME="effectiveCacheSize" VALUE="365.25G"></Attribute>
<Attribute NAME="id" VALUE="8347628f-365d-436b-8dc0-30162514ae6a"></Attribute>
<Attribute NAME="status" VALUE="normal"></Attribute>
<Attribute NAME="name" VALUE="enkcel01_FLASHCACHE"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="365.25G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="35fff6cd-001e-4ebf-8a48-a53b36b22fbf">
<Attribute NAME="cellDisk" VALUE="FD_10_cell01"></Attribute>
<Attribute NAME="id" VALUE="35fff6cd-001e-4ebf-8a48-a53b36b22fbf"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="35fff6cd-001e-4ebf-8a48-a53b36b22fbf"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="914968cf-bfdf-48e8-98f7-5159af6347cd">
<Attribute NAME="cellDisk" VALUE="FD_02_cell01"></Attribute>
<Attribute NAME="id" VALUE="914968cf-bfdf-48e8-98f7-5159af6347cd"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="914968cf-bfdf-48e8-98f7-5159af6347cd"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="9c7cf975-3291-4fa5-8527-7991e4e8d868">
<Attribute NAME="cellDisk" VALUE="FD_06_cell01"></Attribute>
<Attribute NAME="id" VALUE="9c7cf975-3291-4fa5-8527-7991e4e8d868"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="9c7cf975-3291-4fa5-8527-7991e4e8d868"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="db895100-a9d4-427c-960a-940a43bcda6d">
<Attribute NAME="cellDisk" VALUE="FD_01_cell01"></Attribute>
<Attribute NAME="id" VALUE="db895100-a9d4-427c-960a-940a43bcda6d"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="db895100-a9d4-427c-960a-940a43bcda6d"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="a86c5ab5-9b93-49cf-832b-125893ac23ee">
<Attribute NAME="cellDisk" VALUE="FD_12_cell01"></Attribute>
<Attribute NAME="id" VALUE="a86c5ab5-9b93-49cf-832b-125893ac23ee"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="a86c5ab5-9b93-49cf-832b-125893ac23ee"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="15d1c631-58fd-47d0-aa08-70328d97e07a">
<Attribute NAME="cellDisk" VALUE="FD_03_cell01"></Attribute>
<Attribute NAME="id" VALUE="15d1c631-58fd-47d0-aa08-70328d97e07a"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="15d1c631-58fd-47d0-aa08-70328d97e07a"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="d0a06d79-d65d-485a-b5a9-d8db55a07a4b">
<Attribute NAME="cellDisk" VALUE="FD_15_cell01"></Attribute>
<Attribute NAME="id" VALUE="d0a06d79-d65d-485a-b5a9-d8db55a07a4b"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="d0a06d79-d65d-485a-b5a9-d8db55a07a4b"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="e8542b03-e2e8-4cc6-8bdc-1baf88da17cf">
<Attribute NAME="cellDisk" VALUE="FD_04_cell01"></Attribute>
<Attribute NAME="id" VALUE="e8542b03-e2e8-4cc6-8bdc-1baf88da17cf"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="e8542b03-e2e8-4cc6-8bdc-1baf88da17cf"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="ef3893ef-d779-4a8e-b738-f7c7a85a7a65">
<Attribute NAME="cellDisk" VALUE="FD_09_cell01"></Attribute>
<Attribute NAME="id" VALUE="ef3893ef-d779-4a8e-b738-f7c7a85a7a65"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="ef3893ef-d779-4a8e-b738-f7c7a85a7a65"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="f01bf8e0-3e59-4c2d-bdc5-f83b230c72b4">
<Attribute NAME="cellDisk" VALUE="FD_14_cell01"></Attribute>
<Attribute NAME="id" VALUE="f01bf8e0-3e59-4c2d-bdc5-f83b230c72b4"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="f01bf8e0-3e59-4c2d-bdc5-f83b230c72b4"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="124ef0dc-15b6-4f35-914d-8e7af9c2ff7c">
<Attribute NAME="cellDisk" VALUE="FD_00_enkcel01"></Attribute>
<Attribute NAME="id" VALUE="124ef0dc-15b6-4f35-914d-8e7af9c2ff7c"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="124ef0dc-15b6-4f35-914d-8e7af9c2ff7c"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="bfa93ed1-0965-4b54-a0a2-3d9625fa345d">
<Attribute NAME="cellDisk" VALUE="FD_11_cell01"></Attribute>
<Attribute NAME="id" VALUE="bfa93ed1-0965-4b54-a0a2-3d9625fa345d"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="bfa93ed1-0965-4b54-a0a2-3d9625fa345d"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="ccd6ae62-5676-4e86-aa62-126a9a5d8876">
<Attribute NAME="cellDisk" VALUE="FD_08_cell01"></Attribute>
<Attribute NAME="id" VALUE="ccd6ae62-5676-4e86-aa62-126a9a5d8876"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="ccd6ae62-5676-4e86-aa62-126a9a5d8876"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="afabcda1-bb4d-4c46-96e0-e3f8245ab1e9">
<Attribute NAME="cellDisk" VALUE="FD_00_cell01"></Attribute>
<Attribute NAME="id" VALUE="afabcda1-bb4d-4c46-96e0-e3f8245ab1e9"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="afabcda1-bb4d-4c46-96e0-e3f8245ab1e9"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="14ac943a-5589-4d86-bb93-530b1c7b809f">
<Attribute NAME="cellDisk" VALUE="FD_07_cell01"></Attribute>
<Attribute NAME="id" VALUE="14ac943a-5589-4d86-bb93-530b1c7b809f"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="14ac943a-5589-4d86-bb93-530b1c7b809f"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
<Target TYPE="oracle.ossmgmt.ms.core.MSFlashCachePart" NAME="39555b4d-2503-48fc-a4bb-509924cd3ddd">
<Attribute NAME="cellDisk" VALUE="FD_13_cell01"></Attribute>
<Attribute NAME="id" VALUE="39555b4d-2503-48fc-a4bb-509924cd3ddd"></Attribute>
<Attribute NAME="status" VALUE="active"></Attribute>
<Attribute NAME="name" VALUE="39555b4d-2503-48fc-a4bb-509924cd3ddd"></Attribute>
<Attribute NAME="creationTime" VALUE="1313092123392"></Attribute>
<Attribute NAME="size" VALUE="22.828125G"></Attribute>
</Target>
</Targets>
}}}
http://blogs.oracle.com/ATeamExalogicCAF/entry/exalogic_networking_part_1


! Exalogic OBE Series:
''Oracle Exalogic: Storage Appliance'' http://apex.oracle.com/pls/apex/f?p=44785:24:2875967671743702::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5110,29







http://msdn.microsoft.com/en-us/library/ff700515.aspx
http://betterandfasterdecisions.com/2011/01/10/improving-calculation-performance-in-excelfinal/
http://betterandfasterdecisions.com/2011/01/07/improving-calculation-performance-in-excel/
http://betterandfasterdecisions.com/2011/01/08/improving-calculation-performance-in-excelpart-2/
http://betterandfasterdecisions.com/2011/01/09/improving-calculation-performance-in-excelpart-3/
http://social.msdn.microsoft.com/Forums/en/exceldev/thread/b7c63f9d-e373-4455-a793-f58707353032
http://www.databison.com/index.php/excel-slow-to-respond-avoiding-mistakes-that-make-excel-slow-down-to-a-crawl/

''Scripts''
http://www.expertoracleexadata.com/scripts/

''Errata''
http://www.expertoracleexadata.com/errata/
http://www.apress.com/9781430233923

http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/twp-explain-the-explain-plan-052011-393674.pdf

{{{
In order to determine if you are looking at a good execution plan or not, you need to understand how
the Optimizer determined the plan in the first place. You should also be able to look at the execution
plan and assess if the Optimizer has made any mistake in its estimations or calculations, leading to a
suboptimal plan. The components to assess are:
• Cardinality – estimate of the number of rows coming out of each of the operations.
• Access method – the way in which the data is being accessed, via either a table scan or index access.
• Join method – the method (e.g., hash, sort-merge, etc.) used to join tables with each other.
• Join type – the type of join (e.g., outer, anti, semi, etc.).
• Join order – the order in which the tables are joined to each other.
• Partition pruning – are only the necessary partitions being accessed to answer the query?
• Parallel execution – in case of parallel execution, is each operation in the plan being
  conducted in parallel? Is the right data redistribution method being used?
}}}
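A quick way to check the cardinality item above is to compare the optimizer's estimate against the actual rows at runtime. A sketch using the standard {{{gather_plan_statistics}}} hint and {{{DBMS_XPLAN.DISPLAY_CURSOR}}} (the table names are placeholders):

{{{
-- run the statement with rowsource statistics collection enabled
select /*+ gather_plan_statistics */ count(*)
from sales s, customers c
where s.cust_id = c.cust_id;

-- then pull the plan of the last execution and compare
-- E-Rows (optimizer estimate) vs A-Rows (actual) on each plan line
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
}}}

Large gaps between E-Rows and A-Rows on a plan line usually point at the estimation mistakes described above.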

How to understand connect by explain plans [ID 729201.1]






''watch this first''
Assign a Macro to a Button, Check box, or any object in Microsoft Excel http://www.youtube.com/watch?v=XmOk1QW6T0g&feature=relmfu
Insert Macros into an Excel Workbook or File and Delete Macros from Excel http://www.youtube.com/watch?v=8pfdm7xs3QE



http://www.ozgrid.com/forum/showthread.php?t=76720
{{{
Sub testexport()
     '
     ' export Macro: copy a range into a new workbook and save it as CSV

    Range("A3:A5").Select
    Selection.Copy
    Workbooks.Add
    ActiveSheet.Paste
    ActiveWorkbook.SaveAs Filename:= _
    "C:\Documents and Settings\Simon\My Documents\Book2.csv" _
    , FileFormat:=xlCSV, CreateBackup:=False
    Application.DisplayAlerts = False   ' suppress the "keep workbook in CSV format?" prompt
    ActiveWorkbook.Close
    Application.DisplayAlerts = True

End Sub
}}}

another source 
http://www.mrexcel.com/forum/showthread.php?18262-Select-variable-range-then-save-as-csv-Macro!
http://www.pcreview.co.uk/forums/excel-2007warning-following-fetures-cannot-saved-macro-free-workbook-t3037442.html
http://awads.net/wp/2011/05/17/shell-script-output-to-oracle-database-via-external-table/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+EddieAwadsFeed+%28Eddie+Awad%27s+blog%29


''create external table using SQL Developer'' http://sueharper.blogspot.com/2006/08/i-didnt-know-you-could-do-that.html
''a cool code to do external tables on the DB side'' http://mikesmithers.wordpress.com/2011/08/26/oracle-external-tables-or-what-i-did-on-my-holidays/
''Executing operating system commands from PL/SQL'' http://www.oracle.com/technetwork/database/enterprise-edition/calling-shell-commands-from-plsql-1-1-129519.pdf
''Shell Script Output to Oracle Database Via External Table'' http://awads.net/wp/2011/05/17/shell-script-output-to-oracle-database-via-external-table/
''Calling OS Commands from Plsql'' https://forums.oracle.com/forums/thread.jspa?threadID=369320
''Execute operating system commands from PL/SQL'' http://hany4u.blogspot.com/2008/12/execute-operating-system-commands-from.html
you can't use wildcard on external tables http://www.freelists.org/post/oracle-l/10g-External-Table-Location-Parameter,3
http://jiri.wordpress.com/2010/03/29/oracle-external-tables-by-examples-part-4-column_transforms-clause-load-clob-blob-or-any-constant-using-external-tables/
''Performant and scalable data loading with Oracle Database 11g'' http://www.scribd.com/doc/61785526/26/Accessing-remote-data-staging-files-using-Oracle-external-tables
http://decipherinfosys.wordpress.com/2007/04/28/writing-data-to-a-text-file-from-oracle/
http://decipherinfosys.wordpress.com/2007/04/17/using-external-tables-in-oracle-to-load-up-data/












/***
|Name:|ExtendTagButtonPlugin|
|Description:|Adds a New tiddler button in the tag drop down|
|Version:|3.2 ($Rev: 3861 $)|
|Date:|$Date: 2008-03-08 10:53:09 +1000 (Sat, 08 Mar 2008) $|
|Source:|http://mptw.tiddlyspot.com/#ExtendTagButtonPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
***/
//{{{

window.onClickTag_mptw_orig = window.onClickTag;
window.onClickTag = function(e) {
	window.onClickTag_mptw_orig.apply(this,arguments);
	var tag = this.getAttribute("tag");
	var title = this.getAttribute("tiddler");
	// Thanks Saq, you're a genius :)
	var popup = Popup.stack[Popup.stack.length-1].popup;
	createTiddlyElement(createTiddlyElement(popup,"li",null,"listBreak"),"div");
	wikify("<<newTiddler label:'New tiddler' tag:'"+tag+"'>>",createTiddlyElement(popup,"li"));
	return false;
}

//}}}
http://oracle-randolf.blogspot.com/2011/12/extended-displaycursor-with-rowsource.html
''download link'' http://www.sqltools-plusplus.org:7676/media/xplan_extended_display_cursor.sql
maria colgan https://blogs.oracle.com/optimizer/extended-statistics 
http://blogs.oracle.com/optimizer/entry/extended_statistics

http://jonathanlewis.wordpress.com/2012/03/09/index-upgrades/#comments
http://structureddata.org/2007/10/31/oracle-11g-extended-statistics/


HOWTO Using Extended Statistics to Optimize Multi-Column Relationships and Function-Based Statistics 
https://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r1/prod/perform/multistats/multicolstats.htm


Nigel Bayliss 
Why do I have SQL statement plans that change for the worse? https://blogs.oracle.com/optimizer/sql-plans-change-for-worse
Use Extended Statistics For Better SQL Execution Plans https://blogs.oracle.com/optimizer/extended-statistics-better-plans


! howto 
{{{

--create column group
exec DBMS_STATS.GATHER_TABLE_STATS('TC69649','PK_SUBMISSIONENTRY', method_opt=>'for all columns size auto for columns (entry_type, name) size 254');
exec DBMS_STATS.GATHER_TABLE_STATS('TC69649','PK_SUBMISSION', method_opt=>'for all columns size auto for columns (syncdeleted, ignored, submission_state, submission_type, interface_id) size 254');
 


--to delete the column group
exec dbms_stats.drop_extended_stats('TC69649',tabname => 'PK_SUBMISSIONENTRY',extension => '(entry_type, name)');
exec dbms_stats.drop_extended_stats('TC69649',tabname => 'PK_SUBMISSION',extension => '(syncdeleted, ignored, submission_state, submission_type, interface_id)');
 
}}}
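To confirm that the column groups above actually exist (and to get the system-generated virtual-column name), query the data dictionary. A sketch using the standard {{{DBMS_STATS}}} API and the {{{DBA_STAT_EXTENSIONS}}} view, with the same owner/table as the example:

{{{
-- system-generated name of a specific column group
select dbms_stats.show_extended_stats_name(
         ownname   => 'TC69649',
         tabname   => 'PK_SUBMISSIONENTRY',
         extension => '(ENTRY_TYPE,NAME)') as col_group_name
from dual;

-- all extensions defined on the table
select extension_name, extension
from dba_stat_extensions
where owner = 'TC69649'
and table_name = 'PK_SUBMISSIONENTRY';
}}}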
http://tonguc.wordpress.com/2008/03/11/a-little-more-on-external-tables/	
http://tonguc.wordpress.com/2007/08/09/unload-data-with-external-tables-and-data-pump/
http://prsync.com/oracle/owb-gr-ndash-bulk-file-loading---more-faster-easier-12451/
http://www.oracle-developer.net/display.php?id=512
http://tkyte.blogspot.com/2006/08/interesting-data-set.html
http://sueharper.blogspot.com/2006/08/i-didnt-know-you-could-do-that.html <-- SQL Developer demo, oh crap!

http://www.oracle-developer.net/display.php?id=204 <-- GOOD STUFF, this is much easier! 

On hardware and ETL
http://glennfawcett.wordpress.com/2010/06/08/open-storage-s7000-with-exadata-a-good-fit-etlelt-operations/
http://viralpatel.net/blogs/oracle-xmltable-tutorial/ <-- xml random data

http://www.mistersoft.org/freelancing/getafreelancer/2009/10/Javascript-Oracle-SQL-Visual-Basic-XML-Extract-xml-from-Oracle-Db-then-reload-in-another-Oracle-DB-nbsp-519463.html
http://www.cosort.com/products/FACT
http://www.access-programmers.co.uk/forums/showthread.php?t=162752
http://www.codeguru.com/forum/showthread.php?t=466326
http://www.attunity.com/forums/data-access/running-multiple-data-extract-sql-jcl-1233.html


http://www.oracle.com/technology/pub/articles/jain-xmldb.html
http://www.scribd.com/doc/238504/Load-XML-to-Oracle-Database
http://docs.fedoraproject.org/en-US/Fedora/16/html/Release_Notes/sect-Release_Notes-Changes_for_Sysadmin.html
http://fedoraproject.org/wiki/Releases/16/FeatureList
http://fedoraproject.org/wiki/Features/XenPvopsDom0
http://blog.xen.org/index.php/2011/05/13/xen-support-upstreamed-to-qemu/
<<<
EMC calls it FAST
Hitachi calls it Dynamic Tiering
Dell Compellent has "Storage Center" which has a feature called "Dynamic Block Architecture" 
<<<

EMC Workload Profile Assessment for Oracle AWR Report / StatsPack Gathering Procedures Instructions https://community.emc.com/docs/DOC-13949
New Assessment Available: Oracle AWR/Statspack Assessment https://community.emc.com/docs/DOC-14008
White Paper: EMC Tiered Storage for Oracle Database 11g — Data Warehouse Enabled by EMC Symmetrix VMAX with FAST and EMC Ionix ControlCenter StorageScope — A Detailed Review https://community.emc.com/docs/DOC-14191
EMC Tiered Storage for Oracle Database 11g — Data Warehouse Enabled by EMC Symmetrix VMAX with FAST and EMC Ionix ControlCenter StorageScope https://community.emc.com/docs/DOC-11047
Demo Station 3: Maximize Oracle Database Performance https://community.emc.com/docs/DOC-11912
Service Overview: EMC Database Performance Tiering Assessment https://community.emc.com/docs/DOC-14012
Maximize Operational Efficiency for Oracle RAC Environments with EMC Symmetrix FAST VP (Automated Tiering) https://community.emc.com/docs/DOC-11138




http://en.wikipedia.org/wiki/Fibre_Channel_over_Ethernet
CNA
<<<
Computers connect to FCoE with Converged Network Adapters (CNAs), which contain both Fibre Channel Host Bus Adapter (HBA) and Ethernet Network Interface Card (NIC) functionality on the same adapter card. CNAs have one or more physical Ethernet ports. FCoE encapsulation can be done in software with a conventional Ethernet network interface card; however, FCoE CNAs offload (from the CPU) the low level frame processing and SCSI protocol functions traditionally performed by Fibre Channel host bus adapters.
<<<
Kyle also has some good reference on making use of FIO https://github.com/khailey/fio_scripts/blob/master/README.md

https://github.com/khailey/fio_scripts/blob/master/README.md
https://sites.google.com/site/oraclemonitor/i-o-graphics#TOC-Percentile-Latency
explanation of the graphs https://plus.google.com/photos/105986002174480058008/albums/5773661884246310993
''RSS to Groups''
http://www.facebook.com/topic.php?uid=4915599711&topic=4658#topic_top
http://www.youtube.com/watch?v=HgGxgX9KFfc

timeline
https://www.facebook.com/about/timeline

facebook download ALL info https://www.facebook.com/help/?page=116481065103985


! graph search 
https://www.facebook.com/find-friends/browser/
https://www.sitepoint.com/facebook-graph-search/
https://www.labnol.org/internet/facebook-graph-search-commands/28542/
http://graph.tips/
https://www.google.com/search?q=oracle+Failed+Logon+Delay&ei=O9uJW9OVIKGQggfhl5KwBw&start=0&sa=N&biw=1389&bih=764
https://www.dba-resources.com/oracle/finding-the-origin-of-failed-login-attempts/
{{{
1. Using database auditing (if already enabled)

Caveat: This is the simplest method to determine the source of failed login attempts providing that auditing is already enabled on your database as the information has (probably) already been captured. However, if auditing is not enabled then doing so will require that the database be restarted, in which case this option is no longer the simplest!

Firstly, check to see whether auditing is enabled and set to "DB" (meaning the audit trail is written to a database table).

show parameter audit_trail

If not set, then you will need to enable auditing, restart the database and then enable auditing of unsuccessful logins as follows:

audit session whenever not successful;

The audit records for unsuccessful logon attempts can then be found as follows:

col ntimestamp# for a30 heading "Timestamp"
col userid for a20 heading "Username"
col userhost for a15 heading "Machine"
col spare1 for a15 heading "OS User"
col comment$text for a80 heading "Details" wrap

select ntimestamp#, userid, userhost, spare1, comment$text from sys.aud$ where returncode=1017 order by 1;

Sample output:

Timestamp Username Machine OS User
------------------------------ -------------------- --------------- ---------------
Details
--------------------------------------------------------------------------------
08-DEC-14 12.39.42.945635 PM APPUSER unix_app_001 orafrms
Authenticated by: DATABASE; Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.218.
64.44)(PORT=42293))

08-DEC-14 12.42.10.170957 PM APPUSER unix_app_001 orafrms
Authenticated by: DATABASE; Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.218.
64.44)(PORT=48541))

Note: the USERHOST column is only populated with the Client Host machine name as of 10G, in earlier versions this was the Numeric instance ID for the Oracle instance from which the user is accessing the database in a RAC environment.
2. Use a trigger to capture additional information

The following trigger code can be used to gather additional information about unsuccessful login attempts and write it to the database alert log. It is recommended to integrate this code into an existing trigger if you already have one for this triggering event.

CREATE OR REPLACE TRIGGER logon_denied_write_alertlog AFTER SERVERERROR ON DATABASE
DECLARE
 l_message varchar2(2000);
BEGIN
 -- ORA-1017: invalid username/password; logon denied
 IF (IS_SERVERERROR(1017)) THEN
 select 'Failed login attempt to the "'|| sys_context('USERENV','AUTHENTICATED_IDENTITY') ||'" schema'
 || ' using ' || sys_context('USERENV','AUTHENTICATION_TYPE') || ' authentication'
 || ' at ' || to_char(logon_time,'dd-MON-yy hh24:mi:ss')
 || ' from ' || osuser || '@' || machine || ' [' || nvl(sys_context('USERENV','IP_ADDRESS'),'Unknown IP') || ']'
 || ' via the "' || program || '" program.'
 into l_message
 from sys.v_$session
 where sid = to_number(substr(dbms_session.unique_session_id,1,4),'xxxx')
 and serial# = to_number(substr(dbms_session.unique_session_id,5,4),'xxxx');

 -- write to alert log
 sys.dbms_system.ksdwrt(2, l_message);
 END IF;
END;
/

Some sample output from the alert.log looks like:

Tue Jan 06 09:45:36 2015
Failed login attempt to the "appuser" schema using DATABASE authentication at 06-JAN-15 09:45:35 from orafrms@unix_app_001 [10.218.64.44] via the "frmweb@unix_app_001 (TNS V1-V3)" program.

3. Setting an event to generate trace files on unsuccessful login.

You can instruct the database to write a trace file whenever an unsuccessful login attempt is made by setting the following event (the example below will only set the event until the next time the database is restarted. Update your pfile or spfile accordingly if you want this to be permanent).

alter system set events '1017 trace name errorstack level 10';

Trace files will be generated in user_dump_dest whenever someone attempts to log in using an invalid username / password. As the trace is requested at level 10, it will include a section labeled PROCESS STATE that contains trace information such as:

O/S info: user:orafrms, term: pts/15, ospid: 29959, machine:unix_app_001
program: frmweb@unix_app_001 (TNS V1-V3)
application name: frmweb@unix_app_001 (TNS V1-V3), hash value=0
last wait for 'SQL*Net message from client' blocking sess=0x0 seq=2 wait_time=5570 seconds since wait started=0

In this case it was an 'frmweb' client running as OS user 'orafrms' that started the client session. The section "Call Stack Trace" may aid support in further diagnosing the issue.

Note: If the OS user or program is 'oracle' the connection may originate from a Database Link.
4. Using SQL*Net tracing to gather information

A sqlnet trace can provide you with even more details about the connection attempt, but use this only if none of the above are successful in determining the origin of the failed login, as it will be hard to find what you are looking for once sqlnet tracing is enabled (and it can potentially consume large amounts of disk space).

To enable SQL*Net tracing create or edit the server side sqlnet.ora file and add the following parameters:

# server side sqlnet trace parameters
trace_level_server = 16
trace_file_server=server
trace_directory_server = <any directory on a volume with enough freespace>


}}}
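To go with option 2 above, once the trigger is writing to the alert log, a small shell helper can summarize where the failed logins come from. A sketch; the alert log path is environment specific, and the parsing assumes the exact message format produced by the trigger above:

{{{
#!/bin/sh
# summarize failed-login sources (os_user, machine, IP) from alert log
# lines written by the logon_denied_write_alertlog trigger
failed_login_sources() {
  grep 'Failed login attempt' "$1" \
    | sed -n 's/.*from \([^@]*\)@\([^ ]*\) \[\([^]]*\)\].*/\1 \2 \3/p' \
    | sort | uniq -c | sort -rn
}

# usage (adjust the path for your diag destination):
# failed_login_sources /u01/app/oracle/diag/rdbms/orcl/orcl/trace/alert_orcl.log
}}}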
Pull disk test case http://www.evernote.com/shard/s48/sh/bff156b7-9898-4010-9346-f16ba106354b/32ec97f26d92eb07b8b5974f4a4093ff
{{{
-- identify a specific disk in the storage cells.. will make the amber light blink
/opt/MegaRAID/MegaCli/MegaCli64 -pdlocate -physdrv '[35:5]' -a0
/opt/MegaRAID/MegaCli/MegaCli64 -pdlocate -stop -physdrv '[35:5]' -a0
}}}
ITI - causing harddisk failure http://www.evernote.com/shard/s48/sh/4d0b5df6-8e55-4a61-9b16-b0995ec5511c/3349abb1342f5cfb996a6a35a246b561

Why gridisks are not automatically created after replacing a diskdrive in a cellserver? http://jeyaseelan-m.blogspot.com/2011/07/why-gridisks-are-not-automatically.html
Replacing a failed Exadata Storage Server System Drive http://jarneil.wordpress.com/2011/10/21/replacing-a-failed-exadata-storage-server-system-drive/
http://www.evernote.com/shard/s48/sh/ee3819be-be40-4c6b-a2b3-7b415d63e1ba/b6755dc12c817f384b807449821037b2

http://iggyfernandez.wordpress.com/2011/07/04/take-that-exadata-fast-index-creation-using-noparallel/
http://www.rittmanmead.com/files/oow2010_bryson_fault_tolerance.pdf
http://www.rittmanmead.com/2010/02/data-warehouse-fault-tolerance-an-introduction/
http://www.rittmanmead.com/2010/02/data-warehouse-fault-tolerance-part-1-resuming/
http://www.rittmanmead.com/2010/02/data-warehouse-fault-tolerance-part-2-restarting/
http://www.rittmanmead.com/2010/02/data-warehouse-fault-tolerance-part-3-restoring/
fedora dvd iso http://mirrors.rit.edu/fedora/fedora/linux/releases/





http://blogs.oracle.com/kirkMcgowan/2007/06/who_are_the_rac_pack.html
http://blogs.oracle.com/kirkMcgowan/2007/06/whos_afraid_of_the_big_bad_rac.html
http://blogs.oracle.com/kirkMcgowan/2007/08/fencing_yet_again.html

<<<
Fencing - yet again
By kirk.mcgowan on August 9, 2007 12:12 AM
Sheesh. It is amazing to me how this topic continues to spin. Clearly people just like to speculate, and I suppose a little controversy can serve to energize, but this topic seems to have taken on a life of its own. The real question in my mind is why do you care? Fencing is a core functionality of the cluster infrastructure. You can't control it, or influence it in any way. It has to be there in some form, or bad things will happen (corruptions being one of them). And if the particular fencing implementation in Oracle clusterware was fundamentally flawed, it would have been exposed long ago over the course of the 5+ years of existence, and the thousands of deployments.


So any discussion of fencing and the Oracle implementation is purely theoretical, and largely academic, since it has more than proven itself. OK, I enjoy lively academic or theoretical technical debate, particularly over a beer or 2, but not at the expense of ignoring reality. So let's pull apart the discussions I've seen, and address them point by point. Note that this discussion is focused solely on Oracle Clusterware used in conjunction with RAC.


Oracle Clusterware uses the Stonith algorithm. This is only partially true. Oracle's fencing mechanism is based on the Stonith algorithm. However, there is no general design rule of how that algorithm should be implemented. Strict use of the algorithm is complicated, or perhaps even prevented, by the fact that there is no API on many platforms for doing a remote power-off reset of the system. So the current implementation is in fact a suicide, as opposed to an execution. As system/OS vendors make such APIs available, Oracle will be able to make use of them.


Suicide is not reliable because you are expecting an already unhealthy system to respond to some other directive. Sure. There are corner cases where this is a possibility, but these have proven to be very rare, they have been fixed when they appeared, and the real underlying concern, which is exposure to data corruption, is non-existent (see next point). This issue is actually related to the FUD we often see about some cluster managers running in kernel mode vs user space, where Oracle Clusterware runs. Well ... if the OS kernel is misbehaving, then it doesn't really matter where the clusterware runs - bad things are going to happen. (We've seen this occur in several situations.) If someone makes a programming error in the clusterware code and it is running in kernel mode, then the OS kernel is exposed. (This is theoretical since Oracle clusterware does not run in kernel mode, but it's not like this hasn't happened before in other environments where user/application code is allowed to run in kernel space.) And lastly, if running in userspace, and other user space programs misbehave, then the obvious concern is the sensitivity the cluster has to that misbehaving application - like not being able to get CPU time to communicate in a timely manner. We have certainly seen this kind of scenario many times, but in general it is easily mitigated by renicing or increasing the priority of the key background communication processes. Bottom line is that suicide has proven sufficiently reliable. Any claim to the contrary is pure speculation.


Because suicide is unreliable, you are exposed to data corruptions. Not true. Either in theory, or in practice. It's no secret RAC does unbuffered IO (bypasses the OS cache), and any IO done in a RAC environment is in complete coordination with the other nodes in the cluster. Cache fusion assures this. And this holds true in a split brain condition. If RAC can't coordinate the write with the other nodes as a result of interconnect failure, then that write is put on hold until communication is restored, or until the eviction protocol is invoked.


This is obviously over simplified, but frankly, so are the criticisms in this area. The challenge to any non-believer is the following: Find me a repeatable test case where interconnect failure, and the resulting fencing algorithm implemented in Oracle clusterware, results in database corruption. If you are successful, I will:


1. Fall off my chair in disbelief


2. Write "They were right, I was wrong" 1000 times in my blog, and apologize profusely to anyone who may have taken offence to the claims made in this posting.


3. File a bug, and get the damn thing fixed.


Now that I think about it, it would probably be prudent to reverse 2. and 3. Note however, that in the off chance you are successful, it is a bug, and will be fixed as such. As opposed to a fundamental architectural flaw.


So let's put this one to bed. Next topic.
<<<
http://msdn.microsoft.com/en-us/library/aa365247(v=vs.85).aspx
http://www.computing.net/answers/windows-xp/file-name-or-extension-is-too-long/183526.html
-- a workaround for this on Windows 7 is to map the folder you want to copy as a network drive, then create a short directory such as c:\x and do the copy!
http://blog.unmaskparasites.com/2009/09/01/beware-filezilla-doesnt-protect-your-ftp-passwords/
http://www.makeuseof.com/tag/how-to-recover-passwords-from-asterisk-characters/
http://www.groovypost.com/howto/retrieve-recover-filezilla-ftp-passwords/
http://arjudba.blogspot.com/2008/05/how-to-discover-find-dbid.html
http://oraclepoint.com/oralife/2010/10/21/how-to-find-the-date-when-a-database-object-role-was-created/
http://dboptimizer.com/?p=694
http://en.wikipedia.org/wiki/IEEE_1394
Unfortunately, the program in question is pulling back 3 million rows (all the rows from a view) and then doing some processing, so we are generating a lot of network traffic. This is one old program that will take some effort to change.

We got a boost in performance by disabling SQL*Net Inspection on the firewall. ( See below )
We are also tuning the SDU sizes in the sqlnet files to help performance as you suggested below.


From their admin about their firewall:
===============================
A few notes from what we learned overnight.
 
1. Looks like removing the SQL inspection (sometimes called a "fixup") helped our performance.
                a. About a 75% gain. (See note at bottom for more about SQL inspection.)


{{{
SQL*Net Inspection
SQL*Net inspection is enabled by default.
The SQL*Net protocol consists of different packet types that the ASA handles to make the data stream appear consistent to the Oracle applications on either side of the ASA.
The default port assignment for SQL*Net is 1521. This is the value used by Oracle for SQL*Net, but this value does not agree with IANA port assignments for Structured Query Language (SQL). Use the class-map command to apply SQL*Net inspection to a range of port numbers.
________________________________________
Note: Disable SQL*Net inspection when SQL data transfer occurs on the same port as the SQL control TCP port 1521. The security appliance acts as a proxy when SQL*Net inspection is enabled and reduces the client window size from 65000 to about 16000, causing data transfer issues.
________________________________________

}}}
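On the SDU tuning mentioned above, a sketch of the relevant parameters (values are illustrative; the SDU is negotiated, so it should be raised on both the client and the server side):

{{{
# sqlnet.ora (client and server)
DEFAULT_SDU_SIZE=32767

# or per connect descriptor in tnsnames.ora
ORCL =
  (DESCRIPTION =
    (SDU = 32767)
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )
}}}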
http://arup.blogspot.com/2010/06/build-simple-firewall-for-databases.html
! passwordless perf 
{{{
Alternatively, you can collect flamegraph perf data without being root by allowing the oracle user to run perf via sudo.
Run "sudo visudo" and add the lines below:

oracle ALL=(ALL) NOPASSWD: /usr/sbin/perf
oracle ALL=(ALL) NOPASSWD: /usr/bin/zip


If the check above doesn't return a permission error, just Ctrl-C the perf command; user oracle was apparently already allowed to run perf.
}}}


{{{


select /* usercheck */ s.sid sid, s.serial# serial#, lpad(p.spid,7) unix_pid 
from gv$process p, gv$session s
where p.addr=s.paddr
and   s.username is not null
and (s.inst_id, s.sid) in (select inst_id, sid from gv$mystat where rownum < 2);



-- make sure that you have this set as ROOT
-- [root@localhost ~]# echo -1 > /proc/sys/kernel/perf_event_paranoid


$ cat flamegraph.sql


set lines 200
with d as
(
select '&procid' spid,
       '&&prefix._perf_graph.data' newfilename,
       '&&prefix._perf_graph.data-folded' folded_filename,
       '&&prefix._flamegraph.svg' flamegraph_filename,
       '&&prefix.' tarname
  from dual
)
SELECT
       'perf record -g -p ' || spid || chr(10) ||
       'mv perf.data ' || newfilename || chr(10) ||
       'perf script -i ' || newfilename || ' | ./stackcollapse-perf.pl > ' || folded_filename || chr(10) ||
       'cat ' || folded_filename || '| ./flamegraph.pl > ' || flamegraph_filename || chr(10) ||
       'tar -cjvpf ' || tarname || '_perf_data.tar.bz2 ' || tarname || '*' 
       as commands
  from d
;



COMMANDS
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
perf record -g -p 12345
mv perf.data testcase1_perf_graph.data
perf script -i testcase1_perf_graph.data | ./stackcollapse-perf.pl > testcase1_perf_graph.data-folded
cat testcase1_perf_graph.data-folded| ./flamegraph.pl > testcase1_flamegraph.svg
tar -cjvpf testcase1_perf_data.tar.bz2 testcase1*




}}}



! references

http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html



!! windows 
https://randomascii.wordpress.com/2013/03/26/summarizing-xperf-cpu-usage-with-flame-graphs/


!! tanel 
https://blog.tanelpoder.com/posts/visualizing-sql-plan-execution-time-with-flamegraphs/



!! other examples 
https://tableau.github.io/Logshark/docs/logshark_art#flamecharts




http://www.flashconf.com/how-to/how-to-install-flash-player-on-centosredhat-linux/
Master Note For Oracle Flashback Technologies [ID 1138253.1]

Flashback Database Best Practices & Performance
  	Doc ID: 	Note:565535.1

What Do All 10g Flashback Features Rely on and what are their Limitations ? 
  Doc ID:  Note:435998.1 

Restrictions on Flashback Table Feature
  	Doc ID: 	270535.1




Creating a 10gr2 Data Guard Physical Standby database with Real-Time apply [ID 343424.1]

11gR1 Data Guard Portal [ID 798974.1]

Master Note for Data Guard [ID 1101938.1]

How To Open Physical Standby For Read Write Testing and Flashback [ID 805438.1]

Step by Step Guide on How To Reinstate Failed Primary Database into Physical Standby [ID 738642.1]

Using RMAN Effectively In A Dataguard Environment. [ID 848716.1]

Reinstating a Physical Standby Using Backups Instead of Flashback [ID 416310.1]
Oracle11g Data Guard: Database Rolling Upgrade Shell Script [ID 949322.1]

Steps to perform for Rolling forward a standby database using RMAN incremental backup when primary and standby are in ASM filesystem [ID 836986.1]


''Flashback and nologging''
http://docs.oracle.com/cd/B28359_01/backup.111/b28273/rcmsynta023.htm
http://www.pythian.com/news/4884/questions-you-always-wanted-to-ask-about-flashback-database/
http://rnm1978.wordpress.com/2011/06/28/oracle-11g-how-to-force-a-sql_id-to-use-a-plan_hash_value-using-sql-baselines/

By Tanel Poder:

cat /tmp/x | awk '{ printf "%s", $0 ; if (NR % 3 == 0) print "" } END { print "" }'
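A quick demo of what the one-liner does: it glues every 3 input lines into one (this variant uses {{{print ""}}} to emit the newline; a bare {{{print}}} would re-emit the current line):

{{{
printf 'a\nb\nc\nd\ne\nf\ng\n' \
  | awk '{ printf "%s", $0 ; if (NR % 3 == 0) print "" } END { print "" }'
# output:
# abc
# def
# g
}}}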
Getting Started With Forms 9i - Hints and Tips
  	Doc ID: 	Note:237191.1

Troubleshooting Web Deployed Oracle Forms Performance Issues
  	Doc ID: 	Note:363285.1


-- NETWORK PERFORMANCE

Bandwith Per User Session For Oracle Form Base Web Deployment In Oracle9ias
  	Doc ID: 	Note:287237.1

How to Find Out How Much Network Traffic is Created by Web Deployed Forms?
  	Doc ID: 	Note:109597.1

Few Basic Techniques to Improve Performance of Forms.
  	Doc ID: 	Note:221529.1



-- MIGRATE TO 9i/10g

Migrating to Oracle Forms 9i / 10g - Forms Upgrade Center
  	Doc ID: 	Note:234540.1


-- CORRUPTION

Recovering Corrupted Forms 
  Doc ID:  161430.1 
''Good stuff topics I need to catch up on..''

oaktable - linux filesystems https://mail.google.com/mail/u/0/#inbox/14a779368d7427ed
oracle -l Memory operations on Sun/Oracle M class servers vs T class servers https://mail.google.com/mail/u/0/#inbox/14a54d821527fd6d

oaktable - CPU wait https://mail.google.com/mail/u/0/#inbox/14ae89d485847182
	counting memory stall on Sandy Bridge https://software.intel.com/en-us/forums/topic/514733
oaktable - CPU overhead in multiple instance RAC databases https://mail.google.com/mail/u/0/#inbox/14af37881a9831f2
oaktable - Has anyone seem current-ish AMD Opterons lately https://mail.google.com/mail/u/0/#inbox/14afef9391d6dd76

oracle-l - Exadata + OMCS https://mail.google.com/mail/u/0/#inbox/14ae51e818976ce7
oracle-l - how do you manage your project list https://mail.google.com/mail/u/0/#inbox/14a12a1121b829a1

http://jonathanlewis.wordpress.com/2010/07/13/fragmentation-1/
http://jonathanlewis.wordpress.com/2010/07/16/fragmentation-2/
http://jonathanlewis.wordpress.com/2010/07/19/fragmentation-3/
http://jonathanlewis.wordpress.com/2010/07/22/fragmentation-4/
http://joshodgers.com/storage/fusionio-iodrive2-virtual-machine-performance-benchmarking-part-1/
http://longwhiteclouds.com/2012/08/17/io-blazing-datastore-performance-with-fusion-io/
<<showtoc>>

! video courses
!! design and architecture 
https://www.pluralsight.com/courses/google-dataflow-architecting-serverless-big-data-solutions
https://www.pluralsight.com/courses/google-cloud-platform-leveraging-architectural-design-patterns
https://www.pluralsight.com/courses/google-cloud-functions-architecting-event-driven-serverless-solutions
https://www.pluralsight.com/courses/google-dataproc-architecting-big-data-solutions
https://www.pluralsight.com/courses/google-machine-learning-apis-designing-implementing-solutions
https://www.pluralsight.com/courses/google-bigquery-architecting-data-warehousing-solutions
https://www.pluralsight.com/courses/google-cloud-automl-designing-implementing-solutions

https://www.linkedin.com/learning/search?keywords=apache%20beam
https://www.linkedin.com/learning/data-science-on-google-cloud-platform-building-data-pipelines/what-goes-into-a-data-pipeline  <-- good summary 
https://www.linkedin.com/learning/google-cloud-platform-for-enterprise-essential-training/enterprise-ready-gcp

https://www.linkedin.com/learning/architecting-big-data-applications-batch-mode-application-engineering/dw-lay-out-the-architecture <-- good 5 use cases 
https://www.linkedin.com/learning/data-science-on-google-cloud-platform-architecting-solutions/architecting-data-science  <-- good 4 use cases 
https://www.linkedin.com/learning/data-science-on-google-cloud-platform-designing-data-warehouses/why-data-warehouses-are-important
https://www.linkedin.com/learning/architecting-big-data-applications-real-time-application-engineering/sm-analyze-the-problem  <-- good 4 use cases 


!! detailed tech
https://www.pluralsight.com/courses/google-cloud-platform-firestore-leveraging-realtime-database-solutions
https://www.pluralsight.com/courses/google-cloud-sql-creating-administering-instances


!! authors 
https://www.pluralsight.com/authors/janani-ravi
https://www.linkedin.com/learning/instructors/kumaran-ponnambalam  <-- nice architecture courses 


! ### GCP architecture references
https://cloud.google.com/docs/tutorials#architecture

! ### architecture - Smart analytics reference patterns
https://cloud.google.com/solutions/smart-analytics/reference-patterns/overview


! ### GCP solutions by industry
https://cloud.google.com/solutions/migrating-oracle-to-cloud-spanner


! ### migration patterns 

!! multiple source systems to bigquery
from pythian whitepapers
https://resources.pythian.com/hubfs/Framework-For-Migrate-Your-Data-Warehouse-Google-BigQuery-WhitePaper.pdf
https://resources.pythian.com/hubfs/White-Papers/Migrate-Teradata-to-Google-BigQuery.pdf
[img(70%,70%)[ https://i.imgur.com/IVdnYBz.png]]


!! from on-prem hadoop to dataproc and bigquery 
<<<
You use a Hadoop cluster both for serving analytics and for processing and transforming data. The data is currently stored on HDFS in Parquet format. The data processing jobs run for 6 hours each night. Analytics users can access the system 24 hours a day. Phase 1 is to quickly migrate the entire Hadoop environment without a major re-architecture. Phase 2 will include migrating to BigQuery for analytics and to Cloud Dataflow for data processing. You want to make the future migration to BigQuery and Cloud Dataflow easier by following Google-recommended practices and managed services. What should you do?
A. Lift and shift Hadoop/HDFS to Cloud Dataproc.
B. Lift and shift Hadoop/HDFS to Compute Engine.
C. Create a single Cloud Dataproc cluster to support both analytics and data processing, and point it at a Cloud Storage bucket that contains the Parquet files that were previously stored on HDFS.
D. Create separate Cloud Dataproc clusters to support analytics and data processing, and point both at the same Cloud Storage bucket that contains the Parquet files that were previously stored on HDFS.
Feedback
A is not correct because it is not recommended to attach persistent HDFS to Cloud Dataproc clusters in GCP. (see references link)

B is not correct because they want to leverage managed services, which means Cloud Dataproc.

C is not correct because it is recommended that Cloud Dataproc clusters be job specific.

D is correct because it leverages a managed service (Cloud Dataproc), the data is stored on GCS in Parquet format, which can easily be loaded into BigQuery in the future, and the Cloud Dataproc clusters are job specific.
<<<
https://cloud.google.com/solutions/migration/hadoop/hadoop-gcp-migration-jobs


! ### performance comparison 
Cloud Data Warehouse Benchmark: Redshift, Snowflake, Azure, Presto and BigQuery https://fivetran.com/blog/warehouse-benchmark


! ### data and file storage 
Understanding Data and File Storage https://cloud.google.com/appengine/docs/standard/java/storage
https://cloud.google.com/products/storage


!! ### cloud Spanner (globally scalable transactions RDBMS)
https://cloud.google.com/spanner/

!! ### cloud SQL (managed RDBMS)

!! ### cloud datastore (NOSQL single region - storing structured application data that are mutable - think User entity, Blog post, etc)

!!! cloud firestore datastore (NEWER managed schemaless NoSQL)

Google Cloud Storage vs Google Cloud DataStore https://groups.google.com/forum/#!topic/gcd-discuss/SajfBn79LVw
<<<
Google Cloud Storage is for storing immutable blob objects (think images, and static files).
Google Cloud Datastore is for storing structured application data that are mutable (think User entity, Blog post, etc).
<<<

<<<
Another difference that is worth mentioning is that Google Cloud Storage supports Multi-Regional buckets that synchronize data across regions automatically, 
while Google Cloud Datastore is stored within a single region. So if you want to store your data across multiple regions, for example, Cloud Storage is your way to go.
<<<

Firestore in Datastore mode documentation https://cloud.google.com/datastore/docs

https://stackoverflow.com/questions/46549766/whats-the-difference-between-cloud-firestore-and-the-firebase-realtime-database
<<<
It is an improved version

Firebase database was enough for basic applications. But it was not powerful enough to handle complex requirements. That is why Cloud Firestore was introduced. Here are some major changes.

The basic file structure is improved.
Offline support for the web client.
Supports more advanced querying.
Write and transaction operations are atomic.
Reliability and performance improvements
Scaling will be automatic.
Will be more secure.
Pricing

In Cloud Firestore, rates have been lowered even though it charges primarily on operations performed in your database, along with bandwidth and storage. You can set a daily spending limit too. Here are the complete details about billing.

Future plans of Google

When they discovered the flaws with the Realtime Database, they created another product rather than improving the old one. Even though there are no reliable details revealing their current stance on the Realtime Database, it is time to start thinking that it is likely to be abandoned.
<<<

!!! FireBase Real Time DB (OLDER - JSON based NO SQL DB)
https://firebase.google.com/docs/database/rtdb-vs-firestore
What's the Difference Between Cloud Firestore & Firebase Realtime Database https://www.youtube.com/watch?v=KeIx-mArUck


!! ### cloud storage (S3-like Multi-Regional buckets, persistent HDFS replacement - storing immutable blob objects - think images and static files)
https://medium.com/@jana.avula/google-cloud-dataproc-hdfs-vs-google-cloud-storage-for-dataproc-data-processing-jobs-5800de2ecfa
cloud storage documentation https://cloud.google.com/storage/docs

!! ### bigtable (hbase) 
https://www.guru99.com/hbase-shell-general-commands.html

https://www.google.com/search?biw=1675&bih=985&sxsrf=ACYBGNRSdd9JLZO0oq181zQHdenKslJd_Q%3A1571860356250&ei=hK-wXdfrDpGzggf89YFQ&q=bigtable++cassandra&oq=bigtable++cassandra&gs_l=psy-ab.3..0i7i30l2j0j0i7i30j0i7i5i30j0i5i30j0i8i30l4.31693.31693..31906...0.3..0.79.79.1......0....1..gws-wiz.......0i71.3eXHaAqlSRY&ved=0ahUKEwjXva6RlLPlAhWRmeAKHfx6AAo4ChDh1QMICw&uact=5

https://db-engines.com/en/system/Cassandra%3BGoogle+Cloud+Bigtable%3BHBase
https://www.google.com/search?q=cassandra+vs+bigtable&oq=cassandra+vs+bi&aqs=chrome.0.0j69i57j0l3j69i60.3802j0j4&sourceid=chrome&ie=UTF-8
https://stackshare.io/stackups/cassandra-vs-google-cloud-bigtable
https://stackoverflow.com/questions/41579281/db-benchmarks-cassandra-vs-bigtable-vs-hadoops

Moving from Cassandra to Auto-Scaling Bigtable at Spotify (Cloud Next '19) https://www.youtube.com/watch?v=Hfd3VZOYXNU



!! ### google pubsub (real-time kafka)

!!! pubsub architecture 
https://cloud.google.com/pubsub/architecture

!!! windowing basics 
https://beam.apache.org/documentation/programming-guide/#windowing
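As a sketch of what the windowing section covers: a fixed (tumbling) window simply maps each event timestamp to the window `[start, start + size)` that contains it. A plain-Python illustration of that arithmetic (window size and timestamps are made-up values, and this is not the Beam API, just the underlying idea):

```python
# Sketch of Beam-style fixed (tumbling) window assignment.
# Each event timestamp maps to exactly one window [start, start + size).
WINDOW_SIZE = 60  # seconds; made-up value for illustration

def fixed_window(event_ts, size=WINDOW_SIZE):
    """Return the (start, end) of the fixed window owning event_ts."""
    start = event_ts - (event_ts % size)
    return (start, start + size)

def window_counts(events, size=WINDOW_SIZE):
    """Group event timestamps into fixed windows and count per window,
    roughly what WindowInto(FixedWindows(size)) + a Count transform do."""
    counts = {}
    for ts in events:
        w = fixed_window(ts, size)
        counts[w] = counts.get(w, 0) + 1
    return counts

events = [5, 12, 61, 119, 120]  # seconds since epoch, made up
print(window_counts(events))    # three windows: [0,60), [60,120), [120,180)
```

Sliding and session windows differ only in the assignment rule (overlapping windows, or gap-based merging); the per-window aggregation step is the same.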







! ### cloud dataproc (ephemeral hadoop processing clusters - spark, hive jobs)
https://www.g2.com/compare/amazon-emr-vs-google-cloud-dataproc

!! cloud storage connector 
https://cloud.google.com/dataproc/docs/concepts/connectors/cloud-storage




! ### big query (hive SQL engine)

!! data transfer service 
https://cloud.google.com/bigquery/docs/dts

!! query plan 
https://cloud.google.com/bigquery/query-plan-explanation
https://medium.com/google-cloud/visualising-bigquery-41bf6833b98

!! clustered tables 
https://cloud.google.com/bigquery/docs/creating-clustered-tables
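A rough sketch of why clustering helps: rows sorted on the clustering column land in storage blocks with known min/max ranges, so a filter can skip blocks that cannot contain the value. Toy illustration in plain Python (block size, column name, and data are all made up; this is not the BigQuery implementation, just the pruning idea):

```python
# Toy illustration of block pruning on a clustered table: rows are
# sorted on the clustering key, each block keeps min/max metadata,
# and blocks whose range excludes the predicate are never scanned.
BLOCK_ROWS = 3  # tiny block size for the demo

def build_blocks(rows, key):
    """Sort rows on the clustering key and record per-block min/max."""
    rows = sorted(rows, key=lambda r: r[key])
    blocks = []
    for i in range(0, len(rows), BLOCK_ROWS):
        chunk = rows[i:i + BLOCK_ROWS]
        blocks.append({"min": chunk[0][key], "max": chunk[-1][key],
                       "rows": chunk})
    return blocks

def query(blocks, key, value):
    """Scan only blocks whose [min, max] range can contain value."""
    scanned, hits = 0, []
    for b in blocks:
        if b["min"] <= value <= b["max"]:
            scanned += 1
            hits += [r for r in b["rows"] if r[key] == value]
    return scanned, hits

rows = [{"cust": c} for c in [9, 1, 7, 3, 8, 2, 5, 4, 6]]
blocks = build_blocks(rows, "cust")
scanned, hits = query(blocks, "cust", 2)
print(scanned, hits)  # only 1 of the 3 blocks is scanned
```

Without the sort, every block's min/max would span most of the key range and nothing could be skipped, which is why clustering (unlike partitioning alone) pays off for selective filters.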

!! bq command line 
https://cloud.google.com/bigquery/docs/bq-command-line-tool

!! imported data may not match byte-for-byte
https://cloud.google.com/bigquery/docs/loading-data#loading_encoded_data

!! external data source (also known as a federated data source)
https://cloud.google.com/bigquery/external-data-sources
<<<
BigQuery offers support for querying data directly from:

Bigtable
Cloud Storage
Google Drive
<<<

https://supermetrics.com/?utm_source=adwords&utm_medium=cpc&utm_campaign=supermetrics-brand-us&utm_adgroup=brand-exact&utm_category=search-brand&utm_term=supermetrics&location&gclid=EAIaIQobChMIjPib2Jeu5QIVEYTICh0IdAyIEAAYASAAEgJbCPD_BwE





! ### data fusion (based on CDAP, works like Talend) gui no programming (creates spark pipeline runs on dataproc cluster) 
https://cloud.google.com/data-fusion/pricing
https://cloud.google.com/data-fusion/
https://cloud.google.com/data-fusion/docs/tutorials/reusable-pipeline



! ### data prep (wrangling by Trifacta) (creates beam pipeline runs on dataflow)
https://cloud.google.com/dataprep/
https://cloud.google.com/dataprep/docs/quickstarts/quickstart-dataprep

https://www.stitchdata.com/vs/google-cloud-dataprep/google-cloud-data-fusion/
https://stackoverflow.com/questions/58175386/can-google-data-fusion-make-the-same-data-cleaning-than-dataprep
<<<
Datafusion and Dataprep can perform the same things. However, their execution is different.

Datafusion creates a Spark pipeline and runs it on a Dataproc cluster
Dataprep creates a Beam pipeline and runs it on Dataflow
IMO, Datafusion is more designed for data ingestion from one source to another, with few transformations. Dataprep is more designed for data preparation (as its name implies): data cleaning, new column creation, column splitting. Dataprep also provides insight into the data to help you with your recipes.

In addition, Beam is a part of TensorFlow Extended, and your data engineering pipeline will be more consistent if you use a tool compliant with Beam.

That's why I would recommend Dataprep instead of Datafusion.
<<<



! ### dataflow (apache beam batch and streaming) w/ programming
https://cloud.google.com/dataflow/docs/guides/stopping-a-pipeline

!! dataflow commands reference  
https://cloud.google.com/dataflow/docs/reference/sql/operators



! ### cloud composer (airflow) 
https://cloud.google.com/composer/docs/how-to/using/writing-dags









! ### machine learning 

drfib.me/ML - a handful of machine learning courses 
https://docs.google.com/presentation/d/1gVf_6SL0JkI9fPQ-2pZYr4QYIiNtIXXhFeyEwU4tTIE/present?slide=id.g4c173eec31_0_0

!! bigquery ML 
https://cloud.google.com/bigquery-ml/pricing
<<<
3 categories 
* Storage
* Queries (analysis)
* BigQuery ML CREATE MODEL queries
<<<

https://towardsdatascience.com/machine-learning-with-sql-ae46b1fe78a9

!! google AutoML
https://www.statworx.com/at/blog/a-performance-benchmark-of-different-automl-frameworks/
https://medium.com/@brianray_7981/google-clouds-automl-first-look-cb7d29e06377


! ### data studio (Tableau)
https://www.holistics.io/blog/google-data-studio-pricing-and-in-depth-reviews/






! whitepapers 
https://resources.pythian.com/hubfs/Framework-For-Migrate-Your-Data-Warehouse-Google-BigQuery-WhitePaper.pdf
https://resources.pythian.com/hubfs/White-Papers/Migrate-Teradata-to-Google-BigQuery.pdf


! end to end example 
https://medium.com/@imrenagi/how-i-should-have-orchestrated-my-etl-pipeline-better-with-cloud-dataflow-template-f140b958f544
https://medium.com/@samueljboficial/bigdata-project-using-google-ecosystem-with-storage-function-composer-dataflow-and-bigquery-6ed8b5d42f9f
https://towardsdatascience.com/no-code-data-pipelines-a-first-impression-of-cloud-data-fusion-2b6f117a3ce8



! exam 
https://towardsdatascience.com/passing-the-google-cloud-professional-data-engineer-certification-87da9908b333
https://linuxacademy.com/cp/modules/view/id/208
https://cloud.google.com/certification/practice-exam/data-engineer



























<<showtoc>>

also see [[GCP exercices, GCP community]]

! dev tools 

! setup and iam

! storage

! compute 

! networking

! big data pipelines

! ci/cd tools

! ai and ml

! sample data

! gcp essential urls



<<showtoc>> 

! main exercises portals 
!! codelabs 
https://codelabs.developers.google.com/
https://codelabs.developers.google.com/codelabs/real-time-csv-cdf-bq/index.html?index=..%2F..index#0

!! qwiklabs 
qwiklabs.com 
https://googlepluralsight.qwiklabs.com
https://googlepluralsight.qwiklabs.com/focuses/8196573?parent=lti_session

!! tutorials 
https://cloud.google.com/community/tutorials/ssh-port-forwarding-set-up-load-testing-on-compute-engine
https://cloud.google.com/community/tutorials/bigquery-from-dbeaver-odbc
https://cloud.google.com/community/tutorials/write
https://cloud.google.com/docs/tutorials
https://github.com/GoogleCloudPlatform/community/tree/master/tutorials



! others 
https://www.qwiklabs.com/focuses/6376?locale=en&parent=catalog
https://www.qwiklabs.com/focuses/3719?parent=catalog
https://www.qwiklabs.com/focuses/925?parent=catalog
https://www.qwiklabs.com/focuses/3392?parent=catalog
https://www.qwiklabs.com/focuses/3460?parent=catalog
https://www.qwiklabs.com/focuses/1159?parent=catalog
https://cloud.google.com/vpc-service-controls
https://cloud.google.com/vpc-service-controls/docs/quickstart
<<showtoc>>


! course workshop 
{{{

GCP streaming analytics
==============
Day 1
Intro
History: Why MapReduce to Flume, Describing a pipeline declaratively - DAG
Basic Windows, Timestamps, Triggers
Exploring the SDK Part I
>>SDKs & Runners Basics
>>Building a pipeline building blocks
>>Utility functions
Dataflow Runner
>>Overview
>>Autoscaling
Sources and Sinks
Example Pipelines Part I

Day 2
Schemas & SQL
>>Schemas
>>Beam / Dataflow SQL
Example Pipelines Part II
Execution Model
>>Bundles & DoFn Lifecycle
>>Autoscaling part II & Dynamic Work Rebalancing
>>Advanced Watermarks
SDK Part II - State and Timers API
Pipeline Productionization Part I
>>Monitoring
>>Performance
>>Troubleshooting & Debugging
Pipeline Productionization Part II
>>Templates
Pipeline Productionization Part III
>>Testing & CI / CD
}}}






! example use case - reference architecture traveloka
Stream analytics on GCP: How Traveloka’s multi-cloud, fully-managed data stack keeps the focus on revolutionizing human mobility
https://learning.oreilly.com/videos/strata-data-conference/9781491976326/9781491976326-video316623


[img(70%,70%)[https://i.imgur.com/eb6pcqJ.png]]
[img(70%,70%)[https://i.imgur.com/VbMoU8N.png]]
[img(70%,70%)[https://i.imgur.com/81oyBZM.png]]
[img(70%,70%)[https://i.imgur.com/A8JN4dq.png]]
[img(70%,70%)[https://i.imgur.com/FWgxEjB.png]]
[img(70%,70%)[https://i.imgur.com/BSxIXst.png]]
[img(70%,70%)[https://i.imgur.com/hCh49UL.png]]
[img(70%,70%)[https://i.imgur.com/DfIMJ5j.png]]









6 THINGS DATABASE ADMINISTRATORS SHOULD KNOW ABOUT GDPR
http://info.enterprisedb.com/rs/069-ALB-339/images/GDPR%20for%20DBA_EDB%20Tech%20Guide.pdf?aliId=93035477
{{{
For each Oracle RAC database homes and the GI home that are being patched, run the following commands as the home owner to extract the OPatch utility.

unzip <OPATCH-ZIP> -d <ORACLE_HOME>
<ORACLE_HOME>/OPatch/opatch version

-----------------

As the Grid home owner execute:

%<ORACLE_HOME>/OPatch/ocm/bin/emocmrsp

-----------------

%<ORACLE_HOME>/OPatch/opatch lsinventory -detail -oh <ORACLE_HOME>

-----------------

As the Oracle RAC database home owner execute:

%<ORACLE_HOME>/bin/emctl stop dbconsole

-----------------

The Opatch utility has automated the patch application for the Oracle Grid Infrastructure (GI) home and the Oracle RAC database homes. It operates by querying existing configurations and automating the steps required for patching each Oracle RAC database home of same version and the GI home.

The utility must be executed by an operating system (OS) user with root privileges (usually the user root), and it must be executed on each node in the cluster if the GI home or Oracle RAC database home is in Non-shared storage. The utility should not be run in parallel on the cluster nodes.

Depending on command line options specified, one invocation of Opatch can patch the GI home, one or more Oracle RAC database homes, or both GI and Oracle RAC database homes of the same Oracle release version. You can also roll back the patch with the same selectivity.

Add the directory containing the opatch to the $PATH environment variable. 
For example:
export PATH=$PATH:<GI_HOME path>/OPatch

To patch GI home and all Oracle RAC database homes of the same version:
#opatch auto <UNZIPPED_PATCH_LOCATION>

To patch only the GI home:
#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME>

To patch one or more Oracle RAC database homes:
#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <path to RAC database1 home>,<path to RAC database2 home>

To roll back the patch from the GI home and each Oracle RAC database home:
#opatch auto <UNZIPPED_PATCH_LOCATION> -rollback

To roll back the patch from the GI home:
#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <path to GI home> -rollback

To roll back the patch from the Oracle RAC database home:
#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <path to RAC database home> -rollback

-----------------

2.6 Patch Post-Installation Instructions for Databases Created or Upgraded after Installation of PSU 11.2.0.2.3 in the Oracle Home

These instructions are for a database that is created or upgraded after the installation of PSU 11.2.0.2.3.

You must execute the steps in Section 2.5.2, "Loading Modified SQL Files into the Database" for any new database only if it was created by any of the following methods:

Using DBCA (Database Configuration Assistant) to select a sample database (General, Data Warehouse, Transaction Processing)

Using a script that was created by DBCA that creates a database from a sample database

There are no actions required for databases that have been upgraded.
}}}
Loving GIMP for this LOMOfied photo :)

Check the original photo here http://www.facebook.com/photo.php?pid=4737724&l=6a5d70369b&id=552113028

To LOMOfy go here http://blog.grzadka.info/2010/07/02/lomografia-w-gimp/

BTW, the author (Samuel Albrecht) of the GIMP plugin emailed me with the batch mode (elsamuko-lomo-batch.scm).. go here for details http://sites.google.com/site/elsamuko/gimp/lomo

now you can run it on all your digital photos as

gimp -i -b '(elsamuko-lomo-batch "*.JPG" 1.5 10 10 0.8 5 1 3 128 2 FALSE FALSE TRUE FALSE 0 0 115)' -b '(gimp-quit 0)'

the 10th input value is the "color effect", see below:

0 - neutral
1 - old red
2 - xpro green
3 - blue
4 - intense red
5 - movie
6 - vintage-look
7 - LAB
8 - light blue
9 - redscale
10 - retro bw
11 - paynes
12 - sepia

Enjoy!
What are the GPL and LGPL and how do they differ? https://www.youtube.com/watch?v=JlIrSMzF8T4
I found a bunch of references. See below. From what I’ve read so far, GPU database products/technologies rely heavily on being columnar oriented, and GPUs are used mainly as accelerators. MapD, for example, is a columnar database that uses GPUs as a primary cache; it’s pretty much like the flash cache but SIMD-aware. 

In Oracle, we have Oracle Database In-Memory, although it uses the CPU SIMD. Basically, any program that wants to use GPU capabilities has to use the functions/APIs of the hardware, so if in the future Oracle introduces some kind of GPU accelerator, then Oracle would have to make use of, or be programmed with, the NVIDIA APIs. So it’s really not straightforward; see the GPU and R integration, where you need to install a package and use the functions under that package to take advantage of the GPU hardware, although I thought R would take advantage of the GPU right away because of its innate vector operations. 


	
Oracle Database In-Memory uses CPU SIMD vector processing. SIMD instructions are available on both CPU and GPU. Oracle TimesTen on the other hand is completely different from Oracle DB In-Memory and doesn't use SIMD. 
	https://blogs.oracle.com/In-Memory/entry/getting_started_with_oracle_database2	
	http://blog.tanelpoder.com/2014/10/05/oracle-in-memory-column-store-internals-part-1-which-simd-extensions-are-getting-used/
	https://community.oracle.com/thread/3687203?start=0&tstart=0
	http://www.nextplatform.com/2015/10/26/sgi-targets-oracle-in-memory-on-big-iron/
	http://stackoverflow.com/questions/27333815/cpu-simd-vs-gpu-simd
	https://www.quora.com/Whats-the-difference-between-a-CPU-and-a-GPU
	http://superuser.com/questions/308771/why-are-we-still-using-cpus-instead-of-gpus
	http://stackoverflow.com/questions/7690230/in-depth-analysis-of-the-difference-between-the-cpu-and-gpu


the book Computer-Architecture-A-Quantitative-Aproach-5thEd talks about instruction level, data level, and thread level parallelism. data level parallelism is the one related to SIMD/GPU/Vector operations
	https://www.dropbox.com/sh/shu0r3rvfodtdnz/AAAogcKfP_cE83UdTfl2avwsa?dl=0

GPU topics 
https://www.quora.com/Are-there-any-available-material-or-good-tutorials-for-GPGPU-GPU-computing-using-CUDA-and-applied-to-database-query-acceleration
https://www.quora.com/How-does-the-performance-of-GPU-databases-like-MapD-and-Sqream-and-GPUdb-and-BlazingDB-compare-to-Spark-SQL-and-columnar-databases
Exploring High Performance SQL Databases with Graphics Processing Units http://hgpu.org/?p=11557
Do GPU optimized databases threaten the hegemony of Oracle, Splunk and Hadoop? http://diginomica.com/2016/04/11/do-gpu-optimized-databases-threaten-the-hegemony-of-oracle-splunk-and-hadoop/ , ycombinator discussion https://news.ycombinator.com/item?id=11476141
http://blog.accelereyes.com/blog/2009/01/22/data-parallelism-vs-task-parallelism/


Papers
 Parallelism in Database Operations https://www.cs.helsinki.fi/webfm_send/1002/kalle_final.pdf
	 From this example alone we can see that these operations
	require the data elements to be placed strategically. As the
	SQL search query has criteria on certain data elements, and if
	the database system applies some of the strategies described
	here, the data layout must be columnar for the system to be
	the most efficient. If the layout is not columnar, the system
	needs to transform the dataset into a columnar shape if SIMD
	operations are to be used.
Fast Computation of Database Operations Using Graphics Processors http://gamma.cs.unc.edu/DB/
Rethinking SIMD Vectorization for In-Memory Databases http://www.cs.columbia.edu/~orestis/sigmod15.pdf
Scaling database performance on GPUs https://ir.nctu.edu.tw/bitstream/11536/16582/1/000307276000006.pdf
Accelerating SQL Database Operations on a GPU with CUDA https://www.cs.virginia.edu/~skadron/Papers/bakkum_sqlite_gpgpu10.pdf
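The "Parallelism in Database Operations" excerpt above argues that SIMD-friendly scans need a columnar layout, otherwise the dataset has to be transformed into columnar shape first. A minimal Python sketch of that row-to-column transform, on toy data of my own invention:

```python
# Row store vs column store: a predicate scan over a row store touches
# whole records, while a columnar layout puts each attribute in one
# dense array - the shape SIMD/vector hardware wants.
rows = [  # row-oriented: one record per tuple (made-up data)
    {"id": 1, "amount": 10},
    {"id": 2, "amount": 20},
    {"id": 3, "amount": 30},
]

# Transform to columnar shape: one contiguous array per attribute.
columns = {k: [r[k] for r in rows] for k in rows[0]}

# A filter like "amount > 15" now scans a single dense array; on real
# hardware this inner loop is what gets vectorized.
matches = [i for i, v in enumerate(columns["amount"]) if v > 15]
print(columns["amount"], matches)
```

This is exactly the transformation cost the paper mentions: if the base layout is not columnar, the engine pays for this transpose before any SIMD scan can happen.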


GPU database products/technologies
	PGStorm/Pgopencl https://wiki.postgresql.org/wiki/PGStrom , https://wiki.postgresql.org/images/6/65/Pgopencl.pdf
	sqream http://sqream.com/solutions/products/sqream-db/ , http://sqream.com/where-are-the-gpu-based-sql-databases/
	Alenka https://github.com/antonmks/Alenka , https://www.reddit.com/r/programming/comments/oxq6a/a_database_engine_that_runs_on_a_gpu_outperforms/
	MapD https://moodle.technion.ac.il/pluginfile.php/568218/mod_resource/content/1/mapd_overview.pdf , https://devblogs.nvidia.com/parallelforall/mapd-massive-throughput-database-queries-llvm-gpus/
	CUDADB http://www.contrib.andrew.cmu.edu/~tchitten/418/writeup.pdf
        gpudb,kinetica https://www.linkedin.com/pulse/we-just-turned-your-oracle-12c-environment-overpriced-wes-showfety?trk=prof-post , https://www.datanami.com/2016/07/25/gpu-powered-analytics-improves-mail-delivery-usps/

GPU and R
	http://blog.revolutionanalytics.com/2015/01/parallel-programming-with-gpus-and-r.html
	https://www.r-bloggers.com/r-gpu-programming-for-all-with-gpur/
	http://www.r-tutor.com/gpu-computing
	https://devblogs.nvidia.com/parallelforall/accelerate-r-applications-cuda/
	http://datascience.stackexchange.com/questions/9945/r-machine-learning-on-gpu
	https://www.kaggle.com/forums/f/15/kaggle-forum/t/19178/gpu-computing-in-r
	https://www.researchgate.net/post/How_do_a_choose_the_best_GPU_for_parellel_processing_in_R_and_PhotoScan
http://www.afterthedeadline.com


https://www.shoeboxed.com/
http://xpenser.com/docs/features/


http://www.alananna.co.uk/blog/2016/fenix-3-back-up-and-restore/
https://forums.garmin.com/showthread.php?80345-FENIX-2-SETTINGS-DATA-PAGES-is-there-a-way-to-backup-or-a-file-to-easy-edit
-- WINDOWS
C:\Windows\system32>fsutil fsinfo ntfsinfo c:
NTFS Volume Serial Number :       0xe278db3378db0567
Version :                         3.1
Number Sectors :                  0x000000001d039fff
Total Clusters :                  0x0000000003a073ff
Free Clusters  :                  0x000000000059f08d
Total Reserved :                  0x0000000000000870
Bytes Per Sector  :               512
Bytes Per Cluster :               4096
Bytes Per FileRecord Segment    : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length :           0x00000000141c0000
Mft Start Lcn  :                  0x00000000000c0000
Mft2 Start Lcn :                  0x0000000000000002
Mft Zone Start :                  0x000000000263a720
Mft Zone End   :                  0x0000000002643c40
RM Identifier:        BA5F6457-522B-11E0-B977-D967961022A3

C:\Windows\system32>
C:\Windows\system32>

http://arjudba.blogspot.com/2008/07/how-to-determine-os-block-size-for.html
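The fsutil cluster counts above are hex; capacity and free space fall straight out of Total/Free Clusters times Bytes Per Cluster. A quick sketch using the exact values printed above:

```python
# Deriving volume sizes from the fsutil ntfsinfo output above:
# capacity = Total Clusters * Bytes Per Cluster, likewise for free space.
total_clusters = 0x0000000003a073ff   # "Total Clusters" from fsutil
free_clusters = 0x000000000059f08d    # "Free Clusters" from fsutil
bytes_per_cluster = 4096              # "Bytes Per Cluster" from fsutil

total_gb = total_clusters * bytes_per_cluster / 2**30
free_gb = free_clusters * bytes_per_cluster / 2**30
print(round(total_gb, 1), round(free_gb, 1))  # ~232.1 GiB total, ~22.5 GiB free
```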


! Linux 
{{{
-- blocksize of the filesystem
[root@desktopserver ~]# blockdev --getbsz /dev/sda1 
1024

-- blocksize of the device
[root@desktopserver ~]# blockdev --getbsz /dev/sda
4096
[root@desktopserver ~]# blockdev --getbsz /dev/sdb
4096


-- physical sector size
[root@desktopserver ~]# cat /sys/block/sda/queue/physical_block_size 
512
-- logical sector size 
[root@desktopserver ~]# cat /sys/block/sda/queue/logical_block_size 
512


dumpe2fs /dev/sda1 | grep -i 'Block size'
dumpe2fs 1.41.12 (17-May-2010)
Block size:               1024

[root@desktopserver ~]# dumpe2fs /dev/sda | grep -i 'Block size'
dumpe2fs 1.41.12 (17-May-2010)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sda

}}}
more blockdev
{{{
[root@desktopserver ~]# blockdev --getbsz /dev/sda1           <-- Print blocksize in bytes
1024
[root@desktopserver ~]# blockdev --getsize /dev/sda1          <-- Print device size in sectors (BLKGETSIZE). Deprecated in favor of the --getsz option.
614400
[root@desktopserver ~]# blockdev --getsize64 /dev/sda1      <-- Print device size in bytes (BLKGETSIZE64)
314572800
[root@desktopserver ~]# blockdev --getsz /dev/sda1             <-- Get size in 512-byte sectors (BLKGETSIZE64 / 512).
614400
[root@desktopserver ~]# blockdev --getss /dev/sda1             <-- Print sectorsize in bytes - usually 512.
512
[root@desktopserver ~]# 
[root@desktopserver ~]# 
[root@desktopserver ~]# blockdev --getbsz /dev/sda
4096
[root@desktopserver ~]# blockdev --getsize /dev/sda
1953525168
[root@desktopserver ~]# blockdev --getsize64 /dev/sda
1000204886016
[root@desktopserver ~]# blockdev --getsz /dev/sda
1953525168
[root@desktopserver ~]# blockdev --getss /dev/sda
512
}}}
http://www.linuxnix.com/2011/07/find-block-size-linux.html
http://prefetch.net/blog/index.php/2009/09/12/why-partition-x-does-now-end-on-cylinder-boundary-warnings-dont-matter/



{{{

$ du -sm * | sort -rnk1
911     sysaux01.dbf
701     system01.dbf
301     undotbs01.dbf
52      temp02.dbf
51      redo03.log
51      redo02.log
51      redo01.log
10      control01.ctl
9       users01.dbf
oracle@karldevfedora:/u01/app/oracle/oradata/cdb1:cdb1
$ du -sm
2130    .


-- 2187.875
select 
( select sum(bytes)/1024/1024 data_size from dba_data_files ) +
( select nvl(sum(bytes),0)/1024/1024 temp_size from dba_temp_files ) +
( select sum(bytes)/1024/1024 redo_size from sys.v_$log ) +
( select sum(BLOCK_SIZE*FILE_SIZE_BLKS)/1024/1024 controlfile_size from v$controlfile) "Size in GB"
from
dual
/

-- 2187.875
SELECT a.data_size + b.temp_size + c.redo_size + d.controlfile_size 
"total_size in GB" 
FROM (SELECT SUM (bytes) / 1024 / 1024 data_size FROM dba_data_files) a, 
(SELECT NVL (SUM (bytes), 0) / 1024 / 1024 temp_size 
FROM dba_temp_files) b, 
(SELECT SUM (bytes) / 1024 / 1024 redo_size FROM sys.v_$log) c, 
(SELECT SUM (BLOCK_SIZE * FILE_SIZE_BLKS) / 1024 / 1024 
controlfile_size 
FROM v$controlfile) d
/

-- Database Size        Used space           Free space
-- -------------------- -------------------- --------------------
-- 2169 MB              1640 MB              529 MB
col "Database Size" format a20
col "Free space" format a20
col "Used space" format a20
select round(sum(used.bytes) / 1024 / 1024  ) || ' MB' "Database Size"
, round(sum(used.bytes) / 1024 / 1024  ) - 
round(free.p / 1024 / 1024 ) || ' MB' "Used space"
, round(free.p / 1024 / 1024 ) || ' MB' "Free space"
from (select bytes
from v$datafile
union all
select bytes
from v$tempfile
union all
select bytes
from v$log) used
, (select sum(bytes) as p
from dba_free_space) free
group by free.p
/


}}}
{{{
Go to the bdump directory to run these shell commands

Date and errors in alert.log

     cat alert_+ASM.log | \
     awk 'BEGIN{buf=""}
          /[0-9]:[0-9][0-9]:[0-9]/{buf=$0}
          /ORA-/{print buf,$0}' > ORA-errors-$(date +%Y%m%d%H%M).txt

Use the following script to easily find the trace files on the alert log. Just run it on the bdump directory

cat alert_prod1.log | \
     awk 'BEGIN{buf=""}
          /[0-9]:[0-9][0-9]:[0-9]/{buf=$0}
          /.trc/{print buf,$0}'

Use the following script to easily find the ORA- errors and trace files on the alert log. Just run it on the bdump directory

cat alert_prod1.log | \
     awk 'BEGIN{buf=""}
          /[0-9]:[0-9][0-9]:[0-9]/{buf=$0}
          /.trc|ORA-/{print buf,$0}' 

Date of startups in the alert.log

     cat RDA_LOG_alert_log.txt | \
     awk 'BEGIN{buf=""}
          /[0-9]:[0-9][0-9]:[0-9]/{buf=$0}
          /Starting ORACLE/{print buf,$0}' > StartupTime-$(date +%Y%m%d%H%M).txt

Date of startups in the RDA alert.log

     cat RDA_LOG_alert_log.txt | \
     awk 'BEGIN{buf=""}
          /[0-9]:[0-9][0-9]:[0-9]/{buf=$0}
          /Starting ORACLE/{print buf,$0}' > StartupTime-$(date +%Y%m%d%H%M).txt


########################################################################

-- create a file called getalert
-- run it as ./getalert <node name> 

export node=$1
cat alert_"$node".log | \
     awk 'BEGIN{buf=""}
          /[0-9]:[0-9][0-9]:[0-9]/{buf=$0}
          /.trc|ORA-/{print buf,$0}' > alert_"$node"_ORA-TRC_$(date +%Y%m%d%H%M).log

     cat alert_"$node".log | \
     awk 'BEGIN{buf=""}
          /[0-9]:[0-9][0-9]:[0-9]/{buf=$0}
          /Starting ORACLE/{print buf,$0}' > alert_"$node"_startup_$(date +%Y%m%d%H%M).log
}}}
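The awk one-liners above all share one trick: buffer the most recent timestamp-looking line, then print it next to any matching ORA-/trace line. The same logic as a Python sketch, for reference (the sample alert-log lines are made up):

```python
# Same idea as the awk scripts above: remember the most recent
# timestamp line, and emit it alongside any ORA- error line.
import re

TS = re.compile(r'[0-9]:[0-9][0-9]:[0-9]')  # same pattern the awk scripts use

def ora_errors(lines):
    buf, out = "", []
    for line in lines:
        if TS.search(line):      # timestamp line: remember it
            buf = line
        if "ORA-" in line:       # error line: pair it with last timestamp
            out.append((buf, line))
    return out

sample = [
    "Mon Jan 02 10:15:30 2023",
    "Errors in file /u01/trace/ora_123.trc:",
    "ORA-00600: internal error code",
]
print(ora_errors(sample))
```

Swapping the `"ORA-"` test for `".trc"` (or both) reproduces the other two variants above.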
{{{

historical awr_storagesize_summary-tableau-cdb1-karldevfedora.csv



from MOS 1551288.1
------ DISK and CELL Failure Diskgroup Space Reserve Requirements  ------
This procedure determines how much space you need to survive a DISK or CELL failure. It also shows the usable space
available when reserving space for disk or cell failure.
Please see MOS note 1551288.1 for more information.
.  .  .
Description of Derived Values:
One Cell Required Mirror Free MB : Required Mirror Free MB to permit successful rebalance after losing largest CELL regardless of redundancy type
Disk Required Mirror Free MB     : Space needed to rebalance after loss of single or double disk failure (for normal or high redundancy)
Disk Usable File MB              : Usable space available after reserving space for disk failure and accounting for mirroring
Cell Usable File MB              : Usable space available after reserving space for SINGLE cell failure and accounting for mirroring
.  .  .
ASM Version: 11.2.0.4
.  .  .
----------------------------------------------------------------------------------------------------------------------------------------------------
|          |         |     |          |            |            |            |Cell Req'd  |Disk Req'd  |            |            |    |    |       |
|          |DG       |Num  |Disk Size |DG Total    |DG Used     |DG Free     |Mirror Free |Mirror Free |Disk Usable |Cell Usable |    |    |PCT    |
|DG Name   |Type     |Disks|MB        |MB          |MB          |MB          |MB          |MB          |File MB     |File MB     |DFC |CFC |Util   |
----------------------------------------------------------------------------------------------------------------------------------------------------
|DATA_SOCPP|NORMAL   |   84| 2,260,992| 189,923,328| 156,156,856|  33,766,472|  29,845,094|   2,370,683|  15,697,895|   1,960,689|PASS|PASS|  82.2%|
|DBFS_DG   |NORMAL   |   70|    34,608|   2,422,560|      14,692|   2,407,868|     380,688|     184,664|   1,111,602|   1,013,590|PASS|PASS|    .6%|
|RECO_SOCPP|NORMAL   |   84|   565,360|  47,490,240|  37,766,648|   9,723,592|   7,462,752|     633,084|   4,545,254|   1,130,420|PASS|PASS|  79.5%|
----------------------------------------------------------------------------------------------------------------------------------------------------
.  .  .
Script completed.
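The derived columns in the report above can be cross-checked by hand: for a normal-redundancy disk group, usable file space is free space minus the reserved mirror-free space, divided by two to account for mirroring. A quick sketch against the DBFS_DG row (numbers copied from the report; the halving assumes NORMAL redundancy, as shown for all three disk groups):

```shell
#!/bin/sh
# Cross-check the DBFS_DG row: usable = (free - required mirror free) / 2.
# The /2 assumes NORMAL (two-way mirrored) redundancy, per the report.
dg_free_mb=2407868                # DG Free MB
disk_req_mirror_free_mb=184664    # Disk Req'd Mirror Free MB
cell_req_mirror_free_mb=380688    # Cell Req'd Mirror Free MB

disk_usable_file_mb=$(( (dg_free_mb - disk_req_mirror_free_mb) / 2 ))
cell_usable_file_mb=$(( (dg_free_mb - cell_req_mirror_free_mb) / 2 ))

echo "Disk Usable File MB: $disk_usable_file_mb"    # 1111602, matches the report
echo "Cell Usable File MB: $cell_usable_file_mb"    # 1013590, matches the report
```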


-- 1e.77. Database Size on Disk (GV$DATABASE)
WITH
sizes AS (
SELECT /*+  MATERIALIZE NO_MERGE  */ /* 1e.77 */
       'Data' file_type,
       SUM(bytes) bytes
  FROM v$datafile
 UNION ALL
SELECT 'Temp' file_type,
       SUM(bytes) bytes
  FROM v$tempfile
 UNION ALL
SELECT 'Log' file_type,
       SUM(bytes) * MAX(members) bytes
  FROM v$log
 UNION ALL
SELECT 'Control' file_type,
       SUM(block_size * file_size_blks) bytes
  FROM v$controlfile
),
dbsize AS (
SELECT /*+  MATERIALIZE NO_MERGE  */ /* 1e.77 */
       'Total' file_type,
       SUM(bytes) bytes
  FROM sizes
)
SELECT d.dbid,
       d.name db_name,
       s.file_type,
       s.bytes,
       ROUND(s.bytes/POWER(10,9),3) gb,
       CASE
       WHEN s.bytes > POWER(10,15) THEN ROUND(s.bytes/POWER(10,15),3)||' P'
       WHEN s.bytes > POWER(10,12) THEN ROUND(s.bytes/POWER(10,12),3)||' T'
       WHEN s.bytes > POWER(10,9) THEN ROUND(s.bytes/POWER(10,9),3)||' G'
       WHEN s.bytes > POWER(10,6) THEN ROUND(s.bytes/POWER(10,6),3)||' M'
       WHEN s.bytes > POWER(10,3) THEN ROUND(s.bytes/POWER(10,3),3)||' K'
       WHEN s.bytes > 0 THEN s.bytes||' B' END display
  FROM v$database d,
       sizes s
 UNION ALL
SELECT d.dbid,
       d.name db_name,
       s.file_type,
       s.bytes,
       ROUND(s.bytes/POWER(10,9),3) gb,
       CASE
       WHEN s.bytes > POWER(10,15) THEN ROUND(s.bytes/POWER(10,15),3)||' P'
       WHEN s.bytes > POWER(10,12) THEN ROUND(s.bytes/POWER(10,12),3)||' T'
       WHEN s.bytes > POWER(10,9) THEN ROUND(s.bytes/POWER(10,9),3)||' G'
       WHEN s.bytes > POWER(10,6) THEN ROUND(s.bytes/POWER(10,6),3)||' M'
       WHEN s.bytes > POWER(10,3) THEN ROUND(s.bytes/POWER(10,3),3)||' K'
       WHEN s.bytes > 0 THEN s.bytes||' B' END display
  FROM v$database d,
       dbsize s;
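The CASE expression above buckets a byte count into decimal units (powers of 10, not 1024) rounded to three decimals. The same bucketing in awk, handy for sanity-checking a value outside the database (a sketch restating the query's logic, not Oracle code):

```shell
#!/bin/sh
# Same bucketing as the query's CASE: largest power-of-10 unit exceeded,
# rounded to three decimals (decimal K/M/G/T/P, not binary units).
display_bytes() {
  echo "$1" | awk '{
    if      ($1 > 1e15) printf "%.3f P\n", $1 / 1e15
    else if ($1 > 1e12) printf "%.3f T\n", $1 / 1e12
    else if ($1 > 1e9)  printf "%.3f G\n", $1 / 1e9
    else if ($1 > 1e6)  printf "%.3f M\n", $1 / 1e6
    else if ($1 > 1e3)  printf "%.3f K\n", $1 / 1e3
    else if ($1 > 0)    printf "%d B\n",   $1
  }'
}

display_bytes 2147483648    # prints "2.147 G"
display_bytes 512           # prints "512 B"
```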



-- 2b.201. Data Files Usage (DBA_DATA_FILES)
WITH
alloc AS (
SELECT /*+  MATERIALIZE NO_MERGE  */ /* 2b.201 */
       tablespace_name,
       COUNT(*) datafiles,
       ROUND(SUM(bytes)/POWER(10,9)) gb
  FROM dba_data_files
 GROUP BY
       tablespace_name
),
free AS (
SELECT /*+  MATERIALIZE NO_MERGE  */ /* 2b.201 */
       tablespace_name,
       ROUND(SUM(bytes)/POWER(10,9)) gb
  FROM dba_free_space
 GROUP BY
       tablespace_name
),
tablespaces AS (
SELECT /*+  MATERIALIZE NO_MERGE  */ /* 2b.201 */
       a.tablespace_name,
       a.datafiles,
       a.gb alloc_gb,
       (a.gb - f.gb) used_gb,
       f.gb free_gb
  FROM alloc a, free f
 WHERE a.tablespace_name = f.tablespace_name
 ORDER BY
       a.tablespace_name
),
total AS (
SELECT /*+  MATERIALIZE NO_MERGE  */ /* 2b.201 */
       SUM(alloc_gb) alloc_gb,
       SUM(used_gb) used_gb,
       SUM(free_gb) free_gb
  FROM tablespaces
)
SELECT v.tablespace_name,
       v.datafiles,
       v.alloc_gb,
       v.used_gb,
       CASE WHEN v.alloc_gb > 0 THEN
       LPAD(TRIM(TO_CHAR(ROUND(100 * v.used_gb / v.alloc_gb, 1), '990.0')), 8)
       END pct_used,
       v.free_gb,
       CASE WHEN v.alloc_gb > 0 THEN
       LPAD(TRIM(TO_CHAR(ROUND(100 * v.free_gb / v.alloc_gb, 1), '990.0')), 8)
       END pct_free
  FROM (
SELECT tablespace_name,
       datafiles,
       alloc_gb,
       used_gb,
       free_gb
  FROM tablespaces
 UNION ALL
SELECT 'Total' tablespace_name,
       TO_NUMBER(NULL) datafiles,
       alloc_gb,
       used_gb,
       free_gb
  FROM total
) v;



-- 2b.207. Largest 200 Objects (DBA_SEGMENTS)
WITH schema_object AS (
SELECT /*+  MATERIALIZE NO_MERGE  */ /* 2b.207 */
       segment_type,
       owner,
       segment_name,
       tablespace_name,
       COUNT(*) segments,
       SUM(extents) extents,
       SUM(blocks) blocks,
       SUM(bytes) bytes
  FROM dba_segments
 WHERE 'Y' = 'Y'
 GROUP BY
       segment_type,
       owner,
       segment_name,
       tablespace_name
), totals AS (
SELECT /*+  MATERIALIZE NO_MERGE  */ /* 2b.207 */
       SUM(segments) segments,
       SUM(extents) extents,
       SUM(blocks) blocks,
       SUM(bytes) bytes
  FROM schema_object
), top_200_pre AS (
SELECT /*+  MATERIALIZE NO_MERGE  */ /* 2b.207 */
       ROWNUM rank, v1.*
       FROM (
SELECT so.segment_type,
       so.owner,
       so.segment_name,
       so.tablespace_name,
       so.segments,
       so.extents,
       so.blocks,
       so.bytes,
       ROUND((so.segments / t.segments) * 100, 3) segments_perc,
       ROUND((so.extents / t.extents) * 100, 3) extents_perc,
       ROUND((so.blocks / t.blocks) * 100, 3) blocks_perc,
       ROUND((so.bytes / t.bytes) * 100, 3) bytes_perc
  FROM schema_object so,
       totals t
 ORDER BY
       bytes_perc DESC NULLS LAST
) v1
 WHERE ROWNUM < 201
), top_200 AS (
SELECT p.*,
       (SELECT object_id
          FROM dba_objects o
         WHERE o.object_type = p.segment_type
           AND o.owner = p.owner
           AND o.object_name = p.segment_name
           AND o.object_type NOT LIKE '%PARTITION%') object_id,
       (SELECT data_object_id
          FROM dba_objects o
         WHERE o.object_type = p.segment_type
           AND o.owner = p.owner
           AND o.object_name = p.segment_name
           AND o.object_type NOT LIKE '%PARTITION%') data_object_id,
       (SELECT SUM(p2.bytes_perc) FROM top_200_pre p2 WHERE p2.rank <= p.rank) bytes_perc_cum
  FROM top_200_pre p
), top_200_totals AS (
SELECT /*+  MATERIALIZE NO_MERGE  */ /* 2b.207 */
       SUM(segments) segments,
       SUM(extents) extents,
       SUM(blocks) blocks,
       SUM(bytes) bytes,
       SUM(segments_perc) segments_perc,
       SUM(extents_perc) extents_perc,
       SUM(blocks_perc) blocks_perc,
       SUM(bytes_perc) bytes_perc
  FROM top_200
), top_100_totals AS (
SELECT /*+  MATERIALIZE NO_MERGE  */ /* 2b.207 */
       SUM(segments) segments,
       SUM(extents) extents,
       SUM(blocks) blocks,
       SUM(bytes) bytes,
       SUM(segments_perc) segments_perc,
       SUM(extents_perc) extents_perc,
       SUM(blocks_perc) blocks_perc,
       SUM(bytes_perc) bytes_perc
  FROM top_200
 WHERE rank < 101
), top_20_totals AS (
SELECT /*+  MATERIALIZE NO_MERGE  */ /* 2b.207 */
       SUM(segments) segments,
       SUM(extents) extents,
       SUM(blocks) blocks,
       SUM(bytes) bytes,
       SUM(segments_perc) segments_perc,
       SUM(extents_perc) extents_perc,
       SUM(blocks_perc) blocks_perc,
       SUM(bytes_perc) bytes_perc
  FROM top_200
 WHERE rank < 21
)
SELECT v.rank,
       v.segment_type,
       v.owner,
       v.segment_name,
       v.object_id,
       v.data_object_id,
       v.tablespace_name,
       CASE
       WHEN v.segment_type LIKE 'INDEX%' THEN
         (SELECT i.table_name
            FROM dba_indexes i
           WHERE i.owner = v.owner AND i.index_name = v.segment_name)
       WHEN v.segment_type LIKE 'LOB%' THEN
         (SELECT l.table_name
            FROM dba_lobs l
           WHERE l.owner = v.owner AND l.segment_name = v.segment_name)
       END table_name,
       v.segments,
       v.extents,
       v.blocks,
       v.bytes,
       ROUND(v.bytes / POWER(10,9), 3) gb,
       LPAD(TO_CHAR(v.segments_perc, '990.000'), 7) segments_perc,
       LPAD(TO_CHAR(v.extents_perc, '990.000'), 7) extents_perc,
       LPAD(TO_CHAR(v.blocks_perc, '990.000'), 7) blocks_perc,
       LPAD(TO_CHAR(v.bytes_perc, '990.000'), 7) bytes_perc,
       LPAD(TO_CHAR(v.bytes_perc_cum, '990.000'), 7) perc_cum
  FROM (
SELECT d.rank,
       d.segment_type,
       d.owner,
       d.segment_name,
       d.object_id,
       d.data_object_id,
       d.tablespace_name,
       d.segments,
       d.extents,
       d.blocks,
       d.bytes,
       d.segments_perc,
       d.extents_perc,
       d.blocks_perc,
       d.bytes_perc,
       d.bytes_perc_cum
  FROM top_200 d
 UNION ALL
SELECT TO_NUMBER(NULL) rank,
       NULL segment_type,
       NULL owner,
       NULL segment_name,
       TO_NUMBER(NULL),
       TO_NUMBER(NULL),
       'TOP  20' tablespace_name,
       st.segments,
       st.extents,
       st.blocks,
       st.bytes,
       st.segments_perc,
       st.extents_perc,
       st.blocks_perc,
       st.bytes_perc,
       TO_NUMBER(NULL) bytes_perc_cum
  FROM top_20_totals st
 UNION ALL
SELECT TO_NUMBER(NULL) rank,
       NULL segment_type,
       NULL owner,
       NULL segment_name,
       TO_NUMBER(NULL),
       TO_NUMBER(NULL),
       'TOP 100' tablespace_name,
       st.segments,
       st.extents,
       st.blocks,
       st.bytes,
       st.segments_perc,
       st.extents_perc,
       st.blocks_perc,
       st.bytes_perc,
       TO_NUMBER(NULL) bytes_perc_cum
  FROM top_100_totals st
 UNION ALL
SELECT TO_NUMBER(NULL) rank,
       NULL segment_type,
       NULL owner,
       NULL segment_name,
       TO_NUMBER(NULL),
       TO_NUMBER(NULL),
       'TOP 200' tablespace_name,
       st.segments,
       st.extents,
       st.blocks,
       st.bytes,
       st.segments_perc,
       st.extents_perc,
       st.blocks_perc,
       st.bytes_perc,
       TO_NUMBER(NULL) bytes_perc_cum
  FROM top_200_totals st
 UNION ALL
SELECT TO_NUMBER(NULL) rank,
       NULL segment_type,
       NULL owner,
       NULL segment_name,
       TO_NUMBER(NULL),
       TO_NUMBER(NULL),
       'TOTAL' tablespace_name,
       t.segments,
       t.extents,
       t.blocks,
       t.bytes,
       100 segments_perc,
       100 extents_perc,
       100 blocks_perc,
       100 bytes_perc,
       TO_NUMBER(NULL) bytes_perc_cum
  FROM totals t) v;

}}}

''list ORACLE_HOMEs from the central inventory''
{{{
cat `cat /etc/oraInst.loc | grep -i inventory | sed 's/..............\(.*\)/\1/'`/ContentsXML/inventory.xml | grep HOME
}}}

$ cat `cat /etc/oraInst.loc | grep -i inventory | sed 's/..............\(.*\)/\1/'`/ContentsXML/inventory.xml | grep HOME
<HOME_LIST>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="1"/>
<HOME NAME="oms11g1" LOC="/u01/app/oracle/product/middleware/oms11g" TYPE="O" IDX="2"/>
<HOME NAME="agent11g1" LOC="/u01/app/oracle/product/middleware/agent11g" TYPE="O" IDX="3"/>
<HOME NAME="common11g1" LOC="/u01/app/oracle/product/middleware/oracle_common" TYPE="O" IDX="4"/>
<HOME NAME="webtier11g1" LOC="/u01/app/oracle/product/middleware/Oracle_WT" TYPE="O" IDX="5"/>
</HOME_LIST>

$ vi gethome.sh
oracle@emgc11g:/home/oracle:emrep
$ chmod 755 gethome.sh
oracle@emgc11g:/home/oracle:emrep
$ ''sh gethome.sh''
<HOME_LIST>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="1"/>
<HOME NAME="oms11g1" LOC="/u01/app/oracle/product/middleware/oms11g" TYPE="O" IDX="2"/>
<HOME NAME="agent11g1" LOC="/u01/app/oracle/product/middleware/agent11g" TYPE="O" IDX="3"/>
<HOME NAME="common11g1" LOC="/u01/app/oracle/product/middleware/oracle_common" TYPE="O" IDX="4"/>
<HOME NAME="webtier11g1" LOC="/u01/app/oracle/product/middleware/Oracle_WT" TYPE="O" IDX="5"/>
</HOME_LIST>
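Note that the `sed 's/..............\(.*\)/\1/'` trick works only because `inventory_loc=` happens to be exactly 14 characters long; splitting on `=` is sturdier. A sketch, demonstrated on a sample file (on a real server, point it at /etc/oraInst.loc):

```shell
#!/bin/sh
# Parse inventory_loc by splitting on '=' instead of stripping a fixed
# 14 characters. Demonstrated on a sample file; on a real server point
# it at /etc/oraInst.loc.
cat > /tmp/oraInst.loc.sample <<'EOF'
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
EOF
INVLOC=$(awk -F= '/^inventory_loc=/ {print $2}' /tmp/oraInst.loc.sample)
echo "$INVLOC"    # /u01/app/oraInventory
# then: grep HOME "$INVLOC"/ContentsXML/inventory.xml
```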


''find home''
{{{
#!/bin/bash
# A little helper script for finding ORACLE_HOMEs for all running instances in a Linux server
# by Tanel Poder (http://blog.tanelpoder.com)

printf "%6s %-20s %-80s\n" "PID" "NAME" "ORACLE_HOME"
pgrep -lf _pmon_ |
  while read pid pname  y ; do
    printf "%6s %-20s %-80s\n" $pid $pname `ls -l /proc/$pid/exe | awk -F'>' '{ print $2 }' | sed 's/bin\/oracle$//' | sort | uniq` 
  done
}}}

''find home from sqlplus''
{{{
var OH varchar2(200);
EXEC dbms_system.get_env('ORACLE_HOME', :OH) ;
PRINT OH
}}}
{{{
# logon storms by hour
fgrep "30-OCT-2010" listener.log | fgrep "establish" | \
awk '{ print $1 " " $2 }' | awk -F: '{ print $1 }' | \
sort | uniq -c

# logon storms by minute
fgrep "30-OCT-2010 22:" listener.log | fgrep "establish" | \
awk '{ print $1 " " $2 }' | awk -F: '{ print $1 ":" $2 }' | \
sort | uniq -c
}}}
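Running the hourly pipeline over a few synthetic listener.log lines shows the shape of the output (timestamps and connect data are made up; a real log line has many more fields):

```shell
#!/bin/sh
# Hourly connection counts from fake listener.log lines.
printf '%s\n' \
  '30-OCT-2010 22:05:12 * (CONNECT_DATA=(SID=orcl)) * establish * orcl * 0' \
  '30-OCT-2010 22:47:03 * (CONNECT_DATA=(SID=orcl)) * establish * orcl * 0' \
  '30-OCT-2010 23:01:44 * (CONNECT_DATA=(SID=orcl)) * establish * orcl * 0' |
fgrep "establish" |
awk '{ print $1 " " $2 }' | awk -F: '{ print $1 }' |
sort | uniq -c
# two connections fall in the 22:00 hour, one in the 23:00 hour
```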
* CPU-Z
* System Information for Windows - Gabriel Topala
* WinDirStat


{{{
-- SHOW FREE
SET LINESIZE 300
SET PAGESIZE 9999
SET VERIFY   OFF
COLUMN status      FORMAT a9                 HEADING 'Status'
COLUMN name        FORMAT a25                HEADING 'Tablespace Name'
COLUMN type        FORMAT a12                HEADING 'TS Type'
COLUMN extent_mgt  FORMAT a10                HEADING 'Ext. Mgt.'
COLUMN segment_mgt FORMAT a9                 HEADING 'Seg. Mgt.'
COLUMN pct_free    FORMAT 999.99             HEADING "% Free" 
COLUMN gbytes      FORMAT 99,999,999         HEADING "Total GBytes" 
COLUMN used        FORMAT 99,999,999         HEADING "Used Gbytes" 
COLUMN free        FORMAT 99,999,999         HEADING "Free Gbytes" 
BREAK ON REPORT
COMPUTE SUM OF gbytes ON REPORT 
COMPUTE SUM OF free ON REPORT 
COMPUTE SUM OF used ON REPORT 

SELECT d.status status, d.bigfile, d.tablespace_name name, d.contents type, d.extent_management extent_mgt, d.segment_space_management segment_mgt, df.tssize gbytes, (df.tssize - fs.free) used, fs.free free 
    FROM
	  dba_tablespaces d,
	  (SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024) tssize FROM dba_data_files GROUP BY tablespace_name) df,
	  (SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024) free FROM dba_free_space GROUP BY tablespace_name) fs
    WHERE
	d.tablespace_name = df.tablespace_name(+)
    AND d.tablespace_name = fs.tablespace_name(+)
    AND NOT (d.extent_management like 'LOCAL' AND d.contents like 'TEMPORARY')
UNION ALL
SELECT d.status status, d.bigfile, d.tablespace_name name, d.contents type, d.extent_management extent_mgt, d.segment_space_management segment_mgt, df.tssize gbytes, (df.tssize - fs.free) used, fs.free free 
    FROM
	  dba_tablespaces d,
	  (select tablespace_name, sum(bytes)/1024/1024/1024 tssize from dba_temp_files group by tablespace_name) df,
	  (select tablespace_name, sum(bytes_cached)/1024/1024/1024 free from v$temp_extent_pool group by tablespace_name) fs
    WHERE
	d.tablespace_name = df.tablespace_name(+)
    AND d.tablespace_name = fs.tablespace_name(+)
    AND d.extent_management like 'LOCAL' AND d.contents like 'TEMPORARY'
ORDER BY 9;
CLEAR COLUMNS BREAKS COMPUTES


-- SHOW FREE SPACE IN DATAFILES
SET LINESIZE 145
SET PAGESIZE 9999
SET VERIFY   OFF
COLUMN tablespace  FORMAT a18             HEADING 'Tablespace Name'
COLUMN filename    FORMAT a50             HEADING 'Filename'
COLUMN filesize    FORMAT 99,999,999,999  HEADING 'File Size'
COLUMN used        FORMAT 99,999,999,999  HEADING 'Used (in MB)'
COLUMN pct_used    FORMAT 999             HEADING 'Pct. Used'
BREAK ON report
COMPUTE SUM OF filesize  ON report
COMPUTE SUM OF used      ON report
COMPUTE AVG OF pct_used  ON report

SELECT /*+ ordered */
    d.tablespace_name                     tablespace
  , d.file_name                           filename
  , d.file_id                             file_id
  , d.bytes/1024/1024                     filesize
  , NVL((d.bytes - s.bytes)/1024/1024, d.bytes/1024/1024)     used
  , TRUNC(((NVL((d.bytes - s.bytes) , d.bytes)) / d.bytes) * 100)  pct_used
FROM
    sys.dba_data_files d
  , v$datafile v
  , ( select file_id, SUM(bytes) bytes
      from sys.dba_free_space
      GROUP BY file_id) s
WHERE
      (s.file_id (+)= d.file_id)
  AND (d.file_name = v.name)
UNION
SELECT
    d.tablespace_name                       tablespace 
  , d.file_name                             filename
  , d.file_id                               file_id
  , d.bytes/1024/1024                       filesize
  , NVL(t.bytes_cached/1024/1024, 0)                  used
  , TRUNC((t.bytes_cached / d.bytes) * 100) pct_used
FROM
    sys.dba_temp_files d
  , v$temp_extent_pool t
  , v$tempfile v
WHERE 
      (t.file_id (+)= d.file_id)
  AND (d.file_id = v.file#)
ORDER BY 1;


-- SHOW AUTOEXTEND TABLESPACES (9i,10G SqlPlus)
set lines 300
col file_name format a65
select 
        c.file#, a.tablespace_name as "TS", a.file_name, a.bytes/1024/1024 as "A.SIZE", a.increment_by * c.block_size/1024/1024 as "A.INCREMENT_BY", a.maxbytes/1024/1024 as "A.MAX"
from 
        dba_data_files a, dba_tablespaces b, v$datafile c
where 
        a.tablespace_name = b.tablespace_name
        and a.file_name = c.name
        and a.tablespace_name in (select tablespace_name from dba_tablespaces)
    	and a.autoextensible = 'YES'
union all
select 
        c.file#, a.tablespace_name as "TS", a.file_name, a.bytes/1024/1024 as "A.SIZE", a.increment_by * c.block_size/1024/1024 as "A.INCREMENT_BY", a.maxbytes/1024/1024 as "A.MAX"
from 
        dba_temp_files a, dba_tablespaces b, v$tempfile c
where 
        a.tablespace_name = b.tablespace_name
        and a.file_name = c.name
        and a.tablespace_name in (select tablespace_name from dba_tablespaces)
    	and a.autoextensible = 'YES';
}}}
{{{
WITH d AS
  (SELECT TO_CHAR(startup_time,'MM/DD/YYYY HH24:MI:SS') startup_time,startup_time startup_time2,
    TO_CHAR(lag(startup_time) over ( partition BY dbid, instance_number order by startup_time ),'MM/DD/YYYY HH24:MI:SS') last_startup
  FROM dba_hist_database_instance
  order by startup_time2 desc
  )
SELECT startup_time,
  last_startup,
  ROUND(
  CASE
    WHEN last_startup IS NULL
    THEN 0
    ELSE (TO_DATE(startup_time,'MM/DD/YYYY HH24:MI:SS') - TO_DATE(last_startup,'MM/DD/YYYY HH24:MI:SS'))
  END,0) days
FROM d;	
}}}
https://community.oracle.com/thread/2391416?tstart=0
<<showtoc>>

! watch this first 
http://www.git-tower.com/learn/git/videos

! read this first
http://nvie.com/posts/a-successful-git-branching-model/

! other videos 
Short and Sweet: Advanced Git Command-Line Mastery https://www.udemy.com/course/draft/742846/
Short and Sweet: Next-Level Git and GitHub - Get Productive https://www.udemy.com/course/draft/531846/
https://www.udemy.com/course/short-and-sweet-get-started-with-git-and-github-right-now/
https://www.udemy.com/course/the-complete-github-course-for-developers/


! test accounts
emberdev1
emberdev2
emberdevgroup

! official documentation, videos, and help
https://guides.github.com/activities/hello-world/
https://guides.github.com/
https://help.github.com/articles/setting-up-teams/
https://www.youtube.com/githubguides
https://help.github.com/
https://guides.github.com/features/mastering-markdown/

! version control format
http://git-scm.com/book/en/v2/Git-Basics-Tagging
Semantic Versioning 2.0.0 http://semver.org/

''Awesome github walkthrough - video series'' http://308tube.com/youtube/github/
https://github.com/karlarao
http://git-scm.com/download/win
http://www.javaworld.com/javaworld/jw-08-2012/120830-osjp-github.html?page=1		

! HOWTO - general workflow

[img[ https://lh5.googleusercontent.com/-9Sx7XCA7jQ8/UyKDQRA3ngI/AAAAAAAACJA/fzmAgpCi6xU/w2048-h2048-no/github_basic_workflow.png ]]

! Basic commands and getting started
<<<
''Git Data Flow''
{{{
1) Current Working Directory	<-- git init <project>
2) Index (cache)				<-- git add .
3) Local Repository				<-- git commit -m "<comment>"
4) Remote Repository	
}}}
<<<

<<<
''Client side setup''
{{{
http://git-scm.com/downloads   <-- download here 

git config --global user.name "karlarao"
git config --global user.email "karlarao@gmail.com"
}}}
<<<

<<<
''Common commands''
{{{
git init awrscripts				<-- or you can just cd on "awrscripts" folder and execute "git init"
git status
git add . 						<-- add all the files under the master folder to the staging area
git add <filename>				<-- stage just one file
git rm --cached <filename>		<-- remove a file from the staging area
git commit -m "initial commit"	<-- to commit changes (w/ comment), and save a snapshot of the local repository 
                                             * note that after you modify a file you have to "git add ." again; otherwise commit will say no changes added to commit
git log							<-- show summary of commits
vi README.md        <-- markdown format readme file, header should start with #

git diff
git add .				
git diff --cached				<-- get the differences in the staging area, because you've already executed the "add"..

## shortcuts
git commit -a -m "short commit"		<-- combination of add and commit
git log --oneline					<-- shorter summary
git status -s						<-- shorter show changes
}}}

''Exclude file''
https://coderwall.com/p/n1d-na/excluding-files-from-git-locally
<<<

! Integration with Github.com
<<<
''Github.com setup''
{{{
go to github.com and create a new repository
on your PC go to C:\Users\Karl
open git bash and type in ssh-keygen below
ssh-keygen.exe -t rsa -C "karlarao@gmail.com"		<-- this will create RSA on C:\Users\Karl directory
copy the contents of id_rsa.pub under C:\Users\karl\.ssh directory
go to github.com -> Account Settings -> SSH Keys -> Add SSH Key
ssh -T git@github.com								<-- to test the authentication
}}}
''Github.com integrate and push''
{{{
go to repositories folder -> on SSH tab -> copy the key
git remote add origin <repo ssh key from website>
git remote add origin git@github.com:karlarao/awrscripts.git
git push origin master
}}}
''Github.com integrate with GUI''
{{{
download the GUI here http://windows.github.com/
login and configure, at the end just hit skip
go to tools -> options -> change the default storage directory to the local git directory C:\Dropbox\CodeNinja\GitHub
click Scan For Repositories -> click Add -> click Update
click Publish -> click Sync
}}}
''for existing repos, you can do a clone''
{{{
git clone git@github.com:<name>/<repo>
}}}
<<<

! github pages
sync the repo to github pages
{{{
cd ~ubuntu/telegram/
git config --global user.name "karlarao"
git config --global user.email "karlarao@gmail.com"
git add .
git status # to see what changes are going to be commited
git commit -m "."
git remote add origin git@github.com:karlarao/telegram.git
git push origin master
# git branch gh-pages # this is one time
git checkout gh-pages # go to the gh-pages branch
git rebase master # bring gh-pages up to date with master
git push origin gh-pages # commit the changes
git checkout master # return to the master branch
	
access the page at http://karlarao.github.io/<repo>/
}}}

https://github.com/blog/2289-publishing-with-github-pages-now-as-easy-as-1-2-3


!! github pages for blogging 
https://howchoo.com/g/yzg0yjdmntl/how-to-blog-in-markdown-using-github-and-jekyll-now
https://blog.iarsov.com/general/blog-migrated-to-github-pages/


! render HTML file in github without git pages
http://stackoverflow.com/questions/8446218/how-to-see-an-html-page-on-github-as-a-normal-rendered-html-page-to-see-preview


! track a zipfile based script repo, useful for blogs or sites
{{{
add this on your crontab 
# refresh git with the <script> scripts
0 4 * * * /home/karl/bin/git_<script>.sh

$ cat /home/karl/bin/git_<script>.sh
cd ~/github/<script directory>
rm <script>.zip
wget http://<site>.com/files/<script>.zip
unzip -o <script>.zip -d ~/github/<script directory>
git config --global user.name "karlarao"
git config --global user.email "karlarao@gmail.com"
git add .
git commit -m "."
#git remote add origin git@github.com:karlarao/<script>.git
git push origin master

then make sure to favorite the repo to get emails!
}}}


! Branch, Merge, Clone, Fork
{{{
Branching	<-- allows you to create a separate working copy of your code 
Merging		<-- merge branches together
Cloning		<-- other developers can get a copy of your code from a remote repo
Forking		<-- make use of someone's code as starting point of a new project


-- 1st developer created a branch r2_index
git branch								<-- show branches
git branch r2_index						<-- create a branch name "r2_index"
git checkout r2_index					<-- to switch to the "r2_index" branch
git checkout <the branch you want to go to>		* make sure to close all files before switching to another branch

-- 2nd developer on another machine created r2_misc
git clone <ssh link>					<-- to clone a project
git branch r2_misc
git checkout r2_misc
git push origin <branch name>	<-- to update the remote repo

-- bug fix on master
git checkout master
git push origin master

-- merge to combine the changes from 1st developer to the master project
	* conflict may happen due to changes at the same spot for both branches
git branch r2_index
git merge master

	* conflict looks like the following:
		<<<<<<< HEAD
		1)
		=======
		TOC:
		1) one
		2) two
		3) three
		>>>>>>> master
git push origin r2_index

-- pull, synchronizes the local repo with the remote repo
	* remember, PUSH to send up GitHub, PULL to sync with GitHub
git pull origin master

}}}


! Delete files on git permanently
http://stackoverflow.com/questions/1983346/deleting-files-using-git-github  <-- good stuff
http://dalibornasevic.com/posts/2-permanently-remove-files-and-folders-from-a-git-repository
https://www.kernel.org/pub/software/scm/git/docs/git-filter-branch.html
{{{
cd /Users/karl/Dropbox/CodeNinja/GitHub/tmp
git init
git status
git filter-branch --force --index-filter 'git rm --cached --ignore-unmatch *' --prune-empty --tag-name-filter cat -- --all
git commit -m "."
git push origin master --force
}}}

! delete history 
https://rtyley.github.io/bfg-repo-cleaner/
https://help.github.com/articles/remove-sensitive-data/
http://stackoverflow.com/questions/37219/how-do-you-remove-a-specific-revision-in-the-git-history
https://hellocoding.wordpress.com/2015/01/19/delete-all-commit-history-github/
http://samwize.com/2014/01/15/how-to-remove-a-commit-that-is-already-pushed-to-github/


! Deleting a repository
https://help.github.com/articles/deleting-a-repository

! rebase 
rebase https://www.youtube.com/watch?v=SxzjZtJwOgo

! forking 
forking https://www.youtube.com/watch?v=5oJHRbqEofs
<<<
Some notes on forking: 
* Let's say you get assigned as a collaborator on a private repo called REPO1
* If REPO1 gets forked as a private REPO2 by another guy, instantly you'll also be part of that REPO2
* When the original creator deletes you as a collaborator on REPO1, you will no longer see anything from it, but you will still have access to REPO2
<<<

! pull request 
pull request (contributing to the fork) https://www.youtube.com/watch?v=d5wpJ5VimSU

! team collaboration 
team https://www.youtube.com/watch?v=61WbzS9XMwk

! git on dropbox conflicts
http://edinburghhacklab.com/2012/11/when-git-on-dropbox-conflicts-no-problem/
http://stackoverflow.com/questions/12773488/git-fatal-reference-has-invalid-format-refs-heads-master


! other references
gitflow http://nvie.com/posts/a-successful-git-branching-model/

''master zip''
http://stackoverflow.com/questions/8808164/set-the-name-of-a-zip-downloadable-from-github-or-other-ways-to-enroll-google-tr
http://stackoverflow.com/questions/7106012/download-a-single-folder-or-directory-from-a-github-repo
http://alblue.bandlem.com/2011/09/git-tip-of-week-git-archive.html
http://gitready.com/intermediate/2009/01/29/exporting-your-repository.html
http://manpages.ubuntu.com/manpages/intrepid/man1/git-archive.1.html
http://stackoverflow.com/questions/8377081/github-api-download-zip-or-tarball-link

''uploading binary files (zip)'' 
https://help.github.com/articles/distributing-large-binaries/
https://help.github.com/articles/about-releases/
https://help.github.com/articles/creating-releases/
https://gigaom.com/2013/07/09/oops-github-did-it-again-relaunches-binary-uploads-after-scuttling-them/ 
https://github.com/blog/1547-release-your-software

''live demos''
http://solutionoptimist.com/2013/12/28/awesome-github-tricks/

''Git hook to send email notification on repo changes''
http://stackoverflow.com/questions/552360/git-hook-to-send-email-notification-on-repo-changes

''git rebase'' http://git-scm.com/docs/git-rebase

''git gist'' https://gist.github.com/

''gitignore.io'' https://www.gitignore.io/ <- Create useful .gitignore files for your project, there's also a webstorm plugin for this



! git merge upstream 
https://www.google.com/search?q=git+merge+upstream+tower+2&oq=git+merge+upstream+tower+2&aqs=chrome..69i57j69i64.9674j0j1&sourceid=chrome&ie=UTF-8


! git - contributing to a repo or project

!! fork and pull model (ideal for large projects)
here you create a branch

!! direct clone model / collaboration model (ideal for small projects)
here you work on master branch
cb
pb
pbg
pull


! git revert / fix a damaging commit 
create branch
head
revert 
commit 
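The four steps above, sketched end to end in a scratch repository (names and commit messages are placeholders; the point of `git revert` is that it adds a new commit inverting the bad one, so already-pushed history is never rewritten):

```shell
#!/bin/sh
# Sketch: undo a damaging commit with revert, in a throwaway repo.
# Placeholder names throughout.
d=$(mktemp -d) && cd "$d" && git init -q .
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
echo good > f; git add f; g commit -qm "good"
echo bad  > f; git add f; g commit -qm "bad"
git checkout -q -b fix_bad_commit     # 1) create a branch for the fix
g revert --no-edit HEAD               # 2-4) revert + commit the inverse of HEAD
cat f                                 # file content is back to "good"
```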


! Equivalent of “svn checkout” for git
http://stackoverflow.com/questions/18900774/equivalent-of-svn-checkout-for-git
http://stackoverflow.com/questions/15595778/github-what-does-checkout-do


! HOWTO rename a repo 
<<<
what happens when I rename a git repo? do you have any advice on properly renaming a git repo?
<<<
<<<
I have pretty good news for you -- since 2013, GitHub automatically redirects people from the old repo to the new repo. (There are some caveats, so read on.) Please read the Stack Overflow answer that starts with "since May 2013" on this page: 

http://stackoverflow.com/questions/5751585/how-do-i-rename-a-repository-on-github

Here is the GitHub announcement of the redirect feature: 

https://github.com/blog/1508-repository-redirects-are-here

https://help.github.com/articles/renaming-a-repository/

Note, it is still recommended that you update your local repository to specifically point at the new repository. You can do that with a simple: 

git remote set-url origin https://github.com/YourGitHubUserName/NewRepositoryName.git

(where you replace YourGitHubUserName and NewRepositoryName with the appropriate information)

Ideally, your collaborators will also update their local repositories, but it's not an immediate requirement unless you are going to re-use the old repository name (which will break redirects and I don't recommend doing that). 
<<<
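In an existing clone, the repoint-and-verify dance is short. A sketch in a scratch repository (URLs are the placeholders from the quote above; on a real clone only the `set-url` line is needed):

```shell
#!/bin/sh
# Demonstrate repointing 'origin' after a repository rename.
d=$(mktemp -d) && cd "$d" && git init -q .
git remote add origin https://github.com/YourGitHubUserName/OldRepositoryName.git
git remote set-url origin https://github.com/YourGitHubUserName/NewRepositoryName.git
git remote get-url origin    # now prints the new repository URL
```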



! github acquisition 
https://blog.github.com/2018-06-04-github-microsoft/
https://news.microsoft.com/2018/06/04/microsoft-to-acquire-github-for-7-5-billion/
https://blogs.microsoft.com/blog/2018/06/04/microsoft-github-empowering-developers/
https://thenextweb.com/dd/2018/06/04/microsoft-buying-github-doesnt-scare-me/


! awesome github clients 

!! gitsome

https://github.com/donnemartin/gitsome
<<<
* needs python 3.5.0 

example query commands: 
{{{
gh feed karlarao -p
gh search-repos "created:>=2017-01-01 user:karlarao"
}}}

installation:
{{{
brew install pyenv
xcode-select --install
pyenv local 3.5.0
PATH="~/.pyenv/versions/3.5.0/bin:${PATH}"
pip3 install gitsome

# then configure 
gh configure

}}}

add on .bash_profile 
{{{
cat .bash_profile
PATH="~/.pyenv/versions/3.5.0/bin:${PATH}"
}}}


<<<

references 
https://apple.stackexchange.com/questions/237430/how-to-install-specific-version-of-python-on-os-x
https://www.chrisjmendez.com/2017/08/03/installing-multiple-versions-of-python-on-your-mac-using-homebrew/
http://mattseymour.net/blog/2016/03/brew-installing-specific-python-version/
https://github.com/pyenv/pyenv

!! gist

https://github.com/defunkt/gist
<<<

example query commands:
{{{
gist -l
}}}

installation: 
{{{
brew install gist 
gist --login
}}}


walking the json tree:
example file https://api.github.com/gists/7460546580cc3969547029aca27c5fe6
https://stackoverflow.com/questions/28983131/is-there-any-way-to-retrieve-the-name-for-a-gist-that-github-displays
https://dev.to/m1guelpf/3-ways-to-get-data-from-github-gists-9bg
https://sendgrid.com/blog/gist-please/
https://stackoverflow.com/questions/49382979/how-to-loop-a-json-keys-result-from-bash-script
https://github.com/ingydotnet/json-bash
https://starkandwayne.com/blog/bash-for-loop-over-json-array-using-jq/
https://github.com/stedolan/jq
https://stackoverflow.com/questions/48384217/get-secret-gists-using-github-graphql
{{{
sample='[{"name":"foo"},{"name":"bar"}]'
for row in $(echo "${sample}" | jq -r '.[] | @base64'); do
    _jq() {
     echo ${row} | base64 --decode | jq -r ${1}
    }

   echo $(_jq '.name')
done
foo
bar
}}}
https://stackoverflow.com/questions/33655700/github-api-fetch-issues-with-exceeds-rate-limit-prematurely


<<<


! github flagged account
https://webapps.stackexchange.com/questions/105956/my-github-account-has-been-suddenly-flagged-and-hidden-from-public-view-how
https://github.community/t5/How-to-use-Git-and-GitHub/My-account-is-flagged/td-p/221
https://stackoverflow.com/questions/41344476/github-account-disabled-for-posting-gists
http://a-habakiri.hateblo.jp/entry/20161208accountflagged




















Oracle Global Data Services (GDS) MAA Best Practices http://www.oracle.com/technetwork/database/availability/maa-globaldataservices-3413211.pdf
Intelligent Workload Management across Database Replicas https://zenodo.org/record/45065/files/SummerStudentReport-RitikaNevatia.pdf

-- hierarchy 

nls_database
nls_instance
nls_session
environment



-- nls_length_semantics

NLS_LENGTH_SEMANTICS enables you to create CHAR and VARCHAR2 columns using either byte or character length semantics. Existing columns are not affected.
NCHAR, NVARCHAR2, CLOB, and NCLOB columns are always character-based. You may be required to use byte semantics in order to maintain compatibility with existing applications.
NLS_LENGTH_SEMANTICS does not apply to tables in SYS and SYSTEM. The data dictionary always uses byte semantics.

http://oracle.ittoolbox.com/groups/technical-functional/oracle-db-installs-l/need-to-change-nls_length_semantics-from-byte-to-char-on-production-systems-1168275
http://www.oracle-base.com/articles/9i/CharacterSemanticsAndGlobalization9i.php
http://decipherinfosys.wordpress.com/2007/02/19/nls_length_semantics/

The National Character Set ( NLS_NCHAR_CHARACTERSET ) in Oracle 9i, 10g and 11g (Doc ID 276914.1)
Unicode Character Sets In The Oracle Database (Doc ID 260893.1)
AL32UTF8 / UTF8 (Unicode) Database Character Set Implications (Doc ID 788156.1)
Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode) (Doc ID 260192.1)
Complete Checklist for Manual Upgrades to 11gR2 (Doc ID 837570.1)
Complete Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9iR2 (9.2.0) (Doc ID 159657.1)
Problems connecting to AL32UTF8 databases from older versions (8i and lower) (Doc ID 237593.1)
NLS considerations in Import/Export - Frequently Asked Questions (Doc ID 227332.1)


-- TIME

Dates & Calendars - Frequently Asked Questions (Doc ID 227334.1)

Time related columns can get ahead of SYSDATE (Doc ID 268967.1)

Impact of changes to daylight saving time (DST) rules on the Oracle database (Doc ID 357056.1)

What are the effects of changing the system clock on an Oracle Server instance? (Doc ID 77370.1)

Y2K FAQ - Server Products (Doc ID 69388.1)


-- DST
http://www.pythian.com/news/18111/have-your-scheduler-jobs-changed-run-times-since-dst/


-- UTC 
Time Zones in MySQL http://www.youtube.com/watch?v=RDgGzaZIpbk



-- GNOME3 FALLBACK MODE
http://forums.fedoraforum.org/showthread.php?t=263491
https://www.virtualbox.org/wiki/Downloads

* install kernel-headers, kernel-devel
* install vbox guest additions
* install vbox extension pack
* enable 3d on vbox
* reboot
http://go-database-sql.org/references/
http://goprouser.freeforums.org/how-do-you-carry-your-gopro-t362-20.html

users guide http://gopro.com/wp-content/uploads/2011/03/HD-HERO-UM-ENG-110110.pdf
goalsontrack http://www.youtube.com/watch?v=Rmb0OxMw95I, http://www.goalsontrack.com/
http://lifetick.com/
http://lifehacker.com/5873909/five-best-goal-tracking-services
https://www.mindbloom.com/lifegame
* lifestyle
* career
* creativity
* spirituality
* health
* relationships
* finances

SMART goal 
http://mobileoffice.about.com/od/glossary/g/smart-goals-definition.htm
<<<
Definition: SMART is an acronym used as a mnemonic to make sure goals or objectives are actionable and achievable. Project managers use the criteria spelled out in SMART to evaluate goals, but SMART can also be used by individuals for personal development or personal productivity.
What Does SMART Mean?

There are many variations to the SMART definition; the letters can alternately signify:
S - specific, significant, simple

M - measurable, meaningful, manageable

A - achievable, actionable, appropriate, aligned

R - relevant, rewarding, realistic, results-oriented

T - timely, tangible, trackable

Alternate Spellings: S.M.A.R.T.
Examples:
A general goal may be to "make more money" but a SMART goal would define the who, what, where, when, and why of the objective: e.g., "Make $500 more a month by freelance writing for online blogs 3 hours a week."
<<<

SMART goal worksheet http://www.goalsontrack.com/resources/SMART_Goal_Worksheet_(PDF).pdf
Task flow worksheet http://www.goalsontrack.com/resources/Task_Flow_Worksheet_(PDF).pdf
12 month success planner http://www.goalsontrack.com/resources/TSP-12MonthPlanner.pdf







<<showtoc>>

! golden gate health check 
<<<
* Latest GoldenGate/Database (OGG/RDBMS) Patch recommendations (Doc ID 2193391.1)
* Oracle GoldenGate Performance Data Gathering (Doc ID 1488668.1)
* https://www.ateam-oracle.com/loren-penton
* https://www.oracle.com/technetwork/database/availability/maa-gg-performance-1969630.pdf
* https://www.oracle.com/a/tech/docs/maa-goldengate-hub.pdf
* SRDC: Oracle GoldenGate Integrated Extract and Replicat Performance Diagnostic Collector (Doc ID 2262988.1)
* Master Note for Streams Recommended Configuration (Doc ID 418755.1)

MOS Note:1298562.1:
Oracle GoldenGate database Complete Database Profile check script for Oracle DB (All Schemas) Classic Extract 

MOS Note: 1296168.1
Oracle GoldenGate database Schema Profile check script for Oracle DB

MOS Note: 1448324.1
GoldenGate Integrated Capture and Integrated Replicat Healthcheck Script 

<<<


[img(100%,100%)[ https://i.imgur.com/TZiyftQ.jpg ]]

[img(70%,70%)[https://i.imgur.com/avZx13h.png]]
[img(70%,70%)[https://i.imgur.com/qmNlMaH.jpg]]
[img(70%,70%)[https://i.imgur.com/qeu3wUA.jpg]]
[img(70%,70%)[https://i.imgur.com/hGon9iR.jpg]]
[img(70%,70%)[https://i.imgur.com/evzj3DP.jpg]]
[img(70%,70%)[https://i.imgur.com/dx5xSt0.png]]
[img(70%,70%)[https://i.imgur.com/AoOiyMi.png]]
[img(70%,70%)[https://i.imgur.com/JYlRjnt.jpg]]



! Enkitec materials
https://connectedlearning.accenture.com/curator/chanea-heard
https://connectedlearning.accenture.com/learningboard/goldengate-administration


! Oracle youtube materials
Oracle GoldenGate 12c Overview https://www.youtube.com/watch?v=GdjuiWPPmVs
Oracle GoldenGate 12c New Features https://www.youtube.com/watch?v=ABle015pRXY&list=PLgvgXKR2fhHBa592Btv5qT3tdqKF5ifuF
Oracle Golden Gate: The Essentials of Data Replication https://www.youtube.com/watch?v=d-YAouQ1g0Y
Oracle GoldenGate Tutorials for Beginners https://www.youtube.com/watch?v=qQJvc1DyLIw
Oracle GoldenGate Deep Dive Hands on Lab - Part 1 https://www.youtube.com/watch?v=5Yp6bvGeP2s
Oracle GoldenGate Deep Dive Hands on Lab - Part 2 https://www.youtube.com/watch?v=bOnGgnjXdNo
Oracle GoldenGate Deep Dive Hands on Lab - Part 3 https://www.youtube.com/watch?v=86QK9NXEKks
Oracle GoldenGate Integrated Extract and Replicat demo https://www.youtube.com/watch?v=dF5RfCeClIo
https://www.youtube.com/user/oraclegoldengate/videos
Oracle GoldenGate 12c: Fundamentals for Oracle http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=609&get_params=dc:D84357,clang:EN
Oracle Vbox VM http://www.oracle.com/technetwork/middleware/data-integrator/odi-demo-2032565.html , http://www.oracle.com/technetwork/middleware/data-integrator/downloads/odi-12c-getstart-vm-install-guide-2401840.pdf

! other youtube materials
Oracle Goldengate Installation and Configuration https://www.youtube.com/watch?v=XnyqS6_IVMQ
Oracle GoldenGate 12c Installation/Walkthrough on VirtualBox https://www.youtube.com/watch?v=c4NxBTnJYvo
11 part series - MS SQL Server to Oracle GG https://www.youtube.com/playlist?list=PLbkU_gVPZ7OTgLRLABah9kdrJ07Tml8E4
6 part series - GoldenGate Tutorial goldengate installation on linux https://www.youtube.com/watch?v=lb3UKpgCA1U&list=PLZSKX9aay1XvIQjy0lWJ5RSn0iuCWGrnL



! golden gate waits 
Integrated Replicat Stuck With REPL Capture/Apply: flow control (Doc ID 2354344.1)










[img(50%,50%)[ https://lh3.googleusercontent.com/-kIc50LO2TF4/UYqTs_DcWxI/AAAAAAAAB5s/aplJ3QweTYI/w757-h568-no/GoldenGateUseCases.jpg ]]

http://38.114.158.111/
http://forums.oracle.com/forums/thread.jspa?messageID=4306770
http://cglendenningoracle.blogspot.com/2010/02/streams-vs-golden-gate.html

GoldenGate Quick Start Tutorials
http://gavinsoorma.com/oracle-goldengate-veridata-web/

Oracle Active Data Guard and Oracle GoldenGate
http://www.oracle.com/technetwork/database/features/availability/dataguardgoldengate-096557.html

Oracle GoldenGate high availability using Oracle Clusterware
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/middleware/goldengate/overview/ha-goldengate-whitepaper-128197.pdf

Zero-Downtime Database Upgrades Using Oracle GoldenGate
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/middleware/goldengate/overview/ggzerodowntimedatabaseupgrades-174928.pdf

Oracle GoldenGate 11g: Real-Time Access to Real-Time Information
https://docs.google.com/viewer?url=http://www.oracle.com/us/products/middleware/data-integration/goldengate11g-realtime-wp-168153.pdf%3FssSourceSiteId%3Dotnen

Golden Gate Scripting
http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/goldengate/11g/ogg_automate/index.html
Oracle GoldenGate Tutorial 10- performing a zero downtime cross platform migration and 11g database upgrade
http://gavinsoorma.wordpress.com/2010/03/08/oracle-goldengate-tutorial-10-performing-a-zero-downtime-cross-platform-migration-and-11g-database-upgrade/

! console 
http://console.cloud.google.com/




https://stackoverflow.com/questions/22697049/what-is-the-difference-between-google-app-engine-and-google-compute-engine

! official documentation 
https://cloud.google.com/products/
https://cloud.google.com/compute/docs/how-to

! cloud sdk 
https://cloud.google.com/sdk/downloads
https://github.com/GoogleCloudPlatform

! GCP vs AWS comparison 
https://cloud.google.com/docs/compare/aws
https://www.coursera.org/learn/gcp-fundamentals-aws

! hadoop and gcp
https://www.lynda.com/Hadoop-tutorials/Hadoop-Google-Cloud-Platform/516574/593166-4.html

! GCP networking

!! edge network 
https://peering.google.com/#/


! online courses 
http://bit.ly/2Al1rUP
https://www.coursera.org/specializations/gcp-architecture
https://www.coursera.org/learn/gcp-fundamentals-aws#pricing <- Google Cloud Platform Fundamentals for AWS Professionals
https://www.udemy.com/courses/search/?q=google%20compute%20engine&src=ukw
https://www.lynda.com/Cloud-tutorials/Google-Cloud-Compute-Engine-Essential-Training/181244-2.html
https://www.pluralsight.com/search?q=google%20compute
https://www.pluralsight.com/authors/lynn-langit
https://www.safaribooksonline.com/search/?query=%22Google%20Compute%20Engine%22&extended_publisher_data=true&highlight=true&is_academic_institution_account=false&source=user&include_assessments=false&include_case_studies=true&include_courses=true&include_orioles=true&include_playlists=true&formats=video&sort=relevance

!! Lynn-Langit
https://www.lynda.com/Google-Cloud-Platform-tutorials/Google-Cloud-Platform-Essential-Training/540539-2.html
https://www.lynda.com/Google-Cloud-tutorials/Google-Cloud-Spanner-First-Look/597023-2.html

!! Joseph Lowery
app engine https://www.lynda.com/Developer-Cloud-Computing-tutorials/Google-App-Engine-Essential-Training/194134-2.html
compute engine https://www.lynda.com/Cloud-tutorials/Google-Cloud-Compute-Engine-Essential-Training/181244-2.html
https://www.lynda.com/Cloud-tutorials/Google-Cloud-Storage-Data-Essential-Training/181243-2.html

!! James Wilson 
cloud functions https://app.pluralsight.com/library/courses/google-cloud-functions-getting-started/table-of-contents









http://net.tutsplus.com/tutorials/other/easy-graphs-with-google-chart-tools/ <-- very detailed tutorial
http://code.google.com/apis/chart/image/docs/chart_playground.html  <-- chart playground 
http://code.google.com/apis/ajax/playground/?type=visualization#tree_map  <-- treemap code playground
http://cran.r-project.org/web/packages/googleVis/vignettes/googleVis.pdf  <-- Using the Google Visualisation API with R: googleVis-0.2.13 Package Vignette
http://psychopyko.com/tutorial/how-to-use-google-charts/
http://www.a2ztechguide.com/2011/11/example-on-how-to-use-google-chart-api.html
http://forums.msexchange.org/m_1800499864/mpage_1/key_exchsvr/tm.htm#1800499871

Embedding a Google Chart within an email http://groups.google.com/group/google-chart-api/browse_thread/thread/0ca54c8281952005
http://www.bencurtis.com/2011/02/quick-tip-sending-google-chart-links-via-email/
http://googleappsdeveloper.blogspot.com/2011/09/visualize-your-data-charts-in-google.html
http://www.ibm.com/developerworks/data/library/techarticle/dm-1111googlechart/index.html?ca=drs-
http://www.2webvideo.com/blog/data-visualization-tutorial-using-google-chart-tools
http://www.guidingtech.com/7221/create-charts-graphs-google-image-chart-editor/
http://blog.ouseful.info/2009/02/17/creating-your-own-results-charts-for-surveys-created-with-google-forms/







http://awads.net/wp/2006/04/17/orana-powered-by-google-and-feedburner/	
google chrome linux
http://superuser.com/questions/52428/where-does-google-chrome-for-linux-store-user-specific-data
http://www.google.com/support/forum/p/Chrome/thread?tid=328b2114587dd5ee&hl=en
http://www.google.com/support/forum/p/Chrome/thread?tid=08e9aa36ad5159cb&hl=en <-- profile
http://www.google.ru/support/forum/p/Chrome/thread?tid=6a3d820ca818336b&hl=en <-- transfer settings
http://www.google.com/support/forum/p/Chrome/thread?tid=328b2114587dd5ee&hl=en <-- sync


google chrome windows
http://www.google.com.ph/support/forum/p/Chrome/thread?tid=34397b8ff6a48a99&hl=en <-- windows
http://www.walkernews.net/2010/09/13/how-to-backup-and-restore-google-chrome-bookmark-history-plugin-and-theme/

sync
http://www.google.com/support/chrome/bin/answer.py?answer=185277

manual uninstall
http://support.google.com/chrome/bin/answer.py?hl=en&answer=111899

shockwave issue http://www.howtogeek.com/103292/how-to-fix-shockwave-flash-crashes-in-google-chrome/

http://superuser.com/questions/772092/making-google-chrome-35-work-on-centos-6-5
{{{
The Google Chrome team no longer officially supports CentOS 6. That doesn't mean it won't work, however. Richard Lloyd has put together a script that does everything necessary to get it running:

wget http://chrome.richardlloyd.org.uk/install_chrome.sh
sudo bash install_chrome.sh
}}}
''Wiki'' http://en.wikipedia.org/wiki/Greenplum

''Datasheets'' 
http://www.greenplum.com/sites/default/files/h7419.5-greenplum-dca-ds.pdf
http://www.greenplum.com/sites/default/files/h8995-greenplum-database-ds.pdf
http://finland.emc.com/collateral/campaign/global/forums/greenplum-emc-driving-the-future.pdf
http://goo.gl/Baa60

''Architecture document'' http://goo.gl/lGwQ1











this is how onecommand creates grid disks
http://www.evernote.com/shard/s48/sh/65b7e258-543e-4d79-a855-78458a82b830/4f043b0a2dbc947dc603b93718974910
http://www.pythian.com/news/16103/how-to-gns-process-log-level-for-diagnostic-purposes-11g-r2-rac-scan-gns/

http://coskan.wordpress.com/2010/09/11/dbca-could-not-startup-the-asm-instance-configured-on-this-node-error-for-lower-versions-with-11gr2-gi/
https://blogs.oracle.com/fatbloke/entry/growing_your_virtualbox_virtual_disk
http://www.perfdynamics.com/Manifesto/gcaprules.html
http://karlarao.wordpress.com/2010/07/27/guesstimations
http://blog.oracle-ninja.com/2015/07/haip-and-exadata/

11gR2 Grid Infrastructure Redundant Interconnect and ora.cluster_interconnect.haip [ID 1210883.1]

http://blogs.oracle.com/AlejandroVargas/resource/HAIP-CHM.pdf <-- alejandro's introduction

https://forums.oracle.com/forums/thread.jspa?threadID=2220975
http://oraxperts.com/wordpress/highly-available-ip-redundant-private-ip-in-oracle-grid-infrastructure-11g-release-2-11-2-0-2-or-above/
http://www.oracleangels.com/2011/05/public-virtual-private-scan-haip-in-rac.html
	
Hardware Assisted Resilient Data (H.A.R.D.) (Doc ID 227671.1)


http://www.oracle.com/technology/deploy/availability/htdocs/vendors_hard.html

http://www.oracle.com/technology/deploy/availability/htdocs/HARD.html

http://www.oracle.com/technology/deploy/availability/htdocs/hardf.html

http://www.oracle.com/corporate/press/1067828.html

http://www.dba-oracle.com/real_application_clusters_rac_grid/hard.html
''8Gb/s Fibre Channel HBAs — All the Facts''
http://www.emulex-oracle.com/artifacts/f1f78fdb-7501-4baf-84ea-fe0dfb8e62ec/elx_wp_all_hba_8Gb_next_gen.pdf

https://twiki.cern.ch/twiki/bin/view/PDBService/OrionTests
<<<
* Sequential IO performance is almost inevitably the HBA speed, that is typically 400 MB per sec, or 800 MB when multipathing is used.
* Maximum MBPS typically saturates to the HBA speed. For a single ported 4Gbps HBA you will see something less than 400 MBPS. If the HBA is dual ported and you are using multipathing the number should be close to 800 MBPS
<<<
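The quoted ~400 MB/s per-port figure follows from simple arithmetic; a sketch (the 4.25 Gbaud line rate and 8b/10b encoding for 4Gbps Fibre Channel are standard figures):

```python
# Back-of-the-envelope check of the ~400 MB/s single-port HBA figure.
# 4GFC signals at 4.25 Gbaud and uses 8b/10b line coding, so only 8 of
# every 10 bits on the wire carry data.
line_rate_baud = 4.25e9                 # bits on the wire per second
data_bits = line_rate_baud * 8 / 10     # strip the 8b/10b coding overhead
mb_per_s = data_bits / 8 / 1e6          # bits -> bytes -> MB

print(round(mb_per_s))  # -> 425; framing/protocol overhead brings it near 400
```

Doubling this for a dual-ported HBA with multipathing gives the ~800 MB/s ceiling mentioned above.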
https://docs.oracle.com/cd/E18283_01/appdev.112/e16760/d_compress.htm#  <-- official doc 

https://oracle-base.com/articles/11g/dbms_compression-11gr2
https://oracle-base.com/articles/12c/dbms_compression-enhancements-12cr1
http://uhesse.com/2011/09/12/dbms_compression-example/
https://jonathanlewis.wordpress.com/2011/10/04/hcc/
https://antognini.ch/2010/05/how-good-are-the-values-returned-by-dbms_compression-get_compression_ratio/   <-- good stuff 
https://oraganism.wordpress.com/2013/01/10/compression-advisory-dbms_compression/  <-- security grants 
https://hortonworks.com/services/training/certification/exam-objectives/#hdpcd
https://learn.hortonworks.com/hdp-certified-developer-hdpcd2019-exam

https://2xbbhjxc6wk3v21p62t8n4d4-wpengine.netdna-ssl.com/wp-content/uploads/2018/12/DEV-331_Apache_Hive_Advanced_SQL_DS.pdf

.
{{{
cd c:\Dropbox\Python
c:\Python32\python.exe
import pyreadline as readline           
import readline                         
import rlcompleter                      
readline.parse_and_bind("tab: complete")

# index of modules (presumably served by pydoc's HTTP server, e.g. python -m pydoc -p 7464)
# http://localhost:7464/
}}}

! Autocompletion Windows
{{{
c:\Python32>cd Scripts
c:\Python32\Scripts>easy_install.exe pyreadline

c:\Python32\Scripts>cd ..

c:\Python32>python.exe
Python 3.2 (r32:88445, Feb 20 2011, 21:29:02) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import pyreadline as readline
>>> import readline
>>> import rlcompleter
>>> readline.parse_and_bind("tab: complete")
>>> print(var)
test
}}}
http://www-01.ibm.com/support/docview.wss?uid=swg21425643
http://www.littletechtips.com/2012/03/how-to-enable-tab-completion-in-python.html
http://stackoverflow.com/questions/6024952/readline-functionality-on-windows-with-python-2-7  <-- good stuff
http://www.python.org/ftp/python/contrib-09-Dec-1999/Misc/readline-python-win32.README
''Easy_install doc'' http://packages.python.org/distribute/easy_install.html
http://blog.sadphaeton.com/2009/01/20/python-development-windows-part-1installing-python.html  <-- good stuff
http://blog.sadphaeton.com/2009/01/20/python-development-windows-part-2-installing-easyinstallcould-be-easier.html
http://www.varunpant.com/posts/how-to-setup-easy_install-on-windows
''setuptools (old version 3<)'' http://pypi.python.org/pypi/setuptools#files
''autocompletion linux'' http://www.youtube.com/watch?v=zUHFu8OlDZg
''distribute (new version 3>)'' http://stackoverflow.com/questions/7558518/will-setuptools-work-with-python-3-2-x
http://regebro.wordpress.com/2009/02/01/setuptools-and-easy_install-for-python-3/
http://pypi.python.org/pypi/distribute





!
! Ch1 - starting to code

functions 
* print
* int
* input 

code branches (aka path)
* branch condition (true or false)
* if/else branches
* Python uses indents to connect paths (nested if/else)

IDLE tidbits
* make use of : on if/else
* it automatically indents.. and indents matter for the code path
* when you TAB, it automatically converts it to 4 spaces
{{{
# simple if/else

gas = 15      # example starting values (assumed) so the branches can run
money = 120

if gas > 10:
    print("trip is good to go!")
else:
    if money > 100:
        print("you should buy food")
    else:
        print("withdraw from atm and buy food")
print("lets go!")
}}}

Loop 
* if the loop condition is true, then a loop will run a given piece of code, until it becomes false
* Did you notice that you had to set the value of the answer variable to something sensible before you started the loop? This is important, because if the answer variable doesn’t already have the value no, the loop condition would have been false and the code in the loop body would never have run at all.

{{{
# simple loop
answer = "no"
while answer == "no":
    answer = input("Are we there? ")
print("We're there!")
}}}

''# code template: a simple loop game''
{{{
from random import randint
secret = randint(1, 10)

print("Welcome!")
guess = 0
while guess != secret:
    g = input("Guess the number:")
    guess = int(g)
    if guess == secret:
        print("You win!")
    else:
        if guess > secret:
            print("Too high!")
        else:
            print("Too low!")
        print("You lose!")
print("Game over!")
}}}


!
! Ch2 - textual data

The computer keeps track of individual characters by using ''two pieces of information:''
1) the ''start'' of the string, and
2) the ''offset'' of an individual character: how far that character is from the start of the string.

* The first character in a string has an offset of 0.. and so on.. this offset is also called ''index''
* The offset value is always 1 less than the position

''substring''
s[138:147]
s[a:b]
a is the index of the first character
b is the index after the last character (the slice runs up to, but not including, index b)
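A quick interactive check of the offset rules (the sample string is my own):

```python
# Offsets (indexes) start at 0; a slice s[a:b] runs from index a up to,
# but not including, index b.
s = "Head First Programming"

print(s[0])          # 'H' - offset 0 is the first character
print(s[5:10])       # 'First' - starts at index 5, stops before index 10
print(len(s[5:10]))  # 5 characters: always b - a
```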

''function''
print(msg.upper())

''library and function''
page = urllib.request.urlopen("http://...")   <-- library name . function name

{{{
# simple search code

import urllib.request
import time

price = 99.99

while price > 4.74:
    time.sleep(900)
    page = urllib.request.urlopen("http://www.beans-r-us.biz/prices-loyalty.html")
    text = page.read().decode("utf8")

    index = text.find(">$")
    position = int(index)
    price = float(text[position+2:position+6])
print("Buy!")
print(price)
}}}

<<<
''built-in string methods''

text.endswith(".jpg")
* Return the value True if the string has the given substring at the end.

text.upper(): 
* Return a copy of the string converted to uppercase.

text.lower():
* Return a copy of the string converted to lowercase.

text.replace("tomorrow", "Tuesday"):
* Return a copy of the string with all occurrences of one substring replaced by another.

text.strip():
* Return a copy of the string with the leading and trailing whitespace removed.

text.find("python"):
* Return the first index value when the given substring is found.

text.startswith("<HTML>")
* Return the value True if the string has the given substring at the beginning.
<<<
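The methods listed above can be exercised on any sample string (this one is made up):

```python
# Demo of the built-in string methods described above.
text = "  Tell them I'm coming tomorrow.jpg  "

clean = text.strip()                          # drop leading/trailing whitespace
print(clean.endswith(".jpg"))                 # True
print(clean.upper())                          # all uppercase copy
print(clean.lower())                          # all lowercase copy
print(clean.replace("tomorrow", "Tuesday"))   # substring substitution
print(clean.find("coming"))                   # index where the substring starts
print(clean.startswith("Tell"))               # True
```

Note that each method returns a new string; the original is never modified in place.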

<<<
''some of the functions provided by Python’s built-in time library''

time.clock()
* The current time in seconds, given as a floating point number.

time.daylight
* An attribute (not a function). Nonzero if your local timezone defines Daylight Saving Time.

time.gmtime()
* Tells you current UTC date and time (not affected by the timezone).

time.localtime()
* Tells you the current local time (is affected by your timezone).

time.sleep(secs)
* Don’t do anything for the specified number of seconds.

time.time()
* Tells you the number of seconds since January 1st, 1970.

time.timezone
* An attribute (not a function). The offset of your local (non-DST) timezone from UTC, in seconds west of UTC.
<<<
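The calls above can be exercised quickly (output depends on your clock and timezone):

```python
import time

# Demo of the time library calls listed above. time.daylight and
# time.timezone are module attributes, not functions.
now_utc = time.gmtime()        # current UTC date and time as a struct_time
now_local = time.localtime()   # current local time (affected by your timezone)
seconds = time.time()          # seconds since January 1st, 1970 (the epoch)

print(now_utc.tm_year, now_local.tm_hour)
print(time.timezone / 3600)    # local offset from UTC, converted to hours
time.sleep(0.1)                # do nothing for 0.1 seconds
```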


!
! Ch3 - Functions

* A function is a boxed-up piece of reusable code. 
* In Python, use the ''def'' keyword to define a new function 

{{{
# a simple smoothie function

def make_smoothie():
    juice = input("What juice would you like? ")
    fruit = input("OK - and how about the fruit? ")
    print("Thanks. Let's go!")
    print("Crushing the ice...")
    print("Blending the " + fruit)
    print("Now adding in the " + juice + " juice")
    print("Finished! There's your " + fruit + " and " + juice + " smoothie!")

print("Welcome to smoothie-matic 2.0")
another = "Y"
while another == "Y":
    make_smoothie()
    another = input("How about another(Y/N)? ")
}}}

* If you use the ''return()'' command within a function, you can send a data value back to the calling code.
* The value assigned to “price" is 5.51. The assignment happens after the code in the function executes
* Well... sort of. The print() command is designed to display (or output) a message, typically on screen. The return() command is designed to let a function you write provide a value to your program. Recall the use of randint() in Chapter 1: a random number between two values was returned to your code. So, when providing your code with a random number, the randint() function uses return() and not print(). In fact, if randint() used print() instead of return(), it would be pretty useless as a reusable function.

Q: Does return() always come at the end of the function?
A: Usually, but this is not a requirement, either. The return() can appear anywhere within a function and, when it is executed, control returns to the calling code from that point in the function. It is perfectly reasonable, for instance, to have multiple uses of return() within a function, perhaps embedded within if statements, which then provide a way to control which return() is invoked when.

Q: Can return() send more than one result back to the caller?
A: Yes, it can. return() can provide a list of results to the calling code. But let's not get ahead of ourselves, because lists are not covered until the next chapter. And there's a little bit more to learn about using return() first, so let's read on and get back to work.
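Both points from the Q&A can be shown in a few lines (the grading function is my own example):

```python
# return() vs print(): multiple return statements, controlled by if,
# each hand control back to the caller from a different point.
def grade(score):
    if score >= 90:
        return "A"    # control returns to the caller from here...
    if score >= 75:
        return "B"    # ...or from here
    return "C"        # ...or here, if neither condition was true

result = grade(80)    # the assignment happens after the function executes
print(result)         # B
```

If grade() used print() instead of return(), `result` would be None and the function would be useless for further computation.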

{{{
# send to twitter function

def send_to_twitter():
	msg = "I am a message that will be sent to Twitter"
	password_manager = urllib.request.HTTPPasswordMgr()
	password_manager.add_password("Twitter API",
	"http://twitter.com/statuses", "...", "...")
	http_handler = urllib.request.HTTPBasicAuthHandler(password_manager)
	page_opener = urllib.request.build_opener(http_handler)
	urllib.request.install_opener(page_opener)
	params = urllib.parse.urlencode( {'status': msg} )
	resp = urllib.request.urlopen("http://twitter.com/statuses/update.json", params)
	resp.read()
}}}

* Use parameters to avoid duplicating functions
* Just like it’s a bad idea to use copy’n’paste for repeated usages of code, it’s also a bad idea to create multiple copies of a function with only minor differences between them.
* A parameter is a value that you send into your function.
* The parameter’s value works just like a variable within the function, ''except for the fact that its initial value is set outside the function code''

To use a parameter in Python, simply put a variable name between the parentheses that come after the definition of the function name and before the colon.
Then within the function itself, simply use the variable like you would any other

{{{
# sample function parameter
def shout_out(the_name):
    return("Congratulations " + the_name + "!")

# use it as follows
print(shout_out('Wanda'))
msg = shout_out('Graham, John, Michael, Eric, and Terry by 2')
print(shout_out('Monty'))
}}}

* check out the use of ''msg'' parameter on the function and also on the price watch code
* also ''password'' variable is defined globally
{{{
# sample send to twitter code

import urllib.request 
import time

password="C8H10N4O2" 

def send_to_twitter(msg): 
    password_manager = urllib.request.HTTPPasswordMgr() 
    password_manager.add_password("Twitter API", 
                   "http://twitter.com/statuses", "starbuzzceo", password) 
    http_handler = urllib.request.HTTPBasicAuthHandler(password_manager) 
    page_opener = urllib.request.build_opener(http_handler) 
    urllib.request.install_opener(page_opener) 
    params = urllib.parse.urlencode( {'status': msg} ) 
    resp = urllib.request.urlopen("http://twitter.com/statuses/update.json", params) 
    resp.read()

def get_price(): 
    page = urllib.request.urlopen("http://www.beans-r-us.biz/prices.html") 
    text = page.read().decode("utf8") 
    where = text.find('>$') 
    start_of_price = where + 2 
    end_of_price = start_of_price + 4 
    return float(text[start_of_price:end_of_price]) 

price_now = input("Do you want to see the price now (Y/N)? ") 

if price_now == "Y": 
    send_to_twitter(get_price()) 
else: 
    price = 99.99 
    while price > 4.74: 
        time.sleep(900) 
        price = get_price() 
    send_to_twitter("Buy!")
}}}

* The rest of the program can’t see the ''local variable'' from another function
* Programming languages record variables using a section of memory called the stack. It works like a notepad. 
* When you call a function, Python starts to record any new variables created in the function's code on a new sheet of paper on the stack. This new sheet is called a new ''stack frame''. Stack frames record all of the new variables that are created within a function; these are known as local variables.
* The variables that were created before the function was called are still there on the previous stack frame, if the function needs them.
Twitter Basic vs OAuth authentication
http://www.linuxjournal.com/content/twittering-command-line  <-- OLD STYLE basic authentication removed June 2010
http://jeffmiller.github.com/2010/05/31/twitter-from-the-command-line-in-python-using-oauth  <-- NEW STYLE
http://forums.oreilly.com/topic/20756-sending-messages-to-twitter/page__st__20
http://dev.twitter.com/pages/oauth_faq
http://dev.twitter.com/pages/basic_to_oauth

-- some issues I encountered 
http://answers.yahoo.com/question/index?qid=20090504211017AAQexjf

!!!! Step by step HOWTO - send tweets on command line (all codes are python3)
just go to this page and follow the guide posted by ''Core_500'' no need to install tweepy
{{{
# oauth1.py 

import tweepy

CONSUMER_KEY = 'lGfFmQHYEdGGp2TAE6P0A'
CONSUMER_SECRET = 'iD7OfMrCEWY7X6mQ85QrEhMA2jGtqPmvIoR0mU2gg'

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth_url = auth.get_authorization_url()

print ('Please authorize:' + auth_url)

verifier = input('PIN: ').strip()
auth.get_access_token(verifier)

print ("ACCESS_KEY = '%s'" % auth.access_token.key)
print ("ACCESS_SECRET = '%s'" % auth.access_token.secret)
}}}

{{{
# oauth2.py 

import sys
import tweepy

CONSUMER_KEY = 'lGfFmQHYEdGGp2TAE6P0A'
CONSUMER_SECRET = 'iD7OfMrCEWY7X6mQ85QrEhMA2jGtqPmvIoR0mU2gg'
ACCESS_KEY = '277601098-oVnCXceKKih6B37huPNfxNJsM6q6xvhtZQTdLci8'
ACCESS_SECRET = 'JRzzK88I3oNEEj4FDknVAoJSzC6AhBqkarbkKv59UM'

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
api.update_status(sys.argv[1])
}}}

{{{
# putting it all together 

import sys
import tweepy
import urllib.request
import time


def send_to_twitter(msg):
    CONSUMER_KEY = 'lGfFmQHYEdGGp2TAE6P0A'
    CONSUMER_SECRET = 'iD7OfMrCEWY7X6mQ85QrEhMA2jGtqPmvIoR0mU2gg'
    ACCESS_KEY = '277601098-oVnCXceKKih6B37huPNfxNJsM6q6xvhtZQTdLci8'
    ACCESS_SECRET = 'JRzzK88I3oNEEj4FDknVAoJSzC6AhBqkarbkKv59UM'
    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
    api = tweepy.API(auth)
    api.update_status(msg)


def get_price():
    page = urllib.request.urlopen("http://www.beans-r-us.biz/prices.html")
    text = page.read().decode("utf8")
    where = text.find('>$')
    start_of_price = where + 2
    end_of_price = start_of_price + 4
    return float(text[start_of_price:end_of_price])


price_now = input("Do you want to see the price now (Y/N)? ")


if price_now == "Y":
    send_to_twitter(get_price())
else:
    price = 99.99 
    while price > 4.74: 
        time.sleep(900) 
        price = get_price()   # fetch a fresh price each pass, as in the earlier version
    send_to_twitter("Buy!")   # tweet once, after the price target is reached
}}}

-- use it!
C:\Dropbox\Python>oauth2.py "my 5 tweet"


!
! Ch4 - Data in Files and Arrays

!!! read data in files
{{{
result_f = open("results.txt")     <-- open it!
...
result_f.close()      <-- close it!
}}}

!!!the ''for loop shredder'' 
* The entire file is fed into the for loop shredder...
* Note: unlike a real shredder, the for loop shredder(TM) doesn't destroy your data; it just chops it into lines.
* ...which breaks it up into one-line-at-a-time chunks (which are themselves strings).
* Each time the body of the for loop runs, a variable is set to a string containing the current line of text in the file. This is referred to as ''iterating'' through the data in the file.
{{{
result_f = open("results.txt")
for each_line in result_f:
    print(each_line)
result_f.close()
}}}
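As a side note, the same loop is often written with a ''with'' statement, which closes the file automatically even if something goes wrong mid-loop (a sketch, not from the book text; the sample file contents are made up):

```python
# Write a small sample results.txt, then read it back line by line.
# The with statement closes the file automatically -- no close() needed.
with open("results.txt", "w") as f:
    f.write("Johnny 8.65\nJuan 9.12\n")

with open("results.txt") as result_f:
    for each_line in result_f:
        print(each_line, end="")
```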

!!!''Split'' each line as you read it
* Python strings have a built-in split() method.
* Split into ''separate variables''

rock_band = "Al Carl Mike Brian"

{{{
highest_score = 0
result_f = open("results.txt")
for line in result_f:
    (name,score) = line.split()
    if float(score) > highest_score:
        highest_score = float(score)
result_f.close()
print("The highest score was:")
print(highest_score)
}}}

Using a programming feature called ''multiple assignment'', you can take the result from the cut performed by split() and assign it to a collection of variables:
{{{
(rhythm, lead, vocals, bass) = rock_band.split()
}}}

!!! ''Sorting'' is easier in memory
There are two options for where the data lives while you work with it:
* Keep the data in files on the disk, and sort the file contents
* Read the data into memory, and sort it there (the easier option)
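A minimal sketch of the in-memory option, using made-up score values: load the raw strings into a list, then sort the list instead of the file.

```python
# In-memory sorting: load the values into a list, sort the list,
# and leave the file on disk untouched.
raw = ["8.45", "9.12", "7.89"]      # stand-in for lines read from a file
values = [float(v) for v in raw]
values.sort()
print(values)                        # -> [7.89, 8.45, 9.12]
```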

!!! Sometimes, you need to deal with a whole bundle of data, all at once. To do that, most languages give you the ''array''.
* Think of an array as a data train. Each car in the train is called an array element and can store a single piece of data. If you want to store a number in one element and a string in another, you can.
* Even though an array contains a whole bunch of data items, the array itself is a single variable that just happens to contain a collection of data. Once your data is in an array, you can treat the array just like any other variable.
* In Python, most programmers think "array" when they are actually using a Python list. For our purposes, think of Python lists and arrays as essentially the same thing.
{{{
my_words = ["Dudes", "and"]
print(my_words[0])
    Dudes
print(my_words[1])
    and
}}}
* But what if you need to add some extra information to an array? You can use ''append''.
* You can start with an ''empty array'' (zero values) and just ''append'' items to it.
{{{
my_words.append("Bettys")
print(my_words[2])
    Bettys
}}}

<<<
''some of the methods that come built into every array''

count()
* Tells you how many times a value is in the array

extend()
* Adds a list of items to an array

index()
* Looks for an item and returns its index value

insert()
* Adds an item at any index location

pop()
* Removes and returns the last array item

remove()
* Removes the first occurrence of a given value from the array (unlike pop(), it doesn't return it)

reverse()
* Reverses the order of the array

sort()
* Sorts the array in place (low to high by default)
<<<
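A quick sketch exercising the methods above on a small made-up list, with the intermediate results noted in comments:

```python
# Quick tour of the built-in list (array) methods described above.
waves = [3, 1, 2]

waves.append(4)            # add one item to the end     -> [3, 1, 2, 4]
waves.extend([2, 5])       # add a list of items         -> [3, 1, 2, 4, 2, 5]
print(waves.count(2))      # how many times 2 appears: 2
print(waves.index(4))      # index of the value 4: 3
waves.insert(0, 9)         # add 9 at index 0            -> [9, 3, 1, 2, 4, 2, 5]
print(waves.pop())         # remove and return the last item: 5
waves.remove(9)            # remove the first occurrence of 9
waves.reverse()            # -> [2, 4, 2, 1, 3]
waves.sort()               # -> [1, 2, 2, 3, 4]
print(waves)
```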

!!! ''Sort'' the array before displaying the results

It was very simple to sort an array of data using just two lines of code. But it turns out you can do even
better than that if you use an option with the sort() method. Instead of using these two lines:
''scores.sort()
scores.reverse()''
you could have used just one, which gives the same result: ''scores.sort(reverse = True)''

!!! putting it all together
{{{
scores = []
result_f = open("results.txt")
for line in result_f:
    (name, score) = line.split()
    scores.append(float(score))
result_f.close()
scores.sort(reverse=True)
print("The highest score was:")
print(scores[0])
print(scores[1])
print(scores[2])
}}}

!
! Ch5 - Hashes and Databases

Data Structure: A standard method of organizing a collection of data items in your computer's memory. You've already met one of the classic data structures: ''the array''.

<<<
''data structure names'' 

Array
* A variable with multiple indexed slots for holding data

Linked list
* A variable that creates a chain of data where one data item points to another data item, which itself points to another data item, and another, and so on and so forth

Queue
* A variable that allows data to enter at one end of a collection and leave at the other end, supporting a first-in, first-out mechanism

Hash
* A variable that has exactly two columns and (potentially) many rows of data
* Known in the Python world as a “dictionary.”

Set
* A variable that contains a collection of unique data items

Multi-dimensional array
* A variable that contains data arranged as a matrix of multiple dimensions (but typically, only two)
<<<
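To make the names concrete, here is how each structure looks in Python (a sketch; Python has no separate linked-list type, and collections.deque serves as both a linked structure and a queue):

```python
from collections import deque

array = [10, 20, 30]                  # array: a variable with indexed slots
print(array[1])                       # -> 20

queue = deque(["first", "second"])    # queue: first-in, first-out
print(queue.popleft())                # -> first

hash_ = {"9.12": "Johnny"}            # hash: two "columns", key and value
print(hash_["9.12"])                  # -> Johnny

unique = {1, 2, 2, 3}                 # set: duplicate items collapse
print(len(unique))                    # -> 3

matrix = [[1, 2], [3, 4]]             # multi-dimensional array (2-D here)
print(matrix[1][0])                   # -> 3
```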

!!! Associate a key with a value using a ''hash''
* Start with an empty hash, curly brackets
{{{
scores = {}
}}}
* After splitting out the name and the score, use the value of "score" as the key of the hash and the value of "name" as the value.
{{{
for line in result_f:
    (name, score) = line.split()
    scores[score] = name
}}}
* Use a ''for loop'' to process/print the contents of the hash
{{{
# not sorted
for each_score in scores.keys():
    print('Surfer ' + scores[each_score] + ' scored ' + each_score)
}}}
* Python hashes don't have a sort() method, so you must use the built-in ''sorted()'' function.
* Now that you are sorting the keys of the hash (which represent the surfers' scores), it should be clear why the scores were used as the keys when adding data into the hash: you need to sort the scores, not the surfer names, so the scores need to be on the left side of the hash (because the keys are what the built-in sorted() function works with).
{{{
# sorted using function sorted()
for each_score in sorted(scores.keys(), reverse = True):
    print('Surfer ' + scores[each_score] + ' scored ' + each_score)
}}}

!!! Iterate hash data with ''for''
There are two methods to iterate hash data
1) using ''keys()'' method
{{{
for each_score in scores.keys():
    print('Surfer ' + scores[each_score] + ' scored ' + each_score)
}}}
2) using ''items()'' method, returns each key-value pair 
{{{
for score, surfer in scores.items():
    print(surfer + ' had a score of ' + str(score))
}}}
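The snippets above assume the scores hash already exists; here is a self-contained version with made-up data that exercises both iteration styles:

```python
# Made-up surfer scores, keyed by score so they can be sorted.
# (string sort is OK here because every score has the same d.dd format)
scores = {"8.45": "Kelly", "9.12": "Johnny", "7.89": "Bethany"}

# 1) keys(), sorted highest score first
for each_score in sorted(scores.keys(), reverse=True):
    print('Surfer ' + scores[each_score] + ' scored ' + each_score)

# 2) items(), one key/value pair at a time
for score, surfer in scores.items():
    print(surfer + ' had a score of ' + str(score))
```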
























https://www.hhs.gov/hipaa/for-professionals/special-topics/de-identification/index.html
{{{

21:07:29 SYS@cdb1> show parameter log_archive_start

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_start                    boolean     FALSE


21:10:26 SYS@cdb1> startup mount
ORACLE instance started.

Total System Global Area  734003200 bytes
Fixed Size                  2928728 bytes
Variable Size             633343912 bytes
Database Buffers           92274688 bytes
Redo Buffers                5455872 bytes
Database mounted.
21:10:40 SYS@cdb1>
21:10:56 SYS@cdb1>
21:10:57 SYS@cdb1> alter database archivelog;

Database altered.

21:11:02 SYS@cdb1> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     656
Next log sequence to archive   658
Current log sequence           658
21:11:11 SYS@cdb1> alter database open
21:11:17   2  ;

Database altered.


21:11:24 SYS@cdb1> show parameter log_archive_start

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_start                    boolean     FALSE



alter system switch logfile;
alter system switch logfile;


21:17:27 SYS@cdb1> 

  1  select snap_id, archived from dba_hist_log
  2* order by snap_id asc

   SNAP_ID ARC
---------- ---
      2785 NO
      2785 NO

... output snipped ... 

      2919 NO
      2919 NO
      2919 NO

405 rows selected.


21:18:22 SYS@cdb1> exec dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

21:18:36 SYS@cdb1> select snap_id, archived from dba_hist_log order by snap_id asc;

   SNAP_ID ARC
---------- ---
      2785 NO
      2785 NO

... output snipped ... 

      2920 NO
      2920 YES

   SNAP_ID ARC
---------- ---
      2920 YES

408 rows selected.

}}}
<<showtoc>>

<<<
Alright, here’s something that’s working. This script/command/process can be fired from just the Global Zone and will output the data on each running instance in every non-Global Zone.
We don’t have to log in on each zone, then su - to oracle, and set the environment for every database. This is useful for resource accounting and general monitoring.

You can also modify the scripts to pull anything you want from the instances and format it in such a way that it’s easily grep’able. For example, you can put “Zone :” and “Instance :” in front of every line of output so you can easily grep it in the final text file. The advantage of this zlogin method (this is how we log in on every zone) over using dcli is that it’s native and we don’t have to mess with SSH keys on every zone.

Moving forward I’ll put all the scripts under /root/dba/scripts/ for Global Zones and under /export/home/oracle/dba/scripts/ for non-Global Zones.

Sample output below: 

<<<

{{{
################################################
Zone : ssc1s1vm04
Oracle Corporation      SunOS 5.11      11.3    August 2016

Instance : +ASM1
Instance : dbm041
################################################
Zone : ssc1s1vm05
Oracle Corporation      SunOS 5.11      11.3    August 2016

Instance : +ASM1
Instance : dbm051
}}}

! Here’s the step-by-step:

!! # Log in on the Global Zone and create the oracle dba directory under /export/home/oracle on every zone
{{{
for i in `zoneadm list|grep -v global`; do  echo "################################################
$i"; mkdir -p /zoneHome/$i/root/export/home/oracle/dba/scripts; done

for i in `zoneadm list|grep -v global`; do  echo "################################################
$i"; ls -ld /zoneHome/$i/root/export/home/oracle/dba; done

for i in `zoneadm list|grep -v global`; do  echo "################################################
$i"; chown -R 1001:1001 /zoneHome/$i/root/export/home/oracle/dba; done
}}}

!! # Copy script files from the Global Zone to the non-Global Zones
{{{
mkdir -p dba/scripts
}}}

{{{
for i in `zoneadm list|grep -v global`; do  echo "################################################
$i"; cp /root/dba/scripts/get_* /zoneHome/$i/root/export/home/oracle/dba/scripts/; done

for i in `zoneadm list|grep -v global`; do  echo "################################################
$i"; chown -R 1001:1001 /zoneHome/$i/root/export/home/oracle/dba; done

for i in `zoneadm list|grep -v global`; do  echo "################################################
$i"; chmod -R 755 /zoneHome/$i/root/export/home/oracle/dba; done

for i in `zoneadm list|grep -v global`; do  echo "################################################
$i"; ls -l /zoneHome/$i/root/export/home/oracle/dba/scripts; done
}}}

!! # Execute the shell script on every zone and output to file.txt
{{{
for i in `zoneadm list|grep -v global`; do  echo "################################################
$i"; zlogin -l oracle $i /export/home/oracle/dba/scripts/get_inst; done > file.txt ; cat file.txt

for i in `zoneadm list|grep -v global`; do  echo "################################################
$i"; zlogin -l oracle $i /export/home/oracle/dba/scripts/get_asm_size; done > file.txt ; cat file.txt
}}}


!! # Example scripts (create under /root/dba/scripts/ of Global Zone)

!!! get_inst
{{{
#!/bin/bash
# get_inst script
  
db=`ps -ef | grep pmon | grep -v grep | cut -f3 -d_`
for i in $db ; do
       export ORATAB=/var/opt/oracle/oratab
       export ORACLE_SID=$i
       export ORAINST=`ps -ef | grep pmon | grep -v grep | cut -f3 -d_ | grep -i $ORACLE_SID | sed 's/.$//' `
       export ORACLE_HOME=`egrep -i ":Y|:N" $ORATAB | grep -v ^# | grep $ORAINST | cut -d":" -f2 | grep -v "\#" | grep -v "\*"`

$ORACLE_HOME/bin/sqlplus -s /nolog <<EOF
connect / as sysdba


  set echo off
        set heading off
        select instance_name from v\$instance;

EOF
done
}}}

!!! get_asm_size
{{{
#!/bin/bash
# get_asm_size script
  
db=`ps -ef | grep pmon | grep -v grep | grep -i asm | cut -f3 -d_`
for i in $db ; do
       export ORATAB=/var/opt/oracle/oratab
       export ORACLE_SID=$i
       export ORAINST=`ps -ef | grep pmon | grep -v grep | cut -f3 -d_ | grep -i $ORACLE_SID | sed 's/.$//' `
       export ORACLE_HOME=`egrep -i ":Y|:N" $ORATAB | grep -v ^# | grep $ORAINST | cut -d":" -f2 | grep -v "\#" | grep -v "\*"`

$ORACLE_HOME/bin/sqlplus -s /nolog <<EOF
connect / as sysdba


set colsep ','
set lines 600
col state format a9
col dgname format a15
col sector format 999990
col block format 999990
col label format a25
col path format a40
col redundancy format a25
col pct_used format 990
col pct_free format 990
col voting format a6   
BREAK ON REPORT
COMPUTE SUM OF raw_gb ON REPORT 
COMPUTE SUM OF usable_total_gb ON REPORT 
COMPUTE SUM OF usable_used_gb ON REPORT 
COMPUTE SUM OF usable_free_gb ON REPORT 
COMPUTE SUM OF required_mirror_free_gb ON REPORT 
COMPUTE SUM OF usable_file_gb ON REPORT 
COL name NEW_V _hostname NOPRINT
select lower(host_name) name from v\$instance;
select 
        trim('&_hostname') hostname,
        name as dgname,
        state,
        type,
        sector_size sector,
        block_size block,
        allocation_unit_size au,
        round(total_mb/1024,2) raw_gb,
        round((DECODE(TYPE, 'HIGH', 0.3333 * total_mb, 'NORMAL', .5 * total_mb, total_mb))/1024,2) usable_total_gb,
        round((DECODE(TYPE, 'HIGH', 0.3333 * (total_mb - free_mb), 'NORMAL', .5 * (total_mb - free_mb), (total_mb - free_mb)))/1024,2) usable_used_gb,
        round((DECODE(TYPE, 'HIGH', 0.3333 * free_mb, 'NORMAL', .5 * free_mb, free_mb))/1024,2) usable_free_gb,
        round((DECODE(TYPE, 'HIGH', 0.3333 * required_mirror_free_mb, 'NORMAL', .5 * required_mirror_free_mb, required_mirror_free_mb))/1024,2) required_mirror_free_gb,
        round(usable_file_mb/1024,2) usable_file_gb,
        round((total_mb - free_mb)/total_mb,2)*100 as "PCT_USED", 
        round(free_mb/total_mb,2)*100 as "PCT_FREE",
        offline_disks,
        voting_files voting
from v\$asm_diskgroup
where total_mb != 0
order by 1;

EOF
done

}}}


!!! get_zone
{{{
dcli -l root -c er2s1app01,er2s1app02,er2s2app01,er2s2app02 zoneadm list -civ

}}}




!! # Example output (get_asm_size)

{{{
root@ssc1s1db01:~/dba/scripts# cat file.txt
################################################
ssc1s1vm01
Oracle Corporation      SunOS 5.11      11.3    August 2016




old   2:         trim('&_hostname') hostname,
new   2:         trim('ssc1s1vm01') hostname,

HOSTNAME ,DGNAME         ,STATE    ,TYPE  , SECTOR,  BLOCK,        AU,    RAW_GB,USABLE_TOTAL_GB,USABLE_USED_GB,USABLE_FREE_GB,REQUIRED_MIRROR_FREE_GB,USABLE_FILE_GB,PCT_USED,PCT_FREE,OFFLINE_DISKS,VOTING
---------,---------------,---------,------,-------,-------,----------,----------,---------------,--------------,--------------,-----------------------,--------------,--------,--------,-------------,------
ssc1s1vm01,DBFSBWDR       ,MOUNTED  ,HIGH  ,    512,   4096,   4194304,      1528,         509.28,          5.67,        503.61,                   5.33,        498.33,       1,      99,            0,Y
ssc1s1vm01,RECOBWDR       ,MOUNTED  ,NORMAL,    512,   4096,   4194304,      8213,         4106.5,        145.06,       3961.44,                   21.5,       3939.94,       4,      96,            0,N
ssc1s1vm01,DATABWDR       ,MOUNTED  ,NORMAL,    512,   4096,   4194304,     32661,        16330.5,        164.71,      16165.79,                   85.5,      16080.29,       1,      99,            0,N
         ,               ,         ,      ,       ,       ,          ,----------,---------------,--------------,--------------,-----------------------,--------------,        ,        ,             ,
sum      ,               ,         ,      ,       ,       ,          ,     42402,       20946.28,        315.44,      20630.84,                 112.33,      20518.56,        ,        ,             ,

################################################
ssc1s1vm02
Oracle Corporation      SunOS 5.11      11.3    August 2016




old   2:         trim('&_hostname') hostname,
new   2:         trim('ssc1s1vm02') hostname,

HOSTNAME ,DGNAME         ,STATE    ,TYPE  , SECTOR,  BLOCK,        AU,    RAW_GB,USABLE_TOTAL_GB,USABLE_USED_GB,USABLE_FREE_GB,REQUIRED_MIRROR_FREE_GB,USABLE_FILE_GB,PCT_USED,PCT_FREE,OFFLINE_DISKS,VOTING
---------,---------------,---------,------,-------,-------,----------,----------,---------------,--------------,--------------,-----------------------,--------------,--------,--------,-------------,------
ssc1s1vm02,RECODEV        ,MOUNTED  ,NORMAL,    512,   4096,   4194304,      6303,         3151.5,        200.83,       2950.67,                   16.5,       2934.17,       6,      94,            0,N
ssc1s1vm02,DBFSDEV        ,MOUNTED  ,HIGH  ,    512,   4096,   4194304,      1528,         509.28,          6.69,        502.59,                   5.33,         497.3,       1,      99,            0,Y
ssc1s1vm02,DATADEV        ,MOUNTED  ,NORMAL,    512,   4096,   4194304,     14325,         7162.5,        367.28,       6795.22,                   37.5,       6757.72,       5,      95,            0,N
         ,               ,         ,      ,       ,       ,          ,----------,---------------,--------------,--------------,-----------------------,--------------,        ,        ,             ,
sum      ,               ,         ,      ,       ,       ,          ,     22156,       10823.28,         574.8,      10248.48,                  59.33,      10189.19,        ,        ,             ,

################################################
ssc1s1vm03
Oracle Corporation      SunOS 5.11      11.3    August 2016




old   2:         trim('&_hostname') hostname,
new   2:         trim('ssc1s1vm03') hostname,

HOSTNAME ,DGNAME         ,STATE    ,TYPE  , SECTOR,  BLOCK,        AU,    RAW_GB,USABLE_TOTAL_GB,USABLE_USED_GB,USABLE_FREE_GB,REQUIRED_MIRROR_FREE_GB,USABLE_FILE_GB,PCT_USED,PCT_FREE,OFFLINE_DISKS,VOTING
---------,---------------,---------,------,-------,-------,----------,----------,---------------,--------------,--------------,-----------------------,--------------,--------,--------,-------------,------
ssc1s1vm03,DATASBX        ,MOUNTED  ,NORMAL,    512,   4096,   4194304,     20437,        10218.5,        166.21,      10052.29,                   53.5,       9998.79,       2,      98,            0,N
ssc1s1vm03,DBFSSBX        ,MOUNTED  ,HIGH  ,    512,   4096,   4194304,      1528,         509.28,          5.67,        503.61,                   5.33,        498.33,       1,      99,            0,Y
ssc1s1vm03,RECOSBX        ,MOUNTED  ,NORMAL,    512,   4096,   4194304,      8213,         4106.5,        145.11,       3961.39,                   21.5,       3939.89,       4,      96,            0,N
         ,               ,         ,      ,       ,       ,          ,----------,---------------,--------------,--------------,-----------------------,--------------,        ,        ,             ,
sum      ,               ,         ,      ,       ,       ,          ,     30178,       14834.28,        316.99,      14517.29,                  80.33,      14437.01,        ,        ,             ,

################################################
ssc1s1vm04
Oracle Corporation      SunOS 5.11      11.3    August 2016




old   2:         trim('&_hostname') hostname,
new   2:         trim('ssc1s1vm04') hostname,

HOSTNAME ,DGNAME         ,STATE    ,TYPE  , SECTOR,  BLOCK,        AU,    RAW_GB,USABLE_TOTAL_GB,USABLE_USED_GB,USABLE_FREE_GB,REQUIRED_MIRROR_FREE_GB,USABLE_FILE_GB,PCT_USED,PCT_FREE,OFFLINE_DISKS,VOTING
---------,---------------,---------,------,-------,-------,----------,----------,---------------,--------------,--------------,-----------------------,--------------,--------,--------,-------------,------
ssc1s1vm04,DATAQA         ,MOUNTED  ,NORMAL,    512,   4096,   4194304,    108106,          54053,        168.17,      53884.83,                    283,      53601.83,       0,     100,            0,N
ssc1s1vm04,DBFSQA         ,MOUNTED  ,HIGH  ,    512,   4096,   4194304,      1528,         509.28,          5.56,        503.72,                   5.33,        498.44,       1,      99,            0,Y
ssc1s1vm04,RECOQA         ,MOUNTED  ,NORMAL,    512,   4096,   4194304,     34762,          17381,        153.69,      17227.31,                     91,      17136.31,       1,      99,            0,N
         ,               ,         ,      ,       ,       ,          ,----------,---------------,--------------,--------------,-----------------------,--------------,        ,        ,             ,
sum      ,               ,         ,      ,       ,       ,          ,    144396,       71943.28,        327.42,      71615.86,                 379.33,      71236.58,        ,        ,             ,

################################################
ssc1s1vm05
Oracle Corporation      SunOS 5.11      11.3    August 2016




old   2:         trim('&_hostname') hostname,
new   2:         trim('ssc1s1vm05') hostname,

HOSTNAME ,DGNAME         ,STATE    ,TYPE  , SECTOR,  BLOCK,        AU,    RAW_GB,USABLE_TOTAL_GB,USABLE_USED_GB,USABLE_FREE_GB,REQUIRED_MIRROR_FREE_GB,USABLE_FILE_GB,PCT_USED,PCT_FREE,OFFLINE_DISKS,VOTING
---------,---------------,---------,------,-------,-------,----------,----------,---------------,--------------,--------------,-----------------------,--------------,--------,--------,-------------,------
ssc1s1vm05,DATAECCDR      ,MOUNTED  ,NORMAL,    512,   4096,   4194304,     61311,        30655.5,         99.79,      30555.71,                  160.5,      30395.21,       0,     100,            0,N
ssc1s1vm05,DBFSECCDR      ,MOUNTED  ,HIGH  ,    512,   4096,   4194304,      1528,         509.28,          5.57,        503.71,                   5.33,        498.43,       1,      99,            0,Y
ssc1s1vm05,RECOECCDR      ,MOUNTED  ,NORMAL,    512,   4096,   4194304,     20437,        10218.5,          83.4,       10135.1,                   53.5,       10081.6,       1,      99,            0,N
         ,               ,         ,      ,       ,       ,          ,----------,---------------,--------------,--------------,-----------------------,--------------,        ,        ,             ,
sum      ,               ,         ,      ,       ,       ,          ,     83276,       41383.28,        188.76,      41194.52,                 219.33,      40975.24,        ,        ,             ,

################################################
ssc1s1vm06
Oracle Corporation      SunOS 5.11      11.3    August 2016




old   2:         trim('&_hostname') hostname,
new   2:         trim('ssc1s1vm06') hostname,

HOSTNAME ,DGNAME         ,STATE    ,TYPE  , SECTOR,  BLOCK,        AU,    RAW_GB,USABLE_TOTAL_GB,USABLE_USED_GB,USABLE_FREE_GB,REQUIRED_MIRROR_FREE_GB,USABLE_FILE_GB,PCT_USED,PCT_FREE,OFFLINE_DISKS,VOTING
---------,---------------,---------,------,-------,-------,----------,----------,---------------,--------------,--------------,-----------------------,--------------,--------,--------,-------------,------
ssc1s1vm06,DATAPODR       ,MOUNTED  ,NORMAL,    512,   4096,   4194304,     10314,           5157,        100.34,       5056.66,                     27,       5029.66,       2,      98,            0,N
ssc1s1vm06,RECOPODR       ,MOUNTED  ,NORMAL,    512,   4096,   4194304,      4202,           2101,        100.19,       2000.81,                     11,       1989.81,       5,      95,            0,N
ssc1s1vm06,DBFSPODR       ,MOUNTED  ,HIGH  ,    512,   4096,   4194304,      1528,         509.28,          5.56,        503.72,                   5.33,        498.44,       1,      99,            0,Y
         ,               ,         ,      ,       ,       ,          ,----------,---------------,--------------,--------------,-----------------------,--------------,        ,        ,             ,
sum      ,               ,         ,      ,       ,       ,          ,     16044,        7767.28,        206.09,       7561.19,                  43.33,       7517.91,        ,        ,             ,

################################################
ssc1s1vm07
Oracle Corporation      SunOS 5.11      11.3    August 2016




old   2:         trim('&_hostname') hostname,
new   2:         trim('ssc1s1vm07') hostname,

HOSTNAME ,DGNAME         ,STATE    ,TYPE  , SECTOR,  BLOCK,        AU,    RAW_GB,USABLE_TOTAL_GB,USABLE_USED_GB,USABLE_FREE_GB,REQUIRED_MIRROR_FREE_GB,USABLE_FILE_GB,PCT_USED,PCT_FREE,OFFLINE_DISKS,VOTING
---------,---------------,---------,------,-------,-------,----------,----------,---------------,--------------,--------------,-----------------------,--------------,--------,--------,-------------,------
ssc1s1vm07,DATAECCSTG     ,MOUNTED  ,NORMAL,    512,   4096,   4194304,     61311,        30655.5,        160.73,      30494.77,                  160.5,      30334.27,       1,      99,            0,N
ssc1s1vm07,RECOECCSTG     ,MOUNTED  ,NORMAL,    512,   4096,   4194304,     20437,        10218.5,         60.07,      10158.43,                   53.5,      10104.93,       1,      99,            0,N
ssc1s1vm07,DBFSECCSTG     ,MOUNTED  ,HIGH  ,    512,   4096,   4194304,      1528,         509.28,          5.56,        503.72,                   5.33,        498.44,       1,      99,            0,Y
         ,               ,         ,      ,       ,       ,          ,----------,---------------,--------------,--------------,-----------------------,--------------,        ,        ,             ,
sum      ,               ,         ,      ,       ,       ,          ,     83276,       41383.28,        226.36,      41156.92,                 219.33,      40937.64,        ,        ,             ,

################################################
ssc1s1vm08
Oracle Corporation      SunOS 5.11      11.3    August 2016




old   2:         trim('&_hostname') hostname,
new   2:         trim('ssc1s1vm08') hostname,

HOSTNAME ,DGNAME         ,STATE    ,TYPE  , SECTOR,  BLOCK,        AU,    RAW_GB,USABLE_TOTAL_GB,USABLE_USED_GB,USABLE_FREE_GB,REQUIRED_MIRROR_FREE_GB,USABLE_FILE_GB,PCT_USED,PCT_FREE,OFFLINE_DISKS,VOTING
---------,---------------,---------,------,-------,-------,----------,----------,---------------,--------------,--------------,-----------------------,--------------,--------,--------,-------------,------
ssc1s1vm08,DATAPOSTG      ,MOUNTED  ,NORMAL,    512,   4096,   4194304,     10314,           5157,        100.51,       5056.49,                     27,       5029.49,       2,      98,            0,N
ssc1s1vm08,DBFSPOSTG      ,MOUNTED  ,HIGH  ,    512,   4096,   4194304,      1528,         509.28,          5.57,        503.71,                   5.33,        498.43,       1,      99,            0,Y
ssc1s1vm08,RECOPOSTG      ,MOUNTED  ,NORMAL,    512,   4096,   4194304,      4202,           2101,         99.38,       2001.62,                     11,       1990.62,       5,      95,            0,N
         ,               ,         ,      ,       ,       ,          ,----------,---------------,--------------,--------------,-----------------------,--------------,        ,        ,             ,
sum      ,               ,         ,      ,       ,       ,          ,     16044,        7767.28,        205.46,       7561.82,                  43.33,       7518.54,        ,        ,             ,

root@ssc1s1db01:~/dba/scripts#

}}}



! Tableau calculated fields 

{{{
DGTYPE
IF contains(lower(trim([Dgname])),'dbfs')=true THEN 'DBFS' 
ELSEIF contains(lower(trim([Dgname])),'reco')=true THEN 'RECO' 
ELSEIF contains(lower(trim([Dgname])),'data')=true THEN 'DATA' 
ELSE 'OTHER' END



DATA CENTER
IF contains(lower(trim([Ldom])),'er1')=true THEN 'ER1' 
ELSEIF contains(lower(trim([Ldom])),'er2')=true THEN 'ER2' 
ELSE 'OTHER' END


CHASSIS
IF contains(lower(trim([Ldom])),'er1p1')=true THEN 'er1p1' 
ELSEIF contains(lower(trim([Ldom])),'er1p2')=true THEN 'er1p2' 
ELSEIF contains(lower(trim([Ldom])),'er2s1')=true THEN 'er2s1' 
ELSEIF contains(lower(trim([Ldom])),'er2s2')=true THEN 'er2s2' 
ELSE 'OTHER' END
}}}


! Visualization 


!! Here’s the high level storage usage/allocation by DATA,RECO,and DBFS disk groups


[img(80%,80%)[ http://i.imgur.com/IrMJZGK.png ]]

!! Here’s the breakdown of that by Zone 


[img(80%,80%)[ http://i.imgur.com/sTYOyGL.png ]]

!! Another view of the breakdown by zone


[img(80%,80%)[ http://i.imgur.com/S7lmHde.png ]] 



! the final workbook 
https://public.tableau.com/profile/karlarao#!/vizhome/SPARCSuperclusterLDOM-ZoneASMStorageMapping/LDOM-ZoneASMStorageMapping






<<showtoc>>

HOWTO: Resource Manager and IORM by Cluster Service
http://goo.gl/I1mjd

This HOWTO shows the following:
* how to make use of Cluster Services to map users on Resource Manager
* limit the PX slaves per user
* cancel SQLs running longer than 15 seconds (just for testing purposes)
* limit the backup operations
* activate the IORM intradatabase plan

Also, at the end of this guide is an INTERDATABASE IORM PLAN.

The FYIs section at the bottom contains some of my observations during the test cases.


! INTRADATABASE IORM PLAN 
{{{
-- Create the cluster service
#############################################

--Create a service for Reporting sessions
srvctl add service -d dbm -s DBM_REPORTING -r dbm1,dbm2
-- srvctl add service -d dbm -s DBM_REPORTING -r dbm1
srvctl start service -d dbm -s DBM_REPORTING
srvctl stop service -d dbm -s DBM_REPORTING
srvctl remove service -d dbm -s DBM_REPORTING 

--Create a service for ETL sessions
srvctl add service -d dbm -s DBM_ETL -r dbm1,dbm2
-- srvctl add service -d dbm -s DBM_ETL -r dbm1
srvctl start service -d dbm -s DBM_ETL
srvctl stop service -d dbm -s DBM_ETL
srvctl remove service -d dbm -s DBM_ETL 

-- check service status
srvctl status service -d dbm


-- Create Resource Groups
#############################################

BEGIN
  dbms_resource_manager.clear_pending_area();
  dbms_resource_manager.create_pending_area();

  dbms_resource_manager.create_consumer_group(
    consumer_group => 'REPORTING',
    comment        => 'Consumer group for REPORTS');
   dbms_resource_manager.create_consumer_group(
    consumer_group => 'ETL',
    comment        => 'Consumer group for ETL');
  dbms_resource_manager.create_consumer_group(
    consumer_group => 'MAINT',
    comment        => 'Consumer group for maintenance jobs');

  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
END;
/


-- Create Consumer Group Mapping Rules
#############################################

BEGIN
  dbms_resource_manager.clear_pending_area();
  dbms_resource_manager.create_pending_area();

  dbms_resource_manager.set_consumer_group_mapping(
    attribute      => dbms_resource_manager.service_name,
    value          => 'DBM_REPORTING',
    consumer_group => 'REPORTING');

  dbms_resource_manager.set_consumer_group_mapping(
    attribute      => dbms_resource_manager.service_name,
    value          => 'DBM_ETL',
    consumer_group => 'ETL');

  dbms_resource_manager.set_consumer_group_mapping(
    attribute      => dbms_resource_manager.oracle_function,
    value          => 'BACKUP',
    consumer_group => 'MAINT');

  dbms_resource_manager.set_consumer_group_mapping(
    attribute      => dbms_resource_manager.oracle_function,
    value          => 'COPY',
    consumer_group => 'MAINT');

  dbms_resource_manager.validate_pending_area(); 
  dbms_resource_manager.submit_pending_area();
END;
/


-- Resource Group Mapping Priorities
#############################################


BEGIN
dbms_resource_manager.clear_pending_area();
dbms_resource_manager.create_pending_area();
dbms_resource_manager.set_consumer_group_mapping_pri(
	explicit => 1,
	service_name => 2,
	oracle_user => 3,
	client_program => 4,
	service_module_action => 5,
	service_module => 6,
	module_name_action => 7,
	module_name => 8,
	client_os_user => 9,
	client_machine => 10 );
dbms_resource_manager.validate_pending_area(); 
dbms_resource_manager.submit_pending_area();
END;
/


-- Create the Resource Plan and Plan Directives
-- * DAYTIME for reports
-- * NIGHTTIME for ETL jobs
#############################################

-- create DAYTIME plan
BEGIN
 dbms_resource_manager.clear_pending_area();
 dbms_resource_manager.create_pending_area();

 dbms_resource_manager.create_plan(
   plan    => 'DAYTIME',
   comment => 'Resource plan for normal business hours');
 dbms_resource_manager.create_plan_directive(
   plan             => 'DAYTIME',
   group_or_subplan => 'REPORTING',
   comment          => 'High priority for users/applications',
   mgmt_p1          => 70,
   parallel_degree_limit_p1 => 4);
 dbms_resource_manager.create_plan_directive(
   plan             => 'DAYTIME',
   group_or_subplan => 'ETL',
   comment          => 'Medium priority for ETL processing',
   mgmt_p2          => 50);
dbms_resource_manager.create_plan_directive(
   plan             => 'DAYTIME',
   group_or_subplan => 'MAINT',
   comment          => 'Low priority for daytime maintenance',
   mgmt_p3          => 50);
 dbms_resource_manager.create_plan_directive(
   plan             => 'DAYTIME',
   group_or_subplan => 'OTHER_GROUPS',
   comment          => 'All other groups not explicitly named in this plan',
   mgmt_p3          => 50);

 dbms_resource_manager.validate_pending_area();
 dbms_resource_manager.submit_pending_area();
END;
/


-- create NIGHTTIME plan
BEGIN
 dbms_resource_manager.clear_pending_area();
 dbms_resource_manager.create_pending_area();

 dbms_resource_manager.create_plan(
   plan    => 'NIGHTTIME',
   comment => 'Resource plan for ETL hours');
 dbms_resource_manager.create_plan_directive(
   plan             => 'NIGHTTIME',
   group_or_subplan => 'ETL',
   comment          => 'High priority for ETL processing',
   mgmt_p1          => 70);
 dbms_resource_manager.create_plan_directive(
   plan             => 'NIGHTTIME',
   group_or_subplan => 'REPORTING',
   comment          => 'Medium priority for users/applications',
   mgmt_p2          => 50,
   parallel_degree_limit_p1 => 4,
   switch_group     => 'CANCEL_SQL',
   switch_time      => 15,
   switch_estimate  => FALSE);
dbms_resource_manager.create_plan_directive(
   plan             => 'NIGHTTIME',
   group_or_subplan => 'MAINT',
   comment          => 'Low priority for nighttime maintenance',
   mgmt_p3          => 50);
 dbms_resource_manager.create_plan_directive(
   plan             => 'NIGHTTIME',
   group_or_subplan => 'OTHER_GROUPS',
   comment          => 'All other groups not explicitly named in this plan',
   mgmt_p3          => 50);

 dbms_resource_manager.validate_pending_area();
 dbms_resource_manager.submit_pending_area();
END;
/



-- Grant the consumer group to the users
-- if you do not do this, the user's RESOURCE_CONSUMER_GROUP will show as OTHER_GROUPS
#############################################
BEGIN
DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();    
dbms_resource_manager_privs.grant_switch_consumer_group ('oracle','REPORTING',FALSE); 
dbms_resource_manager_privs.grant_switch_consumer_group ('oracle','ETL',FALSE); 
dbms_resource_manager_privs.grant_switch_consumer_group ('oracle','MAINT',FALSE); 
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/



-- Activate the Resource Plan
#############################################

ALTER SYSTEM SET resource_manager_plan='NIGHTTIME' SCOPE=BOTH SID='*';
ALTER SYSTEM SET resource_manager_plan='DAYTIME' SCOPE=BOTH SID='*';
-- to deactivate 
ALTER SYSTEM SET resource_manager_plan='' SCOPE=BOTH SID='*';
-- You can also enable the resource plan with the FORCE option to prevent a Scheduler window from activating a different plan during job execution.
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'FORCE:DAYTIME';



-- Alternatively, let Scheduler windows switch the active plan automatically:


-- The window starts at 11:00 PM (hour 23) and runs through 7:00 AM (480 minutes).
BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE(
	name      => '"SYS"."WEEKNIGHT_WINDOW"',
	attribute => 'RESOURCE_PLAN',
	value     => 'NIGHTTIME');
DBMS_SCHEDULER.SET_ATTRIBUTE(
	name      => '"SYS"."WEEKNIGHT_WINDOW"',
	attribute => 'REPEAT_INTERVAL',
	value     => 'FREQ=WEEKLY;BYDAY=MON,TUE,WED,THU,FRI,SAT,SUN;BYHOUR=23;BYMINUTE=00;BYSECOND=0');
DBMS_SCHEDULER.SET_ATTRIBUTE(
	name      => '"SYS"."WEEKNIGHT_WINDOW"',
	attribute => 'DURATION',
	value     => numtodsinterval(480, 'minute'));
DBMS_SCHEDULER.ENABLE(name => '"SYS"."WEEKNIGHT_WINDOW"');
END;
/

-- The window starts at 7:00 AM (hour 7) and runs until 11:00 PM (960 minutes)
BEGIN
DBMS_SCHEDULER.CREATE_WINDOW(
	window_name => '"WEEKDAY_WINDOW"',
	resource_plan => 'DAYTIME',
	start_date => systimestamp at time zone '-6:00',
	duration => numtodsinterval(960, 'minute'),
	repeat_interval => 'FREQ=WEEKLY;BYDAY=MON,TUE,WED,THU,FRI,SAT,SUN;BYHOUR=7;BYMINUTE=0;BYSECOND=0',
	end_date => null,
	window_priority => 'HIGH',
	comments => 'Weekday window. Sets the active resource plan to DAYTIME');
DBMS_SCHEDULER.ENABLE(name=>'"SYS"."WEEKDAY_WINDOW"');
END;
/



-- Activate IORM on Exadata
#############################################

-- In each storage cell...
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'

dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective = auto'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan dbPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan catPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan active'

dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'




-- Revert/Delete
#############################################
BEGIN
DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();    
DBMS_RESOURCE_MANAGER.DELETE_PLAN (PLAN => 'NIGHTTIME');
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

BEGIN
DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA(); 
DBMS_RESOURCE_MANAGER.DELETE_CONSUMER_GROUP(CONSUMER_GROUP => 'REPORTING');
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

BEGIN
DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA(); 
DBMS_RESOURCE_MANAGER.DELETE_CONSUMER_GROUP(CONSUMER_GROUP => 'ETL');
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/




-- Check Resource Manager configuration
#############################################

set wrap off
set head on
set linesize 300
set pagesize 132
col comments format a64

-- show current resource plan
select * from  V$RSRC_PLAN;

-- show all resource plans
select PLAN,NUM_PLAN_DIRECTIVES,CPU_METHOD,substr(COMMENTS,1,64) "COMMENTS",STATUS,MANDATORY 
from dba_rsrc_plans 
order by plan;

-- show consumer groups
select CONSUMER_GROUP,CPU_METHOD,STATUS,MANDATORY,substr(COMMENTS,1,64) "COMMENTS" 
from DBA_RSRC_CONSUMER_GROUPS 
where CONSUMER_GROUP in ('REPORTING','ETL','MAINT')
order by consumer_group;

-- show  category
SELECT consumer_group, category
FROM DBA_RSRC_CONSUMER_GROUPS
ORDER BY category;

-- show mappings
col value format a30
select ATTRIBUTE, VALUE, CONSUMER_GROUP, STATUS 
from DBA_RSRC_GROUP_MAPPINGS
where CONSUMER_GROUP in ('REPORTING','ETL','MAINT')
order by 3;

-- show mapping priority 
select * from DBA_RSRC_MAPPING_PRIORITY;

-- show directives 
SELECT plan,group_or_subplan,cpu_p1,cpu_p2,cpu_p3, PARALLEL_DEGREE_LIMIT_P1, status 
FROM dba_rsrc_plan_directives 
where plan in ('DAYTIME','NIGHTTIME')
order by 1,3 desc,4 desc,5 desc;

-- show grants
select * from DBA_RSRC_CONSUMER_GROUP_PRIVS order by grantee;
select * from DBA_RSRC_MANAGER_SYSTEM_PRIVS order by grantee;

-- show scheduler windows
select window_name, resource_plan, START_DATE, DURATION, WINDOW_PRIORITY, enabled, active from dba_scheduler_windows;




-- Useful monitoring SQLs
#############################################

## Check the service name used by each session
select inst_id, username, SERVICE_NAME, RESOURCE_CONSUMER_GROUP, count(*) 
from gv$session 
where SERVICE_NAME <> 'SYS$BACKGROUND'
group by inst_id, username, SERVICE_NAME, RESOURCE_CONSUMER_GROUP order by 2,3,1;

## List the Active Resource Consumer Groups since instance startup
select INST_ID, NAME, ACTIVE_SESSIONS, EXECUTION_WAITERS, REQUESTS, CPU_WAIT_TIME, CPU_WAITS, CONSUMED_CPU_TIME, YIELDS, QUEUE_LENGTH, ACTIVE_SESSION_LIMIT_HIT 
from gV$RSRC_CONSUMER_GROUP 
-- where name in ('SYS_GROUP','BATCH','OLTP','OTHER_GROUPS') 
order by 2,1;

## Session level details
SET pagesize 50
SET linesize 155
SET wrap off
COLUMN name format a11 head "Consumer|Group"
COLUMN sid format 9999
COLUMN username format a16
COLUMN CONSUMED_CPU_TIME head "Consumed|CPU time|(s)" format 999999.9
COLUMN IO_SERVICE_TIME head "I/O time|(s)" format 999999.9
COLUMN CPU_WAIT_TIME head "CPU Wait|Time (s)" FOR 99999
COLUMN CPU_WAITS head "CPU|Waits" format 99999
COLUMN YIELDS head "Yields" format 99999
COLUMN state format a10
COLUMN osuser format a8
COLUMN machine format a16
COLUMN PROGRAM format a12
 
SELECT
          rcg.name
        , rsi.sid
        , s.username
        , rsi.state
        , rsi.YIELDS
        , rsi.CPU_WAIT_TIME / 1000 AS CPU_WAIT_TIME
        , rsi.CPU_WAITS
        , rsi.CONSUMED_CPU_TIME / 1000 AS CONSUMED_CPU_TIME
        , rsi.IO_SERVICE_TIME /1000 AS IO_SERVICE_TIME
        , s.osuser
        , s.program
        , s.machine
        , sw.event
FROM V$RSRC_SESSION_INFO rsi INNER JOIN v$rsrc_consumer_group rcg
ON rsi.CURRENT_CONSUMER_GROUP_ID = rcg.id
INNER JOIN v$session s ON rsi.sid=s.sid
INNER JOIN v$session_wait sw ON s.sid = sw.sid
WHERE rcg.id !=0 -- _ORACLE_BACKGROUND_GROUP_
and (sw.event != 'SQL*Net message from client' or rsi.state='RUNNING')
ORDER BY rcg.name, s.username, rsi.cpu_wait_time + rsi.IO_SERVICE_TIME + rsi.CONSUMED_CPU_TIME ASC, rsi.state, sw.event, s.machine, s.osuser
/

## By consumer group - time series
set linesize 160
set pagesize 60
set colsep '  '
 
column total                    head "Total Available|CPU Seconds"      format 99990
column consumed                 head "Used|Oracle Seconds"              format 99990.9
column consumer_group_name      head "Consumer|Group Name"              format a25      wrap off
column "throttled"              head "Oracle Throttled|Time (s)"        format 99990.9
column cpu_utilization          head "% of Host CPU"                    format 99990.9
break on time skip 2 page
 
select to_char(begin_time, 'YYYY-MM-DD HH24:MI:SS') time,
consumer_group_name,
60 * (select value from v$osstat where stat_name = 'NUM_CPUS') as total,
cpu_consumed_time / 1000 as consumed,
cpu_consumed_time / (select value from v$parameter where name = 'cpu_count') / 600 as cpu_utilization,
cpu_wait_time / 1000 as throttled,
IO_MEGABYTES
from v$rsrcmgrmetric_history
order by begin_time,consumer_group_name
/
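The cpu_utilization expression above is just consumed CPU seconds as a percentage of the CPU seconds available in each one-minute sample: cpu_consumed_time is in milliseconds, and every 60-second interval offers 60 * cpu_count CPU seconds. A quick Python check of the algebra, with hypothetical numbers:

```python
# v$rsrcmgrmetric_history samples cover 60 seconds; cpu_consumed_time
# and cpu_wait_time are reported in milliseconds.
cpu_count = 16
cpu_consumed_ms = 480_000   # hypothetical: 480 CPU seconds consumed

consumed_s = cpu_consumed_ms / 1000
available_s = 60 * cpu_count                    # 960 CPU seconds per minute
pct_of_host = consumed_s / available_s * 100    # 50.0

# The query's shortcut form (consumed / cpu_count / 600) is
# algebraically identical to the long form above.
assert pct_of_host == cpu_consumed_ms / cpu_count / 600
print(pct_of_host)  # 50.0
```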

## High level
set linesize 160
set pagesize 50
set colsep '  '  
column "Total Available CPU Seconds"    head "Total Available|CPU Seconds"      format 99990
column "Used Oracle Seconds"            head "Used Oracle|Seconds"              format 99990.9
column "Used Host CPU %"                head "Used Host|CPU %"                  format 99990.9
column "Idle Host CPU %"                head "Idle Host|CPU %"                  format 99990.9
column "Total Used Seconds"             head "Total Used|Seconds"               format 99990.9
column "Idle Seconds"                   head "Idle|Seconds"                     format 99990.9
column "Non-Oracle Seconds Used"        head "Non-Oracle|Seconds Used"          format 99990.9
column "Oracle CPU %"                   head "Oracle|CPU %"                     format 99990.9
column "Non-Oracle CPU %"               head "Non-Oracle|CPU %"                 format 99990.9
column "throttled"                      head "Oracle Throttled|Time (s)"        format 99990.9
 
select to_char(rm.BEGIN_TIME,'YYYY-MM-DD HH24:MI:SS') as BEGIN_TIME
        ,60 * (select value from v$osstat where stat_name = 'NUM_CPUS') as "Total Available CPU Seconds"
        ,sum(rm.cpu_consumed_time) / 1000 as "Used Oracle Seconds"
        ,min(s.value) as "Used Host CPU %"
        ,(60 * (select value from v$osstat where stat_name = 'NUM_CPUS')) * (min(s.value) / 100) as "Total Used Seconds"
        ,((100 - min(s.value)) / 100) * (60 * (select value from v$osstat where stat_name = 'NUM_CPUS')) as "Idle Seconds"
        ,((60 * (select value from v$osstat where stat_name = 'NUM_CPUS')) * (min(s.value) / 100)) - sum(rm.cpu_consumed_time) / 1000 as "Non-Oracle Seconds Used"
        ,100 - min(s.value) as "Idle Host CPU %"
        ,((((60 * (select value from v$osstat where stat_name = 'NUM_CPUS')) * (min(s.value) / 100)) - sum(rm.cpu_consumed_time) / 1000) / (60 * (select value from v$osstat where stat_name = 'NUM_CPUS')))*100 as "Non-Oracle CPU %"
        ,(((sum(rm.cpu_consumed_time) / 1000) / (60 * (select value from v$osstat where stat_name = 'NUM_CPUS'))) * 100) as "Oracle CPU %"
        , sum(rm.cpu_wait_time) / 1000 as throttled
from    gv$rsrcmgrmetric_history rm
        inner join
        gV$SYSMETRIC_HISTORY s
        on rm.begin_time = s.begin_time
where   s.metric_id = 2057
  and   s.group_id = 2
group by rm.begin_time,s.begin_time
order by rm.begin_time
/
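The high-level query above carves each minute's 60 * NUM_CPUS available CPU seconds into Oracle, non-Oracle, and idle buckets using the host CPU utilization metric (metric_id 2057). A Python sanity check of the bookkeeping, with hypothetical numbers:

```python
# Hypothetical one-minute sample.
num_cpus = 16
host_cpu_pct = 75.0            # from gv$sysmetric_history, metric_id 2057
oracle_consumed_ms = 600_000   # hypothetical sum(cpu_consumed_time)

total_s = 60 * num_cpus                      # total available CPU seconds
used_s = total_s * host_cpu_pct / 100        # total used seconds
idle_s = total_s - used_s                    # idle seconds
oracle_s = oracle_consumed_ms / 1000         # used Oracle seconds
non_oracle_s = used_s - oracle_s             # non-Oracle seconds used

oracle_pct = oracle_s / total_s * 100
non_oracle_pct = non_oracle_s / total_s * 100
idle_pct = 100 - host_cpu_pct

# The three buckets partition all available CPU seconds.
assert abs(oracle_s + non_oracle_s + idle_s - total_s) < 1e-9
assert abs(oracle_pct + non_oracle_pct + idle_pct - 100) < 1e-9
```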




      

-- PROS/CONS 
#############################################
* When you implement a category plan you specify percentage settings at a higher level than the database resource manager, and that percentage allocation 
	is generic across the databases. If you then want to dynamically alter things, you have to do it on both the DBRM and the IORM plans
* If you want to dynamically change the allocation, just go with the intradatabase plan so you can easily alter the "resource_manager_plan" through 
	a scheduler window or by simply switching the resource plan 



-- FYIs
#############################################
* on resource_manager_cpu_allocation parameter

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    cpu_count                            integer     16    <-- set to do instance caging
    parallel_threads_per_cpu             integer     1
    resource_manager_cpu_allocation      integer     32    <-- deprecated, DON'T ALTER THIS; if set together with cpu_count it takes precedence, see warning below

    alter system set cpu_count=32 scope=both sid='dbm1';

WARNING: resource_manager_cpu_allocation was introduced in 11.1.0.6 and deprecated right away in 11.1.0.7, 
...BUT... if it is set together with the cpu_count parameter, it takes precedence. 
Say you set cpu_count to 3 and resource_manager_cpu_allocation to 16 on a system with 32 CPUs, 
with a workload trying to burn all 32 CPUs: the server will show as only 50% utilized (16 CPUs burned) because 
resource_manager_cpu_allocation caps it at 16! 
* One more reason for deprecating resource_manager_cpu_allocation: cpu_count already has dependencies on other settings (such as the parallel parameters), so relying on cpu_count alone is one less parameter to worry about
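A minimal sketch of the precedence described above, reusing the same hypothetical numbers (the is-set test is an assumption about how the effective cap is chosen):

```python
# Sketch of the precedence described above: when the deprecated
# resource_manager_cpu_allocation is set, it wins over cpu_count.
total_host_cpus = 32
cpu_count = 3
resource_manager_cpu_allocation = 16   # deprecated, but set

effective_cap = (resource_manager_cpu_allocation
                 if resource_manager_cpu_allocation is not None
                 else cpu_count)

# A workload that would burn all 32 CPUs gets throttled to the cap,
# so the host shows only cap/total utilization.
host_utilization_pct = effective_cap / total_host_cpus * 100
print(effective_cap, host_utilization_pct)   # 16 50.0
```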


* for IORM, on a 70/30 plan-directive scheme on the DAYTIME & NIGHTTIME plans, the percentage allocation only takes effect at the saturation point; if only 
  one consumer group is active, that group should be able to get 100% of the IO 

    -- 70/30 NIGHTTIME taking effect on idle CPU, with only 1 session
        BENCHMARK ,INSTNAME       ,START            ,END              ,   ELAPSED,    MBs
        ----------,---------------,-----------------,-----------------,----------,-------
        benchmark ,dbm1           ,05/02/13 05:57:46,05/02/13 05:57:58,        12,   2877

        BENCHMARK ,INSTNAME       ,START            ,END              ,   ELAPSED,    MBs
        ----------,---------------,-----------------,-----------------,----------,-------
        benchmark ,dbm1           ,05/02/13 05:57:48,05/02/13 05:58:07,        19,   1817

    -- 70/30 NIGHTTIME taking effect on idle CPU, but with more sessions doing IOs
        -- etl
        BENCHMARK ,INSTNAME       ,START            ,END              ,   ELAPSED,    MBs
        ----------,---------------,-----------------,-----------------,----------,-------
        benchmark ,dbm1           ,05/02/13 06:02:58,05/02/13 06:03:17,        18,   1918

        BENCHMARK ,INSTNAME       ,START            ,END              ,   ELAPSED,    MBs
        ----------,---------------,-----------------,-----------------,----------,-------
        benchmark ,dbm1           ,05/02/13 06:02:58,05/02/13 06:03:17,        19,   1817

        -- reporting
        BENCHMARK ,INSTNAME       ,START            ,END              ,   ELAPSED,    MBs
        ----------,---------------,-----------------,-----------------,----------,-------
        benchmark ,dbm1           ,05/02/13 06:03:00,05/02/13 06:03:30,        30,   1151

        BENCHMARK ,INSTNAME       ,START            ,END              ,   ELAPSED,    MBs
        ----------,---------------,-----------------,-----------------,----------,-------
        benchmark ,dbm1           ,05/02/13 06:03:00,05/02/13 06:03:30,        30,   1151

    -- 70/30 NIGHTTIME taking effect on 100% CPU utilization and 32 AAS "resmgr:cpu quantum"
        BENCHMARK ,INSTNAME       ,START            ,END              ,   ELAPSED,    MBs
        ----------,---------------,-----------------,-----------------,----------,-------
        benchmark ,dbm1           ,05/02/13 05:52:07,05/02/13 05:52:22,        14,   2466

        BENCHMARK ,INSTNAME       ,START            ,END              ,   ELAPSED,    MBs
        ----------,---------------,-----------------,-----------------,----------,-------
        benchmark ,dbm1           ,05/02/13 05:52:16,05/02/13 05:55:05,       167,    207

* for PX, if you set PARALLEL_DEGREE_LIMIT_P1=4, a session will be flagged with a "Req. DOP" of 32 but will really run with an "Actual DOP" of 4

    TIME                 CONSUMER_GROUP_NAME                 CPU Seconds  Oracle Seconds  % of Host CPU          Time (s)  IO_MEGABYTES
    -------------------  ------------------------------  ---------------  --------------  -------------  ----------------  ------------
    2013-02-05 05:45:36  ETL                                        1920          1521.4           79.2            1319.5         68511
                         MAINT                                      1920             0.0            0.0               0.0             0
                         OTHER_GROUPS                               1920            10.9            0.6             156.6             0
                         REPORTING                                  1920           389.3           20.3            1630.1          1952
                         _ORACLE_BACKGROUND_GROUP_                  1920             0.0            0.0               0.0            20

    TIME                 CONSUMER_GROUP_NAME                 CPU Seconds  Oracle Seconds  % of Host CPU          Time (s)  IO_MEGABYTES
    -------------------  ------------------------------  ---------------  --------------  -------------  ----------------  ------------
    2013-02-05 05:46:37  ETL                                        1920          1589.9           82.8             343.5             0
                         MAINT                                      1920             0.0            0.0               0.0             0
                         OTHER_GROUPS                               1920            10.8            0.6             149.0             0
                         REPORTING                                  1920           322.3           16.8            2074.0         15016
                         _ORACLE_BACKGROUND_GROUP_                  1920             0.0            0.0               0.0            20

* for PX, if you set PARALLEL_DEGREE_LIMIT_P1=4 and the SQL has a GROUP BY, that part of the operation gets another set of 4 PX slaves (a second slave set) 

    Username     QC/Slave Group  SlaveSet SID    Slave INS STATE    WAIT_EVENT                     QC SID QC INS Req. DOP Actual DOP SQL_ID
    ------------ -------- ------ -------- ------ --------- -------- ------------------------------ ------ ------ -------- ---------- -------------
    ORACLE       QC                       684    1         WAIT     PX Deq: Execute Reply          684                               7bb5hpfv8jd4a
     - p028      (Slave)  1      1        3010   1         WAIT     PX Deq: Execution Msg          684    1            16          4 7bb5hpfv8jd4a
     - p004      (Slave)  1      1        2721   1         WAIT     PX Deq: Execution Msg          684    1            16          4 7bb5hpfv8jd4a
     - p012      (Slave)  1      1        391    1         WAIT     PX Deq: Execution Msg          684    1            16          4 7bb5hpfv8jd4a
     - p020      (Slave)  1      1        1559   1         WAIT     PX Deq: Execution Msg          684    1            16          4 7bb5hpfv8jd4a
     - p060      (Slave)  1      2        3013   1         WAIT     cell smart table scan          684    1            16          4 7bb5hpfv8jd4a
     - p036      (Slave)  1      2        685    1         WAIT     cell smart table scan          684    1            16          4 7bb5hpfv8jd4a
     - p044      (Slave)  1      2        1459   1         WAIT     cell smart table scan          684    1            16          4 7bb5hpfv8jd4a
     - p052      (Slave)  1      2        2237   1         WAIT     cell smart table scan          684    1            16          4 7bb5hpfv8jd4a
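The clamping visible in the listing can be sketched as follows (illustrative only; the two-slave-set count assumes a producer/consumer pair such as table scan + GROUP BY):

```python
# Sketch of the DOP clamping above: the requested DOP survives as
# "Req. DOP", but the directive caps the "Actual DOP", and a
# producer/consumer plan shape uses two slave sets of that size.
def actual_dop(requested, parallel_degree_limit_p1):
    return min(requested, parallel_degree_limit_p1)

req = 16                           # Req. DOP shown in the listing
dop = actual_dop(req, 4)           # capped to 4 by the directive
slave_sets = 2                     # scan set + GROUP BY set
total_slaves = dop * slave_sets    # 8 slaves, as in the listing above
print(req, dop, total_slaves)      # 16 4 8
```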

* for CANCEL_SQL: if you are currently on the DAYTIME plan and there is already a long-running SQL, and you switch to NIGHTTIME (which has the CANCEL_SQL directive),
  the SWITCH_TIME of 15 secs takes effect upon activation. So a SQL that has already been running for 1000 secs will be canceled at 1015 secs total if you switch to the 
  NIGHTTIME plan at the 1000-sec mark
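The timing above reduces to simple arithmetic: SWITCH_TIME starts counting when the new plan becomes active, not when the SQL started running.

```python
# Sketch of the CANCEL_SQL switch timing described above.
switch_time = 15          # seconds, from the NIGHTTIME directive
already_running = 1000    # seconds the SQL has run under DAYTIME

# The countdown starts at plan activation, so total elapsed time
# at cancellation is the prior runtime plus SWITCH_TIME.
cancel_at = already_running + switch_time
print(cancel_at)  # 1015
```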

* for the MAINT consumer group, where we have a percentage allocation for RMAN backups, it only kicks in once you execute the backup command ("backup incremental level 0 database;").
  If you run reports and ETL while the backup is running, the percentage allocations for the rest of the consumer groups take effect; below, ETL still got higher 
  IO priority than reporting and the backups under the NIGHTTIME resource plan 

       INST_ID USERNAME                       SERVICE_NAME                                                     RESOURCE_CONSUMER_GROUP            COUNT(*)
    ---------- ------------------------------ ---------------------------------------------------------------- -------------------------------- ----------
             1 SYS                            SYS$USERS                                                        MAINT                                1      <-- the RMAN session
             1 SYS                            SYS$USERS                                                        OTHER_GROUPS                         8
             2 SYS                            SYS$USERS                                                        OTHER_GROUPS                         3

    TIME                 CONSUMER_GROUP_NAME                 CPU Seconds  Oracle Seconds  % of Host CPU          Time (s)  IO_MEGABYTES
    -------------------  ------------------------------  ---------------  --------------  -------------  ----------------  ------------
    2013-02-05 07:01:37  ETL                                        1920             0.0            0.0               0.0             0
                         MAINT                                      1920             1.1            0.1               0.0          3172  <-- RMAN
                         OTHER_GROUPS                               1920             1.1            0.1               0.0            17
                         REPORTING                                  1920             0.0            0.0               0.0             0
                         _ORACLE_BACKGROUND_GROUP_                  1920             0.0            0.0               0.0            25

                                                         Total Available            Used                 Oracle Throttled
    TIME                 CONSUMER_GROUP_NAME                 CPU Seconds  Oracle Seconds  % of Host CPU          Time (s)  IO_MEGABYTES
    -------------------  ------------------------------  ---------------  --------------  -------------  ----------------  ------------
    2013-02-05 07:02:37  ETL                                        1920             6.2            0.3               5.1          4873
                         MAINT                                      1920            15.5            0.8               0.0         65644  <-- RMAN with reports and ETL
                         OTHER_GROUPS                               1920             0.0            0.0               0.0             0
                         REPORTING                                  1920             1.6            0.1               0.9          3964
                         _ORACLE_BACKGROUND_GROUP_                  1920             0.0            0.0               0.0            20


    -- etl
    BENCHMARK ,INSTNAME       ,START            ,END              ,   ELAPSED,    MBs
    ----------,---------------,-----------------,-----------------,----------,-------
    benchmark ,dbm1           ,05/02/13 07:03:32,05/02/13 07:03:57,        25,   1381

    BENCHMARK ,INSTNAME       ,START            ,END              ,   ELAPSED,    MBs
    ----------,---------------,-----------------,-----------------,----------,-------
    benchmark ,dbm1           ,05/02/13 07:03:32,05/02/13 07:03:57,        25,   1381

    -- reporting
    BENCHMARK ,INSTNAME       ,START            ,END              ,   ELAPSED,    MBs
    ----------,---------------,-----------------,-----------------,----------,-------
    benchmark ,dbm1           ,05/02/13 07:03:34,05/02/13 07:04:10,        36,    959

    BENCHMARK ,INSTNAME       ,START            ,END              ,   ELAPSED,    MBs
    ----------,---------------,-----------------,-----------------,----------,-------
    benchmark ,dbm1           ,05/02/13 07:03:34,05/02/13 07:04:10,        36,    959


}}}


! INTERDATABASE IORM PLAN 
{{{
-- INTERDATABASE IORM PLAN
#############################################

* do a "show parameter db_unique_name" across all the databases; this will be the name you'll put on the IORM plan

# main commands
alter iormplan dbPlan=( -
(name=dbm,    level=1, allocation=60), -
(name=exadb,   level=1, allocation=40), -
(name=other,    level=2, allocation=100));
alter iormplan active
list iormplan detail

list iormplan attributes objective
alter iormplan objective = auto



# list 
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'


# implement
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan dbPlan=\( \(name=dbm,    level=1, allocation=60\), \(name=exadb,   level=1, allocation=40\), \(name=other,    level=2, allocation=100\)\);'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan active'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'

dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective = low_latency'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'



# revert
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan dbPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan catPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan inactive'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'

dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'
}}}
''FYI:''
	* IMPLICITLY CAPTURED baselines are ACCEPTED only if they are the FIRST baselines for the statement
	* If a SQL baseline already exists and the same SQL generates a new plan for any reason, a new SPM baseline will be created, but with NOT ACCEPTED status. That means it WILL NOT BE USED unless we do something to explicitly enable it.
	* accepted baseline, but at runtime Oracle is still using the worse plan
		look at the v$sql.SQL_PLAN_BASELINE field for your query cursor to see whether/which baseline is actually used (it will be empty if none is), or run dbms_xplan on your cursor and look at the "notes" section. Depending on the results you can dig deeper, e.g. by dumping a 10053 trace and seeing why the baseline was (not) chosen.
	* index add 
		If you have a baseline on your query that uses a full table scan and then you add an index, Oracle will not automatically switch to using that index, even though a new plan with the index will likely be generated and be vastly more efficient. You have to do something to enable the new plan, by either evolving it or simply forcing it (or dropping the baseline).
	* index drop
		Oracle tries to reproduce the plan from the baseline during parse. If it cannot (e.g. because the index is no longer there), it cannot use that plan, even if it is ACCEPTED, and needs to either try other ACCEPTED plans from the baseline or parse a new one.
		The plan with the index remains ACCEPTED, I believe (did not check on that), but will not be used.
		Keep in mind that you can have several ACCEPTED plans for the same statement and Oracle can choose between them during parse. ACCEPTED does not mean BEST, nor does it mean THE ONLY ONE that can be used. Rather it means: it was approved at some point (which you can do manually, btw).
	* plan flips on ACCEPTED plans
		auto capture can also populate baselines with several ACCEPTED plans for the same SQL, meaning that your SQL will execute sometimes with plan A and sometimes with plan B. These plan flips can be disastrous if your users rely on stable execution of that SQL.
	* alter object or indexes, add column
		both Profiles and Baselines do not care about the contents or even the structure of the object they are "attached" to. The only thing that seems to matter is the TEXT OF THE QUERY (by the way, upper/lower case or a different number of spaces is irrelevant).
	* auto capture using logon trigger
		another thing to consider is that you could turn baseline capture on at the session level for those sessions, if you can identify them, e.g. using a logon trigger.


{{{
Build HOWTO: 
	* auto capture 

		show parameter optimizer_capture_sql_plan_baselines
		alter system set optimizer_capture_sql_plan_baselines=TRUE;  -- to turn on auto capture system wide
		ALTER SESSION SET optimizer_capture_sql_plan_baselines=TRUE;  -- to turn on auto capture on session level

		Logon Trigger on auto capture: 

			DROP TRIGGER SYS.SESSION_OPTIMIZATIONS;

			CREATE OR REPLACE TRIGGER SYS.session_optimizations after logon on database
			begin
			   if (user in ('HR')) then
			      execute immediate('ALTER SESSION SET optimizer_capture_sql_plan_baselines=TRUE');
			   end if;
			end;
			/

	* manual capture from cursor cache - individual SQL

		-- Then, let's build the baseline
		var nRet NUMBER
		EXEC :nRet := dbms_spm.load_plans_from_cursor_cache('1z5x9vpqr5t95');

		-- And finally, let's double check that the baseline has really been built
		SET linesize 180
		colu sql_text format A30

		SELECT plan_name, sql_text, optimizer_cost, accepted
		FROM dba_sql_plan_baselines
		WHERE to_char(sql_text) LIKE 'SELECT * FROM t WHERE n=45'
		ORDER BY signature, optimizer_cost
		/

		PLAN_NAME                      SQL_TEXT                       OPTIMIZER_COST ACC
		------------------------------ ------------------------------ -------------- ---
		SQL_PLAN_01yu884fpund494ecae5c SELECT * FROM t WHERE n=45              68764 YES

         -- manual load a SQL and plan hash value
        var v_num number;
        exec :v_num:=dbms_spm.load_plans_from_cursor_cache(sql_id => 'duk2ypk5fz9g6',plan_hash_value => 1357081020 );

	* manual capture from cursor cache - Parsing Schema 

		other methods:
			> The entire schema
			> Particular module/action
			> All similar SQLs
			> SQL tuning sets (through dbms_spm.load_plans_from_sqlset function)

		DECLARE 
		  nRet NUMBER;
		BEGIN
		  nRet := dbms_spm.load_plans_from_cursor_cache(
		    attribute_name => 'PARSING_SCHEMA_NAME',
		    attribute_value => 'HR'
		  );
		END;
		/
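
	* manual capture from a SQL tuning set

		-- sketch only: the STS name and owner below are assumptions; the API is dbms_spm.load_plans_from_sqlset
		DECLARE
		  nRet NUMBER;
		BEGIN
		  nRet := dbms_spm.load_plans_from_sqlset(
		    sqlset_name  => 'MY_STS',   -- assumed STS name
		    sqlset_owner => 'HR',       -- assumed owner
		    fixed        => 'NO',
		    enabled      => 'YES'
		  );
		END;
		/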


	* forcing an execution plan - fake baselines
		http://jonathanlewis.wordpress.com/2011/01/12/fake-baselines/

		declare
		    m_clob  clob;
		begin
		    select
		        sql_fulltext
		    into
		        m_clob
		    from
		        v$sql
		    where
		        sql_id = '&m_sql_id_1'
		    and child_number = &m_child_number_1
		    ;
		 
		    dbms_output.put_line(m_clob);
		 
		    dbms_output.put_line(
		        dbms_spm.load_plans_from_cursor_cache(
		            sql_id          => '&m_sql_id_2',
		            plan_hash_value     => &m_plan_hash_value_2,
		            sql_text        => m_clob,
		            fixed           => 'NO',
		            enabled         => 'YES'
		        )
		    );
		 
		end;
		/

		One more thing worth pointing out: if the unhinted statement uses bind variables, the new hinted statement has to use them as well.

			declare
			  stm varchar2(4000);
			  a1 varchar2(128) := '999999';
			  TYPE CurTyp  IS REF CURSOR;
			  tmpcursor    CurTyp;
			 
			begin
			  stm:='select /*HINT */ * from t1 where id = :1';
			  open tmpcursor for stm using a1;
			end;
			/


	* migrate (dump) baseline from one database to another

		1) create the staging table in the source database, 
			exec DBMS_SPM.CREATE_STGTAB_BASELINE('STAGE_SPM');
		2) pack SQL baselines into the staging table, 
			exec :n:=DBMS_SPM.PACK_STGTAB_BASELINE('STAGE_SPM');

			SET long 1000000
			SET longchunksize 30
			colu sql_text format a30
			colu optimizer_cost format 999,999 heading 'Cost'
			colu buffer_gets    format 999,999 heading 'Gets'
			SELECT sql_text, OPTIMIZER_COST, CPU_TIME, BUFFER_GETS, COMP_DATA FROM STAGE_SPM;

		3) copy the staging table to the target database, 
		4) and unpack baselines from the staging table into the SQL Management Base
			exec :n:=DBMS_SPM.UNPACK_STGTAB_BASELINE('STAGE_SPM');
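
			-- step 3 sketch (connect strings/credentials are placeholders): one way to copy the staging table is Data Pump
			$ expdp system@source tables=STAGE_SPM directory=DATA_PUMP_DIR dumpfile=stage_spm.dmp
			$ impdp system@target tables=STAGE_SPM directory=DATA_PUMP_DIR dumpfile=stage_spm.dmp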

		For Profiles:

		EXEC dbms_sqltune.create_stgtab_sqlprof('profile_stg'); 
		EXEC dbms_sqltune.pack_stgtab_sqlprof(staging_table_name => 'profile_stg');

		For SPM Baselines:

		var n NUMBER 
		EXEC dbms_spm.create_stgtab_baseline('baseline_stg'); 
		EXEC :n := dbms_spm.pack_stgtab_baseline('baseline_stg');


	* evolve

		SQL> SELECT sql_handle FROM dba_sql_plan_baselines
		WHERE plan_name='SQL_PLAN_4wm24mwmr8n9z0efda8a7';

		SQL_HANDLE
		------------------------------
		SYS_SQL_4e4c449f2774513f

		SQL> SET long 1000000
		SQL> SET longchunksize 180

		SELECT dbms_spm.evolve_sql_plan_baseline('SQL_bb77a3e93c0ea7f3') FROM dual;

	* disable 

		declare
		  myplan pls_integer;
		begin
		  myplan := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
		    sql_handle      => '&sql_handle',
		    plan_name       => '&plan_name',
		    attribute_name  => 'ENABLED',
		    attribute_value => 'NO');
		end;
		/

	* drop 

		DECLARE
		  plans_dropped PLS_INTEGER;
		BEGIN
		  plans_dropped := DBMS_SPM.drop_sql_plan_baseline (
		    sql_handle => 'SYS_SQL_51dcc66dae94c669',
		    plan_name  => 'SQL_PLAN_53r66dqr99jm98a727c3d');
		  DBMS_OUTPUT.put_line(plans_dropped);
		END;
		/

	* configure

		exec DBMS_SPM.CONFIGURE('plan_retention_weeks', <number of weeks to retain unused plans before they are purged>);
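
		-- a few other SQL Management Base knobs (values here are illustrative, not recommendations)
		exec DBMS_SPM.CONFIGURE('space_budget_percent', 20);   -- % of SYSAUX the SMB may use
		-- verify current settings
		SELECT parameter_name, parameter_value FROM dba_sql_management_config;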


Scripts: SQL PLAN MANAGEMENT [ID 456518.1]
	@find_sql_using_baseline

		-- find SPM baseline by SQL_ID
		col parsing_schema format a8
		col created format a10
		SELECT parsing_schema_name parsing_schema, created, plan_name, sql_handle, sql_text, optimizer_cost, accepted, enabled, origin
		FROM dba_sql_plan_baselines
		WHERE signature IN (SELECT exact_matching_signature FROM v$sql WHERE sql_id='&SQL_ID')
		/

		-- find sql using baseline
		SELECT b.sql_handle, b.plan_name, s.child_number, 
	  	s.plan_hash_value, s.executions
		FROM v$sql s, dba_sql_plan_baselines b
		WHERE s.exact_matching_signature = b.signature(+)
		  AND s.sql_plan_baseline = b.plan_name(+)
		  AND s.sql_id='&SQL_ID'
		/

	@baselines
	@baseline_hints
	@create_baseline
	@create_baseline_awr
	
	col parsing_schema format a8
	col created format a20
	col sql_handle format a25
	col sql_text format a40
	SELECT parsing_schema_name parsing_schema, TO_CHAR(created,'MM/DD/YY HH24:MI:SS') created, plan_name, sql_handle, substr(sql_text,1,35) sql_text, optimizer_cost, accepted, enabled, origin
	FROM dba_sql_plan_baselines ORDER BY 2 ASC;

	set lines 200
	select * from table(dbms_xplan.display_cursor('&sql_id','&child_no','typical'))
	/
}}}
Complete HOWTO is here https://www.evernote.com/l/ADCcW786eL1Ei5Z-dd3-CzTRw9ddUXyNuS8
LMAX - How to Do 100K TPS at Less than 1ms Latency http://www.infoq.com/presentations/LMAX
https://en.wikipedia.org/wiki/Hybrid_transactional/analytical_processing
https://www.kdnuggets.com/2016/11/evaluating-htap-databases-machine-learning-applications.html




http://lists.w3.org/Archives/Public/public-coremob/2012Sep/0021.html
http://engineering.linkedin.com/linkedin-ipad-5-techniques-smooth-infinite-scrolling-html5

http://www.html5rocks.com/en/
http://www.hackintosh.com/
http://lifehacker.com/348653/install-os-x-on-your-hackintosh-pc-no-hacking-required
http://www.sysprobs.com/hackintosh-10-6-7-snow-leopard-on-virtualbox-4-working-sound
http://www.sysprobs.com/install-mac-snow-leopard-1063-oracle-virtualbox-32-apple-intel-pc
http://geeknizer.com/install-snow-leopard-virtualbox/
http://www.youtube.com/watch?v=PLL_qOLpqs4
http://lifehacker.com/5841604/the-always-up+to+date-guide-to-building-a-hackintosh

-- on final cut pro
http://www.disturbingnewtrend.blogspot.com/
http://www.insanelymac.com/forum/index.php?showtopic=69855


-- virtual box preinstalled 
http://isohunt.com/torrent_details/261669825/mac+os+x+snow+leopard+hazard?tab=summary

-- vmware preinstalled
http://isohunt.com/torrent_details/326417697/Mac+OS+X+Snow+Leopard+10.6.8+VMware+Image+Ultimate+Build?tab=summary

-- osx lion 
https://www.virtualbox.org/wiki/Mac%20OS%20X%20build%20instructions
http://www.sysprobs.com/guide-mac-os-x-10-7-lion-on-virtualbox-with-windows-7-and-intel-pc
http://www.sysprobs.com/create-bootable-lion-os-installer-image-vmware-windows-intel-based-computers
http://www.sysprobs.com/working-method-install-mac-107-lion-vmware-windows-7-intel-pc
http://www.youtube.com/watch?v=-fxz7jVI9kQ
http://ewangi.info/275/how-to-install-mac-os-x-lion-in-vmware-or-virtualbox-on-pc/


<<showtoc>>

! ''Comparing Hadoop Appliances'' 
http://www.pythian.com/news/29955/comparing-hadoop-appliances/
http://www.cloudera.com/blog/2010/08/hadoophbase-capacity-planning/

! ''Hadoop VM''
https://ccp.cloudera.com/display/SUPPORT/Cloudera%27s+Hadoop+Demo+VM
https://ccp.cloudera.com/display/SUPPORT/Cloudera%27s+Hadoop+Demo+VM+for+CDH4
https://ccp.cloudera.com/display/SUPPORT/Hadoop+Tutorial


! ''12TB/hour data load'' 
High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database http://www.oracle.com/technetwork/bdc/hadoop-loader/connectors-hdfs-wp-1674035.pdf


! ''Hadoop Applications''
http://blog.revolutionanalytics.com/2010/12/how-orbitz-uses-hadoop-and-r-to-optimize-hotel-search.html


! ''Hadoop Tools''
http://toadforcloud.com/pageloader.jspa?sbinPageName=hadoop.html&sbinPageTitle=Quest%20Solutions%20for%20Hadoop


! ''Guy Harrison Articles''
http://guyharrison.squarespace.com/blog/tag/hadoop
http://guyharrison.squarespace.com/blog/tag/r
http://guyharrison.squarespace.com/blog/tag/cassandra
http://guyharrison.squarespace.com/blog/tag/hive
http://guyharrison.squarespace.com/blog/tag/mongodb
http://guyharrison.squarespace.com/blog/tag/nosql
http://guyharrison.squarespace.com/blog/tag/pig
http://guyharrison.squarespace.com/blog/tag/python
http://guyharrison.squarespace.com/blog/tag/sqoop


! ''Hadoop developer course - follow up''
http://www.cloudera.com/content/cloudera/en/resources/library/training/cloudera-essentials-for-apache-hadoop-the-motivation-for-hadoop.html


! ''Free large data sets'' 
http://stackoverflow.com/questions/2674421/free-large-datasets-to-experiment-with-hadoop


! Hadoop2
Introduction to MapReduce with Hadoop on Linux http://www.linuxjournal.com/content/introduction-mapreduce-hadoop-linux
Hadoop2 http://hortonworks.com/blog/apache-hadoop-2-is-ga/


! ''Hadoop Tutorials''
http://www.cloudera.com/content/cloudera/en/resources/library.html?category=cloudera-resources:using-cloudera/tutorials&p=1
http://hortonworks.com/tutorials/

! end





https://github.com/t3rmin4t0r/notes/wiki/Hadoop-Tuning-notes

{{{
# Timeouts, slowness and issues as you scale your query?

Scaling down the data usually brings down the bi-partite traffic (i.e. total # of mappers X total # of reducers) and the total amount of shuffle load produced by a faster engine.

Tez is faster than MRv2, which pushes a standard-configured Linux machine much harder and runs it at higher network utilization.

There's no easy way to detect the following issues from any hadoop logs.

Hortonworks has a learning automation engine to constantly measure & recommend settings for your cluster as it grows - [Hortonworks SmartSense](http://hortonworks.com/info/hortonworks-smartsense/)

## Known bad hardware + kernels

Centos 6.x kernel bugs on Intel 10Gbps drivers

GRO/LRO - https://access.redhat.com/solutions/20278

## Known problem kernel features 

echo never > /sys/kernel/mm/transparent_hugepage/enabled

echo never > /sys/kernel/mm/transparent_hugepage/defrag

echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

Centos 7 with auto cgroup turned on.

$ cat /proc/self/autogroup

and your system loadavg is > the number printed by this script

https://gist.github.com/t3rmin4t0r/605cefddd32c427b7dc0
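
If autogroup turns out to be the problem, it can be disabled at runtime (a sketch; needs root, and the sysctl only exists on kernels built with CONFIG_SCHED_AUTOGROUP):

```
sysctl -w kernel.sched_autogroup_enabled=0
# persist across reboots
echo 'kernel.sched_autogroup_enabled = 0' >> /etc/sysctl.conf
```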

## Known JVM issues 

If you run Java in server mode, inside a LAN, the following issue is killing your DNS server

http://bugs.java.com/bugdatabase/view_bug.do?bug_id=6247501

/etc/init.d/nscd restart <-- dns fix
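
A JVM-side mitigation for the same bug is to re-enable positive DNS caching (the property is standard, but the TTL value here is an arbitrary example):

```
# in $JAVA_HOME/jre/lib/security/java.security
networkaddress.cache.ttl=60
```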

## Basic tuning for 10Gbps + >3 nodes

here's pretty much everything that Rajesh & I have collected over 3 years.

(fix ethX params)

```
sysctl -w net.core.somaxconn=16384
sysctl -w net.core.netdev_max_backlog=20000            # backlog before dropping packets

sysctl -w net.core.rmem_max=134217728                  # max read/write buffer sizes settable via setsockopt (client side)

sysctl -w net.core.wmem_max=134217728
sysctl -w net.core.rmem_default=524288                 # default read/write buffers set by the kernel
sysctl -w net.core.wmem_default=524288
sysctl -w "net.ipv4.tcp_rmem=4096 65536 134217728"     # min/start/max; even with 30K connections on a node, 30K * 64KB ~ 2GB, fine on a machine with large RAM
sysctl -w "net.ipv4.tcp_wmem=4096 65536 134217728"
sysctl -w "net.ipv4.ip_local_port_range=4096 61000"
sysctl -w net.ipv4.conf.ethX.forwarding=0              # change ethX to the relevant NIC
sysctl -w net.ipv4.tcp_mtu_probing=1
sysctl -w net.ipv4.tcp_fin_timeout=4
sysctl -w net.ipv4.conf.lo.forwarding=0

sysctl -w vm.dirty_background_ratio=80
sysctl -w vm.dirty_ratio=80
sysctl -w vm.swappiness=0
```

Now these aren't really performance options, those are a different discussion (DSack, jumbo frames, slow_start_after_idle), so email me if you've bought some fancy hardware :)

}}}
<<showtoc>>

https://www.udemy.com/home/my-courses/learning/?instructor_filter=14145628

! Learn Big Data: The Hadoop Ecosystem Masterclass
https://www.udemy.com/learn-big-data-the-hadoop-ecosystem-masterclass/learn/v4/content


! Learn DevOps: Scaling apps On-Premise and in the Cloud
https://www.udemy.com/learn-devops-scaling-apps-on-premise-and-in-the-cloud/learn/v4/content


! Learn Devops: Continuously Deliver Better Software
https://www.udemy.com/learn-devops-continuously-deliver-better-software/learn/v4/content










http://perfdynamics.blogspot.com/2013/04/harmonic-averaging-of-monitored-rate.html
http://www.huffingtonpost.com/colm-mulcahy/mean-questions-with-harmonious-answers_b_2469351.html
http://www.pugetsystems.com/labs/articles/Z87-H87-H81-Q87-Q85-B85-What-is-the-difference-473/

http://www.evernote.com/shard/s48/sh/52071d59-e00d-4bc3-8d47-481d382e150f/06fe80eca9566349595203a1255afb2c
http://www.evernote.com/shard/s48/sh/781cbf9a-4ef0-4b97-a9fa-87d8b65c8e52/2b5ff04e9561ad1450e630db09893f5f
http://www.holovaty.com/writing/aws-notes/


Heroku (cloud platform as a service (PaaS))
http://stackoverflow.com/questions/11008787/what-exactly-is-heroku

/***
|Name:|HideWhenPlugin|
|Description:|Allows conditional inclusion/exclusion in templates|
|Version:|3.1 ($Rev: 3919 $)|
|Date:|$Date: 2008-03-13 02:03:12 +1000 (Thu, 13 Mar 2008) $|
|Source:|http://mptw.tiddlyspot.com/#HideWhenPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
For use in ViewTemplate and EditTemplate. Example usage:
{{{<div macro="showWhenTagged Task">[[TaskToolbar]]</div>}}}
{{{<div macro="showWhen tiddler.modifier == 'BartSimpson'"><img src="bart.gif"/></div>}}}
***/
//{{{

window.hideWhenLastTest = false;

window.removeElementWhen = function(test,place) {
	window.hideWhenLastTest = test;
	if (test) {
		removeChildren(place);
		place.parentNode.removeChild(place);
	}
};


merge(config.macros,{

	hideWhen: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( eval(paramString), place);
	}},

	showWhen: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( !eval(paramString), place);
	}},

	hideWhenTagged: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( tiddler.tags.containsAll(params), place);
	}},

	showWhenTagged: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( !tiddler.tags.containsAll(params), place);
	}},

	hideWhenTaggedAny: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( tiddler.tags.containsAny(params), place);
	}},

	showWhenTaggedAny: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( !tiddler.tags.containsAny(params), place);
	}},

	hideWhenTaggedAll: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( tiddler.tags.containsAll(params), place);
	}},

	showWhenTaggedAll: { handler: function (place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( !tiddler.tags.containsAll(params), place);
	}},

	hideWhenExists: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( store.tiddlerExists(params[0]) || store.isShadowTiddler(params[0]), place);
	}},

	showWhenExists: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( !(store.tiddlerExists(params[0]) || store.isShadowTiddler(params[0])), place);
	}},

	hideWhenTitleIs: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( tiddler.title == params[0], place);
	}},

	showWhenTitleIs: { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( tiddler.title != params[0], place);
	}},

	'else': { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		removeElementWhen( !window.hideWhenLastTest, place);
	}}

});

//}}}

How to Use the Solaris Truss Command to Trace and Understand System Call Flow and Operation [ID 1010771.1] <-- good stuff
Case Study: Using DTrace and truss in the Solaris 10 OS http://www.oracle.com/technetwork/systems/articles/dtrace-truss-jsp-140760.html
How to Analyze High CPU Utilization In Solaris [ID 1008930.1]   <-- lockstat, kstat, dtrace
How to use DTrace and mdb to Interpret vmstat Statistics [ID 1009494.1]  


-- sys time kernel profiling
http://dtracebook.com/index.php/Kernel#lockstat_Provider
http://wikis.sun.com/display/DTrace/lockstat+Provider
http://blogs.technet.com/b/markrussinovich/archive/2008/04/07/3031251.aspx
http://helgeklein.com/blog/2010/01/how-to-analyze-kernel-performance-bottlenecks-and-find-that-atis-catalyst-drivers-cause-50-cpu-utilization/
http://prefetch.net/blog/index.php/2010/03/08/breaking-down-system-time-usage-in-the-solaris-kernel/ <-- Breaking down system time usage in the Solaris kernel
http://orainternals.wordpress.com/2008/10/31/performance-issue-high-kernel-mode-cpu-usage/ , http://www.orainternals.com/investigations/high_cpu_usage_shmdt.pdf, http://www.pythian.com/news/1324/oracle-performance-issue-high-kernel-mode-cpu-usage/ <-- ''riyaj high sys''
http://www.oracledatabase12g.com/archives/resolving-high-cpu-usage-on-oracle-servers.html <-- oracle metalink sys high
http://www.freelists.org/post/oracle-l/Solaris-CPU-Consumption,3
http://www.solarisinternals.com/wiki/index.php/CPU/Processor <-- ''good drill down examples - filebench''
AAA Pipeline Consumes 100% CPU [ID 1083994.1]
http://www.princeton.edu/~unix/Solaris/troubleshoot/process.html <-- LWP

http://web.archiveorange.com/archive/v/ejz8xZLNsakZx7OAzhCz <-- high sys cpu time, any way to use dtrace to do troubleshooting? 
http://opensolaris.org/jive/thread.jspa?threadID=103737 <-- Thread: DBWR write performance


-- ''lockstat''
http://dtracebook.com/index.php/Kernel#lockstat_Provider
http://wikis.sun.com/display/DTrace/lockstat+Provider
How to Analyze High CPU Utilization In Solaris [ID 1008930.1]   <-- lockstat, kstat, dtrace
A Primer On Lockstat [ID 1005868.1]
https://blogs.oracle.com/sistare/entry/measuring_lock_spin_utilization

-- ''stack trace''
https://blogs.oracle.com/sistare/entry/lies_damned_lies_and_stack


-- ''mdb''
https://blogs.oracle.com/sistare/entry/wicked_fast_memstat









https://blogs.oracle.com/optimizer/entry/how_does_the_method_opt

* use METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY' - for initial histogram creation
* then METHOD_OPT=>'FOR ALL COLUMNS SIZE REPEAT' - for subsequent runs
** also consider "sample size" 
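
A sketch of the two-phase method_opt approach above (the schema/table names are made up for illustration):

{{{
-- initial run: build histograms on columns with skewed data, invalidate cursors immediately
exec dbms_stats.gather_table_stats(ownname=>'HR', tabname=>'T1', method_opt=>'FOR ALL COLUMNS SIZE SKEWONLY', no_invalidate=>FALSE);

-- subsequent runs: regather only the histograms that already exist
exec dbms_stats.gather_table_stats(ownname=>'HR', tabname=>'T1', method_opt=>'FOR ALL COLUMNS SIZE REPEAT', estimate_percent=>dbms_stats.auto_sample_size);
}}}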

statistics locking (DBMS_STATS.LOCK_TABLE_STATS) - good for static tables, but can be bad when column low/high values drift and stop being representative


http://translate.google.com/translate?sl=auto&tl=en&u=http://www.dbform.com/html/2010/1200.html

http://neerajbhatia.wordpress.com/2010/11/12/everything-you-want-to-know-about-oracle-histograms-part-1/
http://neerajbhatia.files.wordpress.com/2010/11/everything-you-want-to-know-about-oracle-histograms-part-1.pdf
http://structureddata.org/2008/10/14/dbms_stats-method_opt-and-for-all-indexed-columns/


''starting 10g onwards''
- gathering stats no longer invalidates dependent cursors in the shared pool right away; invalidations are paced out over a window of up to ~5 hours before the new stats take effect... so specify ''NO_INVALIDATE=>FALSE'' to make them take effect instantly
- the auto gathering of histograms driven by column usage in "where" clauses (SIZE AUTO) also started here



https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=59690156
http://www.hplsql.org/doc
http://www.hplsql.org/features
! course 
* udemy - Talend For Big Data Integration Course : Beginner to Expert - https://www.udemy.com/talend-for-big-data/learn/v4/overview


https://www.youtube.com/results?search_query=talend+hadoop
Talend for Big Data https://www.safaribooksonline.com/library/view/talend-for-big/9781782169499/#toc
http://www.talend.com/products/big-data/big-data-open-studio/
Talend Open Studio for Big Data for Dummies https://info.talend.com/en_bd_bd_dummies.html?type=productspage&_ga=2.77959124.1360921166.1516402754-1190038700.1516402754&_gac=1.205267492.1516402754.EAIaIQobChMItc3Rt5Dl2AIVm4izCh2-XQhcEAAYASAAEgKRn_D_BwE





https://github.com/t3rmin4t0r/notes/wiki/Hive:-Production-Realities-(WIP)
{{{
In the last 2 years, I have heard a lot of customer requirements that are downright ordinary.

Below is a list of these ordinary requirements that are often unvoiced, because they are taken for granted.

I'm a performance engineer - but here are some things that outweigh performance when it comes to really running it in production.

### #1 Handle a concurrent ETL pipeline into the same data warehouse 
   * ETL in new partitions, while a query is running against old partitions
   * Most queries run against fresh partitions within minutes of insertions
   * ETL can be de-prioritized against the rest of the workload

In real production clusters, you don't load up ~1Tb of data once and query the exact same data-set. Nearly every few hours at least, you get new data to insert.

Also Infra teams frown on needing 2 clusters for ETL/query. 

This is by-far the most common reason to use Hive - ETL+JDBC/ODBC in one go.

### #2 ETL needs to be complete once a partition is in place
   * Metadata updates like Statistics should be automatic
   * New partitions are visible to existing clients
   * Adding a partition shouldn't need a full-pass for metadata collection

If your query engine needs some sort of stats, you should collect them during ETL - in production, that also counts as part of the ETL workload. You cannot wave away that cost in your system from the actual insertion pipeline - 

Hive does the right things there, with `hive.stats.autogather` and `hive.stats.dbclass=fs`. 

Even more relevantly, be wary of query speedups obtained by artificially updating statistics by hand, or by changing cluster settings to run stats generation. Those are alright for demos, but in reality with an ETL firehose that doesn't work - the Hive implementation extrapolates from existing statistics when a query spans cold and fresh data.

### #3 ETL+Query is what matters
   * If you can satisfy #1 and #2, then real life performance can be measured
   * Most people care about the freshness of data

Requiring 2+ hours of cluster downtime to load, with manual intervention & stats tuning, is not acceptable for a ~10s query.

That can indeed be useful to demonstrate speed, but the reality ends up taking a huge bite out of those approaches.

### #4 Failure tolerance
   * Automatic failure tolerance is a must for large scale systems
   * Node failures should not affect running queries
   * At the very least, they shouldn't affect the next query

In this context, Hive's ACID impl is built for same minute delivery (#3) into a current partition, with repeatable read during retries (#4).

Several SQL solutions competing with hive are out of the picture already, but there's more to Hive that really helps me not lose sleep if I had a pager.

### #5 Cluster growth - can you add new machines/remove old machines
   * Machines always need maintenance (otherwise they'd have taken over - re: my nick)
   * This is a direct counter-part of #4 
   * If you can fail-over running queries, you can pull worker machines out
   * Adding new machines is more of a soft requirement - otherwise you can't put them back

Hive leaves this up to the well proven YARN implementation to do this.

In this context, blacklisting broken machines is the least the system should do - but old/new machine swaps imply that they are functional, but in need of decommission. 

### #6 Cluster growth - bigger machines or more machines?
   * For production, you should be able to upgrade a cluster simply by adding more machines (assuming #5, then see #10)
   * If your execution engine is limited by a single node's RAM, then this obviously fails
   * Usually new purchases are also faster machines (more RAM, more disk)
   * So your query platform cannot assume identical machines in all query plans
   * Does it need to fixate on the lowest-common h/w config or can it slice/dice work

Your scale of testing can't depend on whether you have 20 x 384Gb vs 60 x 128Gb machines. Sure there's more network traffic, but you can't complain about running out of memory when you scale horizontally with same aggregate RAM.

Hive uses containers of fixed size which is acceptable to YARN, so they can be redistributed across a heterogenous cluster.

### #7 Cluster age - now you have old machines and new machines
   * Old machines tend to have worse hardware (barring bad firmware on new)
   * They tend to bit-rot, throw errors and lose data that was written to disk
   * We can't have any "remove seatbelt" speedups then (disk data + checksums)
   * Tasks need retries when errors are detected and basic failure tolerance

Both Mapreduce and Tez have explicit checksums for the data that moves between machines. So does HDFS.

Because of this Hive is relatively safe from this bit-rot, but this is a performance penalty to keep your data safe.

### #8 Isolation between queries - so you have a bad query
   * Need a query kill mechanism with an inbuilt cleanup
   * Killing a query shouldn't need a cluster restart 
   * Immediately after a bad query is killed, the cluster should return to usable state

YARN does this for Hive, so that it is no different from the cleanup routines for any other MapReduce application.

And since that feature has been robust and well-tested, there will not be any orphan tasks left behind.

### #9 Rolling upgrades/Fast restarts
   * Upgrades don't need downtime - HiveServer2 is actually a client instance
   * Starting a new one is almost immediate - no state is stored/reloaded, so restart is not dependent on hcatalog size or cluster count
   * The server can't fail because there are too many partitions or tables during a restart cycle

This is particularly important in scenarios where a HiveServer2 needs a restart (like to load a new SSL cert).

The production workloads can't wait 10 minutes while it restarts or even fails because there are too many partitions in the cluster (Mithun from Y! has a post talking about nearly ~100k partitions being added per day).

### #10 "Gone Viral" scenario - ~10x data on one day
   * This is the true test of nearly everything in play here
   * Most ETL systems will handle it slower, but complete successfully
   * Adding more machines in a hurry to temporarily increase capacity/throughput
 
This happens way more often than anyone predicts. 

The biggest problem is the opportunity cost - at peak load is when the PMs really want to see how the experiments are doing. Not the day to find out that it went over the RAM in the cluster and that the dashboards are empty because queries are failing.
}}}
tableau forecast model - Holt-Winters exponential smoothing
http://onlinehelp.tableausoftware.com/v8.1/pro/online/en-us/help.html#forecast_describe.html

google search - exponential smoothing 
https://www.google.com/search?q=exponential+smoothing&oq=exponential+smoothing&aqs=chrome..69i57j69i60j69i65l3j69i60.3507j0j7&sourceid=chrome&es_sm=119&ie=UTF-8

http://en.wikipedia.org/wiki/Exponential_smoothing

exponential growth functions
https://www.khanacademy.org/math/algebra2/exponential_and_logarithmic_func/exp_growth_decay/v/exponential-growth-functions

simple exponential smoothing 
http://freevideolectures.com/Course/3096/Operations-and-Supply-Chain-Management/2

google search - Holt-Winters exponential smoothing model
https://www.google.com/search?q=Holt-Winters+exponential+smoothing+model&oq=Holt-Winters+exponential+smoothing+model&aqs=chrome..69i57.1391j0j7&sourceid=chrome&es_sm=119&ie=UTF-8

The Holt-Winters Approach to Exponential Smoothing: 50 Years Old and Going Strong - Paul Goodwin
http://www.forecasters.org/pdfs/foresight/free/Issue19_goodwin.pdf

Time series Forecasting using Holt-Winters Exponential Smoothing   <-- good stuff, with good overview of different smoothing models
http://www.it.iitb.ac.in/~praj/acads/seminar/04329008_ExponentialSmoothing.pdf
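
The papers above all build on the same recurrence; here is a minimal sketch of simple exponential smoothing (illustrative only, not Tableau's Holt-Winters implementation):

```python
# Simple exponential smoothing: s[t] = alpha*x[t] + (1-alpha)*s[t-1].
# Holt-Winters extends this recurrence with trend and seasonal components.
def ses(xs, alpha):
    """Return the smoothed series for observations xs, with 0 < alpha <= 1."""
    smoothed = [xs[0]]                  # seed with the first observation
    for x in xs[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

# the one-step-ahead forecast is just the last smoothed value
print(ses([10, 20, 30], 0.5))   # -> [10, 15.0, 22.5]
```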



! some more 

Mod-02 Lec-04 Forecasting -- Winter's model, causal models, Goodness of forecast, Aggregate Planning
http://www.youtube.com/watch?v=MbNmIZNy3qI

Excel - Time Series Forecasting - Part 1 of 3
http://www.youtube.com/watch?v=gHdYEZA50KE

Applied regression analysis
http://blog.minitab.com/blog/adventures-in-statistics/applied-regression-analysis-how-to-present-and-use-the-results-to-avoid-costly-mistakes-part-2

http://www.r-tutor.com/

https://www.quora.com/search?q=holt+winter

google search "mean absolute error"
	https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=mean%20abosolute%20error
	http://www.amazon.com/s/ref=sr_pg_1?rh=i%3Aaps%2Ck%3Amean+absolute+error&keywords=mean+absolute+error&ie=UTF8&qid=1401658352
	http://www.amazon.com/Predictive-Analytics-Dummies-Business-Personal/dp/1118728963/ref=sr_1_15?ie=UTF8&qid=1401658288&sr=8-15&keywords=mean+absolute+error

Mean Absolute Deviation/Error (MAD or MAE)
http://www.vanguardsw.com/101/mean-absolute-deviation-mad-mean-absolute-error-mae.htm




Predictive Analytical Modelling
http://community.tableausoftware.com/thread/112660

Forecasting Help (nonlinear and trends and exponential smoothing)
http://community.tableausoftware.com/thread/131081   <-- I categorize a "Good" forecast as one with a mean absolute scaled error (MASE) of less than 0.4

Scott Tennican 
http://community.tableausoftware.com/people/scotttennican0   <-- the developer of "Foreast" in tableau
http://community.tableausoftware.com/people/scotttennican0/content?filterID=participated

Using R forecasting packages from Tableau
http://boraberan.wordpress.com/2014/01/19/using-r-forecasting-packages-from-tableau/   <-- Program Manager at Tableau Software focus on statistics 

R integration, object of different length than original data 
http://community.tableausoftware.com/thread/137551

running sum of forecast - holt winters
http://community.tableausoftware.com/thread/137167

Exponential smoothing or Forecasting in tableau - guys questioning the accuracy of the forecast feature
http://community.tableausoftware.com/thread/140495


https://plus.google.com/+KennethBlack/posts <-- this guy investigated on the trend models in tableau, and he works for this company http://blog.qualproinc.com/blog-qualpro-mvt/ctl/all-posts/

Additional Insight and Clarification of #Tableau Exponential Trend Models
	http://3danim8.wordpress.com/2013/10/18/additional-insight-and-clarification-of-tableau-exponential-trend-models/
A Help Guide for Better Understanding all of #Tableau Trend Models
	http://3danim8.wordpress.com/2013/10/15/a-help-guide-for-better-understanding-all-of-tableau-trend-models/
How to Better Understand and Use Linear Trend Models in #Tableau	
	http://3danim8.wordpress.com/2013/09/11/how-to-better-understand-and-use-linear-trend-models-in-tableau/	
How to use a trick in #Tableau for adjusting a scatter plot trend line
	http://3danim8.wordpress.com/2013/08/30/how-to-use-a-trick-in-tableau-for-adjusting-a-scatter-plot-trend-line/	
Using #Tableau to Create Dashboards For Tracking Salesman Performance
	http://3danim8.wordpress.com/2013/07/02/using-tableau-to-create-dashboards-for-tracking-salesman-performance/
Tableau, Correlations and Scatter Plots
	http://3danim8.wordpress.com/2013/06/11/tableau-correlations-and-scatter-plots/

Qualpro company 
http://blog.qualproinc.com/blog-qualpro-mvt/ctl/all-posts/
http://blog.qualproinc.com/blog-qualpro-mvt/bid/315478/How-to-Use-Tableau-Turning-Complexity-into-Simplicity


Holt-Winters forecast using ggplot2
http://www.r-bloggers.com/holt-winters-forecast-using-ggplot2/











{{{
QAS Agent Uninstall Commands

PACKAGE      COMMAND
RPM          # rpm -e vasclnt
DEB          # dpkg -r vasclnt
Solaris      # pkgrm vasclnt
HP-UX        # swremove vasclnt
AIX          # installp -u vasclnt
Mac OS X     # '/<mount>/Uninstall.app/Contents/MacOS/Uninstall' --console --force vasclnt

Hey, in the meantime here's what I have done in /etc/sudo.conf: disable the Quest modules and re-enable the old ones. Works on Linux; not sure if the same file exists on Solaris. It will break QAS, but it can be re-enabled later.

I tried just uninstalling the package before, and the PAM plugin was still enabled even after reboot; restoring the previous /etc/sudo.conf works (it is saved with a prefix, with the QAS PAM module disabled).


}}}
How to work with NULL
https://livesql.oracle.com/apex/livesql/file/content_NNUGN6Z352RH87FHF1GKWWJIP.html
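
The basics in this area, in brief (tiny sketch; the table/column names are hypothetical):

{{{
-- NULL is never equal (or unequal) to anything, including NULL
SELECT * FROM t WHERE n = NULL;    -- always returns no rows
SELECT * FROM t WHERE n IS NULL;   -- the correct test
SELECT NVL(n, 0) FROM t;           -- substitute a default for NULL
}}}
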
http://kevinclosson.wordpress.com/2010/09/28/configuring-linux-hugepages-for-oracle-database-is-just-too-difficult-part-i/
http://kevinclosson.wordpress.com/2010/10/21/configuring-linux-hugepages-for-oracle-database-is-just-too-difficult-isn%e2%80%99t-it-part-%e2%80%93-ii/
http://kevinclosson.wordpress.com/2010/11/18/configuring-linux-hugepages-for-oracle-database-is-just-too-difficult-isn%E2%80%99t-it-part-%E2%80%93-iii-do-you-really-want-to-configure-the-absolute-minimum-hugepages/

http://martincarstenbach.wordpress.com/2013/05/13/more-on-use_large_pages-in-linux-and-11-2-0-3/


http://software.intel.com/en-us/articles/intel-performance-counter-monitor/
http://software.intel.com/en-us/articles/performance-insights-to-intel-hyper-threading-technology/
http://software.intel.com/en-us/articles/intel-hyper-threading-technology-analysis-of-the-ht-effects-on-a-server-transactional-workload
http://software.intel.com/en-us/articles/hyper-threading-be-sure-you-know-how-to-correctly-measure-your-servers-end-user-response-time-1
http://software.intel.com/en-us/articles/intel-64-architecture-processor-topology-enumeration
http://cache-www.intel.com/cd/00/00/01/77/17705_htt_user_guide.pdf  <-- Intel Hyper-Threading Technology Technical User's Guide
http://www.evernote.com/shard/s48/sh/6d8994bc-2eb6-4d8c-8880-c5af7a12fbe5/84d73ea45c6bf62779fb9092d4ae3648 <-- Intel Hyper-Threading Technology Technical User's Guide, with annotation

http://herbsutter.com/welcome-to-the-jungle/  <-- cool stuff
http://herbsutter.com/2012/11/30/256-cores-by-2013/

http://plumbr.eu/blog/how-many-threads-do-i-need <-- java thread

http://sg.answers.yahoo.com/question/index?qid=20101013191827AAswM81   <-- clock speed
http://en.wikipedia.org/wiki/Out-of-order_execution
http://en.wikipedia.org/wiki/P6_(microarchitecture)

http://highscalability.com/blog/2013/6/6/paper-memory-barriers-a-hardware-view-for-software-hackers.html <-- hardware view for software hackers
http://www.rdrop.com/users/paulmck/scalability/paper/whymb.2010.07.23a.pdf

''How to Maximise CPU Performance for the Oracle Database on Linux'' https://communities.intel.com/community/itpeernetwork/datastack/blog/2013/08/05/how-to-maximise-cpu-performance-for-the-oracle-database-on-linux





http://afatkulin.blogspot.ca/2013/11/hyperloglog-in-oracle.html
http://blog.aggregateknowledge.com/2012/10/25/sketch-of-the-day-hyperloglog-cornerstone-of-a-big-data-infrastructure/
https://github.com/t3rmin4t0r/notes/wiki/I-Like-Tez,-DevOps-Edition-(WIP)
<<<
I work on Tez, so it would be hard to not like Tez. There's a reason for it too, whenever Tez does something I don't like, I can put my back into it and shove Tez towards that straight & narrow path.

Just before Hortonworks, I was part of the ZCloud division in Zynga - the casual disregard devs have towards operations has hurt my sleep cycle and general peace of mind. I know they're chasing features, but whenever someone puts in a change that takes actual work to roll back, I cringe. And I like how Tez doesn't make the same mistakes here.

First of all, you don't install "Tez" on a cluster. The cluster runs YARN, which means two very important things. 

There is no "installing Tez" on your 350 nodes and waiting for it to start up. You throw a few jars into an HDFS directory and write tez-site.xml on exactly one machine pointing to that HDFS path.

This means several important things for a professional deployment of the platform. There are no real pains about rolling upgrades, because there is nothing to restart - all existing queries use the old version, all new queries will automatically use the new version. This is particularly relevant for a 24-hour round-the-clock data insertion pipeline, but perhaps not for a BI-centric service where you can bounce it pretty quickly after emailing a few people.

Letting you run different versions of Tez at the same time is very different from how MR used to behave. Personally, on a day-to-day basis, this helps me a lot in sharing a multi-tenant dev environment & it improves the overall quality of my work - I test everything I write on a big cluster, without worrying about whether I'll nuke anyone else's Tez builds.

Next up, I like how Tez handles failure. You can lose connectivity to half your cluster and the tasks will keep running, perhaps a bit slowly. YARN takes care of bad nodes, cases where the nodes are having disk failures or any such hiccup in the cluster that is normal when you're maxing out 400+ nodes all day long. And coming from the MR school of thought, the task failure scenario is pretty much covered with re-execution mechanisms.

There's something important to be covered here with failure. For any task attempt that accidentally kills a container (like a bad UDF with a memory leak) there is no real data loss for any previous data, because the data already committed in a task is not served out of a container at all. The NodeManager serves all the data across the cluster with its own secure shuffle handlers. As long as the NodeManager is running, you could kill the existing containers on that node and hand off that capacity to another task.

This is very important for busy clusters, because as the aphorism goes "The difference between time and space is that you can re-use space". I guess the same applies to a container holding onto an in-memory structure, waiting for its data to be pulled off to another task.

And any hadoop-2 installation already has node manager alerts/restarts coded in, without needing any new devops work to bring errant nodes back online.

This brings me to the next bit of error tolerance in the system - the ApplicationMaster. The old problem with hadoop-1.x was that the JobTracker was a somewhat single point of failure for any job. With YARN, that went away entirely with the ApplicationMaster being coded particularly for a task type.

Now most applications do not want to write up all the bits and bobs required to run their own ApplicationMaster. Something like Hive could've built its own ApplicationMaster (rather we could've built it as part of our perf effort) - after all Storm did, HBase did and so did Giraph.

The vision of Tez is that there's a possible generalization for the problem. Just like MR was a simple distribution mechanism for a bi-partite graph which spawned a huge variety of tools, there exists a way to express more complex graphs in a generic way, building a new assembly language for data driven applications.

Make no mistake, Tez is an assembly language at its very core. It is raw and expressive but is an expert's tool, meant to be wielded by compiler developers catering to a tool userland. Pig and Hive already have compilers into this new backend. Cascading and then Scalding will add some API fun to the mix, but the framework sits below all those and consolidates everyone's efforts into a common rich baseline for performance. And there's a secret hidden-away MapReduce compiler for Tez as well, which often gets ignored.

A generalization is fine, but it is often a limitation as well - nearly every tool listed above wants to write small parts of the scheduling mechanisms, which allows for custom data routing and connecting up task outputs to task inputs manually (like a bucketed map-join). Tez is meant to be a good generalization to build each application's custom components on top of, but without actually writing any of the complex YARN code required to have error tolerance, rack/host locality and recovery from AM crashes. The VertexManager plugin API is one classic example of how an application can now interfere with how a DAG is scheduled and how its individual tasks are managed.

And last of all, I like how Tez is not self-centered - it works towards the global utilization ratio on a cluster, not just its own latency figures. It can be built to elastically respond to queue/cluster pressures from other tasks running on the cluster.
 
People are doing Tez a disfavour by comparing it to frameworks which rely on keeping slaves running not just to execute CPU tasks but to hold onto temporary storage as well. On a production cluster, getting 4 fewer containers than you asked for will not stall Tez, because of the way it uses the Shuffle mechanism as a temporary data store between DAG vertexes - it is designed to be all-or-best-effort, instead of waiting for the perfect moment to run your entire query. A single stalling reducer doesn't require any of the other JVMs to stay resident and wait. This isn't a problem for a daemon based multi-tenant cluster, because if there is another job for that cluster it will execute, but for a hadoop ecosystem cluster system built on YARN, this means that your cluster utilization takes a nose-dive due to the inability to acquire or release cluster resources incrementally/elastically during your actual data operation.

Between the frameworks I've played with, that is the real differentiating feature of Tez - Tez does not require containers to be kept running to do anything, just the AM running in the idle periods between different queries. You can hold onto containers, but it is an optimization, not a requirement during idle periods for the session.

I might not exactly be a fan of the user-friendliness of this assembly language layer for hadoop, but the flexibility of this more than compensates.
<<<
https://www.evernote.com/shard/s48/sh/ec01b659-2271-453f-b8b4-32e0c29ac848/2c1730994c0352208b373694f16de4e1
IBM Cúram Social Program Management 7.0.10 - 7.0.11

Batch Streaming Architecture
https://www.ibm.com/support/knowledgecenter/SS8S5A_7.0.11/com.ibm.curam.content.doc/BatchPerformanceMechanisms/c_BATCHPER_Architecture1BatchStreamingArchitecture1.html
 
The Chunker
https://www.ibm.com/support/knowledgecenter/SS8S5A_7.0.11/com.ibm.curam.content.doc/BatchPerformanceMechanisms/c_BATCHPER_Architecture1Chunker1.html

The Stream
https://www.ibm.com/support/knowledgecenter/SS8S5A_7.0.11/com.ibm.curam.content.doc/BatchPerformanceMechanisms/c_BATCHPER_Architecture1Stream1.html


! Verifying I/O bandwidth
{{{
-bash-3.00% id
uid=1000(oracle) gid=10000(dba) groups=1001(oinstall)
-bash-3.00% hostname
r09n01.pbm.ihost.com
-bash-3.00% pwd
/bench1/orion
-bash-3.00% ./orion.pl -t dss -f params/dss_params.txt -d 120 -n verification_test1
Checking and processing input arguments..
Workload type is : DSS
Input parameter file is : params/dss_params.txt
Run duration is : 120 (seconds)
Processed all input arguments..
Number of nodes is : 2
Degree of parallelism is 320 on Node r09n01
Degree of parallelism is 320 on Node r09n02
Starting iostat on node : r09n01
Starting Orion on node : r09n01
Starting iostat on node : r09n02
Starting Orion on node : r09n02
ORION: Oracle IO Numbers -- Version 11.1.0.4.0
Test will take approximately 3 minutes
Larger caches may take longer
ORION: Oracle IO Numbers -- Version 11.1.0.4.0
Test will take approximately 3 minutes
Larger caches may take longer
Copying results to results/verification_test1
From Node r09n01
From Node r09n02
Results from node r09n01
Maximum Large MBPS=1339.17 @ Small=0 and Large=320
Results from node r09n01
Maximum Large MBPS=1342.04 @ Small=0 and Large=320
-bash-3.00$
}}}
The aggregate bandwidth should exceed 1400 MBPS for one node, 2600 MBPS for two nodes, 3700
MBPS for three nodes and 4900 MBPS for four nodes. The preceding test achieved 2681 MBPS for
two nodes and passes the I/O bandwidth verification test.
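The pass/fail arithmetic above can be scripted as a quick sanity check: sum the per-node "Maximum Large MBPS" figures and compare against the threshold for the node count. The thresholds and per-node numbers below are the ones quoted in the text; nothing else is assumed.

```shell
#!/bin/sh
# Two-node bandwidth verification: aggregate the per-node ORION results
# (1339.17 and 1342.04 MBPS from the run above) and compare against the
# 2600 MBPS two-node pass threshold.
threshold=2600
node1=1339.17
node2=1342.04
result=$(awk -v a="$node1" -v b="$node2" -v t="$threshold" 'BEGIN {
    agg = a + b
    printf "aggregate = %.2f MBPS -> %s", agg, (agg >= t ? "PASS" : "FAIL")
}')
echo "$result"
```

For three or four nodes, swap in the 3700 or 4900 MBPS thresholds and add the extra node figures to the sum.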

! Verifying database integrity
The second verification test is to load 100 GB of test data into the TS_DATA table space and then run
a full table scan against the data. This test verifies the database installation, and ensures that an SQL
query can run to completion and that a full table scan can achieve similar I/O performance to the
ORION results. Successful completion of a load and query constitutes passing this database integrity test. 
{{{
Here is the set of scripts to run the full table scan test.
Please do the following things
1. Copy each script below into an executable shell script.
2. Execute the scripts in the order they are presented here
a. Table_creation.sh
b. Data_grow.sh
c. Full_table_scan.sh
3. Compare the MB/sec result from this test to the number achieved with
ORION. If the numbers are comparable then the test has been
successful
#################### Table_creation.sh ################################
# This script creates the user oracle and the table owitest.
sqlplus /nolog<<EOF
connect / as sysdba
drop user oracle cascade;
grant DBA to oracle identified by oracle;
alter user oracle default tablespace ts_data;
alter user oracle temporary tablespace temp;
connect oracle/oracle
create table owitest parallel nologging as select * from sys.dba_extents;
commit;
exit
EOF
#################### Data_grow.sh ####################################
# This script grows the data in the owitest table to over 100GB
# (each pass doubles the table, so 20 passes multiply the initial size by 2^20)
(( n=0 ))
while (( n<20 ));do
(( n=n+1 ))
sqlplus -s /NOLOG <<! &
connect oracle/oracle;
set timing on
set time on
alter session enable parallel dml;
insert /*+ APPEND */ into owitest select * from owitest;
commit;
exit;
!
wait
done
wait
#################### full_table_scan.sh ################################
# This script is called from the full_test.sh script.
sqlplus -s /NOLOG <<! &
connect oracle/oracle;
set timing on
set echo on
spool all_nodes_full_table_scan.log
col time1 new_value time1
col time2 new_value time2
select to_char(sysdate, 'SSSSS') time1 from dual;
Select count(*) from owitest;
select to_char(sysdate, 'SSSSS') time2 from dual;
select (sum(s.bytes)/1024/1024)/(&&time2 - &&time1) MB_PER_SEC
from sys.dba_segments s
where segment_name='OWITEST';
undef time1
undef time2
spool off
exit;
!
}}}
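The MB_PER_SEC expression in full_table_scan.sh is just the segment size in MB divided by elapsed seconds (the two SSSSS timestamps are seconds past midnight). The same arithmetic standalone, with illustrative numbers (100 GB scanned in 80 seconds; these are not from a real run):

```shell
#!/bin/sh
# Same formula as the spooled query: (sum(bytes)/1024/1024)/(time2 - time1).
bytes=107374182400   # 100 GB segment (illustrative)
t1=36000             # SSSSS before the scan
t2=36080             # SSSSS after the scan (80 s elapsed)
mb_per_sec=$(awk -v b="$bytes" -v t1="$t1" -v t2="$t2" \
    'BEGIN { printf "%.0f", (b/1024/1024)/(t2 - t1) }')
echo "$mb_per_sec MB/sec"
```

Compare the resulting figure to the ORION aggregate; if they are comparable, the database integrity test passes. Note the SSSSS arithmetic breaks if the scan crosses midnight.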

! setup_ssh.sh script 
{{{
#! /bin/ksh
#
#
HOSTNAME=r09n01
HOME=/home/oracle
#
cd $HOME
mkdir $HOME/.ssh
chmod 700 $HOME/.ssh
touch $HOME/.ssh/authorized_keys
chmod 600 $HOME/.ssh/authorized_keys
cd $HOME/.ssh
#
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa
#
ssh $HOSTNAME cat $HOME/.ssh/id_rsa.pub >>authorized_keys
ssh $HOSTNAME cat $HOME/.ssh/id_dsa.pub >>authorized_keys
}}}

! ORION dss_params.txt file
{{{
./orion.pl -t dss -f params/dss_params.txt -d 120 -n verification_test1

# DSS workload parameter file (keywords are case-insensitive, values are
# case-sensitive)
# disk device or LUN path=number of spindles (one line per device).
/dev/rhdisk12=5
/dev/rhdisk13=5
/dev/rhdisk14=5
/dev/rhdisk15=5
/dev/rhdisk16=5
/dev/rhdisk17=5
/dev/rhdisk18=5
/dev/rhdisk19=5
/dev/rhdisk26=5
/dev/rhdisk27=5
/dev/rhdisk28=5
/dev/rhdisk29=5
/dev/rhdisk30=5
/dev/rhdisk31=5
/dev/rhdisk32=5
/dev/rhdisk33=5
/dev/rhdisk40=5
/dev/rhdisk41=5
/dev/rhdisk42=5
/dev/rhdisk43=5
/dev/rhdisk44=5
/dev/rhdisk45=5
/dev/rhdisk46=5
/dev/rhdisk47=5
/dev/rhdisk53=5
/dev/rhdisk54=5
/dev/rhdisk55=5
/dev/rhdisk56=5
/dev/rhdisk57=5
/dev/rhdisk58=5
/dev/rhdisk59=5
/dev/rhdisk60=5
#
# default large random IO size, should be specified in bytes
dss_io_size=1048576

num_nodes=2
node_names=r09n01, r09n02
dop_per_node=320, 320
orion_location=/bench1/orion/bin/orion
}}}
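A quick way to sanity-check a params file like the one above is to total the spindle counts on the right-hand side of the LUN lines (32 LUNs at 5 spindles each here). A minimal sketch, assuming the file is saved as shown; the function name is ours, not part of the ORION toolkit:

```shell
#!/bin/sh
# count_spindles FILE
# Sum the "=N" spindle counts across the /dev/... lines of an ORION
# DSS parameter file, skipping comment and keyword lines.
count_spindles() {
    grep '^/dev/' "$1" | awk -F= '{ luns++; n += $2 }
        END { printf "%d LUNs, %d spindles\n", luns, n }'
}
```

Running `count_spindles dss_params.txt` against the file above should report 32 LUNs, 160 spindles.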
http://forums.cnet.com/7723-6121_102-392293/confused-primary-vs-secondary-master-vs-slave/
http://wiki.linuxquestions.org/wiki/IDE_master/slave
Identity Management 10.1.4.0 Product Cheat Sheet
  	Doc ID: 	Note:389468.1	
MAA Best Practices - Oracle Fusion Middleware
https://www.oracle.com/database/technologies/high-availability/fusion-middleware-maa.html

10.4.2 Creating a Database Service for Oracle Internet Directory
https://docs.oracle.com/cd/E52734_01/core/IMEDG/db_repos.htm#IMEDG30626

Fusion Middleware Enterprise Deployment Guide for Oracle Identity and Access Management
https://docs.oracle.com/cd/E52734_01/core/IMEDG/toc.htm






https://docs.oracle.com/en/database/oracle/oracle-database/19/inmem/optimizing-queries-with-join-groups.html#GUID-3E5491C4-B345-4A8E-8B1B-8DC150C8A797
https://docs.oracle.com/en/database/oracle/oracle-database/19/inmem/configuring-the-im-column-store.html#GUID-8844C889-E381-4B77-8A51-7AA6462B14D7
https://www.oracletutorial.com/oracle-basics/oracle-insert-all/
https://oracle-base.com/articles/9i/multitable-inserts

example code
{{{
INSERT /*+ APPEND */ ALL
/* FIRST there is a match ie the standard join returns rows */
WHEN (indicator is not null and indicator_sell is not null) THEN
INTO matched
(CODE
,SOURCE_ID
,SOURCE_ACCOUNT
,PROGRAM_ID
,TRANSACTION_DATE
,TRANSACTION_NUMBER
,VALUE
,FUNCTION
,INDICATOR_BUY
,COMMENTS_BUY
,INDICATOR_SELL
,COMMENTS_SELL)
VALUES
(CODE
,SOURCE_ID
,SOURCE_ACCOUNT
,PROGRAM_ID
,TRANSACTION_DATE
,TRANSACTION_NUMBER
,VALUE
,FUNCTION
,INDICATOR
,COMMENTS
,INDICATOR_SELL
,COMMENTS_SELL)
/* SELL row not buy */
WHEN (indicator is null and indicator_sell is not null) THEN
into postmatch_sell(CODE
,SOURCE_ID
,SOURCE_ACCOUNT
,PROGRAM_ID
,TRANSACTION_DATE
,TRANSACTION_NUMBER
,VALUE
,FUNCTION
,INDICATOR
,COMMENTS
)
VALUES
(CODE
,SOURCE_ID
,SOURCE_ACCOUNT
,PROGRAM_ID
,TRANSACTION_DATE
,TRANSACTION_NUMBER
,VALUE
,FUNCTION
,INDICATOR_SELL
,COMMENTS_SELL)
/* Buy row but not sell */
WHEN (indicator is not null and indicator_sell is null) THEN
into postmatch_buy(CODE
,SOURCE_ID
,SOURCE_ACCOUNT
,PROGRAM_ID
,TRANSACTION_DATE
,TRANSACTION_NUMBER
,VALUE
,FUNCTION
,INDICATOR
,COMMENTS
)
VALUES
(CODE
,SOURCE_ID
,SOURCE_ACCOUNT
,PROGRAM_ID
,TRANSACTION_DATE
,TRANSACTION_NUMBER
,VALUE
,FUNCTION
,INDICATOR
,COMMENTS)
select /*+ MONITOR */ /* Match Processing */
buy.*,sell.indicator as indicator_sell,sell.comments as comments_sell
from temp_prematch_buy buy
	FULL OUTER JOIN temp_prematch_sell sell
	ON
	(buy.CODE = sell.CODE
	AND buy.SOURCE_ID = sell.SOURCE_ID
	AND buy.SOURCE_ACCOUNT = sell.SOURCE_ACCOUNT
	AND buy.PROGRAM_ID = sell.PROGRAM_ID
	AND buy.TRANSACTION_DATE = sell.TRANSACTION_DATE
	AND buy.TRANSACTION_NUMBER = sell.TRANSACTION_NUMBER
	AND buy.VALUE = sell.VALUE
	AND buy.FUNCTION = sell.FUNCTION
	)
}}}
http://kevinclosson.wordpress.com/2009/04/28/how-to-produce-raw-spreadsheet-ready-physical-io-data-with-plsql-good-for-exadata-good-for-traditional-storage
{{{
set serveroutput on format wrapped size 1000000

create or replace directory mytmp as '/tmp';

DECLARE
n number;
m number;

gb number := 1024 * 1024 * 1024;
mb number := 1024 * 1024 ;

bpio number; -- 43 physical IO disk bytes
apio number;
disp_pio number(8,0);

bptrb number; -- 39 physical read total bytes
aptrb number;
disp_trb number(8,0);

bptwb number; -- 42 physical write total bytes
aptwb number;
disp_twb number(8,0);

x number := 1;
y number := 0;
fd1 UTL_FILE.FILE_TYPE;
BEGIN
        fd1 := UTL_FILE.FOPEN('MYTMP', 'mon.log', 'w');

        LOOP
                bpio := 0;
                apio := 0;

                select  sum(value) into bpio from gv$sysstat where statistic# = '43';
                select  sum(value) into bptwb from gv$sysstat where statistic# = '42';
                select  sum(value) into bptrb from gv$sysstat where statistic# = '39';

                n := DBMS_UTILITY.GET_TIME;
                DBMS_LOCK.SLEEP(5);

                select  sum(value) into apio from gv$sysstat where statistic# = '43';
                select  sum(value) into aptwb from gv$sysstat where statistic# = '42';
                select  sum(value) into aptrb from gv$sysstat where statistic# = '39';

                m := DBMS_UTILITY.GET_TIME - n ;

                disp_pio := ( (apio - bpio)   / ( m / 100 )) / mb ;
                disp_trb := ( (aptrb - bptrb) / ( m / 100 )) / mb ;
                disp_twb := ( (aptwb - bptwb) / ( m / 100 )) / mb ;

                UTL_FILE.PUT_LINE(fd1, TO_CHAR(SYSDATE,'HH24:MI:SS') || '|' || disp_pio || '|' || disp_trb || '|' || disp_twb || '|');
                UTL_FILE.FFLUSH(fd1);
                x := x + 1;
        END LOOP;

        UTL_FILE.FCLOSE(fd1);
END;
/

}}}
http://www.evernote.com/shard/s48/sh/2250c1af-3a88-482a-aac9-09902a243abf/9a0e5ea8f2bfa65c07fcdff8d8c06519

awr_iowl on sizing IOPS
http://www.evernote.com/shard/s48/sh/a9635aa5-8b78-4355-909f-2503e3a35a94/c0b0763b76a8bf532b597d8bdc08a2e9
http://www.evernote.com/shard/s48/sh/b5340200-f965-4cdd-9b2d-9cc49b8897e1/d366b016ea580e371fea56e44229eea1
What is the suggested I/O scheduler to improve disk performance when using Red Hat Enterprise Linux with virtualization?
http://kbase.redhat.com/faq/docs/DOC-5428

Thread: I/O scheduler in Oracle Linux 5.7
https://forums.oracle.com/forums/thread.jspa?threadID=2263820&tstart=0
http://www.thomas-krenn.com/en/oss/linux-io-stack-diagram.html
http://www.thomas-krenn.com/en/oss/linux-io-stack-diagram/linux-io-stack-diagram_v1.0.png
http://www.evernote.com/shard/s48/sh/dcc1cd2b-a858-424a-a95d-2a667e78eec1/f6a9c6e48d9ac153a9e1326cef1eac5a
http://www.iometer.org/doc/downloads.html


Guides
http://kb.fusionio.com/KB/a41/using-iometer-to-verify-iodrive-performance-on-windows.aspx
http://kb.fusionio.com/KB/a40/verifying-windows-system-performance.aspx
http://greg.porter.name/wiki/HowTo:iometer
http://blog.fosketts.net/2010/03/19/microsoft-intel-starwind-iscsi/

Useful
http://communities.vmware.com/docs/DOC-3961
http://old.nabble.com/understanding-disk-target-%22maximum-disk-size%22-td14341532.html
''smartscanloop'' http://www.evernote.com/shard/s48/sh/5985b021-4f70-4a1f-9578-0f719f8580da/d4aee8ac98395a9b8385eb7ccfb1a6f7
Tool for Gathering I/O Resource Manager Metrics: metric_iorm.pl [ID 1337265.1]
Guy's tool http://guyharrison.squarespace.com/blog/2011/7/31/a-perl-utility-to-improve-exadata-cellcli-statistics.html?lastPage=true#comment15512073
Configuring Exadata I/O Resource Manager for Common Scenarios [ID 1363188.1]
http://www.youtube.com/user/OPITZCONSULTINGpl
http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/11g/r1/exadata_iorm2/exadata_iorm2_viewlet_swf.html

wise words from twitter
{{{
@kevinclosson @orcldoug @timurakhmadeev I haven't seen a client yet who'd prefer inconsistent great performance to consistent decent perf.
}}}


! ''IORM test cases'' 
http://www.evernote.com/shard/s48/sh/300076b7-cdd5-48b9-89af-60acd3130058/972308e8a6a7233a7dec53cd301dfebd

''-- set test case environment''
{{{
Memory Component:		
db_cache_size       		7.00
java_pool_size      		0.06
large_pool_size     		0.91
memory_max_target   		0.00
memory_target       		0.00
pga_aggregate_target		10.00
sga_max_size        		12.16
sga_target          		0.00
shared_pool_size    		4.00


alter system set sga_max_size=10G scope=spfile sid='ACTEST1';
alter system set sga_target=10G scope=spfile sid='ACTEST1';
alter system set pga_aggregate_target=7G scope=spfile sid='ACTEST1';
alter system set db_cache_size=7G scope=spfile sid='ACTEST1';
alter system set shared_pool_size=2G scope=spfile sid='ACTEST1';
alter system set java_pool_size=200M scope=spfile sid='ACTEST1';
alter system set large_pool_size=200M  scope=spfile sid='ACTEST1';



grant EXECUTE ON DBMS_LOCK   to oracle;
grant SELECT ON V_$SESSION   to oracle;
grant SELECT ON V_$STATNAME  to oracle;
grant SELECT ON V_$SYSSTAT   to oracle;
grant SELECT ON V_$LATCH     to oracle;
grant SELECT ON V_$TIMER     to oracle;
grant SELECT ON V_$SQL       to oracle;
grant CREATE TYPE      to oracle;
grant CREATE TABLE     to oracle;
grant CREATE VIEW      to oracle;
grant CREATE PROCEDURE to oracle;


select sum(bytes)/1024/1024 from dba_segments where segment_name = 'OWITEST';
-- 21420.625 MB
-- 34503.375 MB dba_objects
select count(*) from oracle.owitest;
-- 261652480 rows
-- 327680000 rows dba_objects
}}}


''-- IORM commands - NOLIMIT''
{{{
test10-multi-10exadb-iorm-nolimit

sh orion_3_ftsallmulti.sh 10 exadb1
cat *log | egrep "dbm|exadb" | sort -rnk5

# main commands
alter iormplan dbPlan=( -
(name=dbm,    level=1, allocation=60), -
(name=exadb,   level=1, allocation=40), -
(name=other,    level=2, allocation=100));
alter iormplan active
list iormplan detail

list iormplan attributes objective
alter iormplan objective = low_latency



# list 
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'


# implement
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan dbPlan=\( \(name=dbm,    level=1, allocation=60\), \(name=exadb,   level=1, allocation=40\), \(name=other,    level=2, allocation=100\)\);'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan active'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'

dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective = low_latency'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'



# revert
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan dbPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan catPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan inactive'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'

dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'
}}}




''-- IORM commands - LIMIT''
{{{
test-10-multi-4dbm-6exadb-iorm-limit

sh saturate 4 dbm1 6 exadb1
cat *log | egrep "dbm|exadb" | sort -rnk5

# main commands
alter iormplan dbPlan=( -
(name=dbm,    level=1, allocation=60, limit=60), -
(name=exadb,   level=1, allocation=40, limit=40), -
(name=other,    level=2, allocation=100));
alter iormplan active
list iormplan detail

list iormplan attributes objective
alter iormplan objective = low_latency



# list 
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'


# implement
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan dbPlan=\( \(name=dbm,    level=1, allocation=60, limit=60\), \(name=exadb,   level=1, allocation=40, limit=40\), \(name=other,    level=2, allocation=100\)\);'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan active'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'

dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective = low_latency'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'



# revert
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan dbPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan catPlan=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'
dcli -g ~/cell_group -l root 'cellcli -e alter iormplan inactive'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan detail'

dcli -g ~/cell_group -l root 'cellcli -e alter iormplan objective=\"\"'
dcli -g ~/cell_group -l root 'cellcli -e list iormplan attributes objective'

}}}




''iosaturation toolkit - simple output sort''
{{{
# sort smartscan MB/s 
less smartscanloop.txt | sort -nk5 | tail
}}}


''iosaturation toolkit - advanced output sort''
{{{
# sort AAS 
cat smartscanloop.txt | sort -nk5 | tail

# sort latency 
cat smartscanloop.txt | sort -nk6 | tail

# sort smart scans returned 
cat smartscanloop.txt | sort -nk7 | tail

# sort interconnect 
cat smartscanloop.txt | sort -nk8 | tail

# sort smartscan MB/s 
cat smartscanloop.txt | sort -nk9 | tail
}}}








<<<
You can make use of the cell_iops.sh here http://karlarao.wordpress.com/scripts-resources/ to get a characterization of the IOs across the databases on the cell level. This only has to be executed on one of the cells, and it gives per-minute detail.
 
 
 
Whenever I need to characterize the IO profile of the databases for IORM config I would:
> pull the IO numbers from AWR
> pull the awr top events from AWR (this will tell if the DB is IO or CPU bound)
> get all these numbers in a consolidated view
> then from there depending on priority, criticality, workload type I would decide what makes sense for all of them (percentage allocation and IORM objective)
 
 
 
Whenever I need to evaluate an existing config with IORM already in place I would:
> pull the IO numbers from AWR
> pull the awr top events and look at the IO latency numbers (IO wait_class)
> pull the cell_iops.sh output on the workload periods where I'm seeing some high activity
> get all these numbers in a consolidated view
> get the different views of IO performance from AWR on all the databases http://goo.gl/YNUCEE
> validate the IO capacity of both Flash and Hard disk from the workload numbers of both AWR and CELL data
> for per consumer group I would use the "Useful monitoring SQLs" here http://goo.gl/I1mjd
> and if that's not enough then I would even do more fine-grained latency & IO monitoring using snapper
 
 
 
On the cell_iops.sh I have yet to add the latency by database and consumer group as well as the IORM_MODE, but the methods I've listed above work very well.
 
 
 
 
 
-Karl
<<<

https://community.oracle.com/thread/2613363

see also [[list metric history / current]]


http://www.slideshare.net/Enkitec/io-resource-management-on-exadata, https://dl.dropbox.com/u/92079964/IORM%20Planning%20Calculator.xlsx



iOS dev - iTunes U - Fall 2013 semester from Paul Hegarty (Stanford)
I/O Performance Tuning Tools for Oracle Database 11gR2 
http://www.dbasupport.com/oracle/ora11g/Oracle-Database-11gR2-IO-Tuning02.shtml
''MindMap IOsaturationtoolkit-v2 - IORM instrumentation blueprint'' http://www.evernote.com/shard/s48/sh/d1422308-0127-4c2f-97c3-561c59c9ef80/a93392f3d15097a258333a623da07481
http://www.natecarlson.com/2010/09/10/configuring-ipmp-on-nexentastor-3/
IP network multipathing http://en.wikipedia.org/wiki/IP_network_multipathing
Subject: 	Using IPv6 with Oracle E-Business Suite Releases 11i and 12
  	Doc ID: 	Note:567015.1

Subject: 	Oracle Fusion Middleware Support of IPv6
  	Doc ID: 	Note:431028.1

Subject: 	Does Oracle Application Server 10g R2 Version 10.1.2.0.2 Support IPv6?
  	Doc ID: 	Note:338011.1

Subject: 	Does Oracle 10g / 10gR2 support IPv6 ?
  	Doc ID: 	Note:362956.1

Subject: 	Oracle E-Business Suite R12 Configuration in a DMZ
  	Doc ID: 	Note:380490.1

Subject: 	E-Business Suite Recommended Set Up for Client/Server Products
  	Doc ID: 	Note:277535.1

Subject: 	Oracle Application Server Installer Incorrectly Parses IP6V Entries in /etc/inet/ipnodes on Solaris 10
  	Doc ID: 	Note:438323.1

Subject: 	Oracle Application Server 10g (10.1.3) Requirements for Linux (OEL 5.0 and RHEL 5.0)
  	Doc ID: 	Note:465159.1

Subject: 	Using AutoConfig to Manage System Configurations in Oracle E-Business Suite Release 12
  	Doc ID: 	Note:387859.1

Subject: 	Using AutoConfig to Manage System Configurations with Oracle Applications 11i
  	Doc ID: 	Note:165195.1
ISM or DISM Misconfiguration can Slow Down Oracle Database Performance (Doc ID 1472108.1)
When Will DISM Start On Oracle Database? (Doc ID 778777.1)

http://lifehacker.com/5691489/how-can-i-find-out-if-my-isp-is-limiting-my-download-speed
http://arup.blogspot.com/2011/01/more-on-interested-transaction-lists.html
{{{
There are two basic alternatives to solve the ITL wait problem:
(1) INITRANS
(2) Less Space for Data

select snap_id, ITL_WAITS_TOTAL, ITL_WAITS_DELTA from DBA_HIST_SEG_STAT;
select ini_trans from dba_tables;
}}}
http://www.antognini.ch/2011/04/itl-waits-changes-in-recent-releases/
http://www.antognini.ch/2011/06/itl-waits-changes-in-recent-releases-script/
http://www.antognini.ch/2013/05/itl-deadlocks-script/
http://neeraj-dba.blogspot.com/2012/05/interested-transaction-list-itl-in.html
http://avdeo.com/2008/06/16/interested-transaction-list-itl/  <-- interesting explanation of block, ITL, and tied to Undo transaction table and segments
{{{
Oracle Data block is divided into 3 major portions.
> Oracle Fixed size header
> Oracle Variable size header
> Oracle Data content space
}}}


! mos Troubleshooting waits for 'enq: TX - allocate ITL entry' (Doc ID 1472175.1)
{{{
SYMPTOMS

Observe high waits for event enq: TX - allocate ITL entry

Top 5 Timed Foreground Events

Event                           Waits  Time(s)  Avg wait (ms)  % DB time  Wait Class
enq: TX - allocate ITL entry    1,200   3,129           2607       85.22  Configuration
DB CPU                                                   323        8.79 
gc buffer busy acquire         17,261      50              3        1.37  Cluster
gc cr block 2-way             143,108      48              0        1.32  Cluster
gc current block busy          10,631      46              4        1.24  Cluster

CAUSE

By default, the INITRANS value for a table is 1 and for an index is 2. When too many concurrent DML transactions compete for the same data block, we observe this wait event: "enq: TX - allocate ITL entry".

Reorganizing the table or index with a larger INITRANS or PCTFREE value helps to reduce "enq: TX - allocate ITL entry" wait events.
 
As per the AWR report, below are the tables which reported this wait event:

Segments by ITL Waits

  * % of Capture shows % of ITL waits for each top segment compared with total ITL waits for all segments captured by the Snapshot

Owner Tablespace Name Object Name Subobject Name Obj. Type       ITL  Waits % of Capture
PIN   BRM_TABLES      SERVICE_T                  TABLE           188               84.30
PIN   BRM_TABLES      BILLINFO_T  P_R_06202012   TABLE PARTITION  35               15.70

 

For more details, search the AWR report for the section "Segments by ITL Waits".

 

SOLUTION

To reduce "enq: TX - allocate ITL entry" wait events, follow the steps below.

A)

1) Depending on the number of concurrent transactions on the table, alter the value of INITRANS.

alter table <table name> INITRANS 50;

2) Then re-organize the table using move (alter table <table_name> move;)

3) Then rebuild all the indexes of this table as below

alter index <index_name> rebuild INITRANS 50;
 

If the issue is not resolved by the above steps, try increasing PCTFREE.


B)

1) Spreading rows across a larger number of blocks also helps to reduce this wait event.

alter table <table name>  PCTFREE 40;
2) Then re-organize the table using move (alter table <table_name> move;)

3) Rebuild index

alter index index_name  rebuild PCTFREE 40;
 

Or you can combine steps A and B as below:


1) Set INITRANS to 50 and PCTFREE to 40

alter table <table_name> PCTFREE 40  INITRANS 50;

2) Then re-organize the table using move (alter table <table_name> move;)

3) Then rebuild all the indexes of the table as below

alter index <index_name>  rebuild PCTFREE 40 INITRANS 50;


NOTE:
The table/index can be altered to set the new value for INITRANS. But the altered value takes effect for new blocks only. Basically you need to rebuild the objects so that the blocks are initialized again.

For an index this means the index needs to be rebuilt or recreated.

For a table this can be achieved through:
exp/imp
alter table move
dbms_redefinition
}}}
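A sketch of the combined A+B fix above for a single table (MYTAB is a placeholder; 50/40 are the note's example values, size them to your actual concurrency):
{{{
alter table mytab pctfree 40 initrans 50;
alter table mytab move;   -- re-initializes the blocks; invalidates the indexes

-- generate the rebuild statements for all of the table's indexes
select 'alter index ' || index_name || ' rebuild pctfree 40 initrans 50;'
  from user_indexes
 where table_name = 'MYTAB';
}}}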
IT Systems Management [ID 280.1]
IT Risk Management Advisor: Oracle [ID 318.1]
My Oracle Support Health Check Catalog [ID 868955.1]
https://www.quora.com/search?q=openstack+vs+enterprise+architecture
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=openstack+enterprise+architecture
https://www.google.com/search?q=openstack&espv=2&rlz=1C5CHFA_enUS696US696&biw=1276&bih=703&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiN5YezpvvPAhWC5iYKHcJDCDcQ_AUICCgD#imgrc=gqzdQ8G0MtutgM%3A
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=itil%20vs%20enterprise%20architecture
COBIT vs ITIL vs TOGAF https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=itil%20vs%20enterprise%20architecture
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=ITSM+openstack
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=database+as+a+service+ITIL&start=10


https://www.openstack.org/assets/path-to-cloud/OpenStack-6x9Booklet-online.pdf
https://www.linkedin.com/pulse/itil-vs-ea-matthew-kern
http://blogs.vmware.com/accelerate/2015/07/itsm-relevance-in-the-cloud.html
ITIL Open Source Solution Stack http://events.linuxfoundation.org/sites/events/files/slides/ITIL-v1.1.pdf
http://www.slideshare.net/rajib_kundu/itil-relation-with-dba
DBAs and the ITIL Framework http://www.sqlservercentral.com/articles/ITIL/131734/
http://www.tesora.com/database-as-a-service/
http://www.oracle.com/technetwork/oem/cloud-mgmt/con3028-dbaasatboeing-2332603.pdf
https://wiki.openstack.org/wiki/Trove , Oracle databases and OpenStack Trove https://www.youtube.com/watch?v=C7iNPv4LNB0
http://www.allenhayden.com/cgi/getdoc.pl?file=doc111.pdf
Identifying Resource Intensive SQL in a production environment - Virag Saksena
/***
|Name|ImageSizePlugin|
|Source|http://www.TiddlyTools.com/#ImageSizePlugin|
|Version|1.2.2|
|Author|Eric Shulman|
|License|http://www.TiddlyTools.com/#LegalStatements|
|~CoreVersion|2.1|
|Type|plugin|
|Description|adds support for resizing images|
This plugin adds optional syntax to scale an image to a specified width and height and/or interactively resize the image with the mouse.
!!!!!Usage
<<<
The extended image syntax is:
{{{
[img(w+,h+)[...][...]]
}}}
where ''(w,h)'' indicates the desired width and height (in CSS units, e.g., px, em, cm, in, or %). Use ''auto'' (or a blank value) for either dimension to scale that dimension proportionally (i.e., maintain the aspect ratio). You can also calculate a CSS value 'on-the-fly' by using a //javascript expression// enclosed between """{{""" and """}}""". Appending a plus sign (+) to a dimension enables interactive resizing in that dimension (by dragging the mouse inside the image). Use ~SHIFT-click to show the full-sized (un-scaled) image. Use ~CTRL-click to restore the starting size (either scaled or full-sized).
<<<
!!!!!Examples
<<<
{{{
[img(100px+,75px+)[images/meow2.jpg]]
}}}
[img(100px+,75px+)[images/meow2.jpg]]
{{{
[<img(34%+,+)[images/meow.gif]]
[<img(21% ,+)[images/meow.gif]]
[<img(13%+, )[images/meow.gif]]
[<img( 8%+, )[images/meow.gif]]
[<img( 5% , )[images/meow.gif]]
[<img( 3% , )[images/meow.gif]]
[<img( 2% , )[images/meow.gif]]
[img(  1%+,+)[images/meow.gif]]
}}}
[<img(34%+,+)[images/meow.gif]]
[<img(21% ,+)[images/meow.gif]]
[<img(13%+, )[images/meow.gif]]
[<img( 8%+, )[images/meow.gif]]
[<img( 5% , )[images/meow.gif]]
[<img( 3% , )[images/meow.gif]]
[<img( 2% , )[images/meow.gif]]
[img(  1%+,+)[images/meow.gif]]
{{tagClear{
}}}
<<<
!!!!!Revisions
<<<
2010.07.24 [1.2.2] moved tip/dragtip text to config.formatterHelpers.imageSize object to enable customization
2009.02.24 [1.2.1] cleanup width/height regexp, use '+' suffix for resizing
2009.02.22 [1.2.0] added stretchable images
2008.01.19 [1.1.0] added evaluated width/height values
2008.01.18 [1.0.1] regexp for "(width,height)" now passes all CSS values to browser for validation
2008.01.17 [1.0.0] initial release
<<<
!!!!!Code
***/
//{{{
version.extensions.ImageSizePlugin= {major: 1, minor: 2, revision: 2, date: new Date(2010,7,24)};
//}}}
//{{{
var f=config.formatters[config.formatters.findByField("name","image")];
f.match="\\[[<>]?[Ii][Mm][Gg](?:\\([^,]*,[^\\)]*\\))?\\[";
f.lookaheadRegExp=/\[([<]?)(>?)[Ii][Mm][Gg](?:\(([^,]*),([^\)]*)\))?\[(?:([^\|\]]+)\|)?([^\[\]\|]+)\](?:\[([^\]]*)\])?\]/mg;
f.handler=function(w) {
	this.lookaheadRegExp.lastIndex = w.matchStart;
	var lookaheadMatch = this.lookaheadRegExp.exec(w.source)
	if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
		var floatLeft=lookaheadMatch[1];
		var floatRight=lookaheadMatch[2];
		var width=lookaheadMatch[3];
		var height=lookaheadMatch[4];
		var tooltip=lookaheadMatch[5];
		var src=lookaheadMatch[6];
		var link=lookaheadMatch[7];

		// Simple bracketted link
		var e = w.output;
		if(link) { // LINKED IMAGE
			if (config.formatterHelpers.isExternalLink(link)) {
				if (config.macros.attach && config.macros.attach.isAttachment(link)) {
					// see [[AttachFilePluginFormatters]]
					e = createExternalLink(w.output,link);
					e.href=config.macros.attach.getAttachment(link);
					e.title = config.macros.attach.linkTooltip + link;
				} else
					e = createExternalLink(w.output,link);
			} else 
				e = createTiddlyLink(w.output,link,false,null,w.isStatic);
			addClass(e,"imageLink");
		}

		var img = createTiddlyElement(e,"img");
		if(floatLeft) img.align="left"; else if(floatRight) img.align="right";
		if(width||height) {
			var x=width.trim(); var y=height.trim();
			var stretchW=(x.substr(x.length-1,1)=='+'); if (stretchW) x=x.substr(0,x.length-1);
			var stretchH=(y.substr(y.length-1,1)=='+'); if (stretchH) y=y.substr(0,y.length-1);
			if (x.substr(0,2)=="{{")
				{ try{x=eval(x.substr(2,x.length-4))} catch(e){displayMessage(e.description||e.toString())} }
			if (y.substr(0,2)=="{{")
				{ try{y=eval(y.substr(2,y.length-4))} catch(e){displayMessage(e.description||e.toString())} }
			img.style.width=x.trim(); img.style.height=y.trim();
			config.formatterHelpers.addStretchHandlers(img,stretchW,stretchH);
		}
		if(tooltip) img.title = tooltip;

		// GET IMAGE SOURCE
		if (config.macros.attach && config.macros.attach.isAttachment(src))
			src=config.macros.attach.getAttachment(src); // see [[AttachFilePluginFormatters]]
		else if (config.formatterHelpers.resolvePath) { // see [[ImagePathPlugin]]
			if (config.browser.isIE || config.browser.isSafari) {
				img.onerror=(function(){
					this.src=config.formatterHelpers.resolvePath(this.src,false);
					return false;
				});
			} else
				src=config.formatterHelpers.resolvePath(src,true);
		}
		img.src=src;
		w.nextMatch = this.lookaheadRegExp.lastIndex;
	}
}

config.formatterHelpers.imageSize={
	tip: 'SHIFT-CLICK=show full size, CTRL-CLICK=restore initial size',
	dragtip: 'DRAG=stretch/shrink, '
}

config.formatterHelpers.addStretchHandlers=function(e,stretchW,stretchH) {
	e.title=((stretchW||stretchH)?this.imageSize.dragtip:'')+this.imageSize.tip;
	e.statusMsg='width=%0, height=%1';
	e.style.cursor='move';
	e.originalW=e.style.width;
	e.originalH=e.style.height;
	e.minW=Math.max(e.offsetWidth/20,10);
	e.minH=Math.max(e.offsetHeight/20,10);
	e.stretchW=stretchW;
	e.stretchH=stretchH;
	e.onmousedown=function(ev) { var ev=ev||window.event;
		this.sizing=true;
		this.startX=!config.browser.isIE?ev.pageX:(ev.clientX+findScrollX());
		this.startY=!config.browser.isIE?ev.pageY:(ev.clientY+findScrollY());
		this.startW=this.offsetWidth;
		this.startH=this.offsetHeight;
		return false;
	};
	e.onmousemove=function(ev) { var ev=ev||window.event;
		if (this.sizing) {
			var s=this.style;
			var currX=!config.browser.isIE?ev.pageX:(ev.clientX+findScrollX());
			var currY=!config.browser.isIE?ev.pageY:(ev.clientY+findScrollY());
			var newW=(currX-this.offsetLeft)/(this.startX-this.offsetLeft)*this.startW;
			var newH=(currY-this.offsetTop )/(this.startY-this.offsetTop )*this.startH;
			if (this.stretchW) s.width =Math.floor(Math.max(newW,this.minW))+'px';
			if (this.stretchH) s.height=Math.floor(Math.max(newH,this.minH))+'px';
			clearMessage(); displayMessage(this.statusMsg.format([s.width,s.height]));
		}
		return false;
	};
	e.onmouseup=function(ev) { var ev=ev||window.event;
		if (ev.shiftKey) { this.style.width=this.style.height=''; }
		if (ev.ctrlKey)  { this.style.width=this.originalW; this.style.height=this.originalH; }
		this.sizing=false;
		clearMessage();
		return false;
	};
	e.onmouseout=function(ev) { var ev=ev||window.event;
		this.sizing=false;
		clearMessage();
		return false;
	};
}
//}}}
http://www.oracle.com/technetwork/database/database-technologies/timesten/overview/ds-imdb-cache-1470955.pdf?ssSourceSiteId=ocomen

Using Oracle In-Memory Database Cache to Accelerate the Oracle Database 
http://www.oracle.com/technetwork/database/database-technologies/performance/wp-imdb-cache-130299.pdf?ssSourceSiteId=ocomen

@@a columnar, compressed, in-memory cache of your on-disk data@@

http://www.oracle.com/us/corporate/press/2020717
http://www.oracle.com/us/corporate/features/database-in-memory-option/index.html
http://www.oracle.com/us/products/database/options/database-in-memory/overview/index.html

http://oracle-base.com/articles/12c/in-memory-column-store-12cr1.php
http://www.scaleabilities.co.uk/2014/07/25/oracles-in-memory-database-the-true-cost-of-licensing/
Oracle Database 12c In-Memory videos https://www.youtube.com/playlist?list=PLKCk3OyNwIzu4veZ1FFe32ZsvFHGlT4gZ
@@Oracle by Example: Oracle 12c In-Memory series https://apexapps.oracle.com/pls/apex/f?p=44785:24:106572632124906::::P24_CONTENT_ID,P24_PREV_PAGE:10152,24@@

@@official documentation http://www.oracle.com/technetwork/database/in-memory/documentation/index.html @@


! others
using the in-memory column store http://docs.oracle.com/database/121/ADMIN/memory.htm#ADMIN14257
about the in-memory column store http://docs.oracle.com/database/121/TGDBA/tune_sga.htm#TGDBA95379
glossary http://docs.oracle.com/database/121/CNCPT/glossary.htm#CNCPT89131
concepts guide - in-memory column store http://docs.oracle.com/database/121/CNCPT/memory.htm#CNCPT89659
ORACLE DATABASE 12 C IN-MEMORY OPTION http://www.oracle.com/us/solutions/sap/nl23-db12c-imo-en-2209396.pdf
http://www.oracle-base.com/articles/12c/in-memory-column-store-12cr1.php

! and others
http://www.oracle.com/us/solutions/sap/nl23-db12c-imo-en-2209396.pdf
http://www.oracle.com/technetwork/database/in-memory/overview/twp-oracle-database-in-memory-2245633.html
http://www.oracle.com/technetwork/database/options/database-in-memory-ds-2210927.pdf
https://search.oracle.com/search/search?search_p_main_operator=all&group=Blogs&q=engineered%20weblog:In-Memory
http://blog.tanelpoder.com/2014/06/10/our-take-on-the-oracle-database-12c-in-memory-option/
http://www.oracle.com/technetwork/database/in-memory/documentation/index.html
http://docs.oracle.com/database/121/CNCPT/memory.htm#CNCPT89659
http://docs.oracle.com/database/121/ADMIN/memory.htm#ADMIN14257
http://www.rittmanmead.com/2014/08/taking-a-look-at-the-oracle-database-12c-in-memory-option/
http://www.oracle.com/us/products/database/options/database-in-memory/overview/index.html
http://www.oracle.com/technetwork/database/options/dbim-vs-sap-hana-2215625.pdf?ssSourceSiteId=ocomen
http://www.oracle.com/technetwork/database/bi-datawarehousing/data-warehousing-wp-12c-1896097.pdf
http://www.oracle.com/us/solutions/sap/nl23-db12c-imo-en-2209396.pdf
https://docs.oracle.com/database/121/NEWFT/chapter12102.htm#BGBEGFAF

! disable in-memory option
{{{
ALTER SYSTEM SET INMEMORY_FORCE=OFF SCOPE=both sid='*';
ALTER SYSTEM SET INMEMORY_QUERY=DISABLE SCOPE=both sid='*';
ALTER SYSTEM RESET INMEMORY_SIZE SCOPE=SPFILE sid='*';
SHUTDOWN IMMEDIATE;
STARTUP;
}}}
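To sanity-check that the option is really inactive after the restart (V$IM_SEGMENTS per the 12c docs; it should come back empty once INMEMORY_SIZE is 0):
{{{
show parameter inmemory               -- INMEMORY_SIZE should now be 0
select count(*) from v$im_segments;   -- expect no populated segments
}}}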












RMAN puzzle: database reincarnation is not in sync with catalog https://blogs.oracle.com/gverma/entry/rman_puzzle_database_reincarna
Randolf Geist on 11g Incremental Statistics
http://www.oaktable.net/content/randolf-geist-11g-incremental-statistics

https://blogs.oracle.com/optimizer/entry/incremental_statistics_maintenance_what_statistics
https://blogs.oracle.com/optimizer/incremental-statistics-maintenance-what-statistics-will-be-gathered-after-dml-occurs-on-the-table  <- comments by maria

http://oracledoug.com/serendipity/index.php?/archives/1596-Statistics-on-Partitioned-Tables-Part-6a-COPY_TABLE_STATS.html

https://blogs.oracle.com/optimizer/efficient-statistics-maintenance-for-partitioned-tables-using-incremental-statistics-part-1
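A minimal sketch of turning incremental statistics on for one partitioned table (SH.SALES used as a stand-in; 11g+ syntax):
{{{
exec dbms_stats.set_table_prefs('SH', 'SALES', 'INCREMENTAL', 'TRUE');
select dbms_stats.get_prefs('INCREMENTAL', 'SH', 'SALES') from dual;
-- with INCREMENTAL=TRUE and GRANULARITY=AUTO, global stats are derived
-- from partition-level synopses instead of a full-table re-scan
exec dbms_stats.gather_table_stats('SH', 'SALES');
}}}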


! other speed up options

!! concurrent = true
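
Presumably this refers to the DBMS_STATS CONCURRENT global preference (11.2.0.2+); a sketch, noting that it also needs Resource Manager enabled and job_queue_processes > 0:
{{{
exec dbms_stats.set_global_prefs('CONCURRENT', 'TRUE');  -- 12c accepts MANUAL/AUTOMATIC/ALL/OFF instead
select dbms_stats.get_prefs('CONCURRENT') from dual;
}}}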



http://www.dbspecialists.com/blog/uncategorized/index-usage-monitoring-and-keeping-the-horses-out-front/
bde_rebuild.sql - Validates and rebuilds indexes occupying more space than needed
  	Doc ID: 	182699.1

Script to capture INDEX_STAT Information
  	Doc ID: 	35492.1

How Does the Index Block Splitting Mechanism Work for B*tree Indexes?
  	Doc ID: 	183612.1

Note 30405.1 How Btree Indexes Are Maintained 

Script to List Percentage Utilization of Index Tablespace
  	Doc ID: 	1039284.6
Full Coverage in Infiniband Monitoring with OSWatcher 3.0: IB Monitoring
http://husnusensoy.wordpress.com/tag/infiniband/

<<<
Infiniband bonding is somewhat similar to classical network bonding (or aggregation), with some behavioral differences. The major difference is that the Infiniband network bonding interface runs in active/passive mode over the Infiniband HCAs. No trunking is allowed, as is possible with a classical Ethernet network. So if you have two 20 Gbit interfaces you will have 20 Gbit theoretical throughput in an active IB network even though you have two (or more) interfaces. This can be seen easily in the output of ifconfig as well: while the ib0 interface has send/receive statistics, there is almost no traffic running over the ib2 interface.

In case of a failure (or it can be done manually) bonding interface will detect the failure in the active component and will failover to the passive one and you will see some informative warning message in the /var/log/messages file just like in Ethernet bonding.
<<<

<<<
''In a successful RAC configuration the failover duration should be less than any CRS or watchdog timeout value.'' That’s because for a period of time no interconnect traffic (heartbeats, or cache fusion) will be available. So if this failover duration is too long, due to host CPU utilization, a problem in the HCA firmware, a configuration problem at the IB switch, or any other problem, the clusterware or some watchdog will assume that the node should be evicted from the cluster to protect cluster integrity.
<<<

https://blogs.oracle.com/networking/entry/infiniband_vocabulary
http://www.spinics.net/lists/linux-rdma/msg07546.html
http://www.mail-archive.com/general@lists.openfabrics.org/msg08014.html
http://people.redhat.com/dledford/infiniband_get_started.html
http://www.mail-archive.com/general@lists.openfabrics.org/msg08014.html

/***
|Name|InlineJavascriptPlugin|
|Source|http://www.TiddlyTools.com/#InlineJavascriptPlugin|
|Documentation|http://www.TiddlyTools.com/#InlineJavascriptPluginInfo|
|Version|1.9.5|
|Author|Eric Shulman|
|License|http://www.TiddlyTools.com/#LegalStatements|
|~CoreVersion|2.1|
|Type|plugin|
|Description|Insert Javascript executable code directly into your tiddler content.|
''Call directly into TW core utility routines, define new functions, calculate values, add dynamically-generated TiddlyWiki-formatted output'' into tiddler content, or perform any other programmatic actions each time the tiddler is rendered.
!!!!!Documentation
>see [[InlineJavascriptPluginInfo]]
!!!!!Revisions
<<<
2009.04.11 [1.9.5] pass current tiddler object into wrapper code so it can be referenced from within 'onclick' scripts
2009.02.26 [1.9.4] in $(), handle leading '#' on ID for compatibility with JQuery syntax
|please see [[InlineJavascriptPluginInfo]] for additional revision details|
2005.11.08 [1.0.0] initial release
<<<
!!!!!Code
***/
//{{{
version.extensions.InlineJavascriptPlugin= {major: 1, minor: 9, revision: 5, date: new Date(2009,4,11)};

config.formatters.push( {
	name: "inlineJavascript",
	match: "\\<script",
	lookahead: "\\<script(?: src=\\\"((?:.|\\n)*?)\\\")?(?: label=\\\"((?:.|\\n)*?)\\\")?(?: title=\\\"((?:.|\\n)*?)\\\")?(?: key=\\\"((?:.|\\n)*?)\\\")?( show)?\\>((?:.|\\n)*?)\\</script\\>",

	handler: function(w) {
		var lookaheadRegExp = new RegExp(this.lookahead,"mg");
		lookaheadRegExp.lastIndex = w.matchStart;
		var lookaheadMatch = lookaheadRegExp.exec(w.source)
		if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
			var src=lookaheadMatch[1];
			var label=lookaheadMatch[2];
			var tip=lookaheadMatch[3];
			var key=lookaheadMatch[4];
			var show=lookaheadMatch[5];
			var code=lookaheadMatch[6];
			if (src) { // external script library
				var script = document.createElement("script"); script.src = src;
				document.body.appendChild(script); document.body.removeChild(script);
			}
			if (code) { // inline code
				if (show) // display source in tiddler
					wikify("{{{\n"+lookaheadMatch[0]+"\n}}}\n",w.output);
				if (label) { // create 'onclick' command link
					var link=createTiddlyElement(w.output,"a",null,"tiddlyLinkExisting",wikifyPlainText(label));
					var fixup=code.replace(/document.write\s*\(/gi,'place.bufferedHTML+=(');
					link.code="function _out(place,tiddler){"+fixup+"\n};_out(this,this.tiddler);"
					link.tiddler=w.tiddler;
					link.onclick=function(){
						this.bufferedHTML="";
						try{ var r=eval(this.code);
							if(this.bufferedHTML.length || (typeof(r)==="string")&&r.length)
								var s=this.parentNode.insertBefore(document.createElement("span"),this.nextSibling);
							if(this.bufferedHTML.length)
								s.innerHTML=this.bufferedHTML;
							if((typeof(r)==="string")&&r.length) {
								wikify(r,s,null,this.tiddler);
								return false;
							} else return r!==undefined?r:false;
						} catch(e){alert(e.description||e.toString());return false;}
					};
					link.setAttribute("title",tip||"");
					var URIcode='javascript:void(eval(decodeURIComponent(%22(function(){try{';
					URIcode+=encodeURIComponent(encodeURIComponent(code.replace(/\n/g,' ')));
					URIcode+='}catch(e){alert(e.description||e.toString())}})()%22)))';
					link.setAttribute("href",URIcode);
					link.style.cursor="pointer";
					if (key) link.accessKey=key.substr(0,1); // single character only
				}
				else { // run script immediately
					var fixup=code.replace(/document.write\s*\(/gi,'place.innerHTML+=(');
					var c="function _out(place,tiddler){"+fixup+"\n};_out(w.output,w.tiddler);";
					try	 { var out=eval(c); }
					catch(e) { out=e.description?e.description:e.toString(); }
					if (out && out.length) wikify(out,w.output,w.highlightRegExp,w.tiddler);
				}
			}
			w.nextMatch = lookaheadMatch.index + lookaheadMatch[0].length;
		}
	}
} )
//}}}

// // Backward-compatibility for TW2.1.x and earlier
//{{{
if (typeof(wikifyPlainText)=="undefined") window.wikifyPlainText=function(text,limit,tiddler) {
	if(limit > 0) text = text.substr(0,limit);
	var wikifier = new Wikifier(text,formatter,null,tiddler);
	return wikifier.wikifyPlain();
}
//}}}

// // GLOBAL FUNCTION: $(...) -- 'shorthand' convenience syntax for document.getElementById()
//{{{
if (typeof($)=='undefined') { function $(id) { return document.getElementById(id.replace(/^#/,'')); } }
//}}}
--------------------------------------------------------------
WHEN INSTALLING ORACLE, GO TO THESE SITES AND METALINK NOTES
--------------------------------------------------------------

# Note 466757.1 Critical Patch Update January 2008 Availability Information for Oracle Server and Middleware Products
		- this is where you check the CPUs that you'll download

		
# Note 466759.1 Known Issues for Oracle Database Critical Patch Update
		- this document lists the known issues for Oracle Database Critical Patch Update dated January 2008 (CPUJan2008). 
			These known issues are in addition to the issues listed in the individual CPUJan2008 READMEs.
		
			
# Note 394486.1 Risk Matrix Glossary -- terms and definitions for Critical Patch Update risk matrices
		- this explains the columns found on the CPU vulnerability matrix and explains the Common Vulnerability Scoring Standard (CVSS)
	
		
# Note 394487.1 Use of Common Vulnerability Scoring System (CVSS) by Oracle
		- explains the CVSS
	
		
# Note 455294.1 Oracle E-Business Suite Critical Patch Update Note October 2007
		- when you're patching e-Business suite, go to this note
		
		
# Note 438314.1 Critical Patch Update - Introduction to Database n-Apply CPUs
		- merge apply

# Oracle® Database on AIX®, HP-UX®, Linux®, Mac OS® X, Solaris®, Tru64 Unix® Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.1)
 	Doc ID:	Note:169706.1
 	
# ALERT: Oracle 10g Release 2 (10.2) Support Status and Alerts 
  Doc ID:  Note:316900.1 
  
# Upgrade Companion



--------------------------------------------------
SEPARATE ASM ORACLE_HOME AND ORACLE ORACLE_HOME
--------------------------------------------------

separating the ASM ORACLE_HOME from the database ORACLE_HOME was introduced in 10gR2; this also includes a separate CLUSTERWARE home,
so you'll have three (3) ORACLE_HOMEs if you're configuring a RAC environment

ORACLE_HOME
ASM_HOME
CRS_HOME


-----------------------------------
CLUSTER SYNCHRONIZATION SERVICES
-----------------------------------

ORACLE_HOME
ASM_HOME

If you're using ASM and want to remove the Oracle software ORACLE_HOME, make sure that CSS is not running from that ORACLE_HOME.
If it is running there, reconfigure the CSS daemon to run from another home (ASM_HOME). By default, if you create a separate HOME
for ASM, then CSS will be created there.


CSS is created when:
1) you use ASM as storage
2) when you install Clusterware (RAC, but Clusterware has its separate home already)


	For Oracle Real Application Clusters installations, the CSS daemon is installed with Oracle Clusterware in a separate Oracle home 
	directory (also called the Clusterware home directory). For single-node installations, the CSS daemon is installed in and runs from 
	the same Oracle home as Oracle Database.
	
	If you plan to have more than one Oracle Database 10g installation on a single system and you want to use Automatic Storage Management 
	for database file storage, then Oracle recommends that you run the CSS daemon and the Automatic Storage Management instance from the 
	same Oracle home directory and use different Oracle home directories for the database instances.


	Oracle® Database Installation Guide 10g Release 2 (10.2) for Linux x86 --> 6 Removing Oracle Software
				Enter the following command to identify the Oracle home directory being used to run the CSS daemon:
				
				# more /etc/oracle/ocr.loc
				
				
				The output from this command is similar to the following:
				
				ocrconfig_loc=/u01/app/oracle/product/10.2.0/db_1/cdata/localhost/local.ocr
				local_only=TRUE
				
				
				The ocrconfig_loc parameter specifies the location of the Oracle Cluster Registry (OCR) used by the CSS daemon. The path up to the cdata directory 
				is the Oracle home directory where the CSS daemon is running (/u01/app/oracle/product/10.2.0/db_1 in this example).
				
				Note:
				If the value of the local_only parameter is FALSE, Oracle Clusterware is installed on this system.
				
				as ROOT
				Set the ORACLE_HOME environment variable to specify the path to this Oracle home directory:
				      Bourne, Bash, or Korn shell:
					
						# ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_2;
						# export ORACLE_HOME
				
				Enter the following command to reconfigure the CSS daemon to run from this Oracle home:
				
						# $ORACLE_HOME/bin/localconfig reset $ORACLE_HOME
						
				This command stops the Oracle CSS daemon, reconfigures it in the new Oracle home, and then restarts it. 
				When the system boots, the CSS daemon starts automatically from the new Oracle home.

				Then edit /etc/oratab..
				+ASM:/u01/app/oracle/product/10.2.0/db_2:N
				

-----------------------------------
MEMORY FAQs
-----------------------------------

32bit linux
shmmax can be set up to 4GB, but with 32bit Oracle database software the SGA is 1.7GB max REGARDLESS OF PLATFORM

32bit windows
process memory on 32bit windows is up to 2GB max, but the SGA is still up to 1.7GB max (REGARDLESS OF PLATFORM)


-----------------------------------
Enterprise Manager Grid Control
-----------------------------------

# ORACLE_HOME
	You can install this release more than once on the same system, as long as each installation is done in a separate Oracle home directory.

# Management Agent

	Ensure that the Management Agent Oracle home does not contain any other Oracle software installation.
	
	For Management Agent deployments, make sure that the /tmp directory has 1300 MB of disk space available on the target machine.

	Before you begin the installation of a Management Agent, ensure that the target host where you want to install the Management Agent has the appropriate users and operating system groups created. For information about creating the required users and operating system groups, see Chapter 1, "Creating Required Operating System Groups and Users". Also ensure that the target host has the group name as well as the group id created; otherwise, the installation will fail.
	
	You can install the Management Agent in 7 ways:
		
		Agent Deploy Application
      		(installation types)
				Fresh Installation of the Management Agent
				Installation Using a Shared Agent Home
					
			NOTE:	
					NFS agent deployment is not supported on a cluster. If you want the agent to monitor a cluster and Oracle RAC, you must use the agent deployment with the cluster option, and not the NFS (network file system) deployment method.
	      		
			NOTE:
				Do not attempt to view the prerequisite check status while the prerequisite checks are still in progress. If you do so while the checks are still in progress, the application will display an error.
				Ensure that you do not specify duplicate entries in the host list. If there are duplicate host entries in this list, the application hangs. Also ensure that you use the same host names for which SSH has been set up.
				The important parameters for Agent Installation are -b, -c, -n, -z and optionally -i, -p, -t, -d.
				An unsecure agent cannot upload data to the secure Management Service. Oracle also recommends for security reasons that you change the Management Service password specified here after the installation is complete.
				/etc/sudoers
				After the installation and configuration phase, the Agent Deploy application checks for the existence of the Central Inventory (located at /etc/oraInst.loc). If this is the first Oracle product installation, Agent Deploy executes the following scripts:

				   	1. orainstRoot.sh - UNIX machines only: This creates oraInst.loc, which contains the central inventory location.
				   	2. root.sh - UNIX machines only: This runs all the scripts that must be executed as root.
					If this is not the first Oracle product installation, Agent Deploy executes only the root.sh script.

				
				
				
		nfsagentinstall Script
		
			Sharing the Agent Oracle Home Using the nfsagentinstall Script
				The agent Oracle home cannot be installed in an Oracle Cluster Shared File System (OCFS) drive, but is supported on an NAS (Network Attached Storage) drive.
				You can perform only one nfsagent installation per host. Multiple nfsagent installations on the same host will fail.
				When you are performing an NFS Agent installation, the operating system (and version) of the target machine where the NFS Agent needs to be installed should be the same as the operating system (and version) of the machine where the master agent is located. If the target machine has a different operating system, then the NFS Agent installation will fail. For example, if the master agent is on Red Hat Linux Version 4, then the NFS agent can be installed only on those machines that run Red Hat Linux Version 4. If you try to install on Red Hat Linux Version 3 or a different operating system for that matter, then the NFS installation will fail.
				
				NOTE:
					For NFS Agent installation from 10.2.0.3.0 master agents, the NFS agents will be started automatically after rebooting the machine.
					For NFS Agent installation from 10.2.0.3.0 master agents, agentca script for rediscovery of targets present in the <statedir>/bin directory can be used to rediscover targets on that host.
					
					
					
		agentDownload Script
		
				Use the agentDownload script to perform an agent installation on a cluster environment
				For Enterprise Manager 10g R2, the <version> value in the preceding syntax will be 10.2.0.2.0
				
				NOTE:
					If the Management Service is using a load balancer, you must modify the s_omsHost and s_omsPort values in the <OMS_HOME>/sysman/agent_download/<version>/agentdownload.rsp file to reflect the load balancer host and port before using the agentDownload script.
					
					The base directory for the agent installation must be specified using the -b option. For example, if you specified the parent directory to be agent_download (/scratch/agent_download), then the command to be specified is:
						-b /scratch/agent_download
					The agent Oracle home (agent10g) is created as a subdirectory under this parent directory.				
					
					The agent that you are installing is not secure by default. If you want to secure the agent, you must specify the password using the AGENT_INSTALL_PASSWORD environment variable, or by executing the following command after the installation is complete:
					<Agent_Home>/bin/emctl secure agent
					
					For Enterprise Manager 10.2.0.3.0, if the agent_download.rsp file does not contain the encrypted registration password or the AGENT_INSTALL_PASSWORD environment variable is not set, the agentDownload script in UNIX will prompt for the Agent Registration password which is used for securing the agent. Provide the password to secure the agent. If you do not want to secure the agent, continue running the agentDownload script by pressing Enter.
					
					The root.sh script must be run as root; otherwise, the Enterprise Manager job system will not be accessible to the user. The job system is required for some Enterprise Manager features, such as hardware and software configuration tasks and configuring managed database targets.
					
					This script uses the -ignoreSysPrereqs flag to bypass prerequisite check messages for operating system-specific patches during installation; prerequisite checks are still performed and saved to the installer logs. While this makes the Management Agent easier to deploy, check the logs to make sure the target machines on which you are installing Management Agents are properly configured for successful installation.
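					Putting the notes above together, a typical run might look like the sketch below. The script file name (agentDownload.linux) and the download URL path are assumptions based on the platform-specific naming convention; only the -b option and the AGENT_INSTALL_PASSWORD variable come from the notes above.

```shell
# Hedged sketch of an agentDownload run -- script name and URL path are
# assumptions; -b and AGENT_INSTALL_PASSWORD are documented above.
export AGENT_INSTALL_PASSWORD=MyRegPassword   # secures the agent at install time

# Fetch the platform-specific script from the OMS (Grid Control port 4889)
wget http://omshost.example.com:4889/agent_download/10.2.0.2.0/linux/agentDownload.linux
chmod +x agentDownload.linux

# -b: parent directory under which the agent10g Oracle home is created
./agentDownload.linux -b /scratch/agent_download

# Afterwards, run root.sh as root; if the agent was not secured above:
# /scratch/agent_download/agent10g/bin/emctl secure agent
```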
					
					

				

				
				
		Cluster Agent Installation
		Management Agent Cloning
		Interactive Installation Using Oracle Universal Installer
		Silent Installation
		
	If you are deploying the Management Agent in an environment having multiple Management Service installations that are using a load balancer, you should not access the Agent Deploy application using this load balancer. Oracle recommends that you access the Management Service directly.

	If the OMS is running behind a load balancer, there are additional configuration steps to complete before agents can upload to it.

	The default port value for 10.2 Management Agent is 3872.
	The default port for Grid Control is 4889. This should be available after you install the Management Service.
	
# PATCHING

	For 10.2.0.1, the OMS installation not only installs an OMS, but also automatically installs a Management Agent. However, when you upgrade that OMS to 10.2.0.4.0 using the Patch Set, the Patch Set does not upgrade any of the associated Management Agents. To upgrade the Management Agents, you have to manually apply the Patch Set on each of the Management Agent homes, as they are separate Oracle Homes.
	
	
# POST INSTALL

		Agent Reconfiguration and Rediscovery
			Note:
			You must specify either the -f or -d option when executing this script.
			
			Caution:
			Do not use the agentca -f option to reconfigure any upgraded agent (standalone and RAC).
	
	
	

-----------------------------------
ROOT.SH
-----------------------------------

Logging In As Root During Installation (UNIX Only)

At least once during installation, the installer prompts you to log in as the root user and run a script. You must log in as root because the script edits files in the /etc directory.

The installer prompts you to run the root.sh script in a separate window. This script creates files in the local bin directory (/usr/local/bin, by default).

On IBM AIX and HP-UX platforms, the script creates the files in the /var/opt directory.




-----------------------------------
ASMLIB and raw devices
-----------------------------------

Running /etc/init.d/oracleasm configure populates the /etc/sysconfig/oracleasm file

# The ASM disks must be owned by the DBA group, not OINSTALL; the oraInventory owner (oinstall) should not have access to the disks -- the DBA group should

# ASM
raw/raw[67]:oracle:dba:0660
# OCR
raw/raw[12]:root:oinstall:0640
# Voting Disks
raw/raw[3-5]:crs:oinstall:0640   <-- usually owned by user oracle; in this scenario the Clusterware software is owned by user crs, so crs must own the voting disks



-----------------------------------
CLONING HOME
-----------------------------------

The cloning process works by copying all of the files from the source Oracle home to the destination Oracle home. Thus, any files used by the source instance that are located outside the source Oracle home's directory structure are not copied to the destination location.
The size of the binaries at the source and the destination may differ because these are relinked as part of the clone operation and the operating system patch levels may also differ between these two locations. Additionally, the number of files in the cloned home would increase because several files copied from the source, specifically those being instantiated, are backed up as part of the clone operation.
OUI Cloning is more beneficial than using the tarball approach because cloning configures the Central Inventory and the Oracle home inventory in the cloned home. Cloning also makes the home manageable and allows the paths in the cloned home and the target home to be different.

The cloning process uses the OUI cloning functionality. This operation is driven by a set of scripts and add-ons that are included in the respective Oracle software. 
	The cloning process has two phases:
	1) Source Preparation Phase 
			- $ORACLE_HOME/clone/bin/prepare_clone.pl needs to be executed only for Application Server cloning; Database and CRS Oracle home cloning do not require it
			- archive the home, exclude the following: 
				*.log, *.dbf, listener.ora, sqlnet.ora, and tnsnames.ora
			- Also ensure that you do not archive the following folders:
				$ORACLE_HOME/<Hostname>_<SID>
				$ORACLE_HOME/oc4j/j2ee/OC4J_DBConsole_<Hostname>_<SID>

				Create ExcludeFileList.txt:
					[oracle@dg10g2 10.2.0]$ find db_1 -iname "*.log" > ExcludeFileList.txt
					[oracle@dg10g2 10.2.0]$ find db_1 -iname "*.dbf" >> ExcludeFileList.txt
					[oracle@dg10g2 10.2.0]$ find db_1 -iname listener.ora >> ExcludeFileList.txt
					[oracle@dg10g2 10.2.0]$ find db_1 -iname sqlnet.ora >> ExcludeFileList.txt
					[oracle@dg10g2 10.2.0]$ find db_1 -iname tnsnames.ora >> ExcludeFileList.txt
					[oracle@dg10g2 10.2.0]$ echo "db_1/dg10g2.us.oracle.com_orcl" >> ExcludeFileList.txt
					[oracle@dg10g2 10.2.0]$ echo "db_1/oc4j/j2ee/OC4J_DBConsole_dg10g2.us.oracle.com_orcl" >> ExcludeFileList.txt
		
				TAR home:
					nohup tar -X ExcludeFileList.txt -cjvpf db_1.tar.bz2 db_1 &
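				The exclude-list and tar steps above can be rehearsed end-to-end with a throwaway directory; the sketch below uses a dummy db_1 tree in place of a real Oracle home (all paths and file contents are illustrative). Note the quoted globs, which keep the shell from expanding them before find sees them:

```shell
#!/bin/sh
# Rehearsal of the source-preparation archiving step against a dummy
# db_1 tree that stands in for a real Oracle home.
set -e
mkdir -p db_1/bin db_1/network/admin
echo bin  > db_1/bin/oracle                   # should be archived
echo log  > db_1/install.log                  # should be excluded
echo dbf  > db_1/system01.dbf                 # should be excluded
echo lsnr > db_1/network/admin/listener.ora   # should be excluded

# Build the exclude list; quote the globs so find receives them intact
find db_1 -iname "*.log"      >  ExcludeFileList.txt
find db_1 -iname "*.dbf"      >> ExcludeFileList.txt
find db_1 -iname listener.ora >> ExcludeFileList.txt

# Archive the home, skipping everything listed in ExcludeFileList.txt
tar -X ExcludeFileList.txt -cjpf db_1.tar.bz2 db_1

# Extract into a scratch area to verify what made it into the archive
mkdir -p restore
tar -xjf db_1.tar.bz2 -C restore
```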
					
	2) Cloning Phase
			
			- 10gR1 run: $ORACLE_HOME/oui/bin/runInstaller ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_2 ORACLE_HOME_NAME=asm_home1 -clone
			- 10gR2 run: perl <Oracle_Home>/clone/bin/clone.pl ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_2 ORACLE_HOME_NAME=asm_home1 
			
	3) Check log files
	
			The cloning script runs multiple tools, each of which may generate its own log files. However, the following log files, generated by OUI and the cloning scripts, are the key logs of interest for diagnostic purposes:
				<Central_Inventory>/logs/cloneActions<timestamp>.log: Contains a detailed log of the actions that occur during the OUI part of the cloning.
				<Central_Inventory>/logs/oraInstall<timestamp>.err: Contains information about errors that occur when OUI is running.
				<Central_Inventory>/logs/oraInstall<timestamp>.out: Contains other miscellaneous messages generated by OUI.
				$ORACLE_HOME/clone/logs/clone<timestamp>.log: Contains a detailed log of the actions that occur during the pre-cloning and cloning operations.
				$ORACLE_HOME/clone/logs/error<timestamp>.log: Contains information about errors that occur during the pre-cloning and cloning operations.

			To find the location of the Oracle inventory directory: on all UNIX computers except Linux and IBM AIX, look in /var/opt/oracle/oraInst.loc. On IBM AIX and Linux-based systems, look in the /etc/oraInst.loc file.
			On Windows system computers, the location can be obtained from the Windows Registry key: HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\INST_LOC.
			After the clone.pl script finishes running, refer to these log files to obtain more information about the cloning process.
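			The inventory lookup above is easy to script. The sketch below parses a stand-in oraInst.loc written locally so it is self-contained; on a real system you would point it at /etc/oraInst.loc (Linux, AIX) or /var/opt/oracle/oraInst.loc (other UNIX):

```shell
#!/bin/sh
# Locate the Central Inventory by parsing oraInst.loc, then derive the
# cloning log directory from it. A local dummy file stands in for
# /etc/oraInst.loc; the inventory path is illustrative.
set -e
cat > oraInst.loc <<'EOF'
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
EOF

INV_LOC=$(sed -n 's/^inventory_loc=//p' oraInst.loc)
echo "Central Inventory : $INV_LOC"
echo "Cloning logs under: $INV_LOC/logs"
```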
	
-- Reference for 11.2 home cloning http://blogs.oracle.com/AlejandroVargas/2010/11/oracle_rdbms_home_install_usin.html	

-----------------------------------
Windows Install 
-----------------------------------

1) Install Loopback Adapter

2) Configure Listener	(the port number must be different if installing multiple Oracle products)

3) Create Database


Scenarios:
==========

1) When I already have an existing database with EM and then drop the database:
	It drops everything, including the services, except the LISTENER and iSQL*Plus services.
	Then, when I create the database again, it creates the database and EM on port 5500.

2) Noticed that when I remove this entry from TNSNAMES.ORA, EM fails.
	*** This is because, when you configure your LISTENER on a different port number (1522),
		it sets the parameter LOCAL_LISTENER=LISTENER_ORA10 and adds a corresponding entry to TNSNAMES.ORA...

ORA10 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sqlnbcn-014.corp.sqlwizard.com)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ora10.ph.oracle.com)
    )
  )


LISTENER_ORA10 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = sqlnbcn-014.corp.sqlwizard.com)(PORT = 1522))	<-- THIS!!!
<<showtoc>>

Oracle® Database on AIX®, HP-UX®, Linux®, Mac OS® X, Solaris®, Tru64 Unix® Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.1)
  	Doc ID: 	Note:169706.1
	-- also located at Installations folder


-- 11R2 Changes
11gR2 Install (Non-RAC): Understanding New Changes With All New 11.2 Installer [ID 884232.1]
11gR2 Clusterware and Grid Home - What You Need to Know [ID 1053147.1]
Requirements for Installing Oracle 11gR2 RDBMS on RHEL (and OEL) 5 on AMD64/EM64T [ID 880989.1]


-- CPU, PSU, SPU - Oracle Critical Patch Update Terminology Update
http://www.integrigy.com/oracle-security-blog/cpu-psu-spu-oracle-critical-patch-update-terminology-update
New Patch Nomenclature for Oracle Products [ID 1430923.1]

MOS Note:1962125.1
Overview of Database Patch Delivery Methods   <- 20150207


-- PATCHES 

Good practices applying patches and patchsets
  	Doc ID: 	Note:176311.1

Oracle Recommended Patches -- Oracle Database [ID 756671.1]

Recommended Patch Bundles  Note 756388.1

Generic Support Status Notes  (strongly recommended to keep an eye on  notes below)

    * For 11.1.0   Note id  454507.1
    * For 10.2.0   Note id  316900.1
    * For 10.1.0   Note id  263719.1
    * For 9.2         Note id  189908.1


-- PATCH SET

Release Schedule of Current Database Patch Sets
  	Doc ID: 	742060.1

rolling back a patchset (new functionality provided with 9.2.0.7 and 10.2)

How to rollback a patchset 
  Doc ID:  Note:334598.1 

How To Find RDBMS patchsets on Metalink 
  Doc ID:  438049.1 

MOS Note 1189783.1 – Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2



-- 11.2 PATCH SET

MOS Note 1189783.1 – Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2

How to deinstall "old" SW after 11.2.0.2 has been applied?
http://blogs.oracle.com/UPGRADE/2010/10/how_to_deinstall_old_sw_after.html





-- PATCH SET UPDATES

Intro to Patch Set Updates (PSU)
  	Doc ID: 	854428.1

Patch Set Updates - One-off Patch Conflict Resolution [ID 1061295.1]



-- CPU 

Reference List of Critical Patch Update Availability Documents For Oracle Database and Fusion Middleware Product
  	Doc ID: 	783141.1

http://www.freelists.org/post/oracle-l/patch-source,4

How To Find The Description/Details Of The Bugs Fixed By A Patch Using Opatch?
  	Doc ID: 	750350.1

http://www.oracle.com/technology/deploy/security/cpu/cpufaq.htm

Critical Patch Update - Introduction to Database n-Apply CPUs
  	Doc ID: 	438314.1

http://blogs.oracle.com/security/2007/07/17/#a62

http://www.integrigy.com/security-resources/whitepapers/IOUG_Oracle_Critical_Patch_Updates_Unwrapped.pdf

Security Alerts and Critical Patch Updates- Frequently Asked Questions
  	Doc ID: 	360470.1

OPatch - New features
  	Doc ID: 	749368.1

How To Find The Description/Details Of The Bugs Fixed By A Patch Using Opatch?
  	Doc ID: 	750350.1

10.2.0.4 Patch Set - List of Bug Fixes by Problem Type
  	Doc ID: 	401436.1

Critical Patch Update April 2009 Database Known Issues
  	Doc ID: 	786803.1



-- PATCHES WINDOWS 

Oracle Database Server and Networking Patches for Microsoft Platforms
  	Doc ID: 	161549.1




-- ROLLING PATCH 

Oracle Clusterware (formerly CRS) Rolling Upgrades
Doc ID: Note:338706.1

Rolling Patch - OPatch Support for RAC [ID 244241.1]





-- ORAINVENTORY

How To Move The Central Inventory To Another Location
  	Doc ID: 	Note:299260.1
  	



-- ORACLE_HOME

MOVING ORACLE_HOME
  	Doc ID: 	Note:28433.1
  	
Can You Rename/Change The Oracle Home Directory After Installation ?
  	Doc ID: 	Note:423285.1
  	



-- OUI 

Overview of the Oracle Universal Installer
  	Doc ID: 	Note:74182.1


-- OUI DEBUG

How to Diagnose Oracle Installer Errors On Unix About Permissions or Lack of Space?
  	Doc ID: 	401317.1

ERROR STARTING RUNINSTALLER /tmp/...../jre/lib/PA_RISC2.0/libmawt.sl: Not enough space
  	Doc ID: 	308199.1
  	




-- DBA_REGISTRY

Information On Installed Database Components and Schemas
  	Doc ID: 	Note:472937.1

How to remove the OLAP Catalog and OLAP APIs from the database
  	Doc ID: 	Note:224746.1

How to Uninstall OLAP Options from ORACLE_HOME?
  	Doc ID: 	Note:331808.1

How To Remove or De-activate OLAP After Migrating From 9i To Standard Edition 10g
  	Doc ID: 	Note:467643.1 	

Database Status Check Before, During And After Migrations And Upgrades
  	Doc ID: 	Note:437794.1 	

What to do if you run an upgrade or migration with invalid objects and no backup
  	Doc ID: 	Note:453642.1

Packages and Types Invalid in Dba_registry
  	Doc ID: 	Note:457861.1

DBA_REGISTRY Shows Components Of A New Database Are At The Base Level, Even Though A Patchset Is Installed
  	Doc ID: 	Note:339614.1

DBA_REGISTRY is invalid
  	Doc ID: 	Note:393319.1

How to see what options are installed
  	Doc ID: 	Note:473542.1

RAC Option Invalid After Migration
  	Doc ID: 	Note:312071.1

DBA_REGISTRY Shows Status of Loaded After Migration to 9.2
  	Doc ID: 	Note:252090.1

How to Diagnose Invalid or Missing Data Dictionary (SYS) Objects
  	Doc ID: 	Note:554520.1

Oracle9.2 New Feature: Migration Infrastructure Improvements
  	Doc ID: 	Note:177382.1




-- DBA_REGISTRY, after wordsize change 10.2.0.4

How to check if Intermedia Audio/Image/Video is Installed Correctly?
  	Doc ID: 	221337.1

Manual upgrade of the 10.2.x JVM fails with ORA-3113 and ORA-7445
  	Doc ID: 	459060.1

Jserver Java Virtual Machine Become Invalid After Catpatch.Sql
  	Doc ID: 	312140.1

How to Reload the JVM in 10.1.0.X and 10.2.0.X
  	Doc ID: 	276554.1

Script to Check the Status of the JVM within the Database
  	Doc ID: 	456949.1

How to Tell if Java Virtual Machine Has Been Installed Correctly
  	Doc ID: 	102717.1



-- RHEL 5

Requirements For Installing Oracle10gR2 On RHEL 5/OEL 5 (x86_64)
  	Doc ID: 	421308.1



-- RHEL4

Requirements for Installing Oracle 10gR2 RDBMS on RHEL 4 on AMD64/EM64T
  	Doc ID: 	Note:339510.1


  	
  	
-- LINUX ITANIUM 

montecito bug
http://k-freedom.spaces.live.com/blog/cns!CF84914AA1F284FD!167.entry

How To Install Oracle RDBMS Software On Itanium Servers With Montecito Processors
 	Doc ID:	Note:400227.1
 
    -- http://www.ora-solutions.net/web/blog/
    Requirements for Installing Oracle 10gR2 RDBMS on RHEL 5 on Linux Itanium (ia64)
	    Doc ID:	Note:748378.1

    Recently, I had to install 10gR2 on Linux Itanium (Montecito CPUs) and found out that the Java version that ships with the binaries does not work on this platform. Therefore you have to download Patch 5390722 and perform the following steps for RAC installation:

      1. Install Patch 5390722: Install JDK into new 10.2 CRS Home, then install JRE into new 10.2 CRS Home.
      2. Take a tar backup of the CRS Home containing these two components. You will need it.
      3. Install 10.2.0.1 Clusterware by running from 10.2.0.1 binaries: ./runInstaller -jreLoc $CRS_HOME/jre/1.4.2
      4. Install Patch 5390722 with the option CLUSTER_NODES={"node1", "node2", ...}: Install JDK into new 10.2 RDBMS Home, then install JRE into new 10.2 RDBMS Home
      5. Install 10.2.0.1 RDBMS Binaries into the new 10.2 RDBMS: ./runInstaller -jreLoc $ORACLE_HOME/jre/1.4.2
      6. If you want to install the 10.2.0.4 patchset, you will have to follow these steps:
	  for CRS: ./runInstaller -jreLoc $ORA_CRS_HOME/jdk/jre
	  for RDBMS: ./runInstaller -jreLoc $ORACLE_HOME/jdk/jre
      7. After that, you have to repair the JRE because the 10.2.0.4 patchset has overwritten the patched JRE with the defective versions. (7448301)
	  % cd $ORACLE_HOME/jre
	  % rm -rf 1.4.2
	  % tar -xvf $ORACLE_HOME/jre/1.4.2-5390722.tar

    Sources:

	* Note: 404248.1 - How To Install Oracle CRS And RAC Software On Itanium Servers With Montecito Processors
	* Note: 400227.1 - How To Install Oracle RDBMS Software On Itanium Servers With Montecito Processors
	* Bug 7448301 - Linux Itanium: 10.2.0.4 Patchset for Linux Itanium (Montecito) has wrong Java runtime


Support of Linux and Oracle Products on Linux (Doc ID 266043.1)
How To Install Oracle RDBMS Software On Itanium Servers With Montecito Processors (Doc ID 400227.1)
Requirements for Installing Oracle 10gR2 RDBMS on RHEL 5 on Linux Itanium (ia64) (Doc ID 748378.1)
Frequently Asked Questions: Oracle E-Business Suite Support on Itanium (Doc ID 311717.1)
How To Identify A Server Which Has Intel® Montecito Processors Installed (Doc ID 401332.1)
Oracle® Database on Unix AIX®,HP-UX®,Linux®,Mac OS® X,Solaris®,Tru64 Unix® Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2) (Doc ID 169706.1)
Installing Oracle Data Integrator On Intel Itanium (64-bit) Hardware (Doc ID 451928.1)

 	
 	
 	






-- DATABASE VAULT

Note 726568.1 How to Install Database Vault Patches on top of 11.1.0.6

How to Install Database Vault Patches on top of 10.2.0.4
  	Doc ID: 	731466.1

How to Install Database Vault Patches on top of 9.2.0.8.1 and 10.2.0.3
  	Doc ID: 	445092.1





 
-- CRS, ASM, RDBMS HOMES COMPATIBILITY

Note 337737.1 Oracle Clusterware - ASM - Database Version Compatibility
Note 363254.1 Applying one-off Oracle Clusterware patches in a mixed version home environment




-- DEBUG

How to Diagnose Oracle Installer Errors On Unix About Permissions or Lack of Space? 
  Doc ID:  401317.1 



-- CLONE

Cloning A Database Home And Changing The User/Group That Owns It
  	Doc ID: 	558478.1

An Example Of How To Clone An Existing Oracle9i Release 2 (9.2.0.x) RDBMS Installation Using OUI
  	Doc ID: 	559863.1

Cloning An Existing Oracle9i Release 2 (9.2.0.x) RDBMS Installation Using OUI
  	Doc ID: 	559299.1

How To Clone An Existing RDBMS Installation Using EMGC
  	Doc ID: 	549268.1

While Cloning Oracle9i Release 2 (9.2.0.x), OUI Fails With "Exception in thread "main" java.lang.NoClassDefFoundError: oracle/sysman/oii/oiic/OiicInstaller"
  	Doc ID: 	559859.1

Cloning with -ignoreSysPrereqs on OS versions certified after initial release
  	Doc ID: 	443376.1

Cloning An Existing Oracle9i Release 2 (9.2.0.x) RDBMS Installation Using OUI
  	Doc ID: 	559299.1




-- MD5

How To Determine md5 and SHA-1 Check-sum in AIX?
  	Doc ID: 	427591.1
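On AIX the note above covers csum; on Linux the same check is done with md5sum/sha1sum. A self-contained sketch (the patch file name is illustrative and the file itself is a dummy created on the spot):

```shell
#!/bin/sh
# Verify a downloaded archive against a published checksum. On AIX use
# `csum -h MD5 <file>` instead of md5sum; the file here is a dummy.
set -e
printf 'patch payload' > p12345678_10204_LINUX.zip

MD5=$(md5sum p12345678_10204_LINUX.zip | awk '{print $1}')
SHA1=$(sha1sum p12345678_10204_LINUX.zip | awk '{print $1}')
echo "MD5 : $MD5"
echo "SHA1: $SHA1"
# Compare $MD5 / $SHA1 against the checksums shown on the patch
# download page before unzipping.
```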



-- UNINSTALL - WINDOWS

WIN: Manually Removing all Oracle Components on Microsoft Windows Platforms
  	Doc ID: 	124353.1


-- REINSTALL 
How to Reinstall ASM or DB HOME on One RAC Node From the Install Media. [ID 864614.1]



-- CASE SENSITIVENESS

ORACLE_SID, TNS Alias,Password File and others Case Sensitiveness
  	Doc ID: 	225097.1



-- OPEN FILES

Can't ssh into the system with specific user account: Connection reset by peer (Doc ID 788064.1)

Check the processes run by user 'oracle':
[oracle@rac2 ~]$ ps -u oracle|wc -l
489

Check the files opened by user 'oracle':
[oracle@rac ~]$ /usr/sbin/lsof -u oracle | wc -l
62490
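The two counts above are worth comparing against the shell limits when chasing the "Connection reset by peer" symptom; a small sketch (lsof is guarded because it is not installed everywhere):

```shell
#!/bin/sh
# Compare the current user's process count and open-file activity
# against the shell's nofile limit; a blown limit is a common cause of
# the SSH "Connection reset by peer" symptom noted above.
ME=$(id -un)
NPROCS=$(ps -u "$ME" | wc -l)
FD_LIMIT=$(ulimit -n)

echo "user            : $ME"
echo "processes       : $NPROCS"
echo "open-file limit : $FD_LIMIT"

# lsof may be absent; only count open files if it is available
if command -v lsof >/dev/null 2>&1; then
    echo "open files      : $(lsof -u "$ME" 2>/dev/null | wc -l)"
fi
```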




! DBCA troubleshooting
Master Note: Troubleshooting Database Configuration Assistant (DBCA) (Doc ID 1510457.1)
Master Note: Troubleshooting Database Configuration Assistant (DBCA)_1510457.1  http://blog.itpub.net/17252115/viewspace-1158370/
DBCA/DBUA APPEARS TO HANG AFTER CLICKING FINISH BUTTON (Doc ID 727290.1)
Tracing the Database Configuration Assistant (DBCA) (Doc ID 188134.1)
{{{
-DTRACING.ENABLED=true -DTRACING.LEVEL=2
}}}
dbca setting Fatal Error: ORA-01501 https://www.google.com/search?q=dbca+setting+Fatal+Error%3A+ORA-01501&oq=dbca+setting+Fatal+Error%3A+ORA-01501&aqs=chrome..69i57.1417j0j1&sourceid=chrome&ie=UTF-8
Oracle DBCA hangs at 2% https://xcoolwinds.wordpress.com/2013/06/06/oracle-nh/
DBCA ksvrdp 
DBCA errors when cluster_interconnects is set (Doc ID 1373591.1)
asm Received signal #18, SIGCLD
11.2.0.4 Patch Set - List of Bug Fixes by Problem Type (Doc ID 1562142.1)
SYSDBA Connection Fails With ORA-12547 (Doc ID 1447317.1)
ASMCMD KSTAT_IOC_READ
Shutdown Normal or Immediate Hang Waiting for MMON process (Doc ID 1183213.1)
ASMCMD Is Not Working Due To LIBCLNTSH.SO.11.1 Is Missing Or Corrupted. (Doc ID 1407913.1)
asmcmd slow and high cpu (Doc ID 2217709.1)
How To Trace ASMCMD on Unix (Doc ID 824354.1)
asm enq: FA - access file
ASM Instance Is Hanging On 'ENQ: FA - ACCESS FILE' (Doc ID 1371297.1)
ASM KFN Operation
Onnn (ASM Connection Pool Processes) Present Memory Leaks Over The Time In 11.2.0.X.0 or 12.1.0.1 RAC/Standalone Database Instances. (Doc ID 1639119.1)
asm GCS lock cvt S
Bug 11710422 - Queries against V$ASM_FILE slow - waiting on "GCS lock open S" events (Doc ID 11710422.8)
Bug 6934636 - Hang possible for foreground processes in ASM instance (Doc ID 6934636.8)
Parallel file allocation slowness on ASM even after applying patch 13253549 (Doc ID 1916340.1)
kfk: async disk IO LGWR
Rebalance hang:: waiting for kfk: async disk IO on other node (Doc ID 1556836.1)
ASM log write(even)
ORA-27601 raised in ASM I/O path due to wrong value inside cellinit.ora under /etc/oracle/cell/network-config (Doc ID 2135801.1)
ASM log write(odd) log write(even) - A closer Look inside Oracle ASM - Luca Canali - CERN





Instance Caging is available from Oracle Database 11g Release 2 onwards. 

Database Instance Caging: A Simple Approach to Server  Consolidation  http://www.oracle.com/technetwork/database/focus-areas/performance/instance-caging-wp-166854.pdf

http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CCQQFjAA&url=http%3A%2F%2Fioug.itconvergence.com%2Fpls%2Fapex%2FDWBISIG.download_my_file%3Fp_file%3D2617.&ei=8TNHT_r0HO_HsQLEvoDrCA&usg=AFQjCNE-zX-tmwuuqcz311WuHbBqq4YPpA

Configuring and Monitoring Instance Caging [ID 1362445.1]
CPU count consideration for Oracle Parameter setting when using Hyper-Threading Technology [ID 289870.1]
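The mechanics boil down to two parameters, both of which must be set for caging to take effect; a minimal SQL fragment (the value 4 is illustrative):

```sql
-- Minimal instance-caging setup (11.2+). Both settings are required:
-- cpu_count caps the instance, and a Resource Manager plan must be
-- active for the cap to be enforced.
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE = BOTH;
ALTER SYSTEM SET cpu_count = 4 SCOPE = BOTH;

-- Check the current settings
SHOW PARAMETER cpu_count
SHOW PARAMETER resource_manager_plan
```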

/***
|Name:|InstantTimestampPlugin|
|Description:|A handy way to insert timestamps in your tiddler content|
|Version:|1.0.10 ($Rev: 3646 $)|
|Date:|$Date: 2008-02-27 02:34:38 +1000 (Wed, 27 Feb 2008) $|
|Source:|http://mptw.tiddlyspot.com/#InstantTimestampPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
!!Usage
If you enter {ts} in your tiddler content (without the spaces) it will be replaced with a timestamp when you save the tiddler. Full list of formats:
* {ts} or {t} -> timestamp
* {ds} or {d} -> datestamp
* !ts or !t at start of line -> !!timestamp
* !ds or !d at start of line -> !!datestamp
(I added the extra ! since that's how I like it. Remove it from translations below if required)
!!Notes
* Change the timeFormat and dateFormat below to suit your preference.
* See also http://mptw2.tiddlyspot.com/#AutoCorrectPlugin
* You could invent other translations and add them to the translations array below.
***/
//{{{

config.InstantTimestamp = {

	// adjust to suit
	timeFormat: 'DD/0MM/YY 0hh:0mm',
	dateFormat: 'DD/0MM/YY',

	translations: [
		[/^!ts?$/img,  "'!!{{ts{'+now.formatString(config.InstantTimestamp.timeFormat)+'}}}'"],
		[/^!ds?$/img,  "'!!{{ds{'+now.formatString(config.InstantTimestamp.dateFormat)+'}}}'"],

		// thanks Adapted Cat
		[/\{ts?\}(?!\}\})/ig,"'{{ts{'+now.formatString(config.InstantTimestamp.timeFormat)+'}}}'"],
		[/\{ds?\}(?!\}\})/ig,"'{{ds{'+now.formatString(config.InstantTimestamp.dateFormat)+'}}}'"]
		
	],

	excludeTags: [
		"noAutoCorrect",
		"noTimestamp",
		"html",
		"CSS",
		"css",
		"systemConfig",
		"systemConfigDisabled",
		"zsystemConfig",
		"Plugins",
		"Plugin",
		"plugins",
		"plugin",
		"javascript",
		"code",
		"systemTheme",
		"systemPalette"
	],

	excludeTiddlers: [
		"StyleSheet",
		"StyleSheetLayout",
		"StyleSheetColors",
		"StyleSheetPrint"
		// more?
	]

}; 

TiddlyWiki.prototype.saveTiddler_mptw_instanttimestamp = TiddlyWiki.prototype.saveTiddler;
TiddlyWiki.prototype.saveTiddler = function(title,newTitle,newBody,modifier,modified,tags,fields,clearChangeCount,created) {

	tags = tags ? tags : []; // just in case tags is null
	tags = (typeof(tags) == "string") ? tags.readBracketedList() : tags;
	var conf = config.InstantTimestamp;

	if ( !tags.containsAny(conf.excludeTags) && !conf.excludeTiddlers.contains(newTitle) ) {

		var now = new Date();
		var trans = conf.translations;
		for (var i=0;i<trans.length;i++) {
			newBody = newBody.replace(trans[i][0], eval(trans[i][1]));
		}
	}

	// TODO: use apply() instead of naming all args?
	return this.saveTiddler_mptw_instanttimestamp(title,newTitle,newBody,modifier,modified,tags,fields,clearChangeCount,created);
}

// you can override these in StyleSheet 
setStylesheet(".ts,.ds { font-style:italic; }","instantTimestampStyles");

//}}}
https://github.com/intel-analytics/BigDL
https://bigdl-project.github.io/0.5.0/#presentations/
https://bigdl-project.github.io/0.5.0/#ScalaUserGuide/examples/
https://github.com/intel-analytics/BigDL-Tutorials
https://github.com/intel-analytics/BigDL-Tutorials/blob/master/notebooks/neural_networks/linear_regression.ipynb
https://github.com/intel-analytics/BigDL-Tutorials/blob/master/notebooks/neural_networks/lstm.ipynb
https://github.com/intel-analytics/BigDL-Tutorials/blob/master/notebooks/spark_basics/DataFrame.ipynb
https://github.com/intel-analytics/BigDL-Tutorials/blob/master/notebooks/spark_basics/spark_sql.ipynb
https://github.com/intel-analytics/BigDL/blob/branch-0.1/pyspark/example/tutorial/simple_text_classification/text_classfication.ipynb
bigdl vs h2o https://www.google.com/search?ei=0PgjW_XcHsHH5gLQnaDAAw&q=bigdl+vs+h2o&oq=bigdl+vs+h2o&gs_l=psy-ab.3..33i22i29i30k1.110827.111412.0.112312.3.3.0.0.0.0.77.216.3.3.0....0...1.1.64.psy-ab..0.2.139...33i160k1.0.lK5AeityuH8
https://www.infoworld.com/article/3158162/artificial-intelligence/intels-bigdl-deep-learning-framework-snubs-gpus-for-cpus.html
https://mapr.com/blog/tensorflow-mxnet-caffe-h2o-which-ml-best/
-tick tock model 
http://en.wikipedia.org/wiki/Intel_Tick-Tock
http://www.intel.com/content/www/us/en/silicon-innovations/intel-tick-tock-model-general.html

http://en.wikipedia.org/wiki/Intel_Tick-Tock
<<<
"Tick-Tock" is a model adopted by chip manufacturer Intel Corporation since 2007 to follow every microarchitectural change with a die shrink of the process technology. Every "tick" is a shrinking of process technology of the previous microarchitecture and every "tock" is a new microarchitecture.[1] Every year, there is expected to be one tick or tock.[1]
<<<
http://en.wikipedia.org/wiki/List_of_Intel_CPU_microarchitectures

http://www.extremetech.com/computing/116561-the-death-of-cpu-scaling-from-one-core-to-many-and-why-were-still-stuck
Identify Data Dictionary Inconsistency 
  Doc ID:  456468.1



X tables

http://www.adp-gmbh.ch/ora/misc/x.html
http://www.stormloader.com/yonghuang/computer/x$table.html


The names for the x$ tables can be queried with 
select kqftanam from x$kqfta;



How To Give Grant Select On X$ Objects In Oracle 10g? 
  Doc ID:  Note:453076.1 

Script to Extract SQL Statements for all V$ Views 
  Doc ID:  Note:132793.1 

-- chinese
http://translate.google.com/translate?sl=auto&tl=en&u=http://www.oracledatabase12g.com/archives/oracle-internal-research.html

How an Oracle block# is mapped to a file offset (in bytes or OS blocks) [ID 761734.1]



-- OCI
Howto Trace Clientside Applications on OCI Level On Windows [ID 749498.1]
How to Perform Client-Side Tracing of Programmatic Interfaces on Windows Platforms [ID 216912.1]


alter index idx_empid invisible;   <-- make the index invisible
select /*+ index(employee idx_empid) */ * from employee where empid = 1001;   <-- note: the hint alone does not make the optimizer use an invisible index
alter session set optimizer_use_invisible_indexes = true;   <-- with this set, the optimizer becomes aware of invisible indexes and may use them (including via the hint above)

http://viralpatel.net/blogs/2010/06/invisible-indexes-in-oracle-11g.html
http://oracletoday.blogspot.com/2007/08/invisible-indexes-in-11g.
http://www.orafaq.com/forum/t/159978/0/
http://avdeo.com/2011/03/23/virual-index-and-invisible-index/     <-- virtual and invisible indexes
! CMAN package conflict 

<<<
If you have installed the Cluster RPM group then you will hit an RPM conflict on CMAN.. the workaround is to remove the CMAN package
<<<

! FENCE AGENTS error

<<<
since I got the unsigned fence-agents RPM I have to disable the gpg-check on the yum repo
<<<

! VDS service and LIBVIRTD issue

NOTE: Do this before adding the host if you are going to place RHEVM on the same host

{{{
THE HOST IS UNRESPONSIVE AND I HAVE TO RESTART/START THE VDS SERVICE TOGETHER WITH LIBVIRTD

[root@iceman ~]# chkconfig --list | egrep -i "libvirt|vds"
libvirtd       	0:off	1:off	2:off	3:on	4:on	5:on	6:off
vdsmd          	0:off	1:off	2:on	3:on	4:on	5:on	6:off
[root@iceman ~]# 
[root@iceman ~]# 
[root@iceman ~]# 
[root@iceman ~]# service libvirtd status
libvirtd (pid  6745) is running...
[root@iceman ~]# 
[root@iceman ~]# service vdsmd status
Using /usr/share/vdsm/vdsm
VDS daemon server is running

AFTER RESTART

[root@iceman ~]# chkconfig --list | egrep -i "libvirt|vds"
libvirtd       	0:off	1:off	2:off	3:on	4:on	5:on	6:off
vdsmd          	0:off	1:off	2:on	3:on	4:on	5:on	6:off
[root@iceman ~]# 
[root@iceman ~]# 
[root@iceman ~]# service libvirtd status
libvirtd (pid  5387) is running...
[root@iceman ~]# 
[root@iceman ~]# service vdsmd status
Using /usr/share/vdsm/vdsm
VDS daemon is not running
[root@iceman ~]# 
[root@iceman ~]# 
[root@iceman ~]# date
Sat Nov  7 16:34:30 PHT 2009

NOW START THE VDSM AND LIBVIRTD

[root@iceman ~]# service vdsmd stop
Using /usr/share/vdsm/vdsm
Shutting down vdsm daemon: 
vdsm: not running                                          [FAILED]
[root@iceman ~]# 
[root@iceman ~]# 
[root@iceman ~]# 
[root@iceman ~]# service libvirtd stop
Stopping libvirtd daemon:                                  [  OK  ]
[root@iceman ~]# 
[root@iceman ~]# 
[root@iceman ~]# service vdsmd stop
Using /usr/share/vdsm/vdsm
Shutting down vdsm daemon: 
vdsm: not running                                          [FAILED]
[root@iceman ~]# 
[root@iceman ~]# service vdsmd start
Using /usr/share/vdsm/vdsm
Starting up vdsm daemon: 
vdsm start                                                 [  OK  ]
[root@iceman ~]# 
[root@iceman ~]# 
[root@iceman ~]# service libvirtd start
Starting libvirtd daemon:                                  [  OK  ]


FOUND OUT THAT FAILS TO CONNECT TO DB

CHANGE THE FOLLOWING BEFORE RESTART

{{{
[root@iceman ~]# chkconfig --list | egrep -i "libvirt|vds"
libvirtd       	0:off	1:off	2:off	3:on	4:on	5:on	6:off
vdsmd          	0:off	1:off	2:on	3:on	4:on	5:on	6:off
[root@iceman ~]# chkconfig --level 2345 libvirtd on
[root@iceman ~]# chkconfig --list | egrep -i "libvirt|vds"
libvirtd       	0:off	1:off	2:on	3:on	4:on	5:on	6:off
vdsmd          	0:off	1:off	2:on	3:on	4:on	5:on	6:off
}}}

AFTER RESTART, VDSMD IS STILL NOT RUNNING

{{{
[root@iceman ~]# service libvirtd status
libvirtd (pid  5393) is running...
[root@iceman ~]# service vdsmd status
Using /usr/share/vdsm/vdsm
VDS daemon is not running
[root@iceman ~]# date
Sat Nov  7 17:01:18 PHT 2009
}}}


YEAH, VDSM IS NOT REALLY STARTING, MAYBE BECAUSE RHEV-M IS ON THE SAME SERVER AND IT DOES NOT DETECT THE HOST

{{{
[root@iceman vdsm]# ls -ltr
total 4752
drwxr-xr-x 2 vdsm kvm    4096 Oct  1 23:43 backup
-rw-rw---- 1 vdsm kvm       0 Nov  3 13:00 metadata.log
-rw-rw---- 1 vdsm kvm 4848393 Nov  7 16:56 vdsm.log.bak
[root@iceman vdsm]# cd backup/
[root@iceman backup]# ls
[root@iceman backup]# cd ..
[root@iceman vdsm]# ls
backup  metadata.log  vdsm.log.bak
[root@iceman vdsm]# cat metadata.log 
[root@iceman vdsm]# service vdsmd status
Using /usr/share/vdsm/vdsm
VDS daemon is not running
}}}


AS A WORKAROUND I ADDED THESE LINES TO /etc/rc.local

{{{
[root@iceman ~]# cat /etc/rc.local 
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

service libvirtd stop
service vdsmd stop

service vdsmd start
service libvirtd start
}}}
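If the stop/start ordering alone turns out to be flaky at boot, a slightly more defensive variant of the same workaround is to retry the start instead of assuming a single attempt succeeds. This is only a sketch; `retry_start` is a helper name I made up, and the 3-attempt/1-second values are arbitrary:

```shell
# hypothetical helper: retry a start command a few times before giving up
retry_start() {
    name="$1"; shift
    n=0
    while [ "$n" -lt 3 ]; do
        if "$@" >/dev/null 2>&1; then
            echo "$name started"
            return 0
        fi
        n=$((n + 1))
        sleep 1
    done
    echo "$name failed to start after $n attempts" >&2
    return 1
}

# in rc.local this would replace the bare start calls, e.g.:
# retry_start vdsmd    service vdsmd start
# retry_start libvirtd service libvirtd start
```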


! Mounting NFS RPC host error

<<<
still has to be researched
<<<


! VirtIO

<<<
On Linux, when creating a new virtual disk, if you choose VirtIO the device name will be /dev/vda (then /dev/vdb, and so on).

On Windows, specific drivers are required to use VirtIO; see the KBASE links.
<<<


! When using vmware and KVM together

{{{
[karao@karl ~]$ cat /etc/rc.local 
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local


# temporarily removes the kvm module 
/etc/init.d/libvirtd stop
modprobe -r kvm_intel
modprobe -r kvm
}}}
<<<
The latest Itanium is the Montvale: http://en.wikipedia.org/wiki/List_of_Intel_Itanium_microprocessors#Montvale_.2890_nm.29

From Oracle's certification matrix, it is still the Montecito, although I saw one benchmark where the Montvale was used (http://www.intel.com/performance/server/itanium/summary.htm).

If they want to verify whether the Montvale is supported, they can file an SR for that. Below is the certification for 10gR2 (both single instance & RAC):

   10gR2 64-bit            Linux Itanium    Red Hat Enterprise 5    Certified
   10gR2 64-bit            Linux Itanium    Red Hat Enterprise 4    Certified
   10gR2 64-bit            Linux Itanium    SLES-9    Certified

   10gR2 RAC            Linux Itanium    Red Hat Enterprise 4    Certified
   10gR2 RAC            Linux Itanium    Red Hat Enterprise 3    Certified
   10gR2 RAC            Linux Itanium    SLES-8    Certified
   10gR2 RAC            Linux Itanium    SLES-9    Certified
   10gR2 RAC            Linux Itanium    Red Hat Enterprise 2.1    Certified


If they are in the process of evaluation, I would still go for the multicore Xeon (Nehalem). If they have not heard the news that Red Hat will not support Itanium on RHEL 6, they'd better read this: http://www.theregister.co.uk/2009/12/18/redhat_rhel6_itanium_dead/


Below are more articles regarding Itanium on the Oracle Support site:

Support of Linux and Oracle Products on Linux (Doc ID 266043.1)
How To Install Oracle RDBMS Software On Itanium Servers With Montecito Processors (Doc ID 400227.1)
Requirements for Installing Oracle 10gR2 RDBMS on RHEL 5 on Linux Itanium (ia64) (Doc ID 748378.1)
Frequently Asked Questions: Oracle E-Business Suite Support on Itanium (Doc ID 311717.1)
How To Identify A Server Which Has Intel® Montecito Processors Installed (Doc ID 401332.1)
Oracle® Database on Unix AIX®,HP-UX®,Linux®,Mac OS® X,Solaris®,Tru64 Unix® Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2) (Doc ID 169706.1)
Installing Oracle Data Integrator On Intel Itanium (64-bit) Hardware (Doc ID 451928.1)
<<<
The following are things to try:
1) Oracle Net Listener Connection Rate Limiter 
http://www.oracle.com/technetwork/database/enterprise-edition/oraclenetservices-connectionratelim-133050.pdf 
> setup another listener
> connect swingbench to that new listener
2) DRCP on JDBC
> setup DRCP on 11.2 database
> run swingbench using JDBC connection 


Example: Identifying Connection String Problems in JDBC Driver
Doc ID: Note:94091.1

https://jonathanlewis.wordpress.com/2015/12/03/five-hints/
https://www.doag.org/formes/pubfiles/7502432/2015-K-DB-Jonathan_Lewis-Five_Hints_for_Optimising_SQL-Praesentation.pdf


{{{

Merge / no_merge — Whether to use complex view merging
Push_pred / no_push_pred — What to do with join predicates to non-merged views
Unnest / no_unnest — Whether or not to unnest subqueries
Push_subq / no_push_subq — When to handle a subquery that has not been unnested
Driving_site — Where to execute a distributed query

}}}


.
http://jsonviewer.stack.hu/

http://www.jcon.no/oracle/?p=1942
<<<


    Part 1: Install/setup Oracle database (in docker)
    Part 2: Installing Java (JDK), Eclipse and Maven
    Part 3: Git, Oracle schemas and your first Java application
    Part 4: Your first JDBC Application
    Part 5: Spring-Boot, JdbcTemplate & DB migration (Using FlywayDB)
    Part 6: Spring-boot, JPA and Hibernate (this)

<<<
http://stackoverflow.com/questions/647116/how-to-decompile-a-whole-jar-file
http://stackoverflow.com/questions/31353/is-jad-the-best-java-decompiler
http://www.youtube.com/watch?v=mcWuYbn4NBg
on RHEL 4
{{{
-- INSTALL JAVA FROM SUN 
1) install rpm /usr/java/<version>
2) make symbolic link
	ln -s /usr/java/j2sdk1.4.2_16 /usr/java/jdk
3) "which java"
4) go to /etc/profile.d
5) edit "java.sh"
        [root@sqlnbcn-004 profile.d]# cat java.sh
        export JAVA_HOME='/usr/java/jdk'
        export PATH="${JAVA_HOME}/bin:${PATH}"
}}}

on RHEL 5
{{{
alternatives --config java

alternatives --install link name path priority
alternatives --install /usr/bin/java java /u01/app/oracle/product/11.2.0/dbhome_1/jdk/bin/java 2

alternatives --config java
java -version


[root@desktopserver ~]# java -version
java version "1.4.2"
gij (GNU libgcj) version 4.1.2 20080704 (Red Hat 4.1.2-51)

Copyright (C) 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


[root@desktopserver ~]# alternatives --install /usr/bin/java java /u01/app/oracle/product/11.2.0/dbhome_1/jdk/bin/java 2
[root@desktopserver ~]# alternatives --config java

There are 2 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*+ 1           /usr/lib/jvm/jre-1.4.2-gcj/bin/java
   2           /u01/app/oracle/product/11.2.0/dbhome_1/jdk/bin/java

Enter to keep the current selection[+], or type selection number: 2

[root@desktopserver ~]# alternatives --config java

There are 2 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*  1           /usr/lib/jvm/jre-1.4.2-gcj/bin/java
 + 2           /u01/app/oracle/product/11.2.0/dbhome_1/jdk/bin/java

Enter to keep the current selection[+], or type selection number: ^C


[root@desktopserver ~]# java -version
java version "1.5.0_30"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_30-b03)
Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_30-b03, mixed mode)
}}}

{{{
-- USE JAVA SHIPPED WITH ORACLE SOFTWARE
$ cd $ORACLE_HOME/jre/1.4.2/bin
$ setenv PATH $ORACLE_HOME/jre/1.4.2/bin:$PATH
}}}

-- install guides fedora
http://fedorasolved.org/browser-solutions/java-i386
http://www.mjmwired.net/resources/mjm-fedora-f12.html#java
http://oliver.net.au/?p=92

jenkins CI/CD and Github in One Hour Video Course
https://learning.oreilly.com/videos/jenkins-ci-cd-and/50106VIDEOPAIML/50106VIDEOPAIML-c1_s1
http://oraclepoint.com/oralife/2012/02/16/how-to-set-up-the-job-scheduling-via-sudo-on-oem/
https://blogs.oracle.com/optimizer/optimizer-transformation:-join-predicate-pushdown
<<<
The decision whether to push down join predicates into a view is determined by evaluating the costs of the outer query with and without the join predicate pushdown transformation under Oracle's cost-based query transformation framework.
The join predicate pushdown transformation applies to both non-mergeable views and mergeable views and to pre-defined and inline views as well as to views generated internally by the optimizer during various transformations. The following shows the types of views on which join predicate pushdown is currently supported.

UNION ALL/UNION view
Outer-joined view
Anti-joined view
Semi-joined view
DISTINCT view
GROUP-BY view
<<<
! left join and left outer join are the same
https://stackoverflow.com/questions/5706437/whats-the-difference-between-inner-join-left-join-right-join-and-full-join#:~:text=INNER%20JOIN%3A%20returns%20rows%20when,matches%20in%20the%20left%20table.&text=Note%20%3AIt%20will%20return%20all%20selected%20values%20from%20both%20tables.
https://www.quora.com/What-is-the-difference-between-left-join-and-left-outer-join-in-sql#:~:text=In%20SQL%2C%20the%20left%20join,same%20results%20as%20left%20join.
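The same semantics can be demonstrated outside SQL with the coreutils `join` command, which does an inner join by default; `-a 1` keeps the unmatched rows from the left file, which is exactly what LEFT JOIN / LEFT OUTER JOIN (the OUTER keyword is optional) does:

```shell
# two tiny "tables", joined on the first column (join needs sorted input)
left=$(mktemp); right=$(mktemp)
printf 'a 1\nb 2\nc 3\n' > "$left"
printf 'a x\nb y\n' > "$right"
inner=$(join "$left" "$right")      # inner join: the unmatched row "c 3" is dropped
outer=$(join -a 1 "$left" "$right") # -a 1 keeps it, like LEFT [OUTER] JOIN
echo "inner: $inner"
echo "outer: $outer"
rm -f "$left" "$right"
```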


http://jonathanlewis.wordpress.com/2010/08/02/joins/
http://jonathanlewis.wordpress.com/2010/08/09/joins-nlj/
http://jonathanlewis.wordpress.com/2010/08/10/joins-hj/
http://jonathanlewis.wordpress.com/2010/08/15/joins-mj/


-- Optimizing two table join - video! TROUG
http://jonathanlewis.wordpress.com/2011/06/23/video/


SQL Joins Graphically
http://db-optimizer.blogspot.com/2010/09/sql-joins-graphically.html based on http://www.codeproject.com/KB/database/Visual_SQL_Joins.aspx?msg=2919602
http://db-optimizer.blogspot.com/2009/06/sql-joins.html based on http://blog.mclaughlinsoftware.com/oracle-sql-programming/basic-sql-join-semantics/
http://www.gplivna.eu/papers/sql_join_types.htm
http://www.oaktable.net/content/sql-joins-visualized-surprising-way
https://stevestedman.com/2015/05/tsql-join-types-poster-version-4/ 

https://www.techonthenet.com/oracle/joins.php   <-- GOOD STUFF
http://searchoracle.techtarget.com/answer/Alternative-to-LEFT-OUTER-JOIN
http://docwiki.embarcadero.com/DBOptimizer/en/Subquery_Diagramming
http://blog.mclaughlinsoftware.com/oracle-sql-programming/basic-sql-join-semantics/


! visualized 

[img(100%,100%)[https://i.imgur.com/MsLpVJ2.png]]
[img(100%,100%)[https://i.imgur.com/uDbi422.png]]
[img(100%,100%)[https://i.imgur.com/LGlvRD3.png]]








.
http://www.joomla.org/
http://docs.joomla.org/Main_Page
http://www.cloudaccess.net/joomla-training-video-series-beyond-the-basics.html    <-- GOOD STUFF tutorials

http://docs.joomla.org/Can_you_remove_the_%22Powered_by_Joomla!%22_message%3F  <-- remove unnecessary stuff 
http://docs.joomla.org/Changing_the_site_favicon

http://forums.digitalpoint.com/showthread.php?t=526998 <-- AGGREGATOR
http://3dwebdesign.org/view-document-details/16-joomla-rss-feed-aggregator.html
http://www.associatedcontent.com/article/420973/mastering_joomla_how_to_get_rss_news.html
http://goo.gl/4w1lf
http://extensions.joomla.org/extensions/image/14087
http://3dwebdesign.org/en/rss-feed-aggregators-comparison.html
http://3dwebdesign.org/en/joomla-extensions/wordpress-aggregator-lite.html
http://3dwebdesign.org/en/wordpress-aggregators/wordpress-aggregator-platinum




http://blog.scottlowe.org/2012/08/21/working-with-kvm-guests/
https://networkbuilders.intel.com/
Network Function Virtualization Packet Processing Performance of Virtualized Platforms with Linux* and Intel® Architecture® https://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf
https://confluence.atlassian.com/display/AGILE/Tutorial+-+Tracking+a+Kanban+Team

http://en.wikipedia.org/wiki/Pomodoro_Technique
http://www.businessinsider.com/productivity-hacks-from-startup-execs-2014-5
http://www.quora.com/Productivity/As-a-startup-CEO-what-is-your-favorite-productivity-hack/answer/Paul-A-Klipp?srid=n2Fg&share=1
https://kano.me/app
help.kano.me
kano.me/world
kano.me/shop
youtube/teamkano
* Optimizing Oracle Performance - Chapter 7.1.1 The sys call Transition
* understanding.the.linux.kernel http://oreilly.com/catalog/linuxkernel/chapter/ch10.html
{{{
Be aware that a preempted process is not suspended, since it remains in the TASK_RUNNING state; it simply no longer uses the CPU.

Some real-time operating systems feature preemptive kernels, which means that a process running in Kernel Mode can be interrupted after any instruction, just as it can in User Mode. The Linux kernel is not preemptive, which means that a process can be preempted only while running in User Mode; nonpreemptive kernel design is much simpler, since most synchronization problems involving the kernel data structures are easily avoided (see the section "Nonpreemptability of Processes in Kernel Mode" in Chapter 11, Kernel Synchronization).
}}}


Understanding User and Kernel Mode http://www.codinghorror.com/blog/2008/01/understanding-user-and-kernel-mode.html

http://kevinclosson.wordpress.com/2012/04/16/critical-analysis-meets-exadata/
''Exadata Critical Analysis Part I'' http://www.youtube.com/watch?v=K3lXkIuBJqk&feature=youtu.be
''Exadata Critical Analysis Part II'' http://www.youtube.com/watch?v=0ii5xV9sicM&feature=youtu.be
''Q&A'' http://kevinclosson.wordpress.com/criticalthinking/

''Exadata Deep Dive Part 1'' http://www.youtube.com/watch?v=dw-PnKDrcDE

[[Platform Topics for DBAs]]
http://blog.tanelpoder.com/2010/02/17/how-to-cancel-a-query-running-in-another-session/
http://oracle-randolf.blogspot.com/2011/11/how-to-cancel-query-running-in-another.html

! new 
{{{
set serveroutput on 
BEGIN
      FOR c IN (
          SELECT  username, machine, osuser, sid, serial#, inst_id
          FROM sys.gv_$session
          WHERE sql_id = '549wyn38pr0hd'
          
      )
      LOOP
          EXECUTE IMMEDIATE 'alter system kill session ''' || c.sid || ', ' || c.serial# || ', @' || c.inst_id || ''' immediate';
          dbms_output.put_line('Kill session : ''' || c.username || ', ' || c.machine || ', ' || c.osuser || ', ' || c.sid || ', ' || c.serial# || ', @' || c.inst_id || ''' ');
      END LOOP;
    END;
    /
}}}

{{{
  spool TERMINATE_SESSIONS.SQL

select /* usercheck */ 'alter system disconnect session '''||s.sid||','||s.serial#||''''||' post_transaction;'
from v$process p, v$session s, v$sqlarea sa
where p.addr=s.paddr
and   s.username is not null
and   s.sql_address=sa.address(+)
and   s.sql_hash_value=sa.hash_value(+)
and   s.sql_id = '158gjtpj0vzkc'
--and   sa.sql_text NOT LIKE '%usercheck%'
--and   lower(sa.sql_text) LIKE '%cputoolkit%'
order by status desc;

  spool off
  set echo on
  set feedback on
}}}
<<showtoc>>


! ebook paperwhite convert/transfer
https://calibre-ebook.com/download
http://www.howtogeek.com/69481/how-to-convert-pdf-files-for-easy-ebook-reading/
http://tidbits.com/article/16691
How to send large files to @free.kindle.com to get converted https://www.amazon.com/forum/kindle?_encoding=UTF8&cdForum=Fx1D7SY3BVSESG&cdThread=Tx1EQT6ICAB7D7A
https://transfer.pcloud.com/
Previous Announcements from New in the Knowledge Base
 	Doc ID:	Note:370936.1
Kubernetes Microservices https://learning.oreilly.com/videos/kubernetes-microservices/10000DIHV201804/?autoplay=false


! tutorial for oracle 
https://www.devart.com/dotconnect/oracle/articles/tutorial_linq.html
<<showtoc>>

! what is this? 
There's an intermittent slowness in the SQLLDR process coming from any of the 150 app servers.
The SQLLDR process was just spinning on CPU; the ASH data shows "ON CPU" and nothing else.
So what we did was profile the good and the bad (long-running/slow) sessions with snapper and pstack and compare the numbers:
{{{
---------------------------------------------------------------------------------------------------------------
  ActSes   %Thread | INST | SQL_ID          | SQL_CHILD | EVENT                               | WAIT_CLASS
---------------------------------------------------------------------------------------------------------------
    1.00    (100%) |    1 | 9vgb48rzqvqqz   | 0         | ON CPU                              | ON CPU
}}}

From the pstack comparison of the good and the bad runs: KDZH is the EHCC compression function, so the underlying table is EHCC compressed, and we already have a bug open on SQLLDR and EHCC.
And these are the other related bugs/issues: 
Bug 14690273 : SQLLDR INSERTS VERY SLOW/HANGS WITH ADVANCED COMPRESSION (EXADATA)
ORA-04030 A Direct Load Into Many Partitions With Huge Allocation Request From "KLLCQGF:KLLSLTBA" (Doc ID 1578849.1)

So the session is just spinning on CPU, but under the covers it seems to be stuck in that compression function.
Look how low the numbers are in general for the slow run, given that we sampled it for 5 minutes and the good one for just a few seconds (only 10K-range numbers on ENQG vs the good run).
Possibly the slow run (for whatever reason) is just holding the other enqueues back, which is why TM/TX show up.
TX is not even the cause here; it's more "whatever is left" from the "big chunk of the stuff is stuck" (potentially because of KDZ - compression).


! the commands
{{{
@snapper all,gather=a 5 1 (<instance>, <pid>)        <- create a sql file with 60 lines of this snapper command and spool in a text file, that's 5mins sample 

pstack <ospid>       <- as root user
}}}
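The "60 lines of this snapper command" file can be generated rather than typed. A sketch (keep the `<instance>` and `<pid>` placeholders until you substitute the real values):

```shell
# writes a 60-line driver script for a 5-minute (60 x 5s) snapper sample;
# replace <instance> and <pid> with the real values before running it in SQL*Plus
for i in $(seq 1 60); do
    echo "@snapper all,gather=a 5 1 (<instance>, <pid>)"
done > snapper_5min.sql
wc -l snapper_5min.sql
```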


! session - good profile
[img(90%,90%)[ https://raw.githubusercontent.com/karlarao/blog/82d44c4578e610044eef25f62b6c25d4ffb181b7/images/20160427_lios/good.png ]]

!! good pstack 
{{{
#0  0x000000000950707d in kcbgtcr () kcb  cache   manages Oracle's buffer cache operation as well as operations used by capabilities such as direct load, has clusters , etc.
#1  0x000000000957a178 in ktrget3 ()  txn/lcltx  ktr - kernel transaction read consistency
#2  0x000000000957981e in ktrget2 ()  txn/lcltx  ktr - kernel transaction read consistency
#3  0x00000000094d5bc7 in kdst_fetch () kds kdt kdu   ram/data    operations on data such as retrieving a row and updating existing row data
#4  0x0000000000cb87ec in kdstfRRRRRRRRRRRkmP ()  kds kdt kdu   ram/data    operations on data such as retrieving a row and updating existing row data
#5  0x00000000094bd0f4 in kdsttgr ()  kds kdt kdu   ram/data    operations on data such as retrieving a row and updating existing row data
#6  0x000000000976f979 in qertbFetch ()  sqlexec/rowsrc row source operators
#7  0x000000000269f3f3 in qergsFetch ()  sqlexec/rowsrc row source operators
#8  0x0000000009615052 in opifch2 ()
#9  0x000000000961457e in opifch ()
#10 0x000000000961b68f in opiodr ()
#11 0x00000000096fbdd7 in rpidrus ()
#12 0x000000000986e3d8 in skgmstack ()
#13 0x00000000096fd8c8 in rpiswu2 ()
#14 0x00000000096fceeb in rpidrv ()
#15 0x00000000096ff420 in rpifch ()
#16 0x00000000010eed56 in ktsi_is_dmts ()
#17 0x0000000000c2e2b0 in kdbl_is_dmts ()
#18 0x0000000000c2bc8a in kdblfpl ()
#19 0x0000000000c0b629 in kdblfl ()
#20 0x000000000203ab01 in klafin ()
#21 0x0000000001cde467 in kpodpfin ()
#22 0x0000000001cdc35b in kpodpmop ()
#23 0x000000000961b68f in opiodr ()
#24 0x000000000980a6af in ttcpip ()
#25 0x000000000196d78e in opitsk ()
#26 0x00000000019722b5 in opiino ()
#27 0x000000000961b68f in opiodr ()
#28 0x00000000026ecb43 in opirip ()
#29 0x000000000196984d in opidrv ()
#30 0x0000000001f56827 in sou2o ()
#31 0x0000000000a2a236 in opimai_real ()
#32 0x0000000001f5cb45 in ssthrdmain ()
#33 0x0000000000a2a12d in main ()
}}}


! session - bad profile
[img(90%,90%)[ https://raw.githubusercontent.com/karlarao/blog/82d44c4578e610044eef25f62b6c25d4ffb181b7/images/20160427_lios/bad.png ]]

!! bad pstack 
{{{

#0  0x0000000002d29459 in kdzca_cval_init ()
#1  0x0000000002d05d4a in kdzcompress ()
#2  0x0000000002d05c12 in kdzcompress_target_size ()
#3  0x0000000000cb994d in kdzhcl () ehcc related
#4  0x0000000000c10818 in kdblsync () kdbl kdc kdd  ram/data    support for direct load operation, cluster space management and deleting rows
#5  0x0000000000c0e851 in kdblcmtt () kdbl kdc kdd  ram/data    support for direct load operation, cluster space management and deleting rows
#6  0x000000000203a814 in kladsv () kla klc klcli klx   tools/sqlldr    support for direct path sql loader operation
#7  0x0000000001cdc3f8 in kpodpmop ()   kpoal8 kpoaq kpob kpodny kpodp kpods kpokgt kpolob kpolon kpon  progint/kpo support for programmatic operations
#8  0x000000000961b68f in opiodr ()
#9  0x000000000980a6af in ttcpip ()
#10 0x000000000196d78e in opitsk ()
#11 0x00000000019722b5 in opiino ()
#12 0x000000000961b68f in opiodr ()
#13 0x00000000026ecb43 in opirip ()
#14 0x000000000196984d in opidrv ()
#15 0x0000000001f56827 in sou2o ()
#16 0x0000000000a2a236 in opimai_real ()
#17 0x0000000001f5cb45 in ssthrdmain ()
#18 0x0000000000a2a12d in main ()

}}}



! non-viz way, just do a grep/sort on raw data


{{{
$ cat snapper_all_bad_5min.txt | grep ENQG | sort -n -k9

    -1  @1,           , ENQG, TX - Transaction                                          ,          8064,      1.53k,         ,             ,          ,           ,
    -1  @1,           , ENQG, TX - Transaction                                          ,          8109,      1.52k,         ,             ,          ,           ,
    -1  @1,           , ENQG, TX - Transaction                                          ,          8447,      1.46k,         ,             ,          ,           ,
    -1  @1,           , ENQG, TX - Transaction                                          ,          8716,      1.65k,         ,             ,          ,           ,
    -1  @1,           , ENQG, TX - Transaction                                          ,          9051,      1.59k,         ,             ,          ,           ,
    -1  @1,           , ENQG, TX - Transaction                                          ,          9196,      1.52k,         ,             ,          ,           ,
    -1  @1,           , ENQG, TX - Transaction                                          ,          9382,      1.76k,         ,             ,          ,           ,
    -1  @1,           , ENQG, TX - Transaction                                          ,          9450,      1.79k,         ,             ,          ,           ,
    -1  @1,           , ENQG, TX - Transaction                                          ,          9940,      1.74k,         ,             ,          ,           ,
    -1  @1,           , ENQG, TX - Transaction                                          ,         11031,      1.85k,         ,             ,          ,           ,
}}}
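Beyond grep/sort, awk can summarize the ENQG column directly. `summarize_enqg` is a helper name I made up; it assumes the comma-separated snapper layout shown above (the cumulative gets value is the 5th comma field):

```shell
# summarize the ENQG gets column from a spooled snapper file:
# count of samples, average, and maximum
summarize_enqg() {
    grep ENQG "$1" | awk -F',' '
        { v = $5 + 0; s += v; n++; if (v > max) max = v }
        END { if (n > 0) printf "n=%d avg=%.0f max=%d\n", n, s/n, max }'
}
# usage: summarize_enqg snapper_all_bad_5min.txt
```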



! references
Tanel’s blog on gather=a option http://blog.tanelpoder.com/2009/11/19/finding-the-reasons-for-excessive-logical-ios/ . 



{{{
ALTER TABLE .. MODIFY LOB (..)(CACHE);
alter table mytable modify lob (mycolumn) (cache) ;
}}}

https://laimisnd.wordpress.com/2011/03/25/lobs-and-flashback-database-performance/
LOB performance guidelines http://www.oracle.com/technetwork/articles/lob-performance-guidelines-128437.pdf
http://support.esri.com/fr/knowledgebase/techarticles/detail/35521
{{{
High fsync() times to VRTSvxfs Files can be reduced using Solaris VMODSORT Feature [ID 842718.1]

Symptoms

When RDBMS processes perform cached writes to files (i.e. writes which are not issued by DBWR) 
such as to a LOB object which is

stored out-of-line (e.g. because the LOB column length exceeds 3964 bytes)
and for which "STORE AS ( NOCACHE )" option has not been used
then increased processing times can be experienced which are due to longer fsync() call times to flush the dirty pages to disk. 

Changes

Performing (datapump) imports or writes to LOB segments and

1. running "truss -faedDl -p " for the shadow or background process doing the writes 
    shows long times spent in fsync() call.

Example:

create table lobtab(n number not null, c clob);

-- insert.sql
declare 
mylob varchar2(4000); 
begin 
for i in 1..10 loop 
mylob := RPAD('X', 3999, 'Z'); 
insert into lobtab values (i , rawtohex(mylob)); 
end loop; 
end; 
/

truss -faedDl sqlplus user/passwd @insert 

shows 10 fsync() calls being executed possibly having high elapsed times:

25829/1: 1.3725 0.0121 fdsync(257, FSYNC) = 0 
25829/1: 1.4062 0.0011 fdsync(257, FSYNC) = 0 
25829/1: 1.4112 0.0008 fdsync(257, FSYNC) = 0 
25829/1: 1.4164 0.0010 fdsync(257, FSYNC) = 0 
25829/1: 1.4213 0.0008 fdsync(257, FSYNC) = 0 
25829/1: 1.4508 0.0008 fdsync(257, FSYNC) = 0 
25829/1: 1.4766 0.0207 fdsync(257, FSYNC) = 0 
25829/1: 1.4821 0.0006 fdsync(257, FSYNC) = 0 
25829/1: 1.4931 0.0063 fdsync(257, FSYNC) = 0 
25829/1: 1.4985 0.0007 fdsync(257, FSYNC) = 0 
25829/1: 1.5406 0.0002 fdsync(257, FSYNC) = 0




2. Solaris lockstat command showing frequent hold events for fsync internal functions: 



Example:

Adaptive mutex hold: 432933 events in 7.742 seconds (55922 events/sec)  
------------------------------------------------------------------------
Count indv cuml rcnt nsec Lock Hottest Caller 
15052 48% 48% 0.00 385437 vph_mutex[32784] pvn_vplist_dirty+0x368 
nsec   ------ Time Distribution ------ count Stack 
8192   |@@@                            1634 vx_putpage_dirty+0xf0 
16384  |                               187 vx_do_putpage+0xac 
32768  |                               10 vx_fsync+0x2a4 
65536  |@@@@@@@@@@@@@@@@@@@@@@         12884 fop_fsync+0x14 
131072 |                               255 fdsync+0x20 
262144 |                               30 syscall_trap+0xac 


   

3. AWR report would show increased CPU activity (SYS_TIME is unusual high in Operating System Statistics section).
Cause

The official Sun document explaining this issue is former Solaris Alert # 201248 and new

"My Oracle Support" Doc Id 1000932.1



From a related Sun document:

Sun introduced a page ordering vnode optimization in Solaris 9 
and 10. The optimization includes a new vnode flag, VMODSORT, 
which, when turned on, indicates that the Virtual Memory (VM) 
should maintain the v_pages list in an order depending on if 
a page is modified or unmodified. 

Veritas File System (VxFS) can now take advantage of that flag, 
which can result in significant performance improvements on 
operations that depend on flushing, such as fsync. 

This optimization requires the fixes for Sun BugID's 6393251 and 
6538758 which are included in Solaris kernel patches listed below. 


Symantec information about VMODSORT can be found in the Veritas 5.0 MP1RP2 Patch README: 

https://sort.symantec.com/patch/detail/276
Solution



The problem is resolved by applying Solaris patches and enabling the VMODSORT
feature in /etc/system:

1. apply patches as per Sun document (please always refer to 
   the Sun alert for the most current recommended version of patches):

SPARC Platform 

VxFS 4.1 (for Solaris 9)  patches 122300-11 and 123828-04 or later 
VxFS 5.0 (for Solaris 9)  patches 122300-11 and 125761-02 or later 
VxFS 4.1 (for Solaris 10) patches 127111-01 and 123829-04 or later 
VxFS 5.0 (for Solaris 10) patches 127111-01 and 125762-02 

x86 Platform 

VxFS 5.0 (for Solaris 10) patches 127112-01 and 125847-01 or later  

2. enable vmodsort in /etc/system and reboot server
   i.e. add line to /etc/system after vxfs forceload: 

   set vxfs:vx_vmodsort=1 * enable vxfs vmodsort

Please be aware that enabling VxFS VMODSORT functionality without 
the correct OS kernel patches can result in data corruption.

 

References

http://sunsolve.sun.com/search/document.do?assetkey=1-66-201248-1

}}}
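To turn the truss -D output above into a single number, the per-call delta column (third field) can be summed. `sum_fdsync` is a made-up helper name and assumes the truss line layout shown in the note:

```shell
# total the fdsync() call deltas from a "truss -faedDl" capture
sum_fdsync() {
    grep 'fdsync(' "$1" | awk '
        { s += $3; n++ }
        END { printf "%d fdsync calls, %.4f s total\n", n, s }'
}
# usage: sum_fdsync truss_output.txt
```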

http://neerajbhatia.wordpress.com/2011/10/07/capacity-planning-and-performance-management-on-ibm-powervm-virtualized-environment/
also check on this youtube video http://www.youtube.com/watch?v=WphGQx-N98U PowerBasics What is a Virtual Processor? and Shared Processor 

{{{


Some possible actions in case of threshold violations can be investigating the individual partition contributing to the server's utilization, workload management if possible or as a last resort stop/migrate least critical partition. Workload behavior of partitions is very important and configuration needs to be done in such a way that not many partitions should compete for the processor resources at the same time.

One gauge of system's health is CPU run queue length. The run-queue length represents the number of processes that are currently running or waiting (queued) to run. Setting thresholds for run queue length is tricky in partitioned environment because uncapped partitioned can potentially consume more than their entitlement up to number of virtual processors. SMT introduced further complexity as it enable parallel execution: 2 simultaneous thread on Power5 and Power6 and 4 on Power7 environments.

To summarize – entitlement should be defined in such a way that it represents “nearly right” capacity requirements for a partition. Thus on average each partition’s entitled capacity utilization would be close to 100 percent and there will be a balance between capacity donors and borrowers in the system. While reviewing a partition’s utilization it’s important to know that any capacity used beyond entitled capacity isn’t guaranteed (as it might be some other partition’s entitlement). Therefore, if a partition’s entitled CPU utilization is beyond 100 percent, it might be forced back down to 100 percent if another partition requires that borrowed capacity. Processing units also decide the number of partitions that can run on a system. As the total processing units of all partitions running on a system cannot be more than the number of physical processors, by assigning smaller processing units you can maximize the number of partitions on a system.

- Have separate shared-processor pools for production partitions. But the scope of this solution is limited as the multiple shared-processor pools capability is only available on Power6 and Power7 based systems.
- Configure the non-production partitions as capped. Capped partitions are restricted from consuming additional processor cycles beyond their entitled capacity.
- A more flexible way is to configure the non-production partitions as uncapped and keep their uncapped weight to a minimum. The number of virtual processors should be set to the maximum physical CPUs which you think a partition should consume. This will effectively cap the partition at its number of virtual processors. The benefit of this approach is that non-production partitions can get additional resources up to their virtual processors but at the same time will remain harmless to production partitions with higher uncapped weights.


- Determine the purpose and nature of the applications to be run on the partition, like a web server supporting an online web store or the batch database of a banking system.
- Understand the business workload profile.
- Identify any seasonal or periodic trends and their impact on the workload profile.
- Understand the busiest hour in the working day, the busiest day in the week, and the busiest month of the year.
- Calculate the processing requirements necessary to support the workload profiles.

It is always better to measure and forecast the capacity in business metric terms because that's what business understands and same units are used by business to perceive the performance, throughput and forecast the business demand. We will call our business metrics as metric1 and metric2.
Clearly current value of entitled capacity of 2.0 processing units is not going to support additional workload. So based on this analysis, we should increase the entitled CPUs to 4 and to keep some margin for unexpected workload, set the virtual processors to 5 or 6. Another option which is worth considering for reducing the pressure on additional processing capacity is to shift metric2 workload by few hours, if possible. It will reduce the chances of running two business processes at the same time and result in CPU spikes. Such workload management options should be more important from the business perspective than their technical implications. I have simplified the illustration a lot but the principle of capacity planning would be the same
}}}
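The projection above can be sketched as simple arithmetic. This is a hypothetical example, not the original analysis: the rates for metric1 and the entitlement figures below are made-up numbers assumed for illustration.

```shell
# Hypothetical sketch: project required entitlement from a business metric.
# Assumed numbers (not real measurements): 2.0 processing units currently
# sustain 1000 metric1 transactions/hour; the forecast is 1800/hour.
current_ent=2.0
current_rate=1000   # metric1 per hour at current entitlement
forecast_rate=1800  # forecast metric1 per hour

# scale entitlement linearly with the business metric
required=$(awk -v e="$current_ent" -v c="$current_rate" -v f="$forecast_rate" \
  'BEGIN { printf "%.1f", e * f / c }')
echo "required entitlement: $required processing units"

# round up and add headroom when choosing virtual processors
echo "suggested VPs: $(awk -v r="$required" 'BEGIN { print int(r) + 2 }')"
```

Linear scaling is a simplification; it only holds while the workload's CPU cost per transaction stays roughly constant.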


http://www.ibm.com/developerworks/wikis/display/WikiPtype/CPU+frequency+monitoring+using+lparstat
''A Comparison of Virtualization Features of HP-UX, Solaris & AIX''  http://www.osnews.com/comments/20393
''A comparison of virtualization features of HP-UX, Solaris and AIX'' http://www.ibm.com/developerworks/aix/library/au-aixvirtualization/?ca=dgr-jw30CompareFeatures&S_TACT=105AGX59&S_cmp=GRsitejw30



https://www.ibm.com/developerworks/mydeveloperworks/blogs/aixpert/entry/it_s_good_when_it_goes_wrong_and_i_am_on_holiday_nmon_question_peaks291?lang=en
<<<
"Shared CPU, Uncapped LPAR utilisation number do not look right nor does the average of the logical CPUs?"
Correct. They are very misleading.
I have been pointing this out for 5+ years.
For these types of LPARs, you need to monitor the physical CPU use.
The problem is the utilisation numbers (User+System) get to roughly 95% as you get to the Entitlement and stay at just below 100% as you use double, quadruple or higher numbers of physical CPU. They do not show you how much CPU time you are using above Entitlement.
Plus you can't average the logical CPUs (these are the SMT threads) to get the machine average because they are time-sharing the physical CPUs.
Also for Dedicated CPU LPARs all the Shared Processor stats don't mean anything, so they are not collected and there is no LPAR Tab in the nmon Analyser.
Lesson: POWER systems are function rich with advanced features that means we can't use 1990's stats to understand them.
<<<
<<<
There are two main critical LPARs on the heavily over committed machine - By this I mean that if you add up the LPAR Entitlements of a machine they have to add up to at most to the number of physical CPUs in the shared pool. But they have most LPARs Uncapped with the Virtual CPU (spreading factor) number much higher than the Entitlement.  Normally, I don't recommend this for performance, as the LPAR has to compete for CPU cycles above the Entitlement.  In this case, the two main LPARs have an Entitlement of 6 to 10 CPUs but a Virtual CPU of 40.  Now the bad news, these two LPARs are busy at the same time - they are doing a database unload in one and a load of the same data in the other LPAR.   If I tell you the machine has 64 physical CPUs, you can immediately see the problem. Both LPARs can't get 40 CPUs at the same time (we can't run 80 Virtual CPUs flat-out on 64 physical CPUs) and that does not include the other LPARs also running.
<<<

''vmstat physical cpu''
http://aix4admins.blogspot.com/2011/09/vmstat-t-5-3-shows-3-statistics-in-5.html
{{{
To measure CPU utilization, look at us+sy together (and compare it to physc):
- if us+sy is consistently greater than 80%, the CPU is approaching its limits (but check physc as well, and "sar -P ALL" for each lcpu)

- if us+sy = 100% -> possible CPU bottleneck, but in an uncapped shared LPAR check physc as well.

- if sy is high, your application is issuing many system calls and asking the kernel to do work. It measures how heavily the application is using kernel services.

- if sy is higher than us, the system is spending less time on real work (not good)


Don't forget to compare these values with outputs where each logical CPU can be seen (like "sar -P ALL 1 5")

Some examples where the physical consumption of a CPU should also be checked when SMT is on:
- usr+sys=16%, but physc=0.56: at first sight only 16% of a CPU looks utilized, but actually more than half of a physical CPU (0.56) is being used.

- if us+sys=100 and physc=0.45, we have to look at both. If someone says 100% is used, then 100% of what? 100% of less than half a CPU (physc=0.45) is used.

- %usr+%sys=83% for lcpu 0 (output from sar). It looks like a high number at first sight, but if you check physc you can see that only 0.01 physical cores have been used, and the entitled capacity is 0.20, so this 83% is actually very little CPU consumption.

}}}
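The mismatch between lcpu percentages and physical consumption is easy to show with the numbers from the examples above (the us/sy/physc/ent values are taken from those examples, not from a live system):

```shell
# Sketch with the example numbers above: lcpu utilization looks modest
# (us+sy=16%) while physc shows 0.56 physical cores consumed.
us=10; sy=6          # percent, as vmstat/sar would report per lcpu
physc=0.56           # physical cores consumed (physc column)
ent=0.20             # entitled capacity of the LPAR

echo "us+sy          : $((us + sy))%"
# consumption relative to entitlement, i.e. what entc reports
awk -v p="$physc" -v e="$ent" \
  'BEGIN { printf "entc (physc/ent): %.0f%%\n", 100 * p / e }'
# -> 16% lcpu utilization, yet 280% of entitlement consumed
```

This is why the notes above insist on comparing us+sy against physc (and entitlement) rather than reading the percentage on its own.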







my LVM config conversation with Rhojel Echano, showing how I configured the devices and the idea/reasoning behind it; it also shows the partition table and layout
https://www.evernote.com/shard/s48/sh/fd84183b-293b-45b1-8d89-3fc13e945506/16f222922fe85eeed19aaa722bf1ff42

remember the beginning of the disk is at the outer edge (faster), so /dev/sdb1 is at the outer edge and the subsequent partitions go inwards (slower)
http://techreport.com/forums/viewtopic.php?f=5&t=3843
[img(95%,95%)[ https://lh4.googleusercontent.com/-hzWcpuQsKmw/UjN31J0Zz_I/AAAAAAAACBo/AilxCoeE0w4/w1185-h450-no/desktopserverdisklayout.png ]]


! OEL 6 (current)
{{{

# SWAP
pvcreate /dev/sda3 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
vgcreate vgswap /dev/sda3 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
lvcreate -n lvswap -i 8 -I 4096 vgswap -l 5112
mkswap /dev/vgswap/lvswap
/dev/vgswap/lvswap      swap    swap    defaults        0 0     <-- add this in fstab
swapon -va
cat /proc/swaps

# Oracle
pvcreate /dev/sda5 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2 /dev/sdg2 /dev/sdh2
vgcreate vgoracle /dev/sda5 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2 /dev/sdg2 /dev/sdh2
lvcreate -n lvoracle -i 8 -I 4096 vgoracle -l 15368
mkfs.ext3 /dev/vgoracle/lvoracle

# VBOX
pvcreate /dev/sda6 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3
vgcreate vgvbox /dev/sda6 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3
lvcreate -n lvvbox -i 8 -I 4096 vgvbox -l 624648
mkfs.ext3 /dev/vgvbox/lvvbox

# ASM  <-- ASM disks  
pvcreate /dev/sda7 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5 /dev/sdg5 /dev/sdh5


# RECO
pvcreate /dev/sda8 /dev/sdb6 /dev/sdc6 /dev/sdd6 /dev/sde6 /dev/sdf6 /dev/sdg6 /dev/sdh6
vgcreate vgreco /dev/sda8 /dev/sdb6 /dev/sdc6 /dev/sdd6 /dev/sde6 /dev/sdf6 /dev/sdg6 /dev/sdh6
lvcreate -n lvreco -i 8 -I 4096 vgreco -l 370104
mkfs.ext3 /dev/vgreco/lvreco


[root@desktopserver dev]# lvdisplay  | egrep "LV Name|Size"
  LV Name                lvreco
  LV Size                1.41 TiB
  LV Name                lvvbox
  LV Size                2.38 TiB
  LV Name                lvoracle
  LV Size                60.03 GiB
  LV Name                lvswap
  LV Size                19.97 GiB				


[root@desktopserver dev]# df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/sda2     ext4     22G   15G  5.9G  72% /
tmpfs        tmpfs    7.9G   76K  7.9G   1% /dev/shm
/dev/sda1     ext4    291M   55M  221M  20% /boot
/dev/mapper/vgoracle-lvoracle
              ext3     60G  180M   56G   1% /u01
/dev/mapper/vgvbox-lvvbox
              ext3    2.4T  200M  2.3T   1% /vbox
/dev/mapper/vgreco-lvreco
              ext3    1.4T  198M  1.4T   1% /reco





#### UDEV!!!


-- oel6
[root@desktopserver ~]# scsi_id -g -u -d /dev/sda7
35000c50038257afa
[root@desktopserver ~]# scsi_id -g -u -d /dev/sdb5
35000c50038276171
[root@desktopserver ~]# scsi_id -g -u -d /dev/sdc5
350014ee2b2d7f017
[root@desktopserver ~]# scsi_id -g -u -d /dev/sdd5
350014ee2082d419c
[root@desktopserver ~]# scsi_id -g -u -d /dev/sde5
35000c500382b0b28
[root@desktopserver ~]# scsi_id -g -u -d /dev/sdf5
35000c50038274bcb
[root@desktopserver ~]# scsi_id -g -u -d /dev/sdg5
35000c50038270d54
[root@desktopserver ~]# scsi_id -g -u -d /dev/sdh5
35000c50038278abf



If you are using a subpartition of the device (for short stroking on the fast area of the disk), it is better to filter on the device name plus the major/minor numbers of the subpartition




   * edit the scsi_id.config file




[root@desktopserver ~]# vi /etc/scsi_id.config
# add this line
options=-g



   * create the UDEV rules

vi /etc/udev/rules.d/99-oracle-asmdevices.rules


KERNEL=="sda7", SYSFS{dev}=="8:7"  , NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdb5", SYSFS{dev}=="8:21" , NAME="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdc5", SYSFS{dev}=="8:37" , NAME="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdd5", SYSFS{dev}=="8:53" , NAME="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sde5", SYSFS{dev}=="8:69" , NAME="asm-disk5", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdf5", SYSFS{dev}=="8:85" , NAME="asm-disk6", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdg5", SYSFS{dev}=="8:101", NAME="asm-disk7", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdh5", SYSFS{dev}=="8:117", NAME="asm-disk8", OWNER="oracle", GROUP="dba", MODE="0660"



   * test the UDEV rules


-- oel6 
udevadm test /block/sda/sda7
udevadm test /block/sdb/sdb5
udevadm test /block/sdc/sdc5
udevadm test /block/sdd/sdd5
udevadm test /block/sde/sde5
udevadm test /block/sdf/sdf5
udevadm test /block/sdg/sdg5
udevadm test /block/sdh/sdh5 


   * activate the rules


# #OL6
udevadm control --reload-rules

# #OL5 and OL6
/sbin/start_udev


[root@desktopserver dev]# ls -ltr /dev/asm*
brw-rw---- 1 oracle root 8,  53 Sep 12 17:05 /dev/asm-disk4
brw-rw---- 1 oracle root 8,  37 Sep 12 17:05 /dev/asm-disk3
brw-rw---- 1 oracle root 8,  69 Sep 12 17:05 /dev/asm-disk5
brw-rw---- 1 oracle root 8,  85 Sep 12 17:05 /dev/asm-disk6
brw-rw---- 1 oracle root 8, 101 Sep 12 17:05 /dev/asm-disk7
brw-rw---- 1 oracle root 8,  21 Sep 12 17:05 /dev/asm-disk2
brw-rw---- 1 oracle root 8, 117 Sep 12 17:05 /dev/asm-disk8
brw-rw---- 1 oracle root 8,   7 Sep 12 17:06 /dev/asm-disk1



[root@desktopserver ~]# ls /dev
adsp           disk     loop4     parport2  ram3     sda4  sdc4  sde6  sdh1        shm       tty16  tty30  tty45  tty6            usbdev1.2       vcs    vgoracle
asm-disk1      dm-0     loop5     parport3  ram4     sda5  sdc5  sdf   sdh2        snapshot  tty17  tty31  tty46  tty60           usbdev1.2_ep00  vcs2   vgreco
asm-disk2      dsp      loop6     port      ram5     sda6  sdc6  sdf1  sdh3        snd       tty18  tty32  tty47  tty61           usbdev1.2_ep81  vcs3   vgswap
asm-disk3      fd       loop7     ppp       ram6     sda7  sdd   sdf2  sdh4        stderr    tty19  tty33  tty48  tty62           usbdev1.3       vcs4   vgvbox
asm-disk4      full     MAKEDEV   ptmx      ram7     sda8  sdd1  sdf3  sdh5        stdin     tty2   tty34  tty49  tty63           usbdev2.1       vcs5   VolGroup00
asm-disk5      fuse     mapper    pts       ram8     sdb   sdd2  sdf4  sdh6        stdout    tty20  tty35  tty5   tty7            usbdev2.1_ep00  vcs6   X0R
asm-disk6      gpmctl   mcelog    ram       ram9     sdb1  sdd3  sdf5  sequencer   systty    tty21  tty36  tty50  tty8            usbdev2.1_ep81  vcs7   zero
asm-disk7      hpet     md0       ram0      ramdisk  sdb2  sdd4  sdf6  sequencer2  tty       tty22  tty37  tty51  tty9            usbdev2.2       vcs8
asm-disk8      initctl  mem       ram1      random   sdb3  sdd5  sdg   sg0         tty0      tty23  tty38  tty52  ttyS0           usbdev2.2_ep00  vcsa
audio          input    mixer     ram10     rawctl   sdb4  sdd6  sdg1  sg1         tty1      tty24  tty39  tty53  ttyS1           usbdev2.2_ep81  vcsa2
autofs         kmsg     net       ram11     root     sdb5  sde   sdg2  sg2         tty10     tty25  tty4   tty54  ttyS2           usbdev2.3       vcsa3
bus            log      null      ram12     rtc      sdb6  sde1  sdg3  sg3         tty11     tty26  tty40  tty55  ttyS3           usbdev2.3_ep00  vcsa4
console        loop0    nvram     ram13     sda      sdc   sde2  sdg4  sg4         tty12     tty27  tty41  tty56  urandom         usbdev2.3_ep02  vcsa5
core           loop1    oldmem    ram14     sda1     sdc1  sde3  sdg5  sg5         tty13     tty28  tty42  tty57  usbdev1.1       vboxdrv         vcsa6
cpu            loop2    parport0  ram15     sda2     sdc2  sde4  sdg6  sg6         tty14     tty29  tty43  tty58  usbdev1.1_ep00  vboxnetctl      vcsa7
device-mapper  loop3    parport1  ram2      sda3     sdc3  sde5  sdh   sg7         tty15     tty3   tty44  tty59  usbdev1.1_ep81  vboxusb         vcsa8



# the subpartitions no longer show under their original names (no output; udev renamed them to /dev/asm-disk*)
[root@desktopserver rules.d]# ls -l /dev/sd*7
[root@desktopserver rules.d]# ls -l /dev/sd*5 | grep -v sda




   * the asm_diskstring would be '/dev/asm-disk*'

}}}
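The `SYSFS{dev}` values in the rules above follow the standard Linux block-device numbering: whole SCSI disks share major 8, each disk gets a block of 16 minors (sda=8:0, sdb=8:16, ...), and partition P of disk D has minor D*16+P. A small sketch reproduces the exact "8:N" values used in the rules:

```shell
# Sketch: derive the "8:N" minor numbers used in the udev rules above.
# Disk index: sda=0, sdb=1, ... sdh=7; partition P of disk D is minor D*16+P.
disks="a b c d e f g h"
i=0
for d in $disks; do
  # sda carries the ASM slice on partition 7, the other disks on partition 5
  if [ "$d" = "a" ]; then part=7; else part=5; fi
  echo "sd${d}${part} -> 8:$(( i * 16 + part ))"
  i=$(( i + 1 ))
done
# -> sda7 -> 8:7, sdb5 -> 8:21, ... sdh5 -> 8:117
```

Handy when adding a disk: you can compute the expected major:minor before writing the rule, then confirm with `ls -l /dev/sdX*`.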

! OEL 5 (before the disk failure)
{{{

# SWAP
pvcreate /dev/sda3 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
vgcreate vgswap /dev/sda3 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
lvcreate -n lvswap -i 8 -I 4096 vgswap -l 4888
mkswap /dev/vgswap/lvswap
/dev/vgswap/lvswap      swap    swap    defaults        0 0     <-- add this in fstab
swapon -va
cat /proc/swaps

# Oracle
pvcreate /dev/sda5 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2 /dev/sdg2 /dev/sdh2
vgcreate vgoracle /dev/sda5 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2 /dev/sdg2 /dev/sdh2
lvcreate -n lvoracle -i 8 -I 4096 vgoracle -l 14664
mkfs.ext3 /dev/vgoracle/lvoracle

# VBOX
pvcreate /dev/sda6 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3
vgcreate vgvbox /dev/sda6 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3
lvcreate -n lvvbox -i 8 -I 4096 vgvbox -l 625008
mkfs.ext3 /dev/vgvbox/lvvbox

# ASM  <-- ASM disks 
pvcreate /dev/sda7 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5 /dev/sdf5 /dev/sdg5 /dev/sdh5


this is what I used for the  udev rules, see here for more details --> 
udev ASM - single path - https://www.evernote.com/shard/s48/sh/485425bc-a16f-4446-aebd-988342e3c30e/edc860d713dd4a66ff57cbc920b4a69c

$ cat 99-oracle-asmdevices.rules 
KERNEL=="sda7", SYSFS{dev}=="8:7"  , NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdb5", SYSFS{dev}=="8:21" , NAME="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdc5", SYSFS{dev}=="8:37" , NAME="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdd5", SYSFS{dev}=="8:53" , NAME="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sde5", SYSFS{dev}=="8:69" , NAME="asm-disk5", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdf5", SYSFS{dev}=="8:85" , NAME="asm-disk6", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdg5", SYSFS{dev}=="8:101", NAME="asm-disk7", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdh5", SYSFS{dev}=="8:117", NAME="asm-disk8", OWNER="oracle", GROUP="dba", MODE="0660"


# RECO
pvcreate /dev/sda8 /dev/sdb6 /dev/sdc6 /dev/sdd6 /dev/sde6 /dev/sdf6 /dev/sdg6 /dev/sdh6
vgcreate vgreco /dev/sda8 /dev/sdb6 /dev/sdc6 /dev/sdd6 /dev/sde6 /dev/sdf6 /dev/sdg6 /dev/sdh6
lvcreate -n lvreco -i 8 -I 4096 vgreco -l 596672



[root@localhost orion]# lvdisplay  | egrep "LV Name|Size"
  LV Name                /dev/vgreco/lvreco
  LV Size                2.28 TB
  LV Name                /dev/vgvbox/lvvbox
  LV Size                2.38 TB
  LV Name                /dev/vgoracle/lvoracle
  LV Size                57.28 GB
  LV Name                /dev/vgswap/lvswap
  LV Size                19.09 GB
  LV Name                /dev/VolGroup00/lvroot
  LV Size                20.00 GB


[root@localhost ~]# df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-lvroot
              ext3     20G   14G  4.7G  75% /
/dev/sda1     ext3    244M   24M  208M  11% /boot
tmpfs        tmpfs    7.8G     0  7.8G   0% /dev/shm
/dev/mapper/vgoracle-lvoracle
              ext3     57G  180M   54G   1% /u01
/dev/mapper/vgvbox-lvvbox
              ext3    2.4T  200M  2.3T   1% /vbox
/dev/mapper/vgreco-lvreco
              ext3    2.3T  201M  2.2T   1% /reco
}}}
Got the cool trick here, section 6.1 LVM Striping (RAID 0): http://book.soundonair.ru/hall2/ch06lev1sec1.html

''Distributed Logical Volume Trick''
{{{
NOTE: you have to increase the /etc/lvm directory

pvcreate --metadatasize 1000000K /dev/sdb1
pvcreate --metadatasize 1000000K /dev/sdc1
pvcreate --metadatasize 1000000K /dev/sdd1
pvcreate --metadatasize 1000000K /dev/sde1
vgcreate vgshortstroke /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
lvcreate -n shortstroke -l 1 vgshortstroke
vgdisplay

PV1=/dev/sdb1
PV2=/dev/sdc1
PV3=/dev/sdd1
PV4=/dev/sde1
SIZE=145512                     <-- from vgdisplay output
COUNT=1

while [ $COUNT -le $SIZE ]
do
lvextend -l $COUNT /dev/vgshortstroke/shortstroke $PV1
let COUNT=COUNT+1
lvextend -l $COUNT /dev/vgshortstroke/shortstroke $PV2
let COUNT=COUNT+1
lvextend -l $COUNT /dev/vgshortstroke/shortstroke $PV3
let COUNT=COUNT+1
lvextend -l $COUNT /dev/vgshortstroke/shortstroke $PV4
let COUNT=COUNT+1
done

lvdisplay -vm /dev/vgshortstroke/shortstroke | less
}}}
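The loop above hands out one extent per lvextend, cycling through PV1..PV4, so (ignoring where lvcreate placed the initial extent) logical extent N lands on PV (N mod 4)+1. A quick sketch of the resulting layout:

```shell
# Sketch: which PV does logical extent N land on with the 4-way
# round-robin loop above? One extent per lvextend, cycling PV1..PV4,
# so it is simply N modulo 4 (initial lvcreate extent ignored).
for n in 0 1 2 3 4 5 6 7; do
  echo "extent $n -> PV$(( n % 4 + 1 ))"
done
# -> extents 0,4 on PV1; 1,5 on PV2; 2,6 on PV3; 3,7 on PV4
```

This is what makes the volume "distributed": consecutive extents alternate across all four spindles, which `lvdisplay -vm` confirms.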


''LVM kilobyte-striping''
{{{
"lvcreate -i 3 -I 8 -L 100M vg00" tries to create a striped logical volume with 3 stripes, a stripesize of 8KB and a  size
       of 100MB in the volume group named vg00. The logical volume name will be chosen by lvcreate.
}}}
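With kilobyte-striping, which stripe (and thus which PV) serves a given byte offset is just integer arithmetic: (offset / stripesize) mod stripes. A sketch with the man-page example's 3 stripes of 8KB:

```shell
# Sketch: with 3 stripes and an 8KB stripe size (lvcreate -i 3 -I 8),
# the stripe that serves a byte offset is (offset / 8192) mod 3.
stripes=3
ssize=8192
for off in 0 8192 16384 24576 100000; do
  echo "offset $off -> stripe $(( (off / ssize) % stripes ))"
done
```

Consecutive 8KB chunks rotate across the three PVs, which is why small stripe sizes spread even modest sequential I/O over all spindles.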

started 3:26PM
ended 3:48PM, 11GB
a rate of 171MB/minute, whoa this is way too slow..
but this volume has the same performance as 4 raw short-stroked disks (partitions)  :)

Orion run here 
{{{
ORION VERSION 11.1.0.7.0

Commandline:
-run simple -testname mytest -num_disks 4 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8
Total Data Points: 29

Name: /dev/vgshortstroke/shortstroke	Size: 13514047488
1 FILEs found.

Maximum Large MBPS=232.00 @ Small=0 and Large=8
Maximum Small IOPS=942 @ Small=20 and Large=0
Minimum Small Latency=6.61 @ Small=1 and Large=0
}}}

Other experiments ongoing.. 

Here's the HD used
Barracuda 7200 SATA 3Gb/s (375MB/s) interface 1TB Hard Drive 
http://www.seagate.com/ww/v/index.jsp?vgnextoid=20b92d0ca8dce110VgnVCM100000f5ee0a0aRCRD#tTabContentOverview
the regular kernel gives lower MB/s on sequential reads/writes
http://www.evernote.com/shard/s48/sh/36636b46-995a-4812-bd07-e88fa0dfd191/d36f37565243025e7b5792f496dc5a37



! 2020 
http://sethmiller.org/it/oracleasmlib-not-necessary/
https://titanwolf.org/Network/Articles/Article?AID=76740ebe-e81a-4f0f-8c23-ab482de97ba9#gsc.tab=0
https://community.oracle.com/mosc/discussion/2937970/asmlib-uek-kernel
https://oracle-base.com/blog/2012/03/16/oracle-linux-5-8-and-udev-issues/
https://blogs.oracle.com/wim/asmlib
Oracleasm Kernel Driver for the 64-bit (x86_64) Red Hat Compatible Kernel for Oracle Linux 6 (Doc ID 1578579.1)
<<<
ASMLib is a support library for the Automatic Storage Management feature used in Oracle Databases running on the Oracle Linux Unbreakable Enterprise Kernel (UEK) and RedHat Compatible Kernel (RHCK). Oracle ASMLib is included in the UEK kernel, but must be installed as a separate package for RHCK. This document provides a set of steps on how to get the  oracleasm kernel driver for Oracle Linux 6 RHCK and also how to validate the driver was provided by Oracle and not another vendor. 
<<<


!! to asmlib or not asmlib
<<<
In terms of performance, I think the kernel would matter vs the asmlib or udev

On my previous benchmark (this was on my R&D server way back OEL5). When I used the UEK kernel it gave me higher MB/s on sequential reads/writes for both LVM and ASM

see the numbers here
http://www.evernote.com/shard/s48/sh/36636b46-995a-4812-bd07-e88fa0dfd191/d36f37565243025e7b5792f496dc5a37

<<<

<<<
Running RedHat or Oracle Linux?  If it's RedHat, I'd be 100% udev.  Doesn't require a separate package, and easier to set up via config files.  Even if it's Oracle Linux, I'd still go the udev route for those same reasons.

I don't think there's a difference in performance by using ASMlib, either good or bad.
<<<

<<<
This has been discussed a lot of times before.

To shortcut to my preference: udev.


When ASMLib was still current (AFD, asm filter driver is the new, current version), and oracle was actively supporting it, I did ask wim coeckaerts what the hell the actual performance features were, because I couldn’t see it, nor measure it. It turns out it’s pooling file descriptors (you cannot get a huge performance boost from that).


There are two other advantages of asmlib:

    It’s a kernel module which scans the headers of the block devices that are visible to the kernel, and provides the ASM devices as asmlib devices, based on the information in the device header, not requiring any unique device data to make it an ASM device. In one situation, when using oracle cloud V1 (OCI for me is still the oracle c interface 😊), the block devices did not provide any unique information, and thus UDEV could not be used. The linux kernel names devices as they become visible to the kernel, which can differ between reboots, so you should never use the /dev/sdb naming (for SCSI devices, but equally for other native kernel namings).
    ASMLib scans IO going to asmlib devices, and will reject non-oracle IO.

 
However, the inner working of asmlib is absolutely and totally undocumented. Also, the way IO is done changes in the oracle engine, you will see other system calls (yes, really, despite how surreal that sounds). This means that if it all of a sudden doesn’t work, there is literally nobody that can help you. Or alternatively described: you are left to oracle support. The only human being who wrote something about the inner working is James Morle.


Udev isn’t that well documented, but there are several blogs (including mine) that describe how to configure and troubleshoot it. So that means it’s not a black box, it is possible to investigate issues. I don’t like that you have to change the udev scripts between OL6 and OL7 (OL8 seems not requiring a change), but still, once you know how to troubleshoot it, it’s doable.
<<<











..
''HOWTO'' http://www.math.umbc.edu/~rouben/beamer/

http://wifo.eecs.berkeley.edu/wiki/doku.php/latex:latex_resources
http://superuser.com/questions/221624/latex-vs-powerpoint-for-presentations  
https://bitbucket.org/rivanvx/beamer/overview
http://readingsml.blogspot.com/2009/11/keynote-vs-powerpoint-vs-beamer.html
http://www.johndcook.com/blog/2008/07/24/latex-and-powerpoint-presentations/
http://www.johndcook.com/blog/2008/07/24/including-images-in-latex-files/
http://sourceforge.net/projects/latex-beamer/
http://sourceforge.net/projects/latex-beamer/forums/forum/319190
http://www.johndcook.com/blog/2008/07/24/latex-and-powerpoint-presentations/




/***
|Name:|LessBackupsPlugin|
|Description:|Intelligently limit the number of backup files you create|
|Version:|3.0.1 ($Rev: 2320 $)|
|Date:|$Date: 2007-06-18 22:37:46 +1000 (Mon, 18 Jun 2007) $|
|Source:|http://mptw.tiddlyspot.com/#LessBackupsPlugin|
|Author:|Simon Baird|
|Email:|simon.baird@gmail.com|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
!!Description
You end up with just one backup per year, per month, per weekday, per hour, minute, and second, so the total number won't exceed about 200 or so. It can be reduced by commenting out the seconds/minutes/hours lines in the modes array.
!!Notes
Works in IE and Firefox only.  Algorithm by Daniel Baird. IE specific code by Saq Imtiaz.
***/
//{{{

var MINS  = 60 * 1000;
var HOURS = 60 * MINS;
var DAYS  = 24 * HOURS;

if (!config.lessBackups) {
	config.lessBackups = {
		// comment out the ones you don't want or set config.lessBackups.modes in your 'tweaks' plugin
		modes: [
			["YYYY",  365*DAYS], // one per year for ever
			["MMM",   31*DAYS],  // one per month
			["ddd",   7*DAYS],   // one per weekday
			//["d0DD",  1*DAYS],   // one per day of month
			["h0hh",  24*HOURS], // one per hour
			["m0mm",  1*HOURS],  // one per minute
			["s0ss",  1*MINS],   // one per second
			["latest",0]         // always keep last version. (leave this).
		]
	};
}

window.getSpecialBackupPath = function(backupPath) {

	var now = new Date();

	var modes = config.lessBackups.modes;

	for (var i=0;i<modes.length;i++) {

		// the filename we will try
		var specialBackupPath = backupPath.replace(/(\.)([0-9]+\.[0-9]+)(\.html)$/,
				'$1'+now.formatString(modes[i][0]).toLowerCase()+'$3')

		// open the file
		try {
			if (config.browser.isIE) {
				var fsobject = new ActiveXObject("Scripting.FileSystemObject")
				var fileExists  = fsobject.FileExists(specialBackupPath);
				if (fileExists) {
					var fileObject = fsobject.GetFile(specialBackupPath);
					var modDate = new Date(fileObject.DateLastModified).valueOf();
				}
			}
			else {
				netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
				var file = Components.classes["@mozilla.org/file/local;1"].createInstance(Components.interfaces.nsILocalFile);
				file.initWithPath(specialBackupPath);
				var fileExists = file.exists();
				if (fileExists) {
					var modDate = file.lastModifiedTime;
				}
			}
		}
		catch(e) {
			// give up
			return backupPath;
		}

		// expiry is used to tell if it's an 'old' one. Eg, if the month is June and there is a
		// June file on disk that's more than a month old then it must be stale so overwrite
		// note that "latest" should be always written because the expiration period is zero (see above)
		var expiry = new Date(modDate + modes[i][1]);
		if (!fileExists || now > expiry)
			return specialBackupPath;
	}
}

// hijack the core function
window.getBackupPath_mptw_orig = window.getBackupPath;
window.getBackupPath = function(localPath) {
	return getSpecialBackupPath(getBackupPath_mptw_orig(localPath));
}

//}}}
http://orainternals.wordpress.com/2009/06/02/library-cache-lock-and-library-cache-pin-waits/
http://dioncho.wordpress.com/2009/05/15/releasing-library-cache-pin/
http://oracle-study-notes.blogspot.com/2009/05/resolving-library-cache-lock-issue.html
Library Cache Pin/Lock Pile Up hangs the application [ID 287059.1]
HOW TO FIND THE SESSION HOLDING A LIBRARY CACHE LOCK [ID 122793.1]
Database Hangs with Library Cache Lock and Pin Waits [ID 338367.1]
How to Find the Blocker of the 'library cache pin' in a RAC environment? [ID 780514.1]
How to analyze ORA-04021 or ORA-4020 errors? [ID 169139.1]
WAITEVENT: "library cache pin" Reference Note [ID 34579.1]


http://oracleprof.blogspot.com/2010/07/process-hung-on-library-cache-lock.html
http://logicalread.solarwinds.com/oracle-library-cache-pin-wait-event-mc01/#.VtnJcvkrLwc
https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-library-cache#TOC-latch:-library-cache-lock-


WAITEVENT: "library cache lock" Reference Note (Doc ID 34578.1)
SRDC - How to Collect Standard Information for an Issue Where 'library cache lock' Waits Are the Primary Waiters on the Database (Doc ID 1904807.1)
Truncate - Causes Invalidations in the LIBRARY CACHE (Doc ID 123214.1)
'library cache lock' Waits: Causes and Solutions (Doc ID 1952395.1)
Troubleshooting Library Cache: Lock, Pin and Load Lock (Doc ID 444560.1)
How to Find which Session is Holding a Particular Library Cache Lock (Doc ID 122793.1)





http://www.oraclemusings.com/?p=103

https://oracle-base.com/blog/2013/12/11/oracle-license-audit/
As a DBA you have to know Oracle's licensing schemes..

http://www.orafaq.com/wiki/Oracle_Licensing

http://download.oracle.com/docs/cd/E11882_01/license.112/e10594/toc.htm
http://www.oracle.com/corporate/pricing/sig.html
https://docs.google.com/viewer?url=http://www.oracle.com/corporate/pricing/application_licensing_table.pdf
http://www.oracle.com/corporate/pricing/askrightquestions.html

http://www.liferay.com/home
http://blog.scottlowe.org/2012/10/26/link-aggregation-and-vlan-trunking-with-brocade-fastiron-switches/

http://www.linuxjournal.com/content/containers%E2%80%94not-virtual-machines%E2%80%94are-future-cloud?page=0,1&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A%20linuxjournalcom%20%28Linux%20Journal%20-%20The%20Original%20Magazine%20of%20the%20Linux%20Community%29

* containers are built on namespaces and cgroups
* namespaces provide isolation similar to hypervisors
* cgroups provide resource limiting and accounting
* these tools can be mixed to create hybrids
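The namespace side of the bullets above is easy to poke at without any container runtime (a sketch, Linux-only; it just inspects the standard /proc namespace links of the current shell):

```shell
# Sketch: every Linux process carries a set of namespace IDs under
# /proc/<pid>/ns; two processes in the same namespace show the same
# inode value. Printing our own links needs no privileges.
for ns in uts pid mnt net; do
  printf '%s: %s\n' "$ns" \
    "$(readlink "/proc/$$/ns/$ns" 2>/dev/null || echo unavailable)"
done
```

Comparing these links between a host shell and a container shell is a quick way to see which namespaces the container runtime actually unshared.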


http://lxr.free-electrons.com/source/mm/compaction.c?v=2.6.35
http://ltp.sourceforge.net/tooltable.php

{{{
Linux Test Tools

The purpose of this Linux Test Tools Table is to provide the open-source community with a comprehensive list of tools commonly used for testing the various components of Linux.
My hope is that the community will embrace and contribute to this list making it a valuable addition to the Linux Test Project.

Please feel free to send additions, updates or suggestions to Jeff Martin. Last update:07/12/06

Cluster
HINT 	allows fair comparisons over extreme variations in computer architecture, absolute performance, storage capacity, and precision. 	It's listed as a Past Project with a link to http://hint.byu.edu but I have not been able to find where it is being maintained. If you know, please drop me a note. 
Code Coverage Analysis
gcov 	Code analysis tool for profiling code and determining: 1) how often each line of code executes, 2) what lines of code are actually executed, 3) how much computing time each section of code uses 	 
lcov 	LCOV is an extension of GCOV, a GNU tool which provides information about what parts of a program are actually executed (i.e. "covered") while running a particular test case. The extension provides HTML output and support for large projects. 	 
Database
DOTS 	Database Opensource Test Suite 	 
dbgrinder 	perl script to inflict stress on a mysql server 	 
OSDL Database Testsuite 	OSDL Database Testsuite 	 
Debug
Dynamic Probes 	Dynamic Probes is a generic and pervasive debugging facility. 	 
Kernel Debug (KDB) 	KDB is an interactive debugger built into the Linux kernel. It allows the user to examine kernel memory, disassembled code and registers. 	 
Linux Kernel Crash Dump 	LKCD project is designed to help detect, save and examine system crashes and crash info. 	 
Linux Trace Toolkit (LTT) 	The Linux Trace Toolkit is a fully-featured tracing system for the Linux kernel. 	 
Defect Tracking
Bugzilla 	allows individuals or groups of developers to keep track of outstanding bugs in their product effectively 	 
Desktop/GUI Libraries
Android 	open source testing tool for GUI programs 	 
ldtp	GNU/Linux Desktop Testing Project	 
Event Logging
included tests 	Various tests are included in the tarball 	 
Filesystems
Bonnie 	Bonnie++ is test suite, which performs several hard drive/ filesystem tests. 	 
dbench 	Filesystem benchmark that generates good filesystem load 	 
fs_inode 	Part of the LTP: This test creates several subdirectories and files off of two parent directories and removes directories and files as part of the test. 	 
fs_maim 	Part of the LTP: a set of scripts to test and stress filesystem and storage management utilities 	 
IOZone 	Filesystem benchmark tool (read, write, re-read, re-write, read backwards, read strided, fread, fwrite, random read, pread, aio_read, aio_write) 	 
lftest 	Part of the LTP: lftest is a tool/test designed to create large files and lseek from the beginning of the file to the end of the file after each block write. This test verifies large file support and can be used to generate large files for other filesystem tests. 	Files up to 2Tb have been created using this tool. This test is VERY picky about glibc version. 
LTP 	The Linux Test Project is a collection of tools for testing the Linux kernel and related features. 	 
PostMark 	Filesystem benchmark that simulates load generated by enterprise applications such as email, news and web-based commerce. 	 
stress 	puts the system under a specified amount of load 	 
mongo 	set of the programs to test linux filesystems for performance and functionality 	 
fsx 	File system exerciser from Apple. 	The test is most effective if you let it run for a minute or two, so that it overlaps the periodic sync that most Unix systems do. 
xdd	Storage I/O Performance Characterization tool that runs on most UNIX-like systems and Windows.	Has been around since 1992 and is in use at various government labs.
Harnesses
Cerberus 	The Cerberus Test Control System(CTCS) is a free (freedom) test suite for use by developers and others to test hardware. It generates good filesystem stress in the process. 	 
STAF 	The Software Testing Automation Framework (STAF) is an open source framework designed to improve the level of reuse and automation in test cases and test environments.  	 
I/O & Storage
tiobench 	Portable, robust, fully-threaded I/O benchmark program 	 
xdd	Storage I/O Performance Characterization tool that runs on most UNIX-like systems and Windows.	Has been around since 1992 and is in use at various government labs.
Kernel System Calls
crashme 	a tool for testing the robustness of an operating environment using a technique of "Random Input" response analysis 	 
LTP 	The Linux Test Project is a collection of tools for testing the Linux kernel and related features. 	 
Network
Connectathon NFS Testsuite 	This testsuite tests the NFS Protocol 	 
ISIC 	ISIC is a suite of utilities to exercise the stability of an IP Stack and its component stacks 	 
LTP 	The Linux Test Project has a collection of tools for testing the network components of the Linux kernel. 	 
netperf 	Netperf is a benchmark that can be used to measure the performance of many different types of networking. 	 
NetPIPE 	Variable time benchmark, i.e., it measures network performance using variable-sized communication transfers 	 
TAHI 	Provides interoperability and conformance tests for IPv6 	 
VolanoMark 	A java chatroom benchmark/stress 	 
UNH IPv6 Tests 	there are several IPv6 tests on this site 	 
Iperf 	for measuring TCP and UDP bandwidth performance 	 
Network Security
Kerberos Test suite 	These tests are for testing Kerberos clients (kinit, klist and kdestroy) and Kerberized applications, ftp and telnet. 	 
Other
cpuburn 	This program was designed by Robert Redelmeier to heavily load CPU chips. 	 
Performance
contest 	test system responsiveness by running kernel compilation under a number of different load conditions 	 
glibench/clibench 	benchmarking tool to check your computer CPU and hard disk performance 	 
lmbench 	Suite of simple, portable benchmarks 	 
AIM Benchmark 	Performance benchmark 	 
unixbench 	Performance benchmark based on the early BYTE UNIX Benchmarks 	"retired" since about 1997, but still used by some testers 
Scalability
dbench 	Used for dcache scalability testing 	 
Chat 	Used for file_struct scalability testing 	 
httperf 	Used for dcache scalability testing 	 
Scheduler
LTP 	The Linux Test Project is a collection of tools for testing the Linux kernel and related features. 	sched_stress and process_stress 
VolanoMark 	A java chatroom benchmark/stress 	VolanoMark has been used to stress the scheduler. 
SCSI Hardening
Bonnie 	Bonnie is a test suite that performs several hard drive and filesystem tests. 	 
LTP 	The Linux Test Project is a collection of tools for testing the Linux kernel and related features. 	disktest 
dt 	dt (Data Test) is a generic data test program used to verify proper operation of peripherals, file systems, device drivers, or any data stream supported by the operating system 	 
Security
Nessus 	remote security scanner 	 
Standards
LSB 	Test suites used for LSB compliance testing 	 
Stream Controlled Transmission Protocol
LTP 	The Linux Test Project is a collection of tools for testing the Linux kernel and related features. 	 
System Management
sblim 	The "SBLIM Reference Implementation (SRI)" is a component of the SBLIM project. Its purposes are (among others): (1) easily set up, run and test systems management scenarios based on CIM/CIMOM technology (2) test CIM Providers (on local and/or remote Linux machines) 	 
Threads
LTP 	The Linux Test Project is a collection of tools for testing the Linux kernel and related features. 	 
VSTHlite 	Tests for compliance with IEEE POSIX 1003.1c extensions (pthreads). 	 
USB
usbstress 	Sent to us by the folks at Linux-usb.org 	 
Version Control
cvs 	the dominant open-source network-transparent version control system 	 
BitKeeper 	BK/Pro is a scalable configuration management system, supporting globally distributed development, disconnected operation, compressed repositories, change sets, and repositories as branches. 	Read the licensing info 
Subversion 	 	 
VMM
vmregress 	regression, testing and benchmark tool 	 
LTP 	The Linux Test Project is a collection of tools for testing the Linux kernel and related features. 	 
memtest86 	A thorough real-mode memory tester 	 
stress 	puts the system under a specified amount of load 	 
memtest86+ 	fork / enhanced version of the memtest86 	 
memtester 	Utility to test for faulty memory subsystem 	 
Web Server
Hammerhead 	Hammerhead is a web server stress tool that can simulate multiple connections and users. 	 
httperf 	httperf is a popular web server benchmark tool for measuring web server performance 	 
siege 	Siege is an http regression testing and benchmarking utility. 	 
PagePoker 	for load testing and benchmarking web servers 
}}}
Just make use of this tool and download an Ubuntu live DVD:
http://www.pendrivelinux.com/universal-usb-installer-easy-as-1-2-3/
Centrify - Linux AD authentication
http://goo.gl/R1hRL
{{{

.bashprofile	
# .bash_profile	

# Get the aliases and functions	
if [ -f ~/.bashrc ]; then	
	. ~/.bashrc
fi	

# User specific environment and startup programs	

PATH=$PATH:$HOME/bin	

export PATH	
unset USERNAME	


### PARAMETERS FOR ORACLE DATABASE 10G
umask 022

export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export ORACLE_BASE=/u01/app/oracle
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_SID=orcl

PATH=$ORACLE_HOME/bin:$PATH

}}}
{{{

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
export LD_ASSUME_KERNEL=2.4.1
# Oracle Environment
export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=/u01/oracle/product/9.2.0
export ORACLE_SID=PETDB1
export ORACLE_TERM=xterm
export TNS_ADMIN=$ORACLE_HOME/network/admin

# Optional Oracle Environment
export NLS_LANG=AMERICAN
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
export LD_LIBRARY_PATH

# Set shell search path
PATH=$PATH:/sbin:$ORACLE_HOME/bin

# Display Environment
DISPLAY=127.0.0.1:0.0
DISPLAY=192.9.200.7:0.0
export DISPLAY

# Oracle CLASSPATH Environment
# CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
# CLASSPATH=$CLASSPATH:$ORACLE_HOME/network/jlib
# export CLASSPATH

export PATH
unset USERNAME

}}}
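A quick sanity check of the profile above can be scripted; this is a minimal sketch, and the default ORACLE_HOME/ORACLE_SID values simply mirror the profile above (assumptions, adjust to your install):

```shell
# Sketch: sanity-check the Oracle environment that .bash_profile is expected
# to export. The fallback values below mirror the profile above and are
# assumptions for illustration.
ORACLE_HOME=${ORACLE_HOME:-/u01/oracle/product/9.2.0}
ORACLE_SID=${ORACLE_SID:-PETDB1}

check_oracle_env() {
  [ -n "$ORACLE_HOME" ] || { echo "ORACLE_HOME not set"; return 1; }
  [ -n "$ORACLE_SID" ]  || { echo "ORACLE_SID not set";  return 1; }
  # POSIX-safe substring test for PATH membership
  case ":$PATH:" in
    *":$ORACLE_HOME/bin:"*) echo "ok: $ORACLE_HOME/bin is on PATH" ;;
    *) echo "warn: $ORACLE_HOME/bin not on PATH" ;;
  esac
}

check_oracle_env
```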
http://www.techrepublic.com/blog/10things/10-outstanding-linux-backup-utilities/895


{{{

http://ubuntu-rescue-remix.org/node/6
http://ubuntuforums.org/showthread.php?s=bb3a288a58fdd087cca4367677b2544a&t=417761&page=2
http://www.cgsecurity.org/wiki/TestDisk
http://www.cgsecurity.org/wiki/PhotoRec
http://www.linux-ntfs.org/doku.php?id=ntfs-en
http://www.linux-ntfs.org/doku.php?id=howto:hexedityourway
http://www.student.dtu.dk/~s042078/magicrescue/manpage.html
http://www.cgsecurity.org/wiki/Intel_Partition_Table
https://answers.launchpad.net/ubuntu/+question/2178
http://www.cgsecurity.org/wiki/HowToHelp
http://www.cgsecurity.org/wiki/After_Using_PhotoRec

}}}



http://dolavim.us/blog/archives/2007/11/linux-kernel-lo.html

''you can't have lockstat on rhel5''
http://dag.wieers.com/blog/rpm-packaging-news-lockstat-and-httpreplicator
https://forums.oracle.com/forums/thread.jspa?messageID=4535884
http://dolavim.us/blog/2007/11/06/linux-kernel-lock-profiling-with-lockstat/
Oracle® Database on AIX®, HP-UX®, Linux®, Mac OS® X, Solaris®, Tru64 Unix® Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.1)
 	Doc ID:	Note:169706.1

Linux Quick Reference
  	Doc ID: 	Note:341782.1

Things to Know About Linux
  	Doc ID: 	Note:265262.1

Server Architecture on UNIX and NT
  	Doc ID: 	Note:48681.1

Unix Commands on Different OS's 
  Doc ID:  293561.1 

Oracle's 9i Platform Strategy Advisory
  	Doc ID: 	Note:149914.1


-- INSTALLATION

Defining a "default RPMs" installation of the RHEL OS
  	Doc ID: 	Note:376183.1


-- SUPPORT

Support of Linux and Oracle Products on Linux 
  Doc ID:  Note:266043.1 

Linux Kernel Support - Policy on Tainted Kernels 
  Doc ID:  Note:284823.1 

Unbreakable Linux Support Policies For Virtualization And Emulation 
  Doc ID:  Note:417770.1 
  	


-- MIGRATION FROM 32 to 64

How to convert a 32-bit database to 64-bit database on Linux?
  	Doc ID: 	Note:341880.1 	

How to Determine Whether the OS is 32-bit or 64-bit
  	Doc ID: 	421453.1

How to Determine a Linux OS and the OS Association of Staged and Installed Oracle Products
  	Doc ID: 	752155.1



-- MIGRATION/UPGRADE OF OS VERSION

Preserving Your Oracle Database 10g Environment
when Upgrading from Red Hat Enterprise Linux 2.1
AS to Red Hat Enterprise Linux 3 
	-- located as Oracle on Linux directory

Is Relinking Of Oracle (Relink All) Required After Patching OS? 
  Doc ID:  395605.1 

When is a relink required after an AIX OS upgrade --- YES
  Doc ID:  726811.1 

Upgrading RHEL 3 To RHEL 4 With Oracle Database 
  Doc ID:  416005.1 

How to Relink Oracle Database Software on UNIX 
  Doc ID:  131321.1 







-- ITANIUM SERVER ISSUE

Messages In Console: Oracle(9581): Floating-Point Assist Fault At Ip		-- for itanium servers
  	Doc ID: 	Note:279456.1

What's up with those "floating-point assist fault" messages? - Linux on Itanium®
http://h21007.www2.hp.com/portal/site/dspp/menuitem.863c3e4cbcdc3f3515b49c108973a801/?ciid=62080055abe021100055abe02110275d6e10RCRD

Bug No. 	3777000	
FLOATING-POINT ASSIST FAULT(FPSWA) CAUSES POOR PERFORMANCE 

Bug No. 	3796598
KERNEL: ORACLE(570): FLOATING-POINT ASSIST FAULT AT IP CAUSES CONNECTION PROBLEM 

Bug No. 	3437795
RMAN BACKUP HANGS INSTANCE IN RAC, 'DATAFILECOPY HEADER VALIDATION FAILURE' 

Oracle RDBMS and RedHat Linux AS on a Box with AMD Processor
  	Doc ID: 	Note:227904.1


What about this floating-point assist fault?
--------------------------------------------
When a floating-point computation produces a result too small for normalized representation, the result is called a "denormal". Denormals can be thought of as really tiny numbers (almost zero). The IEEE 754 standard handles these cases, but the Floating-Point Unit does not always do so in hardware. There are two ways to deal with this problem:

- Silently ignore it (for example, by flushing the number to zero)
- Inform the user that the result is a denormal and let them decide what to do with it (i.e., the user's software assists the FPU).

The Intel Itanium does not fully support IEEE denormals and requires software assistance to handle them. Without further information, the ia64 GNU/Linux kernel raises a fault when denormals are computed; this shows up as the "floating-point software assist" (FPSWA) fault in the kernel messages. It is the programmer's task to design the program to prevent such cases. <===== (this sentence implies Oracle code)
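If you suspect FPSWA noise on an ia64 box, a small sketch like this can count the messages in a captured kernel log (the message text matches the console examples above; the log path in the usage note is illustrative):

```shell
# Sketch: count "floating-point assist fault" messages in a captured kernel
# log to gauge how noisy a workload is. The message format is taken from the
# console examples above; the log path is an assumption.
fpswa_count() {
  # $1 = kernel log file (e.g. /var/log/messages, or saved `dmesg` output)
  grep -c -i 'floating-point assist fault' "$1"
}

# usage:
#   dmesg > /tmp/kern.log && fpswa_count /tmp/kern.log
```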



-- SERVICES

Linux OS Service 'xendomains'
  	Doc ID: 	Note:558719.1



-- OCFS1

Installing and setting up ocfs on Linux - Basic Guide
  	Doc ID: 	220178.1

Step-By-Step Upgrade of Oracle Cluster File System (OCFS v1) on Linux
 	Doc ID:	Note:251578.1  	

Linux OCFS - Best Practices
  	Doc ID: 	237997.1

Automatic Storage Management (ASM) and Oracle Cluster File System (OCFS) in Oracle10g
  	Doc ID: 	255359.1

OCFS mount point does not mount for the first time
  	Doc ID: 	302206.1

Update on OCFS for Linux
  	Doc ID: 	252331.1



-- OCFS1 DEBUG

OCFS Most Common Defects / Bugs
  	Doc ID: 	430451.1



-- OCFS1 ON WINDOWS

Raw Devices and Cluster Filesystems With Real Application Clusters	<-- windows 2k3 
  	Doc ID: 	183408.1

Installing CRS on Windows 2008 Fails When Checking OCFS and Orafence Driver's Signatures
  	Doc ID: 	762193.1

OCFS for EM64T SMP not available on OSS website.
  	Doc ID: 	315734.1

WINDOWS 64-BIT: OCFS Drives Formatted Under 10.2.0.1/10.2.0.2/10.2.0.3 May Need Reformatting
  	Doc ID: 	749006.1

How to Add Another OCFS Drive for RAC on Windows
  	Doc ID: 	229060.1

How to Change a Drive Letter Associated with an OCFS Drive on Windows
  	Doc ID: 	338852.1

How to Use More Than 26 Drives With OCFS on Windows
  	Doc ID: 	357698.1

WIN RAC: How to Remove a Failed OCFS Install
  	Doc ID: 	230290.1

OCFS: Blue Screen After A Reboot of a Node
  	Doc ID: 	372986.1

WIN: Does Oracle Cluster File System (OCFS) Support Access from Mapped Drives?
  	Doc ID: 	225550.1

OCFS Most Common Defects / Bugs
  	Doc ID: 	430451.1

Cannot Resize Datafile on OCFS Even If There is Sufficient Free Space
  	Doc ID: 	338080.1

can not delete the file physically From Ocfs after dropping tablespace
  	Doc ID: 	284775.1

Where Can I Find Ocfs For Windows Documentation
  	Doc ID: 	269855.1

How Do We Find Out The Version Of Ocfs That'S Installed?
  	Doc ID: 	302503.1

DBCA Failure on OCFS
  	Doc ID: 	234700.1

New Partitions in Windows 2003 RAC Environments Not Visible on Remote Nodes
  	Doc ID: 	454607.1



-- OCFS1 ADD NODE

How to add a new node to the existing OCFS setup on Windows
  	Doc ID: 	316410.1



-- OCFS2

OCFS2: Considerations and requirements for working with BCV/cloned volumes
  	Doc ID: 	Note:567604.1

Linux OCFS2 - Best Practices
 	Doc ID:	Note:603080.1

OCFS2: Supportability as a general purpose filesystem
 	Doc ID:	Note:421640.1

Common reasons for OCFS2 Kernel Panic or Reboot Issues
 	Doc ID:	Note:434255.1

OCFS2 User's Guide for Release 1.4
 	Doc ID:	Note:736223.1

OCFS2 Version 1.4 New Features
 	Doc ID:	Note:736230.1

OCFS2 - FREQUENTLY ASKED QUESTIONS
 	Doc ID:	Note:391771.1

A Reference Guide for Upgrading OCFS2
 	Doc ID:	Note:603246.1

Supportability of OCFS2 on certified and non-certified Linux distributions
 	Doc ID:	Note:566819.1

OCFS2: Supportability as a general purpose filesystem
 	Doc ID:	Note:421640.1

How to resize an OCFS2 filesystem
 	Doc ID:	Note:445082.1

How to find the current OCFS or OCFS2 version for Linux
 	Doc ID:	Note:238278.1

Problem Using Labels On OCFS2
  	Doc ID: 	579153.1



-- OCFS/2 BLOCK SIZE

How to Query the blocksize of OCFS or OCFS2 Filesystem
  	Doc ID: 	469404.1


-- OCFS2 SAN

OCFS2 and SAN Interactions
  	Doc ID: 	603038.1

Host-Based Mirroring and OCFS2
  	Doc ID: 	413195.1



-- OCFS2 SETUP, NETWORK, TIMEOUT

OCFS2 Fencing, Network, and Disk Heartbeat Timeout Configuration
  	Doc ID: 	457423.1

Some Symptoms of OCFS2 Not Functioning when SELinux is Enabled
  	Doc ID: 	432740.1

Using Bonded Network Device Can Cause OCFS2 to Detect Network Outage
  	Doc ID: 	423183.1

Common reasons for OCFS2 o2net Idle Timeout
  	Doc ID: 	734085.1

How to Use "tcpdump" to Log OCFS2 Interconnect (o2net) Messages
  	Doc ID: 	789010.1

Heartbeat/Voting/Quorum Related Timeout Configuration for Linux, OCFS2, RAC Stack to avoid unnecessary node fencing, panic and reboot
  	Doc ID: 	395878.1

Common reasons for OCFS2 Kernel Panic or Reboot Issues
  	Doc ID: 	434255.1

http://oss.oracle.com/pipermail/ocfs2-users/2007-January/001159.html
http://oss.oracle.com/projects/ocfs2/dist/documentation/ocfs2_faq.html#TIMEOUT
http://www.mail-archive.com/ocfs2-users@oss.oracle.com/msg00426.html	<-- using tcpdump
http://www.mail-archive.com/ocfs2-users@oss.oracle.com/msg00409.html	


-- OCFS2 DEBUG

Script to gather OCFS2 diagnostic information
  	Doc ID: 	391292.1

OCFS2: df and du commands display different results
  	Doc ID: 	558824.1

OCFS2 Performance: Measurement, Diagnosis and Tuning
  	Doc ID: 	727866.1

Troubleshooting a multi-node OCFS2 installation
  	Doc ID: 	806645.1

Trouble Mounting OCFS File System after changing Network Card
  	Doc ID: 	298889.1







-- X

Enterprise Linux: Common GUI / X-Window Issues
  	Doc ID: 	Note:418963.1

How to configure, manage and secure user access to the Linux X server
  	Doc ID: 	Note:459029.1 	


-- KERNEL

Linux: Tainted Kernels, Definitions, Checking and Diagnosing
  	Doc ID: 	Note:395353.1




-- ORACLE VALIDATED

Linux OS Installation with Reduced Set of Packages for Running Oracle Database Server
  	Doc ID: 	Note:728346.1
  	
Linux OS Installation with Reduced Set of Packages for Running Oracle Database Server without ULN/RHN
 	Doc ID:	Note:579101.1
 	
Defining a "default RPMs" installation of the Oracle Enterprise Linux (OEL) OS
 	Doc ID:	Note:401167.1
 	
Defining a "default RPMs" installation of the RHEL OS
 	Doc ID:	Note:376183.1
 	
Defining a "default RPMs" installation of the SLES OS
 	Doc ID:	Note:386391.1
 	
The 'oracle-validated' RPM Package for Installation Prerequisities
 	Doc ID:	Note:437743.1
 	
 	
 	
-- RELINK

How to Relink Oracle Database Software on UNIX
 	Doc ID:	Note:131321.1
 	
 	
 	
-- ASYNC IO

Kernel Parameter "aio-max-size" does not exist in RHEL4 / EL4 / RHEL5 /EL5
 	Doc ID:	Note:549075.1
 	
"Warning: OS async I/O limit 128 is lower than recovery batch 1024" in Alert log
 	Doc ID:	Note:471846.1
 	
Asynchronous I/O (aio) on RedHat Advanced Server 2.1 and RedHat Enterprise Linux 3
 	Doc ID:	Note:225751.1
 	



-- ORACLE VM

Oracle VM and External Storage Systems
 	Doc ID:	Note:558041.1

Steps to Create Test RAC Setup On Oracle VM
 	Doc ID:	Note:742603.1



-- SHUTDOWN ABORT HANG

Shutdown Abort Hangs
  	Doc ID: 	Note:161234.1
  	
  	
  	
  	
-- MEMORY 

Oracle Background Processes Memory Consumption
  	Doc ID: 	77547.1

Monitoring Memory Use
  	Doc ID: 	Note:2060096.6

TECH: Unix Virtual Memory, Paging & Swapping explained
  	Doc ID: 	Note:17094.1

UNIX: Determining the Size of an Oracle Process
  	Doc ID: 	Note:174555.1

How to Check the Environment Variables for an Oracle Process
  	Doc ID: 	Note:373303.1

How to Configure RHEL/OEL 4/5 32-bit for Very Large Memory with ramfs and HugePages
  	Doc ID: 	Note:317141.1
  	
HugePages on Linux: What It Is... and What It Is Not...
  	Doc ID: 	Note:361323.1
  	
Linux IA64 example of allocating 48GB SGA using hugepages
  	Doc ID: 	Note:397568.1
  	
Shell Script to Calculate Values Recommended HugePages / HugeTLB Configuration
  	Doc ID: 	Note:401749.1
  	
Linux: How to Check Current Shared Memory, Semaphore Values
  	Doc ID: 	Note:226209.1
  	
Maximum SHMMAX values for Linux x86 and x86-64
  	Doc ID: 	Note:567506.1
  	
TECH: Unix Semaphores and Shared Memory Explained
  	Doc ID: 	Note:15566.1
  	
SHARED MEMORY REQUIREMENTS ON UNIX
  	Doc ID: 	Note:1011658.6
  	
Linux Big SGA, Large Memory, VLM - White Paper
  	Doc ID: 	Note:260152.1
  	
OS Configuration for large SGA
  	Doc ID: 	Note:225220.1
  	
Configuring 2.7Gb SGA in RHEL by Relocating the SGA Attach Address
  	Doc ID: 	Note:329378.1
  	
How To Set SHMMAX On SOLARIS 10 From CLI
  	Doc ID: 	Note:372972.1
  	
How Important It Is To Set shmsys:shminfo_shmmax Above 4 GB
  	Doc ID: 	Note:467960.1
  	
DETERMINING WHICH INSTANCE OWNS WHICH SHARED MEMORY & SEMAPHORE SEGMENTS
  	Doc ID: 	Note:68281.1
  	
Operating System Tuning Issues on Unix
  	Doc ID: 	Note:1012819.6
  	
Linux Big SGA, Large Memory, VLM - White Paper
  	Doc ID: 	Note:260152.1
  	
How to Configure RHEL 3.0 32-bit for Very Large Memory and HugePages
  	Doc ID: 	Note:317055.1
  	
How to Configure RHEL 3.0 32-bit for Very Large Memory and HugePages
  	Doc ID: 	Note:317055.1

ORA-824, ORA-1078 When Enabling PAE on VLM on 10g When Sga_Target Parameter is Set
  	Doc ID: 	Note:286093.1

Linux: How to Check Current Shared Memory, Semaphore Values
  	Doc ID: 	Note:226209.1
  	
UNIX VIRTUAL MEMORY: UNDERSTANDING AND MEASURING MEMORY USAGE
  	Doc ID: 	Note:1012017.6
  	
HOW TO INVESTIGATE THE USE OF SHARED MEMORY SEGMENTS AND SEMAPHORES AT A UNIX LEVEL?
  	Doc ID: 	Note:1007971.6
  	
How To Identify Shared Memory Segments for Each Instance		<-- dump it.. 
  	Doc ID: 	Note:1021010.6
  	



-- SHARED MEMORY / SEMAPHORES

TECH: Calculating Oracle's SEMAPHORE Requirements
  	Doc ID: 	15654.1

TECH: Unix Semaphores and Shared Memory Explained
  	Doc ID: 	15566.1

Linux Big SGA, Large Memory, VLM - White Paper
  	Doc ID: 	Note:260152.1

Modifying Kernel Parameters on RHEL, SLES, and Oracle Enterprise Linux using sysctl
  	Doc ID: 	Note:390279.1
  	  	
Linux: How to Check Current Shared Memory, Semaphore Values
  	Doc ID: 	Note:226209.1
  	
How to permanently set kernel parameters on Linux
  	Doc ID: 	Note:242529.1 	
  	
Configuring 2.7Gb SGA in RHEL by Relocating the SGA Attach Address
  	Doc ID: 	Note:329378.1
  	
Linux IA64 example of allocating 48GB SGA using hugepages
  	Doc ID: 	Note:397568.1
  	
How to Configure RHEL 3.0 32-bit for Very Large Memory and HugePages
  	Doc ID: 	Note:317055.1



-- HUGE PAGES, VLM

Configuring RHEL 3 and Oracle 9iR2 32-bit with Hugetlb and Remap_file_pages
  	Doc ID: 	Note:262004.1

Database Buffer Cache is not Loaded into Shared Memory when using VLM
  	Doc ID: 	Note:454465.1

OS Configuration for large SGA
  	Doc ID: 	Note:225220.1
  	
Increasing Usable Address Space for Oracle on 32-bit Linux
  	Doc ID: 	Note:200266.1
  	
How to Configure RHAS 2.1 32-bit for Very Large Memory (VLM) with shmfs and bigpages
  	Doc ID: 	Note:211424.1
  	
ORA-27123: 3.6 GB SGA size on Red Hat 3.0
  	Doc ID: 	Note:273544.1
  	
Red Hat Release 3.0; Advantages for Oracle
  	Doc ID: 	Note:259772.1
  	
HugePages on Linux: What It Is... and What It Is Not...
  	Doc ID: 	Note:361323.1
  	
Oracle Database Server and the Operating System Memory Limitations
  	Doc ID: 	Note:269495.1
  	





-- REMOVE DISK

How to Dynamically Add and Remove SCSI Devices on Linux
  	Doc ID: 	603868.1



-- DEVICE PERSISTENCE

How to set device persistence for RAC Oracle on Linux
  	Doc ID: 	729613.1




-- DEBUG

How to generate and analyze the core files on linux
  	Doc ID: 	278173.1



-- MDAM

Doc ID 759260.1 How to Configure Oracle Enterprise Linux to be Highly Available Using RAID1
Doc ID 343092.1 How to setup Linux md devices for CRS and ASM



-- SCSI

How to Dynamically Add and Remove SCSI Devices on Linux
  	Doc ID: 	603868.1

    Note 357472.1 - Configuring device-mapper for CRS/ASM
    Note 414897.1 - How to Setup UDEV Rules for RAC OCR & Voting devices on SLES10, RHEL5, OEL5
    Note 456239.1 - Understanding Device-mapper in Linux 2.6 Kernel
    Note 465001.1 - Configuring raw devices (singlepath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5
    Note 564580.1 - Configuring raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5
    Note 605828.1 - Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0) on RHEL5/OEL5
    udev(8) man page
    mount(8) man page



-- x25-M - FLASH STORAGE
http://guyharrison.squarespace.com/blog/2009/11/24/using-the-oracle-11gr2-database-flash-cache.html
http://tholis.webnode.com/news/hardware-adventures/
http://www.hardwarezone.com/articles/view.php?cid=10&id=2990
http://www.hardwarezone.com/articles/view.php?cid=10&id=2697&pg=2
http://www.tipidpc.com/viewitem.php?iid=4580739
http://www.everyjoe.com/thegadgetblog/160gb-intel-x25-m-ssd-for-sale/
http://computerworld.com.ph/intel-releases-windows-7-ssd-optimization-toolbox/
http://www.villman.com/Product-Detail/Intel_80GB_SSD_X25-M
http://www.anandtech.com/cpuchipsets/Intel/showdoc.aspx?i=3403&cp=4
http://www.youtube.com/watch?v=-rCC9y1u-8c



-- SWAP

Swap Space on RedHat Advanced Server
  	Doc ID: 	Note:225451.1


-- CUSTOM SHUTDOWN / STARTUP

How to Automate Startup/Shutdown of Oracle Database on Linux
  	Doc ID: 	Note:222813.1
  	
Customizing System Startup in RedHat Linux
  	Doc ID: 	Note:126146.1




-- HUGEMEM KERNEL
-- as per RHCE notes, hugemem is no longer available in RHEL 5; with the x86-64 kernel you get a really high limit anyway
https://blogs.oracle.com/gverma/entry/common_incorrect_beliefs_about_1
https://blogs.oracle.com/gverma/entry/redhat_linux_kernels_and_proce_1



Mind the Gap http://static.usenix.org/event/hotos11/tech/final_files/Mogul.pdf




http://perfdynamics.blogspot.com/2012/08/littles-law-and-io-performance.html

fusion io
http://www.theregister.co.uk/2012/01/06/fusion_billion_iops/
http://www.theregister.co.uk/2011/10/04/fusion_io_gen_2/
/***
|''Name:''|LoadRemoteFileThroughProxy (previous LoadRemoteFileHijack)|
|''Description:''|When the TiddlyWiki file is located on the web (view over http) the content of [[SiteProxy]] tiddler is added in front of the file url. If [[SiteProxy]] does not exist "/proxy/" is added. |
|''Version:''|1.1.0|
|''Date:''|mar 17, 2007|
|''Source:''|http://tiddlywiki.bidix.info/#LoadRemoteFileHijack|
|''Author:''|BidiX (BidiX (at) bidix (dot) info)|
|''License:''|[[BSD open source license|http://tiddlywiki.bidix.info/#%5B%5BBSD%20open%20source%20license%5D%5D ]]|
|''~CoreVersion:''|2.2.0|
***/
//{{{
version.extensions.LoadRemoteFileThroughProxy = {
 major: 1, minor: 1, revision: 0, 
 date: new Date("mar 17, 2007"), 
 source: "http://tiddlywiki.bidix.info/#LoadRemoteFileThroughProxy"};

if (!window.bidix) window.bidix = {}; // bidix namespace
if (!bidix.core) bidix.core = {};

bidix.core.loadRemoteFile = loadRemoteFile;
loadRemoteFile = function(url,callback,params)
{
 if ((document.location.toString().substr(0,4) == "http") && (url.substr(0,4) == "http")){ 
 url = store.getTiddlerText("SiteProxy", "/proxy/") + url;
 }
 return bidix.core.loadRemoteFile(url,callback,params);
}
//}}}
A Locking Mechanism in Oracle 10g for Web Applications
http://husnusensoy.wordpress.com/2007/07/28/a-locking-mechanism-in-oracle-10g-for-web-applications/
http://www.evernote.com/shard/s48/sh/194a9a05-18ce-4a9b-9cae-1fa9f230d94a/6fc7a6c6bde37e5910b3c8464ed17df4
-- LOG MINER

Truncate Statement is not Detected by Log Miner -- not true on 10gR2
  	Doc ID: 	168738.1

Capture is Slow to Mine Redo Containing a Significant Number of DDL Operations.
  	Doc ID: 	564772.1

Can not delete Archive Log used in a CONTINUOUS_MINE mode Logminer Session
  	Doc ID: 	763700.1

Log Miner Generating Huge Amount Of Undo
  	Doc ID: 	353780.1

How to Recover from a Truncate Command
  	Doc ID: 	117055.1

Doc ID: 223543.1 How to Recover From a DROP / TRUNCATE / DELETE TABLE with RMAN
Doc ID: 141194.1 How to Recover from a Truncate Command on the Wrong Table

Avoiding the truncate during a complete snapshot / materialized view refresh
  	Doc ID: 	1029824.6
http://www.antognini.ch/2012/03/analysing-row-lock-contention-with-logminer/
http://www.nocoug.org/download/2008-05/LogMiner4.pdf
http://docs.oracle.com/cd/B12037_01/server.101/b10825/logminer.htm
https://oraclespin.wordpress.com/category/general-dba/log-miner/

http://oracle-randolf.blogspot.de/2011/07/logical-io-evolution-part-1-baseline.html
http://oracle-randolf.blogspot.com/2011/07/logical-io-evolution-part-2-9i-10g.html

http://alexanderanokhin.wordpress.com/2012/07/26/buffer-is-pinned-count/ 
http://alexanderanokhin.wordpress.com/tools/digger/

''LIO reasons''
http://blog.tanelpoder.com/2009/11/19/finding-the-reasons-for-excessive-logical-ios/
http://www.jlcomp.demon.co.uk/buffer_usage.html
http://hoopercharles.wordpress.com/2011/01/24/watching-consistent-gets-10200-trace-file-parser/
http://oracle-randolf.blogspot.com/2011/05/assm-bug-reprise-part-1.html
http://oracle-randolf.blogspot.com/2011/05/assm-bug-reprise-part-2.html
http://structureddata.org/2008/09/08/understanding-performance/



-- simulate a logical corruption
http://goo.gl/bhXgh
{{{
create or replace trigger sys.etl_logon
after logon on database
begin
  if user = 'CCMETL' then
    execute immediate 'alter session set "_serial_direct_read" = ''ALWAYS''';
  end if;
end;
}}}


''for SAP active data guard (execute on primary)''
{{{
CREATE OR REPLACE TRIGGER adg_pxforce_trigger
AFTER LOGON ON database
WHEN (USER in ('ENTERPRISE'))
BEGIN
IF (SYS_CONTEXT('USERENV','DATABASE_ROLE') IN ('PHYSICAL STANDBY'))   -- check if standby
AND (UPPER(SUBSTR(SYS_CONTEXT ('USERENV','SERVER_HOST'),1,4)) IN ('X4DP'))  -- check if the ADG cluster
THEN
execute immediate 'alter session force parallel query parallel 4';
END IF;
END;
/
}}}

use [[SYS_CONTEXT]] to instrument


http://www.oracle.com/us/products/servers-storage/storage/storage-software/031855.htm
http://wiki.lustre.org/index.php/Main_Page
http://lists.lustre.org/pipermail/lustre-announce/attachments/20100414/34394870/attachment-0001.pdf
http://lists.lustre.org/pipermail/lustre-discuss/2011-June/015655.html
! M6, M5, T5

M6 runs at the same per-core speed as M5 and T5; compared to M5, the M6 just has double the number of cores (12 vs 6 cores).
So you can't really say "M6 is XX% faster", but with more cores you can say that on M6 you can put/consolidate more workload.

Also, SPECint_rate2006 has no M5 or M6 results available, so the speed/core is around 29 to 30 across the T5 flavors:
3750/128=29.296875

-- below are the variable values (raw and final header)
Result/# Cores, # Cores, # Chips, # Cores Per Chip, # Threads Per Core, Baseline, Result, Hardware Vendor, System, Published

$ less spec.txt | sort -rnk1 | grep -i sparc | grep -i oracle
30.5625, 16, 1, 16, 8, 441, 489, Oracle Corporation, SPARC T5-1B, Oct-13
29.2969, 128, 8, 16, 8, 3490, 3750, Oracle Corporation, SPARC T5-8, Apr-13
29.1875, 16, 1, 16, 8, 436, 467, Oracle Corporation, SPARC T5-1B, Apr-13
18.6, 2, 1, 2, 2, 33.7, 37.2, Oracle Corporation, SPARC Enterprise M3000, Apr-11
14.05, 4, 1, 4, 2, 50.3, 56.2, Oracle Corporation, SPARC Enterprise M3000, Apr-11
13.7812, 64, 16, 4, 2, 806, 882, Oracle Corporation, SPARC Enterprise M8000, Dec-10
13.4375, 128, 32, 4, 2, 1570, 1720, Oracle Corporation, SPARC Enterprise M9000, Dec-10
12.3047, 256, 64, 4, 2, 2850, 3150, Oracle Corporation, SPARC Enterprise M9000, Dec-10
11.1875, 16, 4, 4, 2, 158, 179, Oracle Corporation, SPARC Enterprise M4000, Dec-10
11, 32, 8, 4, 2, 313, 352, Oracle Corporation, SPARC Enterprise M5000, Dec-10
10.4688, 32, 2, 16, 8, 309, 335, Oracle Corporation, SPARC T3-2, Feb-11
10.4062, 64, 4, 16, 8, 614, 666, Oracle Corporation, SPARC T3-4, Feb-11
10.375, 16, 1, 16, 8, 153, 166, Oracle Corporation, SPARC T3-1, Jan-11
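The first column above is just Result divided by # Cores (e.g. 3750/128 = 29.296875); a small awk sketch can reproduce it from the raw columns, with the column order assumed from the header noted above:

```shell
# Sketch: derive the "Result/# Cores" column from the raw SPEC CSV columns
# (# Cores, # Chips, # Cores Per Chip, # Threads Per Core, Baseline, Result,
# Hardware Vendor, System, Published). Column positions are assumptions
# based on the header above.
percore() {
  awk -F', ' '{ printf "%.4f, %s\n", $6/$1, $8 }' "$1"
}

# usage: percore spec_raw.txt | sort -rn
```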


References below:

http://www.oracle.com/us/corporate/features/sparc-m6/index.html
 By leveraging a common set of technologies across product lines, the price/performance metric of a SPARC M6-32 server with 32 processors is similar to Oracle's SPARC T5 server with 2, 4 or 8 processors.

http://www.oracle.com/us/products/servers-storage/servers/sparc/oracle-sparc/m6-32/overview/index.html
Unlike competitive large servers the SPARC M6-32 has the same price/performance as entry-level servers meaning no price premium for the benefits of a large server.
Near-linear pricing delivers the same price/performance as Oracle's smaller T-series servers and provides large-scale server benefits without the pricing premium of big servers

http://en.wikipedia.org/wiki/SPARC
 

http://www.oracle.com/technetwork/server-storage/sun-sparc-enterprise/documentation/o13-066-sparc-m6-32-architecture-2016053.pdf
 

! compared to X4-8 

But... the X4-8 is faster than T5/M5/M6, with a per-core speed of 38 vs 30.

X4-8 
https://twitter.com/karlarao/status/435882623500423168

and the X4-8 is pretty much the same speed as the compute nodes of the X4-2, so you also get the linear scaling they claim for T5/M5/M6, but with a much faster CPU
also here's my "T5-8 vs IBM P780 SPECint_rate2006" comparison: http://goo.gl/xj7o8


! compared to IBM 
[[T5-8 vs IBM P780 SPECint_rate2006]]


https://blogs.oracle.com/EMMAA/entry/exadata_health_and_resource_usage1
<<<
A newly updated version of the Exadata Health and Resource Usage monitoring white paper has been released! It documents an end-to-end approach to health and resource utilization monitoring for Oracle Exadata environments. The document has been substantially modified to help Exadata administrators easily follow the troubleshooting methodology defined. Other additions include:

Exadata 12.1.0.6 plugin for Enterprise Manager new features
Enterprise Manager 12.1.0.4 updates
Updates to Include X4 environment

Download the white paper as the link below: 
http://www.oracle.com/technetwork/database/availability/exadata-health-resource-usage-2021227.pdf
<<<
https://blogs.oracle.com/EMMAA/entry/exadata_health_and_resource_usage
<<<
MAA has recently published a new whitepaper documenting an end-to-end approach to health and resource utilization monitoring for Oracle Exadata environments. In addition to the technical details, a troubleshooting methodology is explored that allows administrators to quickly identify and correct issues in an expeditious manner. 

The document takes a "rule out" approach, in that components of the system are verified as performing correctly to eliminate their role in the incident. There are five areas of concentration in the overall system diagnosis: 
1. Steps to take before problems occur that can assist in troubleshooting 
2. Changes made to the system 
3. Quick analysis 
4. Baseline comparison 
5. Advanced diagnostics
http://www.oracle.com/technetwork/database/availability/exadata-health-resource-usage-2021227.pdf
<<<
''homepage'' http://www.oracle.com/technetwork/database/features/availability/maa-090890.html

''MAA Best Practices - Oracle Database '' http://www.oracle.com/technetwork/database/features/availability/oracle-database-maa-best-practices-155386.html
''High Availability Customer Case Studies, Presentations, Profiles, Analyst Reports, and Press Releases'' http://www.oracle.com/technetwork/database/features/availability/ha-casestudies-098033.html
''High Availability Demonstrations'' http://www.oracle.com/technetwork/database/features/availability/demonstrations-092317.html
''MAA Articles'' http://www.oracle.com/technetwork/database/features/availability/ha-articles-099205.html


! oracle cloud 
this oaktable thread (https://mail.google.com/mail/u/0/#inbox/15bef235f979050a) got me curious on looking up "Maximum Cloud Availability Architecture" and found this http://www.oracle.com/technetwork/database/features/availability/oracle-cloud-maa-3046100.html
Oracle Private Database Cloud using Cloud Control 13c https://www.udemy.com/oracle-private-database-cloud/learn/v4/overview


! AWS reference architecture oracle database availability
Get Oracle Flying in AWS Cloud https://www.udemy.com/get-oracle-flying-in-aws-cloud/learn/v4/content
Best Practices for Running Oracle Database on Amazon Web Services https://d0.awsstatic.com/whitepapers/best-practices-for-running-oracle-database-on-aws.pdf
Oracle Database on the AWS Cloud https://s3.amazonaws.com/quickstart-reference/oracle/database/latest/doc/oracle-database-on-the-aws-cloud.pdf
Advanced Architectures for Oracle Database on Amazon EC2 https://d0.awsstatic.com/enterprise-marketing/Oracle/AWSAdvancedArchitecturesforOracleDBonEC2.pdf



! Azure 
Cloud Design Patterns for Azure: Availability and Resilience https://www.pluralsight.com/courses/azure-design-patterns-availability-resilience
<<<


Do any of you know how to check whether the E5-2650 v2 is an MCM chip, or better yet, know of an official Intel list of MCM chips? The reason I ask is that it has an impact on Oracle Standard Edition licenses (SE & SEO)


https://communities.intel.com/message/239585#239585
https://communities.intel.com/message/239195#239195  -- " I regret to inform you that Intel does not have a list of  MCM processors available on the Intel web site."


http://unix.ittoolbox.com/groups/technical-functional/ibm-aix-l/need-help-in-understanding-the-cpu-cores-concept-on-the-pseries-machines-4345233
http://oracleoptimization.com/2010/03/15/multi-chip-modules/
https://community.oracle.com/thread/925590?start=0&tstart=0
https://neerajbhatia.wordpress.com/2011/01/17/understanding-oracle-database-licensing-policies/
http://research.engineering.wustl.edu/~songtian/pdf/intel-haswell.pdf  <-- desktop
http://en.wikipedia.org/wiki/Broadwell_%28microarchitecture%29 <-- desktop/mobile
http://www.fudzilla.com/home/item/26786-intel-migrates-to-desktop-multi-chip-module-mcm-with-14nm-broadwell <-- desktop
amd http://www.internetnews.com/hardware/article.php/3745836/Why+AMD+Went+the+MultiChip+Module+Route.htm <-- amd mcm
intel forums
https://communities.intel.com/search.jspa?q=multi+chip+module
http://help.howproblemsolution.com/777220/is-it-intel-xeon-e5-2609-processors-is-mcm-multi-chip-module
https://communities.intel.com/message/188146
https://communities.intel.com/thread/48897
https://communities.intel.com/message/230883#230883
https://communities.intel.com/message/252954
https://communities.intel.com/message/259243#259243
https://communities.intel.com/message/239195#239195

<<<
<<showtoc>>

! high level 
[img(60%,60%)[ https://i.imgur.com/1amolkO.png ]]
* http://senthilmkumar-utilities.blogspot.com/2013/11/oracle-utilties-meter-data-management.html



https://www.google.com/search?q=master+data+management+in+data+lake&oq=master+data+management+in+data+lake&aqs=chrome..69i57.4474j0j7&sourceid=chrome&ie=UTF-8

https://www.udemy.com/courses/search/?src=ukw&q=master%20data%20management
https://www.udemy.com/master-data-management/
https://www.udemy.com/informatica-master-data-management-hub-tool/
https://www.udemy.com/user/sandip-mohite/
https://www.udemy.com/overview-of-informatica-data-director-idd/
https://learning.oreilly.com/library/view/master-data-management/9781118085684/
https://learning.oreilly.com/library/view/master-data-management/9780123742254/
https://learning.oreilly.com/library/view/building-a-scalable/9780128026489/B978012802510900009X/B978012802510900009X.xhtml



https://www.youtube.com/results?search_query=master+data+management+repository+data+lake
The Big Picture of Metadata Management for Data Governance & Enterprise Architecture https://www.youtube.com/watch?v=Zg9BNGV_DAg
What is a Data Lake https://www.youtube.com/watch?v=LxcH6z8TFpI
Big Data & MDM https://www.youtube.com/watch?v=67d8QIg9k9s
Informatica Big Data Management with Intelligent Data Lake Deep Dive and Demo https://www.youtube.com/watch?v=FUXP4nI92l8
Ten Best Practices for Master Data Management and Data Governance https://www.youtube.com/watch?v=kFok_3SPmKw
How to Use Azure Data Catalog https://www.youtube.com/watch?v=Ei7UynF_S_s
Enterprise Data Architecture Strategy - Build a Meta Data Repository https://www.youtube.com/watch?v=HAt22r_KNJI
DW vs MDM https://www.youtube.com/watch?v=XF4p2gZNLvQ
Data Lake VS Data Warehouse https://www.youtube.com/watch?v=AwbKwcw7bgg
https://www.youtube.com/user/Intricity101/featured
https://www.google.com/search?sxsrf=ACYBGNTQmHf9SsK0FvLtntLkoT6nXfZh5g%3A1564161164855&ei=jDQ7XbPiM7Kvggexm42IDw&q=oracle+master+data+management+install&oq=oracle+master+data+management+install&gs_l=psy-ab.3..33i22i29i30l2.14153.16006..16299...0.0..0.117.696.7j1......0....1..gws-wiz.......0i71j0j0i22i30.Lb8tNhjwOw0&ved=0ahUKEwiz2Oi0itPjAhWyl-AKHbFNA_EQ4dUDCAo&uact=5
http://www.oracle.com/us/products/applications/master-data-management/mdm-overview-1954202.pdf
https://www.google.com/search?q=rules+engine+master+data+management&oq=rules+engine+master+data+management&aqs=chrome..69i57j33.5062j0j7&sourceid=chrome&ie=UTF-8
http://www.oracle.com/us/products/applications/master-data-management/018874.pdf



! MDM services 
https://www.intricity.com/data-management-health-checks/
https://www.intricity.com/category/videos/
https://www.youtube.com/user/Intricity101
https://aws.amazon.com/mp/scenarios/bi/mdm/
https://learning.oreilly.com/search/?query=master%20data%20management%20tools&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_orioles=true&include_playlists=true&include_collections=false&include_notebooks=false&is_academic_institution_account=false&sort=relevance&facet_json=true&page=0
https://learning.oreilly.com/library/view/cloud-data-design/9781484236154/A448498_1_En_5_Chapter.html
https://learning.oreilly.com/library/view/a-practical-guide/0738438022/8084ch02.xhtml



! MDM tools used 
looker https://looker.com/product/new-features
collibra https://www.collibra.com/ , https://www.youtube.com/results?search_query=collibra , https://www.youtube.com/watch?v=ncLqaBYa0NE
informatica https://www.informatica.com/products/big-data/enterprise-data-catalog.html#fbid=TwzPerm1Zph
ibm https://www.ibm.com/us-en/marketplace/ibm-infosphere-master-data-management
pimcore https://pimcore.com/en/lp/mdm?gclid=Cj0KCQjwyerpBRD9ARIsAH-ITn9vykyfOcPeDQSwRrxPYrAkEC6CQXnSSEex2_3v-BiPEsnnc7Recm0aAhKMEALw_wcB
cloudera navigator https://www.cloudera.com/products/product-components/cloudera-navigator.html
https://atlas.apache.org/#/Architecture


!! mother of all MDM 
http://metaintegration.net/
<<<
Informatica https://youtu.be/l50H3nLfyng?t=244
Collibra https://www.collibra.com/
Oracle has a powerful metadata harvester https://www.oracle.com/a/tech/docs/omm-12213-help-userguide.pdf
Centurylink uses Cloudera Navigator to augment their custom developed MDM
 
 
And, I just found out that under the hood everyone (Oracle, Informatica and others) uses this company’s metadata harvester tool http://metaintegration.net/Company/ (you’ll see on the section “Vendors embedding Meta Integration components in their software”)
<<<
















http://hemantoracledba.blogspot.com/2010/08/adding-datafile-that-had-been-excluded.html

! aws sagemaker vs azure ml vs google ml

https://www.altexsoft.com/blog/datascience/comparing-machine-learning-as-a-service-amazon-microsoft-azure-google-cloud-ai-ibm-watson/
https://towardsdatascience.com/aws-sagemaker-vs-azure-machine-learning-3ac0172495da
limitations: 
* https://sqlmaria.com/2017/08/01/getting-the-most-out-of-oracle-sql-monitor/
* https://blogs.oracle.com/optimizer/using-sql-patch-to-add-hints-to-a-packaged-application

scripts: 
https://carlos-sierra.net/2014/06/19/skipping-acs-ramp-up-using-a-sql-patch/
https://carlos-sierra.net/2016/02/29/sql-monitoring-without-monitor-hint/

{{{
----------------------------------------------------------------------------------------
--
-- File name:   sqlpch.sql
--
-- Purpose:     Create Diagnostics SQL Patch for one SQL_ID
--
-- Author:      Carlos Sierra
--
-- Version:     2013/12/28
--
-- Usage:       This script inputs two parameters. Parameter 1 the SQL_ID and Parameter 2
--              the set of Hints for the SQL Patch (default to GATHER_PLAN_STATISTICS 
--              MONITOR BIND_AWARE).
--
-- Example:     @sqlpch.sql f995z9antmhxn BIND_AWARE
--
--  Notes:      Developed and tested on 11.2.0.3 and 12.0.1.0
--             
---------------------------------------------------------------------------------------
SPO sqlpch.txt;
DEF def_hint_text = 'GATHER_PLAN_STATISTICS MONITOR BIND_AWARE';
SET DEF ON TERM OFF ECHO ON FEED OFF VER OFF HEA ON LIN 2000 PAGES 100 LONG 8000000 LONGC 800000 TRIMS ON TI OFF TIMI OFF SERVEROUT ON SIZE 1000000 NUMF "" SQLP SQL>;
SET SERVEROUT ON SIZE UNL;
COL hint_text NEW_V hint_text FOR A300;
SET TERM ON ECHO OFF;
PRO
PRO Parameter 1:
PRO SQL_ID (required)
PRO
DEF sql_id_1 = '&1';
PRO
PRO Parameter 2:
PRO HINT_TEXT (default: &&def_hint_text.)
PRO
DEF hint_text_2 = '&2';
PRO
PRO Values passed:
PRO ~~~~~~~~~~~~~
PRO SQL_ID   : "&&sql_id_1."
PRO HINT_TEXT: "&&hint_text_2." (default: "&&def_hint_text.")
PRO
SET TERM OFF ECHO ON;
SELECT TRIM(NVL(REPLACE('&&hint_text_2.', '"', ''''''), '&&def_hint_text.')) hint_text FROM dual;
WHENEVER SQLERROR EXIT SQL.SQLCODE;
 
-- trim sql_id parameter
COL sql_id NEW_V sql_id FOR A30;
SELECT TRIM('&&sql_id_1.') sql_id FROM DUAL;
 
VAR sql_text CLOB;
VAR sql_text2 CLOB;
EXEC :sql_text := NULL;
EXEC :sql_text2 := NULL;
 
-- get sql_text from memory
DECLARE
  l_sql_text VARCHAR2(32767);
BEGIN -- 10g see bug 5017909
  FOR i IN (SELECT DISTINCT piece, sql_text
              FROM gv$sqltext_with_newlines
             WHERE sql_id = TRIM('&&sql_id.')
             ORDER BY 1, 2)
  LOOP
    IF :sql_text IS NULL THEN
      DBMS_LOB.CREATETEMPORARY(:sql_text, TRUE);
      DBMS_LOB.OPEN(:sql_text, DBMS_LOB.LOB_READWRITE);
    END IF;
    l_sql_text := REPLACE(i.sql_text, CHR(00), ' '); -- removes NUL characters
    DBMS_LOB.WRITEAPPEND(:sql_text, LENGTH(l_sql_text), l_sql_text); 
  END LOOP;
  -- if found in memory then sql_text is not null
  IF :sql_text IS NOT NULL THEN
    DBMS_LOB.CLOSE(:sql_text);
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('getting sql_text from memory: '||SQLERRM);
    :sql_text := NULL;
END;
/
 
SELECT :sql_text FROM DUAL;
 
-- get sql_text from awr
DECLARE
  l_sql_text VARCHAR2(32767);
  l_clob_size NUMBER;
  l_offset NUMBER;
BEGIN
  IF :sql_text IS NULL OR NVL(DBMS_LOB.GETLENGTH(:sql_text), 0) = 0 THEN
    SELECT sql_text
      INTO :sql_text2
      FROM dba_hist_sqltext
     WHERE sql_id = TRIM('&&sql_id.')
       AND sql_text IS NOT NULL
       AND ROWNUM = 1;
  END IF;
  -- if found in awr then sql_text2 is not null
  IF :sql_text2 IS NOT NULL THEN
    l_clob_size := NVL(DBMS_LOB.GETLENGTH(:sql_text2), 0);
    l_offset := 1;
    DBMS_LOB.CREATETEMPORARY(:sql_text, TRUE);
    DBMS_LOB.OPEN(:sql_text, DBMS_LOB.LOB_READWRITE);
    -- store in clob as 64 character pieces 
    WHILE l_offset < l_clob_size
    LOOP
      IF l_clob_size - l_offset > 64 THEN
        l_sql_text := REPLACE(DBMS_LOB.SUBSTR(:sql_text2, 64, l_offset), CHR(00), ' ');
      ELSE -- last piece
        l_sql_text := REPLACE(DBMS_LOB.SUBSTR(:sql_text2, l_clob_size - l_offset + 1, l_offset), CHR(00), ' ');
      END IF;
      DBMS_LOB.WRITEAPPEND(:sql_text, LENGTH(l_sql_text), l_sql_text);
      l_offset := l_offset + 64;
    END LOOP;
    DBMS_LOB.CLOSE(:sql_text);
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('getting sql_text from awr: '||SQLERRM);
    :sql_text := NULL;
END;
/
 
SELECT :sql_text2 FROM DUAL;
SELECT :sql_text FROM DUAL;
 
-- validate sql_text
BEGIN
  IF :sql_text IS NULL THEN
    RAISE_APPLICATION_ERROR(-20100, 'SQL_TEXT for SQL_ID &&sql_id. was not found in memory (gv$sqltext_with_newlines) or AWR (dba_hist_sqltext).');
  END IF;
END;
/
 
PRO generate SQL Patch for SQL "&&sql_id." with CBO Hints "&&hint_text."
SELECT loaded_versions, invalidations, address, hash_value
FROM v$sqlarea WHERE sql_id = '&&sql_id.' ORDER BY 1;
SELECT child_number, plan_hash_value, executions, is_shareable
FROM v$sql WHERE sql_id = '&&sql_id.' ORDER BY 1, 2;
 
-- drop prior SQL Patch
WHENEVER SQLERROR CONTINUE;
PRO ignore errors
EXEC DBMS_SQLDIAG.DROP_SQL_PATCH(name => 'sqlpch_&&sql_id.');
WHENEVER SQLERROR EXIT SQL.SQLCODE;
 
-- create SQL Patch
PRO you have to connect as SYS
BEGIN
  SYS.DBMS_SQLDIAG_INTERNAL.I_CREATE_PATCH (
    sql_text    => :sql_text,
    hint_text   => '&&hint_text.',
    name        => 'sqlpch_&&sql_id.',
    category    => 'DEFAULT',
    description => '/*+ &&hint_text. */'
  );
END;
/
 
-- flush cursor from shared_pool
PRO *** before flush ***
SELECT inst_id, loaded_versions, invalidations, address, hash_value
FROM gv$sqlarea WHERE sql_id = '&&sql_id.' ORDER BY 1;
SELECT inst_id, child_number, plan_hash_value, executions, is_shareable
FROM gv$sql WHERE sql_id = '&&sql_id.' ORDER BY 1, 2;
PRO *** flushing &&sql_id. ***
BEGIN
  FOR i IN (SELECT address, hash_value
              FROM gv$sqlarea WHERE sql_id = '&&sql_id.')
  LOOP
    DBMS_OUTPUT.PUT_LINE(i.address||','||i.hash_value);
    BEGIN
      SYS.DBMS_SHARED_POOL.PURGE (
        name => i.address||','||i.hash_value,
        flag => 'C'
      );
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE(SQLERRM);
    END;
  END LOOP;
END;
/
PRO *** after flush ***
SELECT inst_id, loaded_versions, invalidations, address, hash_value
FROM gv$sqlarea WHERE sql_id = '&&sql_id.' ORDER BY 1;
SELECT inst_id, child_number, plan_hash_value, executions, is_shareable
FROM gv$sql WHERE sql_id = '&&sql_id.' ORDER BY 1, 2;
 
WHENEVER SQLERROR CONTINUE;
SET DEF ON TERM ON ECHO OFF FEED 6 VER ON HEA ON LIN 80 PAGES 14 LONG 80 LONGC 80 TRIMS OFF TI OFF TIMI OFF SERVEROUT OFF NUMF "" SQLP SQL>;
SET SERVEROUT OFF;
PRO
PRO SQL Patch "sqlpch_&&sql_id." will be used on next parse.
PRO To drop SQL Patch on this SQL:
PRO EXEC DBMS_SQLDIAG.DROP_SQL_PATCH(name => 'sqlpch_&&sql_id.');
PRO
UNDEFINE 1 2 sql_id_1 sql_id hint_text_2 hint_text
CL COL
PRO
PRO sqlpch completed.
SPO OFF;
}}}
"DB CPU" / "CPU + Wait for CPU" / "CPU time" Reference Note (Doc ID 1965757.1)
MPTW is a distribution or edition of TiddlyWiki that includes a standard TiddlyWiki core packaged with some plugins designed to improve usability and provide a better way to organise your information. For more information see http://mptw.tiddlyspot.com/.
http://gigaom.com/2012/11/12/mram-takes-another-step-closer-to-the-real-world/

! transfer outlook to new computer 
https://www.stellarinfo.com/blog/transfer-outlook-data-to-new-computer/

! manual archive 
http://office.microsoft.com/en-us/outlook-help/archive-a-folder-manually-HA001121610.aspx
https://support.office.com/en-us/article/archive-items-manually-ecf54f37-14d7-4ee3-a830-46a5c33274f6

! turn off auto archive 
<<<
To archive only when you want, turn off AutoArchive.

Click File > Options > Advanced.

Under AutoArchive, click AutoArchive Settings.

Uncheck the Run AutoArchive every n days box.
<<<
http://office.microsoft.com/en-us/powerpoint-help/view-your-speaker-notes-privately-while-delivering-a-presentation-on-multiple-monitors-HA010067383.aspx
http://www.labnol.org/software/see-speaker-notes-during-presentation/17927/
http://techmonks.net/using-the-presenter-view-in-microsoft-powerpoint/

! ink and erase
{{{
CTRL-P for "pen"
press E for "erase"
}}}
http://office.microsoft.com/en-us/project-help/setting-working-times-and-days-off-by-using-project-calendars-HA001020995.aspx
http://forums.techarena.in/microsoft-project/1264277.htm
The silly IO test
http://www.facebook.com/photo.php?pid=6096927&l=5082945abc&id=552113028
<<<
A simple IO test on a Macbook Air 11" 2GB memory 64GB SSD..
the peak write IOPS is just too high for this small lightweight laptop..

For a clearer image http://lh3.ggpht.com/_F2x5WXOJ6Q8/TTwvCC02nWI/AAAAAAAABBU/WBP3z81nifM/SillyTest.jpg

Also you can compare the performance numbers of 4 disk spindles.. short stroked or not.. here http://karlarao.tiddlyspot.com/#OrionTestCases
<<<


Gaja buys a MacBook Air with 1 CPU (2 cores, 2.18GHz), 4GB of RAM, and 256GB of flash storage
http://www.facebook.com/Leo4Evr/posts/10150123422217659
<<<
Karl Arao Hi Gaja.. I would be interested to see the output of this silly IO test on your new Mac http://goo.gl/lZdUw ;)
---------------------------------------------------------------------------------------------------------------------
Gaja Krishna Vaidyanatha ‎@Karl - Silly Test...indeed...one of the reasons for all of the memory being consumed is the excessive growth of the filesystem buffer cache (FSBC)/Page Cache, due to an increased amount of I/O load on the system. That in turn causes the paging/swapping daemon to be overactive, thus inflating the CPU consumption on the machine. It is a classic case of buffered I/O killing your system. Realistically, this test is more about creating a severe I/O bottleneck than measuring IOPS and transfer rates. 

A true IOPS test will entail doing just direct I/O and bypassing the FSBC/Page cache. One way to simulate that (if the FSBC/Page cache cannot be bypassed) is to do a dd of a large file that has never been read before. Reboot the system and repeat as often as needed. I just did a few of those and got approx 10,000 - 20,000 IOPS (depending on the bs size) with a transfer rate of approx 200MB/sec. The dd bs (block size) values that I tried were 8K and 16K. The numbers are good enough for me :)))

Gaja Krishna Vaidyanatha One more thing...if you run "dd" with a very small blocksize (the default), it will generate more overhead due to the large number of I/O requests, potentially spending more time under "%sys" instead of "%usr"
---------------------------------------------------------------------------------------------------------------------
Karl Arao That is awesome and the numbers are just impressive... I want to have one! :)

Yes, I made that IO test with the intention of bringing the system down to its knees and characterizing the IO performance at that level of stress. At the time I wanted to know if a laptop on SSD would outperform the IO of my R&D server http://goo.gl/eLVo2 (running lots of VMs) with 8GB memory, an IntelCore2Quad Q9500, & 5 1TB short-stroked disks (on a 100GB area) in an LVM stripe that does about 900+ IOPS & 300+ MB/s on my Orion and dbms_resource_manager.calibrate_io runs, while actually running 250 parallel sessions doing SELECT * on a 300GB table http://goo.gl/PYYyH (the same disks but as ASM disks on the next 100GB area, short-stroked). 

Also, prior to running that IO test on the MacBook Air I ran the same DD command on my old laptop w/ 4GB memory, an IntelCore2 T8100 & a 160GB SATA disk: just two DDs instantly push IO WAIT% to 60%, then 90%, with the load average shooting up, and the machine becomes completely unresponsive. I was monitoring the general workload, CPU, & disk using COLLECTL and I could see the disk being hammered with lots of 4K-block IOs, with really high Qlen, Wait, & 100% disk utilization. I had to restart the laptop to make it usable again.

So on the IO test on the MacBook Air, that was my first time seeing the GUI perf monitor, where I noticed there's no IO WAIT% among the metrics, and I saw SYS% shoot up as I invoked more DDs (I don't know if that's really the nature of machines on SSDs). And surprisingly, after 60 DDs I could still move my mouse and the system was still responsive. Cool!

The test you did is interesting and you had really nice insights in your reply; I'd also like to try that sometime ;) Can you mail me the exact commands for the test case? karlarao@gmail.com

BTW for an R&D machine that weighs 1+kg, 10K IOPS, 200MB/s is not bad!
<<<
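The dd-based test Gaja describes above can be sketched roughly like this. The file path, size, and block size here are arbitrary illustrative choices, not from the thread; for a meaningful result the file must be much larger than RAM, and the page cache dropped or bypassed first (reboot, `purge` on macOS, or direct I/O flags on Linux dd).

```shell
#!/bin/sh
# Sketch of the dd-style I/O test discussed above.
# Paths and sizes are placeholder choices for illustration.
TESTFILE=/tmp/ddtest.bin

# 1) Write test: create a file that has never been read before
#    (128 x 8K = 1MB here; scale count up well past RAM for a real run).
dd if=/dev/zero of="$TESTFILE" bs=8k count=128 2>/dev/null

# 2) Read test: time a sequential read back through dd.
#    On a real run, clear or bypass the FSBC/page cache first.
dd if="$TESTFILE" of=/dev/null bs=8k 2>/dev/null

# Show what was written.
ls -l "$TESTFILE"
```

Running a few of these in parallel (as in the original "silly test") is what drives the %sys/paging behavior described above; a single run mostly measures the cache.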
<<showtoc>>

! annoyances
Top Mac OS X annoyances and how to fix them http://www.voipsec.eu/?p=740
path finder and dropbox integration http://blip.tv/appshrink/os-x-tips-and-tweaks-how-to-enable-dropbox-contextual-menu-items-in-pathfinder-6110422
how to lock your mac http://www.howtogeek.com/howto/32810/how-to-lock-your-mac-os-x-display-when-youre-away/
http://gadgetwise.blogs.nytimes.com/2011/05/02/qa-changing-the-functions-of-a-macs-f-keys/
http://osxdaily.com/2010/09/06/change-your-mac-hostname-via-terminal/
http://apple.stackexchange.com/questions/66611/how-to-change-computer-name-so-terminal-displays-it-in-mac-os-x-mountain-lion
http://www.cultofmac.com/108120/how-to-change-the-scrolling-direction-in-lion-os-x-tips/
http://support.apple.com/kb/ht2490
http://superuser.com/questions/322983/how-to-let-ctrl-page-down-switch-tabs-inside-vim-in-terminal-app
http://askubuntu.com/questions/105224/ctrl-page-down-ctrl-page-up
http://www.danrodney.com/mac/
http://www.mac-forums.com/forums/switcher-hangout/121984-easy-way-show-desktop.html
http://www.silvermac.com/2010/show-desktop-on-mac/
alt-enter on excel http://dropline.net/2009/02/adding-new-lines-to-cells-in-excel-for-the-mac/
damn you autocorrect http://osxdaily.com/2011/07/28/turn-off-auto-correct-in-mac-os-x-lion/
windows key https://forums.virtualbox.org/viewtopic.php?f=1&t=17641
sublime text column selection https://www.sublimetext.com/docs/3/column_selection.html


! software 
homebrew http://brew.sh/ (to install wget, parallel)
uninstall http://lifehacker.com/5828738/the-best-app-uninstaller-for-mac
ntfs mounts http://macntfs-3g.blogspot.com/, http://www.tuxera.com/products/tuxera-ntfs-for-mac/
filesystem space analyzer http://www.derlien.com/downloads/index.html
jedit, textwrangler https://groups.google.com/forum/?fromgroups=#!topic/textwrangler/nb3Nw1GC4Fo
teamviewer
dropbox
evernote
tiddlywiki
show desktop
virtualbox
ms office for mac
skype
sqldeveloper
picasa
fx photo studio pro
camtasia
little snapper
mpeg streamclip
mucommander
chicken vnc
crossover
flashplayer
Firefox 4.0 RC 2 , do this after the install https://discussions.apple.com/message/21335991#21335991
filezilla
appcleaner
http://www.ragingmenace.com/software/menumeters/index.html#sshot
kdiff
http://manytricks.com/timesink/ alternative to manictime
http://www.macupdate.com/app/mac/28171/ichm
http://i-funbox.com/ifunboxmac/ copying from iphone to mac
terminator http://software.jessies.org/terminator/#downloads, https://drive.google.com/folderview?id=0BzZNCgKvEkQYZDBNTm1HWThOaEU&usp=drive_web#list
http://www.freemacware.com/jellyfissh/ <-- can save passwords
nmap http://nmap.org/download.html#macosx
http://adium.im/ <-- instant messenger
https://itunes.apple.com/us/app/battery-time/id547105832 <-- battery time
ithoughtsx https://itunes.apple.com/us/app/ithoughtsx/id720669838?mt=12, http://toketaware.com/howto/
snagit, http://feedback.techsmith.com/techsmith/topics/snagit_file_back_up?page=1#reply_8579747
path finder
ntfs-3g / tuxera ntfs
http://support.agilebits.com/kb/syncing/how-to-move-your-1password-data-file-between-pc-and-mac
http://www.donationcoder.com/Software/Mouser/screenshotcaptor/, http://download.cnet.com/Screenshot-Captor/3000-20432_4-10433616.html <-- long screenshots
http://thepdf.com/unlock-pdf.html   <-- unlock PDF restrictions
[[licecap - gif]] - gif creator 
istat menus
cleanmymac3


pending:
http://lifehacker.com/5880540/the-best-screen-capture-tool-for-mac-os-x
http://mac.appstorm.net/roundups/utilities-roundups/10-screen-recording-tools-for-mac/

oracle and mac
http://tjmoracle.tumblr.com/post/26025230295/os-x-software-for-oracle-developers
http://blog.enkitec.com/2011/08/get-oracle-instant-client-working-on-mac-os-x-lion/


! macport and fink
http://macosx.com/forums/mac-os-x-system-mac-software/306582-yum-apt-get.html
http://sparkyspider.blogspot.com/2010/03/apt-get-install-yum-install-on-mac-os-x.html
http://www.macports.org/index.php
https://developer.apple.com/xcode/
http://forums.macrumors.com/showthread.php?t=720035
http://scottlab.ucsc.edu/~wgscott/xtal/wiki/index.php/Main_Page


! hibernate
http://www.youtube.com/watch?feature=fvwp&v=XA0MnnEFmDQ&NR=1
http://forums.macrumors.com/showthread.php?t=1491002
http://apple.stackexchange.com/questions/26842/is-there-a-way-to-hibernate-in-mac
http://www.macworld.com/article/1053471/sleepmode.html
http://deepsleep.free.fr/deepsleep.pdf
http://www.geekguides.co.uk/104/how-to-enable-hibernate-mode-on-a-mac/
http://blog.kaputtendorf.de/2007/08/17/hibernation-tool-for-mac-os/
http://www.garron.me/mac/macbook-hibernate-sleep-deep-standby.html
http://etherealmind.com/osx-hibernate-mode/
http://www.jinx.de/SmartSleep.html
https://itunes.apple.com/au/app/smartsleep/id407721554?mt=12


! presentations
https://georgecoghill.wordpress.com/2012/08/12/highlight-draw-on-your-mac-screen/
http://lifehacker.com/304418/rock-your-presentation-with-the-right-tools-and-apps
http://lifehacker.com/281921/call-out-anything-on-your-screen-with-highlight?tag=softwarefeaturedmacdownload
http://lifehacker.com/255361/mac-tip--zoom-into-any-area-on-the-screen?tag=softwaremacosx
http://lifehacker.com/191126/download-of-the-day--doodim?tag=softwaredownloads
http://forums.macrumors.com/showthread.php?t=1195196
http://www.dummies.com/how-to/content/erase-pen-and-highlighter-drawings-on-your-powerpo.html


! mount iso
http://osxdaily.com/2008/04/22/easily-mount-an-iso-in-mac-os-x/
{{{
using disk utility 
or 
hdiutil mount sample.iso
}}}

! create iso compatible on windows
http://www.makeuseof.com/tag/how-to-create-windows-compatible-iso-disc-images-in-mac-os-x/
{{{
* use disk utility to create the CDR file
* then, enter this line of code to transform the .cdr to an ISO file:
hdiutil makehybrid -iso -joliet -o [filename].iso [filename].cdr
}}}
or just do this all in command line
{{{
hdiutil makehybrid -iso -joliet -o tmp.iso tmp -ov
}}}

! burn DVD
http://www.youtube.com/watch?v=5x7jpIoFixc

! burn ISO linux installer 
http://switchingtolinux.blogspot.com/2007/07/burning-ubuntu-iso-in-mac-os-x.html


! migrate boot device to SSD
http://www.youtube.com/watch?v=Zda6pGH8_1Q



! programming editors
I use Sublime Text 2 and TextWrangler 
http://smyck.net/2011/10/02/text-editors-for-programmers-on-the-mac/
http://sixrevisions.com/web-development/the-15-most-popular-text-editors-for-developers/
http://mac.appstorm.net/roundups/office-roundups/top-10-mac-text-editors/
http://meandmark.com/blog/2010/01/getting-started-with-mac-programming/
sublime text tutorial http://www.youtube.com/watch?v=TZ-bgcJ6fQo


! install fonts
http://www.youtube.com/watch?v=3AIR7_ch9No
 

! juniper vpn
http://wheatoncollege.edu/technology/started/networks-wheaton/juniper-vpn-instructions/juniper-vpn-instructions-for-macintosh/


! compare folders
http://www.macworld.com/article/1167853/use_visualdiffer_to_compare_the_contents_of_folders_and_files.html


! SecureCrt
http://www.vandyke.com/support/tips/backupsessions.html
https://www.vandyke.com/products/securecrt/faq/025.html
http://www.vandyke.com/download/securecrt/5.2/index.html, http://www.itpub.net/forum.php?mod=viewthread&tid=739092
{{{

1) Install SecureCRT on windows and point it to C:\Dropbox\Putty\SecureCRT\Config location
2) Install SecureCRT on mac and copy all Sessions file to windows

Karl-MacBook:Sessions karl$ pwd
/Users/karl/Library/Application Support/VanDyke/SecureCRT/Config/Sessions

Karl-MacBook:Sessions karl$ cp -rpv * /Users/karl/Dropbox/Putty/SecureCRT/Config/Sessions/
Default.ini -> /Users/karl/Dropbox/Putty/SecureCRT/Config/Sessions/Default.ini
__FolderData__.ini -> /Users/karl/Dropbox/Putty/SecureCRT/Config/Sessions/__FolderData__.ini
v2 -> /Users/karl/Dropbox/Putty/SecureCRT/Config/Sessions/v2
v2/__FolderData__.ini -> /Users/karl/Dropbox/Putty/SecureCRT/Config/Sessions/v2/__FolderData__.ini
v2/enkdb01.ini -> /Users/karl/Dropbox/Putty/SecureCRT/Config/Sessions/v2/enkdb01.ini

3) Create symbolic link on Sessions folder to Dropbox

cd /Users/karl/Library/Application Support/VanDyke/SecureCRT/Config/
rm -rf Sessions/
ln -s /Users/karl/Dropbox/Putty/SecureCRT/Config/Sessions Sessions
}}}


! outlook on vbox with zimbra connector 
Get the connector from this site 
https://mail.physics.ucla.edu/downloads/ZimbraConnectorOLK_7.0.1.6307_x86.msi
and the Outlook SP3 here; the connector requires SP3, so you have to install this first:
http://www.microsoft.com/en-us/download/details.aspx?id=27838
then set up a shared folder between the Mac and the VM called /Users/karl/Dropbox/tmp, which is also selectively synced by Dropbox on the Windows VM


! safeboot, rescue mode
http://www.macworld.com/article/2018853/when-good-macs-go-bad-steps-to-take-when-your-mac-wont-start-up.html

! format hard disk for time machine - MAC and NTFS
http://www.youtube.com/watch?v=hdDSpIkv-4o

! enable SSH to localhost 
http://bluishcoder.co.nz/articles/mac-ssh.html
http://superuser.com/questions/555810/how-do-i-ssh-login-into-my-mac-as-root

! SSD migration
http://www.amazon.com/Samsung-Electronics-EVO-Series-2-5-Inch-MZ-7TE250BW/dp/B00E3W1726/ref=pd_bxgy_pc_img_y
http://www.amazon.com/Doubler-Converter-Solution-selected-SuperDrive/dp/B00724W0N2
http://www.youtube.com/watch?v=YWUKAUlxrkg.
google search https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=mac%20os%20x%20migrate%20to%20ssd

! restart audio/sound service without reboot 
http://apple.stackexchange.com/questions/16842/restarting-sound-service
{{{
sudo kill -9 `ps ax|grep 'coreaudio[a-z]' | awk '{print $1}'`
sudo kextunload /System/Library/Extensions/AppleHDA.kext 
sudo kextload /System/Library/Extensions/AppleHDA.kext
}}}

! command tab doesn't work, dock not responding
http://superuser.com/questions/7715/cmd-tab-suddenly-stopped-working-and-my-dock-is-unresponsive-what-do-i-do
{{{
killall -9 Dock
}}}

! killstuff
{{{
Karl-MacBook:~ root# cat killstuff.sh 
kill -9 `ps -ef | grep -i "macos/iphoto" | grep -v grep | awk '{print $2}'`
kill -9 `ps -ef | grep -i "macos/itunes" | grep -v grep | awk '{print $2}'`
kill -9 `ps -ef | grep -i "GoogleSoftwareUpdate" | grep -v grep | awk '{print $2}'`
}}}
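The grep/awk pipelines in killstuff.sh can be written more tersely with pkill -f, which matches against the full command line much like the ps -ef | grep filter (note this sketch is an alternative, not the original script; unlike grep -i, the match here is case-sensitive on Linux procps pkill):

```shell
#!/bin/sh
# killstuff via pkill: -f matches the pattern against the full command
# line, so "MacOS/iPhoto" hits the app binary path just like the
# ps -ef | grep pipeline above.
# pkill exits nonzero when nothing matched; "|| true" keeps going.
pkill -9 -f "MacOS/iPhoto"          || true
pkill -9 -f "MacOS/iTunes"          || true
pkill -9 -f "GoogleSoftwareUpdate"  || true
```

This also avoids the `grep -v grep` dance, since pkill excludes itself from the match.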


! vnc, screensharing
http://www.davidtheexpert.com/post.php?id=5
open safari and type
{{{
vnc://192.168.1.9
}}}

! verify, repair hard disk
http://www.macissues.com/2014/03/22/how-to-verify-and-repair-your-hard-disk-in-os-x/


! dot_clean ._ underscore files 
https://coderwall.com/p/yf7yjq/clean-up-osx-dotfiles



! get CPU information 
{{{
-- 15 inch 
AMAC02P37MYG3QC:~ kristofferson.a.arao$ system_profiler SPHardwareDataType
Hardware:

    Hardware Overview:

      Model Name: MacBook Pro
      Model Identifier: MacBookPro11,2
      Processor Name: Intel Core i7
      Processor Speed: 2.2 GHz
      Number of Processors: 1
      Total Number of Cores: 4
      L2 Cache (per Core): 256 KB
      L3 Cache: 6 MB
      Memory: 16 GB
      Boot ROM Version: MBP112.0138.B16
      SMC Version (system): 2.18f15
      Serial Number (system): C02P37MYG3QC
      Hardware UUID: F0DCC410-9E8A-5D77-98E3-C7767EB0CF8F


-- 13 inch 
Karl-MacBook:~ karl$ system_profiler SPHardwareDataType
Hardware:

    Hardware Overview:

      Model Name: MacBook Pro
      Model Identifier: MacBookPro9,2
      Processor Name: Intel Core i7
      Processor Speed: 2.9 GHz
      Number of Processors: 1
      Total Number of Cores: 2
      L2 Cache (per Core): 256 KB
      L3 Cache: 4 MB
      Memory: 16 GB
      Boot ROM Version: MBP91.00D3.B0C
      SMC Version (system): 2.2f44
      Serial Number (system): C1MK911MDV31
      Hardware UUID: 78B26406-BF78-531D-BCFB-1C3289BD44A5
      Sudden Motion Sensor:
          State: Enabled

}}}

http://fortysomethinggeek.blogspot.com/2012/11/getting-cpu-info-from-command-line-in.html
<<<
sysctl -n machdep.cpu.brand_string  
<<<
{{{
Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
}}}

<<<
sysctl -a | grep machdep.cpu
<<<
{{{
machdep.cpu.max_basic: 13
machdep.cpu.max_ext: 2147483656
machdep.cpu.vendor: GenuineIntel
machdep.cpu.brand_string: Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
machdep.cpu.family: 6
machdep.cpu.model: 70
machdep.cpu.extmodel: 4
machdep.cpu.extfamily: 0
machdep.cpu.stepping: 1
machdep.cpu.feature_bits: 9221959987971750911
machdep.cpu.leaf7_feature_bits: 10155
machdep.cpu.extfeature_bits: 142473169152
machdep.cpu.signature: 263777
machdep.cpu.brand: 0
machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ DTES64 MON DSCPL VMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C
machdep.cpu.leaf7_features: SMEP ERMS RDWRFSGS TSC_THREAD_OFFSET BMI1 AVX2 BMI2 INVPCID FPU_CSDS
machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT RDTSCP TSCI
machdep.cpu.logical_per_package: 16
machdep.cpu.cores_per_package: 8
machdep.cpu.microcode_version: 15
machdep.cpu.processor_flag: 5
machdep.cpu.mwait.linesize_min: 64
machdep.cpu.mwait.linesize_max: 64
machdep.cpu.mwait.extensions: 3
machdep.cpu.mwait.sub_Cstates: 270624
machdep.cpu.thermal.sensor: 1
machdep.cpu.thermal.dynamic_acceleration: 1
machdep.cpu.thermal.invariant_APIC_timer: 1
machdep.cpu.thermal.thresholds: 2
machdep.cpu.thermal.ACNT_MCNT: 1
machdep.cpu.thermal.core_power_limits: 1
machdep.cpu.thermal.fine_grain_clock_mod: 1
machdep.cpu.thermal.package_thermal_intr: 1
machdep.cpu.thermal.hardware_feedback: 0
machdep.cpu.thermal.energy_policy: 1
machdep.cpu.xsave.extended_state: 7 832 832 0
machdep.cpu.xsave.extended_state1: 1 0 0 0
machdep.cpu.arch_perf.version: 3
machdep.cpu.arch_perf.number: 4
machdep.cpu.arch_perf.width: 48
machdep.cpu.arch_perf.events_number: 7
machdep.cpu.arch_perf.events: 0
machdep.cpu.arch_perf.fixed_number: 3
machdep.cpu.arch_perf.fixed_width: 48
machdep.cpu.cache.linesize: 64
machdep.cpu.cache.L2_associativity: 8
machdep.cpu.cache.size: 256
machdep.cpu.tlb.inst.large: 8
machdep.cpu.tlb.data.small: 64
machdep.cpu.tlb.data.small_level1: 64
machdep.cpu.tlb.shared: 1024
machdep.cpu.address_bits.physical: 39
machdep.cpu.address_bits.virtual: 48
machdep.cpu.core_count: 4
machdep.cpu.thread_count: 8
machdep.cpu.tsc_ccc.numerator: 0
machdep.cpu.tsc_ccc.denominator: 0
}}}

{{{
sysctl -a | grep machdep.cpu | grep core_count
machdep.cpu.core_count: 4

sysctl -a | grep machdep.cpu | grep thread_count
machdep.cpu.thread_count: 8
}}}
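The two greps above can be collapsed into a single awk pass; a minimal sketch, run here against captured output since the machdep.cpu.* sysctl keys only exist on macOS:

```shell
# Parse core_count/thread_count from `sysctl -a`-style "key: value" lines.
# /tmp/sysctl_dump.txt stands in for real `sysctl -a` output (macOS-only).
cat > /tmp/sysctl_dump.txt <<'EOF'
machdep.cpu.core_count: 4
machdep.cpu.thread_count: 8
EOF
cores=$(awk -F': ' '/machdep.cpu.core_count/ {print $2}' /tmp/sysctl_dump.txt)
threads=$(awk -F': ' '/machdep.cpu.thread_count/ {print $2}' /tmp/sysctl_dump.txt)
echo "$cores cores, $threads threads"   # -> 4 cores, 8 threads
```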




! get memory information 
{{{
hostinfo | grep memory
}}}
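hostinfo already reports memory in GB; sysctl's hw.memsize reports raw bytes instead. A hedged conversion sketch (the sample value stands in for `sysctl -n hw.memsize`, which is macOS-only):

```shell
# hw.memsize is in bytes; integer-divide by 1024^3 to get GB.
memsize=17179869184   # sample: what `sysctl -n hw.memsize` returns on a 16 GB Mac
echo "$((memsize / 1024 / 1024 / 1024)) GB"   # -> 16 GB
```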


! 850 EVO mSATA vs 2.5-inch
http://www.samsung.com/global/business/semiconductor/minisite/SSD/global/html/ssd850evo/specifications.html
http://www.legitreviews.com/samsung-evo-850-msata-m2-ssd-review_160540/7
http://www.storagereview.com/samsung_ssd_850_evo_ssd_review
http://www.storagereview.com/samsung_850_evo_msata_ssd_review


! Caffeine - keep your mac awake
http://apple.stackexchange.com/questions/76107/how-can-i-keep-my-mac-awake-and-locked


! teamviewer reset id 
http://changeteamviewerid.blogspot.com/2012/10/get-new-teamviewer-id-on-windows.html


! install gnu parallel
homebrew http://brew.sh/ (to install wget, parallel)
https://www.0xcb0.com/2011/10/19/running-parallel-bash-tasks-on-os-x/
https://darknightelf.wordpress.com/2015/01/01/gnu-parallel-on-osx/

! pdftotext 
"brew install poppler"


! ._ in dropbox SSD 
https://www.dropboxforum.com/t5/Installation-and-desktop-app/Dot-underscore-files-appeared-after-moving-Dropbox-location-on/td-p/107034/page/2


! ms word blank images 
http://www.worldstart.com/seeing-blank-boxes-instead-of-pasted-pictures-in-ms-word/   "show picture placeholders" settings


! dropbox conflicted copy
https://www.dropbox.com/en/help/36
https://www.dropbox.com/en/help/7674
https://ttboj.wordpress.com/2014/09/30/fixing-dropbox-conflicted-copy-problems/
https://gist.github.com/purpleidea/0ed86f735807759d455c
https://www.dropboxforum.com/t5/Installation-and-desktop-app/How-do-you-avoid-to-create-conflicted-copies-when-you-make-some/td-p/45234
https://www.engadget.com/2013/02/20/finding-dropbox-conflicted-copy-files-automatically/


! migrate dropbox to new drive from 1TB to 2TB
<<<
* here the existing vm drive is 1TB
* format the new SSD and name it vm2, encrypted and password protected (GUID partition map)
* use Carbon Copy Cloner to clone the 1TB drive to the 2TB drive
* rename vm2 to vm
* also rename the old vm drive

all paths stay the same!
<<<


! How Dropbox handles downgrades
https://news.ycombinator.com/item?id=16445751



! macbook recovery 
http://www.toptenreviews.com/software/backup-recovery/best-mac-hard-drive-recovery/
https://www.youtube.com/watch?v=PKUlMHkCXUk
https://www.prosofteng.com/data-rescue-recovery-software/
https://discussions.apple.com/thread/2653403?start=0&tstart=0
http://www.tomshardware.com/answers/id-2625068/accidentally-formatted-full-1tb-hard-drive-disk-manager-blank-chance-recovering-data.html


! closed lid no sleep
http://superuser.com/questions/38840/is-there-a-way-to-close-the-lid-on-a-macbook-without-putting-it-sleep
http://lifehacker.com/5934158/nosleep-for-mac-prevents-your-macbook-from-sleeping-when-you-close-the-lid



! disk encryption
http://www.imore.com/encrypted-disk-images-dropbox-protect-sensitive-files
http://lifehacker.com/5794486/how-to-add-a-second-layer-of-encryption-to-dropbox
http://apple.stackexchange.com/questions/42257/how-can-i-mount-an-encrypted-disk-from-the-command-line
{{{
diskutil cs list 
diskutil list
diskutil cs unlockVolume 110B820A-52A5-4E3D-AB1C-FCF0263DC5A6 
diskutil mount /dev/disk3

diskutil list | grep "vm" | awk '{print $NF}' | sed -e 's/s[0-9].*$//'
diskutil eject disk4
}}}
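The sed in the last pipeline strips the slice suffix so that `diskutil eject` gets the whole disk rather than one partition; a demo against a single captured `diskutil list` line (diskutil itself is macOS-only):

```shell
# awk grabs the device name (last field); sed drops the sNN slice suffix.
line="   2:          Apple_HFS vm      999.9 GB   disk3s2"
dev=$(echo "$line" | awk '{print $NF}' | sed -e 's/s[0-9].*$//')
echo "$dev"   # -> disk3
```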


! disk unlock encrypted 
External Hard Disc won't mount - Partition Map error.  Can't repair with Disk Utility https://discussions.apple.com/thread/6775802?start=0&tstart=0
https://derflounder.wordpress.com/2011/11/23/using-the-command-line-to-unlock-or-decrypt-your-filevault-2-encrypted-boot-drive/


! Spinning Down Unmounted Disks in OS X
https://anders.com/cms/405/Mac/OS.X/Hard.Disk/Spindown
External Hard Drive - Does it Need To Spin Down? Is "Safely Remove" Enough https://ubuntuforums.org/showthread.php?t=2235050
https://www.reddit.com/r/techsupport/comments/2s6xnu/external_hd_not_spinning_down_after_unmounting/


! ulimit - too many open files 
http://superuser.com/questions/433746/is-there-a-fix-for-the-too-many-open-files-in-system-error-on-os-x-10-7-1
{{{
It seems like there is an entirely different method for changing the open files limit for each version of OS X!

For OS X Sierra (10.12.x) you need to:

1. In /Library/LaunchDaemons create a file named limit.maxfiles.plist and paste in the following (feel free to change the two numbers, which are the soft and hard limits, respectively):

<?xml version="1.0" encoding="UTF-8"?>  
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"  
        "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">  
  <dict>
    <key>Label</key>
    <string>limit.maxfiles</string>
    <key>ProgramArguments</key>
    <array>
      <string>launchctl</string>
      <string>limit</string>
      <string>maxfiles</string>
      <string>64000</string>
      <string>524288</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>ServiceIPC</key>
    <false/>
  </dict>
</plist> 
2. Change the owner of your new file:

sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
3. Load these new settings:

sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
4. Finally, check that the limits are correct:

launchctl limit maxfiles
}}}

! macos sierra bugs

!! Not Allowing Identified Developer App Downloads
https://support.apple.com/kb/PH25088?locale=en_US
Mac OS Sierra: Install apps from unidentified developers (anywhere) https://www.youtube.com/watch?v=AA9jQFxn9Pw
https://www.tekrevue.com/tip/gatekeeper-macos-sierra/
{{{
sudo spctl --master-disable
}}}

!! OS X kernel asynchronous I/O limits 
http://www.firewing1.com/blog/2013/08/02/setting-os-x-kernel-asynchronous-io-limits-avoid-virtualbox-crashing-os-x
{{{
sudo sysctl -w  kern.aiomax=10240 kern.aioprocmax=10240 kern.aiothreads=16
sudo spctl --master-disable

}}}

!! Remove Symantec software for Mac OS using RemoveSymantecMacFiles
https://discussions.apple.com/message/31506414#31506414  MacBookPro13,3 2016 w/ touchbar frequent crash (Sierra 10.12.3)
https://support.symantec.com/en_US/article.TECH103489.html ftp://ftp.symantec.com/misc/tools/mactools/RemoveSymantecMacFiles.zip 
Mac Malware Guide : How does Mac OS X protect me? http://www.thesafemac.com/mmg-builtin/

!! read mac crash reports 
https://www.cnet.com/news/tutorial-an-introduction-to-reading-mac-os-x-crash-reports/

!! mds_stores sometimes uses 100% of CPU
https://discussions.apple.com/thread/5779822?tstart=0

! how to run a script in mac on startup
http://stackoverflow.com/questions/6442364/running-script-upon-login-mac <- use automator
http://stackoverflow.com/questions/6442364/running-script-upon-login-mac/13372744#13372744
http://www.developernotes.com/archive/2011/04/06/169.aspx


! uninstall spotify 
https://community.spotify.com/t5/Desktop-Linux-Windows-Web-Player/Spotify-constantly-crashes-on-my-MAC/td-p/15283


! disable internal keyboard 
https://pqrs.org/osx/karabiner/
http://www.mackungfu.org/cat-proofing-a-macbook-keyboard


! macbook double key press fix 
https://github.com/aahung/Unshaky
https://unshaky.nestederror.com/?ref=producthunt
https://www.wsj.com/graphics/apple-still-hasnt-fixed-its-macbook-keyboard-problem/?ns=prod/accounts-wsj
https://www.reddit.com/r/macbook/comments/9n8hgi/my_experience_with_macbook_pro_2018_keyboard/


! ms paint in macbook 
https://paintbrush.sourceforge.io/downloads/


! mount from read-only to RW, external drive
{{{
sudo mount -u -o rw,noowners /Volumes/vm
}}}
https://apple.stackexchange.com/questions/92979/how-to-remount-an-internal-drive-as-read-write-in-mountain-lion



! zsh as the default shell on your Mac
https://support.apple.com/en-us/HT208050
{{{
To silence this warning, you can add this command to ~/.bash_profile or ~/.profile:
export BASH_SILENCE_DEPRECATION_WARNING=1
}}} 
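Per the Apple note, switching the login shell is one command; sketched here without executing it, since chsh is interactive and prompts for a password:

```shell
# chsh -s /bin/zsh is what the Apple article describes; a new login shell
# must be listed in /etc/shells, so check that first.
target=/bin/zsh
if grep -qx "$target" /etc/shells 2>/dev/null; then
  echo "ok to run: chsh -s $target"
else
  echo "$target not in /etc/shells"
fi
```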




! end







[[About]] [[RSS & Search]] [[TagCloud]] [[Oracle]] [[.MOSNotes]] [[OraclePerformance]] [[Benchmark]] [[Capacity Planning]] [[Hardware and OS]] [[EngineeredSystems]] [[Exadata]] [[High Availability]] [[PerformanceTools]] [[Troubleshooting & Internals]] [[EnterpriseManager]] [[MigrationUpgrade]] [[BackupAndRecovery]] [[Solaris]] [[AIX]] [[Linux]] [[DevOps]] [[Data Engineering]] [[Data Science]] [[AI]] [[CodeNinja]] [[Coderepo]] [[SQL Tuning]] [[DataWarehouse]] [[DB flavors]] [[etc..]]
! Mainframe (MIPS) to Sparc sizing
see discussions here https://www.evernote.com/l/ADCaWiqj_VxB9anL0h3PAbDiv8fzv4G48pU

! stromasys
https://stromasys.atlassian.net/wiki/spaces/KBP/pages/17039488/CHARON+Linux+server+-+Connection+to+guest+console+blocked+by+firewall
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:16370675423662:

<<<
- We have a third party application we use for selecting specific
values from several tables. The character values stored in a field could be in any of the following formats:
"Bell", "bell" , "BELL" , "beLL" etc..
1) Is there a way to force (change) values (either to UPPER or Lower case) during a DML other than using triggers? 
if not,
2) Is there a way to force a select query (other than using functions UPPER or LOWER) to return all the values regardless of the case that a user enters? 
<<<


! nls_comp and nls_sort at the system level or logon trigger 
{{{
1) no

2) in 10g, yes.

ops$tkyte@ORA10G> create table t ( data varchar2(20) );

Table created.

ops$tkyte@ORA10G>
ops$tkyte@ORA10G> insert into t values ( 'Hello' );

1 row created.

ops$tkyte@ORA10G> insert into t values ( 'HeLlO' );

1 row created.

ops$tkyte@ORA10G> insert into t values ( 'HELLO' );

1 row created.

ops$tkyte@ORA10G>
ops$tkyte@ORA10G> create index t_idx on
2 t( nlssort( data, 'NLS_SORT=BINARY_CI' ) );

Index created.

ops$tkyte@ORA10G> pause

ops$tkyte@ORA10G>
ops$tkyte@ORA10G> variable x varchar2(25)
ops$tkyte@ORA10G> exec :x := 'hello';

PL/SQL procedure successfully completed.

ops$tkyte@ORA10G>
ops$tkyte@ORA10G> select * from t where data = :x;

no rows selected

ops$tkyte@ORA10G> pause

ops$tkyte@ORA10G>
ops$tkyte@ORA10G> alter session set nls_comp=ansi;

Session altered.

ops$tkyte@ORA10G> alter session set nls_sort=binary_ci;

Session altered.

ops$tkyte@ORA10G> select * from t where data = :x;

DATA
--------------------
Hello
HeLlO
HELLO

ops$tkyte@ORA10G> pause

ops$tkyte@ORA10G>
ops$tkyte@ORA10G> set autotrace on
ops$tkyte@ORA10G> select /*+ first_rows */ * from t where data = :x;

DATA
--------------------
Hello
HeLlO
HELLO


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=2 Card=1 Bytes=12)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'T' (TABLE) (Cost=2 Card=1 Bytes=12)
   2    1     INDEX (RANGE SCAN) OF 'T_IDX' (INDEX) (Cost=1 Card=1)
}}}
''Management Repository Views'' http://docs.oracle.com/cd/B16240_01/doc/em.102/b40007/views.htm#BACCEIBI

http://nyoug.org/Presentations/2011/March/Iotzov_OEM_Repository.pdf
{{{
MGMT$METRIC_CURRENT 
MGMT$METRIC_DETAILS 
MGMT$METRIC_HOURLY 
MGMT$METRIC_DAILY

-- Reference Data
MGMT$TARGET_TYPE
MGMT$GROUP_DERIVED_MEMBERSHIPS
}}}
http://www.oracledbasupport.co.uk/querying-grid-repository-tables/
{{{
Modify these retention policies by updating the mgmt_parameters table in the OMR.
Table Name                   Retention Parameter                  Retention Days
MGMT_METRICS_RAW             mgmt_raw_keep_window                    7
MGMT_METRICS_1HOUR           mgmt_hour_keep_window                   31
MGMT_METRICS_1DAY            mgmt_day_keep_window                    365
}}}

10gR2 MGMT$METRIC_DAILY http://docs.oracle.com/cd/B19306_01/em.102/b16246/views.htm#sthref444
12cR2 MGMT$METRIC_DAILY http://docs.oracle.com/cd/E24628_01/doc.121/e25161/views.htm#sthref1824
12cR2 GC$METRIC_VALUES_DAILY http://docs.oracle.com/cd/E24628_01/doc.121/e25161/views.htm#BABFCJBD   <-- new in 12c
gc_metric_values_daily/hourly are capacity planning gold mine views in #em12c #upallnightinvestigatingem12c http://goo.gl/YgMox  <-- coolstuff!



! purging before 13c 
http://learnwithmedba.blogspot.com/2013/01/increasemodifypurging-retention-for.html


! purging 13c 
https://support.oracle.com/knowledge/Enterprise%20Management/2251910_1.html
https://gokhanatil.com/2016/08/how-to-modify-the-retention-time-for-metric-data-in-em13c.html



! OEM sampling 
{{{
The collection is every 15 minutes; I made use of the following:

$ cat metric_current.sql
col tm format a20
col target_name format a20
col metric_label format a20
col metric_column format a20
col value format a20
select TO_CHAR(collection_timestamp,'MM/DD/YY HH24:MI:SS') tm, target_name, metric_label, metric_column, value
from mgmt$metric_current
where metric_label = 'Load'
and metric_column = 'cpuUtil'
and target_name = 'desktopserver.local';

oracle@desktopserver.local:/home/oracle/dba/karao/scripts:emrep
$ cat metric_currentloop
#!/bin/bash

while :; do
sqlplus "/ as sysdba" <<! &
spool metric_current.txt append
set lines 300
@metric_current.sql
exit
!
sleep 10
echo
done

while : ; do cat metric_current.txt  | awk '{print $2}' | sort | uniq ; echo "---"; sleep 2; done

--------------------
15:11:59
15:26:59
15:41:59
15:56:59
}}}

{{{

col tm format a20
col target_name format a20
col metric_label format a20
col metric_column format a20
col value format a20
select TO_CHAR(collection_timestamp,'MM/DD/YY HH24:MI:SS') tm, target_name, metric_label, metric_column, value 
from mgmt$metric_current
where metric_label = 'Load'
and metric_column = 'cpuUtil'
and target_name = 'desktopserver.local';

select * from (
select TO_CHAR(rollup_timestamp,'MM/DD/YY HH24:MI:SS') tm, target_name, metric_label, metric_column, sample_count, average
from mgmt$metric_hourly
where metric_label = 'Load'
and metric_column = 'cpuUtil'
and target_name = 'desktopserver.local'
order by tm desc
) 
where rownum < 11;
}}}



! other references
http://www.slideshare.net/MaazAnjum/maaz-anjum-ioug-em12c-capacity-planning-with-oem-metrics
http://www.rmoug.org/wp-content/uploads/News-Letters/fall13web.pdf
http://www.oracle.com/webfolder/technetwork/tutorials/obe/em/em12c/metric_extensions/Metric_Extensions.html
http://docs.oracle.com/cd/E24628_01/doc.121/e24473/metric_extension.htm#EMADM10033

http://www.oracledbasupport.co.uk/querying-grid-repository-tables/
http://blog.dbi-services.com/query-the-enterprise-manager-collected-metrics/
http://www.nyoug.org/Presentations/2011/March/Iotzov_OEM_Repository.pdf
http://www.slideshare.net/Datavail/optimizing-alert-monitoring-with-oracle-enterprise-manager?next_slideshow=1






http://www.manictime.com/Support/Help/v15/2/how-do-i-transfer-data-to-another-computer

! manic time on mac
!! install the server 
https://www.manictime.com/Teams/How-To-Install-linux-mac
http://localhost:8080/#/personal/day-view?dayDate=8-21-2018&groupBy=2&userIds=0
!! then install the mac client 
https://www.manictime.com/mac/download
https://blogs.oracle.com/optimizer/entry/how_does_sql_plan_management
<<<
A signature is a unique SQL identifier generated from the normalized SQL text (uncased and with whitespaces removed). This is the same technique used by SQL profiles and SQL patches. This means, if you issue identical SQL statements from two different schemas they would resolve to the same SQL plan baseline.
<<<

{{{
http://goo.gl/qxTw0
http://www.techimo.com/forum/technical-support/220605-new-hard-drive-not-detected-bios.html
http://www.sevenforums.com/hardware-devices/140312-hdd-sata3-working-win7-but-not-detected-bios.html

--crappy SIIG controller
http://goo.gl/3CwQc
http://www.newegg.com/Product/Product.aspx?Item=N82E16816150028

--sata3 backwards compatible
http://www.techpowerup.com/forums/showthread.php?t=125631
http://www.tomshardware.com/forum/262341-32-sata-hard-disk-pluged-sata-port
http://en.wikipedia.org/wiki/Serial_ATA#Backward_and_forward_compatibility


-- find disk on SATA slot, find disk SATA speed
http://serverfault.com/questions/194506/find-out-if-disk-is-ide-or-sata
http://hardforum.com/showthread.php?t=1619242
http://www.cyberciti.biz/tips/how-fast-is-linux-sata-hard-disk.html
http://forums.gentoo.org/viewtopic-t-883181-start-0.html             <-- GOOD STUFF
http://ubuntuforums.org/archive/index.php/t-1635904.html
http://www.spinics.net/lists/raid/msg32885.html
http://www.linux-archive.org/centos/316405-how-map-ata-numbers-dev-sd-numbers.html
http://www.issociate.de/board/post/507665/Possible_HDD_error,_how_do_I_find_which_HDD_it_is?.html
http://serverfault.com/questions/5336/how-do-i-make-linux-recognize-a-new-sata-dev-sda-drive-i-hot-swapped-in-without
http://forums.fedoraforum.org/showthread.php?t=230618           <-- GOOD STUFF



-- WRITE FPDMA QUEUED
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/550559
http://us.generation-nt.com/answer/problem-reproduceable-storage-errors-high-io-load-help-203628822.html?page=2
http://www.linuxonlinehelp.de/?tag=write-fpdma-queued
http://ubuntuforums.org/archive/index.php/t-903198.html
http://web.archiveorange.com/archive/v/PqQ0RyEKwX1raCQooTfa
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/550559
http://forums.gentoo.org/viewtopic-p-4104334.html
http://lime-technology.com/wiki/index.php?title=The_Analysis_of_Drive_Issues   
http://forums.fedoraforum.org/archive/index.php/t-220438.html                                  <-- MAKES SENSE.. ERRORS ON NCQ
http://www.linuxquestions.org/questions/linux-hardware-18/sata-link-down-on-non-existing-sata-channel-694937/
http://ubuntuforums.org/archive/index.php/t-1037819.html
http://www.spinics.net/lists/linux-ide/msg41261.html
https://forums.openfiler.com/viewtopic.php?id=3551
http://www.linuxquestions.org/questions/linux-hardware-18/sata-link-down-on-non-existing-sata-channel-694937/














}}}
http://www.java2s.com/Tutorial/Oracle/0160__View/creatematerializedviewempdeptbuildimmediaterefreshondemandenablequeryrewrite.htm
https://gerardnico.com/db/oracle/methodology_for_designing_and_building_the_materialized_views
https://gerardnico.com/db/oracle/materialized_view
https://gerardnico.com/db/oracle/pre_compute_operations
https://gerardnico.com/dit/owb/materialized_view
https://gerardnico.com/db/oracle/partition/materialized_view


http://kimballgroup.forumotion.net/t127-appropriate-use-of-materialized-views



MATERIALIZED VIEW REFRESH: Locking, Performance, Monitoring
  	Doc ID: 	Note:258252.1


-- REPLICATION

Troubleshooting Guide: Replication Propagation
  	Doc ID: 	Note:1035874.6





-- DIAGNOSIS

ORA-00917 While Using using DBMS_MVIEW.EXPLAIN_REWRITE
  	Doc ID: 	471056.1

ORA-12899 When Executing DBMS_MVIEW.EXPLAIN_REWRITE
  	Doc ID: 	469448.1

Snapshot Refresh Fails with ORA-2055 and ORA-7445
  	Doc ID: 	141086.1

Privileges To Refresh A Snapshot Or Materialized View
  	Doc ID: 	1027174.6

Materialized View Refresh Fails With ORA-942: table or view does not exist
  	Doc ID: 	236652.1

How To Use DBMS_MVIEW.EXPLAIN_REWRITE and EXPLAIN_MVIEW To Diagnose Query Rewrite Problems
  	Doc ID: 	149815.1



-- BLOG
http://avdeo.com/2012/10/14/materialized-views-concepts-discussion-series-1/
http://avdeo.com/2012/10/16/materialized-views-concepts-discussion-series-2/
http://avdeo.com/2012/10/24/materialized-view-concepts-discussion-series-3/



• Linear Algebra = https://www.khanacademy.org/math/linear-algebra
• Differential Calculus = https://www.khanacademy.org/math/differential-calculus
• Integral Calculus = https://www.khanacademy.org/math/integral-calculus
• Probability and Statistics = https://www.khanacademy.org/math/probability

! math notation
https://www.adelaide.edu.au/mathslearning/seminars/MathsNotation2013.pdf
https://www.adelaide.edu.au/mathslearning/seminars/mathsnotation.html
https://www.mathsisfun.com/sets/symbols.html
http://abstractmath.org/MM/MMTOC.htm
http://www.abstractmath.org/Word%20Press/?p=9471
https://www.safaribooksonline.com/library/view/introduction-to-abstract/9781118311738/
https://www.safaribooksonline.com/library/view/introduction-to-abstract/9781118347898/
https://www.safaribooksonline.com/library/view/technical-math-for/9780470598740/

! discrete math 
https://www.lynda.com/Programming-Foundations-tutorials/Basics-discrete-mathematics/411376/475394-4.html


! udemy 


''you only need to replace a physical disk on Exadata if it's a predictive failure''

''References:''
LSI KnowledgeBase http://kb.lsi.com/KnowledgebaseArticle16516.aspx
http://www.fatmin.com/2011/10/lsi-megacli-check-for-failed-raid-controller-battery.html
http://windowsmasher.wordpress.com/2011/08/13/using-megacli-to-monitor-openfiler/
http://timjacobs.blogspot.com/2008/05/installing-lsi-logic-raid-monitoring.html
http://lists.us.dell.com/pipermail/linux-poweredge/2010-December/043835.html

http://www.watters.ws/mediawiki/index.php/RAID_controller_commands
http://artipc10.vub.ac.be/wordpress/2011/09/12/megacli-useful-commands/
http://thornelaboratories.net/documentation/2013/02/01/megacli64-command-usage-cheat-sheet.html
https://wiki.xkyle.com/MegaCLI
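Eyeballing the -PDList transcripts below for nonzero counters doesn't scale; a hedged awk filter, run here against a captured sample since MegaCli64 needs the real controller (it also assumes plain MegaCli output, without the dcli host prefix):

```shell
# Print only slots whose Media/Other/Predictive error counters are nonzero.
# /tmp/pdlist.txt stands in for `/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL`.
cat > /tmp/pdlist.txt <<'EOF'
Slot Number: 5
Device Id: 22
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Slot Number: 6
Device Id: 13
Media Error Count: 146
Other Error Count: 1
Predictive Failure Count: 0
EOF
awk '/Slot Number/ {slot=$3; bad=0}
     /Error Count|Failure Count/ {if ($NF+0 > 0) bad=1}
     /Predictive Failure Count/ {if (bad) print "slot", slot}' /tmp/pdlist.txt
# -> slot 6
```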


{{{
check the logical (virtual) drive info
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -a0



[root@enkcel04 ~]# dcli -l root -g cell_group /opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL | egrep "Degraded|Failed Disks"
enkcel04: Degraded        : 0
enkcel04: Failed Disks    : 0
enkcel05: Degraded        : 0
enkcel05: Failed Disks    : 0
enkcel06: Degraded        : 0
enkcel06: Failed Disks    : 0
enkcel07: Degraded        : 0
enkcel07: Failed Disks    : 0


[root@enkcel04 ~]# dcli -l root -g cell_group /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -LALL -aALL | egrep "Virtual|State"
enkcel04: Adapter 0 -- Virtual Drive Information:
enkcel04: Virtual Drive: 0 (Target Id: 0)
enkcel04: State               : Optimal
enkcel04: Virtual Drive: 1 (Target Id: 1)
enkcel04: State               : Optimal
enkcel04: Virtual Drive: 2 (Target Id: 2)
enkcel04: State               : Optimal
enkcel04: Virtual Drive: 3 (Target Id: 3)
enkcel04: State               : Optimal
enkcel04: Virtual Drive: 4 (Target Id: 4)
enkcel04: State               : Optimal
enkcel04: Virtual Drive: 5 (Target Id: 5)
enkcel04: State               : Optimal
enkcel04: Virtual Drive: 6 (Target Id: 6)
enkcel04: State               : Optimal
enkcel04: Virtual Drive: 7 (Target Id: 7)
enkcel04: State               : Optimal
enkcel04: Virtual Drive: 8 (Target Id: 8)
enkcel04: State               : Optimal
enkcel04: Virtual Drive: 9 (Target Id: 9)
enkcel04: State               : Optimal
enkcel04: Virtual Drive: 10 (Target Id: 10)
enkcel04: State               : Optimal
enkcel04: Virtual Drive: 11 (Target Id: 11)
enkcel04: State               : Optimal
enkcel05: Adapter 0 -- Virtual Drive Information:
enkcel05: Virtual Drive: 0 (Target Id: 0)
enkcel05: State               : Optimal
enkcel05: Virtual Drive: 1 (Target Id: 1)
enkcel05: State               : Optimal
enkcel05: Virtual Drive: 2 (Target Id: 2)
enkcel05: State               : Optimal
enkcel05: Virtual Drive: 3 (Target Id: 3)
enkcel05: State               : Optimal
enkcel05: Virtual Drive: 4 (Target Id: 4)
enkcel05: State               : Optimal
enkcel05: Virtual Drive: 5 (Target Id: 5)
enkcel05: State               : Optimal
enkcel05: Virtual Drive: 6 (Target Id: 6)
enkcel05: State               : Optimal
enkcel05: Virtual Drive: 7 (Target Id: 7)
enkcel05: State               : Optimal
enkcel05: Virtual Drive: 8 (Target Id: 8)
enkcel05: State               : Optimal
enkcel05: Virtual Drive: 9 (Target Id: 9)
enkcel05: State               : Optimal
enkcel05: Virtual Drive: 10 (Target Id: 10)
enkcel05: State               : Optimal
enkcel05: Virtual Drive: 11 (Target Id: 11)
enkcel05: State               : Optimal
enkcel06: Adapter 0 -- Virtual Drive Information:
enkcel06: Virtual Drive: 0 (Target Id: 0)
enkcel06: State               : Optimal
enkcel06: Virtual Drive: 1 (Target Id: 1)
enkcel06: State               : Optimal
enkcel06: Virtual Drive: 2 (Target Id: 2)
enkcel06: State               : Optimal
enkcel06: Virtual Drive: 3 (Target Id: 3)
enkcel06: State               : Optimal
enkcel06: Virtual Drive: 4 (Target Id: 4)
enkcel06: State               : Optimal
enkcel06: Virtual Drive: 5 (Target Id: 5)
enkcel06: State               : Optimal
enkcel06: Virtual Drive: 6 (Target Id: 6)
enkcel06: State               : Optimal
enkcel06: Virtual Drive: 7 (Target Id: 7)
enkcel06: State               : Optimal
enkcel06: Virtual Drive: 8 (Target Id: 8)
enkcel06: State               : Optimal
enkcel06: Virtual Drive: 9 (Target Id: 9)
enkcel06: State               : Optimal
enkcel06: Virtual Drive: 10 (Target Id: 10)
enkcel06: State               : Optimal
enkcel06: Virtual Drive: 11 (Target Id: 11)
enkcel06: State               : Optimal
enkcel07: Adapter 0 -- Virtual Drive Information:
enkcel07: Virtual Drive: 0 (Target Id: 0)
enkcel07: State               : Optimal
enkcel07: Virtual Drive: 1 (Target Id: 1)
enkcel07: State               : Optimal
enkcel07: Virtual Drive: 2 (Target Id: 2)
enkcel07: State               : Optimal
enkcel07: Virtual Drive: 3 (Target Id: 3)
enkcel07: State               : Optimal
enkcel07: Virtual Drive: 4 (Target Id: 4)
enkcel07: State               : Optimal
enkcel07: Virtual Drive: 5 (Target Id: 5)
enkcel07: State               : Optimal
enkcel07: Virtual Drive: 6 (Target Id: 6)
enkcel07: State               : Optimal
enkcel07: Virtual Drive: 7 (Target Id: 7)
enkcel07: State               : Optimal
enkcel07: Virtual Drive: 8 (Target Id: 8)
enkcel07: State               : Optimal
enkcel07: Virtual Drive: 9 (Target Id: 9)
enkcel07: State               : Optimal
enkcel07: Virtual Drive: 10 (Target Id: 10)
enkcel07: State               : Optimal
enkcel07: Virtual Drive: 11 (Target Id: 11)
enkcel07: State               : Optimal


[root@enkcel04 ~]# dcli -l root -g cell_group /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL | egrep "Slot Number|Device Id|Count"
enkcel04: Slot Number: 0
enkcel04: Device Id: 19
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 1
enkcel04: Device Id: 18
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 2
enkcel04: Device Id: 17
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 3
enkcel04: Device Id: 16
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 4
enkcel04: Device Id: 15
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 5
enkcel04: Device Id: 14
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 6
enkcel04: Device Id: 13
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 7
enkcel04: Device Id: 12
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 8
enkcel04: Device Id: 11
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 9
enkcel04: Device Id: 10
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 10
enkcel04: Device Id: 9
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel04: Slot Number: 11
enkcel04: Device Id: 8
enkcel04: Media Error Count: 0
enkcel04: Other Error Count: 0
enkcel04: Predictive Failure Count: 0
enkcel05: Slot Number: 0
enkcel05: Device Id: 19
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 1
enkcel05: Device Id: 18
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 2
enkcel05: Device Id: 17
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 3
enkcel05: Device Id: 16
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 4
enkcel05: Device Id: 15
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 5
enkcel05: Device Id: 22
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 6
enkcel05: Device Id: 13
enkcel05: Media Error Count: 146
enkcel05: Other Error Count: 1
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 7
enkcel05: Device Id: 12
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 8
enkcel05: Device Id: 11
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 9
enkcel05: Device Id: 10
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 10
enkcel05: Device Id: 9
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel05: Slot Number: 11
enkcel05: Device Id: 8
enkcel05: Media Error Count: 0
enkcel05: Other Error Count: 0
enkcel05: Predictive Failure Count: 0
enkcel06: Slot Number: 0
enkcel06: Device Id: 19
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 1
enkcel06: Device Id: 18
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 2
enkcel06: Device Id: 17
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 1
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 3
enkcel06: Device Id: 16
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 4
enkcel06: Device Id: 15
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 5
enkcel06: Device Id: 14
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 6
enkcel06: Device Id: 13
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 7
enkcel06: Device Id: 12
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 8
enkcel06: Device Id: 11
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 9
enkcel06: Device Id: 10
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 10
enkcel06: Device Id: 9
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel06: Slot Number: 11
enkcel06: Device Id: 8
enkcel06: Media Error Count: 0
enkcel06: Other Error Count: 0
enkcel06: Predictive Failure Count: 0
enkcel07: Slot Number: 0
enkcel07: Device Id: 34
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 1
enkcel07: Device Id: 33
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 2
enkcel07: Device Id: 32
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 1
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 3
enkcel07: Device Id: 31
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 4
enkcel07: Device Id: 30
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 5
enkcel07: Device Id: 29
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 6
enkcel07: Device Id: 28
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 7
enkcel07: Device Id: 27
enkcel07: Media Error Count: 1
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 8
enkcel07: Device Id: 26
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 9
enkcel07: Device Id: 25
enkcel07: Media Error Count: 14
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 10
enkcel07: Device Id: 24
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0
enkcel07: Slot Number: 11
enkcel07: Device Id: 23
enkcel07: Media Error Count: 0
enkcel07: Other Error Count: 0
enkcel07: Predictive Failure Count: 0





}}}














''The memory experts''
http://www.crucial.com/index.aspx  

''You Probably Don't Need More DIMMs'' http://h30507.www3.hp.com/t5/Eye-on-Blades-Blog-Trends-in/You-Probably-Don-t-Need-More-DIMMs/ba-p/81647#.UvvgJUJdWig



http://img339.imageshack.us/i/hynix2gb.jpg/sr=1

Here are the details you need to know when buying or upgrading memory for your machine:

[img[picturename| https://lh3.googleusercontent.com/_F2x5WXOJ6Q8/TZwsfhjwZNI/AAAAAAAABLo/oHZs5EC7HRI/physicalmemory.png]]



SDRAM vs DIMM
http://forums.techguy.org/hardware/161660-sdram-vs-dimm.html

Why does 1333 memory just run at 1066?
http://forum.notebookreview.com/dell-latitude-vostro-precision/475324-e6410-owners-thread-149.html
http://forums.anandtech.com/showthread.php?t=2141483
http://www.computerhope.com/issues/ch001376.htm#1
http://www.liberidu.com/blog/?p=2343&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+Bloggralikecom+(blog.gralike.com)
https://blogs.oracle.com/UPGRADE/

https://apex.oracle.com/database-features/
https://oradiff.oracle.com/ords/r/oradiff/oradiff/home?session=711138461406179


Migration Solutions Directory
http://www.oracle.com/technology/tech/migration/mti/index.html



http://www.oracle.com/technology/tech/migration/maps/index.html



Migration Workbench
Application Migration Assistant
Oracle Database Migration Verifier

* Migration Technology Center on OTN is the main entry point on OTN for all your migration requirements.
http://www.oracle.com/technology/tech/migration/index.html

* Migration Solutions Directory on OTN provides a quick and easy way to search for the migration solutions that are best suited to your particular migration.
http://www.oracle.com/technology/tech/migration/mti/index.html

* Migration Maps provide a set of step-by-step instructions to guide you through the recommended process for the migration of an existing third-party database to Oracle.
http://www.oracle.com/technology/tech/migration/maps/index.html

* Oracle Migration Knowledge Base offers a collection of technical articles to help you resolve any migration issue.
http://www.oracle.com/technology/tech/migration/kb/index.html

* Discussion Forums on OTN, monitored by developers
      o Oracle Migration Workbench Forum
      	http://forums.oracle.com/forums/forum.jspa?forumID=1
      o Application Migration Assistant Forum
      	http://forums.oracle.com/forums/forum.jspa?forumID=182
      	
* Relational Migration Center of Excellence Introduction
http://www.oracle.com/technology/tech/migration/isv/mig_services.html
      	
      	      	

Migration Services
------------------

The PTS group has successfully completed over 1,000 partner migrations from Sybase, Informix, Microsoft, IBM DB2, Mumps, and other databases to Oracle-based solutions. PTS has also successfully completed over 300 partner migrations from BEA Weblogic, IBM Websphere, JBoss, and Sun iPlanet to Oracle Application Server 10g. PTS migration engagements typically last from a few days to a couple of weeks depending on the size, type, and complexity of the project. Technical resources such as on-site support, phone support, and e-mail support are all available.

When you have your solution running on Oracle, PTS also provides hands-on architecture and design reviews, database and middle-tier benchmarks, performance and tuning, Java/J2EE coding, RAC validations, proofs of concept, project planning and monitoring, product deployment, and new Oracle product release implementation. 



---------------------------------------------------------------------------


Customer Information
Migration Reasons and Goals
Database Information
Operational Procedures and Requirements
Other Special Logic/Subsystems to Migrate
Documentation

----

Migration Lifecycle:
1) Evaluation
2) Assessment
3) Migration
4) Testing
5) Optimization
6) Customer Acceptance
7) Production
8) Project Support

Information and Guide for Finding Migration Information
  	Doc ID: 	Note:468083.1

Consolidated Reference List For Migration / Upgrade Service Requests
  	Doc ID: 	762540.1


-- UPGRADE PLANNER
What is the Upgrade Planner and how do I use it? [ID 1277424.1]
http://supportweb.siebel.com/crmondemand/videos/Customer_Support/UITraining/MOS2010/upgradeplanner_introduction/upgradeplanner_introduction.htm
http://supportweb.siebel.com/crmondemand/videos/Customer_Support/UITraining/MOS2010/upgradeplanner_advanced/upgradeplanner_advanced.htm


-- UPGRADE ADVISOR 

Upgrade Advisor: Database [ID 251.1]
Oracle Support Upgrade Advisors [ID 250.1]
Upgrade Advisor: OracleAS 10g Forms/Reports Services to FMW 11g [ID 252.1]
Oracle Support Lifecycle Advisors [ID 250.1]  <-- ''new!''



-- UPGRADE COMPANION

Oracle 11gR2 Upgrade Companion
  	Doc ID: 	785351.1

10g Upgrade Companion
 	Doc ID:	Note:466181.1
 	
Oracle 11g Upgrade Companion
 	Doc ID:	Note:601807.1


Compatibility Matrix for Export And Import Between Different Oracle Versions
  	Doc ID: 	132904.1
 	
 	

-- UPGRADE METHODS

Different Upgrade Methods For Upgrading Your Database
 	Doc ID:	Note:419550.1

How to Perform a Full Database Export Import during Upgrade, Migrate, Copy, or Move of a Database
  	Doc ID: 	286775.1
 	
 	

-- UPGRADE WITH DATA GUARD

Upgrading to 10g with a Physical Standby in Place
 	Doc ID:	Note:278521.1
 	
Upgrading to 10g with a Logical Standby in Place
 	Doc ID:	Note:278108.1
 	
 	

-- 7 to 8/8i

Note 122926.1 What Happens Inside Oracle when Migrating from 7 to 8/8i




-- CATCPU

Do I Need To Run catcpu.sql After Upgrading A Database?
  	Doc ID: 	Note:461082.1


-- UTLRP, UTLIRP, UTLIP
Difference between UTLRP.SQL - UTLIRP.SQL - UTLIP.SQL?
  	Doc ID: 	Note:272322.1



-- CONVERT SE TO EE

How to Convert Database from Standard to Enterprise Edition ?
  	Doc ID: 	Note:117048.1

How to convert a RAC database from Standard Edition (SE) to Enterprise Edition (EE)?
  	Doc ID: 	Note:451981.1



-- CONVERT EE TO SE

Converting from Enterprise Edition to Standard Edition
  	Doc ID: 	Note:139642.1




-- ISSUES on converting from SE to EE

Unable To Recompile Invalid Objects with UTLRP Script After Upgrading From 9i To 10g
  	Doc ID: 	Note:465050.1

ORA-07445 [zllcini] or ORA-04045 in a Database with OLS Set to FALSE
  	Doc ID: 	Note:233110.1

Queries Against Tables Protected by OLS Are Erroring Out
  	Doc ID: 	Note:577569.1

While compiling, Ora-04063: Package Body 'Lbacsys.Lbac_events' Has Errors
  	Doc ID: 	Note:359649.1




-- UPGRADE CHECKLIST

Complete Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9iR2 (9.2.0)
  	Doc ID: 	Note:159657.1

Upgrading Directly to a 9.2.0 Patch Set
  	Doc ID: 	Note:214887.1





-- MIGRATION FROM 32 to 64

Memory Requirements of Databases Migrated from 32-bit to 64-bit
  	Doc ID: 	209766.1

How to convert a 32-bit database to 64-bit database on Linux?
  	Doc ID: 	Note:341880.1 	

Failure to Create new Control File Migrating From 32-Bit 10gr1 To 64-Bit 10gr2
  	Doc ID: 	Note:458401.1 	

How I Solved a Problem During a Migration of 32 bit to 64 bit on 10.2.0.2
  	Doc ID: 	Note:452416.1 	

How To Change The Platform From Linux X86 To Linux IA64 Itanium (RH or Suse)
  	Doc ID: 	Note:316358.1

How to Migrate Oracle 10.2 32bit to 10.2 64bit on Microsoft Windows
  	Doc ID: 	Note:403522.1


http://dba.5341.com/msg/66637.html

http://seilerwerks.wordpress.com/2007/03/06/fixing-a-32-to-64-bit-migration-with-utlirpsql/

http://www.oraclealchemist.com/2007/12/

http://www.miraclelinux.com/english/case/index.html

http://pat98.tistory.com/tag/oracle%2064bit

FULL EXPORT FAILS AFTER 32BIT TO 64BIT CONVERSION with ORA-7445
  	Doc ID: 	Note:559777.1


Changing between 32-bit and 64-bit Word Sizes
  	Doc ID: 	Note:62290.1

How To Verify the Word Size(32bit vs 64bit) of Oracle and UNIX Operating Systems
  	Doc ID: 	Note:168604.1

AIX - 32bit vs 64bit
  	Doc ID: 	Note:225551.1

Upgrading OLAP from 32 to 64 bits
  	Doc ID: 	Note:352306.1

Can you restore RMAN backups taken on 32-bit Oracle with 64-bit Oracle?
  	Doc ID: 	Note:430278.1

Got Ora-600 [17069] While Migrating To 64bit From 32bit DB On 64bit Solaris.
  	Doc ID: 	Note:434458.1 

ORA-25153: Temporary Tablespace is Empty during 32-Bit To 64-Bit 9iR2 on Linux Conversion
  	Doc ID: 	Note:602849.1

How To Change Oracle 11g Wordsize from 32-bit to 64-bit.
  	Doc ID: 	Note:548978.1


RMAN Restoring A 32 bit Database to 64 bit - An Example
  	Doc ID: 	467676.1 	


How to Upgrade a Database from 32 Bit Oracle to 64 Bit Oracle
  	Doc ID: 	164997.1

http://gavinsoorma.com/2012/10/performing-a-32-bit-to-64-bit-migration-using-the-transportable-database-rman-feature/





-- DBA_REGISTRY, after wordsize change 10.2.0.4

How to check if Intermedia Audio/Image/Video is Installed Correctly?
  	Doc ID: 	221337.1

Manual upgrade of the 10.2.x JVM fails with ORA-3113 and ORA-7445
  	Doc ID: 	459060.1

Jserver Java Virtual Machine Become Invalid After Catpatch.Sql
  	Doc ID: 	312140.1

How to Reload the JVM in 10.1.0.X and 10.2.0.X
  	Doc ID: 	276554.1

Script to Check the Status of the JVM within the Database
  	Doc ID: 	456949.1

How to Tell if Java Virtual Machine Has Been Installed Correctly
  	Doc ID: 	102717.1




-- CROSS PLATFORM

Answers To FAQ For Restoring Or Duplicating Between Different Versions And Platforms
  	Doc ID: 	369644.1

Migration of Oracle Database Instances Across OS Platforms
  	Doc ID: 	733205.1

How to Use Export and Import when Transferring Data Across Platforms or Across 32-bit and 64-bit Servers
  	Doc ID: 	277650.1

How to Perform a Full Database Export Import during Upgrade, Migrate, Copy, or Move of a Database
  	Doc ID: 	286775.1





-- ITANIUM to X86-64

How To Migrate a Database From Linux Itanium 64-bit To Linux x86-64 (AMD64/EM64T)
  	Doc ID: 	Note:550042.1



-- HP-UX PA-RISC to ITANIUM

HP-UX PA-RISC to Itanium
  	Doc ID: 	427712.1


-- DOWNGRADE

How to Downgrade from Oracle RDBMS 10gR2?
  	Doc ID: 	Note:398372.1

Complete Checklist For Downgrading The Database From 11g To Lower Releases
  	Doc ID: 	443890.1



-- PARAMETERS

What is 'STARTUP MIGRATE'?
  	Doc ID: 	Note:252273.1

Difference Between Deprecated and Obsolete Parameters
  	Doc ID: 	Note:342875.1

  	
  	
-- PATCHING

Clarity On Database Patchset 10.2.0.3.0 Apply, Where The README Has References To Oracle Database Vault Option
 	Doc ID:	Note:405042.1
 	
How to rollback a patchset 
  Doc ID:  Note:334598.1 

Restoring a database to a higher patchset
      Doc ID:     558408.1



-- MINIMAL DOWNTIME

My Experience in Moving a 1 Terabyte Database Across Platforms With Minimal Downtime
  	Doc ID: 	431096.1

How I Create a Physical Standby Database for a 24/7 Shop
  	Doc ID: 	580004.1


-- FORMS MIGRATE TO 9i/10g

FRM-10256: User is not authorized to run Oracle Forms Menu
	  Cause:
		Forms menu is relying on the FRM50_ENABLED_ROLES view for the menu security; this view is
		owned by SYSTEM. The application schemas are only imported on the 10.2.0.4 database and as a
		result this view was not created.
	  Solution:
		I found Metalink Note 28933.1, “Implementing and Troubleshooting Menu Security in Forms”,
		which suggested running FRMSEC.SQL to create the view. This script can be found in the
		D:\$oracle_home$\tools\dbtab\forms directory of a Developer Suite installation on a client desktop.

	  Executed FRMSEC.SQL on the 10.2.0.4 database as the SYSTEM user and granted the
	  FRM50_ENABLED_ROLES view to PUBLIC

Migrating to Oracle Forms 9i / 10g - Forms Upgrade Center
  	Doc ID: 	Note:234540.1
<<showtoc>>

! my mindmap workflow 
> FreeMind is my default tool for mindmap creation
> if I'm reading on my iPad, I mindmap using iThoughtsX
> if I need better search inside my mindmap notes to find a specific leaf node, I use iThoughtsX for Mac
> if I need to lay out my TODOs on a timeline for goal setting, I use xmind


! software you need
''FREE online MindMap tool''
http://mind42.com/mindmaps

''iThoughtsX (Mac and IOS - iphone and ipad)''  <- I just like the search feature of this software, that's it
http://toketaware.com/ithoughtsx-faq/

''MindMap offline viewer''
http://freemind.sourceforge.net/wiki/index.php/Download

''xmind'' 
https://www.xmind.net/ <- for a mindmap + timeline view (gantt chart) + office integration 


! freemind 
freemind spacing between nodes ''vgap'' http://sourceforge.net/p/freemind/discussion/22102/thread/6d7a8d0b, ''drag'' http://sourceforge.net/p/freemind/discussion/22102/thread/6d7a8d0b, http://www.linuxquestions.org/questions/attachment.php?attachmentid=7061&d=1305983839, http://www.linuxquestions.org/questions/attachment.php?attachmentid=7074&d=1306148002


! My Mind Maps 
''this is a bit outdated; I moved all of them to a cloud directory where I can view them across my laptop, iPhone, and iPad - the iThoughtsX+Dropbox integration handles the syncing on mobile, and I still use FreeMind as my main mind mapping software on my laptop''
https://sites.google.com/site/karlarao/home/mindmap
<<<
!!! Capacity Planning
Mining the AWR repository for Capacity Planning, Visualization, & other real world stuff  https://sites.google.com/site/karlarao/mindmap/mining-the-awr-repository
Prov worksheet vs Consolidation Planner https://sites.google.com/site/karlarao/home/mindmap/apx
Prov worksheet https://sites.google.com/site/karlarao/home/mindmap/provworksheet
Capacity Planning paper https://sites.google.com/site/karlarao/home/mindmap/capacity-planning-paper
Threads vs Cores https://sites.google.com/site/karlarao/home/mindmap/cpuvsthread
Exadata Consolidation Success Story https://sites.google.com/site/karlarao/home/mindmap/exadata-consolidation-success-story
OaktableWorld12 https://sites.google.com/site/karlarao/home/mindmap/oaktableworld12

!!! Visualization
AWR Tableau and R visualization examples https://sites.google.com/site/karlarao/home/mindmap/awr-tableau-and-r-toolkit-visualization-examples

!!! Exadata
write-back flash cache https://sites.google.com/site/karlarao/home/mindmap/write-back-flash-cache 

!!! Speaking
E412 https://sites.google.com/site/karlarao/home/mindmap/e4_12
IOUG13 https://sites.google.com/site/karlarao/home/mindmap/ioug13
E413 https://sites.google.com/site/karlarao/home/mindmap/e4_13
kscope13-css https://sites.google.com/site/karlarao/home/mindmap/kscope13-css

!!! Code ninja
Python https://sites.google.com/site/karlarao/home/mindmap/python

!!! SQL 
Plan Stability http://www.evernote.com/shard/s48/sh/727c84ca-a25e-4ffa-89f9-4d1e96c471c4/dcad83781f8a07f8983e26fbb8c066a3
<<<

! mindmap as timeline view 
http://hubaisms.com/tag/timeline/
http://vismap.blogspot.com/2009/02/from-mind-map-to-timeline-in-one-click.html
http://www.matchware.com/en/products/mindview/education/storyboarding.htm
http://www.pcworld.com/article/2029529/review-mindview-5-makes-mind-maps-first-class-citizens-in-the-office-ecosystem.html
http://www.techrepublic.com/blog/tech-decision-maker/brainstorm-project-solutions-with-mindview-mind-mapping-software/
http://www.techrepublic.com/blog/tech-decision-maker/build-milestone-charts-faster-with-mindview-3-business-software/


! Create a Mind Map File from a Directory Structure 
https://gist.github.com/karlarao/c14413ba48e84f4de4dac84a297da1f6
https://leftbraintinkering.blogspot.com/2014/09/linux-create-mind-map-freemind-from.html?showComment=1479152008083
{{{
## the XML output is limited to 5 directory levels and filters out any folder name containing "tmp"

tree -d -L 5 -X -I tmp /Users/karl/Dropbox/CodeNinja/GitHub | sed 's/directory/node/g'| sed 's/name/TEXT/g' | sed 's/tree/map/g' | sed '$d' | sed '$d' | sed '$d'|  sed "1d" | sed 's/report/\/map/g' | sed 's/<map>/<map version="1.0.1">/g' > /Users/karl/Dropbox/CodeNinja/GitHub/Gitmap.mm



-- tree output 
Karl-MacBook:example karl$ tree -d -L 5 -X 
<?xml version="1.0" encoding="UTF-8"?>
<tree>
  <directory name=".">
    <directory name="root_folder">
      <directory name="folder1">
        <directory name="subfolder1">
        </directory>
      </directory>
      <directory name="folder2">
      </directory>
    </directory>
  </directory>
  <report>
    <directories>4</directories>
  </report>
</tree>

-- what I'd like it to be
Karl-MacBook:example karl$ cat root_folder.mm 
<map version="1.0.1">
<!-- To view this file, download free mind mapping software FreeMind from http://freemind.sourceforge.net -->
<node CREATED="1479134130850" ID="ID_83643410" MODIFIED="1479134208336" TEXT="X">
	<node CREATED="1479134193879" ID="ID_281547913" MODIFIED="1479149770801" POSITION="right" TEXT="root_folder">
		<node CREATED="1479134139495" ID="ID_1307804355" MODIFIED="1479134141733" TEXT="folder1">
			<node CREATED="1479149779339" ID="ID_12170653" MODIFIED="1479149782231" TEXT="subfolder1"/>
			</node>
		<node CREATED="1479134143525" ID="ID_880690660" MODIFIED="1479134146575" TEXT="folder2"/>
	</node>
</node>
</map>

-- XML to mindmap 
Karl-MacBook:example karl$ tree -d -L 5 -X | sed 's/directory/node/g'| sed 's/name/TEXT/g' | sed 's/tree/map/g' | sed '$d' | sed '$d' | sed '$d'|  sed "1d" | sed 's/report/\/map/g' | sed 's/<map>/<map version="1.0.1">/g'
<map version="1.0.1">
  <node TEXT=".">
    <node TEXT="root_folder">
      <node TEXT="folder1">
        <node TEXT="subfolder1">
        </node>
      </node>
      <node TEXT="folder2">
      </node>
    </node>
  </node>
  </map>

-- final script 

./Gitmap.sh 
tree -d -L 5 -X -I tmp /Users/karl/Dropbox/CodeNinja/GitHub | sed 's/directory/node/g'| sed 's/name/TEXT/g' | sed 's/tree/map/g' | sed '$d' | sed '$d' | sed '$d'|  sed "1d" | sed 's/report/\/map/g' | sed 's/<map>/<map version="1.0.1">/g' > /Users/karl/Dropbox/CodeNinja/GitHub/Gitmap.mm

}}}
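The chained-sed pipeline above works, but it is fragile: it rewrites substrings like `name`, `tree`, and `report` anywhere they appear, including inside folder names. As an alternative, here is a minimal Python sketch of the same conversion using `os.walk` and `ElementTree`, so XML escaping is handled for us; the root path, depth limit, and exclude pattern are illustrative assumptions, not the original script.

```python
# Build a FreeMind .mm map directly from a directory tree.
# A sketch, not the original Gitmap.sh; paths/depth are illustrative.
import os
import xml.etree.ElementTree as ET

def dir_to_freemind(root_dir, max_depth=5, exclude=("tmp",)):
    """Return a FreeMind <map> element mirroring the directory tree."""
    fm_map = ET.Element("map", version="1.0.1")
    root_node = ET.SubElement(fm_map, "node",
                              TEXT=os.path.basename(root_dir) or root_dir)
    nodes = {root_dir: root_node}          # dirpath -> its <node> element
    base_depth = root_dir.rstrip(os.sep).count(os.sep)
    for dirpath, dirnames, _ in os.walk(root_dir):
        # prune excluded names and anything deeper than max_depth
        dirnames[:] = [d for d in dirnames
                       if not any(x in d for x in exclude)
                       and dirpath.count(os.sep) - base_depth < max_depth]
        for d in sorted(dirnames):
            full = os.path.join(dirpath, d)
            nodes[full] = ET.SubElement(nodes[dirpath], "node", TEXT=d)
    return fm_map

# usage: write a map FreeMind can open
# tree = dir_to_freemind("/Users/karl/Dropbox/CodeNinja/GitHub")
# ET.ElementTree(tree).write("Gitmap.mm", encoding="utf-8")
```

Because `os.walk` is top-down and pruned in place, excluded or too-deep folders are never visited, which matches what `tree -L 5 -I tmp` does in the pipeline above.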



! references 
https://en.wikipedia.org/wiki/List_of_concept-_and_mind-mapping_software
http://mashable.com/2013/09/25/mind-mapping-tools/



! end






http://karlarao.wordpress.com/2011/12/06/mining-emgc-notification-alerts/

* nice video on ''configuring rules'' http://www.oracle.com/webfolder/technetwork/tutorials/demos/em/gc/r10205/notifications/notifications_viewlet_swf.html
* tutorial on ''creating notification methods (os or snmp)'' and ''mapping it to notification rules'' http://www.oracle.com/webfolder/technetwork/tutorials/obe/em/emgc10gr2/quick_start/notification/notification.htm
* configure ''preferences'' http://www.oracle.com/webfolder/technetwork/tutorials/obe/em/emgc10gr2/quick_start/preferred_credentials/preferred_credentials.htm





http://arup.blogspot.com/2010/05/mining-listener-logs.html
https://docs.google.com/viewer?url=http://www.proligence.com/MiningListenerLogPart1.pdf&pli=1
https://docs.google.com/viewer?url=http://www.proligence.com/MiningListenerLogPart2.pdf&pli=1
https://docs.google.com/viewer?url=http://www.proligence.com/MiningListenerLogPart3.pdf&pli=1
''Some prereq readables''
* Metrics and DBA_HIST tables https://docs.google.com/viewer?url=http://www.perfvision.com/ftp/emea_2010_may/04_NEW_features.ppt
* http://dioncho.wordpress.com/2009/01/23/misunderstanding-on-top-sqls-of-awr-repository/
* http://www.freelists.org/post/oracle-l/SQLs-run-in-any-period,6
* http://www.freelists.org/post/oracle-l/Missing-SQL-in-DBA-HIST-SQLSTAT
* ''Slide 14 of the OOW presentation S317114 What Else Can I Do with System and Session Performance Data'' http://asktom.oracle.com/pls/apex/z?p_url=ASKTOM%2Edownload_file%3Fp_file%3D3400036420700662395&p_cat=oow_2010.zip&p_company=822925097021874 the presentation says "Remember they are snapshots, not movies.. DBA_HIST_SQLTEXT will not be 100% complete for example - especially if you have a poorly written application"
* http://kerryosborne.oracle-guy.com/2009/04/hidden-sql-why-cant-i-find-my-sql-text/
* ''Slide 45-59'' http://www.slideshare.net/karlarao/unconference-mining-the-awr-repository-for-capacity-planning-visualization-other-real-world-stuff the presentation shows the correlation of SQLs to the server's workload in a time-series manner; that is, the SQLs that, when tuned, will have a big impact on workload reduction
* Andy Rivenes www.appsdba.com/papers/Oracle_Workload_Measurement.pdf a good read about interval-based monitoring
* http://oracledoug.com/serendipity/index.php?/archives/1402-MMON-Sampling-ASH-Data.html
How To Interpret DBA_HIST_SQLSTAT [ID 471053.1]
http://shallahamer-orapub.blogspot.com/2011/01/when-is-vsqlstats-refreshed.html



''Object and SQLs used for the test case (by Dion Cho):''
{{{
-- create objects
create table t1(c1 int, c2 char(100));
insert into t1
select level, 'x'
from dual
connect by level <= 10000
;

commit;
}}}


{{{
set heading off
set timing off
set feedback off
spool select2.sql

select 'select /*+ top_sql_' || mod(level,10000) || ' */ count(*) from t1;'
from dual
connect by level <= 10000;
spool off
}}}

''Executed as follows:''
{{{
exec dbms_workload_repository.create_snapshot;
@select2
exec dbms_workload_repository.create_snapshot;
}}}


''The first test was done on SNAP_ID 1329; you'll notice the 33% Oracle CPU and 269.110 exec/s.
The 2nd test was on SNAP_ID 1332, with 43% Oracle CPU and 315.847 exec/s.
There was also a shutdown during SNAP_ID 1330-1331 because I increased SGA_MAX_SIZE from 300M to 700M.
The increased SGA helped: more SQLs appeared in the awr_topsql.sql output.
SNAP_ID 1329 -- 300M SGA -- 54 SQLs
SNAP_ID 1332 -- 700M SGA -- 106 SQLs''
{{{
													      AWR CPU and IO Workload Report

			 i			  ***							    *** 		***
			 n			Total							  Total 	      Total													      U    S
       Snap		 s	 Snap	C	  CPU						  A	 Oracle 		 OS   Physical										  Oracle RMAN	OS    S    Y	I
  Snap Start		 t	  Dur	P	 Time	      DB	DB	  Bg	 RMAN	  A	    CPU      OS 	CPU	Memory	    IOPs      IOPs	IOPs	  IO r	    IO w      Redo	     Exec    CPU  CPU  CPU    R    S	O
    ID Time		 #	  (m)	U	  (s)	    Time       CPU	 CPU	  CPU	  S	    (s)    Load 	(s)	  (mb)	       r	 w	redo	(mb)/s	  (mb)/s    (mb)/s Sess        /s      %    %	 %    %    %	%
------ --------------- --- ---------- --- ----------- ---------- --------- --------- -------- ----- ----------- ------- ----------- ---------- --------- --------- --------- --------- --------- --------- ---- --------- ------ ---- ---- ---- ---- ----
  1328 10/10/18 21:36	 1	 1.23	1	73.80	    4.41      4.19	0.33	 0.00	0.1	   4.52    0.46        8.51	  0.05	   0.257     2.019     0.217	 0.004	   0.026     0.006   21     2.073      6    0	12    5    5	2
  1329 10/10/18 21:37	 1	 1.46	1	87.60	   29.04     28.73	0.43	 0.00	0.3	  29.16    0.97       43.35	  0.03	   0.342     1.906     0.148	 0.004	   0.019     0.006   21   269.110     33    0	49   21   27	2
  1330 10/10/18 21:38	 1	 7.02	1      421.20	    6.45      4.37	2.14	 0.00	0.0	   6.51    0.15       35.12	  0.02	   0.306     0.715     0.674	 0.004	   0.010     0.004   21    19.418      2    0	 8    3    4	2
  1331 10/10/18 21:45	 1	12.77	1      766.20	 -417.20   -100.73    -33.41	 0.00  -0.5	-134.13    0.39     -663.97	  0.07	  -7.794    -2.536    -1.541	-0.230	  -0.033    -0.014   22   -60.715    -18    0  -87  -14  -59  -20
  1332 10/10/18 21:58	 1	 1.26	1	75.60	   36.63     32.37	0.30	 0.00	0.5	  32.67    0.85       46.88	  0.04	   6.548     0.463     0.185	 0.100	   0.010     0.008   25   315.847     43    0	62   28   33	3


}}}
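The negative values in the 1330-1331 row above come from the delta arithmetic across the shutdown: AWR deltas are the end-of-interval cumulative counter minus the begin-of-interval one, and an instance restart resets the counters. A minimal sketch of that arithmetic, with hypothetical numbers (not taken from the report):

```python
# AWR-style delta arithmetic on cumulative counters (illustrative numbers).
# A counter that resets at instance restart yields a negative raw delta,
# which is why the 1330-1331 interval above shows negative CPU/IO figures.
def snapshot_delta(begin_value, end_value):
    """Raw AWR delta: end-of-interval total minus begin-of-interval total."""
    return end_value - begin_value

def restart_safe_delta(begin_value, end_value):
    """If the counter went backwards, assume a restart happened and take
    the end value as the whole interval's activity (a common workaround)."""
    d = end_value - begin_value
    return end_value if d < 0 else d

cpu_begin, cpu_end = 4200.0, 4350.0           # normal interval
assert snapshot_delta(cpu_begin, cpu_end) == 150.0

# across a restart the cumulative counter resets near zero
assert snapshot_delta(4350.0, 12.5) < 0           # the negative rows above
assert restart_safe_delta(4350.0, 12.5) == 12.5   # restart-aware handling
```

This is the same total-vs-delta distinction the DBA_HIST_SQLSTAT documentation quoted further down makes: totals are since instance startup, deltas are per snapshot interval.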

''Drilling down on the SNAP_ID 1332 awr_topsql.sql output (below), these are the filter options''
AND snap_id = 1332
AND lower(st.sql_text) like '%top_sql%'
''and it shows just 26 rows''
''notice the top_sql_7199 below.. its time rank is 9 and it has an elapsed time of 0.04 sec; note that the query is ordered by Elapsed Time, which is really what matters.
On top_sql_7199 half of the time was on CPU, .02 sec (approximate; this may differ when you actually trace the SQL).
Also notice the "SQL Text" column (far right).. the lowest of the 10K executions captured starts in the top_sql_6xxx range and the highest is top_sql_9xxx... this can happen as SQLs go in and out of the shared pool - aged out or cycled, or simply because my shared pool is too small.

and then, as per the official doc:
//"DBA_HIST_SQLSTAT displays historical information about SQL statistics. This view captures the top SQL statements based on a set of criteria and captures the statistics information from V$SQL. The total value is the value of the statistics since instance startup. The delta value is the value of the statistics from the BEGIN_INTERVAL_TIME to the END_INTERVAL_TIME in the DBA_HIST_SNAPSHOT view."//

so this correlates with Tom Kyte's statement that they are based on snapshots... but only the "top SQL statements based on a set of criteria" are captured in this view (hmm, I have to check whether there are top_sql statements in v$sql under no load that are not captured in dba_hist_sqlstat)
"Remember they are snapshots, not movies.. DBA_HIST_SQLTEXT will not be 100% complete for example - especially if you have a poorly written application"

Note that these are not the only SQLs running in this SNAP period... you'll see in the next sections that in this snap period (1332) there are 90 rows selected

(You can get a better view of the output by double-clicking this whole page and pasting it into a text editor)
''
{{{

																	     AWR Top SQL Report

			 i
			 n									   Elapsed
       Snap		 s    Snap			   Plan 			Elapsed       Time	  CPU											   A
  Snap Start		 t     Dur SQL			   Hash 			   Time   per exec	 Time	 Cluster						      Parse	  PX	   A Time SQL
    ID Time		 #     (m) ID			  Value Module			    (s)        (s)	  (s)	    Wait	  LIO	       PIO	   Rows     Exec      Count	Exec	   S Rank Text
------ --------------- --- ------- --------------- ------------ -------------------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ -------- ---------- -------- ------- ---- ----------------------------------------
  1332 10/10/18 21:58	 1    1.26 6yd53x1zjqts9     3724264953 sqlplus@dbrocaix01.b	   0.04       0.04	 0.02	       0	  223		 0	      1        1	  1	   0	0.00	9 select /*+ top_sql_7199 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 bpxnmunkcywzg     3724264953 sqlplus@dbrocaix01.b	   0.03       0.03	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   12 select /*+ top_sql_8170 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 7fa2r0xkfbs6b     3724264953 sqlplus@dbrocaix01.b	   0.02       0.02	 0.02	       0	  223		 0	      1        1	  1	   0	0.00   27 select /*+ top_sql_8314 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 f71p3w4xx1pfc     3724264953 sqlplus@dbrocaix01.b	   0.02       0.02	 0.02	       0	  223		 0	      1        1	  1	   0	0.00   33 select /*+ top_sql_8286 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 f3wcc30napt5a     3724264953 sqlplus@dbrocaix01.b	   0.02       0.02	 0.02	       0	  223		 0	      1        1	  1	   0	0.00   36 select /*+ top_sql_7198 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 ghvnum1dfm05q     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   58 select /*+ top_sql_9331 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 2ta3r31t0z08a     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   60 select /*+ top_sql_7523 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 59kybrhwdk040     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   61 select /*+ top_sql_9853 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 9wf93m8rau04d     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   62 select /*+ top_sql_8652 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 fuhanmqynt02p     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   63 select /*+ top_sql_9743 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 1dzkrjdvjt03n     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   65 select /*+ top_sql_8498 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 0s5uzug7cr029     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   66 select /*+ top_sql_8896 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 gq6kp76f1307x     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   67 select /*+ top_sql_8114 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 bfa3qt29jg07b     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   68 select /*+ top_sql_9608 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 9nk1jwamsy02n     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   69 select /*+ top_sql_9724 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 2sry32gac2079     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   70 select /*+ top_sql_7316 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 atp84rb53u072     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   71 select /*+ top_sql_9091 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 1wb6wx2nb8093     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   73 select /*+ top_sql_9446 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 3czfc573u505f     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   74 select /*+ top_sql_9702 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 c31xpspd8n08k     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   75 select /*+ top_sql_8045 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 3k07s1fhv6043     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   77 select /*+ top_sql_9321 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 0qh6dbs79n06s     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   78 select /*+ top_sql_9052 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 9xt7tfmzut065     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   79 select /*+ top_sql_9429 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 28hu85p69d047     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   80 select /*+ top_sql_8978 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 4w2jxfhrfh037     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   81 select /*+ top_sql_7464 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 5kzjxrqgqv03x     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   83 select /*+ top_sql_6849 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)


26 rows selected.
}}}


''Below is the dba_hist_sqltext output filtered to show only the '%top_sql%' SQLs; it returns just 41 rows.
As per the official doc: //"the DBA_HIST_SQLTEXT displays the text of SQL statements belonging to shared SQL cursors captured in the Workload Repository. This view captures information from V$SQL and is used with the DBA_HIST_SQLSTAT view."//
''
Note also that dba_hist_sqltext has no SNAP_ID column, and a plain select count(*) from dba_hist_sqltext returns 3243 rows in total.
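
The counts mentioned above come from queries like these (a quick sketch; the row counts are from this test database and will differ on yours):
{{{
-- dba_hist_sqltext is keyed by dbid and sql_id; it has no snap_id column
select count(*) from dba_hist_sqltext;

-- only the statements whose text matches the test workload
select count(*) from dba_hist_sqltext
 where lower(sql_text) like '%top_sql%';
}}}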

''For the script awr_topsqlx.sql, I outer join dba_hist_sqltext with dba_hist_snapshot and dba_hist_sqlstat''
             where st.sql_id(+)             = sqt.sql_id
             and st.dbid(+)                 = &_dbid
''to get the sql_text information on the SELECT portion''
                  , nvl(substr(st.sql_text,1,6), to_clob('** SQL Text Not Available **')) sql_text     
''
In later sections, you'll see that even if I remove dba_hist_sqltext and dba_hist_snapshot from the join, I still get the same number of SQLs (90 rows) for a specific SNAP_ID
''
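
Putting those two fragments together, the relevant part of the join looks roughly like this (a simplified sketch of awr_topsqlx.sql, not the full script; &_dbid is the script's substitution variable):
{{{
select sqt.snap_id,
       sqt.sql_id,
       -- the outer join means missing text shows up as the placeholder below
       nvl(substr(st.sql_text,1,6), to_clob('** SQL Text Not Available **')) sql_text
  from dba_hist_sqlstat sqt,
       dba_hist_sqltext st
 where st.sql_id(+) = sqt.sql_id
   and st.dbid(+)   = &_dbid
/
}}}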

(For a more readable copy of this output, double-click this whole page and paste the selection into a text editor)
{{{
sys@IVRS> select * from dba_hist_sqltext where lower(sql_text) like '%top_sql%' 
  2  /

																	     AWR Top SQL Report

SQL		SQL
ID		Text					 COMMAND_TYPE
--------------- ---------------------------------------- ------------
93s9k7wvfs05m	select snap_interval, retention,most_rec	    3
		ent_snap_time, most_recent_snap_id, stat
		us_flag, most_recent_purge_time, most_re
		cent_split_id, most_recent_split_time, m
		rct_snap_time_num, mrct_purge_time_num,
		snapint_num, retention_num, swrf_version
		, registration_status, mrct_baseline_id,
		 topnsql from wrm$_wr_control where dbid
		 = :dbid

7k5ymabz2vkgu	update wrm$_wr_control	  set snap_inter	    6
		val = :bind1, snapint_num = :bind2, rete
		ntion = :bind3,      retention_num = :bi
		nd4, most_recent_snap_id = :bind5,
		most_recent_snap_time = :bind6, mrct_sna
		p_time_num = :bind7,	  status_flag =
		:bind8, most_recent_purge_time = :bind9,
		      mrct_purge_time_num = :bind10,
		  most_recent_split_id = :bind11, most_r
		ecent_split_time = :bind12,	 swrf_ve
		rsion = :bind13, registration_status = :
		bind14,      mrct_baseline_id = :bind15,
		 topnsql = :bind16    where dbid = :dbid

f83wtgbnb9usa	select 'select /*+ top_sql_' || mod(leve	    3
		l,100) || ' */ count(*) from t1;'
		from dual
		connect by level <= 10000

89tw99zyhrcbz	select 'select /*+ top_sql_' || mod(leve	    3
		l,10000) || ' */ count(*) from t1;'
		from dual
		connect by level <= 10000

1wb6wx2nb8093	select /*+ top_sql_9446 */ count(*) from	    3
		 t1

2v0d7sukxs097	select /*+ top_sql_8504 */ count(*) from	    3
		 t1

gw8hg6m5ur0ac	select /*+ top_sql_9717 */ count(*) from	    3
		 t1

7g2ssk0p2a0ap	select /*+ top_sql_8764 */ count(*) from	    3
		 t1

d8dw2zx62c0by	select /*+ top_sql_9869 */ count(*) from	    3
		 t1

6ucxssb64u0c3	select /*+ top_sql_9803 */ count(*) from	    3
		 t1

8drg8cmfhj0c9	select /*+ top_sql_9336 */ count(*) from	    3
		 t1

db1tvtsu2u0d2	select /*+ top_sql_8531 */ count(*) from	    3
		 t1

b66tsa9sxa0dh	select /*+ top_sql_9444 */ count(*) from	    3
		 t1

bq0yu0jjry0fg	select /*+ top_sql_8799 */ count(*) from	    3
		 t1

0s5uzug7cr029	select /*+ top_sql_8896 */ count(*) from	    3
		 t1

9nk1jwamsy02n	select /*+ top_sql_9724 */ count(*) from	    3
		 t1

fuhanmqynt02p	select /*+ top_sql_9743 */ count(*) from	    3
		 t1

4w2jxfhrfh037	select /*+ top_sql_7464 */ count(*) from	    3
		 t1

1dzkrjdvjt03n	select /*+ top_sql_8498 */ count(*) from	    3
		 t1

5kzjxrqgqv03x	select /*+ top_sql_6849 */ count(*) from	    3
		 t1

59kybrhwdk040	select /*+ top_sql_9853 */ count(*) from	    3
		 t1

3k07s1fhv6043	select /*+ top_sql_9321 */ count(*) from	    3
		 t1

28hu85p69d047	select /*+ top_sql_8978 */ count(*) from	    3
		 t1

9wf93m8rau04d	select /*+ top_sql_8652 */ count(*) from	    3
		 t1

3czfc573u505f	select /*+ top_sql_9702 */ count(*) from	    3
		 t1

ghvnum1dfm05q	select /*+ top_sql_9331 */ count(*) from	    3
		 t1

3qw7025q1tcf3	select /*+ top_sql_8865 */ count(*) from	    3
		 t1

423v9vytv8064	select /*+ top_sql_6733 */ count(*) from	    3
		 t1

9xt7tfmzut065	select /*+ top_sql_9429 */ count(*) from	    3
		 t1

0qh6dbs79n06s	select /*+ top_sql_9052 */ count(*) from	    3
		 t1

atp84rb53u072	select /*+ top_sql_9091 */ count(*) from	    3
		 t1

2sry32gac2079	select /*+ top_sql_7316 */ count(*) from	    3
		 t1

bfa3qt29jg07b	select /*+ top_sql_9608 */ count(*) from	    3
		 t1

gq6kp76f1307x	select /*+ top_sql_8114 */ count(*) from	    3
		 t1

2ta3r31t0z08a	select /*+ top_sql_7523 */ count(*) from	    3
		 t1

c31xpspd8n08k	select /*+ top_sql_8045 */ count(*) from	    3
		 t1

f3wcc30napt5a	select /*+ top_sql_7198 */ count(*) from	    3
		 t1

7fa2r0xkfbs6b	select /*+ top_sql_8314 */ count(*) from	    3
		 t1

6yd53x1zjqts9	select /*+ top_sql_7199 */ count(*) from	    3
		 t1

f71p3w4xx1pfc	select /*+ top_sql_8286 */ count(*) from	    3
		 t1

bpxnmunkcywzg	select /*+ top_sql_8170 */ count(*) from	    3
		 t1


41 rows selected.


}}}




(For a more readable copy of this output, double-click this whole page and paste the selection into a text editor)
{{{
																	     AWR Top SQL Report

			 i
			 n									   Elapsed
       Snap		 s    Snap			   Plan 			Elapsed       Time	  CPU											   A
  Snap Start		 t     Dur SQL			   Hash 			   Time   per exec	 Time	 Cluster						      Parse	  PX	   A Time SQL
    ID Time		 #     (m) ID			  Value Module			    (s)        (s)	  (s)	    Wait	  LIO	       PIO	   Rows     Exec      Count	Exec	   S Rank Text
------ --------------- --- ------- --------------- ------------ -------------------- ---------- ---------- ---------- ---------- ------------ ------------ ------------ -------- ---------- -------- ------- ---- ----------------------------------------
  1332 10/10/18 21:58	 1    1.26 404qh4yx36y1v     2586623307 			   9.25       0.00	 9.16	       0       660002	       134	  10000    10000      10000	   0	0.12	1 SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS I
																										  GNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB
																										  ) opt_param('parallel_execution_enabled'
																										  , 'false') NO_PARALLEL_INDEX(SAMPLESUB)
																										  NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C
																										  2),0) FROM (SELECT /*+ NO_PARALLEL("T1")
																										   FULL("T1") NO_PARALLEL_INDEX("T1") */ 1
																										   AS C1, 1 AS C2 FROM "T1" SAMPLE BLOCK (
																										  41.447368 , 1) SEED (1) "T1") SAMPLESUB

  1332 10/10/18 21:58	 1    1.26 bunssq950snhf     2694099131 			   0.80       0.80	 0.80	       0	  146		 0	      8        1	  1	   0	0.01	2 insert into wrh$_sga_target_advice   (sn
																										  ap_id, dbid, instance_number,    SGA_SIZ
																										  E, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_P
																										  HYSICAL_READS)  select    :snap_id, :dbi
																										  d, :instance_number,	  SGA_SIZE, SGA_SI
																										  ZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_R
																										  EADS	from	v$sga_target_advice


  1332 10/10/18 21:58	 1    1.26 7vgmvmy8vvb9s       43914496 			   0.08       0.08	 0.08	       0	  168		 0	      1        1	  1	   0	0.00	3 insert into wrh$_tempstatxs	(snap_id,
																										  dbid, instance_number, file#, creation_c
																										  hange#, phyrds,    phywrts, singleblkrds
																										  , readtim, writetim, singleblkrdtim, phy
																										  blkrd,    phyblkwrt, wait_count, time)
																										  select    :snap_id, :dbid, :instance_num
																										  ber,	  tf.tfnum, to_number(tf.tfcrc_scn
																										  ) creation_change#,	 ts.kcftiopyr, ts.
																										  kcftiopyw, ts.kcftiosbr, ts.kcftioprt, t
																										  s.kcftiopwt,	  ts.kcftiosbt, ts.kcftiop
																										  br, ts.kcftiopbw, fw.count, fw.time  fro
																										  m    x$kcftio ts, x$kcctf tf, x$kcbfwait
																										   fw  where	tf.tfdup != 0 and    tf.tf
																										  num  = ts.kcftiofno and    fw.indx+1 = (
																										  ts.kcftiofno + :db_files)

  1332 10/10/18 21:58	 1    1.26 6hwjmjgrpsuaa     2721822575 			   0.05       0.05	 0.02	       0	  196		 5	     57        1	  1	   0	0.00	4 insert into wrh$_enqueue_stat   (snap_id
																										  , dbid, instance_number, eq_type, req_re
																										  ason,    total_req#, total_wait#, succ_r
																										  eq#, failed_req#,    cum_wait_time, even
																										  t#)  select	 :snap_id, :dbid, :instanc
																										  e_number, eq_type, req_reason,    total_
																										  req#, total_wait#, succ_req#, failed_req
																										  #,	cum_wait_time, event#  from    v$e
																										  nqueue_statistics  where    total_req# !
																										  = 0  order by    eq_type, req_reason

  1332 10/10/18 21:58	 1    1.26 84qubbrsr0kfn     3385247542 			   0.04       0.04	 0.04	       0	  372		 0	    388        1	  1	   0	0.00	5 insert into wrh$_latch   (snap_id, dbid,
																										   instance_number, latch_hash, level#, ge
																										  ts, misses,	 sleeps, immediate_gets, i
																										  mmediate_misses, spin_gets, sleep1,	 s
																										  leep2, sleep3, sleep4, wait_time)  selec
																										  t    :snap_id, :dbid, :instance_number,
																										  hash, level#, gets,	 misses, sleeps, i
																										  mmediate_gets, immediate_misses, spin_ge
																										  ts,	 sleep1, sleep2, sleep3, sleep4, w
																										  ait_time  from    v$latch  order by	 h
																										  ash

  1332 10/10/18 21:58	 1    1.26 db78fxqxwxt7r     3312420081 			   0.04       0.00	 0.03	       0	 1163		 3	   5135      379	 20	   0	0.00	6 select /*+ rule */ bucket, endpoint, col
																										  #, epvalue from histgrm$ where obj#=:1 a
																										  nd intcol#=:2 and row#=:3 order by bucke
																										  t

  1332 10/10/18 21:58	 1    1.26 96g93hntrzjtr     2239883476 			   0.04       0.00	 0.04	       0	 3517		 0	    736     1346	 20	   0	0.00	7 select /*+ rule */ bucket_cnt, row_cnt,
																										  cache_cnt, null_cnt, timestamp#, sample_
																										  size, minimum, maximum, distcnt, lowval,
																										   hival, density, col#, spare1, spare2, a
																										  vgcln from hist_head$ where obj#=:1 and
																										  intcol#=:2

  1332 10/10/18 21:58	 1    1.26 130dvvr5s8bgn     1160622595 			   0.04       0.00	 0.04	       0	 1105		 0	    198       18	 18	   0	0.00	8 select obj#, dataobj#, part#, hiboundlen
																										  , hiboundval, ts#, file#, block#, pctfre
																										  e$, pctused$, initrans, maxtrans, flags,
																										   analyzetime, samplesize, rowcnt, blkcnt
																										  , empcnt, avgspc, chncnt, avgrln, length
																										  (bhiboundval), bhiboundval from tabpart$
																										   where bo# = :1 order by part#

  1332 10/10/18 21:58	 1    1.26 6yd53x1zjqts9     3724264953 sqlplus@dbrocaix01.b	   0.04       0.04	 0.02	       0	  223		 0	      1        1	  1	   0	0.00	9 select /*+ top_sql_7199 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 70utgu2587mhs     1395584798 			   0.04       0.04	 0.01	       0	  173		10	      4        1	  1	   0	0.00   10 insert into wrh$_java_pool_advice	(s
																										  nap_id, dbid, instance_number,      java
																										  _pool_size_for_estimate, java_pool_size_
																										  factor,      estd_lc_size, estd_lc_memor
																										  y_objects, estd_lc_time_saved,      estd
																										  _lc_time_saved_factor, estd_lc_load_time
																										  ,	 estd_lc_load_time_factor, estd_lc
																										  _memory_object_hits)	select	    :snap_
																										  id, :dbid, :instance_number,	    java_p
																										  ool_size_for_estimate, java_pool_size_fa
																										  ctor,      estd_lc_size, estd_lc_memory_
																										  objects, estd_lc_time_saved,	    estd_l
																										  c_time_saved_factor, estd_lc_load_time,
																										       estd_lc_load_time_factor, estd_lc_m
																										  emory_object_hits  from v$java_pool_advi
																										  ce

  1332 10/10/18 21:58	 1    1.26 c3zymn7x3k6wy     3446064519 			   0.03       0.00	 0.03	       0	 1035		 0	    209       19	 19	   0	0.00   11 select obj#, dataobj#, part#, hiboundlen
																										  , hiboundval, flags, ts#, file#, block#,
																										   pctfree$, initrans, maxtrans, analyzeti
																										  me, samplesize, rowcnt, blevel, leafcnt,
																										   distkey, lblkkey, dblkkey, clufac, pctt
																										  hres$, length(bhiboundval), bhiboundval
																										  from indpart$ where bo# = :1 order by pa
																										  rt#

  1332 10/10/18 21:58	 1    1.26 bpxnmunkcywzg     3724264953 sqlplus@dbrocaix01.b	   0.03       0.03	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   12 select /*+ top_sql_8170 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 3252fkazwq930     3220283061 			   0.03       0.03	 0.02	       0	   34		 0	      0        1	  1	   0	0.00   13 UPDATE WRH$_SERVICE_NAME SET snap_id = :
																										  lah_snap_id  WHERE dbid = :dbid    AND (
																										  SERVICE_NAME_HASH) IN (SELECT NUM1_KEWRA
																										  TTR FROM X$KEWRATTRSTALE)

  1332 10/10/18 21:58	 1    1.26 fdxrh8tzyw0yw     2786456350 			   0.03       0.03	 0.03	       0	   38		 0	      0        1	  1	   0	0.00   14 SELECT snap_id , SERVICE_NAME_HASH FROM
																										    (SELECT /*+ ordered use_nl(t2) index(t
																										  2) */ t2.snap_id , t1.NAME_HASH  SERVICE
																										  _NAME_HASH FROM V$SERVICES t1, WRH$_SERV
																										  ICE_NAME t2	   WHERE t2.dbid(+)  = :db
																										  id  AND t2.SERVICE_NAME_HASH(+) = t1.NAM
																										  E_HASH) WHERE nvl(snap_id, 0) < :snap_id

  1332 10/10/18 21:58	 1    1.26 7k6zct1sya530     2444078832 			   0.03       0.03	 0.03	       0	  152		 0	      0        1	  1	   0	0.00   15 insert into WRH$_STREAMS_APPLY_SUM	(s
																										  nap_id, dbid, instance_number, apply_nam
																										  e,	 startup_time, reader_total_messag
																										  es_dequeued, reader_lag,     coord_total
																										  _received, coord_total_applied, coord_to
																										  tal_rollbacks,     coord_total_wait_deps
																										  , coord_total_wait_cmts, coord_lwm_lag,
																										      server_total_messages_applied, serve
																										  r_elapsed_dequeue_time,     server_elaps
																										  ed_apply_time)  select * from    (select
																										   :snap_id, :dbid, :instance_number, ac.a
																										  pply_name,		ac.startup_time, a
																										  r.total_messages_dequeued,		ar
																										  .dequeue_time - ar.dequeued_message_crea
																										  te_time,	      ac.total_received, a
																										  c.total_applied, ac.total_rollbacks,
																											  ac.total_wait_deps, ac.total_wai
																										  t_commits,		ac.lwm_time - ac.l
																										  wm_message_create_time,	     al.to
																										  tal_messages_applied, al.elapsed_dequeue
																										  _time,	    al.elapsed_apply_time
																											from v$streams_apply_coordinator a
																										  c,		v$streams_apply_reader ar,
																											      (select apply_name,
																											     sum(total_messages_applied) t
																										  otal_messages_applied,
																										    sum(elapsed_dequeue_time) elapsed_dequ
																										  eue_time,		       sum(elapsed
																										  _apply_time) elapsed_apply_time
																										       from v$streams_apply_server
																											group by apply_name) al       wher
																										  e al.apply_name=ac.apply_name and
																											ar.apply_name=ac.apply_name
																										  order by ac.total_applied desc)   where
																										  rownum <= 25

  1332 10/10/18 21:58	 1    1.26 7qjhf5dzmazsr      751380177 			   0.03       0.03	 0.01	       0	  143		 7	      1        1	  1	   0	0.00   16 SELECT snap_id , OBJ#, DATAOBJ# FROM	 (
																										  SELECT /*+ ordered use_nl(t2) index(t2)
																										  */ t2.snap_id , t1.OBJN_KEWRSEG  OBJ#, t
																										  1.OBJD_KEWRSEG  DATAOBJ# FROM X$KEWRTSEG
																										  STAT t1, WRH$_SEG_STAT_OBJ t2      WHERE
																										   t2.dbid(+)  = :dbid	AND t2.OBJ#(+) = t
																										  1.OBJN_KEWRSEG AND t2.DATAOBJ#(+) = t1.O
																										  BJD_KEWRSEG) WHERE nvl(snap_id, 0) < :sn
																										  ap_id

  1332 10/10/18 21:58	 1    1.26 32wqka2zwvu65      875704766 			   0.03       0.03	 0.03	       0	  557		 0	    264        1	  1	   0	0.00   17 insert into wrh$_parameter   (snap_id, d
																										  bid, instance_number, parameter_hash, va
																										  lue,	  isdefault, ismodified)  select
																										    :snap_id, :dbid, :instance_number, i.k
																										  sppihash hash, sv.ksppstvl,	 sv.ksppst
																										  df, decode(bitand(sv.ksppstvf,7), 1, 'MO
																										  DIFIED', 'FALSE')  from x$ksppi i, x$ksp
																										  psv sv  where i.indx = sv.indx    and ((
																										  (i.ksppinm not like '#_%' escape '#') or
																											    (sv.ksppstdf = 'FALSE') or
																											(bitand(sv.ksppstvf,5) > 0)) or
																											(i.ksppinm like '#_#_%' escape '#'
																										  ))  order by	  hash

  1332 10/10/18 21:58	 1    1.26 53saa2zkr6wc3     1514015273 			   0.03       0.00	 0.03	       0	 2192		 0	    633      463	 15	   0	0.00   18 select intcol#,nvl(pos#,0),col#,nvl(spar
																										  e1,0) from ccol$ where con#=:1

  1332 10/10/18 21:58	 1    1.26 4qju99hqmn81x     4055547183 			   0.02       0.02	 0.02	       0	  591		 0	      4        1	  1	   0	0.00   19 INSERT INTO WRH$_ACTIVE_SESSION_HISTORY
																										  ( snap_id, dbid, instance_number, sample
																										  _id,	  sample_time, session_id, session
																										  _serial#, user_id,	sql_id, sql_child_
																										  number,    sql_plan_hash_value, force_ma
																										  tching_signature, service_hash,    sessi
																										  on_type, sql_opcode,	  plsql_entry_obje
																										  ct_id, plsql_entry_subprogram_id,    pls
																										  ql_object_id, plsql_subprogram_id,	bl
																										  ocking_session, blocking_session_serial#
																										  ,    qc_session_id, qc_instance_id,	 x
																										  id,	 current_obj#, current_file#, curr
																										  ent_block#,	 event_id, seq#,    p1, p2
																										  , p3, wait_time, time_waited,    program
																										  , module, action, client_id )  (SELECT :
																										  snap_id, :dbid, :instance_number, a.samp
																										  le_id,	  a.sample_time, a.session
																										  _id, a.session_serial#, a.user_id,
																										      a.sql_id, a.sql_child_number,
																										     a.sql_plan_hash_value, a.force_matchi
																										  ng_signature, a.service_hash, 	 a
																										  .session_type, a.sql_opcode,		a.
																										  plsql_entry_object_id, a.plsql_entry_sub
																										  program_id,	       a.plsql_object_id,
																										  a.plsql_subprogram_id,	  a.blocki
																										  ng_session,	       a.blocking_session_
																										  serial#, a.qc_session_id, a.qc_instance_
																										  id,	       a.xid,	       a.current_o
																										  bj#, a.current_file#, a.current_block#,
																											   a.event_id, a.seq#,		a.
																										  p1, a.p2, a.p3, a.wait_time, a.time_wait
																										  ed,	       substrb(a.program, 1, 64),
																										  a.module, a.action, a.client_id   FROM
																										   x$ash a,	     (SELECT h.sample_addr
																										  , h.sample_id 	  FROM	 x$kewash
																										  h	     WHERE		    ( (h.s
																										  ample_id >= :begin_flushing) and
																											     (h.sample_id <  :latest_sampl
																										  e_id) )	      and (MOD(h.sample_id
																										  , :disk_filter_ratio) = 0)	       ) s
																										  hdr	WHERE shdr.sample_addr = a.sample_
																										  addr	   and shdr.sample_id	= a.sample
																										  _id)

  1332 10/10/18 21:58	 1    1.26 32whwm2babwpt      183139296 			   0.02       0.02	 0.02	       0	  420		 0	      0        1	  1	   0	0.00   20 insert into wrh$_seg_stat_obj 	 (
																										   snap_id	    , dbid	    , ts#
																											   , obj#	   , dataobj#
																										       , owner		, object_name
																										       , subobject_name 	 , partiti
																										  on_type	   , object_type
																										  , tablespace_name)	 select :lah_snap_
																										  id	      , :dbid	       , ss1.tsn_k
																										  ewrseg	  , ss1.objn_kewrseg
																										      , ss1.objd_kewrseg	  , ss1.ow
																										  nername_kewrseg	   , ss1.objname_k
																										  ewrseg	  , ss1.subobjname_kewrseg
																											    , decode(po.parttype, 1, 'RANG
																										  E', 2, 'HASH',
																											  3, 'SYSTEM', 4, 'LIST',
																													  NULL, 'NONE', 'U
																										  NKNOWN')	    , decode(ss1.objtype_k
																										  ewrseg, 0, 'NEXT OBJECT',
																										      1, 'INDEX', 2, 'TABLE', 3, 'CLUSTER'
																										  ,			    4, 'VIEW', 5,
																										  'SYNONYM', 6, 'SEQUENCE',
																											  7, 'PROCEDURE', 8, 'FUNCTION', 9
																										  , 'PACKAGE',			11, 'PACKA
																										  GE BODY', 12, 'TRIGGER',
																											      13, 'TYPE', 14, 'TYPE BODY',
																														    19, 'T
																										  ABLE PARTITION',
																												  20, 'INDEX PARTITION', 2
																										  1, 'LOB',				22
																										  , 'LIBRARY', 23, 'DIRECTORY', 24, 'QUEUE
																										  ',		      28, 'JAVA SOURCE', 2
																										  9, 'JAVA CLASS',
																										    30, 'JAVA RESOURCE', 32, 'INDEXTYPE',
																													  33, 'OPERATOR',
																										  34, 'TABLE SUBPARTITION',
																											35, 'INDEX SUBPARTITION',
																													      40, 'LOB PAR
																										  TITION', 41, 'LOB SUBPARTITION',
																											    42, 'MATERIALIZED VIEW',
																												43, 'DIMENSION',
																												    44, 'CONTEXT', 47, 'RE
																										  SOURCE PLAN', 		  48, 'CON
																										  SUMER GROUP',
																											51, 'SUBSCRIPTION', 52, 'LOCATION'
																										  ,		      55, 'XML SCHEMA', 56
																										  , 'JAVA DATA',		    57, 'S
																										  ECURITY PROFILE',		      'UND
																										  EFINED')	       , ss1.tsname_kewrse
																										  g	  from x$kewrattrnew  at,
																										     x$kewrtsegstat ss1,	    (selec
																										  t tp.obj#, pob.parttype		fr
																										  om   sys.tabpart$ tp, sys.partobj$ pob
																											       where  tp.bo#   = pob.obj#
																											      union all 	    select
																										   ip.obj#, pob.parttype	       fro
																										  m   sys.indpart$ ip, sys.partobj$ pob
																											      where  ip.bo#   = pob.obj#)
																										  po	  where at.num1_kewrattr  = ss1.ob
																										  jn_kewrseg	    and at.num2_kewrattr
																										  = ss1.objd_kewrseg	    and at.num1_ke
																										  wrattr  = po.obj#(+)	      and (ss1.obj
																										  type_kewrseg not in
																										      (1  /* INDEX - handled below */,
																													 10 /* NON-EXISTEN
																										  T */) 	    or (ss1.objtype_kewrse
																										  g = 1 			     and 1
																										   = (select 1 from ind$  i
																												 where i.obj# = ss1.objn_k
																										  ewrseg
																										      and i.type# in
																														 (1, 2, 3,
																										   4, 6, 7, 9))))	  and ss1.objname_
																										  kewrseg != '_NEXT_OBJECT'
																											 and ss1.objname_kewrseg != '_defa
																										  ult_auditing_options_'

  1332 10/10/18 21:58	 1    1.26 fktqvw2wjxdxc     2042248707 			   0.02       0.02	 0.02	       0	  293		 0	     13        1	  1	   0	0.00   21 insert into wrh$_filestatxs	(snap_id,
																										  dbid, instance_number, file#, creation_c
																										  hange#, phyrds,    phywrts, singleblkrds
																										  , readtim, writetim, singleblkrdtim, phy
																										  blkrd,    phyblkwrt, wait_count, time)
																										  select    :snap_id, :dbid, :instance_num
																										  ber, df.file#,    (df.crscnbas + (df.crs
																										  cnwrp * power(2,32))) creation_change#,
																										     fs.kcfiopyr, fs.kcfiopyw, fs.kcfiosbr
																										  , fs.kcfioprt, fs.kcfiopwt,	 fs.kcfios
																										  bt, fs.kcfiopbr, fs.kcfiopbw, fw.count,
																										  fw.time  from    x$kcfio fs, file$ df, x
																										  $kcbfwait fw	where	 fw.indx+1  = fs.k
																										  cfiofno and	 df.file#   = fs.kcfiofno
																										  and	 df.status$ = 2

  1332 10/10/18 21:58	 1    1.26 2ym6hhaq30r73     3755742892 			   0.02       0.00	 0.02	       0	 1428		 0	    476      476	476	   0	0.00   22 select type#,blocks,extents,minexts,maxe
																										  xts,extsize,extpct,user#,iniexts,NVL(lis
																										  ts,65535),NVL(groups,65535),cachehint,hw
																										  mincr, NVL(spare1,0),NVL(scanhint,0) fro
																										  m seg$ where ts#=:1 and file#=:2 and blo
																										  ck#=:3

  1332 10/10/18 21:58	 1    1.26 71y370j6428cb     3717298615 			   0.02       0.02	 0.02	       0	  146		 0	      1        1	  1	   0	0.00   23 insert into wrh$_thread     (snap_id, db
																										  id, instance_number,	    thread#, threa
																										  d_instance_number, status,	  open_tim
																										  e, current_group#, sequence#)  select
																										     :snap_id, :dbid, :instance_number,
																										     t.thread#, i.instance_number, t.statu
																										  s,	  t.open_time, t.current_group#, t
																										  .sequence#  from v$thread t, v$instance
																										  i  where i.thread#(+) = t.thread#

  1332 10/10/18 21:58	 1    1.26 f9nzhpn9854xz     2614576983 			   0.02       0.02	 0.02	       0	  499		 5	     57        1	  1	   0	0.00   24 insert into wrh$_seg_stat   (snap_id, db
																										  id, instance_number, ts#, obj#, dataobj#
																										  , logical_reads_total,    logical_reads_
																										  delta, buffer_busy_waits_total, buffer_b
																										  usy_waits_delta,    db_block_changes_tot
																										  al, db_block_changes_delta, physical_rea
																										  ds_total,    physical_reads_delta, physi
																										  cal_writes_total, physical_writes_delta,
																										      physical_reads_direct_total, physica
																										  l_reads_direct_delta,    physical_writes
																										  _direct_total, physical_writes_direct_de
																										  lta,	  itl_waits_total, itl_waits_delta
																										  ,    row_lock_waits_total, row_lock_wait
																										  s_delta,    gc_buffer_busy_total, gc_buf
																										  fer_busy_delta,    gc_cr_blocks_received
																										  _total, gc_cr_blocks_received_delta,
																										  gc_cu_blocks_received_total, gc_cu_block
																										  s_received_delta,    space_used_total, s
																										  pace_used_delta,    space_allocated_tota
																										  l, space_allocated_delta,    table_scans
																										  _total, table_scans_delta,	chain_row_
																										  excess_total, chain_row_excess_delta)  s
																										  elect :snap_id, :dbid, :instance_number,
																										      tsn_kewrseg, objn_kewrseg, objd_kewr
																										  seg,	  log_rds_kewrseg, log_rds_dl_kewr
																										  seg,	  buf_busy_wts_kewrseg, buf_busy_w
																										  ts_dl_kewrseg,    db_blk_chgs_kewrseg, d
																										  b_blk_chgs_dl_kewrseg,    phy_rds_kewrse
																										  g, phy_rds_dl_kewrseg,    phy_wrts_kewrs
																										  eg, phy_wrts_dl_kewrseg,    phy_rds_drt_
																										  kewrseg, phy_rds_drt_dl_kewrseg,    phy_
																										  wrts_drt_kewrseg, phy_wrts_drt_dl_kewrse
																										  g,	itl_wts_kewrseg, itl_wts_dl_kewrse
																										  g,	row_lck_wts_kewrseg, row_lck_wts_d
																										  l_kewrseg,	gc_buf_busy_kewrseg, gc_bu
																										  f_busy_dl_kewrseg,	gc_cr_blks_rcv_kew
																										  rseg, gc_cr_blks_rcv_dl_kewrseg,    gc_c
																										  u_blks_rcv_kewrseg, gc_cu_blks_rcv_dl_ke
																										  wrseg,    space_used_kewrseg, space_used
																										  _dl_kewrseg,	  space_alloc_kewrseg, spa
																										  ce_alloc_dl_kewrseg,	  tbl_scns_kewrseg
																										  , tbl_scns_dl_kewrseg,    chn_exc_kewrse
																										  g, chn_exc_dl_kewrseg  from X$KEWRTSEGST
																										  AT  order by objn_kewrseg, objd_kewrseg

  1332 10/10/18 21:58	 1    1.26 bqnn4c3gjtmgu      592198678 			   0.02       0.02	 0.02	       0	  129		 0	     23        1	  1	   0	0.00   25 insert into wrh$_bg_event_summary   (sna
																										  p_id, dbid, instance_number,	  event_id
																										  , total_waits,    total_timeouts, time_w
																										  aited_micro)	select /*+ ordered use_nl(
																										  e) */    :snap_id, :dbid, :instance_numb
																										  er,	 e.event_id, sum(e.total_waits),
																										    sum(e.total_timeouts), sum(e.time_wait
																										  ed_micro)  from    v$session bgsids, v$s
																										  ession_event e  where    bgsids.type = '
																										  BACKGROUND' and    bgsids.sid  = e.sid
																										  group by    e.event_id

  1332 10/10/18 21:58	 1    1.26 39m4sx9k63ba2      323350262 			   0.02       0.00	 0.01	       0	  138		 1	     42       12	 12	   0	0.00   26 select /*+ index(idl_ub2$ i_idl_ub21) +*
																										  / piece#,length,piece from idl_ub2$ wher
																										  e obj#=:1 and part=:2 and version=:3 ord
																										  er by piece#

  1332 10/10/18 21:58	 1    1.26 7fa2r0xkfbs6b     3724264953 sqlplus@dbrocaix01.b	   0.02       0.02	 0.02	       0	  223		 0	      1        1	  1	   0	0.00   27 select /*+ top_sql_8314 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 1uk5m5qbzj1vt	      0 sqlplus@dbrocaix01.b	   0.02 		 0.02	       0	  155		 0	      0        0	  1	   0	0.00   28 BEGIN dbms_workload_repository.create_sn
								xxxxxxx.com (TNS V1-																  apshot; END;
								V3)

  1332 10/10/18 21:58	 1    1.26 cp3gpd7z878w8     1950636251 			   0.02       0.02	 0.02	       0	  288		 0	     25        1	  1	   0	0.00   29 insert into wrh$_sgastat   (snap_id, dbi
																										  d, instance_number, pool, name, bytes)
																										  select    :snap_id, :dbid, :instance_num
																										  ber, pool, name, bytes   from     (selec
																										  t pool, name, bytes,		   100*(by
																										  tes) / (sum(bytes) over (partition by po
																										  ol)) part_pct        from v$sgastat)	 w
																										  here part_pct >= 1	  or pool is null
																										       or name = 'free memory'	 order by
																										  name, pool

  1332 10/10/18 21:58	 1    1.26 dsd2yqyggtc59     3648994037 			   0.02 		 0.02	       0	    5		 0	      0        0	  1	   0	0.00   30 select SERVICE_ID, NAME, NAME_HASH, NETW
																										  ORK_NAME, CREATION_DATE, CREATION_DATE_H
																										  ASH, GOAL, DTP,  AQ_HA_NOTIFICATION, CLB
																										  _GOAL  from GV$SERVICES where inst_id =
																										  USERENV('Instance')

  1332 10/10/18 21:58	 1    1.26 bu95jup1jp5t3     2436512634 			   0.02       0.02	 0.02	       0	  338		 3	     21        1	  1	   0	0.00   31 insert into wrh$_db_cache_advice
																										  (snap_id, dbid, instance_number,
																										   bpid, buffers_for_estimate, name, block
																										  _size,	 advice_status, size_for_e
																										  stimate, size_factor, 	physical_r
																										  eads, base_physical_reads, actual_physic
																										  al_reads)   select :snap_id, :dbid, :ins
																										  tance_number, 	 a.bpid, a.nbufs,
																										  b.bp_name, a.blksz,	       decode(a.st
																										  atus, 2, 'ON', 'OFF'),	  a.poolsz
																										  , round((a.poolsz / a.actual_poolsz), 4)
																										  ,	     a.preads, a.base_preads, a.ac
																										  tual_preads	  from x$kcbsc a, x$kcbwbp
																										  d b	  where a.bpid = b.bp_id

  1332 10/10/18 21:58	 1    1.26 350myuyx0t1d6     1838802114 			   0.02       0.02	 0.02	       0	  299		 0	     11        1	  1	   0	0.00   32 insert into wrh$_tablespace_stat    (sna
																										  p_id, dbid, instance_number, ts#, tsname
																										  , contents,	  status, segment_space_ma
																										  nagement, extent_management,	   is_back
																										  up)  select	 :snap_id, :dbid, :instanc
																										  e_number,    ts.ts#, ts.name as tsname,
																										     decode(ts.contents$, 0, (decode(bitan
																										  d(ts.flags, 16), 16, 'UNDO',		 '
																										  PERMANENT')), 1, 'TEMPORARY')
																										   as contents,    decode(ts.online$, 1, '
																										  ONLINE', 2, 'OFFLINE',	   4, 'REA
																										  D ONLY', 'UNDEFINED') 	     as st
																										  atus,    decode(bitand(ts.flags,32), 32,
																										  'AUTO', 'MANUAL') as segspace_mgmt,	 d
																										  ecode(ts.bitmapped, 0, 'DICTIONARY', 'LO
																										  CAL')   as extent_management,    (case w
																										  hen b.active_count > 0	  then 'TR
																										  UE' else 'FALSE' end) 	     as is
																										  _backup  from sys.ts$ ts,	  (select
																										  dfile.ts#,		   sum( case when
																										  bkup.status = 'ACTIVE'
																											 then 1 else 0 end ) as active_cou
																										  nt	     from v$backup bkup, file$ dfi
																										  le	     where bkup.file# = dfile.file
																										  #	      and dfile.status$ = 2
																										    group by dfile.ts#) b  where ts.online
																										  $ != 3    and bitand(ts.flags, 2048) !=
																										  2048	  and ts.ts#  = b.ts#

  1332 10/10/18 21:58	 1    1.26 f71p3w4xx1pfc     3724264953 sqlplus@dbrocaix01.b	   0.02       0.02	 0.02	       0	  223		 0	      1        1	  1	   0	0.00   33 select /*+ top_sql_8286 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 c6awqs517jpj0     1780865333 			   0.02       0.00	 0.00	       0	   36		 1	      6       12	 12	   0	0.00   34 select /*+ index(idl_char$ i_idl_char1)
																										  +*/ piece#,length,piece from idl_char$ w
																										  here obj#=:1 and part=:2 and version=:3
																										  order by piece#

  1332 10/10/18 21:58	 1    1.26 agpd044zj368m     3821145811 			   0.02       0.02	 0.02	       0	  284		10	     45        1	  1	   0	0.00   35 insert into wrh$_system_event   (snap_id
																										  , dbid, instance_number, event_id, total
																										  _waits,    total_timeouts, time_waited_m
																										  icro)  select    :snap_id, :dbid, :insta
																										  nce_number, event_id, total_waits,	to
																										  tal_timeouts, time_waited_micro  from
																										   v$system_event  order by    event_id

  1332 10/10/18 21:58	 1    1.26 f3wcc30napt5a     3724264953 sqlplus@dbrocaix01.b	   0.02       0.02	 0.02	       0	  223		 0	      1        1	  1	   0	0.00   36 select /*+ top_sql_7198 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 71k5024zn7c9a     3286887626 			   0.02       0.02	 0.02	       0	  295		 0	      6        1	  1	   0	0.00   37 insert into wrh$_latch_misses_summary
																										  (snap_id, dbid, instance_number, parent_
																										  name, where_in_code,	  nwfail_count, sl
																										  eep_count, wtr_slp_count)  select    :sn
																										  ap_id, :dbid, :instance_number, parent_n
																										  ame, "WHERE",    sum(nwfail_count), sum(
																										  sleep_count), sum(wtr_slp_count)  from
																										    v$latch_misses  where    sleep_count >
																										   0  group by	  parent_name, "WHERE"	or
																										  der by    parent_name, "WHERE"

  1332 10/10/18 21:58	 1    1.26 83taa7kaw59c1     3765558045 			   0.02       0.00	 0.02	       0	  220		 0	    913       69	 21	   0	0.00   38 select name,intcol#,segcol#,type#,length
																										  ,nvl(precision#,0),decode(type#,2,nvl(sc
																										  ale,-127/*MAXSB1MINAL*/),178,scale,179,s
																										  cale,180,scale,181,scale,182,scale,183,s
																										  cale,231,scale,0),null$,fixedstorage,nvl
																										  (deflength,0),default$,rowid,col#,proper
																										  ty, nvl(charsetid,0),nvl(charsetform,0),
																										  spare1,spare2,nvl(spare3,0) from col$ wh
																										  ere obj#=:1 order by intcol#

  1332 10/10/18 21:58	 1    1.26 cvn54b7yz0s8u     2334475966 			   0.02       0.00	 0.00	       0	   62		 7	     20       12	 12	   0	0.00   39 select /*+ index(idl_ub1$ i_idl_ub11) +*
																										  / piece#,length,piece from idl_ub1$ wher
																										  e obj#=:1 and part=:2 and version=:3 ord
																										  er by piece#

  1332 10/10/18 21:58	 1    1.26 66gs90fyynks7     1662736584 			   0.02       0.02	 0.02	       0	  202		 0	      1        1	  1	   0	0.00   40 insert into wrh$_instance_recovery (snap
																										  _id, dbid, instance_number, recovery_est
																										  imated_ios, actual_redo_blks, target_red
																										  o_blks, log_file_size_redo_blks, log_chk
																										  pt_timeout_redo_blks, log_chkpt_interval
																										  _redo_blks, fast_start_io_target_redo_bl
																										  ks, target_mttr, estimated_mttr, ckpt_bl
																										  ock_writes, optimal_logfile_size, estd_c
																										  luster_available_time, writes_mttr, writ
																										  es_logfile_size, writes_log_checkpoint_s
																										  ettings, writes_other_settings, writes_a
																										  utotune, writes_full_thread_ckpt) select
																										   :snap_id, :dbid, :instance_number, reco
																										  very_estimated_ios, actual_redo_blks, ta
																										  rget_redo_blks, log_file_size_redo_blks,
																										   log_chkpt_timeout_redo_blks, log_chkpt_
																										  interval_redo_blks, fast_start_io_target
																										  _redo_blks, target_mttr, estimated_mttr,
																										   ckpt_block_writes, optimal_logfile_size
																										  , estd_cluster_available_time, writes_mt
																										  tr, writes_logfile_size, writes_log_chec
																										  kpoint_settings, writes_other_settings,
																										  writes_autotune, writes_full_thread_ckpt
																										   from v$instance_recovery

  1332 10/10/18 21:58	 1    1.26 5ngzsfstg8tmy     3317232865 			   0.01       0.00	 0.01	       0	  321		 0	    107      107	 19	   0	0.00   41 select o.owner#,o.name,o.namespace,o.rem
																										  oteowner,o.linkname,o.subname,o.dataobj#
																										  ,o.flags from obj$ o where o.obj#=:1

  1332 10/10/18 21:58	 1    1.26 7ng34ruy5awxq      306576078 			   0.01       0.00	 0.01	       0	  566		 0	     78       68	 18	   0	0.00   42 select i.obj#,i.ts#,i.file#,i.block#,i.i
																										  ntcols,i.type#,i.flags,i.property,i.pctf
																										  ree$,i.initrans,i.maxtrans,i.blevel,i.le
																										  afcnt,i.distkey,i.lblkkey,i.dblkkey,i.cl
																										  ufac,i.cols,i.analyzetime,i.samplesize,i
																										  .dataobj#,nvl(i.degree,1),nvl(i.instance
																										  s,1),i.rowcnt,mod(i.pctthres$,256),i.ind
																										  method#,i.trunccnt,nvl(c.unicols,0),nvl(
																										  c.deferrable#+c.valid#,0),nvl(i.spare1,i
																										  .intcols),i.spare4,i.spare2,i.spare6,dec
																										  ode(i.pctthres$,null,null,mod(trunc(i.pc
																										  tthres$/256),256)),ist.cachedblk,ist.cac
																										  hehit,ist.logicalread from ind$ i, ind_s
																										  tats$ ist, (select enabled, min(cols) un
																										  icols,min(to_number(bitand(defer,1))) de
																										  ferrable#,min(to_number(bitand(defer,4))
																										  ) valid# from cdef$ where obj#=:1 and en
																										  abled > 1 group by enabled) c where i.ob
																										  j#=c.enabled(+) and i.obj# = ist.obj#(+)
																										   and i.bo#=:1 order by i.obj#

  1332 10/10/18 21:58	 1    1.26 79uvsz1g1c168      187762771 			   0.01       0.01	 0.01	       0	  216		 0	      1        1	  1	   0	0.00   43 insert into wrh$_buffer_pool_statistics
																										    (snap_id, dbid, instance_number, id, n
																										  ame, block_size, set_msize,	 cnum_repl
																										  , cnum_write, cnum_set, buf_got, sum_wri
																										  te, sum_scan,    free_buffer_wait, write
																										  _complete_wait, buffer_busy_wait,    fre
																										  e_buffer_inspected, dirty_buffers_inspec
																										  ted, db_block_change,    db_block_gets,
																										  consistent_gets, physical_reads, physica
																										  l_writes)  select    :snap_id, :dbid, :i
																										  nstance_number, id, name, block_size, se
																										  t_msize,    cnum_repl, cnum_write, cnum_
																										  set, buf_got, sum_write, sum_scan,	fr
																										  ee_buffer_wait, write_complete_wait, buf
																										  fer_busy_wait,    free_buffer_inspected,
																										   dirty_buffers_inspected, db_block_chang
																										  e,	db_block_gets, consistent_gets, ph
																										  ysical_reads, physical_writes  from	 v
																										  $buffer_pool_statistics

  1332 10/10/18 21:58	 1    1.26 b0cxc52zmwaxs     3771206753 			   0.01       0.01	 0.01	       0	  187		 0	      2        1	  1	   0	0.00   44 insert into wrh$_sess_time_stats    (sna
																										  p_id, dbid, instance_number, session_typ
																										  e, min_logon_time,	 sum_cpu_time, sum
																										  _sys_io_wait, sum_user_io_wait) select :
																										  snap_id, :dbid, :instance_number, type,
																											 min(logon_time)  min_logon_time,
																										  sum(cpu_time)     cpu_time,	     sum(s
																										  ys_io_wait) sys_io_wait,    sum(user_io_
																										  wait) user_io_wait from  (select sid, se
																										  rial#,	  max(type)	  type,
																											 max(logon_time) logon_time,
																										      max(cpu_time)   cpu_time, 	 s
																										  um(case when kslcsclsname = 'System I/O'
																												     then kslcstim else 0
																										  end) as sys_io_wait,		sum(case w
																										  hen kslcsclsname ='User I/O'
																											 then kslcstim else 0 end) as user
																										  _io_wait   from     (select /*+ ordered
																										  */		 allsids.sid sid, allsids.
																										  serial# serial#,	       max(type)
																										       type,		 max(logon_time) l
																										  ogon_time,		 sum(kewsval)	 c
																										  pu_time	from (select type,
																										  allsids.sid, sess.ksuseser as serial#,
																											sess.ksuseltm as logon_time  from
																										    (select /*+ ordered index(p) */
																										      s.indx as sid,	       decode(l.ro
																										  le, 'reader',  'Logminer Reader',
																												     'preparer','Logminer
																										  Preparer',			      'bui
																										  lder', 'Logminer Builder') as type
																										  from x$logmnr_process l, x$ksupr p, x$ks
																										  use s      where l.role in ('reader','pr
																										  eparer','builder')	    and l.pid = p.
																										  indx	      and bitand(p.ksspaflg,1)!=0
																											 and p.ksuprpid = s.ksusepid	un
																										  ion all    select sid_knst as sid,
																										       decode(type_knst, 8,'STREAMS Captur
																										  e',				  7,'STREA
																										  MS Apply Reader',
																											2,'STREAMS Apply Server',
																												      1,'STREAMS Apply Coo
																										  rdinator') as type	  from x$knstcap
																										      where type_knst in (8,7,2,1)    unio
																										  n all    select indx as sid, (case when
																										  ksusepnm like '%(q00%)'
																											       then 'QMON Slaves'
																												       else 'QMON Coordina
																										  tor' end) as type	 from x$ksuse
																										   where ksusepnm like '%(q00%)'	 o
																										  r ksusepnm like '%(QMNC)'    union all
																										    select kwqpssid as sid, 'Propagation S
																										  ender' as type      from x$kwqps    unio
																										  n all    select kwqpdsid as sid, 'Propag
																										  ation Receiver' as type      from x$kwqp
																										  d) allsids, x$ksuse sess   where bitand(
																										  sess.ksspaflg,1) != 0     and bitand(ses
																										  s.ksuseflg,1) != 0	 and allsids.sid =
																										   sess.indx) allsids,		  x$kewsse
																										  sv sesv,	      x$kewssmap map
																										   where   allsids.sid = sesv.ksusenum
																											 and sesv.kewsnum = map.soffst
																											 and map.aggid	 = 1	       and
																										   (map.stype = 2 or map.stype = 3)
																										      and map.sname in ('DB CPU', 'backgro
																										  und cpu time')       group by sid, seria
																										  l#) allaggr,	   x$kslcs allio   where
																										     allaggr.sid = allio.kslcssid(+) and
																										     allio.kslcsclsname in ('System I/O',
																										  'User I/O')	group by allaggr.sid, alla
																										  ggr.serial#) group by type

  1332 10/10/18 21:58	 1    1.26 1tn90bbpyjshq      722989617 			   0.01       0.01	 0.01	       0	   87		 0	      0        1	  1	   0	0.00   45 UPDATE wrh$_tempfile tfh  SET (snap_id,
																										  filename, tsname) =	   (SELECT :lah_sn
																										  ap_id, tf.name name, ts.name tsname
																										    FROM v$tempfile tf, ts$ ts	     WHERE
																										   tf.ts# = ts.ts#	   AND tfh.file# =
																										   tf.file#	    AND tfh.creation_chang
																										  e# = tf.creation_change#)  WHERE (file#,
																										   creation_change#) IN        (SELECT tf.
																										  tfnum, to_number(tf.tfcrc_scn) creation_
																										  change#	    FROM x$kcctf tf
																										      WHERE tf.tfdup != 0)    AND dbid
																										  = :dbid    AND snap_id < :snap_id

  1332 10/10/18 21:58	 1    1.26 a73wbv1yu8x5c     2570921597 			   0.01       0.00	 0.01	       0	  680		 0	    463       71	  5	   0	0.00   46 select con#,type#,condlength,intcols,rob
																										  j#,rcon#,match#,refact,nvl(enabled,0),ro
																										  wid,cols,nvl(defer,0),mtime,nvl(spare1,0
																										  ) from cdef$ where obj#=:1

  1332 10/10/18 21:58	 1    1.26 6c06mfv01xt2h     2399945022 			   0.01       0.01	 0.01	       0	  201		 1	      1        1	  1	   0	0.00   47 update wrh$_seg_stat_obj sso	  set (ind
																										  ex_type, base_obj#, base_object_name, ba
																										  se_object_owner)	   =	    (selec
																										  t decode(ind.type#,
																										    1, 'NORMAL'||
																										   decode(bitand(ind.property, 4), 0, '',
																										  4, '/REV'),			   2, 'BIT
																										  MAP', 3, 'CLUSTER', 4, 'IOT - TOP',
																												    5, 'IOT - NESTED', 6,
																										  'SECONDARY', 7, 'ANSI',
																											8, 'LOB', 9, 'DOMAIN') as index_ty
																										  pe,		     base_obj.obj# as base
																										  _obj#,		base_obj.name as b
																										  ase_object_name,		  base_own
																										  er.name as base_object_owner	       fro
																										  m   sys.ind$	ind,		    sys.us
																										  er$ base_owner,		 sys.obj$
																										   base_obj	    where  ind.obj#	=
																										  sso.obj#	     and  ind.dataobj# = s
																										  so.dataobj#		and  ind.bo#
																										  = base_obj.obj#	    and  base_obj.
																										  owner# = base_owner.user#)  where  sso.d
																										  bid	     = :dbid	and  (obj#, dataob
																										  j#)	      in (select objn_kewrseg, obj
																										  d_kewrseg		   from x$kewrtseg
																										  stat ss1  where objtype_kewrseg = 1)
																										  and  sso.snap_id     = :lah_snap_id	 a
																										  nd  sso.object_type = 'INDEX'

  1332 10/10/18 21:58	 1    1.26 45jb7msfn4x4m      669385525 			   0.01 		 0.01	       0	    5		 0	      0        0	  1	   0	0.00   48 select  SADDR , SID , SERIAL# , AUDSID ,
																										   PADDR , USER# , USERNAME , COMMAND , OW
																										  NERID, TADDR , LOCKWAIT , STATUS , SERVE
																										  R , SCHEMA# , SCHEMANAME ,OSUSER , PROCE
																										  SS , MACHINE , TERMINAL , PROGRAM , TYPE
																										   , SQL_ADDRESS , SQL_HASH_VALUE, SQL_ID,
																										   SQL_CHILD_NUMBER , PREV_SQL_ADDR , PREV
																										  _HASH_VALUE , PREV_SQL_ID, PREV_CHILD_NU
																										  MBER , PLSQL_ENTRY_OBJECT_ID, PLSQL_ENTR
																										  Y_SUBPROGRAM_ID, PLSQL_OBJECT_ID, PLSQL_
																										  SUBPROGRAM_ID, MODULE , MODULE_HASH , AC
																										  TION , ACTION_HASH , CLIENT_INFO , FIXED
																										  _TABLE_SEQUENCE , ROW_WAIT_OBJ# , ROW_WA
																										  IT_FILE# , ROW_WAIT_BLOCK# , ROW_WAIT_RO
																										  W# , LOGON_TIME , LAST_CALL_ET , PDML_EN
																										  ABLED , FAILOVER_TYPE , FAILOVER_METHOD
																										  , FAILED_OVER, RESOURCE_CONSUMER_GROUP,
																										  PDML_STATUS, PDDL_STATUS, PQ_STATUS, CUR
																										  RENT_QUEUE_DURATION, CLIENT_IDENTIFIER,
																										  BLOCKING_SESSION_STATUS, BLOCKING_INSTAN
																										  CE,BLOCKING_SESSION,SEQ#, EVENT#,EVENT,P
																										  1TEXT,P1,P1RAW,P2TEXT,P2,P2RAW, P3TEXT,P
																										  3,P3RAW,WAIT_CLASS_ID, WAIT_CLASS#,WAIT_
																										  CLASS,WAIT_TIME, SECONDS_IN_WAIT,STATE,S
																										  ERVICE_NAME, SQL_TRACE, SQL_TRACE_WAITS,
																										   SQL_TRACE_BINDS from GV$SESSION where i
																										  nst_id = USERENV('Instance')

  1332 10/10/18 21:58	 1    1.26 asvzxj61dc5vs     3028786551 			   0.01       0.00	 0.01	       0	  325		 0	     75      125	125	   0	0.00   49 select timestamp, flags from fixed_obj$
																										  where obj#=:1

  1332 10/10/18 21:58	 1    1.26 04xtrk7uyhknh     2853959010 			   0.01       0.00	 0.01	       0	  125		 1	     41       42	 22	   0	0.00   50 select obj#,type#,ctime,mtime,stime,stat
																										  us,dataobj#,flags,oid$, spare1, spare2 f
																										  rom obj$ where owner#=:1 and name=:2 and
																										   namespace=:3 and remoteowner is null an
																										  d linkname is null and subname is null

  1332 10/10/18 21:58	 1    1.26 6769wyy3yf66f      299250003 			   0.01       0.00	 0.01	       0	  704		 0	    274       78	 20	   0	0.00   51 select pos#,intcol#,col#,spare1,bo#,spar
																										  e2 from icol$ where obj#=:1

  1332 10/10/18 21:58	 1    1.26 1gu8t96d0bdmu     3526770254 			   0.01       0.00	 0.01	       0	  242		 1	     59       59	 20	   0	0.00   52 select t.ts#,t.file#,t.block#,nvl(t.bobj
																										  #,0),nvl(t.tab#,0),t.intcols,nvl(t.cluco
																										  ls,0),t.audit$,t.flags,t.pctfree$,t.pctu
																										  sed$,t.initrans,t.maxtrans,t.rowcnt,t.bl
																										  kcnt,t.empcnt,t.avgspc,t.chncnt,t.avgrln
																										  ,t.analyzetime,t.samplesize,t.cols,t.pro
																										  perty,nvl(t.degree,1),nvl(t.instances,1)
																										  ,t.avgspc_flb,t.flbcnt,t.kernelcols,nvl(
																										  t.trigflag, 0),nvl(t.spare1,0),nvl(t.spa
																										  re2,0),t.spare4,t.spare6,ts.cachedblk,ts
																										  .cachehit,ts.logicalread from tab$ t, ta
																										  b_stats$ ts where t.obj#= :1 and t.obj#
																										  = ts.obj# (+)

  1332 10/10/18 21:58	 1    1.26 88brhumsyg325      146261960 			   0.01 		 0.01	       0	    6		 0	      0        0	  1	   0	0.00   53 select d.inst_id,d.kslldadr,la.latch#,d.
																										  kslldlvl,d.kslldnam,d.kslldhsh,	 l
																										  a.gets,la.misses,	   la.sleeps,la.im
																										  mediate_gets,la.immediate_misses,la.wait
																										  ers_woken,	    la.waits_holding_latch
																										  ,la.spin_gets,la.sleep1,la.sleep2,
																										    la.sleep3,la.sleep4,la.sleep5,la.sleep
																										  6,la.sleep7,la.sleep8,la.sleep9,
																										  la.sleep10, la.sleep11, la.wait_time	fr
																										  om x$kslld d,    (select kslltnum latch#
																										  ,	   sum(kslltwgt) gets,sum(kslltwff
																										  ) misses,sum(kslltwsl) sleeps,	su
																										  m(kslltngt) immediate_gets,sum(kslltnfa)
																										   immediate_misses,	    sum(kslltwkc)
																										  waiters_woken,sum(kslltwth) waits_holdin
																										  g_latch,	  sum(ksllthst0) spin_gets
																										  ,sum(ksllthst1) sleep1,sum(ksllthst2) sl
																										  eep2,        sum(ksllthst3) sleep3,sum(k
																										  sllthst4) sleep4,sum(ksllthst5) sleep5,
																											 sum(ksllthst6) sleep6,sum(ksllths
																										  t7) sleep7,sum(ksllthst8) sleep8,
																										   sum(ksllthst9) sleep9,sum(ksllthst10) s
																										  leep10,sum(ksllthst11) sleep11,	 s
																										  um(kslltwtt) wait_time    from x$ksllt g
																										  roup by kslltnum) la	where la.latch# =
																										  d.indx

  1332 10/10/18 21:58	 1    1.26 7rx9z1ddww1j2     2439216106 			   0.00 		 0.00	       0	    5		 0	      0        0	  1	   0	0.00   54 select SID, SERIAL#, APPLY#, APPLY_NAME,
																										  SERVER_ID, STATE, XIDUSN, XIDSLT, XIDSQN
																										  , COMMITSCN,DEP_XIDUSN, DEP_XIDSLT, DEP_
																										  XIDSQN, DEP_COMMITSCN, MESSAGE_SEQUENCE,
																										  TOTAL_ASSIGNED, TOTAL_ADMIN, TOTAL_ROLLB
																										  ACKS,TOTAL_MESSAGES_APPLIED, APPLY_TIME,
																										   APPLIED_MESSAGE_NUMBER, APPLIED_MESSAGE
																										  _CREATE_TIME,ELAPSED_DEQUEUE_TIME, ELAPS
																										  ED_APPLY_TIME from GV$STREAMS_APPLY_SERV
																										  ER where INST_ID = USERENV('Instance')

  1332 10/10/18 21:58	 1    1.26 6aq34nj2zb2n7     2874733959 			   0.00       0.00	 0.00	       0	  130		 0	      0       65	 20	   0	0.00   55 select col#, grantee#, privilege#,max(mo
																										  d(nvl(option$,0),2)) from objauth$ where
																										   obj#=:1 and col# is not null group by p
																										  rivilege#, col#, grantee# order by col#,
																										   grantee#

  1332 10/10/18 21:58	 1    1.26 17k8dh7vntd3w      669385525 			   0.00 		 0.00	       0	    3		 0	      0        0	  1	   0	0.00   56 select s.inst_id,s.addr,s.indx,s.ksusese
																										  r,s.ksuudses,s.ksusepro,s.ksuudlui,s.ksu
																										  udlna,s.ksuudoct,s.ksusesow, decode(s.ks
																										  usetrn,hextoraw('00'),null,s.ksusetrn),d
																										  ecode(s.ksqpswat,hextoraw('00'),null,s.k
																										  sqpswat),decode(bitand(s.ksuseidl,11),1,
																										  'ACTIVE',0,decode(bitand(s.ksuseflg,4096
																										  ),0,'INACTIVE','CACHED'),2,'SNIPED',3,'S
																										  NIPED', 'KILLED'),decode(s.ksspatyp,1,'D
																										  EDICATED',2,'SHARED',3,'PSEUDO','NONE'),
																										    s.ksuudsid,s.ksuudsna,s.ksuseunm,s.ksu
																										  sepid,s.ksusemnm,s.ksusetid,s.ksusepnm,
																										  decode(bitand(s.ksuseflg,19),17,'BACKGRO
																										  UND',1,'USER',2,'RECURSIVE','?'), s.ksus
																										  esql, s.ksusesqh, s.ksusesqi, decode(s.k
																										  susesch, 65535, to_number(null), s.ksuse
																										  sch),  s.ksusepsq, s.ksusepha, s.ksuseps
																										  i,  decode(s.ksusepch, 65535, to_number(
																										  null), s.ksusepch),  decode(s.ksusepeo,0
																										  ,to_number(null),s.ksusepeo),  decode(s.
																										  ksusepeo,0,to_number(null),s.ksusepes),
																										   decode(s.ksusepco,0,to_number(null),s.k
																										  susepco),  decode(s.ksusepco,0,to_number
																										  (null),s.ksusepcs),  s.ksuseapp, s.ksuse
																										  aph, s.ksuseact, s.ksuseach, s.ksusecli,
																										   s.ksusefix, s.ksuseobj, s.ksusefil, s.k
																										  suseblk, s.ksuseslt, s.ksuseltm, s.ksuse
																										  ctm,decode(bitand(s.ksusepxopt, 12),0,'N
																										  O','YES'),decode(s.ksuseft, 2,'SESSION',
																										   4,'SELECT',8,'TRANSACTIONAL','NONE'),de
																										  code(s.ksusefm,1,'BASIC',2,'PRECONNECT',
																										  4,'PREPARSE','NONE'),decode(s.ksusefs, 1
																										  , 'YES', 'NO'),s.ksusegrp,decode(bitand(
																										  s.ksusepxopt,4),4,'ENABLED',decode(bitan
																										  d(s.ksusepxopt,8),8,'FORCED','DISABLED')
																										  ),decode(bitand(s.ksusepxopt,2),2,'FORCE
																										  D',decode(bitand(s.ksusepxopt,1),1,'DISA
																										  BLED','ENABLED')),decode(bitand(s.ksusep
																										  xopt,32),32,'FORCED',decode(bitand(s.ksu
																										  sepxopt,16),16,'DISABLED','ENABLED')),
																										  s.ksusecqd, s.ksuseclid, decode(s.ksuseb
																										  locker,4294967295,'UNKNOWN',	4294967294
																										  , 'UNKNOWN',4294967293,'UNKNOWN',4294967
																										  292,'NO HOLDER',  4294967291,'NOT IN WAI
																										  T','VALID'),decode(s.ksuseblocker, 42949
																										  67295,to_number(null),4294967294,to_numb
																										  er(null), 4294967293,to_number(null), 42
																										  94967292,to_number(null),4294967291,	to
																										  _number(null),bitand(s.ksuseblocker, 214
																										  7418112)/65536),decode(s.ksuseblocker, 4
																										  294967295,to_number(null),4294967294,to_
																										  number(null), 4294967293,to_number(null)
																										  , 4294967292,to_number(null),4294967291,
																										    to_number(null),bitand(s.ksuseblocker,
																										   65535)),s.ksuseseq, s.ksuseopc,e.ksledn
																										  am, e.ksledp1, s.ksusep1,s.ksusep1r,e.ks
																										  ledp2, s.ksusep2,s.ksusep2r,e.ksledp3,s.
																										  ksusep3,s.ksusep3r,e.ksledclassid,  e.ks
																										  ledclass#, e.ksledclass, decode(s.ksuset
																										  im,0,0,-1,-1,-2,-2, decode(round(s.ksuse
																										  tim/10000),0,-1,round(s.ksusetim/10000))
																										  ), s.ksusewtm,decode(s.ksusetim, 0, 'WAI
																										  TING', -2, 'WAITED UNKNOWN TIME',  -1, '
																										  WAITED SHORT TIME',	decode(round(s.ksu
																										  setim/10000),0,'WAITED SHORT TIME','WAIT
																										  ED KNOWN TIME')),s.ksusesvc, decode(bita
																										  nd(s.ksuseflg2,32),32,'ENABLED','DISABLE
																										  D'),decode(bitand(s.ksuseflg2,64),64,'TR
																										  UE','FALSE'),decode(bitand(s.ksuseflg2,1
																										  28),128,'TRUE','FALSE')from x$ksuse s, x
																										  $ksled e where bitand(s.ksspaflg,1)!=0 a
																										  nd bitand(s.ksuseflg,1)!=0 and s.ksuseop
																										  c=e.indx

  1332 10/10/18 21:58	 1    1.26 7tc5u8t3mmzgf     2144485289 			   0.00       0.00	 0.00	       0	  180		 0	      0      180	 17	   0	0.00   57 select cachedblk, cachehit, logicalread
																										  from tab_stats$ where obj#=:1

  1332 10/10/18 21:58	 1    1.26 ghvnum1dfm05q     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   58 select /*+ top_sql_9331 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 cqgv56fmuj63x     1310495014 			   0.00       0.00	 0.00	       0	  156		 1	     39       22	 22	   0	0.00   59 select owner#,name,namespace,remoteowner
																										  ,linkname,p_timestamp,p_obj#, nvl(proper
																										  ty,0),subname,d_attrs from dependency$ d
																										  , obj$ o where d_obj#=:1 and p_obj#=obj#
																										  (+) order by order#

  1332 10/10/18 21:58	 1    1.26 2ta3r31t0z08a     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   60 select /*+ top_sql_7523 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 59kybrhwdk040     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   61 select /*+ top_sql_9853 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 9wf93m8rau04d     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   62 select /*+ top_sql_8652 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 fuhanmqynt02p     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   63 select /*+ top_sql_9743 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 2q93zsrvbdw48     2874733959 			   0.00       0.00	 0.00	       0	  136		 0	      6       65	 20	   0	0.00   64 select grantee#,privilege#,nvl(col#,0),m
																										  ax(mod(nvl(option$,0),2))from objauth$ w
																										  here obj#=:1 group by grantee#,privilege
																										  #,nvl(col#,0) order by grantee#

  1332 10/10/18 21:58	 1    1.26 1dzkrjdvjt03n     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   65 select /*+ top_sql_8498 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 0s5uzug7cr029     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   66 select /*+ top_sql_8896 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 gq6kp76f1307x     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   67 select /*+ top_sql_8114 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 bfa3qt29jg07b     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   68 select /*+ top_sql_9608 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 9nk1jwamsy02n     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   69 select /*+ top_sql_9724 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 2sry32gac2079     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   70 select /*+ top_sql_7316 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 atp84rb53u072     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   71 select /*+ top_sql_9091 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 f8pavn1bvsj7t     1224215794 			   0.00       0.00	 0.00	       0	  144		 0	      1       71	 15	   0	0.00   72 select con#,obj#,rcon#,enabled,nvl(defer
																										  ,0) from cdef$ where robj#=:1

  1332 10/10/18 21:58	 1    1.26 1wb6wx2nb8093     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   73 select /*+ top_sql_9446 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 3czfc573u505f     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   74 select /*+ top_sql_9702 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 c31xpspd8n08k     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   75 select /*+ top_sql_8045 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 cbdfcfcp1pgtp      142600749 			   0.00       0.00	 0.00	       0	   74		 0	     74       37	 37	   0	0.00   76 select intcol#, col# , type#, spare1, se
																										  gcol#, charsetform from partcol$  where
																										  obj# = :1 order by pos#

  1332 10/10/18 21:58	 1    1.26 3k07s1fhv6043     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   77 select /*+ top_sql_9321 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 0qh6dbs79n06s     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   78 select /*+ top_sql_9052 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 9xt7tfmzut065     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   79 select /*+ top_sql_9429 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 28hu85p69d047     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   80 select /*+ top_sql_8978 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 4w2jxfhrfh037     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   81 select /*+ top_sql_7464 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 5x83v19wj302c     2439216106 			   0.00 		 0.00	       0	    3		 0	      0        0	  1	   0	0.00   82 select inst_id,sid_knst,serial_knst,appl
																										  ynum_knstasl, applyname_knstasl,slavid_k
																										  nstasl,decode(state_knstasl,0,'IDLE',1,'
																										  POLL SHUTDOWN',2,'RECORD LOW-WATERMARK',
																										  3,'ADD PARTITION',4,'DROP PARTITION',5,'
																										  EXECUTE TRANSACTION',6,'WAIT COMMIT',7,'
																										  WAIT DEPENDENCY',8,'GET TRANSACTIONS',9,
																										  'WAIT FOR NEXT CHUNK',12,'ROLLBACK TRANS
																										  ACTION',13,'TRANSACTION CLEANUP',14,'REQ
																										  UEST UA SESSION',15,'INITIALIZING'), xid
																										  _usn_knstasl,xid_slt_knstasl,xid_sqn_kns
																										  tasl,cscn_knstasl,depxid_usn_knstasl,dep
																										  xid_slt_knstasl,depxid_sqn_knstasl,depcs
																										  cn_knstasl,msg_num_knstasl,total_assigne
																										  d_knstasl,total_admin_knstasl,total_roll
																										  backs_knstasl,total_msg_knstasl, last_ap
																										  ply_time_knstasl, last_apply_msg_num_kns
																										  tasl,last_apply_msg_time_knstasl,elapsed
																										  _dequeue_time_knstasl, elapsed_apply_tim
																										  e_knstasl from x$knstasl x where type_kn
																										  st=2 and exists (select 1 from v$session
																										   s where s.sid=x.sid_knst and s.serial#=
																										  x.serial_knst)

  1332 10/10/18 21:58	 1    1.26 5kzjxrqgqv03x     3724264953 sqlplus@dbrocaix01.b	   0.00       0.00	 0.00	       0	  223		 0	      1        1	  1	   0	0.00   83 select /*+ top_sql_6849 */ count(*) from
								xxxxxxx.com (TNS V1-																   t1
								V3)

  1332 10/10/18 21:58	 1    1.26 8hd36umbhpgsz     3362549386 			   0.00       0.00	 0.00	       0	   74		 0	     37       37	 37	   0	0.00   84 select parttype, partcnt, partkeycols, f
																										  lags, defts#, defpctfree, defpctused, de
																										  finitrans, defmaxtrans, deftiniexts, def
																										  extsize, defminexts, defmaxexts, defextp
																										  ct, deflists, defgroups, deflogging, spa
																										  re1, mod(spare2, 256) subparttype, mod(t
																										  runc(spare2/256), 256) subpartkeycols, m
																										  od(trunc(spare2/65536), 65536) defsubpar
																										  tcnt, mod(trunc(spare2/4294967296), 256)
																										   defhscflags from partobj$ where obj# =
																										  :1

  1332 10/10/18 21:58	 1    1.26 ga9j9xk5cy9s0     1516415349 			   0.00       0.00	 0.00	       0	   55		 0	     18       12	 12	   0	0.00   85 select /*+ index(idl_sb4$ i_idl_sb41) +*
																										  / piece#,length,piece from idl_sb4$ wher
																										  e obj#=:1 and part=:2 and version=:3 ord
																										  er by piece#

  1332 10/10/18 21:58	 1    1.26 1fkh93md0802n     2485227045 			   0.00 		 0.00	       0	    5		 0	      0        0	  1	   0	0.00   86 select   LOW_OPTIMAL_SIZE,	       HIG
																										  H_OPTIMAL_SIZE,	    OPTIMAL_EXECUT
																										  IONS, 	  ONEPASS_EXECUTIONS,
																											MULTIPASSES_EXECUTIONS,
																										  TOTAL_EXECUTIONS    from   GV$SQL_WORKAR
																										  EA_HISTOGRAM	  where  INST_ID = USERENV
																										  ('Instance')

  1332 10/10/18 21:58	 1    1.26 8swypbbr0m372      893970548 			   0.00       0.00	 0.00	       0	  106		 0	     31       22	 22	   0	0.00   87 select order#,columns,types from access$
																										   where d_obj#=:1

  1332 10/10/18 21:58	 1    1.26 dpvv2ua0tfjcv      467914355 			   0.00       0.00	 0.00	       0	   19		 0	      0       19	 18	   0	0.00   88 select cachedblk, cachehit, logicalread
																										  from ind_stats$ where obj#=:1

  1332 10/10/18 21:58	 1    1.26 6qz82dptj0qr7     2819763574 			   0.00       0.00	 0.00	       0	   16		 0	      4        5	  5	   0	0.00   89 select l.col#, l.intcol#, l.lobj#, l.ind
																										  #, l.ts#, l.file#, l.block#, l.chunk, l.
																										  pctversion$, l.flags, l.property, l.rete
																										  ntion, l.freepools from lob$ l where l.o
																										  bj# = :1 order by l.intcol# asc

  1332 10/10/18 21:58	 1    1.26 b1wc53ddd6h3p     1637390370 			   0.00       0.00	 0.00	       0	    9		 0	      3        3	  3	   0	0.00   90 select audit$,options from procedure$ wh
																										  ere obj#=:1


90 rows selected.

}}}


''Even if not joined with dba_hist_sqltext, it still shows 90 rows''
{{{

select snap_id, sql_id, module, elap, cput, exec, time_rank
from
                   (
                   select s0.snap_id,
                          e.sql_id, 
                          max(e.module) module,
                          sum(e.elapsed_time_delta)/1000000 elap,
                          sum(e.cpu_time_delta)/1000000     cput, 
                          sum(e.executions_delta)   exec,
                          DENSE_RANK() OVER (
                          PARTITION BY s0.snap_id ORDER BY e.elapsed_time_delta DESC) time_rank
                   from 
                       dba_hist_snapshot s0,
                       dba_hist_sqlstat e
                       where e.dbid            = s0.dbid
                       and e.instance_number   = s0.instance_number
                       and e.snap_id           = s0.snap_id + 1
                   group by s0.snap_id, e.sql_id, e.elapsed_time_delta
                   )
where 
-- time_rank <= 5 and 
snap_id in (1332)


   SNAP_ID SQL_ID	 MODULE 								ELAP	   CPUT       EXEC  TIME_RANK
---------- ------------- ---------------------------------------------------------------- ---------- ---------- ---------- ----------
      1332 404qh4yx36y1v								    9.254373   9.155145      10000	    1
      1332 bunssq950snhf								     .801489	.801489 	 1	    2
      1332 7vgmvmy8vvb9s								     .083412	.083412 	 1	    3
      1332 6hwjmjgrpsuaa								     .050253	.015212 	 1	    4
      1332 84qubbrsr0kfn								     .044464	.044464 	 1	    5
      1332 db78fxqxwxt7r								      .04239	.031295        379	    6
      1332 96g93hntrzjtr								     .040821	.040821       1346	    7
      1332 130dvvr5s8bgn								      .04013	 .04013 	18	    8
      1332 6yd53x1zjqts9 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)				.039	.016832 	 1	    9
      1332 70utgu2587mhs								     .035026	.012265 	 1	   10
      1332 c3zymn7x3k6wy								     .033542	.033542 	19	   11
      1332 bpxnmunkcywzg sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .033223	.003645 	 1	   12
      1332 3252fkazwq930								     .033124	.022902 	 1	   13
      1332 fdxrh8tzyw0yw								     .030767	.028199 	 1	   14
      1332 7k6zct1sya530								     .028502	.028502 	 1	   15
      1332 7qjhf5dzmazsr								     .028234	.006275 	 1	   16
      1332 32wqka2zwvu65								     .025672	.025672 	 1	   17
      1332 53saa2zkr6wc3								     .025226	.025226        463	   18
      1332 4qju99hqmn81x								     .024763	.024763 	 1	   19
      1332 32whwm2babwpt								     .022436	.022436 	 1	   20
      1332 fktqvw2wjxdxc								     .022079	.022079 	 1	   21
      1332 2ym6hhaq30r73								      .02171	 .02171        476	   22
      1332 71y370j6428cb								     .021454	.017777 	 1	   23
      1332 f9nzhpn9854xz								     .021191	.020515 	 1	   24
      1332 bqnn4c3gjtmgu								      .02018	 .02018 	 1	   25
      1332 39m4sx9k63ba2								     .019843	.008332 	12	   26
      1332 7fa2r0xkfbs6b sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .019497	.015929 	 1	   27
      1332 1uk5m5qbzj1vt sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .019044	.019044 	 0	   28
      1332 cp3gpd7z878w8								     .018802	.018802 	 1	   29
      1332 dsd2yqyggtc59								     .018707	.016998 	 0	   30
      1332 bu95jup1jp5t3								     .018591	.018301 	 1	   31
      1332 350myuyx0t1d6								      .01829	.017399 	 1	   32
      1332 f71p3w4xx1pfc sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			      .01743	.016771 	 1	   33
      1332 c6awqs517jpj0								      .01715	.004729 	12	   34
      1332 agpd044zj368m								       .0166	.016206 	 1	   35
      1332 f3wcc30napt5a sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .016506	.016506 	 1	   36
      1332 71k5024zn7c9a								     .016167	.016167 	 1	   37
      1332 83taa7kaw59c1								     .016105	.016105 	69	   38
      1332 cvn54b7yz0s8u								     .015882	.004263 	12	   39
      1332 66gs90fyynks7								     .015312	.015312 	 1	   40
      1332 5ngzsfstg8tmy								     .013302	.013302        107	   41
      1332 7ng34ruy5awxq								     .013015	.013015 	68	   42
      1332 79uvsz1g1c168								     .012924	.012924 	 1	   43
      1332 b0cxc52zmwaxs								      .01172	.011716 	 1	   44
      1332 1tn90bbpyjshq								     .010358	.010358 	 1	   45
      1332 a73wbv1yu8x5c								     .009111	.009111 	71	   46
      1332 6c06mfv01xt2h								     .008496	.008496 	 1	   47

   SNAP_ID SQL_ID	 MODULE 								ELAP	   CPUT       EXEC  TIME_RANK
---------- ------------- ---------------------------------------------------------------- ---------- ---------- ---------- ----------
      1332 45jb7msfn4x4m								     .007906	.007906 	 0	   48
      1332 asvzxj61dc5vs								     .007837	.007837        125	   49
      1332 04xtrk7uyhknh								      .00661	 .00661 	42	   50
      1332 6769wyy3yf66f								     .006371	.006371 	78	   51
      1332 1gu8t96d0bdmu								     .006356	.006356 	59	   52
      1332 88brhumsyg325								     .005116	.005116 	 0	   53
      1332 7rx9z1ddww1j2								     .004431	.004431 	 0	   54
      1332 6aq34nj2zb2n7								     .004392	.004392 	65	   55
      1332 17k8dh7vntd3w								     .003737	.003737 	 0	   56
      1332 7tc5u8t3mmzgf								     .003626	.003626        180	   57
      1332 ghvnum1dfm05q sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .003133	.003133 	 1	   58
      1332 cqgv56fmuj63x								     .003087	.003087 	22	   59
      1332 2ta3r31t0z08a sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002808	.002808 	 1	   60
      1332 59kybrhwdk040 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002756	.002756 	 1	   61
      1332 9wf93m8rau04d sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002671	.002671 	 1	   62
      1332 fuhanmqynt02p sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002665	.002665 	 1	   63
      1332 2q93zsrvbdw48								     .002652	.002652 	65	   64
      1332 1dzkrjdvjt03n sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002596	.002596 	 1	   65
      1332 0s5uzug7cr029 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002508	.002508 	 1	   66
      1332 gq6kp76f1307x sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002491	.002491 	 1	   67
      1332 bfa3qt29jg07b sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002475	.002475 	 1	   68
      1332 9nk1jwamsy02n sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002465	.002465 	 1	   69
      1332 2sry32gac2079 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002461	.002461 	 1	   70
      1332 atp84rb53u072 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002449	.002449 	 1	   71
      1332 f8pavn1bvsj7t								     .002441	.002441 	71	   72
      1332 1wb6wx2nb8093 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			      .00243	 .00243 	 1	   73
      1332 3czfc573u505f sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002369	.002369 	 1	   74
      1332 c31xpspd8n08k sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002352	.002352 	 1	   75
      1332 cbdfcfcp1pgtp								     .002347	.002347 	37	   76
      1332 3k07s1fhv6043 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002305	.002305 	 1	   77
      1332 0qh6dbs79n06s sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002299	.002299 	 1	   78
      1332 9xt7tfmzut065 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002274	.002274 	 1	   79
      1332 28hu85p69d047 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002269	.002269 	 1	   80
      1332 4w2jxfhrfh037 sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002238	.002238 	 1	   81
      1332 5x83v19wj302c								     .002196	.002196 	 0	   82
      1332 5kzjxrqgqv03x sqlplus@dbrocaix01.xxxxxxxx.com (TNS V1-V3)			     .002108	.002108 	 1	   83
      1332 8hd36umbhpgsz								     .002057	.002057 	37	   84
      1332 ga9j9xk5cy9s0								     .002001	.002001 	12	   85
      1332 1fkh93md0802n								     .001568	.001568 	 0	   86
      1332 8swypbbr0m372								     .001303	.001303 	22	   87
      1332 dpvv2ua0tfjcv								     .000683	.000683 	19	   88
      1332 6qz82dptj0qr7								     .000329	.000329 	 5	   89
      1332 b1wc53ddd6h3p								     .000242	.000242 	 3	   90

90 rows selected.

}}}


''Even if you query dba_hist_sqlstat alone, it will still return 90 rows''
{{{

select count(*) from dba_hist_sqlstat where snap_id = 1333   -- returns 90
                  -- NOTE: the values from snap 1333 are actually what you see in the AWR report 1332-1333, so it's just getting the end value.
                  -- To label those rows with 1332 instead, I have to do the SQL trick e.snap_id = s0.snap_id + 1
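
-- (sketch) why the deltas for AWR interval 1332-1333 are stored under snap 1333:
-- each snapshot row closes the interval that ends at it, which you can see from
-- the snapshot boundaries themselves
select snap_id, begin_interval_time, end_interval_time
from dba_hist_snapshot
where snap_id in (1332, 1333)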

select count(*)                                              -- also returns 90
from
                   (
                   select s0.snap_id,
                          e.sql_id, 
                          max(e.module) module,
                          sum(e.elapsed_time_delta)/1000000 elap,
                          sum(e.cpu_time_delta)/1000000     cput, 
                          sum(e.executions_delta)   exec,
                          DENSE_RANK() OVER (
                          PARTITION BY s0.snap_id ORDER BY e.elapsed_time_delta DESC) time_rank
                   from 
                       dba_hist_snapshot s0,
                       dba_hist_sqlstat e
                       where e.dbid            = s0.dbid
                       and e.instance_number   = s0.instance_number
                       and e.snap_id           = s0.snap_id + 1
                   group by s0.snap_id, e.sql_id, e.elapsed_time_delta
                   )
where 
-- time_rank <= 5 and 
snap_id in (1332)


select * from dba_hist_sqlstat where snap_id = 1333 order by elapsed_time_delta desc -- will show SQL_ID 404qh4yx36y1v, bunssq950snhf, 7vgmvmy8vvb9s, 6hwjmjgrpsuaa, 84qubbrsr0kfn as top five

select snap_id, sql_id, module, elap, cput, exec, time_rank -- will show SQL_ID 404qh4yx36y1v, bunssq950snhf, 7vgmvmy8vvb9s, 6hwjmjgrpsuaa, 84qubbrsr0kfn as top five
from
                   (
                   select s0.snap_id,
                          e.sql_id, 
                          max(e.module) module,
                          sum(e.elapsed_time_delta)/1000000 elap,
                          sum(e.cpu_time_delta)/1000000     cput, 
                          sum(e.executions_delta)   exec,
                          DENSE_RANK() OVER (
                          PARTITION BY s0.snap_id ORDER BY e.elapsed_time_delta DESC) time_rank
                   from 
                       dba_hist_snapshot s0,
                       dba_hist_sqlstat e
                       where e.dbid            = s0.dbid
                       and e.instance_number   = s0.instance_number
                       and e.snap_id           = s0.snap_id + 1
                   group by s0.snap_id, e.sql_id, e.elapsed_time_delta
                   )
where 
-- time_rank <= 5 and 
snap_id in (1332)


select sql_id from dba_hist_sqlstat where snap_id = 1333    -- the whole MINUS query returns zero rows, i.e. both queries hit the same SQL_ID set
minus
select sql_id
from
                   (
                   select s0.snap_id,
                          e.sql_id, 
                          max(e.module) module,
                          sum(e.elapsed_time_delta)/1000000 elap,
                          sum(e.cpu_time_delta)/1000000     cput, 
                          sum(e.executions_delta)   exec,
                          DENSE_RANK() OVER (
                          PARTITION BY s0.snap_id ORDER BY e.elapsed_time_delta DESC) time_rank
                   from 
                       dba_hist_snapshot s0,
                       dba_hist_sqlstat e
                       where e.dbid            = s0.dbid
                       and e.instance_number   = s0.instance_number
                       and e.snap_id           = s0.snap_id + 1
                   group by s0.snap_id, e.sql_id, e.elapsed_time_delta
                   )
where 
-- time_rank <= 5 and 
snap_id in (1332)
}}}




''Some other queries I used''

{{{
select count(*) from DBA_HIST_SQLSTAT where snap_id = 1349
select count(*) from dba_hist_sqltext where snap_id = 1349   -- note: dba_hist_sqltext has no SNAP_ID column, so this one errors

-- 50 rows appeared
select * from dba_hist_sqltext where lower(sql_text) like '% top_sql_%'

-- starts at 5505 and ends at 9999, with 8KB sharable_mem per cursor; that is with dynamic sampling 0
select * from v$sql where lower(sql_text) like '%top_sql%' order by sql_text

-- starts at 5505 and ends at 9999; that is with dynamic sampling 0
select * from v$sqlstats where sql_text like '%top_sql%' order by sql_text

-- starts at 5505 and ends at 9999; that is with dynamic sampling 0
select * from v$sqlarea where sql_text like '%top_sql%' order by sql_text

-- starts at 2890 and ends at 9999 with dynamic sampling 0; with dynamic sampling 2 it starts at 8266 and ends at 9999
select * from v$sqltext where sql_text like '%top_sql%' order by 6  -- starts with 8266, ends at 0
}}}

''Also, on the row counts after the 10K executions''
{{{
select count(*) from v$sql  -- 5853
select count(*) from v$sqlstats -- 5916
select count(*) from v$sqlarea  -- 5839
select count(*) from dba_hist_sqltext  -- this view does not have SNAP_ID, but the total row count is 3243
}}}
http://alternativeto.net/software/balsamiq-mockups/


<<showtoc>>

! workflow 
! installation and upgrade
! commands
! performance and troubleshooting
!! sizing and capacity planning
!! benchmark
!! modeling 


! high availability 
! security



! time series mongodb 
https://www.mongodb.com/blog/post/time-series-data-and-mongodb-part-2-schema-design-best-practices
https://medium.com/oracledevs/build-a-go-lang-based-rest-api-on-top-of-cassandra-3ac5d9316852
https://www3.nd.edu/~dial/publications/xian2018restful.pdf


! xxxxxxxxxxxxxxxxxxxxxxxx






! installation 
!! on 14.04 ubuntu 
https://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
https://www.howtoforge.com/tutorial/install-mongodb-on-ubuntu-14.04/ <-- good stuff




! references 

http://rsmith.co/2012/11/05/mongodb-gotchas-and-how-to-avoid-them/
picking a data store
http://www.sarahmei.com/blog/2013/11/11/why-you-should-never-use-mongodb/



! SQL to MongoDB Mapping Chart
http://stackoverflow.com/questions/11507995/sql-view-in-mongodb
https://docs.mongodb.com/manual/reference/sql-comparison/
https://docs.mongodb.com/manual/reference/sql-aggregation-comparison/







http://www.evernote.com/shard/s48/sh/b4ed1850-abe5-4c21-8871-3a6d4584a456/f142132af40954b6b385f531738a468a
http://www.evernote.com/shard/s48/sh/4e630767-1a54-44b0-a885-7a8ba2bb3afe/938bc5de1fe3dcc7bb2249fd42927684
How to mount an LVM partition on another system

http://www.techbytes.ca/techbyte118.html  <-- first article i saw
http://tldp.org/HOWTO/LVM-HOWTO/recipemovevgtonewsys.html <-- lvm howto
http://forgetmenotes.blogspot.com/2009/06/how-to-mount-lvm-partition.html
http://www.thegibson.org/blog/archives/467 <-- "WARNING: Duplicate VG name"
http://forums.fedoraforum.org/archive/index.php/t-183575.html
http://www.linuxquestions.org/questions/linux-general-1/how-to-rename-a-vol-group-433993/ <-- rename VG
http://forums13.itrc.hp.com/service/forums/questionanswer.do?admit=109447627+1286430324270+28353475&threadId=1133855
http://www.gossamer-threads.com/lists/gentoo/user/215444
http://www.linuxquestions.org/questions/linux-general-1/lvm-stop-functioning-after-unmounting-usr-660010/
http://evuraan.blogspot.com/2005/05/sbinlvmstatic-in-rhel40-systems.html <-- lvm.static

ubuntu
http://www.linuxquestions.org/questions/fedora-35/how-can-i-mount-lvm-partition-in-ubuntu-569507/
http://www.linux-sxs.org/storage/fedora2ubuntu.html
http://www.brandonhutchinson.com/Mounting_a_Linux_LVM_volume.html
http://linux.byexamples.com/archives/321/fstab-with-uuid/
http://ubuntuforums.org/showthread.php?t=283131 <-- great detail
http://www.g-loaded.eu/2009/01/04/always-use-a-block-device-label-or-its-uuid-in-fstab/
How To Setup LUN Persistence in non-Multipathing environment [ID 1076299.1]
How to Configure Oracle Enterprise Linux to be Highly Available Using RAID1 [ID 759260.1]	
http://husnusensoy.wordpress.com/2008/06/13/moving-any-file-between-asm-diskgroups-1/
{{{

mkdir backup

# UP TO PER DAY TIME FRAME
find -type f -name '*' -printf "mkdir -p backup/%TY%Tm%Td\n" | sort | uniq | sh
find -type f -name '*' -printf "mv %h/%f backup/%TY%Tm%Td/%f\n" | sh

# UP TO PER HOUR TIME FRAME
find -type f -name '*' -printf "mkdir -p backup/%TY%Tm%Td%TH\n" | sort | uniq | sh
find -type f -name '*' -printf "mv %h/%f backup/%TY%Tm%Td%TH/%f\n" | sh

# UP TO PER MINUTE TIME FRAME
find -type f -name '*' -printf "mkdir -p backup/%TY%Tm%Td%TH%TM\n" | sort | uniq | sh
find -type f -name '*' -printf "mv %h/%f backup/%TY%Tm%Td%TH%TM/%f\n" | sh


# UP TO PER MINUTE TIME FRAME, FILTERED TO THE 2012031912 TIMESTAMP (the HCMPRD6 files)
find -type f -name '*' -printf "mkdir -p backup/%TY%Tm%Td%TH%TM\n" | grep 2012031912 | sh
find -type f -name '*' -printf "mv %h/%f backup/%TY%Tm%Td%TH%TM/%f\n" | grep 2012031912 | sh
}}}
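The generated-script trick above breaks if a filename contains whitespace (the mv lines get word-split by sh). The same per-day archiving can be done in one pass; a sketch, assuming GNU find, with ./backup pruned so a rerun won't re-move already archived files:

```shell
backup_by_day() {
  # one pass, no generated script; GNU find assumed
  # ./backup is pruned so a rerun will not descend into archived files
  find . -path ./backup -prune -o -type f -printf '%TY%Tm%Td\t%p\n' |
  while IFS="$(printf '\t')" read -r day f; do
    mkdir -p "backup/$day"
    mv "$f" "backup/$day/"
  done
}
# usage: run from the directory to be archived:  backup_by_day
```

The function reads find's tab-separated output directly, so filenames with spaces survive; swap %TY%Tm%Td for %TY%Tm%Td%TH etc. for the per-hour and per-minute variants.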


http://www.unix.com/unix-dummies-questions-answers/144957-move-file-based-time-stamp.html


! clean up of aud files 
{{{
cd /u01/app/oracle/admin/dw/adump/
rm -rf *    <-- will error with "Argument list too long"

[root@desktopserver adump]# ls -1 | wc -l
9602

# UP TO PER DAY TIME FRAME
find -type f -name '*' -printf "mkdir -p backup/%TY%Tm%Td\n" | sort | uniq | sh
find -type f -name '*' -printf "mv %h/%f backup/%TY%Tm%Td/%f\n" | sh

[root@desktopserver adump]# ls -ltr
total 4
drwxr-xr-x 31 root root 4096 Mar  2 21:49 backup
[root@desktopserver adump]# cd backup/
[root@desktopserver backup]# ls
20120311  20120420  20120427  20120521  20120603  20120613  20120928  20121204  20121210  20130124
20120410  20120421  20120429  20120525  20120604  20120920  20121018  20121205  20121212  20130127
20120419  20120424  20120504  20120529  20120605  20120927  20121019  20121206  20121213

}}}

or do this 
{{{
find . -name '*aud' | xargs rm
find . -name '*trc' | xargs rm
find . -name '*trm' | xargs rm
find . -name '*xml' | xargs rm
}}}
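The xargs pipelines above fail on filenames with whitespace; a null-delimited variant is safer, and GNU find can delete directly with no pipe at all. A sketch of both (GNU find/xargs assumed):

```shell
# null-delimited so whitespace in names is safe; -r skips rm when nothing matches
find . -name '*.aud' -print0 | xargs -0 -r rm -f

# or let GNU find do the deleting itself
find . -name '*.trc' -delete
```

Neither form hits the "Argument list too long" limit, since the filenames never go through the shell's command line.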


{{{
reports <-- source dir
csvfiles <-- target dir

mkdir csvfiles 

find reports -name '*.txt' | while read file; do
    cp "$file" "csvfiles/$(tr / _ <<< "$file")"
done
}}}

on tarfiles
{{{
mkdir tarfiles
mkdir tarfilesconsolidated

find tarfiles -name '*.tar' | while read file; do
    cp "$file" "tarfilesconsolidated/$(tr / _ <<< "$file")"
done

for i in *.tar; do tar xf $i; done
gunzip -vf *.gz
}}}


then consolidate the textfiles
{{{
mkdir awr_topevents
mv *awr_topevents-tableau-* awr_topevents
cat awr_topevents/*csv > awr_topevents.txt

mkdir awr_services
mv *awr_services-tableau-* awr_services
cat awr_services/*csv > awr_services.txt

mkdir awr_cpuwl
mv *awr_cpuwl-tableau-* awr_cpuwl
cat awr_cpuwl/*csv > awr_cpuwl.txt

mkdir awr_sysstat
mv *awr_sysstat-tableau-* awr_sysstat
cat awr_sysstat/*csv > awr_sysstat.txt

mkdir awr_topsqlx
mv *awr_topsqlx-tableau-* awr_topsqlx
cat awr_topsqlx/*csv > awr_topsqlx.txt

mkdir awr_iowl
mv *awr_iowl-tableau-* awr_iowl
cat awr_iowl/*csv > awr_iowl.txt

mkdir awr_storagesize_summary
mv *awr_storagesize_summary-tableau-* awr_storagesize_summary
cat awr_storagesize_summary/*csv > awr_storagesize_summary.txt

mkdir awr_storagesize_detail
mv *awr_storagesize_detail-tableau-* awr_storagesize_detail
cat awr_storagesize_detail/*csv > awr_storagesize_detail.txt
}}}
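The eight near-identical blocks above can be collapsed into one loop; a sketch assuming the same *awr_<type>-tableau-*csv naming, written as a function so it can be run from the directory holding the extracted files:

```shell
consolidate_awr() {
  for t in topevents services cpuwl sysstat topsqlx iowl \
           storagesize_summary storagesize_detail; do
    # skip report types that produced no files
    ls *awr_"$t"-tableau-* >/dev/null 2>&1 || continue
    mkdir -p "awr_$t"
    mv *awr_"$t"-tableau-* "awr_$t"/
    cat "awr_$t"/*csv > "awr_$t.txt"
  done
}
# usage: cd into the directory with the tableau csv files, then:  consolidate_awr
```

Adding a new report type then only means adding one word to the list.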


a shortcut can also be written like this
{{{
find . -name '*awr_sysstat*.txt' | while read file; do
    cat "$file" >> awr_sysstat.txt
done
}}}
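One caveat with the shortcut above: the output file itself matches the search pattern, so a rerun would feed awr_sysstat.txt back into itself. Excluding it explicitly keeps the command rerunnable; a sketch, GNU find assumed:

```shell
# exclude the output file so a rerun stays idempotent
find . -name '*awr_sysstat*.txt' ! -name 'awr_sysstat.txt' -exec cat {} + > awr_sysstat.txt
```

The single redirection also truncates the output file up front, instead of appending on every pass.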


http://stackoverflow.com/questions/14345714/recursively-moving-all-files-of-a-specific-type-into-a-target-directory-in-bash
http://www.usn-it.de/index.php/2007/03/09/how-to-move-or-add-a-controlfile-when-asm-is-involved/
How To Move SQL Profiles From One Database To Another Database (Doc ID 457531.1)
! how to run two versions of mozilla (need to create a new profile)
{{{
"C:\Program Files (x86)\MozillaFirefox4RC2\firefox.exe" -P "karlarao" -no-remote
}}}

! addons 

! run 2nd instance of firefox (older version)
https://support.mozilla.org/en-US/questions/974208
http://kb.mozillazine.org/Opening_a_new_instance_of_Firefox_with_another_profile
https://developer.mozilla.org/en-US/Firefox/Multiple_profiles


Name: MptwBlack
Background: #000
Foreground: #fff
PrimaryPale: #333
PrimaryLight: #555
PrimaryMid: #888
PrimaryDark: #aaa
SecondaryPale: #111
SecondaryLight: #222
SecondaryMid: #555
SecondaryDark: #888
TertiaryPale: #222
TertiaryLight: #666
TertiaryMid: #888
TertiaryDark: #aaa
Error: #300

This is in progress. Help appreciated.
Name: MptwBlue
Background: #fff
Foreground: #000
PrimaryPale: #cdf
PrimaryLight: #57c
PrimaryMid: #114
PrimaryDark: #012
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
/***
|Name:|MptwConfigPlugin|
|Description:|Miscellaneous tweaks used by MPTW|
|Version:|1.0 ($Rev: 3646 $)|
|Date:|$Date: 2008-02-27 02:34:38 +1000 (Wed, 27 Feb 2008) $|
|Source:|http://mptw.tiddlyspot.com/#MptwConfigPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#MptwConfigPlugin|
!!Note: instead of editing this you should put overrides in MptwUserConfigPlugin
***/
//{{{
var originalReadOnly = readOnly;
var originalShowBackstage = showBackstage;

config.options.chkHttpReadOnly = false; 		// means web visitors can experiment with your site by clicking edit
readOnly = false;								// needed because the above doesn't work any more post 2.1 (??)
showBackstage = true;							// show backstage for same reason

config.options.chkInsertTabs = true;    		// tab inserts a tab when editing a tiddler
config.views.wikified.defaultText = "";			// don't need message when a tiddler doesn't exist
config.views.editor.defaultText = "";			// don't need message when creating a new tiddler 

config.options.chkSaveBackups = true;			// do save backups
config.options.txtBackupFolder = 'twbackup';	// put backups in a backups folder

config.options.chkAutoSave = (window.location.protocol == "file:"); // do autosave if we're in local file

config.mptwVersion = "2.5.3";

config.macros.mptwVersion={handler:function(place){wikify(config.mptwVersion,place);}};

if (config.options.txtTheme == '')
	config.options.txtTheme = 'MptwTheme';

// add to default GettingStarted
config.shadowTiddlers.GettingStarted += "\n\nSee also [[MPTW]].";

// add select theme and palette controls in default OptionsPanel
config.shadowTiddlers.OptionsPanel = config.shadowTiddlers.OptionsPanel.replace(/(\n\-\-\-\-\nAlso see AdvancedOptions)/, "{{select{<<selectTheme>>\n<<selectPalette>>}}}$1");

// these are used by ViewTemplate
config.mptwDateFormat = 'DD/MM/YY';
config.mptwJournalFormat = 'Journal DD/MM/YY';

//}}}
Name: MptwGreen
Background: #fff
Foreground: #000
PrimaryPale: #9b9
PrimaryLight: #385
PrimaryMid: #031
PrimaryDark: #020
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
Name: MptwRed
Background: #fff
Foreground: #000
PrimaryPale: #eaa
PrimaryLight: #c55
PrimaryMid: #711
PrimaryDark: #500
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
|Name|MptwRounded|
|Description|Mptw Theme with some rounded corners (Firefox only)|
|ViewTemplate|MptwTheme##ViewTemplate|
|EditTemplate|MptwTheme##EditTemplate|
|PageTemplate|MptwTheme##PageTemplate|
|StyleSheet|##StyleSheet|

!StyleSheet
/*{{{*/

[[MptwTheme##StyleSheet]]

.tiddler,
.sliderPanel,
.button,
.tiddlyLink,
.tabContents
{ -moz-border-radius: 1em; }

.tab {
	-moz-border-radius-topleft: 0.5em;
	-moz-border-radius-topright: 0.5em;
}
#topMenu {
	-moz-border-radius-bottomleft: 2em;
	-moz-border-radius-bottomright: 2em;
}

/*}}}*/
Name: MptwSmoke
Background: #fff
Foreground: #000
PrimaryPale: #F5F5F5
PrimaryLight: #5C84A8
PrimaryMid: #111
PrimaryDark: #000
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
|Name|MptwStandard|
|Description|Mptw Theme with the default TiddlyWiki PageLayout and Styles|
|ViewTemplate|MptwTheme##ViewTemplate|
|EditTemplate|MptwTheme##EditTemplate|
Name: MptwTeal
Background: #fff
Foreground: #000
PrimaryPale: #B5D1DF
PrimaryLight: #618FA9
PrimaryMid: #1a3844
PrimaryDark: #000
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #f8f8f8
TertiaryLight: #bbb
TertiaryMid: #999
TertiaryDark: #888
Error: #f88
|Name|MptwTheme|
|Description|Mptw Theme including custom PageLayout|
|PageTemplate|##PageTemplate|
|ViewTemplate|##ViewTemplate|
|EditTemplate|##EditTemplate|
|StyleSheet|##StyleSheet|

http://mptw.tiddlyspot.com/#MptwTheme ($Rev: 1829 $)

!PageTemplate
<!--{{{-->
<div class='header' macro='gradient vert [[ColorPalette::PrimaryLight]] [[ColorPalette::PrimaryMid]]'>
	<div class='headerShadow'>
		<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
		<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
	</div>
	<div class='headerForeground'>
		<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
		<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
	</div>
</div>
<!-- horizontal MainMenu -->
<div id='topMenu' refresh='content' tiddler='MainMenu'></div>
<!-- original MainMenu menu -->
<!-- <div id='mainMenu' refresh='content' tiddler='MainMenu'></div> -->
<div id='sidebar'>
	<div id='sidebarOptions' refresh='content' tiddler='SideBarOptions'></div>
	<div id='sidebarTabs' refresh='content' force='true' tiddler='SideBarTabs'></div>
</div>
<div id='displayArea'>
	<div id='messageArea'></div>
	<div id='tiddlerDisplay'></div>
</div>
<!--}}}-->

!ViewTemplate
<!--{{{-->
[[MptwTheme##ViewTemplateToolbar]]

<div class="tagglyTagged" macro="tags"></div>

<div class='titleContainer'>
	<span class='title' macro='view title'></span>
	<span macro="miniTag"></span>
</div>

<div class='subtitle'>
	(updated <span macro='view modified date {{config.mptwDateFormat?config.mptwDateFormat:"MM/0DD/YY"}}'></span>
	by <span macro='view modifier link'></span>)
	<!--
	(<span macro='message views.wikified.createdPrompt'></span>
	<span macro='view created date {{config.mptwDateFormat?config.mptwDateFormat:"MM/0DD/YY"}}'></span>)
	-->
</div>

<div macro="showWhen tiddler.tags.containsAny(['css','html','pre','systemConfig']) && !tiddler.text.match('{{'+'{')">
	<div class='viewer'><pre macro='view text'></pre></div>
</div>
<div macro="else">
	<div class='viewer' macro='view text wikified'></div>
</div>

<div class="tagglyTagging" macro="tagglyTagging"></div>

<!--}}}-->

!ViewTemplateToolbar
<!--{{{-->
<div class='toolbar'>
	<span macro="showWhenTagged systemConfig">
		<span macro="toggleTag systemConfigDisable . '[[disable|systemConfigDisable]]'"></span>
	</span>
	<span macro="showWhenTagged systemTheme"><span macro="applyTheme"></span></span>
	<span macro="showWhenTagged systemPalette"><span macro="applyPalette"></span></span>
	<span macro="showWhen tiddler.tags.contains('css') || tiddler.title == 'StyleSheet'"><span macro="refreshAll"></span></span>
	<span style="padding:1em;"></span>
	<span macro='toolbar closeTiddler closeOthers +editTiddler deleteTiddler > fields syncing permalink references jump'></span> <span macro='newHere label:"new here"'></span>
	<span macro='newJournalHere {{config.mptwJournalFormat?config.mptwJournalFormat:"MM/0DD/YY"}}'></span>
</div>
<!--}}}-->

!EditTemplate
<!--{{{-->
<div class="toolbar" macro="toolbar +saveTiddler saveCloseTiddler closeOthers -cancelTiddler cancelCloseTiddler deleteTiddler"></div>
<div class="title" macro="view title"></div>
<div class="editLabel">Title</div><div class="editor" macro="edit title"></div>
<div macro='annotations'></div>
<div class="editLabel">Content</div><div class="editor" macro="edit text"></div>
<div class="editLabel">Tags</div><div class="editor" macro="edit tags"></div>
<div class="editorFooter"><span macro="message views.editor.tagPrompt"></span><span macro="tagChooser"></span></div>
<!--}}}-->

!StyleSheet
/*{{{*/

/* a contrasting background so I can see where one tiddler ends and the other begins */
body {
	background: [[ColorPalette::TertiaryLight]];
}

/* sexy colours and font for the header */
.headerForeground {
	color: [[ColorPalette::PrimaryPale]];
}
.headerShadow, .headerShadow a {
	color: [[ColorPalette::PrimaryMid]];
}

/* separate the top menu parts */
.headerForeground, .headerShadow {
	padding: 1em 1em 0;
}

.headerForeground, .headerShadow {
	font-family: 'Trebuchet MS', sans-serif;
	font-weight:bold;
}
.headerForeground .siteSubtitle {
	color: [[ColorPalette::PrimaryLight]];
}
.headerShadow .siteSubtitle {
	color: [[ColorPalette::PrimaryMid]];
}

/* make shadow go and down right instead of up and left */
.headerShadow {
	left: 1px;
	top: 1px;
}

/* prefer monospace for editing */
.editor textarea, .editor input {
	font-family: 'Consolas', monospace;
	background-color:[[ColorPalette::TertiaryPale]];
}


/* sexy tiddler titles */
.title {
	font-size: 250%;
	color: [[ColorPalette::PrimaryLight]];
	font-family: 'Trebuchet MS', sans-serif;
}

/* more subtle tiddler subtitle */
.subtitle {
	padding:0px;
	margin:0px;
	padding-left:1em;
	font-size: 90%;
	color: [[ColorPalette::TertiaryMid]];
}
.subtitle .tiddlyLink {
	color: [[ColorPalette::TertiaryMid]];
}

/* a little bit of extra whitespace */
.viewer {
	padding-bottom:3px;
}

/* don't want any background color for headings */
h1,h2,h3,h4,h5,h6 {
	background-color: transparent;
	color: [[ColorPalette::Foreground]];
}

/* give tiddlers 3d style border and explicit background */
.tiddler {
	background: [[ColorPalette::Background]];
	border-right: 2px [[ColorPalette::TertiaryMid]] solid;
	border-bottom: 2px [[ColorPalette::TertiaryMid]] solid;
	margin-bottom: 1em;
	padding:1em 2em 2em 1.5em;
}

/* make options slider look nicer */
#sidebarOptions .sliderPanel {
	border:solid 1px [[ColorPalette::PrimaryLight]];
}

/* the borders look wrong with the body background */
#sidebar .button {
	border-style: none;
}

/* this means you can put line breaks in SidebarOptions for readability */
#sidebarOptions br {
	display:none;
}
/* undo the above in OptionsPanel */
#sidebarOptions .sliderPanel br {
	display:inline;
}

/* horizontal main menu stuff */
#displayArea {
	margin: 1em 15.7em 0em 1em; /* use the freed up space */
}
#topMenu br {
	display: none;
}
#topMenu {
	background: [[ColorPalette::PrimaryMid]];
	color:[[ColorPalette::PrimaryPale]];
}
#topMenu {
	padding:2px;
}
#topMenu .button, #topMenu .tiddlyLink, #topMenu a {
	margin-left: 0.5em;
	margin-right: 0.5em;
	padding-left: 3px;
	padding-right: 3px;
	color: [[ColorPalette::PrimaryPale]];
	font-size: 115%;
}
#topMenu .button:hover, #topMenu .tiddlyLink:hover {
	background: [[ColorPalette::PrimaryDark]];
}

/* make 2.2 act like 2.1 with the invisible buttons */
.toolbar {
	visibility:hidden;
}
.selected .toolbar {
	visibility:visible;
}

/* experimental. this is a little borked in IE7 with the button 
 * borders but worth it I think for the extra screen realestate */
.toolbar { float:right; }

/* fix for TaggerPlugin. from sb56637. improved by FND */
.popup li .tagger a {
   display:inline;
}

/* makes theme selector look a little better */
#sidebarOptions .sliderPanel .select .button {
  padding:0.5em;
  display:block;
}
#sidebarOptions .sliderPanel .select br {
	display:none;
}

/* make it print a little cleaner */
@media print {
	#topMenu {
		display: none ! important;
	}
	/* not sure if we need all the importants */
	.tiddler {
		border-style: none ! important;
		margin:0px ! important;
		padding:0px ! important;
		padding-bottom:2em ! important;
	}
	.tagglyTagging .button, .tagglyTagging .hidebutton {
		display: none ! important;
	}
	.headerShadow {
		visibility: hidden ! important;
	}
	.tagglyTagged .quickopentag, .tagged .quickopentag {
		border-style: none ! important;
	}
	.quickopentag a.button, .miniTag {
		display: none ! important;
	}
}

/* get user styles specified in StyleSheet */
[[StyleSheet]]

/*}}}*/
|Name|MptwTrim|
|Description|Mptw Theme with a reduced header to increase useful space|
|ViewTemplate|MptwTheme##ViewTemplate|
|EditTemplate|MptwTheme##EditTemplate|
|StyleSheet|MptwTheme##StyleSheet|
|PageTemplate|##PageTemplate|

!PageTemplate
<!--{{{-->

<!-- horizontal MainMenu -->
<div id='topMenu' macro='gradient vert [[ColorPalette::PrimaryLight]] [[ColorPalette::PrimaryMid]]'>
<span refresh='content' tiddler='SiteTitle' style="padding-left:1em;font-weight:bold;"></span>:
<span refresh='content' tiddler='MainMenu'></span>
</div>
<div id='sidebar'>
	<div id='sidebarOptions'>
		<div refresh='content' tiddler='SideBarOptions'></div>
		<div style="margin-left:0.1em;"
			macro='slider chkTabSliderPanel SideBarTabs {{"tabs \u00bb"}} "Show Timeline, All, Tags, etc"'></div>
	</div>
</div>
<div id='displayArea'>
	<div id='messageArea'></div>
	<div id='tiddlerDisplay'></div>
</div>
For upgrading. See [[ImportTiddlers]].
URL: http://mptw.tiddlyspot.com/upgrade.html
/***
|Description:|A place to put your config tweaks so they aren't overwritten when you upgrade MPTW|
See http://www.tiddlywiki.org/wiki/Configuration_Options for other options you can set. In some cases where there are clashes with other plugins it might help to rename this to zzMptwUserConfigPlugin so it gets executed last.
***/
//{{{

// example: set your preferred date format
//config.mptwDateFormat = 'MM/0DD/YY';
//config.mptwJournalFormat = 'Journal MM/0DD/YY';

// example: set the theme you want to start with
//config.options.txtTheme = 'MptwRoundTheme';

// example: switch off autosave, switch on backups and set a backup folder
//config.options.chkSaveBackups = true;
//config.options.chkAutoSave = false;
//config.options.txtBackupFolder = 'backups';

// uncomment to disable 'new means new' functionality for the new journal macro
//config.newMeansNewForJournalsToo = false;

//}}}
''-- software download''
http://method-r.com/downloads

''-- changelog''
http://method-r.com/component/content/article/157

''product home page''
http://method-r.com/software/mrtools


''-- useful commands''
{{{
-- show which sqlid consumes the most R across all your trace files
mrskew *.trc --group='$sqlid'

-- show which files have the most R for the sqlid(s) that the first query identified as interesting.
mrskew *.trc --where='$sqlid eq "96g93hntrzjtr"' --group='$file'

-- show you whether there's skew in the individual execution times of EXEC calls but giving you Accounted For time
mrskew *.trc --where='$sqlid eq "4c8mrs99xp26b"' --group='"$file $line"' --name=EXEC

-- or all calls. It's possible that none of your executions bears any resemblance to the 218.10-second average response time per execution that AWR is reporting. It could be that one execution is responsible for almost all the response time, and the others are near zero. With mrskew, you'll know.
mrskew *.trc --where='$sqlid eq "4c8mrs99xp26b"' --group='"$file $line"'

-- you can count EXEC calls with mrskew using this. That's in case you just want to reconcile the average per execution with the AWR data; this is how you can determine your denominator.
mrskew *.trc --name=EXEC --where='$sqlid eq "4c8mrs99xp26b"'

-- use --select='$dur' and see the total response time attributable to your sqlid. This figure should match what AWR is telling you
mrskew *.trc --select='$dur' --where='$sqlid eq "4c8mrs99xp26b"'

-- with --select='$uaf', you'll be able to see how much of that response time for the given sqlid is unaccounted for by the trace data
mrskew *.trc --select='$uaf' --where='$sqlid eq "4c8mrs99xp26b"'

-- show you whether there's skew in the individual execution times of EXEC calls but giving you the total duration RT
mrskew *.trc --where='$sqlid eq "4c8mrs99xp26b"' --group='"$file $line"' --name=EXEC --select='$dur'

-- show you whether there's skew in the individual execution times of EXEC calls but giving you the total duration UAF
mrskew *.trc --where='$sqlid eq "4c8mrs99xp26b"' --group='"$file $line"' --name=EXEC --select='$uaf'

-- command below would be similar to the "Profile by Subroutine" of the Method R profiler
mrskew *.trc --select='$dur'

-- below shows the total UAF
mrskew *.trc --select='$uaf'

-- drill down on the SQL that has the most unaccounted for time 
mrskew *.trc --select='$dur' --group='$sqlid'
mrskew *.trc --select='$uaf' --group='$sqlid'

-- give the latency numbers of smart scan stats
mrskew --name='smart.*scan' --ebucket *trc

-- group by storage servers and will show the statistical distribution of calls, and time spent
mrskew --name='smart.*scan' --group='$p1' *trc

-- group by module and account
mrskew *.trc --where='$mod eq "xxx" and $act eq "yyy"' --group='"$file $line"' --name=EXEC --select='$dur'

-- you have to compare an integer with $tim. Therefore, you need to convert the human-readable time to a tim value and then use that as a comparison with $tim.
$ mrtim '2011-05-10 05:00:00.000'
1305021600000000
$ mrtim '2011-05-10 05:15:00.000'
1305022500000000
$ mrskew *.trc --group='$sqlid' --where='1305021600000000 <= $tim and $tim <= 1305022500000000'

--
mrskew ODEV11_ora_14370.trc --group='"$sqlid $name"' --where='$tim == 1305035383.707178'

mrskew *.trc --group='$sqlid' --where='(1305035000.000000 <= $tim) and ($tim <= 1305035900.000000)'

}}}
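If mrtim isn't at hand, GNU date can produce a comparable microsecond tim value; a sketch, assuming GNU coreutils. Note the time is interpreted in the current time zone, so set TZ to match the trace file's zone before comparing the result against $tim:

```shell
# convert 'YYYY-MM-DD HH:MM:SS' to microseconds since the epoch,
# interpreted in the current time zone (GNU date assumed)
to_tim() {
  echo $(( $(date -d "$1" +%s) * 1000000 ))
}
# e.g.  to_tim '2011-05-10 05:00:00'
```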

* mrskew v1 doesn't recognize $sqlid, but you could use $hv in v1 to get the same kind of result.
* DURATION column is in seconds
* mrskew reports total elapsed times for the group clause that you use. It's not dividing by anything. That's one of the principal design criteria for this skew analysis tool.
* accounted-for (by c and ela) time 
* unaccounted-for time (that is, the difference between $e and ($c + sum of $ela for the call's children)) ...  In other words, I think that roughly 2/3 of your response time for that statement is being consumed by processes (42 of them) that want CPU time but have been preempted and can't get it.
* The AWR habit of reporting averages (such as response time divided by execution count) actually hides important phenomena that mrskew can help you find


''other examples...''
''mrls''
http://method-r.com/component/content/article/124#examples
''mrtim''
http://method-r.com/component/content/article/162#examples
''mrskew''
http://method-r.com/component/content/article/126#examples
''mrcallrm''
http://method-r.com/component/content/article/164#examples
''mrtimfix''
http://method-r.com/component/content/article/163#examples



''package requirements''
{{{
Other than Linux x86, there are no requirements that I'm aware of. We don't distribute it as an rpm and I'm not aware of any requirements because of the way we compile the tools.

I can let you know which rpm's we have installed on our build machine but there's a good chance your rpm's are newer.

Here are the shared libraries required by the most recent release:

$ ldd mrls
libnsl.so.1 => /lib/libnsl.so.1 (0x0083d000)
libdl.so.2 => /lib/libdl.so.2 (0x005c7000)
libm.so.6 => /lib/tls/libm.so.6 (0x005cd000)
libcrypt.so.1 => /lib/libcrypt.so.1 (0x00422000)
libutil.so.1 => /lib/libutil.so.1 (0x00a07000)
libpthread.so.0 => /lib/tls/libpthread.so.0 (0x006fd000)
libc.so.6 => /lib/tls/libc.so.6 (0x00496000)
/lib/ld-linux.so.2 (0x0047c000)
$ ldd mrnl
libnsl.so.1 => /lib/libnsl.so.1 (0x0083d000)
libdl.so.2 => /lib/libdl.so.2 (0x005c7000)
libm.so.6 => /lib/tls/libm.so.6 (0x005cd000)
libcrypt.so.1 => /lib/libcrypt.so.1 (0x00422000)
libutil.so.1 => /lib/libutil.so.1 (0x00a07000)
libpthread.so.0 => /lib/tls/libpthread.so.0 (0x006fd000)
libc.so.6 => /lib/tls/libc.so.6 (0x00496000)
/lib/ld-linux.so.2 (0x0047c000)
$ ldd mrskew
libnsl.so.1 => /lib/libnsl.so.1 (0x0083d000)
libdl.so.2 => /lib/libdl.so.2 (0x005c7000)
libm.so.6 => /lib/tls/libm.so.6 (0x005cd000)
libcrypt.so.1 => /lib/libcrypt.so.1 (0x00422000)
libutil.so.1 => /lib/libutil.so.1 (0x00a07000)
libpthread.so.0 => /lib/tls/libpthread.so.0 (0x006fd000)
libc.so.6 => /lib/tls/libc.so.6 (0x00111000)
/lib/ld-linux.so.2 (0x0047c000)
}}}


http://kbase.redhat.com/faq/docs/DOC-7715
http://www.evernote.com/shard/s48/sh/374cdb18-97d3-421d-85b6-0be1d270cc77/fcde8c4f5ca369745cfd3d6de07379e9
<<<
* Xen kernel does not differentiate between multi-core, multi-processor or hyperthreading processors. Each "processor", regardless of type, is treated as a unique, single-core processor under Xen.

* The ''physical id'' value is a number assigned to each processor socket. The number of unique physical id values on a system tells you the number of CPU sockets that are in use. All logical processors (cores or hyperthreaded images) contained within the same physical processor will share the same physical id value.
* The ''siblings'' value tells you how many logical processors are provided by each physical processor.
* The ''core id'' values are numbers assigned to each physical processor core. Systems with hyperthreading will see duplications in this value as each hyperthreaded image is part of a physical core. Under Red Hat Enterprise Linux 5, these numbers are an index within a particular CPU socket so duplications will also occur in multi-socket systems. Under Red Hat Enterprise Linux 4, which uses APIC IDs to assign core id values, these numbers are not reused between sockets so any duplications seen will be due solely to hyperthreading.
* The ''cpu cores'' value tells you how many physical cores are provided by each physical processor.

''Indications of HT enabled:''
* If the siblings and cpu cores values match, the processors do not support hyperthreading (or hyperthreading is turned off in the BIOS).
* If siblings is twice the value of cpu cores, the processors support hyperthreading and it is in use by the system. 
* Duplication of the core id values is also indicative of hyperthreading.
* It is worth noting that the presence of the "ht" flag in the cpuflags section of /proc/cpuinfo does not necessarily indicate that a system has hyperthreading capabilities. That flag indicates that the processor is capable of reporting the number of siblings it has, not that it specifically has the hyperthreading feature.

<<<
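The checks above (distinct physical id values = socket count; siblings vs cpu cores = HT status) can be scripted; a sketch in POSIX sh, assuming the /proc/cpuinfo field names shown, with the file path as an optional argument so any saved cpuinfo can be inspected:

```shell
cpu_summary() {
  f=${1:-/proc/cpuinfo}
  # number of distinct "physical id" values = populated sockets
  sockets=$(awk -F: '/^physical id/ {print $2}' "$f" | sort -u | wc -l)
  # first "siblings" value = logical CPUs per socket
  siblings=$(awk -F: '/^siblings/ {gsub(/ /,"",$2); print $2; exit}' "$f")
  # first "cpu cores" value = physical cores per socket
  cores=$(awk -F: '/^cpu cores/ {gsub(/ /,"",$2); print $2; exit}' "$f")
  echo "sockets=$sockets siblings=$siblings cores=$cores"
  if [ "$siblings" = "$cores" ]; then
    echo "HT: off or unsupported"
  elif [ "$siblings" -eq $((cores * 2)) ]; then
    echo "HT: enabled"
  else
    echo "HT: indeterminate"
  fi
}
# usage:  cpu_summary            # reads /proc/cpuinfo
```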



http://kevinclosson.wordpress.com/2009/04/22/linux-thinks-its-a-cpu-but-what-is-it-really-mapping-xeon-5500-nehalem-processor-threads-to-linux-os-cpus/
''the script''
''for solaris use this'' https://blogs.oracle.com/sistare/entry/cpu_to_core_mapping
{{{
[root@desktopserver ~]# cat cpu

cat /proc/cpuinfo | grep -i "model name" | uniq
function filter(){
sed 's/^.*://g' | xargs echo
}
echo "processor                          " `grep processor /proc/cpuinfo | filter`
echo "physical id (processor socket)     " `grep 'physical id' /proc/cpuinfo | filter`
echo "siblings    (logical cores/socket) " `grep siblings /proc/cpuinfo | filter`
echo "core id                            " `grep 'core id' /proc/cpuinfo | filter`
echo "cpu cores   (physical cores/socket)" `grep 'cpu cores' /proc/cpuinfo | filter`

[root@desktopserver ~]# ./cpu
model name      : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
processor                           0 1 2 3 4 5 6 7
physical id (processor socket)      0 0 0 0 0 0 0 0
siblings    (logical cores/socket)  8 8 8 8 8 8 8 8
core id                             0 1 2 3 0 1 2 3
cpu cores   (physical cores/socket) 4 4 4 4 4 4 4 4

               ----------------- ----------------- ----------------- -----------------
Socket0 OScpu#| 0              4| 1              5| 2              6| 3              7|
        Core  |S0_c0_t0 S0_c0_t1|S0_c1_t0 S0_c1_t1|S0_c2_t0 S0_c2_t1|S0_c3_t0 S0_c3_t1|
               ----------------- ----------------- ----------------- -----------------
}}}

{{{
Intel Nehalem E5540 2s8c16t

[enkdb01:root] /home/oracle/dba/benchmark/cpu_topology
> sh cpu_topology
model name      : Intel(R) Xeon(R) CPU           E5540  @ 2.53GHz
processors  (OS CPU count)          0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
physical id (processor socket)      0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
siblings    (logical CPUs/socket)   8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8
core id     (# assigned to a core)  0 1 2 3 0 1 2 3 0 1 2 3 0 1 2 3
cpu cores   (physical cores/socket) 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4

               ----------------- ----------------- ----------------- -----------------
Socket0 OScpu#| 0              8| 1              9| 2             10| 3             11|
        Core  |S0_c0_t0 S0_c0_t1|S0_c1_t0 S0_c1_t1|S0_c2_t0 S0_c2_t1|S0_c3_t0 S0_c3_t1|
               ----------------- ----------------- ----------------- -----------------
Socket1 OScpu#| 4             12| 5             13| 6             14| 7             15|
        Core  |S1_c0_t0 S1_c0_t1|S1_c1_t0 S1_c1_t1|S1_c2_t0 S1_c2_t1|S1_c3_t0 S1_c3_t1|
               ----------------- ----------------- ----------------- -----------------

[enkdb01:root] /home/oracle/dba/benchmark/cpu_topology
> ./turbostat
pkg core CPU   %c0   GHz  TSC   %c1    %c3    %c6   %pc3   %pc6
              13.36 2.30 2.53  21.21  16.29  49.15   0.00   0.00
   0   0   0  40.71 1.63 2.53  51.78   7.52   0.00   0.00   0.00
   0   0   8  14.97 1.63 2.53  77.51   7.52   0.00   0.00   0.00
   0   1   1   7.47 1.62 2.53  16.16  13.55  62.81   0.00   0.00
   0   1   9   8.10 1.75 2.53  15.53  13.55  62.81   0.00   0.00
   0   2   2   7.30 1.62 2.53  15.34  10.80  66.56   0.00   0.00
   0   2  10   7.35 1.88 2.53  15.29  10.80  66.56   0.00   0.00
   0   3   3   2.28 1.65 2.53   5.73  10.53  81.46   0.00   0.00
   0   3  11   3.91 1.92 2.53   4.10  10.53  81.46   0.00   0.00
   1   0   4  99.79 2.79 2.53   0.21   0.00   0.00   0.00   0.00
   1   0  12   3.07 2.77 2.53  96.93   0.00   0.00   0.00   0.00
   1   1   5   5.31 2.75 2.53   8.19  24.35  62.14   0.00   0.00
   1   1  13   3.67 2.75 2.53   9.83  24.35  62.14   0.00   0.00
   1   2   6   1.92 2.73 2.53   4.65  40.08  53.35   0.00   0.00
   1   2  14   2.14 2.73 2.53   4.43  40.08  53.35   0.00   0.00
   1   3   7   2.97 2.74 2.53   6.72  23.45  66.85   0.00   0.00
   1   3  15   2.78 2.74 2.53   6.91  23.45  66.85   0.00   0.00

Linux OS CPU	Package Locale
0	            S0_c0_t0
1	            S0_c1_t0
2	            S0_c2_t0
3	            S0_c3_t0
4	            S1_c0_t0
5	            S1_c1_t0
6	            S1_c2_t0
7	            S1_c3_t0
8	            S0_c0_t1
9	            S0_c1_t1
10	            S0_c2_t1
11	            S0_c3_t1
12	            S1_c0_t1
13	            S1_c1_t1
14	            S1_c2_t1
15	            S1_c3_t1
}}}
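The Linux-OS-CPU-to-package-locale table above can also be derived from sysfs instead of /proc/cpuinfo. A hedged sketch follows; the `S<socket>_c<core>` naming mirrors the table (thread position within the core is omitted), and `SYSCPU` is an overridable assumption so the logic can be run against a fake tree:

```shell
#!/bin/sh
# Sketch: print each OS CPU's package locale (S<socket>_c<core>) from the
# Linux sysfs topology files. SYSCPU defaults to the real sysfs tree.
SYSCPU=${SYSCPU:-/sys/devices/system/cpu}

print_locales() {
  for d in "$SYSCPU"/cpu[0-9]*; do
    # skip CPUs (or platforms) without topology files
    [ -r "$d/topology/physical_package_id" ] || continue
    cpu=${d##*/cpu}
    pkg=$(cat "$d/topology/physical_package_id")
    core=$(cat "$d/topology/core_id")
    echo "$cpu S${pkg}_c${core}"
  done
}

print_locales
```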

! ''output on exadata v2 - db node & storage cell''
Intel® Xeon® Processor E5540 (8M Cache, 2.53 GHz, 5.86 GT/s Intel® QPI)
http://ark.intel.com/Product.aspx?id=37104
{{{
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8
0 1 2 3 0 1 2 3 0 1 2 3 0 1 2 3
4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4

processor       : 15
vendor_id       : GenuineIntel
cpu family      : 6
model           : 26
model name      : Intel(R) Xeon(R) CPU           E5540  @ 2.53GHz
stepping        : 5
cpu MHz         : 1600.000
cache size      : 8192 KB
physical id     : 1
siblings        : 8
core id         : 3
cpu cores       : 4
apicid          : 23
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx rdtscp lm constant_tsc ida nonstop_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm
bogomips        : 5054.02
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: [8]
}}}

! ''output on exadata x2 - db node''
Intel® Xeon® Processor X5670 (12M Cache, 2.93 GHz, 6.40 GT/s Intel® QPI)
http://ark.intel.com/Product.aspx?id=47920
{{{
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1
12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12
0 1 2 8 9 10 0 1 2 8 9 10 0 1 2 8 9 10 0 1 2 8 9 10
6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6

processor       : 23
vendor_id       : GenuineIntel
cpu family      : 6
model           : 44
model name      : Intel(R) Xeon(R) CPU           X5670  @ 2.93GHz
stepping        : 2
cpu MHz         : 2926.096
cache size      : 12288 KB
physical id     : 1
siblings        : 12
core id         : 10
cpu cores       : 6
apicid          : 53
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx pdpe1gb rdtscp lm constant_tsc ida nonstop_tsc arat pni monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm
bogomips        : 5852.00
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: [8]
}}}

! ''output on exadata x2 - storage cell''
Intel® Xeon® Processor L5640 (12M Cache, 2.26 GHz, 5.86 GT/s Intel® QPI)
http://ark.intel.com/Product.aspx?id=47926
{{{
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1
12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12
0 1 2 8 9 10 0 1 2 8 9 10 0 1 2 8 9 10 0 1 2 8 9 10
6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6

processor       : 23
vendor_id       : GenuineIntel
cpu family      : 6
model           : 44
model name      : Intel(R) Xeon(R) CPU           L5640  @ 2.27GHz
stepping        : 2
cpu MHz         : 2261.060
cache size      : 12288 KB
physical id     : 1
siblings        : 12
core id         : 10
cpu cores       : 6
apicid          : 53
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx pdpe1gb rdtscp lm constant_tsc ida nonstop_tsc arat pni monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm
bogomips        : 4522.01
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: [8]
}}}

! ''x3-8 and x2-8''
{{{
8s80c160t (8 sockets, 80 cores, 160 threads)
$ sh cpu_topology
model name      : Intel(R) Xeon(R) CPU E7- 8870  @ 2.40GHz
processor                           0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159
physical id (processor socket)      0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 4 4 4 4 4 4 4 4 4 4 5 5 5 5 5 5 5 5 5 5 6 6 6 6 6 6 6 6 6 6 7 7 7 7 7 7 7 7 7 7 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 4 4 4 4 4 4 4 4 4 4 5 5 5 5 5 5 5 5 5 5 6 6 6 6 6 6 6 6 6 6 7 7 7 7 7 7 7 7 7 7
siblings    (logical cores/socket)  20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
core id                             0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25
cpu cores   (physical cores/socket) 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
}}}

! ''output on ODA''
Intel® Xeon® Processor X5675 (12M Cache, 3.06 GHz, 6.40 GT/s Intel® QPI)
http://ark.intel.com/products/52577/Intel-Xeon-Processor-X5675-(12M-Cache-3_06-GHz-6_40-GTs-Intel-QPI)
{{{
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1
12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12
0 1 2 8 9 10 0 1 2 8 9 10 0 1 2 8 9 10 0 1 2 8 9 10
6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6

processor       : 23
vendor_id       : GenuineIntel
cpu family      : 6
model           : 44
model name      : Intel(R) Xeon(R) CPU           X5675  @ 3.07GHz
stepping        : 2
cpu MHz         : 3059.102
cache size      : 12288 KB
physical id     : 1
siblings        : 12
core id         : 10
cpu cores       : 6
apicid          : 53
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx pdpe1gb rdtscp lm constant_tsc ida nonstop_tsc arat pni monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm
bogomips        : 6118.00
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: [8]
}}}



''OraPub - core vs. thread CPU utilization''
http://shallahamer-orapub.blogspot.com/2011/04/core-vs-threadcpu-utilization-part-1.html
http://shallahamer-orapub.blogspot.com/2011/05/cores-vs-threads-util-differencespart-2.html
http://shallahamer-orapub.blogspot.com/2011/05/cores-vs-threads-util-differencepart2b.html
http://content.dell.com/us/en/enterprise/d/large-business/thread-cores-which-you-need.aspx, http://itexpertvoice.com/home/threads-or-cores-which-do-you-need/
https://plus.google.com/117773751083866603675/posts/HrEbMPTeVxp <-- greg rahn threads vs cores
http://openlab.web.cern.ch/sites/openlab.web.cern.ch/files/technical_documents/Evaluation_of_the_4_socket_Intel_Sandy_Bridge-EP_server_processor.pdf
CPU count consideration for Oracle Parameter setting when using Hyper-Threading Technology [ID 289870.1]


How Memory Allocation Affects Performance in Multithreaded Programs
by Rickey C. Weisner, March 2012
http://www.oracle.com/technetwork/articles/servers-storage-dev/mem-alloc-1557798.html

Application Scaling on CMT and Multicore Systems http://developers.sun.com/solaris/articles/scale_cmt.html
Tutorial: DTrace by Example http://developers.sun.com/solaris/articles/dtrace_tutorial.html
facebook https://code.launchpad.net/mysqlatfacebook
profiler http://poormansprofiler.org/


data warehouse in mysql http://mysql.rjweb.org/doc.php/datawarehouse
https://dba.stackexchange.com/questions/75550/is-data-warehousing-possible-in-mysql-and-postgressql
https://blog.panoply.io/mysql-as-a-data-warehouse-is-it-really-your-best-option
https://www.zdnet.com/article/oracle-takes-a-new-twist-on-mysql-adding-data-warehousing-to-the-cloud-service/
https://cloudwars.co/oracle/oracle-unleashes-heatwave-mysql-thumps-amazon-redshift-aurora/




mysql heatwave whitepaper https://www.oracle.com/a/ocom/docs/mysql-database-service-technical-paper.pdf
https://github.com/oracle/heatwave-tpch
Getting Started to MySQL HeatWave for Analytics https://www.youtube.com/watch?v=Xk6ZeO-tHz8
https://juliandontcheff.wordpress.com/2021/06/07/heatwave-mysql-db-systems-in-oci/
https://medium.com/oracledevs/connect-tableau-to-oracle-mysql-database-service-powered-by-heatwave-5d18bb4a1b5c
https://gitlab.oracle.k8scloud.site/devops_admin/mysql-heatwave-workshop




https://druid.apache.org/docs/latest/ingestion/schema-design.html
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/adding-druid/content/druid_ingest.html
https://metatron.app/2019/05/24/what-to-keep-in-mind-when-ingesting-a-data-source-to-druid-from-metatron-discovery/
https://dzone.com/articles/ultra-fast-olap-analytics-with-apache-hive-and-dru
hive druid integration https://gist.github.com/rajkrrsingh/f01475f4bfa4a33240134561171f378f
https://stackoverflow.com/questions/58693625/need-to-load-data-from-hadoop-to-druid-after-applying-transformations-if-i-use
https://stackoverflow.com/questions/51106037/upload-data-to-druid-incrementally
https://druid.apache.org/docs/latest/querying/sql.html
https://druid.apache.org/docs/latest/querying/joins.html



BI ENGINE
https://cloud.google.com/bi-engine/docs/optimized-sql#unsupported-features



..
<<showtoc>> 

! merge 
http://blog.mclaughlinsoftware.com/2009/05/25/mysql-merge-gone-awry/
http://www.xaprb.com/blog/2006/06/17/3-ways-to-write-upsert-and-merge-queries-in-mysql/
http://www.mysqlperformanceblog.com/2012/09/18/the-math-of-automated-failover/
http://techblog.netflix.com/2011/04/lessons-netflix-learned-from-aws-outage.html
-- CLUSTERING
http://blogs.oracle.com/mysql/2011/01/managing_database_clusters_-_a_whole_lot_simpler.html

-- PERFORMANCE
https://blogs.oracle.com/MySQL/entry/mysql_cluster_performance_best_practices
High Performance MySQL
http://oreilly.com/catalog/9780596003067

http://mysql-dba-journey.blogspot.com/search/label/MySQL%20for%20Oracle%20DBAs
http://www.pythian.com/news/13369/notes-on-learning-mysql-as-an-oracle-dba/
http://ronaldbradford.com/mysql-oracle-dba/
http://www.ardentperf.com/2010/09/08/mysterious-oracle-net-errors/
connor's presentation on statistics
http://www.evernote.com/shard/s48/sh/dde62582-24a5-42dd-b401-7352f5caff87/38efb9575a9f6cfe8457ad20308bb3c8

Introduced in 11g
http://dioncho.wordpress.com/2010/08/16/batching-nlj-optimization-and-ordering/
http://jeffreylui.wordpress.com/2011/02/21/thoughts-on-nlj_batching/
https://learning.oreilly.com/search/?query=natural%20language%20processing&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_playlists=true&include_collections=true&include_notebooks=true&is_academic_institution_account=false&source=user&sort=relevance&facet_json=true&page=0&include_facets=false&include_scenarios=true&include_sandboxes=true&json_facets=true

* Applied Natural Language Processing with Python : Implementing Machine Learning and Deep Learning Algorithms for Natural Language Processing https://learning.oreilly.com/library/view/applied-natural-language/9781484237335/
* Practical Natural Language Processing https://learning.oreilly.com/library/view/practical-natural-language/9781492054047/
* Natural Language Processing with Spark NLP https://learning.oreilly.com/library/view/natural-language-processing/9781492047759/
* Chapter 19. Productionizing NLP Applications https://learning.oreilly.com/library/view/natural-language-processing/9781492047759/ch19.html#productionizing_nlp_applications


https://towardsdatascience.com/deploying-a-machine-learning-model-as-a-rest-api-4a03b865c166
* Oracle Coherence
Sizing Oracle Coherence Applications
http://soainfrastructure.blogspot.com/2010/08/sizing-oracle-coherence-applications.html

* EBusiness Suite 
A Primer on Hardware Sizing for Oracle E-Business Suite
http://blogs.oracle.com/stevenChan/2010/08/ebs_sizing_primer.html
http://www.oracle.com/apps_benchmark/html/white-papers-e-business.html
http://blogs.oracle.com/stevenChan/2010/02/oracle_e-business_suite_platform_smorgasbord.html
http://blogs.oracle.com/stevenChan/2010/04/ebs_1211_tsk.html
http://blogs.oracle.com/stevenChan/2009/11/ebs_tuning_oow09.html
http://blogs.oracle.com/stevenChan/2008/10/case_study_redux_oracles_own_ebs12_upgrade.html
http://blogs.oracle.com/stevenChan/2007/11/analyzing_memory_vs_performanc.html


* From Martin Widlake's
http://mwidlake.wordpress.com/2010/11/05/how-big-is-a-person/
http://mwidlake.wordpress.com/2010/11/11/database-sizing-%E2%80%93-how-much-disk-do-i-need-the-easy-way/
http://mwidlake.wordpress.com/2009/09/27/big-discs-are-bad/
http://www.pythian.com/news/170/750g-disks-are-bahd-for-dbs-a-call-to-arms/
Workload Management for Operational Data Warehousing
http://blogs.oracle.com/datawarehousing/2010/09/workload_management_for_operat.html

Workload Management – Statement Queuing
http://blogs.oracle.com/datawarehousing/2010/09/workload_management_statement.html

Workload Management – A Simple (but real) Example
http://blogs.oracle.com/datawarehousing/2010/10/workload_management_a_simple_b.html

A fair bite of the CPU pie? Monitoring & Testing Oracle Resource Manager
http://rnm1978.wordpress.com/2010/09/10/a-fair-bite-of-the-cpu-pie-monitoring-testing-oracle-resource-manager/

Performance Tips
http://blogs.oracle.com/rtd/2010/11/performance_tips.html


Database Instance Caging: A Simple Approach to Server Consolidation http://www.oracle.com/technetwork/database/focus-areas/performance/instance-caging-wp-166854.pdf
Workload Management for Operational Data Warehousing http://blogs.oracle.com/datawarehousing/entry/workload_management_for_operat
Workload Management – Statement Queuing http://blogs.oracle.com/datawarehousing/entry/workload_management_statement
Workload Management – A Simple (but real) Example http://blogs.oracle.com/datawarehousing/entry/workload_management_a_simple_b
A fair bite of the CPU pie? Monitoring & Testing Oracle Resource Manager http://rnm1978.wordpress.com/2010/09/10/a-fair-bite-of-the-cpu-pie-monitoring-testing-oracle-resource-manager/
Parallel Execution and workload management for an Operational DW environment http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/twp-bidw-parallel-execution-130766.pdf
http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/index.html


-- Exadata specific
http://www.linkedin.com/groupItem?view=&srchtype=discussedNews&gid=918317&item=63941267&type=member&trk=eml-anet_dig-b_pd-ttl-cn&ut=0pKCK5WPN524Y1 <-- kerry explains how we do it
Oracle Exadata Database Machine Consolidation: Segregating Databases and Roles http://www.oracle.com/technetwork/database/focus-areas/availability/maa-exadata-consolidated-roles-459605.pdf
Boris - Capacity Management for Oracle Database Machine Exadata v2 https://docs.google.com/viewer?url=http://www.nocoug.org/download/2010-05/DB_Machine_5_17_2010.pdf&pli=1
Performance Stories from Exadata Migrations http://www.slideshare.net/tanelp/tanel-poder-performance-stories-from-exadata-migrations
http://www.brennan.id.au/04-Network_Configuration.html
http://www.redhat.com/magazine/010aug05/departments/tips_tricks/
http://gigaom.com/2013/06/15/how-to-prevent-the-nsa-from-reading-your-email/
https://communities.intel.com/community/itpeernetwork/datastack/blog/2015/03/19/nvm-express-technology-goes-viral-from-data-center-to-client-to-fabrics
''2015'' Under the Hood: Unlocking SSD Performance with NVM Express (NVMe) Technology https://www.youtube.com/watch?v=I7Cic0Rb7D0
''2013'' Under the Hood: Data Center Storage - PCI Express SSDs with NVM Express (NVMe) https://www.youtube.com/watch?v=ACyTonhxXd8
Intel SSD DC P3700 800GB Review - Ludicrous Speed for the Masses! https://www.youtube.com/watch?v=NL_jzPCrdog

http://www.nvmexpress.org/index.php/download_file/view/18/1/
http://en.wikipedia.org/wiki/Nagios
http://www.rittmanmead.com/2012/09/advanced-monitoring-of-obiee-with-nagios/

! net services best practices 
Batch Processing in Disaster Recovery Configurations  http://www.hitachi.co.jp/Prod/comp/soft1/oracle/pdf/OBtecinfo-08-008.pdf
Oracle Net Services - Best Practices for Database Performance and Scalability https://www.doag.org/formes/pubfiles/2261823/169-2010-K-DB-Mensah-Net_Services.pdf
https://www.doag.org/formes/pubfiles/2261824/169-2010-K-DB-Mensah-Net_Services_Best-PRAESENTATION.pdf


! terms 
<<<
* OS tuning - network kernel params and settings - TCP Buffer Sizes
* jumbo frames (for GigE networks) - setting mtu (https://www.youtube.com/watch?v=bCWPdYKPnO4&t=8s)
* buffer size
* sdu (the SDU value should be a multiple of the MTU) - The relation between MTU (Maximum Transmission Unit) and SDU (Session Data Unit) (Doc ID 274483.1)
* client load balance and failover 
* server load balance advisory
* bdp - bandwidth-delay product; Oracle recommends setting RECV_BUF_SIZE and SEND_BUF_SIZE to three times the BDP value in order to fully use the network bandwidth over the TCP protocol

also check [[DataGuardNetworkBandwidth]]
<<<
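The SDU and buffer-size terms above end up as concrete Oracle Net parameters. An illustrative sketch of where they go is below; the values, host name, and service name are examples only, not recommendations (8192 follows the "multiple of the MTU" note, 37500 follows the 3 x BDP rule of thumb for a measured link):

```
# sqlnet.ora (set on both client and server) -- illustrative values only
DEFAULT_SDU_SIZE=8192        # keep it a multiple of the MTU (Doc ID 274483.1)
RECV_BUF_SIZE=37500          # ~3 x measured bandwidth-delay product
SEND_BUF_SIZE=37500

# tnsnames.ora -- SDU can also be set per connect descriptor
DB1 = (DESCRIPTION=
        (SDU=8192)
        (ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521))
        (CONNECT_DATA=(SERVICE_NAME=db1)))
```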



! references 
How to Set SEND_BUF_SIZE And RECV_BUF_SIZE On Thin JDBC Clients (Doc ID 2037434.1)
Network Tuning Best Practices for Oracle Streams Propagation (Doc ID 1377929.1)
Oracle® Database Net Services Administrator's Guide 11g Release 2 (11.2) https://docs.oracle.com/cd/E18283_01/network.112/e10836/performance.htm
CHAPTER 13 - Migrating to Exadata https://learning.oreilly.com/library/view/expert-oracle-exadata/9781430262428/9781430262411_Ch13.xhtml
How to Change MTU Size in Exadata Environment (Doc ID 1586212.1)
Do Changes To MTU Settings On Exadata, Necessitate Similar Changes On Exalytics? (Doc ID 1928262.1)
Oracle Exadata Database Machine Performance Best Practices (Doc ID 1274475.1)
Recommendation for the Real Application Cluster Interconnect and Jumbo Frames (Doc ID 341788.1)
How to test and verify network support of Jumbo Frames (Doc ID 2423930.1)
Setting SEND_BUF_SIZE and RECV_BUF_SIZE of Agent Managed Listeners on Oracle RAC or Grid Infrastructure Standalone Server (Doc ID 2048018.1)
How to improve performance of impdp over a network_link https://www.oracle.com/webfolder/community/oracle_database/3616011.html
Required MTU Size for an Exadata Machine https://www.oracle.com/webfolder/community/engineered_systems/3685442.html
OraRac11g Enable Jumbo Frames demo https://www.youtube.com/watch?v=bCWPdYKPnO4&t=8s
How to Determine SDU Value Being Negotiated Between Client and Server (Doc ID 304235.1)







.



.
UDP Versus TCP/IP: An Overview
  	Doc ID: 	Note:1080335.6

How to Configure Linux OS Ethernet TCP/IP Networking
  	Doc ID: 	132044.1





ORA-12154 While Attempting to Connect to New Database Via SQL*Net
 	Doc ID:	Note:464505.1
 	
TROUBLESHOOTING GUIDE: TNS-12154 TNS:could not resolve service name
 	Doc ID:	Note:114085.1
 	
OERR: ORA 12154 "TNS:could not resolve service name"
 	Doc ID:	Note:21321.1
 	


-- TROUBLESHOOTING

Network Products and Error Stack Components
  	Doc ID: 	39662.1


-- TNSPING
Comparison of Oracle's tnsping to TCP/IP's ping [ID 146264.1]


-- FIREWALL

Oracle Connections and Firewalls (Doc ID 125021.1)
SQL*NET PACKET STRUCTURE: NS PACKET HEADER (Doc ID 1007807.6)
Resolving Problems with Connection Idle Timeout With Firewall (Doc ID 257650.1)



-- NETWORK PERFORMANCE

Oracle Net Performance Tuning (Doc ID 67983.1)

Troubleshooting 9i Data Guard Network Issues
  	Doc ID: 	Note:241925.1

Oracle Net Performance Tuning
  	Doc ID: 	Note:67983.1

How can I automatically detect slow connections?
  	Doc ID: 	Note:305299.1

Network Performance Troubleshooting - SQL*NET And CORE/MFG
  	Doc ID: 	Note:101007.1 	

Bandwith Per User Session For Oracle Form Base Web Deployment In Oracle9ias
  	Doc ID: 	Note:287237.1

How to Find Out How Much Network Traffic is Created by Web Deployed Forms?
  	Doc ID: 	Note:109597.1

Few Basic Techniques to Improve Performance of Forms.
  	Doc ID: 	Note:221529.1

Troubleshooting Web Deployed Oracle Forms Performance Issues
  	Doc ID: 	Note:363285.1

High ARCH wait on SENDREQ wait events found in statspack report.
  	Doc ID: 	Note:418709.1

Refining Remote Archival Over a Slow Network with the ARCH Process
  	Doc ID: 	Note:260040.1

Poor Performance When Using CLOBS and Oracle Net
  	Doc ID: 	398380.1




-- ARRAYSIZE

SET LONG, ARRAYSIZE, AND MAXDATA SYSTEM VARIABLES to display LONG columns
  	Doc ID: 	2062061.6

Relationship of Longs/Arraysize/LongChunk when using Oracle Reports?
  	Doc ID: 	10747.1




-- SDU, MTU

The relation between MTU (Maximum Transmission Unit), SDU (Session Data Unit) and TDU (Transmission Data Unit)
  	Doc ID: 	274483.1
    1) Note 67983.1 "Oracle Net Performance Tuning"
    2) Note 125021.1 "SQL*Net Packet Sizes (SDU & TDU Parameters)" 

Bug 1113588 - New SQLNET.ORA parameter DEFAULT_SDU_SIZE
  	Doc ID: 	1113588.8

Net8 Assistant places SDU parameter incorrectly
  	Doc ID: 	Note:99220.1

Recommendation for the Real Application Cluster Interconnect and Jumbo Frames
  	Doc ID: 	341788.1

Asm Does Not Start After Relinking With RDS/Infiniband
  	Doc ID: 	741720.1

304235.1	How to configure and verify that SDU Setting Are Being Read

76412.1	Network Performance Considerations in Designing Client/Server Applications

99715.1	When to modify, when not to modify the Session data unit (SDU)

160738.1	 How To Configure the Size of TCP/IP Packets
How to set MTU (Maximum Transmission Unit) size for interfaces (network interfaces). (Doc ID 1017799.1)
How to configure Jumbo Frames on 10-Gigabit Ethernet (Doc ID 1002594.1)






-- BANDWIDTH DELAY PRODUCT

    http://forums.oracle.com/forums/thread.jspa?threadID=629524
	  Please find below some info on how to calculate the BDP, hope this would help.
	  Note:
	  TCP/IP buffers data into send and receive buffers while sending and receiving to or from lower and upper layer protocols. The sizes of these buffers affect network performance, as they influence flow control decisions.
	  The RECV_BUF_SIZE and SEND_BUF_SIZE parameters specify the sizes of the socket receive and send buffers associated with Oracle Net connections.
	  Note that some operating systems have parameters that set the maximum size for all send and receive socket buffers. You must ensure that these values have been adjusted to allow Oracle Net to use a larger socket buffer size.
	  Oracle recommends setting RECV_BUF_SIZE and SEND_BUF_SIZE to three times the BDP value (bandwidth-delay product) in order to fully use network bandwidth over the TCP protocol.
	  Example calculation of RECV_BUF_SIZE and SEND_BUF_SIZE:
	  Bandwidth = 10 Mbps = 10,000,000 bits/s
	  Assume RTT = 10 ms = 0.01 s (RTT obtained by pinging the server; the worst RTT value was used here)
	  BDP = 10,000,000 bits/s * 0.01 s = 100,000 bits
	  BDP = 100,000 / 8 = 12,500 bytes
	  The optimal send and receive socket buffer sizes are calculated as follows:
	  Socket buffer size (RECV_BUF_SIZE and SEND_BUF_SIZE) = 3 * BDP = 3 * 12,500 = 37,500 bytes
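The BDP arithmetic above can be wrapped in a small shell sketch. The bandwidth and RTT values are assumptions you would measure yourself (RTT via ping to the remote server); the 3 x BDP multiplier follows the recommendation quoted above:

```shell
#!/bin/sh
# Hypothetical helper: compute Oracle Net socket buffer sizes from the
# bandwidth-delay product (BDP).
BANDWIDTH_BPS=10000000   # 10 Mbps link (bits/s) -- example value
RTT_MS=10                # worst-case round-trip time from ping -- example value

# BDP in bytes = bandwidth (bits/s) * RTT (s) / 8 bits-per-byte
BDP_BYTES=$(( BANDWIDTH_BPS * RTT_MS / 1000 / 8 ))

# recommendation above: socket buffers = 3 * BDP
BUF_BYTES=$(( 3 * BDP_BYTES ))

echo "BDP           = ${BDP_BYTES} bytes"
echo "RECV_BUF_SIZE = ${BUF_BYTES}"
echo "SEND_BUF_SIZE = ${BUF_BYTES}"
```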




-- BUFFER OVERFLOW

BUFFER OVERFLOW ERROR WHEN RUNNING QUERY
  	Doc ID: 	1020381.6

SQL*Plus: 'BUFFER OVERFLOW' Explained
  	Doc ID: 	2171.1



-- TIMEOUT

VMS: How to Lower Connect Retry Limit and/or Connect Timeout in SQL*Net
  	Doc ID: 	1077706.6





-- LISTENER

TNS Listener Crashes Intermittantly with No Error Message 
  Doc ID:  237887.1 

Dynamic Registration and TNS_ADMIN 
  Doc ID:  181129.1 

How to Diagnose Slow TNS Listener / Connection Performance 
  Doc ID:  557416.1 

Connections To 11g TNS Listener are Slow. 
  Doc ID:  561429.1 







-- SERVICE 

Issues Affecting Automatic Service Registration
  	Doc ID: 	235562.1





-- PPP

Point-to-Point Protocol Internals
  	Doc ID: 	47936.1





-- EMAIL

Oracle Email Basics
  	Doc ID: 	Note:217140.1



-- DEBUG

Troubleshooting Oracle Net 
  Doc ID:  779226.1 

Note 69642.1 - UNIX: Checklist for Resolving Connect AS SYSDBA Issues

How to Perform a SQL*Net Loopback on Unix
  	Doc ID: 	1004599.6

Finding the source of failed login attempts. 
  Doc ID:  352389.1 

Taking Systemstate Dumps when You cannot Connect to Oracle 
  Doc ID:  121779.1 

How To Track Dead Connection Detection(DCD) Mechanism Without Enabling Any Client/Server Network Tracing 
  Doc ID:  438923.1 





-- ADVANCED NETWORKING OPTION

Setup and Testing Advanced Networking Option 
  Doc ID:  1068871.6 

Oracle Advanced Security SSL Troubleshooting Guide 
  Doc ID:  166492.1 






-- KERBEROS

Kerberos: High Level Introduction and Flow 
  Doc ID:  294136.1 


-- 11g /etc/hosts

11g Network Layer Does Not Use /etc/hosts on UNIX
  	Doc ID: 	803838.1



-- INBOUND_CONNECT_TIMEOUT
Description of Parameter SQLNET.INBOUND_CONNECT_TIMEOUT
  	Doc ID: 	274303.1

ORA - 12170 Occured While Connecting to RAC DB using NAT external IP address
  	Doc ID: 	453544.1


How I Resolved ORA-03135: connection lost contact
  	Doc ID: 	465572.1






-- LISTENER

How to Create Multiple Oracle Listeners and Multiple Listener Addresses
  	Doc ID: 	232010.1

How to Create Additional TNS listeners and Load Balance Connections Between them
  	Doc ID: 	557946.1

How to Disable AutoRegistration of an Instance with the Listener
  	Doc ID: 	140571.1



-- LISTENER - AUDIT VAULT

How To Change The Port of The Listener Configured for the AV Database ?
  	Doc ID: 	753577.1



-- MTS, SHARED SERVER

How MTS and DNS are related, MTS_DISPATCHER and ORA-12545
  	Doc ID: 	131658.1



-- LISTENER TRACING

How to Enable Oracle SQLNet Client , Server , Listener , Kerberos and External procedure Tracing from Net Manager
  	Doc ID: 	395525.1

How to Match Oracle Net Client and Server Trace Files
  	Doc ID: 	374116.1

Using and Disabling the Automatic Diagnostic Repository (ADR) with Oracle Net for 11g
  	Doc ID: 	454927.1

Examining Oracle Net, Net8, SQL*Net Trace Files
  	Doc ID: 	156485.1








{{{
NOTE: you need the boot.iso to do the network install


########## PREPARE THE REPOSITORY (for FTP install) ##########

NOTE: in VSFTPD, the directory root for this service is 
	/var/ftp/pub you have to create the directory under this


1) 

# mkdir -pv install/centos/4/{os,updates}/i386 


2) 
	contents of the installation CD (RHEL4):
		base
			- contains key images required and must be in source tree, below are the contents of base

				-r--r--r-- 1 oracle root   718621 Apr 17 04:31 comps.xml
				-r--r--r-- 1 oracle root 15118336 Apr 17 04:43 netstg2.img
				-r--r--r-- 1 oracle root 14835712 Apr 17 04:43 hdstg2.img
				-r--r--r-- 1 oracle root 69660672 Apr 17 04:44 stage2.img
				-r--r--r-- 1 oracle root 22358872 Apr 17 04:46 hdlist2
				-r--r--r-- 1 oracle root  8716184 Apr 17 04:46 hdlist
				-r--r--r-- 1 oracle root  9525755 Apr 17 04:54 comps.rpm
				-r--r--r-- 1 oracle root     1546 Apr 17 05:00 TRANS.TBL
		
		RPMS
		
		SRPMS
			- contains source RPMS
		
		images
			- create different type of boot disks
			- boot.iso 	<-- create boot cdrom for network install
			- diskboot.img	<-- for devices larger than a floppy
			- pxeboot 	<-- installed on the DHCP server
		
		release notes
			- copy all the release notes

3) 

for RHEL4 and 5, you could just copy all the contents of the CD

# cp RELEASE-NOTES-* /install

4) 

	

			for http:
			# cp -av /media/cdrecorder/RedHat/ /install
			
				below will be the final contents of the directory
					dr-xr-xr-x 2 oracle root  4096 Apr 17 04:54 base
					dr-xr-xr-x 3 oracle root 94208 Apr 17 04:46 RPMS
					-r--r--r-- 1 oracle root   432 Apr 17 05:00 TRANS.TBL

			for ftp:
			cp -a --reply=yes /mnt/discx/RedHat /var/ftp/pub
			cp -a --reply=yes /mnt/discx/images /var/ftp/pub
			cp -a /mnt/discx/* /var/ftp/pub/docs


5) eject and insert disk2

6) 

# cp -av /media/cdrecorder/RedHat/ /install





########## HTTPD (apache) ##########

NOTE: in HTTPD (RHEL) the directory root is /var/www/html the config is in /etc/httpd/conf/httpd.conf
	in SUSE the directory root is /srv/www/htdocs the config is in /etc/apache2/default-server.conf
	

1) edit the httpd.conf look for "alias"

2) add the following lines

					<-- ALIAS: any request that's made to our server
					is redirected to a location on the hard drive,
					because the document root is in a different location,
					so you have to redirect the files..
					WEBSPACE MAPPING to FILESYSTEM MAPPING


Alias /install "/var/ftp/pub/install"

<Directory "/var/ftp/pub/install">	
    Options Indexes MultiViews
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>				<-- if you don't specify this you won't see the tree


OR.... this could be another directory outside of /var/ftp/pub/install... see below: 

[root@oel4 ~]# vi /etc/httpd/conf/httpd.conf
# ADD THE LINE BELOW ON THE ALIAS PART
Alias /oel4.6 "/oracle/installers/oel/4.6/os/x86"
<Directory "/oracle/installers/oel/4.6/os/x86">
Options Indexes MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory>
[root@oel4 ~]# service httpd restart
Stopping httpd: [FAILED]
Starting httpd: [ OK ]

and have it as a YUM repository 

[root@racnode1 yum.repos.d]# mv ULN-Base.repo ULN-Base.repo.bak
[root@racnode1 yum.repos.d]# vi oel46.repo
# ADD THE FOLLOWING LINES
[OEL4.6]
name=Enterprise-$releasever - Media
baseurl=http://192.168.203.24/oel4.6/Enterprise/RPMS
gpgcheck=1
gpgkey=http://192.168.203.24/oel4.6/RPM-GPG-KEY-oracle


3) restart the service; the install tree is now ready for both FTP and HTTP installs





########## NFS install ##########



For NFS, export the directory by adding an entry to /etc/exports to export to a specific system: 
/location/of/disk/space client.ip.address(ro,no_root_squash)

To export to all machines (not appropriate for all NFS systems), add: 
/location/of/disk/space *(ro,no_root_squash)

# service nfs reload

}}}
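Once the tree above is shared over HTTP, FTP, or NFS, the installer on a client just needs to be pointed at it. A sketch of the client side (the server IP and paths are assumptions carried over from the examples above):

```
# at the boot prompt of a boot.iso / PXE client, for an interactive choice:
linux askmethod

# or non-interactively in a kickstart file, one of:
url --url http://192.168.203.24/install
nfs --server 192.168.203.24 --dir /install
```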
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/3/html/Installation_and_Configuration_Guide/Disabling_Network_Manager.html

http://www.softpanorama.org/Net/Linux_networking/RHEL_networking/disabling_network_manager_in_rhel6.shtml

http://xmodulo.com/2014/02/disable-network-manager-linux.html

http://serverfault.com/questions/429014/what-is-the-relation-between-networkmanager-and-network-service-in-fedora-rhel-c

http://blog.beausanders.org/blog7/?q=node/19

https://apex.oracle.com/database-features/
https://oradiff.oracle.com/ords/r/oradiff/oradiff/home?session=711138461406179


https://apex.oracle.com/database-features/
https://twitter.com/dominic_giles/status/1169161999026184193

[img(100%,100%)[ https://i.imgur.com/csARb3I.png]]





<<<
Oracle has announced the latest generation of Sun Fire M3 servers… The new servers will have Intel E5-based CPUs, which means higher speeds and more memory.  The X4170 M3 server (Exadata compute nodes) will support up to 512GB of RAM in a single 1U server, along with 4 onboard 10GbE NICs.  Could certainly make the next generation of Exadata even more interesting.

http://www.oracle.com/us/products/servers-storage/servers/x86/overview/index.html

<<<


<<<
Yes, it will be pretty exciting.. 

See the comparison of the X2 CPU (X5670) against the benchmark of Oracle on Ebiz with the new E5 CPU (http://goo.gl/vGTrg)

Comparison here  http://ark.intel.com/compare/64596,47920

<<<

/***
|Name:|NewHerePlugin|
|Description:|Creates the new here and new journal macros|
|Version:|3.0 ($Rev: 3861 $)|
|Date:|$Date: 2008-03-08 10:53:09 +1000 (Sat, 08 Mar 2008) $|
|Source:|http://mptw.tiddlyspot.com/#NewHerePlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License|http://mptw.tiddlyspot.com/#TheBSDLicense|
***/
//{{{
merge(config.macros, {
	newHere: {
		handler: function(place,macroName,params,wikifier,paramString,tiddler) {
			wikify("<<newTiddler "+paramString+" tag:[["+tiddler.title+"]]>>",place,null,tiddler);
		}
	},
	newJournalHere: {
		handler: function(place,macroName,params,wikifier,paramString,tiddler) {
			wikify("<<newJournal "+paramString+" tag:[["+tiddler.title+"]]>>",place,null,tiddler);
		}
	}
});

//}}}
/***
|Name:|NewMeansNewPlugin|
|Description:|If 'New Tiddler' already exists then create 'New Tiddler (1)' and so on|
|Version:|1.1.1 ($Rev: 2263 $)|
|Date:|$Date: 2007-06-13 04:22:32 +1000 (Wed, 13 Jun 2007) $|
|Source:|http://mptw.tiddlyspot.com/empty.html#NewMeansNewPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License|http://mptw.tiddlyspot.com/#TheBSDLicense|
!!Note: I think this should be in the core
***/
//{{{

// change this or set config.newMeansNewForJournalsToo in MptwUserConfigPlugin
if (config.newMeansNewForJournalsToo == undefined) config.newMeansNewForJournalsToo = true;

String.prototype.getNextFreeName = function() {
       var numberRegExp = / \(([0-9]+)\)$/;
       var match = numberRegExp.exec(this);
       if (match) {
               var num = parseInt(match[1]) + 1;
               return this.replace(numberRegExp," ("+num+")");
       }
       else {
               return this + " (1)";
       }
}

config.macros.newTiddler.checkForUnsaved = function(newName) {
	var r = false;
	story.forEachTiddler(function(title,element) {
		if (title == newName)
			r = true;
	});
	return r;
}

config.macros.newTiddler.getName = function(newName) {
       while (store.getTiddler(newName) || config.macros.newTiddler.checkForUnsaved(newName))
               newName = newName.getNextFreeName();
       return newName;
}


config.macros.newTiddler.onClickNewTiddler = function()
{
	var title = this.getAttribute("newTitle");
	if(this.getAttribute("isJournal") == "true") {
		title = new Date().formatString(title.trim());
	}

	// ---- these three lines should be the only difference between this and the core onClickNewTiddler
	if (config.newMeansNewForJournalsToo || this.getAttribute("isJournal") != "true")
		title = config.macros.newTiddler.getName(title);

	var params = this.getAttribute("params");
	var tags = params ? params.split("|") : [];
	var focus = this.getAttribute("newFocus");
	var template = this.getAttribute("newTemplate");
	var customFields = this.getAttribute("customFields");
	if(!customFields && !store.isShadowTiddler(title))
		customFields = String.encodeHashMap(config.defaultCustomFields);
	story.displayTiddler(null,title,template,false,null,null);
	var tiddlerElem = story.getTiddler(title);
	if(customFields)
		story.addCustomFields(tiddlerElem,customFields);
	var text = this.getAttribute("newText");
	if(typeof text == "string")
		story.getTiddlerField(title,"text").value = text.format([title]);
	for(var t=0;t<tags.length;t++)
		story.setTiddlerTag(title,tags[t],+1);
	story.focusTiddler(title,focus);
	return false;
};

//}}}
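As a plain-shell illustration of the renaming rule the plugin implements ('New Tiddler' becomes 'New Tiddler (1)', then 'New Tiddler (2)', and so on) — this sketch is not part of the plugin itself:

```shell
# return the next free name: append " (1)" or bump an existing " (n)" suffix
next_free_name() {
  case "$1" in
    *' ('[0-9]*')')
      n=$(printf '%s' "$1" | sed 's/.*(\([0-9][0-9]*\))$/\1/')
      base=$(printf '%s' "$1" | sed 's/ ([0-9][0-9]*)$//')
      printf '%s (%s)\n' "$base" $((n + 1))
      ;;
    *)
      printf '%s (1)\n' "$1"
      ;;
  esac
}
next_free_name "New Tiddler"       # prints: New Tiddler (1)
next_free_name "New Tiddler (1)"   # prints: New Tiddler (2)
```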
http://venturebeat.com/2012/06/18/nginx-the-web-server-tech-youve-never-heard-of-that-powers-netflix-facebook-wordpress-and-more/
http://tengine.taobao.org/
http://www.cpearson.com/excel/noblanks.aspx
http://chandoo.org/wp/2010/01/26/delete-blank-rows-excel/



=IFERROR(INDEX(CpuCoreBlank,SMALL((IF(LEN(CpuCoreBlank),ROW(INDIRECT("1:"&ROWS(CpuCoreBlank))))),ROW(A1)),1),"")
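The array formula above compacts a range by skipping blank cells. Outside Excel, the same "drop the blanks, keep the order" idea is a one-liner (illustrative only, the values are made up):

```shell
# drop empty lines from a list while preserving order
printf 'cpu1\n\ncpu2\n\ncpu3\n' | grep -v '^$'
# prints: cpu1 cpu2 cpu3 (one per line, blanks removed)
```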
<<showtoc>> 

https://github.com/mitchellh/vagrant-google/issues/234
{{{

I experienced this "NoMethodError in run_instance" issue and fixed it by adding the "Compute Admin" role to my service account. By the way, this is just my test environment, so use a subset of this role in prod.

Initially I was having this error when accessing the API. I agree that there should be a more descriptive error message for permission issues.

gcurl https://compute.googleapis.com/compute/v1/projects/ansible-swarm/zones/us-east1-c/diskTypes/pd-standard
{
  "error": {
    "code": 403,
    "message": "Required 'compute.diskTypes.get' permission for 'projects/ansible-swarm/zones/us-east1-c/diskTypes/pd-standard'",
    "errors": [
      {
        "message": "Required 'compute.diskTypes.get' permission for 'projects/ansible-swarm/zones/us-east1-c/diskTypes/pd-standard'",
        "domain": "global",
        "reason": "forbidden"
      }
    ]
  }
}

}}}

! the detailed error 

!! error msg
{{{


kristofferson.a.arao@karldevgcp:~/vagrant-gcp$ vagrant up --provider=google
Bringing machine 'default' up with 'google' provider...
==> default: Checking if box 'google/gce' version '0.1.0' is up to date...
==> default: Launching an instance with the following settings...
==> default:  -- Name:            develmach
==> default:  -- Project:         example-dev-284123
==> default:  -- Type:            n1-standard-2
==> default:  -- Disk type:       pd-standard
==> default:  -- Disk size:       10 GB
==> default:  -- Disk name:       
==> default:  -- Image:           
==> default:  -- Image family:    ubuntu-os-cloud
==> default:  -- Instance Group:  
==> default:  -- Zone:            us-east1-b
==> default:  -- Network:         default
==> default:  -- Network Project: example-dev-284123
==> default:  -- Metadata:        '{}'
==> default:  -- Labels:          '{}'
==> default:  -- Network tags:    '[]'
==> default:  -- IP Forward:      
==> default:  -- Use private IP:  false
==> default:  -- External IP:     
==> default:  -- Network IP:      
==> default:  -- Preemptible:     false
==> default:  -- Auto Restart:    true
==> default:  -- On Maintenance:  MIGRATE
==> default:  -- Autodelete Disk: true
==> default:  -- Additional Disks:[]
Traceback (most recent call last):
	35: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/batch_action.rb:82:in `block (2 levels) in run'
	34: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/machine.rb:194:in `action'
	33: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/machine.rb:194:in `call'
	32: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/environment.rb:614:in `lock'
	31: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/machine.rb:208:in `block in action'
	30: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/machine.rb:239:in `action_raw'
	29: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `run'
	28: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:19:in `busy'
	27: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `block in run'
	26: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/builder.rb:116:in `call'
	25: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	24: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/builtin/handle_box.rb:56:in `call'
	23: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	22: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
	21: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	20: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/builtin/box_check_outdated.rb:84:in `call'
	19: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	18: from /home/kristofferson.a.arao/.vagrant.d/gems/2.5.5/gems/vagrant-google-2.5.0/lib/vagrant-google/action/connect_google.rb:45:in `call'
	17: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	16: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/builtin/call.rb:53:in `call'
	15: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `run'
	14: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:19:in `busy'
	13: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `block in run'
	12: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/builder.rb:116:in `call'
	11: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	10: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
	 9: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	 8: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/builtin/provision.rb:80:in `call'
	 7: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	 6: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/builtin/synced_folders.rb:87:in `call'
	 5: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	 4: from /home/kristofferson.a.arao/.vagrant.d/gems/2.5.5/gems/vagrant-google-2.5.0/lib/vagrant-google/action/warn_networks.rb:28:in `call'
	 3: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	 2: from /home/kristofferson.a.arao/.vagrant.d/gems/2.5.5/gems/vagrant-google-2.5.0/lib/vagrant-google/action/warn_ssh_keys.rb:28:in `call'
	 1: from /usr/share/rubygems-integration/all/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
/home/kristofferson.a.arao/.vagrant.d/gems/2.5.5/gems/vagrant-google-2.5.0/lib/vagrant-google/action/run_instance.rb:106:in `call': undefined method `self_link' for nil:NilClass (NoMethodError)



kristofferson.a.arao@karldevgcp:~/vagrant-gcp$ ls -ltr
total 8
-rw-r--r-- 1 kristofferson.a.arao kristofferson.a.arao 2339 Sep 13 18:40 example-dev-284123-1c4cf8cf3f8c.json
-rw-r--r-- 1 kristofferson.a.arao kristofferson.a.arao  983 Sep 13 19:09 Vagrantfile





kristofferson.a.arao@karldevgcp:~/vagrant-gcp$ gcloud auth activate-service-account --key-file=example-dev-284123-1c4cf8cf3f8c.json 
Activated service account credentials for: [example-dev-svc@example-dev-284123.iam.gserviceaccount.com]

kristofferson.a.arao@karldevgcp:~/vagrant-gcp$ alias gcurl='curl -H "Authorization: Bearer $(gcloud auth print-access-token)"'
kristofferson.a.arao@karldevgcp:~/vagrant-gcp$ gcurl https://compute.googleapis.com/compute/v1/projects/ansible-swarm/zones/us-east1-c/diskTypes/pd-standard
{
  "error": {
    "code": 403,
    "message": "Required 'compute.diskTypes.get' permission for 'projects/ansible-swarm/zones/us-east1-c/diskTypes/pd-standard'",
    "errors": [
      {
        "message": "Required 'compute.diskTypes.get' permission for 'projects/ansible-swarm/zones/us-east1-c/diskTypes/pd-standard'",
        "domain": "global",
        "reason": "forbidden"
      }
    ]
  }
}


}}}
Videos about RAC performance tuning and Under the Hoods of Cache Fusion, GES, GCS and GRD
http://oraclenz.com/2010/07/26/nzoug-and-laouc-june-and-july-webinars-recording/


see also RACMetalink


see LinkedIn 
http://www.linkedin.com/groupItem?view=&gid=2922607&type=member&item=16757466&qid=6465cd0e-4d67-4e50-a293-55f69c318507&goback=.gmp_2922607
{{{
Node Evictions on RAC, what to do and what to collect
We have worked with customers who have had node evictions and we have been asked to determine the root cause. First, node evictions in RAC are part of a mechanism to prevent nodes from corrupting data when they get into a hung state, or are no longer healthy enough to continue as part of the cluster and start to degrade performance of the cluster as a whole. Oracle uses its Clusterware, part of the GI (Grid Infrastructure) stack, to decide whether nodes are healthy, using a voting disk and a heartbeat mechanism across nodes. If either of these is missing or does not make it in time, Oracle initiates a voting cycle to decide which portion of the subcluster survives, and the remaining nodes continue as if nothing happened. There are a couple of basic things which come to mind (I'll supplement this discussion with My Oracle Support notes later):

- /var/log/messages file from all nodes 
- all the clusterware logs (diagcollection.pl does this for you) 
- If this is an instance eviction then the logs from bdump and udump destinations of the database 
- There is a daemon called oprocd which is no longer present in 11gR2, although in previous releases it exists when there is no vendor clusterware. Its logs are in /etc/oracle/oprocd; they tell us whether the Clusterware or oprocd rebooted the node 
- Most systems tend to have crash dumps, which show which process took out the node and can help determine what went wrong. The current process list and the active processes on the run queue also tell you more.
}}}


RACHELP - node eviction
http://www.rachelp.nl/index_kb.php?menu=articles&actie=show&id=25
{{{
Date  	2008-12-06 09:42:43
Component 	CRS
Title 	What can cause a Node Eviction ?
Version 	10.1.0 - 11.1.0.7
Problem 	

Node evictions can occur in a cluster environment; the main question is why the eviction occurred. Below I try to make that part easier.
Solution 	

There are 4 possible causes why a node eviction can occur:
    * Kernel hang / extreme load on the system (OPROCD and/or HANGCHECK TIMER)
    * Heartbeat lost on the interconnect
    * Heartbeat lost on the voting disk
    * OCLSMON detects a CSSD hang

The title says cause, but a node eviction is a symptom of another problem, not the cause itself. Always keep this in mind when investigating why a node eviction occurred.

Kernel hang detection depends on the operating system used. On Windows and Linux it is based on the hangcheck timer; in other Unix environments OPROCD is started. From Oracle 10.2.0.4 onward OPROCD is also active on Linux (still install the hangcheck timer). To check whether the hangcheck timer caused the node eviction, check the OS logfiles; for OPROCD, check the OPROCD logfile.

 

Another possible node eviction can be triggered by OCLSMON, starting with the 10.2.0.3 patchset. This Clusterware process checks whether there is an issue with CSSD; when there is, it kills the CSSD daemon, which leads to the eviction. When this occurs, check the oclsmon logfile and contact Oracle Support. In this note we don't focus on these parts, but on heartbeat loss. 

Below are two examples of a lost-heartbeat symptom. The OCSSD background process takes care of the heartbeats, and the cssd.log file contains detailed information about the node eviction. In case of an eviction, check the cssd.log files on all the nodes in your cluster, but start with the evicted node. The exact logging format can change between patchsets and Oracle releases. 

Node eviction due to Interconnect lost symptom.

 Oracle 11g
[    CSSD]2008-11-20 10:59:36.510 [1220598112] >TRACE:  clssnmCheckDskSleepTime: Node 3, dbq0223,
dead, last DHB (1227175136, 73583764) after NHB (1227175121, 73568724), but LATS - current (39090) >
DTO (27000)
[    CSSD]2008-11-20 10:59:36.512 [1147169120] >TRACE:  clssnmReadDskHeartbeat: node 1, dbq0123,
has a disk HB, but no network HB, DHB has rcfg 122475875, wrtcnt, 164452, LATS 58728604, lastSeqNo
164452, timestamp 1227175122/73251784
[    CSSD]2008-11-20 10:59:37.513 [1199618400] >WARNING: clssnmPollingThread: node dbq0227 (5) at
90% heartbeat fatal, eviction in 1.660 seconds
[    CSSD]2008-11-20 10:59:37.513 [1220598112] >TRACE:  clssnmSendSync: syncSeqNo(122475875)
[    CSSD]2008-11-20 10:59:37.513 [1220598112] >TRACE:  clssnm_print_syncacklist: syncacklist (4)

 

Oracle 10g
[    CSSD]2006-10-18 23:49:06.199 [3600] >TRACE:   clssnmCheckDskInfo: Checking disk info...
[    CSSD]2006-10-18 23:49:06.199 [3600] >TRACE:   clssnmCheckDskInfo: node(2) timeout(172) state_network(0) state_disk(3) missCount(30)

[    CSSD]2006-10-18 23:49:06.226 [1] >USER:    NMEVENT_SUSPEND [00][00][00][06]
[    CSSD]2006-10-18 23:49:07.028 [1030] >TRACE:   clssnmReadDskHeartbeat: node(2) is down. rcfg(23) wrtcnt(634353) LATS(2345204583) Disk lastSeqNo(634353)
[    CSSD]2006-10-18 23:49:07.199 [3600] >TRACE:   clssnmCheckDskInfo: node(2) disk HB found, network state 0, disk state(3) missCount(31)

[    CSSD]2006-10-18 23:49:08.032 [1030] >TRACE:   clssnmReadDskHeartbeat: node(2) is down. rcfg(23) wrtcnt(634354) LATS(2345205587) Disk lastSeqNo(634354)
[    CSSD]2006-10-18 23:49:08.199 [3600] >TRACE:   clssnmCheckDskInfo: node(2) disk HB found, network state 0, disk state(3) missCount(32)

[    CSSD]2006-10-18 23:49:09.199 [3600] >TRACE:   clssnmCheckDskInfo: node(2) timeout(1167) state_network(0) state_disk(3) missCount(33)

[    CSSD]2006-10-18 23:49:10.199 [3600] >TRACE:   clssnmCheckDskInfo: node(2) timeout(2167) state_network(0) state_disk(3) missCount(33)

…….
[    CSSD]2006-10-18 23:49:18.571 [3086] >WARNING: clssnmPollingThread: state(0) clusterState(2) exit
[    CSSD]2006-10-18 23:49:18.572 [1287] >ERROR:   clssnmvDiskKillCheck: Evicted by node 1, sync 23, stamp -1949751541,
[    CSSD]2006-10-18 23:49:18.698 [3600] >TRACE:   0x110013a80 00 00 00 00 00 00 00 00 - 00 00 00 00 00 00 00 00

 

Here we see that the disk kill check is reported by node 1 and this node is evicted.

The disk kill check is done using a poison packet through the voting disk, as the interconnect is lost.

Possible action: check the availability of the adapters, large network load/port scans, and the OS logfiles for reported errors related to the interconnect.

 

Node eviction due to  Voting disk lost symptom.

Below an example where we lose the heartbeat to the voting disk.

[    CSSD]2006-10-11 00:35:33.658 [1801] >TRACE:   clssnmHandleSync: Acknowledging sync: src[1] srcName[alligator] seq[9] sync[15]

[    CSSD]2006-10-11 00:35:36.956 [1801] >TRACE:   clssnmHandleSync: diskTimeout set to (27000)ms
[    CSSD]2006-10-11 00:35:36.957 [1801] >WARNING: CLSSNMCTX_NODEDB_UNLOCK: lock held for 3300 ms
[    CSSD]2006-10-11 00:35:36.956 [1544] >TRACE:   clssnmDiskPMT: stale disk (32490 ms) (0//dev/rora_vote_raw)
[    CSSD]2006-10-11 00:35:36.966 [1544] >ERROR:   clssnmDiskPMT: 1 of 1 voting disks unavailable (0/0/1)
[    CSSD]2006-10-11 00:35:37.043 [2058] >TRACE:   clssgmClientConnectMsg: Connect from con(112a8a9f0) proc(112a8f9d0) pid(480150) proto(10:2:1:1)

[    CSSD]2006-10-11 00:35:37.960 [3343] >TRACE:   clscsendx: (11145a3f0) Physical connection (111459b30) not active
[    CSSD]2006-10-11 00:35:37.051 [1] >USER:    NMEVENT_SUSPEND [00][00][00]06]

 

Possible action: check the availability of the disk subsystem and the OS logfiles for reported errors related to the voting disk.

Trace the heartbeat: if needed, you can enable a higher level of tracing to debug the heartbeat. This is done with the commands below (level 5 enables the extra tracing, level 0 disables it again). Keep in mind that this makes cssd.log grow fast (4 lines added every second).

crsctl debug log css CSSD:5

crsctl debug log css CSSD:0

NOTICE: a node eviction is a symptom of another problem!
}}}


''The Clusterware logs''
{{{
             My CRS_HOME in my test environment is /u01/app/oracle/product/crs

-- alert log
/u01/app/oracle/product/crs/log/racnode1/alertracnode1.log

-- CSS log
/u01/app/oracle/product/crs/log/racnode1/cssd/cssdOUT.log
/u01/app/oracle/product/crs/log/racnode1/cssd/ocssd.log
/u01/app/oracle/product/crs/log/racnode1/cssd/racnode1.pid

-- CRSD log
/u01/app/oracle/product/crs/log/racnode1/crsd/crsd.log

-- RACG log
/u01/app/oracle/product/crs/log/racnode1/racg/ora.racnode1.ons.log

-- CRS EVM log
/u01/app/oracle/product/crs/evm/log/racnode1_evmdaemon.log
/u01/app/oracle/product/crs/evm/log/racnode1_evmlogger.log
/u01/app/oracle/product/crs/log/racnode1/evmd/evmd.log
/u01/app/oracle/product/crs/log/racnode1/evmd/evmdOUT.log

-- client log
/u01/app/oracle/product/crs/log/racnode1/client/clsc.log
/u01/app/oracle/product/crs/log/racnode1/client/ocr_15504_3.log
/u01/app/oracle/product/crs/log/racnode1/client/oifcfg.log

-- oprocd logs
/etc/oracle/oprocd
}}}


''Things to check on node eviction (Karl's notes)'' (see also ClusterHealthMonitor and RDA-RemoteDiagnosticAgent and GetAlertLog)
{{{
- Execute GetAlertLog script, do this every end of the day or if you see any signs of a node eviction (on all nodes as oracle)
- Execute the AWR scripts, unzip the awrscripts.zip and execute the run_all.sql, zip the output files (on all nodes as oracle)
- Do a ClusterHealthMonitor dump (just on node1 as crfuser)
      /usr/lib/oracrf/bin/oclumon dumpnodeview -allnodes -v -last "23:59:59" > <your-directory>/<your-filename>
- Execute multinode RDA-RemoteDiagnosticAgent     (just on node1 as oracle)
      ssh-agent $SHELL
      ssh-add
      ./rda.sh -vX Remote setup_cluster 
      ./rda.sh -vX Remote list 
      ./rda.sh -v -e REMOTE_TRACE=1 
- Do a zip of directory /etc/oracle/oprocd (on all nodes as oracle)
- Do a zip of directory /var/log/sa (on all nodes as oracle)
- Do a zip of /var/log/messages file  (on all nodes as root)
- Execute $ORA_CRS_HOME/bin/diagcollection.pl --collect  (on all nodes as root, see Doc ID 330358.1)
}}}

[img[picturename| https://lh5.googleusercontent.com/-8CphyN6W-aE/TOuwimLbTYI/AAAAAAAAA9Y/JCqgr-y3Acg/s2048/RacNodeEviction.gif]]
Nologging in the E-Business Suite
 	Doc ID:	Note:216211.1

Force_logging in Physical Standby Environment
 	Doc ID:	Note:367560.1

Force Logging Feature in Oracle Database
 	Doc ID:	Note:174951.1

Changing Storage Definition in a Logical Standby Database
 	Doc ID:	Note:737460.1



The Gains and Pains of Nologging Operations
 	Doc ID:	Note:290161.1

A Study of Non-Partitioned NOLOGGING DML/DDL on Primary/Standby Data Dictionary
 	Doc ID:	Note:150694.1

Using Oracle7 UNRECOVERABLE and Oracle8 NOLOGGING Option
 	Doc ID:	Note:147474.1


https://taliphakanozturken.wordpress.com/tag/alter-table-logging/
http://www.ehow.com/how_6915411_import-non-csv-file-excel.html
http://office.microsoft.com/en-us/excel-help/import-or-export-text-files-HP010099725.aspx
http://karlarao.wordpress.com/2010/06/28/the-not-a-problem-problem-and-other-related-stuff	
http://www.samsalek.net/?p=2506
[img(30%,30%)[ http://www.samsalek.net/wp-content/uploads/2011/04/samsalek.net_notetakingv2.jpg ]]

http://highscalability.com/numbers-everyone-should-know
http://www.geekologie.com/2010/06/how-big-is-a-yottabyte-spoiler.php
http://highscalability.com/blog/2012/9/11/how-big-is-a-petabyte-exabyte-zettabyte-or-a-yottabyte.html

bytes to yottabytes visualized http://thumbnails.visually.netdna-cdn.com/bytes-sized_51c8d615a7b04.png
http://stevenpoitras.com/the-nutanix-bible/
Securing Your Application with OAuth and Passport
https://www.pluralsight.com/courses/oauth-passport-securing-application
* Exadata and Database Machine Version 2 Series - 1 of 25: Introduction to Smart Scan     Demo    19-Sep-10       10 mins http://goo.gl/AA48J
<<<
{{{

-- start
set timing on
select a.name, b.value/1024/1024 MB from v$sysstat a,
v$mystat b
where a.statistic# = b.statistic#
and (a.name in 
('physical read total bytes',
'physical write total bytes',
'cell IO uncompressed bytes')
or a.name like 'cell phy%');

-- do a non smart scan
select /*+ OPT_PARAM('cell_offload_processing' 'false') */
count(*) from sales
where time_id between '01-JAN-2003' and '31-DEC-2003'
and amount_sold = 1;

-- end 
set timing on
select a.name, b.value/1024/1024 MB from v$sysstat a,
v$mystat b
where a.statistic# = b.statistic#
and (a.name in 
('physical read total bytes',
'physical write total bytes',
'cell IO uncompressed bytes')
or a.name like 'cell phy%');

-- new session
connect sh/sh

-- start
set timing on
select a.name, b.value/1024/1024 MB from v$sysstat a,
v$mystat b
where a.statistic# = b.statistic#
and (a.name in 
('physical read total bytes',
'physical write total bytes',
'cell IO uncompressed bytes')
or a.name like 'cell phy%');

-- do the smart scan
select count(*) from sales
where time_id between '01-JAN-2003' and '31-DEC-2003'
and amount_sold = 1;

-- end
select a.name, b.value/1024/1024 MB from v$sysstat a,
v$mystat b
where a.statistic# = b.statistic#
and (a.name in 
('physical read total bytes',
'physical write total bytes',
'cell IO uncompressed bytes')
or a.name like 'cell phy%');
}}}
<<<
* Exadata and Database Machine Version 2 Series - 2 of 25: Introduction to Exadata Hybrid Columnar Compression    Demo    19-Sep-10       10 mins http://goo.gl/jBKSM
<<<
{{{
select table_name, compression, compress_for
from user_tables
where table_name like '<table_name>';

-- ensure direct path read is done
alter session force parallel query;
alter session force parallel ddl;
alter session force parallel dml;

create table mycust_query compress for query high
parallel 16 as select * from mycustomers;

create table mycust_archive compress for archive high
parallel 16 as select * from mycustomers;

select table_name, compression, compress_for
from user_tables
where table_name like '<table_name>';

select segment_name, sum(bytes)/1024/1024 
from user_segments;
}}}
<<<
* Exadata and Database Machine Version 2 Series - 3 of 25: Introduction to Exadata Smart Flash Cache      Demo    19-Sep-10       12 mins http://goo.gl/4UBic
<<<
{{{
-- start
select a.name, b.value/1024/1024 MB from v$sysstat a,
v$mystat b
where a.statistic# = b.statistic#
and (a.name like '%flash cache read hits'
or a.name like 'cell phy%'
or a.name like 'physical read tot%'
or a.name like 'physical read req%');

-- ensure IO is satisfied using Exadata storage
alter system flush buffer_cache;

-- performs 10000 record lookups, typical OLTP load
set serveroutput on
set timing on
declare 
	a number; 
	s number := 0;
begin
	for n in 1 .. 10000 loop
		select cust_credit_limit into a from customers
			where cust_id=n*5000;
		s := s+a;
	end loop;
	dbms_output.put_line('Transaction total = '||s);
end;
/

-- end 
select a.name, b.value/1024/1024 MB from v$sysstat a,
v$mystat b
where a.statistic# = b.statistic#
and (a.name like '%flash cache read hits'
or a.name like 'cell phy%'
or a.name like 'physical read tot%'
or a.name like 'physical read req%');

connect sh/sh

-- ensure IO is satisfied using Exadata storage
alter system flush buffer_cache;

-- then re execute the loop, you'll see better performance!
}}}
<<<
* Exadata and Database Machine Version 2 Series - 4 of 25: Exadata Process Introduction   Demo    19-Sep-10       6 mins http://goo.gl/qQ6dk
<<<
{{{

connect <celladmin>

-- show processes associated with Exadata restart server (RS)
ps -ef | grep cellrs

-- show Management Server
-- the parent process of MS is RS
ps -ef | grep ms.err

-- the main CELLSRV process
-- the parent process of CELLSRV is RS
ps -ef | grep "/cellsrv "

-- the OSWatcher.. output files located at /opt/oracle.oswatcher
ps -ef | grep OSWatcher

cellcli
list cell detail    <-- displays attributes of the cell
}}}
<<<
* Exadata and Database Machine Version 2 Series - 5 of 25: Hierarchy of Exadata Storage Objects   Demo    19-Sep-10       8 mins http://goo.gl/KYoyV
<<<
{{{

connect <celladmin>

# LUN
cellcli 
list lun		<-- list all the LUN on a cell
				<-- 12 disk based LUNs, 16 flash based LUNs
				
list lun where disktype = harddisk	<-- only show disk based LUNs

list lun 0_0 detail		<-- detailed attributes of the LUN
						<-- isSystemLun=TRUE means it's part of system disk, around 29GB reserved for OS, cell SW
						
# PHYSICAL DISK
list physicaldisk 20:10 detail <--	detailed attributes of physical disk, associated with a LUN

# CELL DISK - a higher level storage abstraction, each cell disk is based on a LUN
list celldisk CD_10_exa9cel01 detail	<-- detailed attributes

# GRID DISK
list griddisk where celldisk = CD_10_exa9cel01 detail
<-- a grid disk defines an area of storage on a cell disk
<-- grid disks are consumed by ASM and used as storage for ASM disk groups
<-- each cell disk can contain a number of grid disks
<-- grid disks are visible as disks inside ASM

select name,path,state,total_mb from v$asm_disk
where name like '%_CD_10_EXA9CEL01';
<-- path to the disk has the form o/<cell IP address>/<grid disk name>

select d.name disk, dg.name diskgroup
from v$asm_disk d, v$asm_diskgroup dg
where dg.group_number = d.group_number
and d.name like '%_CD_10_EXA9CEL01';
<-- grid disk to disk group mapping
}}}
<<<
* Exadata and Database Machine Version 2 Series - 6 of 25: Creating Interleaved Grid Disks        Demo    19-Sep-10       8 mins http://goo.gl/FrHes
<<<
{{{
cellcli
list lun where celldisk = null	<-- list all empty LUNs

-- interleaving option is specified in cell disk
create celldisk interleaving_test lun=0_11, INTERLEAVING='normal_redundancy'

list celldisk interleaving_test detail

create griddisk data1_interleaving_test celldisk=interleaving_test, size=200G
create griddisk data2_interleaving_test celldisk=interleaving_test

list griddisk where celldisk=interleaving_test detail

drop griddisk data1_interleaving_test
drop griddisk data2_interleaving_test
drop celldisk interleaving_test

<-- you cannot create a non-interleaved grid disk on a cell disk that has the 
INTERLEAVING='normal_redundancy' attribute
}}}
<<<
* Exadata and Database Machine Version 2 Series - 7 of 25: Examining Exadata Smart Flash Cache    Demo    19-Sep-10       8 mins http://goo.gl/TC41l
<<<
{{{


cellcli
list celldisk where disktype=flashdisk

list flashcache detail		<-- by default all flash-based disk are configured as Exadata Smart Flash Cache

list flashcachecontent detail	<-- shows info about the data inside flash cache, can help assess cache efficiency for specific db objects

list flashcachecontent where objectnumber=74576 and tablespacenumber=7 and dbuniquename=ST01 detail    <-- show info on specific db object
}}}
<<<
* Exadata and Database Machine Version 2 Series - 8 of 25: Exadata Cell Configuration     Demo    19-Sep-10       6 mins http://goo.gl/yy2uh
<<<
{{{
list cell detail 

> temperatureReading - current metrics
> notificationMethod - metrics that can be changed
> notificationPolicy

alter cell smtpToAddr='admin1@example.com, admin2@example.com'		<-- set the adjustable cell attributes
alter cell validate mail				<-- sends a test email

alter cell validate configuration		<-- to do a complete internal check of the cell config settings
}}}
<<<
* Exadata and Database Machine Version 2 Series - 9 of 25: Exadata Storage Provisioning   Demo    19-Sep-10       7 mins http://goo.gl/BiK0w
<<<
{{{

list lun where diskType = hardDisk and cellDisk = null				<-- will show all disk based LUNs that do not contain cell disks! 
																		typically cell disks and grid disks are created on each hard disk so that 
																		data can be spread evenly across the cell
																		
list celldisk where freeSpace != 0							<-- show unallocated free space on cell disks

create celldisk all harddisk interleaving='normal_redundancy'		<-- the command creates cell disks on all the available hard disks.. the hard disks that dont already
																		contain cell disks. the new cell disks are configured in preparation for interleaved grid disks
																		
list celldisk where freeSpace != 0			<-- will show the newly created cell disks

create griddisk all harddisk prefix=st01data2, size=280G	<--	this command creates two sets of interleaved disks on the recently created cell disks, others will be skipped if 
																they dont have the required space 
create griddisk all harddisk prefix=st02data2 				<--

list griddisk attributes name, size, ASMModeStatus			<-- list of all the grid disks, UNUSED means not yet consumed by ASM 
}}}
<<<
Exadata and Database Machine Version 2 Series - 10 of 25: Consuming Exadata Grid Disks Using ASM        Demo    19-Sep-10       10 mins http://goo.gl/Bmr7D
<<<
{{{

select name, header_status, path from v$asm_disk
where path like 'o/%/st01%'
and header_status = 'CANDIDATE'; 		<-- shows the list of CANDIDATE grid disks, the grid disk format is 
											o/<cell IP address>/<grid disk name>  .. the IP represents the storage cell
											
alter diskgroup st01data add disk 'o/*/st01data2_CD_11_exa9cel01';		<-- adds grid disk to ASM disk group

alter diskgroup st01data drop disk st01data2_CD_11_exa9cel01 rebalance power 11 wait;		<-- drops the disk

create diskgroup st01data2 normal redundancy 
disk 'o/*/st01data2*'
attribute 'compatible.rdbms' = '11.2.0.0.0',
'compatible.asm' = '11.2.0.0.0', 
'cell.smart_scan_capable' = 'TRUE',
'au_size' = '4M';						<-- creates disk group with the recommended disk group attributes!!! you'll also notice that grid disks are automatically
											grouped into separate failure groups
}}}
<<<
Exadata and Database Machine Version 2 Series - 11 of 25: Exadata Cell User Accounts    Demo    19-Sep-10       5 mins http://goo.gl/P5Dfi
<<<
{{{
cellmonitor			<-- able to monitor Exadata using LIST

celladmin			<-- can create, modify, drop exadata cell objects

root				<-- full access; note the CALIBRATE command can only be executed as root
}}}
<<<
Exadata and Database Machine Version 2 Series - 12 of 25: Monitoring Exadata Using Metrics, Alerts and Active Requests  Demo    19-Sep-10       10 mins http://goo.gl/34Puy
<<<
{{{
list metricdefinition			<-- metrics are recorded observations of important run-time properties or internal instrumentation
									of the storage cell or its components (cell disks, grid disks)
									
list metricdefinition detail	<-- provides more comprehensive info about all the metrics

list metricdefinition where name like 'CL_.*' detail 	<-- add a WHERE condition to view specific metrics

list metriccurrent 			<-- shows the most current metric observations

list metriccurrent where objecttype = 'CELL'	<-- add WHERE to show subset of metrics

list metriccurrent where alertState != normal	<-- shows metrics in abnormal state

list metriccurrent cl_temp		<-- shows specific metric, shows current temperature measured inside the Exadata server
list metriccurrent cl_fsut		<-- shows the space utilization of the cell OS and exadata software filesystems

list metrichistory where alertState != normal		<-- historical metrics, default retention is 7 days. This command will determine whether there were any 
														abnormal states in the past 7 days!
														
list metrichistory where cl_temp memory				<-- list historical observations that are still held in memory

list alerthistory				<-- shows all the alerts maintained in the alert repository

drop alerthistory all			<-- clear out unwanted alerts, this command clears the entire alert history

list threshold					<-- list the defined threshold on exadata cell, default is none defined

list alertdefinition			<-- list all available sources of the alerts on the cell

create threshold cl_fsut."/" comparison='>', warning=48			<-- creates threshold on the cell filesystem

list threshold detail		<-- shows the definition of threshold

dd if=/dev/zero of=/tmp/file.out bs=1024 count=950000		<-- creates a big file

list alerthistory		<-- check the alert generated!!!
list alerthistory detail
alter alerthistory 1_1 examinedby='st01'		<-- modify the alert to indicate that you have examined it!

rm /tmp/file.out

list metriccurrent cl_fsut

list alerthistory
list alerthistory	<-- will show the begin and end of the alert condition

alter session force parallel dml;

update customers set cust_credit=0.9*cust_credit_limit 
where cust_id < 2000000;

list activerequest detail			<-- view of IO requests that are currently being processed by a cell..
										shows reason for IO, size of IO, grid disk accessed, TBS number, obj number, SQLID
}}}
<<<
Exadata and Database Machine Version 2 Series - 13 of 25: Monitoring Exadata From Within Oracle Database        Demo    19-Sep-10       10 mins http://goo.gl/RNNqC
<<<
{{{
explain plan for 
select avg(cust_credit_limit)
from customers where cust_credit_limit < 10000;			<-- you can identify smart scan is used by looking at execution plan

select * from table(dbms_xplan.display);

select sql_text, physical_read_bytes, physical_write_bytes, io_interconnect_bytes, io_cell_offload_eligible_bytes, io_cell_uncompressed_bytes,
io_cell_offload_returned_bytes, optimized_phy_read_requests
from v$sql where sql_text like 'select avg%';		<-- you can determine the effectiveness of smart scan for a query by evaluating the ratio between 
														IO_CELL_OFFLOAD_ELIGIBLE_BYTES and IO_CELL_OFFLOAD_RETURNED_BYTES. IOs optimized by the use of storage indexes or 
														Exadata Smart Flash Cache are counted under OPTIMIZED_PHY_READ_REQUESTS
														
select statistic_name, value
from v$segment_statistics
where owner='SH' and object_name='CUSTOMERS'
and statistic_name = 'optimized physical reads';	<-- shows number of IO requests optimized by exadata

"cell session smart scan efficiency" 	<-- sysstat value , the higher value.. better

select w.event, c.cell_path, d.name, w.p3
from v$session_wait w, v$event_name e, v$asm_disk d, v$cell c
where e.name like 'cell%'
and e.wait_class_id = w.wait_class_id
and w.p1 = c.cell_hashval 
and w.p2 = d.hash_value;			<-- shows WAITS related to Exadata IOs
}}}
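A rough way to read the V$SQL counters above is the share of eligible bytes that smart scan filtered out on the cells; a sketch with made-up byte counts (the real figures come from IO_CELL_OFFLOAD_ELIGIBLE_BYTES and IO_CELL_OFFLOAD_RETURNED_BYTES):

```shell
# Made-up counter values (bytes) standing in for the v$sql columns.
eligible=9323814912      # io_cell_offload_eligible_bytes
returned=536870912       # io_cell_offload_returned_bytes
# percentage of eligible I/O that never crossed the interconnect
pct=$(( (eligible - returned) * 100 / eligible ))
echo "offload efficiency: ${pct}%"
```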
<<<
Exadata and Database Machine Version 2 Series - 14 of 25: Exadata High Availability     Demo    19-Sep-10       10 mins http://goo.gl/JrnN3
<<<
{{{
-- long running query

ps -ef | grep "/cellsrv "

kill -9 <cellsrv pid>			<-- kill the cellsrv process

ps -ef | grep "/cellsrv "		<-- will create a new process

list alerthistory

alter cell restart services all

ps -ef | grep "/cellsrv "		<-- will create a new process

-- long running query not interrupted and completed
}}}
<<<
Exadata and Database Machine Version 2 Series - 15 of 25: Intradatabase I/O Resource Management Demo    19-Sep-10       10 mins http://goo.gl/aqx2J
<<<
{{{

create user fred identified by fred account unlock; 

create user dave identified by dave account unlock;

grant connect to fred, dave;

grant select any table to fred, dave;

-- then connect as fred and dave on separate windows and execute this 

select count(*) from sh.sales where amount_sold=1;		<-- with no intradatabase resource plan, both users' queries run with the same priority and similar elapsed times

-- now as SYSDBA create a database resource plan, specified 80/20 split between two consumer groups HI and LO

begin
	dbms_resource_manager.create_simple_plan(
		simple_plan => 'my_plan',
		consumer_group1 => 'HI', group1_percent => 80,
		consumer_group2 => 'LO', group2_percent => 20);
end;
/

begin
	dbms_resource_manager.create_pending_area();
	dbms_resource_manager_privs.grant_switch_consumer_group(
		grantee_name => 'FRED',
		consumer_group => 'HI',
		grant_option => true);
	dbms_resource_manager_privs.grant_switch_consumer_group(
		grantee_name => 'DAVE',
		consumer_group => 'LO',
		grant_option => true);
	dbms_resource_manager.set_consumer_group_mapping(
		dbms_resource_manager.oracle_user,'FRED','HI');
	dbms_resource_manager.set_consumer_group_mapping(
		dbms_resource_manager.oracle_user,'DAVE','LO');
	dbms_resource_manager.submit_pending_area();
end;
/

alter system set resource_manager_plan = 'my_plan';		<-- the newly created db resource mgt plan is enabled!!! when you set the plan in the database
															the plan is automatically propagated to the exadata cells to enable intradatabase io resource
															management. for this to work you must have an active iormplan on your exadata cells, even if it's a null plan.
															
select * from dba_rsrc_consumer_group_privs;	<-- confirm consumer group associations

select count(*) from sh.sales where amount_sold=1;		<-- re-execute as both users; the elapsed times now differ according to the 80/20 allocation
}}}
<<<
Exadata and Database Machine Version 2 Series - 16 of 25: Interdatabase I/O Resource Management Demo    19-Sep-10       12 mins http://goo.gl/jZptS
<<<
{{{
create bigfile tablespace test
datafile '+ST01DATA2' size 40g;  <-- on both databases

list metriccurrent CD_IO_BY_W_LG_SEC where metricobjectname like 'CD.*'		<-- shows large write throughput

alter iormplan dbplan=((name=ST01, level=1, allocation=100), (name=other, level=2, allocation=100))

alter iormplan active

list iormplan detail

create bigfile tablespace test
datafile '+ST01DATA2' size 40g;   <-- on both databases
}}}
<<<
Exadata and Database Machine Version 2 Series - 17 of 25: Configuring Flash-Based Disk Groups   Demo    19-Sep-10       16 mins http://goo.gl/9Ve8c
<<<
{{{

list flashcache detail		<-- each exadata server contains 384GB of high performance flash memory. by default all flash memory
								is configured as exadata smart flash cache. 
								
drop flashcache			<-- drop flash cache

list celldisk attributes name,freeSpace,size where diskType=FlashDisk		<-- after dropping, each flash based cell disk shows that all the usable space is free

create flashcache all size=100g		<-- now a smaller than default, smart flash cache is configured spread across cell disk 6.25GB x 16 = 100

create griddisk all flashdisk prefix=st01flash, size=8G		<-- will create flash based grid disk on the 100GB just created.. the same command used to create disk based grid disk
																except the FLASHDISK keyword
																
create griddisk all flashdisk prefix=st02flash			<-- this will create flash based grid disk on all of the remaining free space on the flash based cell disks

list griddisk attributes name,size,ASMModeStatus where disktype=flashdisk 	<-- this will list the newly created flash based grid disks! ready to be consumed by ASM

select path, header_status from v$asm_disk
where path like 'o/%/st01flash%';				<-- will list flash based grid disks.. from viewpoint of ASM flash and disk based grid disks are the same

create diskgroup st01flash normal redundancy 
disk 'o/*/st01flash*'
attribute 'compatible.rdbms' = '11.2.0.0.0',
'compatible.asm' = '11.2.0.0.0', 
'cell.smart_scan_capable' = 'TRUE',
'au_size' = '4M';						<-- this will create a flash based disk group!!! this will also be automatically grouped into separate failure groups

drop diskgroup st01flash;				<-- drops the disk group

drop griddisk all prefix=st01flash

drop griddisk all prefix=st02flash

drop flashcache

create flashcache all		<-- the default smart flash cache is configured on the cell
}}}
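The 6.25GB x 16 sizing noted above is just the requested cache size divided evenly across the flash-based cell disks; a quick check:

```shell
# The 100 GB flash cache is spread evenly over the 16 flash-based cell disks.
total_gb=100
flash_disks=16
per_disk=$(awk -v t="$total_gb" -v n="$flash_disks" 'BEGIN { printf "%.2f", t/n }')
echo "per-celldisk slice: ${per_disk} GB"
```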
<<<
Exadata and Database Machine Version 2 Series - 18 of 25: Examining Exadata Hybrid Columnar Compression Demo    19-Sep-10       14 mins http://goo.gl/ppP3q
<<<
{{{
set serveroutput on 
set timing on 
declare
	b_cmp number;
	b_ucmp number;
	r_cmp number;
	r_ucmp number;
	cmp_ratio number(6,2);
	cmp_type varchar2(1024);
begin
	dbms_compression.get_compression_ratio('SH','SH','MYCUSTOMERS',NULL,DBMS_COMPRESSION.COMP_FOR_QUERY_HIGH,b_cmp,b_ucmp,r_cmp,r_ucmp,cmp_ratio,cmp_type);
	dbms_output.put_line('Table: MYCUSTOMERS');
	dbms_output.put_line('Compression Ratio: '||cmp_ratio);
	dbms_output.put_line('Compression Type:  '||cmp_type);
	dbms_compression.get_compression_ratio('SH','SH','MYCUSTOMERS',NULL,DBMS_COMPRESSION.COMP_FOR_ARCHIVE_HIGH,b_cmp,b_ucmp,r_cmp,r_ucmp,cmp_ratio,cmp_type);
	dbms_output.put_line('Table: MYCUSTOMERS');
	dbms_output.put_line('Compression Ratio: '||cmp_ratio);
	dbms_output.put_line('Compression Type:  '||cmp_type);
end;
/								<-- will show you the compression advisor rates!!! 5.3 and 6.6 respectively

select segment_name, sum(bytes)/1024/1024 
from user_segments
where segment_name like 'MYCUST%'
group by segment_name;			<-- will show the ratio

MYCUST_QUERY   1673	<-- 5.3 RATIO (8850/1673)
MYCUSTOMERS    8850
MYCUST_ARCHIVE 1301	<-- 6.7 RATIO

-- ensure direct path read is done
alter session force parallel query;
alter session force parallel ddl;
alter session force parallel dml;

insert /*+ APPEND */ into mycustomers 
select * from seed_data;				<-- 00:00:01.22 at 1000000 rows   NORMAL
										<-- 00:00:00.89 at 1000000 rows   QUERY COMPRESSION.. performance is offset by less IO operations, suited for DW environments large data loads
										<-- 00:00:03.57 at 1000000 rows   ARCHIVE..	slower, uses more costly algorithm for high compression. suited for archiving 
										
select avg(cust_credit_limit) from mycustomers;	<-- 00.00.10.92  NORMAL
														cell physical IO interconnect bytes                        939MB	<-- data returned by smart scan
														cell physical IO interconnect bytes returned by smart scan 939MB
														cell physical IO bytes eligible for predicate offload      8892MB	<-- offloaded to exadata
												<-- 00.00.02.03  QUERY.. IO reduction results better query performance
														cell physical IO interconnect bytes                        266MB	<-- data returned by smart scan
														cell physical IO interconnect bytes returned by smart scan 266MB
														cell physical IO bytes eligible for predicate offload      1667MB	<-- offloaded to exadata
												<-- 00.00.01.86  ARCHIVE
														cell physical IO interconnect bytes                        239MB	<-- data returned by smart scan
														cell physical IO interconnect bytes returned by smart scan 239MB
														cell physical IO bytes eligible for predicate offload      1297MB	<-- offloaded to exadata
}}}
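The segment sizes in the USER_SEGMENTS listing give the compression ratios directly; a quick arithmetic check (sizes in MB as shown above; small rounding differences from the demo's quoted figures are expected):

```shell
# Plain arithmetic on the segment sizes from the listing, no database needed.
uncompressed_mb=8850   # MYCUSTOMERS
query_mb=1673          # MYCUST_QUERY
archive_mb=1301        # MYCUST_ARCHIVE
q_ratio=$(awk -v u="$uncompressed_mb" -v c="$query_mb" 'BEGIN { printf "%.1f", u/c }')
a_ratio=$(awk -v u="$uncompressed_mb" -v c="$archive_mb" 'BEGIN { printf "%.1f", u/c }')
echo "QUERY ratio: $q_ratio  ARCHIVE ratio: $a_ratio"
```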
<<<
Exadata and Database Machine Version 2 Series - 19 of 25: Index Elimination with Exadata        Demo    19-Sep-10       8 mins http://goo.gl/T0SFq
<<<
{{{

shows how to make an index invisible so that you can test the effect on your queries without actually dropping the index

set timing on
set autotrace on explain
select avg(cust_credit_limit) from customers
where cust_id between 2000000 and 2500000;		<-- test query, 15.09 seconds elapsed.. shows index range scan 

alter index customers_pk invisible;			<-- makes index invisible, and not used by optimizer for queries

select status from user_constraints 
where constraint_name = 'CUSTOMERS_PK';		<-- ENABLED and associated with PK constraint, 
												note that even though invisible the associated constraint is still ENABLED
												
select avg(cust_credit_limit) from customers
where cust_id between 2000000 and 2500000;		<-- with invisible index, 23.99 seconds elapsed, and uses SMART SCAN!

alter index customers_pk visible;	<-- makes it visible
}}}
<<<
Exadata and Database Machine Version 2 Series - 20 of 25: Database Machine Configuration Example using Configuration Worksheet  Demo    19-Sep-10       14 mins http://goo.gl/cXgKu
<<<
{{{
.
}}}
<<<
Exadata and Database Machine Version 2 Series - 21 of 25: Migrating to Database Machine Using Transportable Tablespaces Demo    19-Sep-10       14 mins http://goo.gl/otDOF
<<<
{{{
this demo shows how to use RMAN in conjunction with TTS to migrate data from a big endian platform to exadata

-- TTS dumps and metadata are created.. in a real-world scenario, you must dump the files to a DBFS!!!

-- The EXADATA is LITTLE ENDIAN!!!

select d.platform_name, endian_format
from v$transportable_platform tp, v$database d
where tp.platform_name = d.platform_name; 

RMAN> convert datafile '/home/st01/TTS/soe_TTS_AIX.dbf'
to platform="Linux x86 64-bit"
from platform="AIX-Based Systems (64-bit)"
parallelism=1
format  '+ST01DATA';		<-- this converts from big to little endian and loads the converted file into ASM

-- For TTS to work, the same schema must pre-exist in the destination database

create user soe identified by soe account unlock; 
grant connect,resource to soe; 

create directory tts as '/home/st01/TTS';		<-- creates a directory object that houses the TTS files

impdp system dumpfile=expSOE_TTS.dmp directory=tts logfile=imp_SOE.log transport_datafiles='+ST01DATA/st01/datafile/soe.268.727217185'	<-- Data Pump to import TTS metadata

alter tablespace soe read write;
}}}
<<<
Exadata and Database Machine Version 2 Series - 22 of 25: Bulk Data Loading with Database Machine       Demo    19-Sep-10       20 mins http://goo.gl/KFWyu
<<<
{{{

-- configure DBFS!!! best practice is put it on a separate database

create bigfile tablespace dbfs datafile '+ST01DATA' size 10G;

grant create session, create table, create procedure, dbfs_role to dbfs; 	<-- should be installed in a dedicated schema

mkdir DBFS		<-- this will be the filesystem mount point

cd $ORACLE_HOME/rdbms/admin
sqlplus dbfs/dbfs
@dbfs_create_filesystem_advanced.sql dbfs st01dbfs nocompress nodeduplicate noencrypt non-partition		<-- this creates
																					the database objects for the dbfs store
																					1st - Tablespace where DBFS store is created
																					2nd - name of the DBFS store
																					3,4,5,6 - whether or not to enable the various features
																					typically it is recommended to leave the advanced features
																					DISABLED for a DBFS store that is used to stage data files
																					for BULK DATA loading
																					
echo dbfs > passwd.txt

nohup $ORACLE_HOME/bin/dbfs_client dbfs@st01 -o allow_other,direct_io /home/st01/DBFS < passwd.txt &		<-- dbfs_client has a mount interface that utilizes the FUSE kernel module
																										to implement a file system mount.
																										dbfs_client receives standard file system calls from FUSE and translates them 
																										into calls to the DBFS PL/SQL API
																										
ps -ef | grep dbfs_client

df -k 

cp CSV/customers.csv DBFS/st01dbfs/			<-- transfer files to staging area
cd DBFS/st01dbfs/
ls -l 
head customers.csv

sqlplus "/ as sysdba"
create directory staging as '/home/st01/DBFS/st01dbfs';
grant read, write on directory staging to sh;			<-- create directory object which references to the DBFS

connect sh/sh

create table ext_customers
( 
	customer_id		number(12),
	cust_first_name	varchar2(30),
	cust_last_name	varchar2(30),
	nls_language	varchar2(3),
	nls_territory	varchar2(30),
	credit_limit	number(9,2),
	cust_email		varchar2(100),
	account_mgr_id	number(6)
)
organization external
(
	type oracle_loader
	default directory staging
	access parameters
	(
		records delimited by newline
		badfile staging:'custxt%a_%p.bad'
		logfile staging:'custxt%a_%p.log'
		fields terminated by ',' optionally enclosed by '"'
		missing field values are null
		(
			customer_id, cust_first_name, cust_last_name, nls_language,
			nls_territory, credit_limit, cust_email, account_mgr_id
		)
	)
	location ('customers.csv')
)
parallel
reject limit unlimited;

select count(*) from ext_customers;			<-- query the external table, it is queried in parallel!!

create table loaded_customers
as select * from ext_customers;			<-- actual data loading!!!

fusermount -u /home/st01/DBFS			<-- to unmount!!!
df -k 
ps -ef | grep dbfs_client
}}}
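The external table above expects comma-separated rows, optionally quoted; a couple of hypothetical rows in that layout (names and emails invented) to stage into the DBFS directory:

```shell
# Write two sample rows matching the ext_customers column order:
# customer_id, cust_first_name, cust_last_name, nls_language, nls_territory,
# credit_limit, cust_email, account_mgr_id
cat > customers.csv <<'EOF'
1,John,Smith,US,AMERICA,5000.00,"john.smith@example.com",101
2,Ana,Lopez,ES,SPAIN,7500.50,"ana.lopez@example.com",102
EOF
grep -c '' customers.csv     # count the staged rows
```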
<<<
Exadata and Database Machine Version 2 Series - 23 of 25: Backup Optimization Using RMAN and Exadata    Demo    19-Sep-10       15 mins http://goo.gl/q5Dz8
<<<
{{{

alter database enable block change tracking;

configure device type disk parallelism 2;

backup as backupset incremental level 0 tablespace sh;	<-- full backup of the SH tablespace

list backup;	<-- 90GB 00:04:17 elapsed

select a.name, sum(b.value/1024/1024) MB
from v$sysstat a, v$sesstat b, v$session c
where a.statistic# = b.statistic# 
and b.sid = c.sid
and upper(c.program) like 'RMAN%' 
and (a.name in 
		('physical read total bytes',
		'physical write total bytes',
		'cell IO uncompressed bytes')
		or a.name like 'cell phy%')
group by a.name;					<-- at Level 0 no offloading

-- do a massive update on the table 

backup as backupset incremental level 1 tablespace sh;

list backup; 	<-- 944KB

select a.name, sum(b.value/1024/1024) MB
from v$sysstat a, v$sesstat b, v$session c
where a.statistic# = b.statistic# 
and b.sid = c.sid
and upper(c.program) like 'RMAN%' 
and (a.name in 
		('physical read total bytes',
		'physical write total bytes',
		'cell IO uncompressed bytes')
		or a.name like 'cell phy%')
group by a.name;					<-- at Level 1 very significant offloading!!! BCT also helped: instead of reading 90GB, only 488MB was read.
										Smart scan also kicked in to optimize RMAN reads, so instead of returning 454 MB of data to RMAN
										for further processing.. only 12.5MB was returned 
										
										cell physical IO bytes eligible for predicate offload	454MB
										cell physical IO interconnect bytes						51MB
										cell physical IO interconnect bytes returned by smart scan .89MB
										physical write total bytes									12.49MB
										physical read total bytes									487MB
										
select file#, incremental_level, datafile_blocks, blocks, blocks_read, blocks_skipped_in_cell 
from v$backup_datafile; 			<-- BLOCKS_SKIPPED_IN_CELL is another good metric for backup optimization!
}}}
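The block change tracking saving quoted above (488MB read instead of 90GB) works out like this:

```shell
# Figures from the listing above: block change tracking let the level-1
# backup read ~488 MB instead of the full ~90 GB (92160 MB).
full_mb=92160
read_mb=488
saved_pct=$(( (full_mb - read_mb) * 100 / full_mb ))
echo "read I/O reduced by ${saved_pct}%"
```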
<<<
Exadata and Database Machine Version 2 Series - 24 of 25: Recovery Optimization Using RMAN and Exadata  Demo    19-Sep-10       12 mins http://goo.gl/TOl2o
<<<
{{{
rm sh.dbf

restore tablespace sh;

select a.name, sum(b.value/1024/1024) MB
from v$sysstat a, v$sesstat b, v$session c
where a.statistic# = b.statistic# 
and b.sid = c.sid
and upper(c.program) like 'RMAN%' 
and (a.name in 
		('physical read total bytes',
		'physical write total bytes',
		'cell IO uncompressed bytes')
		or a.name like 'cell phy%')
group by a.name;					<-- restore... cell physical IO bytes saved during optimized RMAN file restore 1753MB
										when RMAN restores a file, any blocks in the file that have not been altered since the 
										file was first formatted can be re-created by Exadata. This optimization removes the need
										to transport empty formatted blocks across the storage network. Rather, RMAN is able to instruct
										Exadata to conduct the IO on its behalf in the same way that optimized file creation is performed.
										
										cell physical IO bytes eligible for predicate offload	1753MB
										cell physical IO interconnect bytes						398395MB
										cell physical IO interconnect bytes returned by smart scan 0MB
										cell physical IO bytes saved during optimized RMAN file restore 1753MB
										physical write total bytes									154479MB
										physical read total bytes									92939.36MB
}}}
<<<
Exadata and Database Machine Version 2 Series - 25 of 25: Using the distributed command line utility (dcli)     Demo    19-Sep-10       14 mins http://goo.gl/3vAUN
<<<
{{{

-- configure environment
ORACLE_SID=ST01
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
cat << END > mycells 
exa9cel01
exa9cel02
END

ssh-keygen -t dsa 			<-- if doing dcli 1st time, then create ssh key
dcli -g mycells -k 			<-- establish SSH equivalence


-- usage
dcli -g mycells cellcli -e list cell				<--  list cells, BASIC CELLCLI COMMANDS
dcli -g mycells df -k								<-- list filesystems, OS COMMANDS
dcli -g mycells cellcli -e list iormplan			<-- can also be used for configuration changes across servers
dcli -g mycells cellcli -e alter iormplan active
dcli -g mycells cellcli -e alter iormplan inactive

dcli -g mycells "cellcli -e list metriccurrent where name like \'CD_IO_RQ_W_.?.?\' and metricobjectname like \'CD.*\'"	<-- monitor across cells
dcli -g mycells -r '.*CD_0.*' "cellcli -e list metriccurrent where name like \'CD_IO_RQ_W_.?.?\' and metricobjectname like \'CD.*\'"	<-- monitor across cells with regex exclude
dcli -g mycells "cellcli -e list metriccurrent where name like \'CD_IO_RQ_W_.?.?\' and metricobjectname like \'CD.*\' | grep CD_00"		<-- with GREP

dcli -g mycells -f testfile.txt 		<-- distributed file transfer -f option
dcli -g mycells ls -l testfile.txt		<-- confirm the file was copied to each cell

cat << END > st01script.sh
HST=\`hostname -s\`
DTE=\`date\`
echo -n \`cat testfile.txt\`
echo " on ${HST} at ${DTE}."
END

chmod +x st01script.sh

dcli -g mycells -x st01script.sh			<-- -x option causes the associated file to be copied to and run on the target system
											a filename with .SCL extension is run by the CELLCLI UTILITY
											a filename with different extension is run by the OPERATING SYSTEM SHELL on the target server
											the file is copied to the default home directory on the target server
}}}
<<<
''-- usage tracking''
{{{
$ cat obi_reports.sql
set lines 200
set echo off
set feedback off
col "Elapse Time(Min)" form 999,999
col "Elapse Time(Hr)"  form 999.9
col "Total Row Ct."    form 999,999,999
col "Exec Ct."         form 999,999,999
col "SQL Ct."          form 999,999,999
col "Db Time(Sec)"     form 999,999,999
col "Db Time(Min)"     form 999,999
col "Db Time(Hr)"      form 999.9
col "Total Row Ct."    form 999,999,999,999

SELECT to_char(to_date(start_dt, 'dd-MON-yy'), 'yyyy-mm-dd') "Exec Date",
       count(*)                                              "Exec Ct",
       sum(row_count)                                        "Row Ct",
       sum(total_time_sec/60)                                "Elapse Time(Min)",
       sum(total_time_sec/60/60)                             "Elapse Time(Hr)",
       sum(num_db_query)                                     "SQL Ct.",
       sum(cum_db_time_sec)                                  "Db Time(Sec)",
       sum(cum_db_time_sec/60)                               "Db Time(Min)",
       sum(cum_db_time_sec/60/60)                            "Db Time(Hr)",
       sum(cum_num_db_row)                                   "Total Row Ct."
  FROM OBIUSAGE.S_NQ_ACCT
 WHERE 1=1
   AND QUERY_SRC_CD NOT IN ('ValuePrompt','DashboardPrompt')
   AND cache_ind_flg = 'N'
   AND presentation_name = 'CBRE Financials - GL Profit and Loss'
   AND start_dt >= '07-MAY-2012'
 GROUP BY start_dt
 ORDER BY 1;

}}}



! OBIEE workload separation
{{{
IF contains(lower(trim([Module])),'bip')=true THEN 'BIP'
ELSEIF contains(lower(trim([Module])),'odi')=true THEN 'ODI'
ELSEIF contains(lower(trim([Module])),'nqs')=true THEN 'nqsserver'
ELSE 'OTHER' END
}}}
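The same bucketing logic, sketched as a shell helper (the `classify` function name is assumed) for ad-hoc use against module strings:

```shell
# Map a module string to a workload bucket, mirroring the formula above:
# case-insensitive substring checks for bip, odi, then nqs; else OTHER.
classify() {
  m=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$m" in
    *bip*) echo "BIP" ;;
    *odi*) echo "ODI" ;;
    *nqs*) echo "nqsserver" ;;
    *)     echo "OTHER" ;;
  esac
}
classify " BIP Publisher "
classify "nqsserver@obihost"
classify "sqlplus"
```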







http://www.rittmanmead.com/2012/03/an-obiee-11g-security-primer-introduction/
http://www.rittmanmead.com/2012/03/obiee-11g-security-week-row-level-security/
http://www.rittmanmead.com/2012/03/obiee-11g-security-week-subject-area-catalog-and-functional-area-security-2/
http://www.rittmanmead.com/2012/03/obiee-11g-security-week-understanding-obiee-11g-security-application-roles-and-application-policies/
http://www.rittmanmead.com/2012/03/obiee-11g-security-week-managing-application-roles-and-policies-and-managing-security-migrations-and-deployments/
http://www.rittmanmead.com/2012/03/obiee-11g-security-week-connecting-to-active-directory-and-obtaining-group-membership-from-database-tables/
http://www.rittmanmead.com/2003/09/securing-data-warehouses-with-oid-advanced-security-and-vpd/
http://www.rittmanmead.com/2007/05/obiee-and-row-level-security/
https://blogs.oracle.com/BI4success/entry/high_level_flow_of_obiee
https://blogs.oracle.com/obieeTips/entry/aix_checklist_for_stable_obiee

https://blogs.oracle.com/obieeTips/entry/obiee_memory_usage



! sizing 
OBIEE 11g and 12c: Architectural Deployment Capacity Planning Guide (Doc ID 1323646.1)
NOTE:1333049.1 - OBIEE 11g Infrastructure Performance Tuning Guide
NOTE:2106183.1 - OBIEE 12c: Best Practices Guide for Infrastructure Tuning Oracle® Business Intelligence Enterprise Edition 12c (12.2.1)
NOTE:1323646.1 - OBIEE 11g | 12c: Architectural Deployment Capacity Planning Guide
NOTE:1611188.1 - OBIEE: Load Testing OBIEE Using Oracle Load Testing (OLT) 12.x
https://blogs.oracle.com/cealteam/obiee-1111-tuning-guide-script-v1
https://blogs.oracle.com/proactivesupportepm/obiee-tuning-guide-whitepaper-update-available
NOTE:2087801.1 - OBIEE 12c: How To Configure The External Subject Area (XSA) Cache For Data Blending| Mashup And Performance


! obiee active data guard
https://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-biee-activedataguard-1-131999.pdf


https://www.slideshare.net/bogloap/it-2020-technology-optimism-an-oracle-scenario
Upgrading OCFS2 - 1.4
http://www.idevelopment.info/data/Oracle/DBA_tips/OCFS2/OCFS2_1.shtml
http://www.idevelopment.info/data/Oracle/DBA_tips/OCFS2/OCFS2_5.shtml	
https://blogs.oracle.com/observability/post/announcing-support-for-exadata-monitoring-in-performance-hub-v2
https://blogs.oracle.com/cloud-infrastructure/post/available-now-exadata-insights-in-oracle-cloud-infrastructure-operations-insights

https://docs.oracle.com/en-us/iaas/operations-insights/doc/operations-insights.html
https://docs.oracle.com/en-us/iaas/operations-insights/doc/analyze-exadata-resources.html


<<<
Use cases
Forecast resource requirements

Using the Capacity Planning app for Exadata systems, you can perform the following analyses:

    Enterprise-wide analysis of resource utilization, capacity planning for Exadata

    Improve resource utilization by identifying under- and over-utilized resources

    Identify Exadata systems projected to reach high utilization

    Identify total lead time to expand capacity through machine learning-based forecast, based on long-term historic data to project future resource growth

    Use forecasting and capacity planner functionality to ensure that Exadata satisfies future needs of databases being consolidated

    Estimate usage after 12 months

<<<

<<<
Consolidate Oracle databases on Exadata

You can inspect details of individual Exadata systems and look at performance characteristics of all databases, hosts, and storage servers for the following capabilities:

    Identify top databases by the resource type CPU, memory, I/O, and storage

    Identify top hosts by the resource type CPU and memory

    Identify top Exadata storage servers by storage, I/O, and throughput

    Determine which Exadata hosts satisfy resource requirements

    Find low resource utilization servers

    Plan using performance history and seasonality

    Ensure that service levels can be met over time

<<<
Support of Oracle Transparent Data Encryption (Oracle TDE)
https://help.sap.com/viewer/4b99f675d74f4990b75a8630869a0cd2/CURRENT_VERSION/en-US/bc2f528da0ed423bbaf6aee70b633c01.html
<<<
my experience: 
* automatically configured in X8M
* CDBs are set to use_large_pages=ONLY 
<<<
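With use_large_pages=ONLY, the instance refuses to start unless the whole SGA fits in hugepages, so vm.nr_hugepages has to cover the SGA. A minimal sketch of the sizing arithmetic, assuming a hypothetical 32 GB SGA and the default 2 MB hugepage size on x86-64 Linux (check Hugepagesize in /proc/meminfo on the actual host):

```shell
# Estimate vm.nr_hugepages needed for a given SGA size.
sga_mb=32768          # hypothetical 32 GB SGA
hugepage_kb=2048      # default Hugepagesize on most x86-64 hosts
pages=$(( sga_mb * 1024 / hugepage_kb ))
echo "$pages"
```

Compare the result against HugePages_Total in /proc/meminfo before bouncing the instance.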

https://blog.pythian.com/hugepages-for-oracle-database-in-oracle-cloud/
OCI-Classic to OCI IaaS Migration
IaaS Migration Tools
https://cloud.oracle.com/iaas/training/slides/cloud_migration_tools_300.pdf
''OCM preparation exams'' http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=501

http://translate.google.com/translate?langpair=zh-CN%7Cen&hl=zh-CN&ie=UTF8&u=http://www.oracledatabase12g.com/archives/11g-ocm-upgrade-exam-tips.html

http://blogs.oracle.com/certification/entry/0372
Oracle Database 11g Administrator http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=198
Oracle Database 11g Certified Master Upgrade Exam http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=41&p_exam_id=11gOCMU
Oracle Database 11g Certified Master Exam http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=41&p_exam_id=11gOCM

http://www.pythian.com/news/34911/how-to-prepare-to-oracle-database-11g-certified-master-exam/

http://wenku.baidu.com/view/452d880a6c85ec3a87c2c526.html

http://laurentschneider.com/wordpress/2012/09/ocm-11g-upgrade.html

http://gavinsoorma.com/2011/02/passing-the-11g-ocm-exam-some-thoughts/

mclean http://goo.gl/QTFvX

kamran http://kamranagayev.com/2013/08/16/how-to-become-an-oracle-certified-master-my-ocm-journey/

this guy http://jko-licorne.com/oracle/
The Cert Path
http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=198&p_org_id=&lang=

Upgrade program
http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=44

Oracle Database 11g: New Features for Administrators
http://education.oracle.com/pls/web_prod-plq-dad/show_desc.redirect?dc=D50081GC10&p_org_id=&lang=&source_call=

1Z0_050 - Oracle Database 11g: New Features for Administrators
http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=41&p_exam_id=1Z0_050
Release 2
http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=609&p_org_id=1001&lang=US&get_params=dc:D50081GC20,p_preview:N
Oracle Database 12c: New Features for Administrators 
http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=609&get_params=dc:D77758GC10,p_preview:N
http://www.databasejournal.com/features/oracle/article.php/3630231/Oracle-RAC-Administration---Part-4-Administering-the-Clusterware--Components.htm
http://blogs.oracle.com/AlejandroVargas/archives.html
http://blogs.oracle.com/AlejandroVargas/2007/05/rac_with_asm_on_linux_crash_sc_2.html
http://blogs.oracle.com/AlejandroVargas/2007/05/rac_with_asm_on_linux_crash_sc_3.html
http://onlineappsdba.com/index.php/2009/06/09/backup-and-recovery-of-oracle-clusterware/
http://el-caro.blogspot.com/2006/07/ocr-backups.html
http://deepthinking99.wordpress.com/2008/09/20/recover-the-corruption-ocr/
http://askdba.org/weblog/2008/09/how-to-recover-from-corrupted-ocr-disk/
http://www.oracle-dba-database-administration.com/backup-recover-OCR.html
http://achatzia.blogspot.com/2007/06/scripts-for-rac-backup.html
http://www.pythian.com/news/832/how-to-recreate-the-oracle-clusterware-voting-disk/
http://www.databasejournal.com/features/oracle/article.php/3626471/Oracle-RAC-Administration---Part-3-Administering-the-Clusterware-Components.htm
http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle10gRAC/CLUSTER_65.shtml#Backup the Voting Disk

What's in a voting disk http://orainternals.wordpress.com/2010/10/29/whats-in-a-voting-disk/
OCR & Voting Disk on ASM http://blog.ronnyegner-consulting.de/2010/10/20/oracle-11g-release-2-asm-best-practises/

How to restore Oracle Grid Infrastructure OCR and vote disk on ASM
http://oracleprof.blogspot.com/2011/09/after-reading-book-about-oracle-rac-see.html


Placement of Voting disk and OCR Files in Oracle RAC 10g and 11gR1 [ID 293819.1]
OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE) [ID 428681.1]
http://noriegaaoracleexpert.blogspot.com/2017/08/demythifying-oracle-database-appliance.html
http://drsalbertspijkers.blogspot.com/2015/04/oracle-database-appliance-x5-2.html
https://blog.pythian.com/oracle-database-appliance-storage-performance-part-1/
https://blog.pythian.com/insiders-guide-to-oda-performance/
Database Sizing for Oracle Database Appliance https://docs.oracle.com/cd/E22693_01/doc.12/e55580/sizing.htm#CHDCCDGD
https://community.oracle.com/blogs/heemasatapathy/2018/07/19/x52-oracle-database-appliance-system-io-assessment
https://www.doag.org/formes/pubfiles/7519722/2015-K-INF-Tammy_Bednar-Deep_Dive_into_Oracle_Database_Appliance_Architecture-Manuskript.pdf
https://www.doag.org/formes/pubfiles/7519746/2015-K-INF-Tammy_Bednar-Deep_Dive_into_Oracle_Database_Appliance_Architecture-Praesentation.pdf
http://www.nocoug.org/download/2013-05/NoCOUG_201305_ODA_IO_and_Performance_Architecuture.pdf

oracle database appliance flash disk group https://www.google.com/search?client=firefox-b-1-d&q=oracle+database+appliance+flash+disk+group
Oracle Database Appliance Software Configuration Defaults https://docs.oracle.com/cd/E22693_01/doc.12/e55580/referapp.htm
Database Disk Group Sizes for Oracle Database Appliance https://docs.oracle.com/cd/E68623_01/doc.121/e68637/GUID-FE280580-F361-494F-B377-10137A6BEA34.htm#CMTAR858

DBFC - Using SSDs to Solve I/O Bottlenecks https://learning.oreilly.com/library/view/oracle-database-problem/9780134429267/ch17.html
Using Oracle Database Appliance SSDs https://docs.oracle.com/cd/E22693_01/doc.12/e55580/dbadmin.htm#CACEHIJJ , https://docs.oracle.com/cd/E64530_01/doc.121/e64200/referapp.htm#CEGBFHFB
Flash Cache in ODA x5-2 Virtual platform - https://community.oracle.com/thread/4195281?parent=MOSC_EXTERNAL&sourceId=MOSC&id=4195281
https://blogs.oracle.com/emeapartnerweblogic/what-you-need-to-know-about-the-new-oda-x5-2-by-simon-haslam

''Configure and Deploy Oracle Database Appliance'' http://apex.oracle.com/pls/apex/f?p=44785:24:2875967671743702::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5903,2
http://www.evernote.com/shard/s48/sh/372c667d-d4d0-4a51-a505-f7010f124f29/05ee84e49642f4d556da907c7d212e35

''Check out the offline configurator'' http://blogs.oracle.com/eSTEP/entry/oda_offline_configurator_for_demo

''Demo: How to Set Up ILOM on the Oracle Database Appliance'' http://download.oracle.com/technology/server-storage/ilom/ILOM-Setup-1-5-12.mp4
Pointed by Bjoern Rost, @karlarao How do you reduce planned downtime with #ODA when rolling patches are not (yet?) supported

Well that really sucks because you have to patch through the ''appliance manager'', as per this doc http://download.oracle.com/docs/cd/E22693_01/doc.21/e22692/undrstd.htm#CIHEFJBA — it doesn't support rolling patching yet, and they haven't issued any new patches for it yet, and that's the other problem :p 

''hmm'' "At the time of this release, Oracle Appliance Manager Patching does not support rolling patching. The entire system must be taken down and both servers patched before restarting the database."
''even worse is this note:''  Caution: Only patch Oracle Database Appliance with an Oracle Database Appliance patch bundle. Do not use Oracle Grid Infrastructure, Oracle Database patches, or any Linux distribution patch with an Oracle Appliance. If you use non-Oracle Appliance patches with an Oracle Appliance using Opatch or an equivalent tool, then the Oracle Database Appliance inventory is not updated, and future Oracle Appliance patch updates cannot be completed.

Oracle Database Appliance Firmware Page [ID 1360299.1]


http://www.pythian.com/news/34715/migrating-your-10g-database-to-oda-with-minimal-downtime/
http://ermanarslan.blogspot.com/2014/06/ovm-oakcli-command-examples.html
{{{
OVM -- oakcli command examples
To import a Template:
oakcli import vmtemplate EBS_12_2_3_PROD_DB -assembly /OVS/EBS/Oracle-E-Business-Suite-PROD-12.2.3.ova -repo vmtemp2 -node 0

To list the available Templates:
oakcli show vmtemplate

To list the Virtual Machines:
oakcli show vm

To list the repositories:
oakcli show repo

To start a virtual machine:
oakcli start vm EBS_12_2_3_VISION

To create a Repository (size in GB by default):
oakcli create repo vmrepo1 -dg data -size 2048

To configure a Virtual Machine (CPU, memory, etc.):
oakcli configure vm EBS_12_2_3_PROD_APP -vcpu 16 -maxvcpu 16 
oakcli configure vm EBS_12_2_3_PROD_APP -memory 32768M -maxmemory 32768M

To open a console for a virtual machine (VNC required):
oakcli show vmconsole EBS_12_2_3_VISION

To create a virtual machine from a template:
oakcli clone vm EBS_12_2_3_PROD_APP -vmtemplate EBS_12_2_3_PROD_APP -repo vmrepo1 -node 1
}}}
Oracle Database Appliance - Steps to Generate a Key via MOS to change your CORE Count and apply this Core Key (Doc ID 1447093.1)
ODA FAQ : Understanding the Oracle Database Appliance Core Key Generation usage, common questions and problems ( FAQ ) (Doc ID 1597084.1)

{{{
# /opt/oracle/oak/bin/oakcli show core_config_key 
Host's serialnumber = 01234AB56C7 
Configured Cores = 20

Note: The CPUs in the Database Appliance are hyper-threaded, so when verifying the number of CPU cores with the cpuinfo command, you will see two times (2x) the number of cores configured per server. For example, in this note we configured 10 cores per server, for a total of 20 cores for the appliance, so the cpuinfo command will return the following:

# cat /proc/cpuinfo | grep -i processor 
processor : 0 
processor : 1 
processor : 2 
processor : 3 
processor : 4 
processor : 5 
processor : 6 
processor : 7 
processor : 8 
processor : 9 
processor : 10 
processor : 11 
processor : 12 
processor : 13 
processor : 14 
processor : 15 
processor : 16 
processor : 17 
processor : 18 
processor : 19
...
...    -- The maximum number of cores available is HW version dependent
}}}
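The note's arithmetic can be sketched directly: with hyper-threading, /proc/cpuinfo lists twice the configured core count per server.

```shell
# 10 cores configured per server, two hardware threads per core:
cores_per_server=10
logical=$(( cores_per_server * 2 ))   # what /proc/cpuinfo lists per node
echo "$logical"                       # matches processors 0-19 in the listing above
```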

Certified Compilers
  	Doc ID: 	Note:43208.1
  	
Client / Server / Interoperability Support Between Different Oracle Versions
  	Doc ID: 	Note:207303.1
  	
Oracle Server (RDBMS) Releases Support Status Summary
  	Doc ID: 	Note:161818.1
  	
Is Oracle10g Instant Client Certified With Oracle 9i or Oracle 8i Databases
  	Doc ID: 	Note:273972.1
  	
Client Application Fails After Upgrade of Client Libraries
  	Doc ID: 	Note:268174.1
  	
Basic OCI8 Testcase
  	Doc ID: 	Note:277543.1
  	
Basic OCCI Testcase
  	Doc ID: 	Note:277544.1
  	
OCI/OCCI/Precompilers Testcase FAQ
  	Doc ID: 	Note:271406.1
  	
Where do I Find OCCI Support for Microsoft Visual Studio 2005 / Microsoft Visual C++ 8.0?
  	Doc ID: 	Note:362644.1
  	
Which OCI Functions Where Introduced In What Release Starting With ORACLE RDBMS 8.0
  	Doc ID: 	Note:301983.1
  	
On What Unix/Linux OS are Oracle ODBC Drivers Available ?
  	Doc ID: 	Note:396635.1
  	
Supported ODBC Configurations
  	Doc ID: 	Note:66403.1
  	
ODBC COMPATABILITY ISSUES
  	Doc ID: 	Note:1027811.6
  	
"ORACLE CLIENT NETWORKING COMPONENTS WERE NOT FOUND" w/CONFIGURING ODBC
  	Doc ID: 	Note:1014690.102
  	
Oracle® Database Client Certification Notes 10g Release 2 (10.2.0.3) for Microsoft Windows Vista
  	Doc ID: 	Note:415166.1
  	
Can Instant Client 10g Run On Windows Vista?
  	Doc ID: 	Note:459507.1
  	
Installation Instructions for Oracle ODBC Driver Release 9.2.0.5.4
  	Doc ID: 	Note:290886.1
  	
ODBC and Oracle10g Supportability
  	Doc ID: 	Note:273215.1
  	
How To Implement Expiration Of Passwords Using ODBC
  	Doc ID: 	Note:268240.1
  	
Using ODBC From a Windows NT Service
  	Doc ID: 	Note:1016672.4
  	
ODBC Compatibility Matrix for the Macintosh Platform
  	Doc ID: 	Note:76570.1
  	
Connection from ODBC Test Fails With TNS-12535
  	Doc ID: 	Note:170795.1
  	
Unable to Use SET SAVEPOINT While Using ODBC Application
  	Doc ID: 	Note:163986.1
  	
ODBC ARCHITECTURE FOR ORACLE DATABASE
  	Doc ID: 	Note:106110.1
  	
Setting up the Oracle ODBC Driver and DSN on Windows 95/98/NT Client
  	Doc ID: 	Note:107364.1



Bug 3564573 - ORA-1017 when 10g client connects to 8i/9i server with EBCDIC <-> ASCII connection
  	Doc ID: 	Note:3564573.8

Bug 3437884 - 10g client cannot connect to 8.1.7.0 - 8.1.7.3 server
  	Doc ID: 	Note:3437884.8

ALERT: Connections from Oracle 9.2 to Oracle7 are Not Supported
  	Doc ID: 	Note:207319.1

Database, FMW, and OCS Software Error Correction Support Policy
  	Doc ID: 	Note:209768.1







Oracle Database Server support Matrix for Windows XP / 2003 64-Bit (Itanium)
  	Doc ID: 	Note:236183.1

Oracle Database Server and Networking Patches for Microsoft Platforms
  	Doc ID: 	Note:161549.1

"An Unsupported Operation was Attempted" Error When Trying to Create DSN With ODBC 10.2.0.3.0
  	Doc ID: 	Note:403021.1

Unable to Connect With Microsoft ODBC Driver for Oracle and 64-Bit Oracle Client
  	Doc ID: 	Note:417246.1

ODBC BASIC OVERVIEW
  	Doc ID: 	Note:1003717.6



http://support.microsoft.com/kb/190475
http://support.microsoft.com/kb/244661
http://support.microsoft.com/kb/259959/
http://support.microsoft.com/kb/306787/



  	
  	
Install ODI
http://avdeo.com/2009/01/19/installing-oracle-data-integrator-odi/
Oracle® Fusion Middleware
Integrating Big Data with Oracle Data Integrator
12c (12.2.1.2.6)
https://docs.oracle.com/middleware/122126/odi/odi-big-data/ODIBD.pdf
http://download.oracle.com/docs/cd/E15985_01/index.htm
http://download.oracle.com/docs/cd/E15985_01/doc.10136/release/ODIRN.pdf

How To Set Up ODI With Mainframes And Mid-Range Servers? [ID 423769.1]

Performance Optimization Strategies For ODI [ID 423726.1]

Compatibility Of Non Transactional Databases With ODI [ID 424454.1]

Version Compatibility Between ODI Components [ID 423825.1]

Where Are The Certification Matrices For ODI 10g and 11g Which Indicate Platform And Database Compatibilities [ID 424527.1]

What Are The Best Practices When Installing Oracle Data Integrator ? [ID 424598.1]

Oracle Data Integrator/Sunopsis, Releases and Patches [ID 456313.1]
http://www.toadworld.com/platforms/oracle/w/wiki/11469.oracle-exadata-deployment-assistance-oeda
also on ch8 of ExaBook2ndEd 
https://community.oracle.com/message/12572401

install xterm!!!
desktopserver kernel http://www.audentia-gestion.fr/oracle/uek-for-linux-177034.pdf , https://oss.oracle.com/pipermail/el-errata/2011-August/002251.html , https://oss.oracle.com/el5/docs/RELEASE-NOTES-U7-en.html
<<<
IO affinity

    IO affinity ensures processing of a completed IO is handled by the same CPU that initiated the IO. It can have a fairly large impact on performance, especially on large NUMA machines. IO affinity is turned on by default, but it can be controlled via the tunable in /sys/block/xxx/queue/rq_affinity. For example, the following will turn IO affinity on:

    echo 1 > /sys/block/sda/queue/rq_affinity
<<<
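For reference, rq_affinity takes three values: 0 disables affinity, 1 completes the I/O on a CPU in the same group as the submitter, and 2 forces completion on the exact CPU that issued the request (the value the Pure Storage rules below use). A sketch of toggling it — the real target is the /sys node, but a temp file stands in here so the commands run anywhere:

```shell
# rq=/sys/block/sda/queue/rq_affinity on a real host;
# a temp file stands in for the sysfs node in this sketch.
rq=/tmp/rq_affinity_demo
echo 2 > "$rq"     # 2 = complete on the exact submitting CPU
cat "$rq"
```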
newer kernels https://docs.oracle.com/en/operating-systems/uek/ , https://www.oracle.com/a/ocom/docs/linux/oracle-linux-ds-1985973.pdf
https://blogs.oracle.com/scoter/oracle-linux-and-unbreakable-enterprise-kernel-uek-releases
https://en.wikipedia.org/wiki/Oracle_Linux#cite_note-57
https://community.oracle.com/tech/apps-infra/discussion/comment/11032808
https://oss.oracle.com/el5/docs/



https://www.oracle.com/technetwork/cn/community/developer-day/3-oracle-linux-2525017-zhs.pdf

https://support.purestorage.com/Solutions/Linux/Linux_Reference/Linux_Recommended_Settings
{{{
# Recommended settings for Pure Storage FlashArray.
# Use noop scheduler for high-performance solid-state storage for SCSI devices
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="noop"
ACTION=="add|change", KERNEL=="dm-[0-9]*", SUBSYSTEM=="block", ENV{DM_NAME}=="3624a937*", ATTR{queue/scheduler}="noop"

# Reduce CPU overhead due to entropy collection
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"
ACTION=="add|change", KERNEL=="dm-[0-9]*", SUBSYSTEM=="block", ENV{DM_NAME}=="3624a937*", ATTR{queue/add_random}="0"

# Spread CPU load by redirecting completions to originating CPU
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"
ACTION=="add|change", KERNEL=="dm-[0-9]*", SUBSYSTEM=="block", ENV{DM_NAME}=="3624a937*", ATTR{queue/rq_affinity}="2"

# Set the HBA timeout to 60 seconds
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{device/timeout}="60"
}}}
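The dm-* rules above key on ENV{DM_NAME} matching the glob 3624a937*, the NAA WWID prefix Pure FlashArray volumes report. A quick sketch of that glob test with a hypothetical WWID — a shell `case` uses the same shell-style pattern syntax that udev's `==` match does:

```shell
wwid=3624a9370abcdef0123456789   # hypothetical Pure volume WWID
case "$wwid" in
  3624a937*) matched=yes ;;      # rule would apply
  *)         matched=no  ;;      # rule skipped
esac
echo "$matched"
```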

Oracle Linux and MySQL TPC-C Optimizations When Implementing the Sun Flash Accelerator F80 PCIe Card
http://www.oracle.com/us/technologies/linux/linux-and-mysql-optimizations-wp-2332321.pdf

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/performance_tuning_guide/index
<<<
rq_affinity
    By default, I/O completions can be processed on a different processor than the processor that issued the I/O request. Set rq_affinity to 1 to disable this ability and perform completions only on the processor that issued the I/O request. This can improve the effectiveness of processor data caching. 
<<<


https://www.google.com/search?source=hp&ei=yurHX-ayNaOGwbkP-Z2gWA&q=OEL+rq_affinity&oq=OEL+rq_affinity&gs_lcp=CgZwc3ktYWIQAzoICAAQsQMQgwE6AggAOggILhCxAxCDAToLCC4QsQMQxwEQowI6DgguELEDEIMBEMcBEKMCOgUIABCxAzoFCC4QsQM6AgguOgsILhDHARCjAhCTAjoICC4QxwEQrwE6CggAELEDEIMBEAo6DQguELEDEMcBEKMCEAo6DgguELEDEMcBEKMCEJMCOggIABCxAxDJAzoICC4QxwEQowI6DgguEMcBEK8BEMkDEJMCOgcIABCxAxAKOgQIABAKOg0ILhCxAxDJAxAKEJMCOgkIABDJAxAWEB46BggAEBYQHjoFCCEQoAE6BwghEAoQoAFQjwhY2LI8YLe1PGgFcAB4AIABsgGIAZ4RkgEEMTQuOZgBAKABAaoBB2d3cy13aXqwAQA&sclient=psy-ab&ved=0ahUKEwjmv5jzg7DtAhUjQzABHfkOCAsQ4dUDCAg&uact=5

-- from http://www.perfvision.com/info/oem.html

{{{
default OEM web port

http://host:1158/em/console/


OEM license 

Database Diagnostics Pack
Automatic Workload Repository
ADDM (Automated Database Diagnostic Monitor)
Performance Monitoring (Database and Host)
Event Notifications: Notification Methods, Rules and Schedules
Event history/metric history (Database and Host)
Blackouts
Dynamic metric baselines
Memory performance monitoring

Database Tuning Pack
SQL Access Advisor
SQL Tuning Advisor
SQL Tuning Sets
Reorganize Objects

Configuration Management Pack
Database and Host Configuration
Deployments
Patch Database and View Patch Cache
Patch staging
Clone Database
Clone Oracle Home
Search configuration
Compare configuration
Policies
}}}
{{{
set arraysize 5000

COLUMN blocksize NEW_VALUE _blocksize NOPRINT
select distinct block_size blocksize from v$datafile;

COLUMN dbid NEW_VALUE _dbid NOPRINT
select dbid from v$database;

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

COLUMN instancenumber NEW_VALUE _instancenumber NOPRINT
select instance_number instancenumber from v$instance;

-- ttitle center 'AWR IO Workload Report' skip 2
set pagesize 50000
set linesize 550

col instname       format a15              heading instname            -- instname
col hostname       format a30              heading hostname            -- hostname
col tm             format a17              heading tm                  -- "tm"
col id             format 99999            heading id                  -- "snapid"
col inst           format 90               heading inst                -- "inst"
col dur            format 999990.00        heading dur                 -- "dur"
col cpu            format 90               heading cpu                 -- "cpu"
col cap            format 9999990.00       heading cap                 -- "capacity"
col dbt            format 999990.00        heading dbt                 -- "DBTime"
col dbc            format 99990.00         heading dbc                 -- "DBcpu"
col bgc            format 99990.00         heading bgc                 -- "BGcpu"
col rman           format 9990.00          heading rman                -- "RMANcpu"
col aas            format 990.0            heading aas                 -- "AAS"
col totora         format 9999990.00       heading totora              -- "TotalOracleCPU"
col busy           format 9999990.00       heading busy                -- "BusyTime"
col load           format 990.00           heading load                -- "OSLoad"
col totos          format 9999990.00       heading totos               -- "TotalOSCPU"
col mem            format 999990.00        heading mem                 -- "PhysicalMemorymb"
col IORs           format 99990.000        heading IORs                -- "IOPsr"
col IOWs           format 99990.000        heading IOWs                -- "IOPsw"
col IORedo         format 99990.000        heading IORedo              -- "IOPsredo"
col IORmbs         format 99990.000        heading IORmbs              -- "IOrmbs"
col IOWmbs         format 99990.000        heading IOWmbs              -- "IOwmbs"
col redosizesec    format 99990.000        heading redosizesec         -- "Redombs"
col logons         format 990              heading logons              -- "Sess"
col logone         format 990              heading logone              -- "SessEnd"
col exsraw         format 99990.000        heading exsraw              -- "Execrawdelta"
col exs            format 9990.000         heading exs                 -- "Execs"
col oracpupct      format 990              heading oracpupct           -- "OracleCPUPct"
col rmancpupct     format 990              heading rmancpupct          -- "RMANCPUPct"
col oscpupct       format 990              heading oscpupct            -- "OSCPUPct"
col oscpuusr       format 990              heading oscpuusr            -- "USRPct"
col oscpusys       format 990              heading oscpusys            -- "SYSPct"
col oscpuio        format 990              heading oscpuio             -- "IOPct"
col SIORs          format 99990.000        heading SIORs               -- "IOPsSingleBlockr"
col MIORs          format 99990.000        heading MIORs               -- "IOPsMultiBlockr"
col TIORmbs        format 99990.000        heading TIORmbs             -- "Readmbs"
col SIOWs          format 99990.000        heading SIOWs               -- "IOPsSingleBlockw"
col MIOWs          format 99990.000        heading MIOWs               -- "IOPsMultiBlockw"
col TIOWmbs        format 99990.000        heading TIOWmbs             -- "Writembs"
col TIOR           format 99990.000        heading TIOR                -- "TotalIOPsr"
col TIOW           format 99990.000        heading TIOW                -- "TotalIOPsw"
col TIOALL         format 99990.000        heading TIOALL              -- "TotalIOPsALL"
col ALLRmbs        format 99990.000        heading ALLRmbs             -- "TotalReadmbs"
col ALLWmbs        format 99990.000        heading ALLWmbs             -- "TotalWritembs"
col GRANDmbs       format 99990.000        heading GRANDmbs            -- "TotalmbsALL"
col readratio      format 990              heading readratio           -- "ReadRatio"
col writeratio     format 990              heading writeratio          -- "WriteRatio"
col diskiops       format 99990.000        heading diskiops            -- "HWDiskIOPs"
col numdisks       format 99990.000        heading numdisks            -- "HWNumofDisks"
col flashcache     format 990              heading flashcache          -- "FlashCacheHitsPct"
col cellpiob       format 99990.000        heading cellpiob            -- "CellPIOICmbs"
col cellpiobss     format 99990.000        heading cellpiobss          -- "CellPIOICSmartScanmbs"
col cellpiobpreoff format 99990.000        heading cellpiobpreoff      -- "CellPIOpredoffloadmbs"
col cellpiobsi     format 99990.000        heading cellpiobsi          -- "CellPIOstorageindexmbs"
col celliouncomb   format 99990.000        heading celliouncomb        -- "CellIOuncompmbs"
col cellpiobs      format 99990.000        heading cellpiobs           -- "CellPIOsavedfilecreationmbs"
col cellpiobsrman  format 99990.000        heading cellpiobsrman       -- "CellPIOsavedRMANfilerestorembs"

SELECT * FROM
( 
  SELECT trim('&_instname') instname, 
         trim('&_dbid') db_id, 
         trim('&_hostname') hostname, 
         s0.snap_id id,
         TO_CHAR(s0.END_INTERVAL_TIME,'MM/DD/YY HH24:MI:SS') tm,
         s0.instance_number inst,
  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
   (((s20t1.value - s20t0.value) - (s21t1.value - s21t0.value)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as SIORs,
   (((s23t1.value - s23t0.value) - (s24t1.value - s24t0.value)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as SIOWs,
    ((s13t1.value - s13t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as IORedo, 
    (((s22t1.value - s22t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as TIORmbs,
   (((s25t1.value - s25t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as TIOWmbs,
   (((s29t1.value - s29t0.value)/1024/1024) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as cellpiobpreoff,
    ((s33t1.value - s33t0.value) / (s20t1.value - s20t0.value))*100 as flashcache
FROM dba_hist_snapshot s0,
  dba_hist_snapshot s1,
  dba_hist_sysstat s13t0,       -- redo writes, diffed
  dba_hist_sysstat s13t1,
  dba_hist_sysstat s20t0,       -- physical read total IO requests, diffed
  dba_hist_sysstat s20t1,
  dba_hist_sysstat s21t0,       -- physical read total multi block requests, diffed
  dba_hist_sysstat s21t1,  
  dba_hist_sysstat s22t0,       -- physical read total bytes, diffed
  dba_hist_sysstat s22t1,  
  dba_hist_sysstat s23t0,       -- physical write total IO requests, diffed
  dba_hist_sysstat s23t1,
  dba_hist_sysstat s24t0,       -- physical write total multi block requests, diffed
  dba_hist_sysstat s24t1,
  dba_hist_sysstat s25t0,       -- physical write total bytes, diffed
  dba_hist_sysstat s25t1,
  dba_hist_sysstat s29t0,       -- cell physical IO bytes eligible for predicate offload, diffed, cellpiobpreoff
  dba_hist_sysstat s29t1,
  dba_hist_sysstat s33t0,       -- cell flash cache read hits
  dba_hist_sysstat s33t1
WHERE s0.dbid            = &_dbid    -- CHANGE THE DBID HERE!
AND s1.dbid              = s0.dbid
AND s13t0.dbid            = s0.dbid
AND s13t1.dbid            = s0.dbid
AND s20t0.dbid            = s0.dbid
AND s20t1.dbid            = s0.dbid
AND s21t0.dbid            = s0.dbid
AND s21t1.dbid            = s0.dbid
AND s22t0.dbid            = s0.dbid
AND s22t1.dbid            = s0.dbid
AND s23t0.dbid            = s0.dbid
AND s23t1.dbid            = s0.dbid
AND s24t0.dbid            = s0.dbid
AND s24t1.dbid            = s0.dbid
AND s25t0.dbid            = s0.dbid
AND s25t1.dbid            = s0.dbid
AND s29t0.dbid            = s0.dbid
AND s29t1.dbid            = s0.dbid
AND s33t0.dbid            = s0.dbid
AND s33t1.dbid            = s0.dbid
--AND s0.instance_number   = &_instancenumber   -- CHANGE THE INSTANCE_NUMBER HERE!
AND s1.instance_number   = s0.instance_number
AND s13t0.instance_number = s0.instance_number
AND s13t1.instance_number = s0.instance_number
AND s20t0.instance_number = s0.instance_number
AND s20t1.instance_number = s0.instance_number
AND s21t0.instance_number = s0.instance_number
AND s21t1.instance_number = s0.instance_number
AND s22t0.instance_number = s0.instance_number
AND s22t1.instance_number = s0.instance_number
AND s23t0.instance_number = s0.instance_number
AND s23t1.instance_number = s0.instance_number
AND s24t0.instance_number = s0.instance_number
AND s24t1.instance_number = s0.instance_number
AND s25t0.instance_number = s0.instance_number
AND s25t1.instance_number = s0.instance_number
AND s29t0.instance_number = s0.instance_number
AND s29t1.instance_number = s0.instance_number
AND s33t0.instance_number = s0.instance_number
AND s33t1.instance_number = s0.instance_number
AND s1.snap_id            = s0.snap_id + 1
AND s13t0.snap_id         = s0.snap_id
AND s13t1.snap_id         = s0.snap_id + 1
AND s20t0.snap_id         = s0.snap_id
AND s20t1.snap_id         = s0.snap_id + 1
AND s21t0.snap_id         = s0.snap_id
AND s21t1.snap_id         = s0.snap_id + 1
AND s22t0.snap_id         = s0.snap_id
AND s22t1.snap_id         = s0.snap_id + 1
AND s23t0.snap_id         = s0.snap_id
AND s23t1.snap_id         = s0.snap_id + 1
AND s24t0.snap_id         = s0.snap_id
AND s24t1.snap_id         = s0.snap_id + 1
AND s25t0.snap_id         = s0.snap_id
AND s25t1.snap_id         = s0.snap_id + 1
AND s29t0.snap_id         = s0.snap_id
AND s29t1.snap_id         = s0.snap_id + 1
AND s33t0.snap_id         = s0.snap_id
AND s33t1.snap_id         = s0.snap_id + 1
AND s13t0.stat_name       = 'redo writes'
AND s13t1.stat_name       = s13t0.stat_name
AND s20t0.stat_name       = 'physical read total IO requests'
AND s20t1.stat_name       = s20t0.stat_name
AND s21t0.stat_name       = 'physical read total multi block requests'
AND s21t1.stat_name       = s21t0.stat_name
AND s22t0.stat_name       = 'physical read total bytes'
AND s22t1.stat_name       = s22t0.stat_name
AND s23t0.stat_name       = 'physical write total IO requests'
AND s23t1.stat_name       = s23t0.stat_name
AND s24t0.stat_name       = 'physical write total multi block requests'
AND s24t1.stat_name       = s24t0.stat_name
AND s25t0.stat_name       = 'physical write total bytes'
AND s25t1.stat_name       = s25t0.stat_name
AND s29t0.stat_name       = 'cell physical IO bytes eligible for predicate offload'
AND s29t1.stat_name       = s29t0.stat_name
AND s33t0.stat_name       = 'cell flash cache read hits'
AND s33t1.stat_name       = s33t0.stat_name
)
-- WHERE 
-- tm > to_char(sysdate - 30, 'MM/DD/YY HH24:MI')
-- id  in (select snap_id from (select * from r2toolkit.r2_regression_data union all select * from r2toolkit.r2_outlier_data))
-- id in (3391)
-- aas > 1
-- oscpuio > 50
-- rmancpupct > 0
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') >= 1     -- Day of week: 1=Sunday 7=Saturday
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'D') <= 7
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') >= '0900'     -- Hour (TO_CHAR returns a string, so quote the literal)
-- AND TO_CHAR(s0.END_INTERVAL_TIME,'HH24MI') <= '1800'
-- AND s0.END_INTERVAL_TIME >= TO_DATE('2010-jan-17 00:00:00','yyyy-mon-dd hh24:mi:ss')     -- Data range
-- AND s0.END_INTERVAL_TIME <= TO_DATE('2010-aug-22 23:59:59','yyyy-mon-dd hh24:mi:ss')
ORDER BY id ASC;
}}}
{{{


SELECT   A.INST_ID
,        A.SNAP_ID
,        TO_CHAR(A.START_TIME, 'YYYYMMDD HH24MISS')  AS START_TIME
,        A.DURATION_IN_MIN
,        A.STAT_NAME_RPT
,        DECODE( A.STAT_NAME_RPT 
               , 'flashcache'
               , 100 * SUM((A.STAT_VALUE * A.STAT_OPER)) / SUM((A.STAT_VALUE2 * A.STAT_OPER))
               , 'SIORs'
               , SUM((A.STAT_VALUE * A.STAT_OPER)) / (60 * A.DURATION_IN_MIN)
               , 'SIOWs'
               , SUM((A.STAT_VALUE * A.STAT_OPER)) / (60 * A.DURATION_IN_MIN)
               , 'IORedo'
               , SUM((A.STAT_VALUE * A.STAT_OPER)) / (60 * A.DURATION_IN_MIN)
               , SUM((A.STAT_VALUE * A.STAT_OPER)) / (60 * A.DURATION_IN_MIN) /1024/1024
               )  AS   STAT_VALUE 
FROM      (SELECT S0.INSTANCE_NUMBER   INST_ID
           ,      S0.SNAP_ID          AS SNAP_ID 
           ,      S0.END_INTERVAL_TIME AS START_TIME 
           ,      round( EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                       + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                       + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                       + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)  as duration_in_min
           ,      DECODE( ST0.STAT_NAME 
                        , 'redo writes', 'IORedo'                                          -- 13
                        , 'physical read total IO requests', 'SIORs'                       -- 20
                        , 'physical read total multi block requests', 'SIORs'             -- 21
                        , 'physical read total bytes', 'TIORmbs'                             -- 22
                        , 'physical write total IO requests', 'SIOWs'                      -- 23
                        , 'physical write total multi block requests', 'SIOWs'            -- 24
                        , 'physical write total bytes', 'TIOWmbs'                            -- 25
                        , 'cell physical IO bytes eligible for predicate offload', 'cellpiobpreoff' -- 29
                        -- , 'cell flash cache read hits', ''                            -- 33
                        , '')   AS STAT_NAME_RPT
           ,      DECODE( ST0.STAT_NAME 
                        , 'redo writes', 1                                           -- 13
                        , 'physical read total IO requests', 1                       -- 20
                        , 'physical read total multi block requests', -1             -- 21
                        , 'physical read total bytes', 1                             -- 22
                        , 'physical write total IO requests', 1                      -- 23
                        , 'physical write total multi block requests', -1            -- 24
                        , 'physical write total bytes', 1                            -- 25
                        , 'cell physical IO bytes eligible for predicate offload', 1 -- 29
                        -- , 'cell flash cache read hits', ''                            -- 33
                        , '')   AS STAT_OPER
           ,      ST1.VALUE - ST0.VALUE AS STAT_VALUE 
           ,      0                     AS STAT_VALUE2    
           ,      DECODE( ST0.STAT_NAME 
                        , 'redo writes', 3                                           -- 13
                        , 'physical read total IO requests', 1                       -- 20
                        , 'physical read total multi block requests', 1              -- 21
                        , 'physical read total bytes', 4                             -- 22
                        , 'physical write total IO requests', 2                      -- 23
                        , 'physical write total multi block requests',  2            -- 24
                        , 'physical write total bytes', 5                            -- 25
                        , 'cell physical IO bytes eligible for predicate offload', 6 -- 29
                        , 99)                    AS STAT_ORDER 
           FROM    V$DATABASE         VD
           ,       dba_hist_snapshot s0
           ,       dba_hist_snapshot s1
           ,       dba_hist_sysstat st0
           ,       dba_hist_sysstat st1
           WHERE   VD.DBID = S0.DBID
           AND     S1.DBID = S0.DBID
           AND     S1.INSTANCE_NUMBER = S0.INSTANCE_NUMBER
           AND     S0.DBID            = ST0.DBID
           AND     S0.INSTANCE_NUMBER = ST0.INSTANCE_NUMBER
           AND     S0.SNAP_ID         = ST0.SNAP_ID 
           AND     S1.DBID            = ST1.DBID
           AND     S1.INSTANCE_NUMBER = ST1.INSTANCE_NUMBER
           AND     S1.SNAP_ID         = ST1.SNAP_ID 
           AND     S0.SNAP_ID     +1  = ST1.SNAP_ID 
           AND     ST0.STAT_ID        = ST1.STAT_ID 
           AND     ST0.STAT_NAME IN   ( 'redo writes'
                                      , 'physical read total IO requests'
                                      , 'physical read total multi block requests'
                                      , 'physical read total bytes'
                                      , 'physical write total IO requests'
                                      , 'physical write total multi block requests'
                                      , 'physical write total bytes'
                                      , 'cell physical IO bytes eligible for predicate offload'
                                      )
           UNION ALL 
           SELECT S0.INSTANCE_NUMBER   INST_ID
           ,      S0.SNAP_ID          AS SNAP_ID 
           ,      S0.END_INTERVAL_TIME AS START_TIME 
           ,      round( EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                       + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                       + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                       + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)  as duration_in_min
           ,      'flashcache'    AS STAT_NAME_RPT
           ,      1    AS STAT_OPER
           ,      DECODE(ST0.STAT_NAME ,'cell flash cache read hits', (  ST1.VALUE - ST0.VALUE) ,0) AS STAT_VALUE 
           ,      DECODE(ST0.STAT_NAME ,'physical read total IO requests', (  ST1.VALUE - ST0.VALUE) ,0) AS STAT_VALUE2 
           ,      7                     AS STAT_ORDER 
           FROM    V$DATABASE         VD
           ,       dba_hist_snapshot s0
           ,       dba_hist_snapshot s1
           ,       dba_hist_sysstat st0
           ,       dba_hist_sysstat st1
           WHERE   VD.DBID = S0.DBID
           AND     S1.DBID = S0.DBID
           AND     S1.INSTANCE_NUMBER = S0.INSTANCE_NUMBER
           AND     S0.DBID            = ST0.DBID
           AND     S0.INSTANCE_NUMBER = ST0.INSTANCE_NUMBER
           AND     S0.SNAP_ID         = ST0.SNAP_ID 
           AND     S1.DBID            = ST1.DBID
           AND     S1.INSTANCE_NUMBER = ST1.INSTANCE_NUMBER
           AND     S1.SNAP_ID         = ST1.SNAP_ID 
           AND     S0.SNAP_ID     +1  = ST1.SNAP_ID 
           AND     ST0.STAT_ID        = ST1.STAT_ID 
           AND     ST0.STAT_NAME IN   ( 'physical read total IO requests'  -- 20
                                      , 'cell flash cache read hits'   -- 33
                                      )
           )   A 
GROUP BY A.INST_ID
,        A.SNAP_ID
,        A.START_TIME
,        A.DURATION_IN_MIN
,        A.STAT_NAME_RPT
,        A.STAT_ORDER
ORDER BY 2 DESC, 1 ASC, A.STAT_ORDER ASC
;



################################################################################################################################################################



-- awr_iowl column format
instname        DB_ID      hostname                           id tm                inst        dur      SIORs      SIOWs     IORedo    TIORmbs    TIOWmbs cellpiobpreoff flashcache
--------------- ---------- ------------------------------ ------ ----------------- ---- ---------- ---------- ---------- ---------- ---------- ---------- -------------- ----------
pib01scp1       1859430704 x03pdb01                         1464 05/11/13 23:00:07    1      60.18     28.203     40.732      1.427     99.087     27.639         97.651          5


-- final out of row format
   INST_ID    SNAP_ID START_TIME      DURATION_IN_MIN STAT_NAME_RPT  STAT_VALUE
---------- ---------- --------------- --------------- -------------- ----------
         1       1464 20130511 230007           60.18 SIORs          28.20316827 
         1       1464 20130511 230007           60.18 SIOWs          40.7322477 
         1       1464 20130511 230007           60.18 IORedo         1.426553672 
         1       1464 20130511 230007           60.18 TIORmbs        99.08688756 
         1       1464 20130511 230007           60.18 TIOWmbs        27.63852702 
         1       1464 20130511 230007           60.18 cellpiobpreoff 97.65057259 
         1       1464 20130511 230007           60.18 flashcache     4.588101237


-- computations
flashcache = 100 * 28588/623090 = 4.588   (flash cache read hits as a pct of physical read total IO requests)
SIORs = (623090-521254)/3610.8 = 28.2031682729589   (3610.8 sec = 60.18 min * 60)
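
The arithmetic above can be double-checked straight from SQL*Plus against DUAL (the literals are copied from the raw delta rows, nothing else assumed):

{{{
-- sanity-check the derived figures against the raw snapshot deltas
SELECT 100 * 28588 / 623090              AS flashcache_pct   -- flash cache hits / read IO requests
,      (623090 - 521254) / (60.18 * 60)  AS siors_per_sec    -- SIORs delta over the 60.18 min interval
FROM   dual;
}}}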



-- raw 2nd union row format 
   INST_ID    SNAP_ID START_TIME                      DURATION_IN_MIN STAT_NAME_RPT  STAT_OPER STAT_VALUE STAT_VALUE2 STAT_ORDER
---------- ---------- ------------------------------- --------------- ------------- ---------- ---------- ----------- ----------
         1       1464 11-MAY-13 11.00.07.038000000 PM           60.18 flashcache             1          0      623090          7  <<
         1       1464 11-MAY-13 11.00.07.038000000 PM           60.18 flashcache             1      28588           0          7  <<
         2       1464 11-MAY-13 11.00.07.133000000 PM           60.18 flashcache             1      94156           0          7 
         2       1464 11-MAY-13 11.00.07.133000000 PM           60.18 flashcache             1          0      706102          7 
         3       1464 11-MAY-13 11.00.07.143000000 PM           60.18 flashcache             1      31014           0          7 
         3       1464 11-MAY-13 11.00.07.143000000 PM           60.18 flashcache             1          0      175835          7 
         4       1464 11-MAY-13 11.00.07.134000000 PM           60.18 flashcache             1          0       69121          7 
         4       1464 11-MAY-13 11.00.07.134000000 PM           60.18 flashcache             1      29802           0          7 

-- raw 1st union row format
   INST_ID    SNAP_ID START_TIME                      DURATION_IN_MIN STAT_NAME_RPT   STAT_OPER STAT_VALUE STAT_VALUE2 STAT_ORDER
---------- ---------- ------------------------------- --------------- -------------- ---------- ---------- ----------- ----------
         1       1464 11-MAY-13 11.00.07.038000000 PM           60.18 SIORs                   1     623090           0          1 <<
         1       1464 11-MAY-13 11.00.07.038000000 PM           60.18 SIORs                  -1     521254           0          1 <<

         1       1464 11-MAY-13 11.00.07.038000000 PM           60.18 SIOWs                   1     174630           0          2 
         1       1464 11-MAY-13 11.00.07.038000000 PM           60.18 SIOWs                  -1      27554           0          2 

         1       1464 11-MAY-13 11.00.07.038000000 PM           60.18 IORedo                  1       5151           0          3 
         1       1464 11-MAY-13 11.00.07.038000000 PM           60.18 TIORmbs                 1    3.8E+11           0          4 
         1       1464 11-MAY-13 11.00.07.038000000 PM           60.18 TIOWmbs                 1    1.0E+11           0          5 
         1       1464 11-MAY-13 11.00.07.038000000 PM           60.18 cellpiobpreoff          1    3.7E+11           0          6 


         2       1464 11-MAY-13 11.00.07.133000000 PM           60.18 IORedo                  1       5882           0          3 
         2       1464 11-MAY-13 11.00.07.133000000 PM           60.18 SIORs                   1     706102           0          1 
         2       1464 11-MAY-13 11.00.07.133000000 PM           60.18 SIORs                  -1     567419           0          1 
         2       1464 11-MAY-13 11.00.07.133000000 PM           60.18 SIOWs                  -1      33964           0          2 
         2       1464 11-MAY-13 11.00.07.133000000 PM           60.18 SIOWs                   1     169899           0          2 
         2       1464 11-MAY-13 11.00.07.133000000 PM           60.18 TIORmbs                 1    4.2E+11           0          4 
         2       1464 11-MAY-13 11.00.07.133000000 PM           60.18 TIOWmbs                 1    1.1E+11           0          5 
         2       1464 11-MAY-13 11.00.07.133000000 PM           60.18 cellpiobpreoff          1    4.1E+11           0          6 
         3       1464 11-MAY-13 11.00.07.143000000 PM           60.18 IORedo                  1       5978           0          3 
         3       1464 11-MAY-13 11.00.07.143000000 PM           60.18 SIORs                  -1     141463           0          1 
         3       1464 11-MAY-13 11.00.07.143000000 PM           60.18 SIORs                   1     175835           0          1 
         3       1464 11-MAY-13 11.00.07.143000000 PM           60.18 SIOWs                  -1      48135           0          2 
         3       1464 11-MAY-13 11.00.07.143000000 PM           60.18 SIOWs                   1      77786           0          2 
         3       1464 11-MAY-13 11.00.07.143000000 PM           60.18 TIORmbs                 1    4.6E+11           0          4 
         3       1464 11-MAY-13 11.00.07.143000000 PM           60.18 TIOWmbs                 1    2.0E+11           0          5 
         3       1464 11-MAY-13 11.00.07.143000000 PM           60.18 cellpiobpreoff          1    4.5E+10           0          6 
         4       1464 11-MAY-13 11.00.07.134000000 PM           60.18 IORedo                  1       6011           0          3 
         4       1464 11-MAY-13 11.00.07.134000000 PM           60.18 SIORs                   1      69121           0          1 
         4       1464 11-MAY-13 11.00.07.134000000 PM           60.18 SIORs                  -1      38704           0          1 
         4       1464 11-MAY-13 11.00.07.134000000 PM           60.18 SIOWs                  -1      19434           0          2 
         4       1464 11-MAY-13 11.00.07.134000000 PM           60.18 SIOWs                   1     163201           0          2 
         4       1464 11-MAY-13 11.00.07.134000000 PM           60.18 TIORmbs                 1    1.5E+11           0          4 
         4       1464 11-MAY-13 11.00.07.134000000 PM           60.18 TIOWmbs                 1    7.2E+10           0          5 
         4       1464 11-MAY-13 11.00.07.134000000 PM           60.18 cellpiobpreoff          1  121110528           0          6 

 32 rows selected

}}}
Oracle Enterprise Manager 12c: Oracle Exadata Discovery Cookbook  http://www.oracle.com/technetwork/oem/exa-mgmt/em12c-exadata-discovery-cookbook-1662643.pdf

Prerequisite script for Exadata Discovery in Oracle Enterprise Manager Cloud Control 12c (Doc ID 1473912.1)

https://community.oracle.com/message/12261216#12261216 <-- unfortunately you can't reuse the existing 12cR2 agent, so I just installed a new 12cR3 agent in a new home
http://docs.oracle.com/cd/E24628_01/install.121/e24089/appdx_repoint_agent.htm#BABICJCE

follow these sections of the docs
http://docs.oracle.com/cd/E24628_01/doc.121/e27442/ch2_deployment.htm#EMXIG215
http://docs.oracle.com/cd/E24628_01/doc.121/e27442/ch3_discovery.htm#EMXIG206


{{{
-- OLTP
alter system set sga_max_size=18G scope=spfile sid='*';
alter system set sga_target=0 scope=spfile sid='*';
alter system set db_cache_size=10G scope=spfile sid='*';
alter system set shared_pool_size=2G scope=spfile sid='*';
alter system set large_pool_size=4G scope=spfile sid='*';
alter system set java_pool_size=256M scope=spfile sid='*';
alter system set pga_aggregate_target=5G scope=spfile sid='*';

-- DW
alter system set sga_max_size=18G scope=spfile sid='*';
alter system set sga_target=0 scope=spfile sid='*';
alter system set db_cache_size=10G scope=spfile sid='*';
alter system set shared_pool_size=2G scope=spfile sid='*';
alter system set large_pool_size=4G scope=spfile sid='*';
alter system set java_pool_size=256M scope=spfile sid='*';
alter system set pga_aggregate_target=20G scope=spfile sid='*';
}}}
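After bouncing the instance, the manually sized pools can be verified against what actually got allocated; a quick check (standard dictionary views, applies to either profile above):

{{{
-- confirm the manual SGA components took effect after the restart
SELECT name, ROUND(bytes/1024/1024/1024, 2) AS gb
FROM   v$sgainfo
WHERE  name IN ('Buffer Cache Size', 'Shared Pool Size', 'Large Pool Size', 'Java Pool Size');

SHOW PARAMETER pga_aggregate_target
}}}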
http://www.rittmanmead.com/2008/09/testing-advanced-oltp-compression-in-oracle-11g/
http://www.rittmanmead.com/2006/07/techniques-to-reduce-io-partitioning-and-compression/
10205OMS Restarts When XMLLoader Times out For Repository Connection or 11G OMS login hangs via Cisco Firewall [ID 1073473.1]

http://wiki.oracle.com/page/Oracle+OpenWorld+Unconference
http://wiki.oracle.com/page/What+to+Expect+at+the+Unconference

2010 unconference 
http://wikis.sun.com/display/JavaOne/Unconferences+at+JavaOne+and+Oracle+Develop+2010
Best Practices for Maintaining Your Oracle RAC Cluster [CON8252]
https://oracleus.activeevents.com/2014/connect/sessionDetail.ww?SESSION_ID=8252&tclass=popup

Oracle RAC Operational Best Practices [CON8171]
https://oracleus.activeevents.com/2014/connect/sessionDetail.ww?SESSION_ID=8171


see other presentations and slides here, no need to register
https://oracleus.activeevents.com/2014/connect/search.ww#loadSearch-event=null&searchPhrase=&searchType=session&tc=0&sortBy=&p=&i(10009)=10105

<<<
Goal

Sometimes performance degrades after migrating from an earlier version to a higher one; for example, performance may degrade after migrating from 10g to 11g. This tends to happen when thorough testing was not done before the migration. In such cases, reverting the OPTIMIZER_FEATURES_ENABLE parameter to the previous version may improve performance.
Solution

The parameter can be set from the system or session level:

1. alter system set optimizer_features_enable='10.2.0.4' scope=spfile;

2. alter session set optimizer_features_enable='10.2.0.4';

When setting this parameter, remember that the optimizer behavior reverts to the older version, so the new optimizer features will not be used. Furthermore, this parameter is not meant to be a permanent fix; it is recommended only as a temporary measure until permanent tuning is implemented.

<<<
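The value in effect can be checked before and after flipping it; a quick sketch (the version string is just an example, and the session-level form is a safe way to test before committing to a system-wide change):

{{{
-- check the current setting
SELECT name, value, isdefault
FROM   v$parameter
WHERE  name = 'optimizer_features_enable';

-- try the old optimizer behavior for one session only
ALTER SESSION SET optimizer_features_enable = '10.2.0.4';
}}}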
I've got two clients, both pure OLTP environments, that upgraded to new hardware with faster storage and moved from 10gR2 to 11gR2.
Both of them have their own tricks for keeping the SQL plans where they were before.

1) client1
after the upgrade, most of the plans changed and tended to favor the faster CPUs of the new environment. After investigating with SQLTXPLAIN (SQLTCOMPARE)
and some test cases, I found that the fix was to set the CPU system statistics back to the values of the old processor. Once I did that, everything
went back to the old plans.
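A sketch of how the CPU system statistics can be pinned back (the CPUSPEEDNW number here is purely hypothetical; take the real values from the old server, e.g. from its sys.aux_stats$):

{{{
-- set the noworkload CPU speed back to the old server's value (example number only)
BEGIN
  dbms_stats.set_system_stats(pname => 'CPUSPEEDNW', pvalue => 1500);
END;
/

-- confirm what the optimizer now sees
SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';
}}}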


2) client2
now this client environment is interesting, they had these parameters set
{{{
optimizer_mode	"FIRST_ROWS_10"
optimizer_index_cost_adj	"5"
}}}
after the upgrade, the only plan changes we had were in the reporting SQLs, which we easily fixed with profiles from the old environment. But the OLTP stuff did not change at all,
and that's because these two parameters made the database hardware agnostic, cool! So right at the implementation they had already thought about this ;)

so for OLTP environments.. you've got these two tricks at your disposal.. 







http://oprofile.sourceforge.net/examples/
<<showtoc>>

! Problem and fix - function not closing cursors
<<<
Here the "BAS.ALLOC_UTILITIES_SQL_CALC.CAL_QTY_ALLOC_FR_IMNT_RE" function is called multiple times and does not close the cursors it opens.
Adding a CLOSE of the cursors fixed the issue.

The troubleshooting steps:
* profile the session and cursor usage (get all diagnostic data)
* increase open_cursors from 300 to 1000 
* restart the database and weblogic app server (to get a clean slate) 
* profile the session and cursor usage and detail on the SQL_ID - catch the increase of cursors up to 1000 max, and validate the SQL from initial profiling. In this case the same function popped up as the culprit 
* implement fix on the function 
* kill the problem weblogic session to get clean slate on cursors of that session
* re-run app 
<<<
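The shape of the fix is simple. A generic PL/SQL sketch (names are made up, not the actual BAS package code) of a function that closes its cursor on every exit path:

{{{
CREATE OR REPLACE FUNCTION calc_qty (p_sku IN VARCHAR2) RETURN NUMBER IS
  CURSOR c_qty IS SELECT qty FROM some_table WHERE sku = p_sku;  -- hypothetical table
  l_qty NUMBER;
BEGIN
  OPEN c_qty;
  FETCH c_qty INTO l_qty;
  CLOSE c_qty;                                   -- the missing CLOSE was the leak
  RETURN NVL(l_qty, 0);
EXCEPTION
  WHEN OTHERS THEN
    IF c_qty%ISOPEN THEN CLOSE c_qty; END IF;    -- close on the error path too
    RAISE;
END;
/
}}}

Without the CLOSE, each call leaves another entry in v$open_cursor until the session hits the open_cursors limit.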


!! side note
* Cursor leak is different from PGA continuously increasing
{{{

I encountered a similar issue recently on a custom module of Oracle CC&B. The process errors with ORA-04036

If you dump the dba_hist_active_sess_history and graph it in time series you'll see PGA_ALLOCATED increases overtime, and you can color that by SQL_ID and you'll be able to track the PL/SQL entry object id that invoked those SQLs.

The problem was that the package contained logic that would loop over the rows of the driving cursor and push them into a PL/SQL collection in memory, which overloads the PGA (reaching up to 30GB). The culprit SQL_ID was executed 283 million times with 282 million rows processed, all pushed to the PGA. (Increasing the Size of a Collection (EXTEND Method) https://docs.oracle.com/cd/B28359_01/appdev.111/b28370/collections.htm#CJAIJHEI)

The recommendation to the developer was to rewrite the package to a set-based approach rather than row by row.
Putting 282 million rows into a PL/SQL collection is not scalable (due to server physical memory limitations) and will run longer as more rows are processed, versus one parallelized, high-IO-bandwidth operation.
}}}
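To illustrate that recommendation, a minimal before/after sketch (table names are hypothetical, not the actual CC&B module):

{{{
-- row-by-row: every fetched row is appended to a PGA-resident collection
DECLARE
  TYPE t_rows IS TABLE OF src_table%ROWTYPE;
  l_rows t_rows := t_rows();
  CURSOR c IS SELECT * FROM src_table;
  r src_table%ROWTYPE;
BEGIN
  OPEN c;
  LOOP
    FETCH c INTO r;
    EXIT WHEN c%NOTFOUND;
    l_rows.EXTEND;                 -- collection (and PGA) grows with every row
    l_rows(l_rows.COUNT) := r;
  END LOOP;
  CLOSE c;
  -- ... process l_rows ...
END;
/

-- set-based: one SQL statement, no PGA-resident copy of the data
INSERT /*+ APPEND */ INTO dst_table
SELECT * FROM src_table;
}}}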



! Below are the SQLs I used for troubleshooting 

!! count SQLs on v$open_cursor 
{{{
COLUMN USER_NAME FORMAT A15

SELECT s.machine, oc.user_name, oc.sql_text, count(1) 
FROM v$open_cursor oc, v$session s
WHERE oc.sid = s.sid
GROUP BY s.machine, oc.user_name, oc.sql_text
HAVING COUNT(1) > 2
ORDER BY count(1) DESC
;


MACHINE                                                          USER_NAME       SQL_TEXT                                                       COUNT(1)
---------------------------------------------------------------- --------------- ------------------------------------------------------------ ----------
appserver1                                                       ALLOC_APP_USER  SELECT BAS.ALLOC_UTILITIES_SQL_CALC.CAL_QTY_ALLOC_FR_IMNT_RE        360
appserver1                                                       ALLOC_APP_USER  WITH MAX_SEQ AS     (SELECT                                          40
appserver1                                                       ALLOC_APP_USER  SELECT                              apbs.SKU,                        33
appserver1                                                       ALLOC_APP_USER  WITH MAX_SEQ AS            (SELECT                    /*+ MA         28
appserver1                                                       ALLOC_APP_USER  WITH MAX_SEQ AS             (SELECT                     /*+          25
appserver1                                                       ALLOC_APP_USER  SELECT DISTINCT FISCAL_MONTH,  CASE     WHEN FISCAL_MONTH =          23
appserver1                                                       ALLOC_APP_USER  WITH MAX_SEQ AS           (SELECT              /*+ MATERIALI         22
appserver1                                                       ALLOC_APP_USER  SELECT COUNT ( DISTINCT AIR.STORE_ID) STR_COUNT_STYLE  FROM          22
appserver1                                                       ALLOC_APP_USER  SELECT DISTINCT BUYER_NAME, BUYER_ID FROM COMPANY_HIER_BUYER         21
appserver1                                                       ALLOC_APP_USER  SELECT LI_DETAILS.BATCH_ID,         ALLOC_SKU0.SKU2,                 21
appserver1                                                       ALLOC_APP_USER  SELECT DISTINCT ALLOC_ID,  ALLOC_LINE_ID,  APPT_DATE,  DC_RE         20
}}}

!! breakdown by SID on v$open_cursor
{{{
set lines 300
COLUMN USER_NAME FORMAT A15
SELECT s.machine, oc.user_name, oc.sql_text, s.sid, count(1)
FROM v$open_cursor oc, v$session s
WHERE oc.sid = s.sid
GROUP BY s.machine, oc.user_name, oc.sql_text, s.sid
HAVING COUNT(1) > 2
ORDER BY count(1) DESC
;

MACHINE                                                          USER_NAME       SQL_TEXT                                                            SID   COUNT(1)
---------------------------------------------------------------- --------------- ------------------------------------------------------------ ---------- ----------
appserver1                                                       ALLOC_APP_USER  SELECT BAS.ALLOC_UTILITIES_SQL_CALC.CAL_QTY_ALLOC_FR_IMNT_RE          4        167
appserver1                                                       ALLOC_APP_USER  SELECT BAS.ALLOC_UTILITIES_SQL_CALC.CAL_QTY_ALLOC_FR_IMNT_RE       1466        109
appserver1                                                       ALLOC_APP_USER  SELECT BAS.ALLOC_UTILITIES_SQL_CALC.CAL_QTY_ALLOC_FR_IMNT_RE       1489         84
appserver1                                                       ALLOC_APP_USER  SELECT                              apbs.SKU,                      1466         30
appserver1                                                       ALLOC_APP_USER  UPDATE ALGO_INPUT_FOR_REVIEW SET LOCK_FLAG ='Y' WHERE STORE_          4          7
appserver1                                                       ALLOC_APP_USER  WITH MAX_SEQ AS     (SELECT                                           4          6

}}}


!! sesstat open cursors count by SID, SQL_ID

here SID 1012 shows open cursors reaching the open_cursors limit of 1000 set in the database parameter (notice the count keeps increasing; the limit is enforced per session).
{{{

set lines 300
col username format a30
    select c.username,
           a.sid, c.machine, c.sql_id, c.prev_sql_id, c.plsql_object_id,
           sum(a.value) "opened cursors current"
    from   v$sesstat a, v$statname b, v$session c
    where  a.statistic# = b.statistic#
    and    b.name = 'opened cursors current'
    and    c.sid = a.sid
    group  by c.username, a.sid, c.machine, c.sql_id, c.prev_sql_id, c.plsql_object_id
    order by  sum(a.value) asc; 




USERNAME                          SID MACHINE                        SQL_ID        PREV_SQL_ID   PLSQL_OBJECT_ID opened cursors current
------------------------------ ------ ------------------------------ ------------- ------------- --------------- ----------------------

... 


ALLOC_APP_USER                    412 appserver1                                   4azbr8f51a1nr                                     49
ALLOC_APP_USER                     98 appserver1                                   bm3tbmdznzg62                                     80
ALLOC_APP_USER                   1012 appserver1                                   4ps27vbzfnw10                                    122  <<

135 rows selected.

USERNAME                          SID MACHINE                        SQL_ID        PREV_SQL_ID   PLSQL_OBJECT_ID opened cursors current
------------------------------ ------ ------------------------------ ------------- ------------- --------------- ----------------------

... 


ALLOC_APP_USER                   1106 appserver1                                   7xs6pawnx3gj2                                     51
ALLOC_APP_USER                     98 appserver1                                   bm3tbmdznzg62                                     80
ALLOC_APP_USER                   1012 appserver1                                   5m4nu3860346k                                    335  <<

137 rows selected.

USERNAME                          SID MACHINE                        SQL_ID        PREV_SQL_ID   PLSQL_OBJECT_ID opened cursors current
------------------------------ ------ ------------------------------ ------------- ------------- --------------- ----------------------

... 

ALLOC_APP_USER                   1106 appserver1                                   31vdxhgw47a00                                     72
ALLOC_APP_USER                     98 appserver1                                   bm3tbmdznzg62                                     80
ALLOC_APP_USER                   1012 appserver1                                   akkhfudfrvf92                                    996  <<

135 rows selected.

USERNAME                          SID MACHINE                        SQL_ID        PREV_SQL_ID   PLSQL_OBJECT_ID opened cursors current
------------------------------ ------ ------------------------------ ------------- ------------- --------------- ----------------------

... 

ALLOC_APP_USER                   1106 appserver1                                   31vdxhgw47a00                                     72
ALLOC_APP_USER                     98 appserver1                                   bm3tbmdznzg62                                     80
ALLOC_APP_USER                   1012 appserver1                                   9uydavp0gr167                                    997  <<

135 rows selected.

USERNAME                          SID MACHINE                        SQL_ID        PREV_SQL_ID   PLSQL_OBJECT_ID opened cursors current
------------------------------ ------ ------------------------------ ------------- ------------- --------------- ----------------------

... 

ALLOC_APP_USER                   1106 appserver1                                   31vdxhgw47a00                                     72
ALLOC_APP_USER                     98 appserver1                                   bm3tbmdznzg62                                     80
ALLOC_APP_USER                   1012 appserver1                                   7t6r7kutfr2s0                                   1000  <<

135 rows selected.

USERNAME                          SID MACHINE                        SQL_ID        PREV_SQL_ID   PLSQL_OBJECT_ID opened cursors current
------------------------------ ------ ------------------------------ ------------- ------------- --------------- ----------------------

... 

ALLOC_APP_USER                   1106 appserver1                                   31vdxhgw47a00                                     72
ALLOC_APP_USER                     98 appserver1                                   31vdxhgw47a00                                     79
ALLOC_APP_USER                   1012 appserver1                                   31vdxhgw47a00                                   1000  <<

128 rows selected.

}}}
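The listing above can be eyeballed, but when it runs to hundreds of sessions it helps to script the check. Here is a rough sketch (not part of the original notes) that parses rows like the ones above and flags sessions approaching the limit; the 1000 open_cursors limit and the simplified row layout are assumptions:

```python
# Sketch: flag sessions whose "opened cursors current" count is close to
# the open_cursors limit, given sqlplus output rows like the ones above.
# The limit of 1000 and the simplified column layout are assumptions.

def parse_rows(text):
    """Parse whitespace-separated rows into (username, sid, machine, count)."""
    rows = []
    for line in text.strip().splitlines():
        parts = line.split()
        if len(parts) < 4:
            continue
        username, sid, machine = parts[0], int(parts[1]), parts[2]
        count = int(parts[-1])          # last column: opened cursors current
        rows.append((username, sid, machine, count))
    return rows

def near_limit(rows, limit=1000, pct=0.9):
    """Return sessions at or above pct of the open_cursors limit."""
    return [r for r in rows if r[3] >= limit * pct]

sample = """
ALLOC_APP_USER  1106 appserver1   72
ALLOC_APP_USER    98 appserver1   80
ALLOC_APP_USER  1012 appserver1  997
"""
print(near_limit(parse_rows(sample)))   # only SID 1012 is close to the limit
```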


!! detail on specific SQLs 
{{{

-- then from SQL Developer, to investigate in detail, just do
select * from v$open_cursor;

set lines 300
col username format a30
select SQL_ID, hash_value, sid, user_name, sql_text
from v$open_cursor
where sid in (
    select sid
    from (
        select c.username,
               a.sid, c.machine, c.sql_id, c.prev_sql_id, c.plsql_object_id,
               sum(a.value) "opened cursors current"
        from   v$sesstat a, v$statname b, v$session c
        where  a.statistic# = b.statistic#
        and    b.name = 'opened cursors current'
        and    c.sid = a.sid
        group  by c.username, a.sid, c.machine, c.sql_id, c.prev_sql_id, c.plsql_object_id
        order by  sum(a.value) desc
        )
    where rownum < 2
)
order by sql_text asc;


}}}

!! Also run sqld360 to get the overall metadata definition of objects and related objects








! References
!! database 
asktom - Open cursors exceeded http://bit.ly/2sjvcMN
Working with Cursors http://www.oracle.com/technetwork/issue-archive/2013/13-mar/o23plsql-1906474.html
http://gennick.com/database/does-plsql-implicitly-close-cursors
http://gennick.com/database/more-on-plsqls-cursor-handling
http://gennick.com/database/plsql-cursor-handling-explained
http://gennick.com/the-box/on-the-importance-of-mental-models
Troubleshooting Open Cursor Issues https://docs.oracle.com/cd/E40329_01/admin.1112/e27149/cursor.htm#OMADM5352
How To: Identify a cursor leak in Oracle http://support.esri.com/technical-article/000010136

!! weblogic 
weblogic inactive connection timeout https://stackoverflow.com/questions/21006782/why-we-need-weblogic-inactive-connection-timeout
Tuning Data Source Connection Pools https://docs.oracle.com/cd/E17904_01/web.1111/e13737/ds_tuning.htm#JDBCA490
https://stackoverflow.com/questions/18328886/weblogic-leaked-connection-timeout
"Inactive Connection Timeout" and "Remove Infected Connections Enabled" parameters in WebLogic Server http://blog.raastech.com/2015/07/inactive-connection-timeout-and-remove.html 
http://andrejusb.blogspot.com/2010/02/monitoring-data-source-connection-leaks.html
Setting the JDBC Connection timeout properties in weblogic server through WLST http://www.albinsblog.com/2014/04/setting-jdbc-connection-timeouts.html#.WVQYqSJKWkI
JDBC Connection leaks – Generation and Detection [BEA-001153] http://blog.sysco.no/db/locking/jdbc-leak/    <- this blog has a program to generate leak JDBCLeak.zip



! final 
{{{

REM ##########################################
REM count SQLs on v$open_cursor 
REM ##########################################

set lines 300
COLUMN USER_NAME FORMAT A15
SELECT s.machine, oc.user_name, oc.sql_text, count(1)
FROM v$open_cursor oc, v$session s
WHERE oc.sid = s.sid
GROUP BY s.machine, oc.user_name, oc.sql_text
HAVING COUNT(1) > 2
ORDER BY count(1) ASC
;


REM ##########################################
REM breakdown by SID on v$open_cursor
REM ##########################################

set lines 300
COLUMN USER_NAME FORMAT A15
SELECT s.machine, oc.user_name, oc.sql_text, s.sid, count(1)
FROM v$open_cursor oc, v$session s
WHERE oc.sid = s.sid
GROUP BY s.machine, oc.user_name, oc.sql_text, s.sid
HAVING COUNT(1) > 2
ORDER BY count(1) ASC
;


REM ##########################################
REM sesstat open cursors count by SID, SQL_ID 
REM ##########################################

set lines 300
col username format a30
    select c.username,
           a.sid, c.machine, c.sql_id, c.prev_sql_id, c.plsql_object_id,
           sum(a.value) "opened cursors current"
    from   v$sesstat a, v$statname b, v$session c
    where  a.statistic# = b.statistic#
    and    b.name = 'opened cursors current'
    and    c.sid = a.sid
    group  by c.username, a.sid, c.machine, c.sql_id, c.prev_sql_id, c.plsql_object_id
    order by  sum(a.value) asc;


REM ##########################################
REM detail on specific SQLs 
REM ##########################################

set lines 300
col username format a30
select SQL_ID, hash_value, sid, user_name, sql_text
from v$open_cursor
where sid in (
    select sid
    from (
        select c.username,
               a.sid, c.machine, c.sql_id, c.prev_sql_id, c.plsql_object_id,
               sum(a.value) "opened cursors current"
        from   v$sesstat a, v$statname b, v$session c
        where  a.statistic# = b.statistic#
        and    b.name = 'opened cursors current'
        and    c.sid = a.sid
        group  by c.username, a.sid, c.machine, c.sql_id, c.prev_sql_id, c.plsql_object_id
        order by  sum(a.value) desc
        )
    where rownum < 2
)
order by sql_text asc;


}}}






! other references 
https://tanelpoder.com/2014/03/26/oracle-memory-troubleshooting-part-4-drilling-down-into-pga-memory-usage-with-vprocess_memory_detail/









<<showtoc>>


! 2 sessions testcase

{{{
alter system set undo_tablespace = undotbs2;

drop tablespace small_undo including contents and datafiles;

create undo tablespace small_undo
datafile '/u01/app/oracle/oradata/ORCLCDB/orcl/smallundo.dbf' size 10m autoextend off
;

alter system set undo_tablespace = small_undo;



drop table t1 purge;

create table t1(c1 int, c2 char(2000), c3 char(2000), c4 char(2000));

insert into t1 values(1, 'x', 'x', 'x');
commit;




-- session #2
variable rc refcursor
exec open :rc for select * from t1 where c1 = 1;


-- session #1 (execute 3x)
begin
  for idx in 1 .. 500 loop
    update t1 set c2 = idx, c3 = idx, c4 = idx where c1 = 1;
    if mod(idx, 500) = 0 then
      commit;
    end if;
  end loop;
end;
/


-- session #1 (execute 2x)
begin
  for idx in 1 .. 500 loop
    update t1 set c2 = idx, c3 = idx, c4 = idx where c1 = 1;
  end loop;
end;
/



--session #2 (snapshot too old error)
print rc 



--session #2 (force cleanout to avoid snapshot too old error)
select count(*) from t1;
print rc 





--monitoring sqls 

--check high MQL SQLs
select TO_CHAR(end_time,'MM/DD/YY HH24:MI') end_tm, 
maxquerysqlid, maxconcurrency, undotsn, undoblks, txncount, activeblks, unexpiredblks, expiredblks, round(maxquerylen/60,0) maxqlen, round(tuned_undoretention/60,0) Tuned
from dba_hist_undostat where end_time > sysdate-30 order by maxquerylen desc
/

--undostat
select TO_CHAR(end_time,'MM/DD/YY HH24:MI') end_tm, a.* from v$undostat a order by 1 desc;

}}}


! 3 sessions testcase - UPDATE and MERGE 
{{{

-- session #1 

alter system set undo_tablespace = undotbs2;

drop tablespace small_undo including contents and datafiles;

create undo tablespace small_undo
datafile '/u01/app/oracle/oradata/ORCLCDB/orcl/smallundo.dbf' size 10m autoextend off
;

alter system set undo_tablespace = small_undo;


col c1 format 9999
col c2 format a30
col c3 format a30
col c4 format a30


drop table t1 purge;
create table t1(c1 int, c2 char(2000), c3 char(2000), c4 char(2000));

insert into t1 values(1, 'x', 'x', 'x');
insert into t1 values(2, 'z', 'z', 'z');
commit;



drop table t2 purge;
create table t2(c1 int, c2 char(2000), c3 char(2000), c4 char(2000));

insert into t2 values(2, 'y', 'y', 'y');
commit;



drop table t3 purge;
create table t3(c1 int, c2 char(2000), c3 char(2000), c4 char(2000));

insert into t3 values(1, 'x', 'x', 'x');
commit;





-- session #1
variable rc refcursor
exec open :rc for select * from t1 where c1 = 1;

-- session #2
variable rc refcursor
exec open :rc for select * from t1 where c1 = 1;

-- session #3
variable rc refcursor
exec open :rc for select * from t3 where c1 = 1;




-- session #1 
MERGE INTO t2 s1
USING (select * from t1) s0 
  ON (
        s1.c1      = s0.c1
  )
WHEN MATCHED THEN UPDATE
SET 
    s1.c2 = s0.c2,
    s1.c3 = s0.c3,
    s1.c4 = s0.c4
WHEN NOT MATCHED THEN 
  INSERT VALUES (
    s0.c1,
    s0.c2,
    s0.c3,
    s0.c4
  );



-- session #1 (execute 1x)  
-- this table is on the select part of merge
-- without this UPDATE there's no dirty block for t1 table, hence no ORA-01555 
begin
  for i in 1 .. 10 loop
    update t1 set c1 = i, c2 = i, c3 = i, c4 = i;
  end loop;
end;
/



-- session #1 (execute 3x)
begin
  for i in 1 .. 500 loop
    update t3 set c1 = i, c2 = i, c3 = i, c4 = i;

    if mod(i, 500) = 0 then
      commit;
    end if;
  end loop;
end;
/


-- session #1 (execute 2x)
begin
  for i in 1 .. 500 loop
    update t3 set c1 = i, c2 = i, c3 = i, c4 = i;
  end loop;
end;
/



--session #1,2,3 (snapshot too old error)
print rc 





--monitoring sqls 

--check high MQL SQLs
select TO_CHAR(end_time,'MM/DD/YY HH24:MI') end_tm, 
maxquerysqlid, maxconcurrency, undotsn, undoblks, txncount, activeblks, unexpiredblks, expiredblks, round(maxquerylen/60,0) maxqlen, round(tuned_undoretention/60,0) Tuned
from dba_hist_undostat where end_time > sysdate-30 order by maxquerylen desc
/

--undostat
select TO_CHAR(end_time,'MM/DD/YY HH24:MI') end_tm, a.* from v$undostat a order by 1 desc;

}}}






! asktom 

{{{
it doesn't matter if it is used for "actual" undo or because of the retention period -- it is all "actual" undo.

But anyway, select sum(used_ublk) from v$transaction will tell you how much undo is being used for current, right now, transactions.

And -- allow me to clarify.  IF the undo tablespace can grow to accommodate the undo retention period -- it will.  If it cannot -- it will not.  So consider this example:

ops$tkyte@ORA920> @test
                           
<b>shows my undo tablespace is 1m right now.  

The biggest it can autoextend to is 2gig and it'll grow in 1m increments (I know that because I created it that way; this report doesn't show the 1m increment)
</b>
                                                                  MaxPoss    Max
Tablespace Name        KBytes         Used         Free   Used     Kbytes   Used
---------------- ------------ ------------ ------------ ------ ---------- ------
...
*UNDOTBS                1,024          960           64   93.8  2,088,960     .0
.....
                 ------------ ------------ ------------
sum                 2,001,920    1,551,936      449,984

13 rows selected.

ops$tkyte@ORA920>
ops$tkyte@ORA920> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     10800
undo_suppress_errors                 boolean     FALSE
undo_tablespace                      string      UNDOTBS
ops$tkyte@ORA920>
ops$tkyte@ORA920> drop table t;

Table dropped.

<b> my undo retention is 3 hours -- 10,800 seconds...</b>

ops$tkyte@ORA920> create table t ( x char(2000), y char(2000), z char(2000) );

Table created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> insert into t values ( 'x', 'x', 'x' );

1 row created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> begin
  2          for i in 1 .. 500
  3          loop
  4                  update t set x = i, y = i, z = i;
  5                  commit;
  6          end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

<b>now each of those transactions is 6+ kbytes of undo -- 3 * 2000 byte "before images" to save off...  That should generate well over 3meg of undo by the time it is done BUT in 500 tiny transactions.  

If the undo retention period is 3hours and I have 1meg of undo and that 1meg of undo can grow to 2gig -- Oracle will grow it and we can see that:</b>

ops$tkyte@ORA920> set echo off

                                                                  MaxPoss    Max
Tablespace Name        KBytes         Used         Free   Used     Kbytes   Used
---------------- ------------ ------------ ------------ ------ ---------- ------
...
*UNDOTBS                5,120        4,608          512   90.0  2,088,960     .2
.....

13 rows selected.

<b>the RBS is now 5m with 4.6 meg "used" (well, none of the undo is really used right now, it is just going to sit there for 3 hours waiting to be reused).

Now I do this:</b>

ops$tkyte@ORA920> create undo tablespace undotbl_new datafile size 1m;
Tablespace created.

ops$tkyte@ORA920> alter system set undo_tablespace = undotbl_new scope=both;
System altered.

ops$tkyte@ORA920> drop tablespace undotbs;
Tablespace dropped.

ops$tkyte@ORA920> exec print_table( 'select * from dba_data_files where tablespace_name = ''UNDOTBL_NEW'' ' );
FILE_NAME                     : /usr/oracle/ora920/OraHome1/oradata/ora920/o1_mf_undotbl__z0936pcx_.dbf
FILE_ID                       : 2
TABLESPACE_NAME               : UNDOTBL_NEW
BYTES                         : 1048576
BLOCKS                        : 128
STATUS                        : AVAILABLE
RELATIVE_FNO                  : 2<b>
AUTOEXTENSIBLE                : NO
MAXBYTES                      : 0
MAXBLOCKS                     : 0
INCREMENT_BY                  : 0</b>
USER_BYTES                    : 983040
USER_BLOCKS                   : 120
-----------------

PL/SQL procedure successfully completed.

<b>
And I rerun the test:</b>

ops$tkyte@ORA920> drop table t;
Table dropped.

ops$tkyte@ORA920> create table t ( x char(2000), y char(2000), z char(2000) );
Table created.

ops$tkyte@ORA920> insert into t values ( 'x', 'x', 'x' );
1 row created.

ops$tkyte@ORA920> begin
  2          for i in 1 .. 500
  3          loop
  4                  update t set x = i, y = i, z = i;
  5                  commit;
  6          end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920> set echo off
old  29: order by &1
new  29: order by 1

                                                             %  MaxPoss    Max
Tablespace Name        KBytes         Used         Free   Used   Kbytes   Used
---------------- ------------ ------------ ------------ ------  ------- ------
*UNDOTBL_NEW            1,024        1,024            0  100.0        0     .0


13 rows selected.

<b>and here, we can see that the undo tablespace is still 1m.  Oracle could not grow the undo -- but it did not fail the transactions.

So, in that respect, yes, the undo retention can be thought of as a "desire" -- if there is no way to get the undo space AND the undo space can be reused - it will reuse it.  If the datafiles are autoextend or the undo tablespace is big enough all by itself, it will not reuse it</b>

}}}
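The "each of those transactions is 6+ kbytes of undo" estimate above can be sanity-checked with quick arithmetic. This is just a back-of-envelope sketch of Tom's numbers (three char(2000) before-images per update, 500 committed updates), ignoring per-row and per-transaction overhead:

```python
# Rough undo estimate for the asktom test: each UPDATE rewrites three
# char(2000) columns, so the before image is roughly 3 * 2000 bytes;
# transaction/row overhead is deliberately ignored here.
row_before_image = 3 * 2000          # bytes of column before-images per update
iterations = 500                     # the loop commits 500 tiny transactions

total_undo = row_before_image * iterations
print(total_undo)                            # 3000000 bytes generated
print(round(total_undo / 1024 / 1024, 1))    # ~2.9 MB, i.e. "well over 3meg"
                                             # once overhead is added back in
```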
https://asktom.oracle.com/pls/apex/f?p=100:11:::::P11_QUESTION_ID:6894817116500
https://www.google.com/search?q=ORA-01723%3A+zero-length+columns+are+not+allowed&oq=ORA-01723%3A+zero-length+columns+are+not+allowed&aqs=chrome..69i57j69i58.510j0j1&sourceid=chrome&ie=UTF-8#q=ORA-01723:+zero-length+columns+are+not+allowed&start=20

https://brainfizzle.wordpress.com/2014/09/10/create-table-as-select-with-additional-or-null-columns/
{{{
SQL> SELECT * from orig_tab;

COL1             COL2
---------- ----------
val1                1
val2                2

SQL> CREATE TABLE copy_tab AS
  2  SELECT col1, col2, CAST( NULL AS NUMBER ) col3
  3  FROM orig_tab;

Table created.

SQL> DESC copy_tab;
 Name                                      Null?    Type
 ----------------------------------------- -------- --------------

 COL1                                               VARCHAR2(10)
 COL2                                               NUMBER
 COL3                                               NUMBER

SQL> SELECT * FROM copy_tab;

COL1             COL2       COL3
---------- ---------- ----------
val1                1
val2                2
}}}
https://www.experts-exchange.com/questions/21558874/ORA-01790-expression-must-have-same-datatype-as-corresponding-expression.html
data conversion https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:2958161889874


ORA-08004: sequence exceeds MAXVALUE and cannot be instantiated 
https://www.funoracleapps.com/2012/06/ora-08004-sequence-fndconcurrentprocess.html

{{{
Solution:

Increase the MAXVALUE (it must be greater than the previous MAXVALUE):

SQL> ALTER SEQUENCE APPLSYS.FND_CONCURRENT_PROCESSES_S MAXVALUE 99999999;

Sequence altered.

}}}
http://harvarinder.blogspot.com/2016/04/ora-10458-standby-database-requires.html
ORA-01194: When Opening Database After Restoring Backup http://www.parnassusdata.com/en/node/568
https://dborasol.wordpress.com/2013/10/29/rolling-forward-standby-database-with-rman-incremental-backup/
''TNS listener could not find available handler with matching protocol stack''

TNS:listener could not find available handler with matching protocol stack https://community.oracle.com/thread/362226
Oracle Net Listener Parameters (listener.ora) http://docs.oracle.com/cd/B28359_01/network.111/b28317/listener.htm#NETRF424

http://jhdba.wordpress.com/2010/09/02/using-the-connection_rate-parameter-to-stop-dos-attacks/ <-- good stuff
http://www.oracle.com/technetwork/database/enterprise-edition/oraclenetservices-connectionratelim-133050.pdf  <-- good stuff Oracle Net Listener Connection Rate Limiter 

http://rnm1978.wordpress.com/2010/09/02/misbehaving-informatica-kills-oracle/
http://rnm1978.wordpress.com/2010/10/18/when-is-a-bug-not-a-bug-when-its-a-design-decision/

bad
{{{
As per your update on the Application connection pooling settings
========================================
*****Connection pooling parameters*****
========================================
JMS.SESSION_CACHE_SIZE 50
JMS.CONCURRENT_CONSUMERS 50
JMS.RECEIVE_TIMEOUT_MILLIS 1
POOL.MAX_IDLE 10
POOL SIZE 250
POOL MAX WAIT -1
With that said, however, if we look at the settings logically from a purely client-server communication perspective,
We see that the pool itself (ie: how many connections will be made) is set to 250.
Here is the line which stands out:
POOL SIZE 250
From the SQL*Net point of view for JDBC Thin Connection Pooled connections, we usually see in the listener log from working environments, are 10 to 20 connections.
The value of 250 is very High 
++++
}}}
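A quick bit of arithmetic shows why POOL SIZE 250 stands out. The pool and cache figures are the ones quoted above; the cursor interpretation (each cached statement can keep a cursor open on the database side) is standard JDBC statement-cache behavior, and the comparison numbers are illustrative, not from the original note:

```python
# Worst-case cursor pressure implied by the app-tier settings quoted above.
# Each pooled connection can cache up to SESSION_CACHE_SIZE statements,
# and each cached statement can keep a cursor open in the database.
pool_size = 250              # POOL SIZE
session_cache_size = 50      # JMS.SESSION_CACHE_SIZE

worst_case_cursors = pool_size * session_cache_size
print(worst_case_cursors)    # 12500 open cursors across the pool, worst case

# Compare the pool size against the 10-20 connections typically seen
# in listener logs on healthy environments.
typical_high = 20
print(pool_size / typical_high)   # 12.5x the high end of typical
```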

Tested the commands on my personal dev environment. The LOAD_SALES_SKUS procedure does the following:
 
* Drop partition - alter table hr.SALES_SKUS drop partition WEEK_END_DATE_20160305;
* Add partition - alter table hr.SALES_SKUS add partition WEEK_END_DATE_20160305 values (to_date('20160305','YYYYMMDD'));
* Insert on partition - insert into hr.SALES_SKUS                             
 
The error “ORA-14400: inserted partition key does not map to any partition” means the target partition does not exist when the INSERT happens. In the highlighted part of the LOAD_SALES_SKUS code, the command to add the partition errors, so the INSERT then fails because the partition it needs was never created.

So the fix here is to change this -> TO_CHAR(L_DATE, 'YYYYMMDD')
to this -> TO_CHAR(L_DATE)



! References 
https://www.toadworld.com/platforms/oracle/w/wiki/4498.list-partitioned-tables-maintenance
https://gerardnico.com/wiki/oracle/partition/list
http://hemora.blogspot.com/2012/02/ora-38760-this-database-instance-failed.html
http://blogs.oracle.com/db/entry/ora-4030_troubleshooting
http://dioncho.wordpress.com/2009/07/27/playing-with-ora-4030-error/


11gR2 - finding the SQL that caused the ORA-4030
{{{
1) Check the ORA-4030 on the alert log 

cat alert_mtauat112.log | \
     awk 'BEGIN{buf=""}
          /[0-9]:[0-9][0-9]:[0-9]/{buf=$0}
          /ORA-/{print buf,$0}' > ORA-errors-$(date +%Y%m%d%H%M).txt

2) Check the recent occurrence (10:26:07)

Tue May 22 10:26:07 2012 ORA-04030: out of process memory when trying to allocate 2136 bytes (kxs-heap-c,qkkele)

ls -ltr *trc

-rw-r----- 1 oracle dba    15199 May 22 10:19 mtauat112_smon_9962.trc
-rw-r----- 1 oracle dba     2680 May 22 10:26 mtauat112_ora_26771.trc    <-- this is the trace file
-rw-r----- 1 oracle dba     3121 May 22 10:26 mtauat112_diag_9901.trc
-rw-r----- 1 oracle dba    18548 May 22 10:52 mtauat112_vkrm_10450.trc
-rw-r----- 1 oracle dba    38504 May 22 10:53 mtauat112_mmon_9971.trc
-rw-r----- 1 oracle dba   173892 May 22 10:53 mtauat112_dbrm_9905.trc
-rw-r----- 1 oracle dba   357614 May 22 10:53 mtauat112_lmhb_9950.trc


3) Open the trace file 

less mtauat112_ora_26771.trc

mmap(offset=211263488, len=4096) failed with errno=12 for the file oraclemtauat112
mmap(offset=211263488, len=4096) failed with errno=12 for the file oraclemtauat112
mmap(offset=211263488, len=4096) failed with errno=12 for the file oraclemtauat112
Incident 439790 created, dump file: /u01/app/oracle/diag/rdbms/mtauat11/mtauat112/incident/incdir_439790/mtauat112_ora_26771_i439790.trc
ORA-04030: out of process memory when trying to allocate 2136 bytes (kxs-heap-c,qkkele)

4) Open the dump 

less /u01/app/oracle/diag/rdbms/mtauat11/mtauat112/incident/incdir_439790/mtauat112_ora_26771_i439790.trc

* On the 4030 dump, it will show you the top memory users 

========= Dump for incident 439790 (ORA 4030) ========
----- Beginning of Customized Incident Dump(s) -----
=======================================
TOP 10 MEMORY USES FOR THIS PROCESS
---------------------------------------
74% 3048 MB, 196432 chunks: "permanent memory          "  SQL
         kxs-heap-c      ds=0x2ad56bf501e0  dsprt=0xbb1e6a0
23%  950 MB, 62433 chunks: "free memory               "
         top call heap   ds=0xbb1e6a0  dsprt=(nil)
 1%   31 MB, 194215 chunks: "free memory               "  SQL
         kxs-heap-c      ds=0x2ad56bf501e0  dsprt=0xbb1e6a0
 0%   12 MB, 316351 chunks: "chedef : qcuatc           "
         TCHK^a0c3c921   ds=0x2ad56bf5ff48  dsprt=0xbb1d780
 0%   11 MB, 2426 chunks: "kkecpst : kkehs           "
         TCHK^a0c3c921   ds=0x2ad56bf5ff48  dsprt=0xbb1d780
 0% 7373 KB, 1282 chunks: "kkecpst: kkehev           "
         TCHK^a0c3c921   ds=0x2ad56bf5ff48  dsprt=0xbb1d780
 0% 7367 KB, 2982 chunks: "permanent memory          "
         kkqctdrvTD: co  ds=0x2ad56d31da90  dsprt=0x2ad56bf5ff48
 0% 6555 KB, 2846 chunks: "kkqct.c.kgght             "
         TCHK^a0c3c921   ds=0x2ad56bf5ff48  dsprt=0xbb1d780
 0% 5272 KB, 24970 chunks: "kkqcscpopn:kccdef         "
         TCHK^a0c3c921   ds=0x2ad56bf5ff48  dsprt=0xbb1d780
 0% 4497 KB, 2094 chunks: "qkkele                    "  SQL
         kxs-heap-c      ds=0x2ad56bf501e0  dsprt=0xbb1e6a0

5) Search the "Current SQL" in the 4030 dump 

*** 2012-05-22 10:26:07.513
dbkedDefDump(): Starting incident default dumps (flags=0x2, level=3, mask=0x0)
----- Current SQL Statement for this session (sql_id=cdafm3qhc7k91) -----
select distinct "DAAll_LdPrdDescr"."CHARTFIELD1" "CHARTFIELD1" from (select "DAAll_LdOffDescr"."DEAL_TYPE_RPT" "DEAL_TYPE_RPT", "DAAll_LdOffDescr"."CREATED_DTTM" "CREATED_DT
/Current SQL   <-- do a search 


}}}
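The awk one-liner in step 1 can also be written in Python when awk isn't handy. This sketch mirrors the same logic (remember the latest timestamp-looking line, print it next to every ORA- line); the loose timestamp regex matches the awk pattern:

```python
import re

def ora_errors(lines):
    """Attach the most recently seen timestamp line to each ORA- line,
    mirroring the awk script in step 1."""
    ts_pat = re.compile(r'\d:\d\d:\d')   # same loose pattern as the awk
    buf = ""
    out = []
    for line in lines:
        if ts_pat.search(line):          # timestamp line: remember it
            buf = line
        if "ORA-" in line:               # error line: emit with timestamp
            out.append(f"{buf} {line}")
    return out

sample = [
    "Tue May 22 10:26:07 2012",
    "Errors in file /u01/app/oracle/diag/trace.trc:",
    "ORA-04030: out of process memory when trying to allocate 2136 bytes",
]
print(ora_errors(sample))
```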


Troubleshooting: Tuning the Shared Pool and Tuning Library Cache Latch Contention (Doc ID 62143.1)
http://www.oracle.com/technetwork/database/focus-areas/manageability/ps-s003-274003-106-1-fin-v2-128827.pdf
http://coskan.wordpress.com/2007/09/14/what-i-learned-about-shared-pool-management/
http://www.dbas-oracle.com/2013/05/5-Easy-Step-to-Solve-ORA-04031-with-Oracle-Support-Provided-Tool.html
https://blogs.oracle.com/db/entry/ora-4031_troubleshooting


https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-library-cache#TOC-cursor:-mutex-X-
http://blog.tanelpoder.com/files/Oracle_Latch_And_Mutex_Contention_Troubleshooting.pdf
https://sites.google.com/site/embtdbo/wait-event-documentation
http://tech.e2sn.com/oracle/troubleshooting/latch-contention-troubleshooting#TOC-Download-LatchProf-and-LatchProfX
http://blog.tanelpoder.com/files/scripts/latchprof.sql
http://blog.tanelpoder.com/files/scripts/latchprofx.sql
http://blog.tanelpoder.com/files/scripts/dba.sql
http://m.blog.csdn.net/blog/caixingyun/41827529
sgastatx http://blog.tanelpoder.com/2009/06/04/ora-04031-errors-and-monitoring-shared-pool-subpool-memory-utilization-with-sgastatxsql/
http://yong321.freeshell.org/oranotes/SharedPoolDuration.txt
http://grumpyolddba.blogspot.com/2014/03/final-version-of-my-hotsos-2014.html


library cache internals www.juliandyke.com/Presentations/LibraryCacheInternals.ppt


! Information Gathering Script For ORA-4031 Analysis On Shared Pool (Doc ID 1909791.1)
{{{
REM srdc_db_ora4031sp.sql - Collect information for ORA-4031 analysis on shared pool
define SRDCNAME='DB_ORA4031SP'
SET MARKUP HTML ON PREFORMAT ON
set TERMOUT off FEEDBACK off VERIFY off TRIMSPOOL on HEADING off
COLUMN SRDCSPOOLNAME NOPRINT NEW_VALUE SRDCSPOOLNAME
select 'SRDC_'||upper('&&SRDCNAME')||'_'||upper(instance_name)||'_'||
       to_char(sysdate,'YYYYMMDD_HH24MISS') SRDCSPOOLNAME from v$instance;
set TERMOUT on MARKUP html preformat on
REM
spool &SRDCSPOOLNAME..htm
select '+----------------------------------------------------+' from dual
union all
select '| Diagnostic-Name: '||'&&SRDCNAME' from dual
union all
select '| Timestamp:       '||
       to_char(systimestamp,'YYYY-MM-DD HH24:MI:SS TZH:TZM') from dual
union all
select '| Machine:         '||host_name from v$instance
union all
select '| Version:         '||version from v$instance
union all
select '| DBName:          '||name from v$database
union all
select '| Instance:        '||instance_name from v$instance
union all
select '+----------------------------------------------------+' from dual
/
set HEADING on MARKUP html preformat off
REM === -- end of standard header -- ===
REM
SET PAGESIZE 9999
SET LINESIZE 256
SET TRIMOUT ON
SET TRIMSPOOL ON
COL 'Total Shared Pool Usage' FORMAT 99999999999999999999999
COL bytes FORMAT 999999999999999
COL current_size FORMAT 999999999999999
COL name FORMAT A40
COL value FORMAT A20
ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';

SET MARKUP HTML ON PREFORMAT ON

/* Database identification */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Database identification:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT name, platform_id, database_role FROM v$database;
SELECT * FROM v$version WHERE banner LIKE 'Oracle Database%';

/* Current instance parameter values */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Current instance parameter values:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT n.ksppinm name, v.KSPPSTVL value
FROM x$ksppi n, x$ksppsv v
WHERE n.indx = v.indx
AND (n.ksppinm LIKE '%shared_pool%' OR n.ksppinm IN ('_kghdsidx_count', '_ksmg_granule_size', '_memory_imm_mode_without_autosga'))
ORDER BY 1;

/* Current memory settings */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Current memory settings:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT component, current_size FROM v$sga_dynamic_components;

/* Memory resizing operations */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Memory resizing operations:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT start_time, end_time, component, oper_type, oper_mode, initial_size, target_size, final_size, status
FROM v$sga_resize_ops
ORDER BY 1, 2;

/* Historical memory resizing operations */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Historical memory resizing operations:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT start_time, end_time, component, oper_type, oper_mode, initial_size, target_size, final_size, status
FROM dba_hist_memory_resize_ops
ORDER BY 1, 2;

/* Shared pool 4031 information */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Shared pool 4031 information:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT request_failures, last_failure_size FROM v$shared_pool_reserved;

/* Shared pool reserved 4031 information */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Shared pool reserved 4031 information:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT requests, request_misses, free_space, avg_free_size, free_count, max_free_size FROM v$shared_pool_reserved;

/* Shared pool memory allocations by size */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Shared pool memory allocations by size:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT name, bytes FROM v$sgastat WHERE pool = 'shared pool' AND (bytes > 999999 OR name = 'free memory') ORDER BY bytes DESC;

/* Total shared pool usage */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Total shared pool usage:' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT SUM(bytes) "Total Shared Pool Usage" FROM v$sgastat WHERE pool = 'shared pool' AND name != 'free memory';


--------------------------------------------------------------------------------
--
-- File name:   sgastatx
-- Purpose:     Show shared pool stats by sub-pool from X$KSMSS
--
-- Author:      Tanel Poder
-- Copyright:   (c) http://www.tanelpoder.com
--              
-- Usage:       @sgastatx <statistic name>
-- 	        @sgastatx "free memory"
--	        @sgastatx cursor
--
-- Other:       The other script for querying V$SGASTAT is called sgastat.sql
--              
--              
--
--------------------------------------------------------------------------------

COL sgastatx_subpool HEAD SUBPOOL FOR a30

PROMPT
PROMPT -- All allocations:

SELECT
    'shared pool ('||NVL(DECODE(TO_CHAR(ksmdsidx),'0','0 - Unused',ksmdsidx), 'Total')||'):'  sgastatx_subpool
  , SUM(ksmsslen) bytes
  , ROUND(SUM(ksmsslen)/1048576,2) MB
FROM 
    x$ksmss
WHERE
    ksmsslen > 0
--AND ksmdsidx > 0 
GROUP BY ROLLUP
   ( ksmdsidx )
ORDER BY
    sgastatx_subpool ASC
/

BREAK ON sgastatx_subpool SKIP 1
PROMPT -- Allocations matching "&1":

SELECT 
    subpool sgastatx_subpool
  , name
  , SUM(bytes)                  
  , ROUND(SUM(bytes)/1048576,2) MB
FROM (
    SELECT
        'shared pool ('||DECODE(TO_CHAR(ksmdsidx),'0','0 - Unused',ksmdsidx)||'):'      subpool
      , ksmssnam      name
      , ksmsslen      bytes
    FROM 
        x$ksmss
    WHERE
        ksmsslen > 0
    AND LOWER(ksmssnam) LIKE LOWER('%&1%')
)
GROUP BY
    subpool
  , name
ORDER BY
    subpool    ASC
  , SUM(bytes) DESC
/

BREAK ON sgastatx_subpool DUP


/* Cursor sharability problems */
/* This version is for >= 10g; for <= 9i substitute ss.kglhdpar for ss.address!!!! */
SET HEADING OFF
SELECT '**************************************************************************************************************' FROM dual
UNION ALL
SELECT 'Cursor sharability problems (this version is for >= 10g; for <= 9i substitute ss.kglhdpar for ss.address!!!!):' FROM dual
UNION ALL
SELECT '**************************************************************************************************************' FROM dual;
SET HEADING ON
SELECT sa.sql_text,sa.version_count,ss.*
FROM v$sqlarea sa,v$sql_shared_cursor ss
WHERE sa.address=ss.address AND sa.version_count > 50
ORDER BY sa.version_count ;




SPOOL OFF
EXIT
}}}

http://blog.tanelpoder.com/files/scripts/topcur.sql
http://blog.tanelpoder.com/files/scripts/topcurmem.sql


{{{

grep -l "ORA-04031" *trc | xargs ls -ltr

./sdbcaj10_ora_1030.trc:ORA-04031: unable to allocate 4000 bytes of shared memory ("shared pool","SELECT * FROM MFA_MESSAGE_ST...","sga heap(1,0)","kglsim heap")
./sdbcaj10_ora_8755.trc:ORA-04031: unable to allocate 4000 bytes of shared memory ("shared pool","SELECT * FROM MEM_ACCT_MFA_R...","sga heap(6,0)","kglsim heap")
./sdbcaj10_smon_26728.trc:ORA-04031: unable to allocate 4000 bytes of shared memory ("shared pool","select o.name from obj$ o wh...","sga heap(2,0)","kglsim heap")
./sdbcaj10_w008_26962.trc:ORA-04031: unable to allocate 3000 bytes of shared memory ("shared pool","unknown object","sga heap(2,0)","call")

}}}
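The quoted arguments in each ORA-04031 line identify the failed allocation size and the subpool/heap it came from, which is what makes the grep output above useful for spotting a pressured subpool. As a rough sketch (not any official Oracle tool), those fields can be pulled out programmatically; the sample line is taken verbatim from the trace output above.

```python
import re

# Sketch: extract the size and the four quoted arguments from an
# ORA-04031 message, e.g. to summarise failures by subpool/heap.
ORA4031 = re.compile(
    r'ORA-04031: unable to allocate (\d+) bytes of shared memory '
    r'\("([^"]*)","([^"]*)","([^"]*)","([^"]*)"\)'
)

def parse_ora4031(line):
    """Return (bytes, pool, object, heap, subheap), or None if no match."""
    m = ORA4031.search(line)
    if m is None:
        return None
    nbytes, pool, obj, heap, subheap = m.groups()
    return int(nbytes), pool, obj, heap, subheap

# Example, using the first trace line above:
sample = ('./sdbcaj10_ora_1030.trc:ORA-04031: unable to allocate 4000 bytes '
          'of shared memory ("shared pool","SELECT * FROM '
          'MFA_MESSAGE_ST...","sga heap(1,0)","kglsim heap")')
parsed = parse_ora4031(sample)  # (4000, 'shared pool', ..., 'sga heap(1,0)', 'kglsim heap')
```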





http://www.eygle.com/digest/2010/08/ora_00600_code_explain.html

{{{
Subject:	ORA-600 Lookup Error Categories
 	Doc ID:	175982.1	Type:	BULLETIN
 	Modified Date :	04-JUN-2009	Status:	PUBLISHED
In this Document
  Purpose
  Scope and Application
  ORA-600 Lookup Error Categories
     Internal Errors Categorised by number range
     Internal Errors Categorised by mnemonic

Applies to:

Oracle Server - Enterprise Edition - Version: 
Oracle Server - Personal Edition - Version: 
Oracle Server - Standard Edition - Version: 
Information in this document applies to any platform.
Checked for relevance 04-Jun-2009
Purpose

This note aims to provide a high-level overview of the internal errors which may be encountered on the Oracle Server (sometimes referred to as the Oracle kernel). It is written to provide a guide to where a particular error may live and to give some indication of what the impact of the problem may be. Where a problem is reproducible and connected with a specific feature, you might try not using the feature. If there is a consistent nature to the problem, it is good practice to ensure that the latest patchsets are in place and that you have taken reasonable measures to avoid known issues.

For repeatable issues for which the ora-600 lookup tool has not listed a likely cause, it is worth constructing a test case. Where this is possible, it greatly reduces the resolution time of any issue. It is important to remember that, in many instances, the Server is very flexible and a workaround can very often be achieved.

Scope and Application

This bulletin provides Oracle DBAs with an overview of internal database errors.

Disclaimer: Every effort has been made to provide a reasonable degree of accuracy in what has been stated. Please consider that the details provided only serve to provide an indication of functionality and, in some cases, may not be wholly correct.
ORA-600 Lookup Error Categories

In the Oracle Server source, there are two types of ora-600 error:

the first parameter is a number which reflects the source component or layer the error is connected with; or
the first parameter is a mnemonic which indicates the source module where the error originated. This type of internal error is now used in preference to an internal error number.
Both types of error may be possible in the Oracle server.

Internal Errors Categorised by number range

The following table provides an indication of internal error codes used in the Oracle server. Thus, if ora-600[X] is encountered, it is possible to glean some high-level background information: the error is generated in the Y layer, which indicates that there may be a problem with Z.

Ora-600 Base	Functionality	Description
1	Service Layer	The service layer has within it a variety of service related components which are associated with in-memory activities in the SGA such as, for example, the management of Enqueues, System Parameters, System State objects (these objects track the use of structures in the SGA by Oracle server processes), etc. In the main, this layer provides support for process communication, for locking, and for the management of structures that allow multiple user processes to connect and interact within the SGA.
Note: vos (Virtual Operating System) provides features to support the functionality above. As the name suggests, it provides base functionality in much the same way as an Operating System does.
 
Ora-600 Base	Functionality	Description
1	vos	Component notifier 
100	vos	Debug
300	vos	Error
500	vos	Lock
700	vos	Memory
900	vos	System Parameters 
1100	vos	System State object 
1110	vos	Generic Linked List management 
1140	vos	Enqueue
1180	vos	Instance Locks 
1200	vos	User State object 
1400	vos	Async Msgs 
1700	vos	license Key 
1800	vos	Instance Registration 
1850	vos	I/O Services components
2000	Cache Layer	Where errors are generated in this area, it is advisable to check whether the error is repeatable and whether it is perhaps associated with recovery or undo type operations; where this is the case and the error is repeatable, this may suggest some kind of hardware or physical issue with a data file, control file or log file. The Cache layer is responsible for making the changes to the underlying files as well as managing the related memory structures in the SGA.
Note: rcv indicates recovery. It is important to remember that the Oracle cache layer effectively goes through the same code paths as the recovery mechanism.
 
Ora-600 Base	Functionality	Description
2000	server/rcv	Cache Op
2100	server/rcv	Control File mgmt 
2200	server/rcv	Misc (SCN etc.) 
2400	server/rcv	Buffer Instance Hash Table 
2600	server/rcv	Redo file component 
2800	server/rcv	Db file 
3000	server/rcv	Redo Application 
3200	server/cache	Buffer manager 
3400	server/rcv	Archival & media recovery component 
3600	server/rcv	recovery component
3700	server/rcv	Thread component 
3800	server/rcv	Compatibility segment
It is important to consider when the error occurred and the context in which the error was generated. If the error does not reproduce, it may be an in-memory issue.
 

4000	Transaction Layer	Primarily the transaction layer is involved with maintaining structures associated with the management of transactions. As with the cache layer, problems encountered in this layer may indicate some kind of issue at a physical level. Thus it is important to try to repeat the same steps to see if the problem recurs.
 
Ora-600 Base	Functionality	Description
4000	server/txn	Transaction Undo 
4100	server/txn	Transaction Undo 
4210	server/txn	Transaction Parallel 
4250	server/txn	Transaction List 
4300	space/spcmgmt	Transaction Segment 
4400	txn/lcltx	Transaction Control 
4450	txn/lcltx	distributed transaction control
4500	txn/lcltx	Transaction Block 
4600	space/spcmgmt	Transaction Table 
4800	dict/rowcache	Query Row Cache 
4900	space/spcmgmt	Transaction Monitor 
5000	space/spcmgmt	Transaction Extent 
It is important to try to determine what object is involved in any reproducible problem, then use the analyze command on it. For more information, please refer to the analyze command as detailed in the context of Note 28814.1; in addition, it may be worth using dbverify as discussed in Note 35512.1.
 

6000	Data Layer	The data layer is responsible for maintaining and managing the data in the database tables and indexes. Issues in this area may indicate some kind of physical issue at the object level, and therefore it is important to try to isolate the object and then perform an analyze on the object to validate its structure.
 
Ora-600 Base	Functionality	Description
6000	ram/data 
ram/analyze 
ram/index	data, analyze command and index related activity
7000	ram/object	lob related errors
8000	ram/data	general data access
8110	ram/index	index related
8150	ram/object	general data access
Again, it is important to try to determine what object is involved in any reproducible problem, then use the analyze command on it. For more information, please refer to the analyze command as detailed in the context of Note 28814.1; in addition, it may be worth using dbverify as discussed in Note 35512.1.
 

12000	User/Oracle Interface & SQL Layer Components	This layer governs the user interface with the Oracle server. Problems generated by this layer usually indicate either some kind of presentation or format error in the data received by the server (i.e. the client may have sent incomplete information), or some kind of issue which indicates that the data was received out of sequence.
 
Ora-600 Base	Functionality	Description
12200	progint/kpo 
progint/opi	lob related 
errors at interface level on server side, xa , etc.
12300	progint/if	OCI interface to coordinating global transactions 
12400	sqlexec/rowsrc	table row source access
12600	space/spcmgmt	operations associated with tablespace : alter / create / drop operations ; operations associated with create table / cluster
12700	sqlexec/rowsrc 	bad rowid
13000	dict/if	dictionary access routines associated with kernel compilation
13080	ram/index	kernel Index creation
13080	sqllang/integ	constraint mechanism
13100	progint/opi	archival and Media Recovery component
13200	dict/sqlddl	alter table mechanism
13250	security/audit	audit statement processing
13300	objsupp/objdata	support for handling of object generation and object access
14000	dict/sqlddl	sequence generation
15000	progint/kpo	logon to Oracle
16000	tools/sqlldr	sql loader related
You should try to repeat the issue and, with the use of sql trace, try to isolate where exactly the issue may be occurring within the application.

14000	System Dependent Component internal error values	This layer manages interaction with the OS. Effectively it acts as the glue which allows the Oracle server to interact with the OS. The types of operation which this layer manages are indicated as follows. 
 
Ora-600 Base	Functionality	Description
14000	osds	File access
14100	osds	Concurrency management; 
14200	osds	Process management;
14300	osds	Exception-handler or signal handler management
14500	osds	Memory allocation
15000	security/dac, 
security/logon 
security/ldap	local user access validation; challenge / response activity for remote access validation; auditing operation; any activities associated with granting and revoking of privileges; validation of password with external password file
15100	dict/sqlddl	this component manages operations associated with creating, compiling (altering), renaming, invalidating, and dropping  procedures, functions, and packages.
15160	optim/cbo	cost based optimizer layer is used to determine optimal path to the data based on statistical information available on the relevant tables and indexes.
15190	optim/cbo	cost based optimizer layer. Used in the generation of a new index to determine how the index should be created. Should it be constructed from the table data or from another index.
15200	dict/shrdcurs	used to in creating sharable context area associated with shared cursors
15230	dict/sqlddl	manages the compilation of triggers
15260	dict/dictlkup 
dict/libcache	dictionary lookup and library cache access
15400	server/drv	manages alter system and alter session operations
15410	progint/if	manages compilation of pl/sql packages and procedures
15500	dict/dictlkup	performs dictionary lookup to ensure semantics are correct
15550	sqlexec/execsvc 
sqlexec/rowsrc	hash join execution management;  
parallel row source management
15600	sqlexec/pq	component provides support for Parallel Query operation
15620	repl/snapshots	manages the creation of snapshot or materialized views as well as related snapshot / MV operations
15640	repl/defrdrpc	layer containing various functions for examining the deferred transaction queue and retrieving information
15660	jobqs/jobq	manages the operation of the Job queue background processes
15670	sqlexec/pq	component provides support for Parallel Query operation
15700	sqlexec/pq	component provides support for Parallel Query operation; specifically mechanism for starting up and shutting down query slaves
15800	sqlexec/pq	component provides support for Parallel Query operation
15810	sqlexec/pq	component provides support for Parallel Query operation; specifically functions for creating mechanisms through which Query co-ordinator can communicate with PQ slaves;
15820	sqlexec/pq	component provides support for Parallel Query operation
15850	sqlexec/execsvc	component provides support for the execution of SQL statements
15860	sqlexec/pq	component provides support for Parallel Query operation
16000	loader	sql Loader direct load operation;
16150	loader	this layer is used for 'C' level call outs to direct loader operation;
16200	dict/libcache	this is part of library Cache operation. Amongst other things it manages the dependency of SQL objects and tracks who is permitted to access these objects;
16230	dict/libcache	this component is responsible for managing access to remote objects as part of library Cache operation;
16300	mts/mts	this component relates to MTS (Multi Threaded Server) operation
16400	dict/sqlddl	this layer contains functionality which allows tables to be loaded / truncated and their definitions to be modified. This is part of dictionary operation;
16450	dict/libcache	this layer provides support for multi-instance access to the library cache; this functionality is therefore applicable to OPS environments;
16500	dict/rowcache	this layer provides support to load / cache Oracle's dictionary in memory in the library cache;
16550	sqlexec/fixedtab	this component maps data structures maintained in the Oracle code to fixed tables such that they can be queried using the SQL layer;
16600	dict/libcache	this layer performs management of data structures within the library cache;
16651	dict/libcache	this layer performs management of dictionary related information within library Cache;
16701	dict/libcache	this layer provides library Cache support to support database creation and forms part of the bootstrap process;
17000	dict/libcache	this is the main library Cache manager. This Layer maintains the in memory representation of cached sql statements together will all the necessary support that this demands;
17090	generic/vos	this layer implements error management operations: signalling errors, catching errors, recovering from errors, setting error frames, etc.;
17100	generic/vos	Heap manager. The Heap manager manages the storage of internal data in an orderly and consistent manner. There can be many heaps serving various purposes, and heaps within heaps. Common examples are the SGA heap, UGA heap and the PGA heap. Within a Heap there are consistency markers which aim to ensure that the Heap is always in a consistent state. Heaps are used extensively and are in-memory structures, not on disk.
17200	dict/libcache	this component deals with loading remote library objects into the local library cache with information from the remote database.
17250	dict/libcache	more library cache errors ; functionality for handling pipe operation associated with dbms_pipe
17270	dict/instmgmt	this component manages instantiations of procedures, functions, packages, and cursors in a session. This provides a means to keep track of what has been loaded in the event of process death; 
17300	generic/vos	manages certain types of memory allocation structure.  This functionality is an extension of the Heap manager.
17500	generic/vos	relates to various I/O operations. These relate to async i/o operation,  direct i/o operation and the management of writing buffers from the buffer cache by potentially a number of database writer processes;
17625	dict/libcache	additional library Cache supporting functions
17990	plsql	plsql 'standard' package related issues
18000	txn/lcltx	transaction and savepoint management operations 
19000	optim/cbo	cost based optimizer related operations
20000	ram/index	bitmap index and index related errors.
20400	ram/partnmap	operations on partition related objects
20500	server/rcv	server recovery related operation
21000	repl/defrdrpc,  
repl/snapshot, 
repl/trigger	replication related features
23000	oltp/qs	AQ related errors.
24000	dict/libcache	operations associated with managing stored outlines
25000	server/rcv	tablespace management operations
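Read as a lookup table, the ranges above map the first ORA-600 argument to the layer that raised it: find the largest base not exceeding the code. A minimal sketch, with only a handful of bases transcribed from the table (the labels in parentheses are informal summaries, not Oracle terminology):

```python
import bisect

# A few (base, layer) entries copied from the table above; the full table
# would follow the same shape.
RANGES = [
    (1, "Service Layer (vos)"),
    (2000, "Cache Layer (server/rcv)"),
    (4000, "Transaction Layer (txn)"),
    (6000, "Data Layer (ram)"),
    (12000, "User/Oracle Interface & SQL Layer"),
]
BASES = [base for base, _ in RANGES]

def layer_for(code):
    """Map an ORA-600 first argument to its layer: largest base <= code."""
    i = bisect.bisect_right(BASES, code) - 1
    return RANGES[i][1] if i >= 0 else None
```

For example, an ora-600[4100] falls in the 4000 range, i.e. Transaction Undo territory per the table above.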
Internal Errors Categorised by mnemonic

The following table details the mnemonic error stems which are possible. If you have encountered ora-600[kkjsrj:1], for example, you should look down the Error Mnemonic column (errors are in alphabetical order) until you find the matching stem. In this case, kkj indicates that something unexpected has occurred in job queue operation.
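Since the stems vary in length (kk, kkj, kkjs, ...), matching a mnemonic against the table amounts to a longest-prefix search. A small sketch, with just three stems copied from the table below for illustration:

```python
# Sample stems transcribed from the mnemonic table; the real lookup would
# load the whole table in the same form.
STEMS = {
    "kkj": "jobqs/jobq - job queue operation",
    "kks": "dict/shrdcurs - shared cursors / shared sql",
    "kgh": "vos - Heap Manager",
}

def stem_for(mnemonic):
    """Longest-prefix match of e.g. 'kkjsrj:1' against the stem table."""
    name = mnemonic.split(":")[0]           # drop any ':n' suffix
    for length in range(len(name), 0, -1):  # try longest prefix first
        stem = name[:length]
        if stem in STEMS:
            return stem, STEMS[stem]
    return None
```

With this, the note's own example ora-600[kkjsrj:1] resolves to the kkj stem, i.e. job queue operation.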

Error Mnemonic(s)	Functionality	Description
ain ainp 	ram/index	ain - alter index; ainp -  alter index partition management operation
apacb 	optim/rbo	used by optimizer in connect by processing
atb atbi atbo ctc ctci cvw 	dict/sqlddl	alter table , create table (IOT) or cluster operations as well as create view related operations (with constraint handling functionality)
dbsdrv	sqllang/parse	alter / create database operation
ddfnet 	progint/distrib	various distributed operations on remote dictionary
delexe 	sqlexec/dmldrv	manages the delete statement operation
dix 	ram/index	manages drop index or validate index operation 
dtb 	dict/sqlddl	manages drop table operation
evaa2g evah2p 	dbproc/sqlfunc	various functions involved in evaluating operand outcomes such as: addition, average, OR operator, bitwise AND, bitwise OR, concatenation, as well as Oracle related functions: count(), dump(), etc. The list is extensive.
expcmo expgon 	dbproc/expreval	handles expression evaluation with respect to two operands being equivalent
gra 	security/dac	manages the granting and revoking of privilege rights to a user
gslcsq 	plsldap	support for operations with an LDAP server
insexe 	sqlexec/dmldrv	handles the insert statement operation
jox 	progint/opi	functionality associated with the Java compiler and with the Java runtime environment within the Server
k2c k2d 	progint/distrib	support for database to database operation in distributed environments as well as providing, with respect to the 2-phase commit protocol, a globally unique Database id
k2g k2l	txn/disttx	support for the 2-phase commit protocol and the coordination of the various states in managing the distributed transaction
k2r k2s k2sp 	progint/distrib	k2r - user interface for managing distributed transactions and combining distributed results ; k2s - handles logging on, starting a transaction, ending a transaction and recovering a transaction; k2sp - management of savepoints in a distributed environment.
k2v 	txn/disttx	handles distributed recovery operation
kad 	cartserv/picklercs	handles OCIAnyData implementation 
kau 	ram/data	manages the modification of indexes for inserts, updates and delete operations for IOTs as well as modification of indexes for IOTs
kcb kcbb kcbk kcbl kcbs kcbt kcbw kcbz 	cache	manages Oracle's buffer cache operation as well as operations used by capabilities such as direct load, hash clusters, etc.
kcc kcf 	rcv	manages and coordinates operations on the control file(s)
kcit 	context/trigger	internal trigger functionality 
kck 	rcv	compatibility related checks associated with the compatible parameter
kcl 	cache	background lck process which manages locking in a RAC or parallel server multiple instance environment
kco kcq kcra kcrf kcrfr kcrfw kcrp kcrr kcs kct kcv 	rcv	various buffer cache operations such as quiesce operation, managing fast start IO target, parallel recovery operation, etc.
kd 	ram/data	support for row level dependency checking and some log miner operations
kda 	ram/analyze	manages the analyze command and collection of statistics
kdbl kdc kdd 	ram/data	support for direct load operation, cluster space management and deleting rows
kdg 	ram/analyze	gathers information about the underlying data and is used by the analyze command
kdi kdibc3 kdibco kdibh kdibl kdibo kdibq kdibr kdic kdici kdii kdil kdir kdis kdiss kdit kdk 	ram/index	support of the creation of indexes on tables an IOTs and index look up 
kdl kdlt 	ram/object	lob and temporary lob management
kdo 	ram/data	operations on data such as inserting a row piece or deleting a row piece 
kdrp 	ram/analyze	underlying support for operations provided by the dbms_repair package
kds kdt kdu 	ram/data	operations on data such as retrieving a row and updating existing row data
kdv kdx 	ram/index	functionality for dumping index and managing index blocks
kfc kfd kfg  	asm	support for ASM file and disk operations
kfh kfp kft 	rcv	support for writing to file header and transportable tablespace operations
kgaj kgam kgan kgas kgat kgav kgaz 	argusdbg/argusdbg	support for Java Debug Wire Protocol (JDWP) and debugging facilities
kgbt kgg kgh kghs kghx kgkp	vos	kgbt - support for BTree operations; kgg - generic lists processing; kgh - Heap Manager: managing the internal structures within the SGA / UGA / PGA and ensuring their integrity; kghs - Heap manager with Stream support; kghx - fixed sized shared memory manager; kgkp - generic services scheduling policies
kgl kgl2 kgl3 kgla kglp kglr kgls 	dict/libcache	generic library cache operation 
kgm kgmt 	ilms	support for inter language method services - or calling one language from another
kgrq kgsk kgski kgsn kgss 	vos	support for priority queue and scheduling; capabilities for Numa support;  Service State object manager
kgupa kgupb kgupd0 kgupf kgupg kgupi kgupl kgupm kgupp kgupt kgupx kguq2 kguu 	vos	Service related activities associated with the Process Monitor (PMON); spawning or creating of background processes; debugging; managing process address space; managing the background processes; etc.
kgxp 	vos	inter process communication related functions
kjak kjat kjb kjbl kjbm kjbr kjcc kjcs kjctc kjcts kjcv kjdd kjdm kjdr kjdx kjfc kjfm kjfs kjfz kjg kji kjl kjm kjp kjr kjs kjt kju kjx 	ccl/dlm	dlm related functionality ; associated with RAC or parallel server operation
kjxgf kjxgg kjxgm kjxgn kjxgna kjxgr 	ccl/cgs	provides communication & synchronisation associated with GMS or OPS related functionality as well as name service and OPS Instance Membership Recovery Facility
kjxt 	ccl/dlm	DLM request message management
kjzc kjzd kjzf kjzg kjzm 	ccl/diag	support for diagnosibility amongst OPS related services
kkb 	dict/sqlddl	support for operations which load/change table definitions
kkbl kkbn kkbo 	objsupp/objddl	support for tables with lobs , nested tables and varrays as well as columns with objects
kkdc kkdl kkdo 	dict/dictlkup	support for constraints, dictionary lookup and dictionary support for objects
kke 	optim/cbo	query engine cost engine; provides support functions that provide cost estimates for queries under a number of different circumstances
kkfd 	sqlexec/pq	support for performing parallel query operation
kkfi 	optim/cbo	optimizer support for matching of expressions against functional indexes
kkfr kkfs 	sqlexec/pq	support for rowid range handling as well as for building parallel query query operations
kkj 	jobqs/jobq	job queue operation
kkkd kkki 	dict/dbsched	resource manager related support. Additionally, provides underlying functions provided by dbms_resource_manager and dbms_resource_manager_privs packages
kklr 	dict/sqlddl	provides functions used to manipulate LOGGING and/or RECOVERABLE attributes of an object (non-partitioned table or index or  partitions of a partitioned table or index)
kkm kkmi 	dict/dictlkup	provides various semantic checking functions 
kkn 	ram/analyze	support for the analyze command
kko kkocri 	optim/cbo	Cost based Optimizer operation : generates alternative execution plans in order to find the optimal / quickest access to the data.  Also , support to determine cost and applicability of  scanning a given index in trying to create or rebuild an index or a partition thereof
kkpam kkpap 	ram/partnmap	support for mapping predicate keys expressions to equivalent partitions
kkpo kkpoc kkpod 	dict/partn	support for creation and modification of partitioned objects
kkqg kkqs kkqs1 kkqs2 kkqs3 kkqu kkqv kkqw 	optim/vwsubq	query rewrite operation 
kks kksa kksh kksl kksm 	dict/shrdcurs	support for managing shared cursors/ shared sql
kkt 	dict/sqlddl	support for creating, altering and dropping trigger definitions as well as handling the trigger operation
kkxa 	repl/defrdrpc	underlying support for dbms_defer_query package operations
kkxb 	dict/sqlddl	library cache interface for external tables 
kkxl 	dict/plsicds	underlying support for the dbms_lob package
kkxm 	progint/opi	support for inter language method services
kkxs 	dict/plsicds	underlying support for the dbms_sys_sql package 
kkxt 	repl/trigger	support for replication internal trigger operation
kkxwtp 	progint/opi	entry point into the plsql compiler
kky 	drv	support for alter system/session commands
kkz kkzd kkzf kkzg kkzi kkzj kkzl kkzo kkzp kkzq kkzr kkzu kkzv 	repl/snapshot	support for snapshots or Materialized View validation and operation
kla klc klcli klx 	tools/sqlldr	support for direct path sql loader operation
kmc kmcp kmd kmm kmr 	mts/mts	support for Multi Threaded Server operation (MTS): manage and operate the virtual circuit mechanism, handle the dispatching of messages, administer shared servers, and collect and maintain statistics associated with MTS
knac knafh knaha knahc knahf knahs 	repl/apply	replication apply operation associated with Oracle streams
kncc 	repl/repcache	support for replication related information stored and maintained in library cache
kncd knce 	repl/defrdrpc	replication related enqueue and dequeue of transaction data as well as other queue related operations
kncog 	repl/repcache	support for loading replication object group information into library cache
kni 	repl/trigger	support for replication internal trigger operation
knip knip2 knipi knipl knipr knipu knipu2 knipx 	repl/intpkg	support for replication internal package operation. 
kno 	repl/repobj	support for replication objects 
knp knpc knpcb knpcd knpqc knps 	repl/defrdrpc	operations associated with propagating transactions to a remote node and coordination of this activity.
knst 	repl/stats	replication statistics collection
knt kntg kntx 	repl/trigger	support for replication internal trigger operation
koc 	objmgmt/objcache	support for managing ADTs objects in the OOCI heap
kod 	objmgmt/datamgr	support for persistent storage for objects : for read/write objects, to manage object IDs, and to manage object concurrency and recovery. 
koh 	objmgmt/objcache	object heap manager provides memory allocation services for objects
koi 	objmgmt/objmgr	support for object types
koka 	objsupp/objdata	support for reading images, inserting images, updating images, and deleting images based on object references (REFs).
kokb kokb2 	objsupp/objsql	support for nested table objects
kokc 	objmgmt/objcache	support for pinning , unpinning and freeing objects
kokd 	objsupp/datadrv	driver on the server side for managing objects
koke koke2 koki 	objsupp/objsql	support for managing objects
kokl 	objsupp/objdata	lob access
kokl2 	objsupp/objsql	lob DML and programmatic interface support
kokl3 	objsupp/objdata	object temporary LOB support
kokle kokm 	objsupp/objsql	object SQL evaluation functions
kokn 	objsupp/objname	naming support for objects
koko 	objsupp/objsup	support functions to allow oci/rpi to communicate with Object Management Subsystem (OMS).
kokq koks koks2 koks3 koksr 	objsupp/objsql	query optimisation for objects , semantic checking and semantic rewrite operations
kokt kokt2 kokt3 	objsupp/objddl	object compilation type manager
koku kokv 	objsupp/objsql	support for unparse object operators and object view support
kol kolb kole kolf kolo 	objmgmt/objmgr	support for object Lob buffering , object lob evaluation and object Language/runtime functions for Opaque types
kope2 kopi2 kopo kopp2 kopu koputil kopz	objmgmt/pickler	8.1 engine implementation,  implementation of image ops for 8.1+ image format together with various pickler related support functions
kos 	objsupp/objsup	object Stream interfaces for images/objects
kot kot2 kotg 	objmgmt/typemgr	support for dynamic type operations to create, delete, and  update types.
koxs koxx 	objmgmt/objmgt	object generic image Stream routines and miscellaneous generic object functions
kpcp kpcxlt 	progint/kpc	Kernel programmatic connection pooling and kernel programmatic common type XLT translation routines
kpki 	progint/kpki	kernel programatic interface support
kpls 	cartserv/corecs	support for string formatting operations
kpn 	progint/kpn	support for server to server communication 
kpoal8 kpoaq kpob kpodny kpodp kpods kpokgt kpolob kpolon kpon 	progint/kpo	support for programmatic operations 
kpor 	progint/opi	support for streaming protocol used by replication
kposc 	progint/kpo	support for scrollable cursors
kpotc 	progint/opi	oracle side support functions for setting up trusted external procedure callbacks
kpotx kpov 	progint/kpo	support for managing local and distributed transaction coordination.
kpp2 kpp3 	sqllang/parse	kpp2 - parse routines for dimensions; 
kpp3 - parse support for create/alter/drop summary  statements
kprb kprc 	progint/rpi	support for executing sql efficiently on the Oracle server side as well as for copying data types during rpi operations
kptsc 	progint/twotask	callback functions provided to all streaming operation as part of replication functionality 
kpu kpuc kpucp 	progint/kpu	Oracle kernel side programmatic user interface,  cursor management functions and client side connection pooling support
kqan kqap kqas 	argusdbg/argusdbg	server-side notifiers and callbacks for debug operations. 
kql kqld kqlp 	dict/libcache	SQL Library Cache manager - manages the sharing of sql statements in the shared pool
kqr 	dict/rowcache	row cache management. The row cache consists of a set of facilities to provide fast access to table definitions and locking capabilities.
krbi krbx krby krcr krd krpi 	rcv	Backup and recovery related operations:
krbi - dbms_backup_restore package underlying support; krbx - proxy copy controller; krby - image copy; krcr - Recovery Controlfile Redo; krd - Recover Datafiles (Media & Standby Recovery); krpi - support for the package dbms_pitr
krvg krvt 	rcv/vwr	krvg - support for generation of redo associated with DDL; krvt - support for redo log miner viewer (also known as log miner)
ksa ksdp ksdx kse ksfd ksfh ksfq ksfv ksi ksim ksk ksl ksm ksmd ksmg ksn ksp kspt ksq ksr kss ksst ksu ksut 	vos	support for various kernel associated capabilities
ksx	sqlexec/execsvc	support for query execution associated with temporary tables
ksxa ksxp ksxr 	vos	support for various kernel associated capabilities in relation to OPS or RAC operation
kta 	space/spcmgmt	support for DML locks and temporary tables associated with table access
ktb ktbt ktc 	txn/lcltx	transaction control operations at the block level : locking block, allocating space within the block , freeing up space, etc.
ktec ktef ktehw ktein ktel kteop kteu 	space/spcmgmt	support for extent management operations:
ktec - extent concurrency operations; ktef - extent format; ktehw - extent high water mark operations; ktein - extent information operations; ktel - extent support for sql loader; kteop - extent operations: add extent to segment, delete extent, resize extent, etc.; kteu - redo support for operations changing segment header / extent map
ktf 	txn/lcltx	flashback support
ktfb ktfd ktft ktm 	space/spcmgmt	ktfb - support for bitmapped space manipulation of files/tablespaces;  ktfd - dictionary-based extent management; ktft - support for temporary file manipulation; ktm - SMON operation
ktp ktpr ktr ktri 	txn/lcltx	ktp - support for parallel transaction operation; ktpr - support for parallel transaction recovery; ktr - kernel transaction read consistency;  
ktri - support for dbms_resumable package
ktsa ktsap ktsau ktsb ktscbr ktsf ktsfx ktsi ktsm ktsp ktss ktst ktsx ktt kttm 	space/spcmgmt	support for checking and verifying space usage
ktu ktuc ktur ktusm 	txn/lcltx	internal management of undo and rollback segments
kwqa kwqi kwqic kwqid kwqie kwqit kwqj kwqm kwqn kwqo kwqp kwqs kwqu kwqx 	oltp/qs	support for advanced queuing : 
kwqa - advanced queue administration; kwqi - support for AQ PL/SQL trusted callouts; kwqic - common AQ support functions; kwqid - AQ dequeue support; kwqie - AQ enqueue support; kwqit - time management operation; kwqj - job queue scheduler for propagation; kwqm - multiconsumer queue IOT support; kwqn - queue notifier; kwqo - AQ support for checking instType options; kwqp - queueing propagation; kwqs - statistics handling; kwqu - handles LOB data; kwqx - support for handling transformations
kwrc kwre 	oltp/re	rules engine evaluation
kxcc kxcd kxcs 	sqllang/integ	constraint processing
kxdr	sqlexec/dmldrv	DML driver entrypoint 
kxfp kxfpb kxfq kxfr kxfx 	sqlexec/pq	parallel query support
kxhf kxib 	sqlexec/execsvc	kxhf - support for hash join file and memory management; kxib - index buffering operations
kxs 	dict/instmgmt	support for executing shared cursors
kxti kxto kxtr 	dbproc/trigger	support for trigger operation
kxtt 	ram/partnmap	support for temporary table operations
kxwph 	ram/data	support for managing attributes of the segment of a table / cluster / table-partition
kza 	security/audit	support for auditing operations 
kzar 	security/dac	support for application auditing
kzck 	security/crypto	encryption support
kzd 	security/dac	support for dictionary access by security related functions 
kzec 	security/dbencryption	support inserting and retrieving encrypted objects into and out of the database
kzfa kzft 	security/audit	support for fine grained auditing
kzia 	security/logon	identification and authentication operations
kzp kzra kzrt kzs kzu kzup 	security/dac	security related operations associated with privileges 
msqima msqimb 	sqlexec/sqlgen	support for generating sql statements
ncodef npi npil npixfr 	progint/npi	support for managing remote network connections from within the server itself
oba 	sqllang/outbufal	operator buffer allocation for various types of operators: concatenate, decode, NVL, etc.; the list is extensive.
ocik 	progint/oci	OCI oracle server functions
opiaba opidrv opidsa opidsc opidsi opiexe opifch opiino opilng opipar opipls opirip opitsk opix 	progint/opi	OPI Oracle server functions - these are at the top of the server stack and are called indirectly by the client in order to serve the client request.
orlr 	objmgmt/objmgr	support for C language interfaces to user-defined types (UDTs)
orp 	objmgmt/pickler	oracle's external pickler / opaque type interfaces
pesblt pfri pfrsqc 	plsql/cox	pesblt - pl/sql built in interpreter; pfri - pl/sql runtime; pfrsqc - pl/sql callbacks for array sql and dml with returning
piht 	plsql/gen/utl	support for pl/sql implementation of utl_http package
pirg 	plsql/cli/utl_raw	support for pl/sql implementation of utl_raw package
pism 	plsql/cli/utl_smtp	support for pl/sql implementation of utl_smtp package
pitcb 	plsql/cli/utl_tcp	support for pl/sql implementation of utl_tcp package
piur 	plsql/gen/utl_url	support for pl/sql implementation of utl_url package
plio 	plsql/pkg	pl/sql object instantiation 
plslm 	plsql/cox	support for NCOMP processing
plsm pmuc pmuo pmux 	objmgmt/pol	support for pl/sql handling of collections
prifold priold 	plsql/cox	support to allow rpc forwarding to an older release 
prm 	sqllang/param	parameter handling associated with sql layer
prsa prsc prssz 	sqllang/parse	prsa - parser for alter cluster command; prsc - parser for create database command; prssz - support for parse context to be saved
psdbnd psdevn 	progint/dbpsd	psdbnd - support for managing bind variables; psdevn - support for pl/sql debugger
psdicd 	progint/plsicds	small number of ICD to allow pl/sql to call into 'C' source
psdmsc psdpgi 	progint/dbpsd	psdmsc - pl/sql system dependent miscellaneous functions ; psdpgi - support for opening and closing cursors in pl/sql
psf 	plsql/pls	pl/sql service related functions for instantiating called pl/sql unit in library cache
qbadrv qbaopn 	sqllang/qrybufal	provides allocation of buffer and control structures in query execution 
qcdl qcdo 	dict/dictlkup	qcdl - query compile semantic analysis; qcdo - query compile dictionary support for objects
qci 	dict/shrdcurs	support for SQL language parser and semantic analyser
qcop qcpi qcpi3 qcpi4 qcpi5 	sqllang/parse	support for query compilation parse phase
qcs qcs2 qcs3 qcsji qcso 	dict/dictlkup	support for semantic analysis by SQL compiler
qct qcto 	sqllang/typeconv	qct - query compile type check operations; qcto -  query compile type check operators
qcu 	sqllang/parse	various utilities provided for sql compilation
qecdrv 	sqllang/qryedchk	driver performing high level checks on sql language query capabilities
qerae qerba qerbc qerbi qerbm qerbo qerbt qerbu qerbx qercb qercbi qerco qerdl qerep qerff qerfi qerfl qerfu qerfx qergi qergr qergs qerhc qerhj qeril qerim qerix qerjm qerjo qerle qerli qerlt qerns qeroc qeroi qerpa qerpf qerpx qerrm qerse qerso qersq qerst qertb qertq qerua qerup qerus qervw qerwn qerxt 	sqlexec/rowsrc	row source operators : 
qerae - row source (And-Equal) implementation; qerba - Bitmap Index AND row source; qerbc - bitmap index compaction row source; qerbi - bitmap index creation row source; qerbm - QERB Minus row source; qerbo  - Bitmap Index OR row source; qerbt - bitmap convert row source; qerbu - Bitmap Index Unlimited-OR row source; qerbx - bitmap index access row source; qercb - row source: connect by; qercbi - support for connect by; qerco - count row source; qerdl - row source delete; qerep - explosion row source; qerff - row source fifo buffer; qerfi  - first row row source; qerfl  - filter row source definition; qerfu - row source: for update; qerfx - fixed table row source; qergi - granule iterator row source; qergr - group by rollup row source; qergs - group by sort row source; qerhc - row sources hash clusters; qerhj - row source Hash Join;  qeril  - In-list row source; qerim - Index Maintenance row source; qerix - Index row source; qerjo - row source: join; qerle - linear execution row source implementation; qerli - parallel create index; qerlt - row source populate Table;  qerns  - group by No Sort row source; qeroc - object collection iterator row source; qeroi - extensible indexing query component; qerpa - partition row sources; qerpf - query execution row source: prefetch; qerpx - row source: parallelizer; qerrm - remote row source; qerse - row source: set implementation; qerso - sort row source; qersq - row source for sequence number; qerst  - query execution row sources: statistics; qertb - table row source; qertq  - table queue row source; qerua - row source : union-All; 
qerup - update row source; qerus - upsert row source ; qervw - view row source; qerwn - WINDOW row source; qerxt - external table fetch row source
qes3t qesa qesji qesl qesmm qesmmc 	sqlexec/execsvc	run time support for sql execution
qkacon qkadrv qkajoi qkatab qke qkk qkn qkna qkne 	sqlexec/rwsalloc	SQL query dynamic structure allocation routines
qks3t 	sqlexec/execsvc	query execution service associated with temp table transformation
qksmm qksmms qksop 	sqllang/compsvc	qksmm -  memory management services for the SQL compiler; qksmms - memory management simulation services for the SQL compiler; qksop - query compilation service for operand processing
qkswc 	sqlexec/execsvc	support for the temp table transformation associated with the WITH clause
qmf 	xmlsupp/util	support for ftp server; implements processing of ftp commands
qmr qmrb qmrs 	xmlsupp/resolver	support hierarchical resolver 
qms 	xmlsupp/data	support for storage and retrieval of XOBs
qmurs 	xmlsupp/uri	support for handling URIs
qmx qmxsax 	xmlsupp/data	qmx - xml support; qmxsax - support for handling sax processing
qmxtc 	xmlsupp/sqlsupp	support for ddl  and other operators related to the sql XML support
qmxtgx 	xmlsupp	support for transformation : ADT -> XML
qmxtsk 	xmlsupp/sqlsupp	XMLType support functions 
qsme 	summgmt/dict	summary management expression processing
qsmka qsmkz 	dict/dictlkup	qsmka - support to analyze request in order to determine whether a summary could be created that would be useful; qsmkz - support for create/alter summary semantic analysis 
qsmp qsmq qsmqcsm qsmqutl 	summgmt/dict	qsmp - summary management partition processing; qsmq - summary management dictionary access; qsmqcsm - support for create / drop / alter summary and related dimension operations; qsmqutl - support for summaries 
qsms 	summgmt/advsvr	summary management advisor
qxdid 	objsupp/objddl	support for domain index ddl operations
qxidm 	objsupp/objsql	support for extensible index dml operations
qxidp 	objsupp/objddl	support for domain index ddl partition operations
qxim 	objsupp/objsql	extensible indexing support for objects
qxitex qxopc qxope 	objsupp/objddl	qxitex - support for create / drop indextype; qxopc - execution time support for operator callbacks; qxope - execution time support for operator DDL
qxopq qxuag qxxm 	objsupp/objsql	qxopq - support for queries with user-defined operators; qxuag - support for user defined aggregate processing; qxxm - queries involving external tables 
rfmon rfra rfrdb rfrla rfrm rfrxpt 	drs	implements 9i data guard broker monitor 
rnm 	dict/sqlddl	manages rename statement operation
rpi 	progint/rpi	recursive procedure interface which handles the environment setup where multiple recursive statements are executed from one top-level statement
rwoima 	sqlexec/rwoprnds	row operand operations
rwsima 	sqlexec/rowsrc	row source implementation/retrieval according to the defining query
sdbima 	sqlexec/sort	manages and performs sort operation
selexe 	sqlexec/dmldrv	handles the operation of select statement execution
skgm 	osds	platform specific memory management routines interfacing with O.S. allocation functions
smbima sor 	sqlexec/sort	manages and performs sort operation
sqn 	dict/sqlddl	support for parsing references to sequences
srdima srsima stsima 	sqlexec/sort	manages and performs sort operation
tbsdrv 	space/spcmgmt	operations for executing create / alter / drop tablespace and related supporting functions
ttcclr ttcdrv ttcdty ttcrxh ttcx2y 	progint/twotask	two task common layer which provides high level interaction and negotiation functions for Oracle client when communicating with the server.  It also provides important function of converting client side data / data types into equivalent on the server and vice versa
uixexe ujiexe updexe upsexe 	sqlexec/dmldrv	support for : index maintenance operations, the execution of the update statement and associated actions connected with update as well as the upsert command which combines the operations of update and insert
vop 	optim/vwsubq	view optimisation related functionality
xct 	txn/lcltx	support for the management of transactions and savepoint operations
xpl 	sqlexec/expplan	support for the explain plan command
xty 	sqllang/typeconv	type checking functions
zlke 	security/ols/intext	label security error handling component
}}}
https://community.hortonworks.com/questions/2067/orc-vs-parquet-when-to-use-one-over-the-other.html
<<<
In my mind the two biggest considerations for ORC over Parquet are:

1. Many of the performance improvements provided in the Stinger initiative are dependent on features of the ORC format including block level index for each column. This leads to potentially more efficient I/O allowing Hive to skip reading entire blocks of data if it determines predicate values are not present there. Also the Cost Based Optimizer has the ability to consider column level metadata present in ORC files in order to generate the most efficient graph.

2. ACID transactions are only possible when using ORC as the file format.
<<<
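The ACID point above shows up directly in Hive DDL: transactional tables must be stored as ORC. A minimal sketch, assuming a hypothetical table (names and columns are placeholders, not from the quoted answer):

```sql
-- Hive DDL sketch: ACID tables require the ORC file format
-- (table/column names are placeholders)
CREATE TABLE events_orc (
  id      INT,
  payload STRING
)
CLUSTERED BY (id) INTO 4 BUCKETS        -- bucketing was required for ACID in older Hive releases
STORED AS ORC
TBLPROPERTIES ('transactional' = 'true');

-- a selective predicate lets Hive skip entire ORC stripes/blocks
-- via the per-column min/max indexes mentioned in point 1:
-- SELECT * FROM events_orc WHERE id = 42;
```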

https://hortonworks.com/blog/orcfile-in-hdp-2-better-compression-better-performance/
https://stackoverflow.com/questions/32373460/parquet-vs-orc-vs-orc-with-snappy
http://parquet.apache.org/presentations/
<<showtoc>>


! ORDS for REST API ? 

{{{
I would say ORDS for convenience so you don't have to manually create the JSON API

There's a nice Pluralsight ORDS course out there to get started, with code examples to practice. It also shows how to secure it (OAuth2) and deploy ORDS to a web tier
https://www.pluralsight.com/courses/oracle-rest-data-services

Dan McGhan did a presentation at OOW that shows how to create REST API the manual way using node-oracledb (https://oracle.github.io/node-oracledb/) and also using ORDS. 
Creating RESTful Web Services the Easy Way with Node.js https://www.youtube.com/watch?v=tSW72IlTJGw
code examples: https://github.com/oracle/node-oracledb/issues/962, https://github.com/oracle/oracle-db-examples/tree/master/javascript/rest-api, http://web.archive.org/web/20201128020553/https://jsao.io/2018/03/creating-a-rest-api-with-node-js-and-oracle-database/



And just for completeness, if you are using one of the web frameworks out there like Ember, Angular, React, etc., here's how everything will be glued together from DB to frontend. The example below uses an Ember.js app

Oracle DB <-> REST JSON API (node-oracledb or ORDS) <-> built-in Ember JSONAPIAdapter <-> Ember Data <-> Ember


Just for comparison, if you are using Postgres as the DB and Django for creating the REST API, here's how it will look
Postgresql <-> REST JSON API (Django) <-> built-in Ember JSONAPIAdapter <-> Ember Data <-> Ember


If you are using Postgres as DB and Rails REST API
Postgresql <-> REST JSON API (rails) <-> built-in JSONAPIAdapter <-> Ember Data <-> Ember


Then the old school app architecture using Postgres as DB and Rails ORM
Postgresql <-> ActiveRecord ORM (in a Rails Controller) <-> ActiveModel::Serializers <-> Ember Data (with active-model-adapter) <-> Ember


}}}
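The "no manual JSON API" convenience of ORDS mostly comes from AUTOREST-enabling a schema and its tables via the ORDS PL/SQL package. A sketch under assumed names (the HR schema, EMPLOYEES table, and URL pattern are placeholders):

```sql
-- run as the schema owner (or a suitably privileged user); names are placeholders
BEGIN
  -- expose the schema under /ords/hr/
  ORDS.ENABLE_SCHEMA(
    p_enabled             => TRUE,
    p_schema              => 'HR',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'hr',
    p_auto_rest_auth      => FALSE);  -- secure with OAuth2 before real use

  -- expose one table as a REST resource
  ORDS.ENABLE_OBJECT(
    p_enabled      => TRUE,
    p_schema       => 'HR',
    p_object       => 'EMPLOYEES',
    p_object_type  => 'TABLE',
    p_object_alias => 'employees');
  COMMIT;
END;
/
```

After that, a GET on /ords/hr/employees/ should return the rows as paginated JSON, which is the piece you'd otherwise hand-build with node-oracledb.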



! references 

https://www.oracle.com/database/technologies/databaseappdev-vm.html

https://www.pluralsight.com/courses/oracle-rest-data-services

https://www.thatjeffsmith.com/oracle-rest-data-services-ords/
https://oracle-base.com/articles/misc/articles-misc#ords


Creating a REST API with Node.js and Oracle Database http://web.archive.org/web/20201128020553/https://jsao.io/2018/03/creating-a-rest-api-with-node-js-and-oracle-database/
https://github.com/oracle/oracle-db-examples/tree/master/javascript/rest-api
https://github.com/oracle/node-oracledb/issues/962
Creating RESTful Web Services the Easy Way with Node.js https://www.youtube.com/watch?v=tSW72IlTJGw
https://blogs.oracle.com/author/dan-mcghan-3
https://oracle.github.io/node-oracledb/





https://developer.oracle.com/dsl/haefel-oracle-ruby.html





.







https://blog.acolyer.org/2019/07/12/view-centric-performance-optimization/
https://blog.acolyer.org/2018/06/28/how-_not_-to-structure-your-database-backed-web-applications-a-study-of-performance-bugs-in-the-wild/

View-Centric Performance Optimization for Database-Backed Web Applications https://people.cs.uchicago.edu/~shanlu/paper/panorama.pdf
https://developers.google.com/web/fundamentals/performance/why-performance-matters/








https://www.oracle.com/technical-resources/documentation/fsgbu.html


! batch stack 
Oracle Revenue Management and Billing  https://docs.oracle.com/cd/E87761_01/homepage.htm
https://docs.oracle.com/cd/E87761_01/books/V2.6.0.0.0/Oracle_Revenue_Management_and_Billing_Transaction_Feed_Management_-_Batch_Execution_Guide.pdf


! analytics stack
Oracle Revenue Management and Billing Analytics https://docs.oracle.com/cd/E64452_01/homepage.htm
https://docs.oracle.com/cd/E64452_01/books/V2.8.0.0.0/Oracle_Revenue_Management_and_Billing_Analytics_Installation_Guide.pdf
https://docs.oracle.com/cd/E64452_01/books/V2.8.0.0.0/Oracle_Revenue_Management_and_Billing_Analytics_Admin_Guide.pdf






.
https://forums.oracle.com/forums/thread.jspa?threadID=369320&start=15&tstart=0
http://www.oracle.com/technetwork/database/enterprise-edition/calling-shell-commands-from-plsql-1-1-129519.pdf
<<<
{{{
Below is the list of activities on the OSB project

*** some observations
- 

---

1) OSB installation
    Read the install guide
    Installing and Configuring Oracle Secure Backup 10.2 http://st-curriculum.oracle.com/obe/db/11g/r1/prod/ha/osb10_2install/osb1.htm

2) testing of RMAN backups
    Performing Encrypted Backups with Oracle Secure Backup 10.2 http://st-curriculum.oracle.com/obe/db/11g/r1/prod/ha/osb10_2encrypt/osb2.htm#t3
    Performing Database and File System Backups and Restores Using Oracle Secure Backup http://st-curriculum.oracle.com/obe/db/10g/r2/prod/ha/ob/ob_otn.htm

    - RMAN backs up directly to tape using
      backup incremental level 0 device type sbt_tape database plus archivelog;

    OR 

    - Daily backup of recovery area to tape. I noticed it pulls only new archivelogs to tape
      backup device type sbt_tape recovery area;

    OR

    - Daily backup of recovery area and backup sets to tape <-- this is more promising!!!
      backup device type sbt_tape recovery files;


3) testing of filesystem backups, possible to just pull the RMAN backups created on the filesystem 

4) recovery testing of RMAN backups from tape (direct) 

5) recovery testing of RMAN backups from tape-to-disk

6) media policy creation

7) creation of backup scripts for OSB
}}}
<<<
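The third option above ("backup recovery files") is what the scripted daily sweep would be built around; a sketch only, not tested against this environment (channel and tag names are placeholders):

```
run {
  # daily disk-to-tape sweep: FRA contents plus backup pieces on disk
  allocate channel t1 device type sbt_tape;
  backup tag 'DAILY_SWEEP' recovery files;
  release channel t1;
}
```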

{{{

C:\Documents and Settings\Sopraadmin>sqlplus "/ as sysdba"

SQL*Plus: Release 11.1.0.7.0 - Production on Wed Nov 3 05:59:30 2010

Copyright (c) 1982, 2008, Oracle.  All rights reserved.

Connected to an idle instance.

SQL>
SQL>
SQL> startup
ORACLE instance started.

Total System Global Area  535662592 bytes
Fixed Size                  1348508 bytes
Variable Size             331353188 bytes
Database Buffers          197132288 bytes
Redo Buffers                5828608 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF'


SQL>
SQL>
SQL> select * from v$instance;

INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS     SHU DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
              1 osbtest
PHBSPSERV010
11.1.0.7.0        03-NOV-10 MOUNTED      NO           1 STARTED
ALLOWED    NO  ACTIVE            PRIMARY_INSTANCE   NORMAL    NO


SQL>
SQL>
SQL> select name, status from v$datafile;

NAME
--------------------------------------------------------------------------------
STATUS
-------
C:\ORACLE\ORADATA\OSBTEST\SYSTEM01.DBF
SYSTEM

C:\ORACLE\ORADATA\OSBTEST\SYSAUX01.DBF
ONLINE

C:\ORACLE\ORADATA\OSBTEST\UNDOTBS01.DBF
ONLINE


NAME
--------------------------------------------------------------------------------
STATUS
-------
C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
ONLINE

C:\ORACLE\ORADATA\OSBTEST\EXAMPLE01.DBF
ONLINE


SQL>
SQL> set lines 300
SQL> r
  1* select name, status from v$datafile

NAME
------------------------------------------------------------------------------------------------------------------------------------------------------------------------

STATUS
-------
C:\ORACLE\ORADATA\OSBTEST\SYSTEM01.DBF
SYSTEM

C:\ORACLE\ORADATA\OSBTEST\SYSAUX01.DBF
ONLINE

C:\ORACLE\ORADATA\OSBTEST\UNDOTBS01.DBF
ONLINE


NAME
------------------------------------------------------------------------------------------------------------------------------------------------------------------------

STATUS
-------
C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
ONLINE

C:\ORACLE\ORADATA\OSBTEST\EXAMPLE01.DBF
ONLINE


SQL>
SQL>
SQL> col name format a30
SQL> r
  1* select name, status from v$datafile

NAME                           STATUS
------------------------------ -------
C:\ORACLE\ORADATA\OSBTEST\SYST SYSTEM
EM01.DBF

C:\ORACLE\ORADATA\OSBTEST\SYSA ONLINE
UX01.DBF

C:\ORACLE\ORADATA\OSBTEST\UNDO ONLINE
TBS01.DBF

C:\ORACLE\ORADATA\OSBTEST\USER ONLINE
S01.DBF

NAME                           STATUS
------------------------------ -------

C:\ORACLE\ORADATA\OSBTEST\EXAM ONLINE
PLE01.DBF


SQL>
SQL>
SQL> col name format a50
SQL> r
  1* select name, status from v$datafile

NAME                                               STATUS
-------------------------------------------------- -------
C:\ORACLE\ORADATA\OSBTEST\SYSTEM01.DBF             SYSTEM
C:\ORACLE\ORADATA\OSBTEST\SYSAUX01.DBF             ONLINE
C:\ORACLE\ORADATA\OSBTEST\UNDOTBS01.DBF            ONLINE
C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF              ONLINE
C:\ORACLE\ORADATA\OSBTEST\EXAMPLE01.DBF            ONLINE

SQL>
SQL>
SQL>
SQL>
SQL>
SQL>
SQL> desc v$datafile
 Name                                                                                                                                                                  N
 -----------------------------------------------------------------------------------------------------------------------------------------------------------------------

 FILE#
 CREATION_CHANGE#
 CREATION_TIME
 TS#
 RFILE#
 STATUS
 ENABLED
 CHECKPOINT_CHANGE#
 CHECKPOINT_TIME
 UNRECOVERABLE_CHANGE#
 UNRECOVERABLE_TIME
 LAST_CHANGE#
 LAST_TIME
 OFFLINE_CHANGE#
 ONLINE_CHANGE#
 ONLINE_TIME
 BYTES
 BLOCKS
 CREATE_BYTES
 BLOCK_SIZE
 NAME
 PLUGGED_IN
 BLOCK1_OFFSET
 AUX_NAME
 FIRST_NONLOGGED_SCN
 FIRST_NONLOGGED_TIME
 FOREIGN_DBID
 FOREIGN_CREATION_CHANGE#
 FOREIGN_CREATION_TIME
 PLUGGED_READONLY
 PLUGIN_CHANGE#
 PLUGIN_RESETLOGS_CHANGE#
 PLUGIN_RESETLOGS_TIME

SQL> select * from v$recover_file;

     FILE# ONLINE  ONLINE_ ERROR                                                                CHANGE# TIME
---------- ------- ------- ----------------------------------------------------------------- ---------- ---------
         4 ONLINE  ONLINE  FILE NOT FOUND                                                             0

SQL>
SQL>
SQL>
SQL>
SQL> recover datafile 4;
ORA-00283: recovery session canceled due to errors
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF'
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF'


SQL>
SQL>
SQL>
SQL>
SQL> alter database open;

Database altered.

SQL> select * from v$recover_file;

no rows selected

SQL>
SQL>
SQL> create table test1 as select * from dba_objects;

Table created.

SQL>
SQL>
SQL>
SQL> alter system switch logfile;

System altered.

SQL>
SQL>
SQL>
SQL>
SQL>
SQL>
SQL>
SQL>
SQL> alter system switch logfile;

System altered.

SQL> alter system switch logfile;

System altered.

SQL> shutdown abort
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  535662592 bytes
Fixed Size                  1348508 bytes
Variable Size             331353188 bytes
Database Buffers          197132288 bytes
Redo Buffers                5828608 bytes
Database mounted.
Database opened.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  535662592 bytes
Fixed Size                  1348508 bytes
Variable Size             331353188 bytes
Database Buffers          197132288 bytes
Redo Buffers                5828608 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF'


SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01113: file 4 needs media recovery
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF'


SQL> alter database open;

Database altered.

SQL> select count(*) from test1;

  COUNT(*)
----------
     69614

SQL>



RMAN SESSION
================================================================================

RMAN> list backup of database summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
1       B  0  A *           28-OCT-10       1       2       YES        TAG20101028T065752
4       B  0  A *           28-OCT-10       1       2       YES        TAG20101028T070243
8       B  F  A SBT_TAPE    28-OCT-10       1       1       NO         TEST4
9       B  F  A SBT_TAPE    28-OCT-10       1       1       NO         TEST5
10      B  0  A SBT_TAPE    28-OCT-10       1       1       NO         TEST6
12      B  0  A SBT_TAPE    28-OCT-10       1       1       NO         TAG20101028T083340
14      B  0  A *           03-NOV-10       1       2       YES        TAG20101103T035736
19      B  0  A SBT_TAPE    03-NOV-10       1       1       NO         TAG20101103T042106
23      B  0  A *           03-NOV-10       1       2       YES        TAG20101103T042801
30      B  1  A *           03-NOV-10       1       2       YES        TAG20101103T052913

RMAN>

RMAN>

RMAN> exit


Recovery Manager complete.

C:\>
C:\>
C:\>rman target /

Recovery Manager: Release 11.1.0.7.0 - Production on Wed Nov 3 06:01:24 2010

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database: OSBTEST (DBID=3880221928, not open)

RMAN>

RMAN>

RMAN> restore datafile 4;

Starting restore at 03-NOV-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=155 device type=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=151 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Secure Backup

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
channel ORA_DISK_1: reading from backup piece C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-16LS21MH_1_1
channel ORA_DISK_1: ORA-19870: error while restoring backup piece C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-16LS21MH_1_1
ORA-19505: failed to identify file "C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-16LS21MH_1_1"
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.

channel ORA_DISK_1: failover to duplicate backup on device SBT_TAPE
channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00004 to C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
channel ORA_SBT_TAPE_1: reading from backup piece 16ls21mh_1_2
channel ORA_SBT_TAPE_1: piece handle=16ls21mh_1_2 tag=TAG20101103T042801
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:02:15
Finished restore at 03-NOV-10

RMAN>

RMAN>

RMAN> recover datafile 4;

Starting recover at 03-NOV-10
using channel ORA_DISK_1
using channel ORA_SBT_TAPE_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00004: C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
channel ORA_DISK_1: reading from backup piece C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-1ELS259A_1_1
channel ORA_DISK_1: ORA-19870: error while restoring backup piece C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-1ELS259A_1_1
ORA-19505: failed to identify file "C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-1ELS259A_1_1"
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.

channel ORA_DISK_1: failover to duplicate backup on device SBT_TAPE
channel ORA_SBT_TAPE_1: starting incremental datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00004: C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
channel ORA_SBT_TAPE_1: reading from backup piece 1els259a_1_2
channel ORA_SBT_TAPE_1: piece handle=1els259a_1_2 tag=TAG20101103T052913
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:01:05

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 03-NOV-10

RMAN> exit


Recovery Manager complete.

C:\>rman target /

Recovery Manager: Release 11.1.0.7.0 - Production on Wed Nov 3 06:08:25 2010

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database: OSBTEST (DBID=3880221928)

RMAN>

RMAN>

RMAN>

RMAN> backup device type sbt_tape recovery files;

Starting backup at 03-NOV-10
using target database control file instead of recovery catalog
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=152 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Secure Backup
specification does not match any datafile copy in the repository
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_387_6DHMHOHV_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_388_6DJHMSCH_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_389_6DKDNSNX_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_390_6DKJOW98_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_391_6DKPBFO0_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_392_6DLCFW7Y_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_393_6DLO7W6O_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_394_6DLO8284_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_395_6DLOZJKC_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_396_6DLP1RR7_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_397_6DLP4C58_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_398_6DLQVS1F_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_399_6DLSG4J2_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_400_6DLVCC51_.ARC; already backed up 4 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_401_6DMPTGDZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_402_6DN53SQ7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_403_6DNBPQDD_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_404_6DNXNSQK_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_405_6DORDFX3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_406_6DPNJQGZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_407_6DPSL8RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_408_6DPZ407C_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_409_6DQQOW6P_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_410_6DR9ODQ0_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_411_6DS0YS5K_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_412_6DSG0WSO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_413_6DSMWSGN_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_414_6DT12CKT_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_415_6DTVZHTY_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_416_6DVO555M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_417_6DW2HM13_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_418_6DW8D7SM_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_419_6DX1O0J7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_420_6DY4YX2O_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_421_6DYW68KR_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_422_6DYW7O4M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_423_6DZKYR64_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_424_6F0O48W3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_425_6F1JL1T8_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_426_6F1JOWCO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_427_6F1THQ8F_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_428_6F28WM5V_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_429_6F2924RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_430_6F2B7XRK_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_431_6F2BGSRO_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_432_6F2BOOD3_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_433_6F2BV5OH_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_434_6F2FZC4D_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2HGOOW_.ARC; already backed up 1 time(s)
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 11/03/2010 06:08:43
RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
ORA-19625: error identifying file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2HZ807_.ARC
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.

RMAN>

RMAN>

RMAN>

RMAN>

RMAN> backup device type sbt_tape recovery files;

Starting backup at 03-NOV-10
using channel ORA_SBT_TAPE_1
specification does not match any datafile copy in the repository
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_387_6DHMHOHV_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_388_6DJHMSCH_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_389_6DKDNSNX_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_390_6DKJOW98_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_391_6DKPBFO0_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_392_6DLCFW7Y_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_393_6DLO7W6O_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_394_6DLO8284_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_395_6DLOZJKC_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_396_6DLP1RR7_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_397_6DLP4C58_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_398_6DLQVS1F_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_399_6DLSG4J2_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_400_6DLVCC51_.ARC; already backed up 4 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_401_6DMPTGDZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_402_6DN53SQ7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_403_6DNBPQDD_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_404_6DNXNSQK_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_405_6DORDFX3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_406_6DPNJQGZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_407_6DPSL8RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_408_6DPZ407C_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_409_6DQQOW6P_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_410_6DR9ODQ0_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_411_6DS0YS5K_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_412_6DSG0WSO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_413_6DSMWSGN_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_414_6DT12CKT_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_415_6DTVZHTY_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_416_6DVO555M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_417_6DW2HM13_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_418_6DW8D7SM_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_419_6DX1O0J7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_420_6DY4YX2O_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_421_6DYW68KR_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_422_6DYW7O4M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_423_6DZKYR64_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_424_6F0O48W3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_425_6F1JL1T8_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_426_6F1JOWCO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_427_6F1THQ8F_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_428_6F28WM5V_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_429_6F2924RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_430_6F2B7XRK_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_431_6F2BGSRO_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_432_6F2BOOD3_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_433_6F2BV5OH_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_434_6F2FZC4D_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2HGOOW_.ARC; already backed up 1 time(s)
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 11/03/2010 06:09:02
RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
ORA-19625: error identifying file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2HZ807_.ARC
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.

RMAN>

RMAN>

RMAN> backup device type sbt_tape recovery files;

Starting backup at 03-NOV-10
using channel ORA_SBT_TAPE_1
specification does not match any datafile copy in the repository
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_387_6DHMHOHV_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_388_6DJHMSCH_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_389_6DKDNSNX_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_390_6DKJOW98_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_391_6DKPBFO0_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_392_6DLCFW7Y_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_393_6DLO7W6O_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_394_6DLO8284_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_395_6DLOZJKC_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_396_6DLP1RR7_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_397_6DLP4C58_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_398_6DLQVS1F_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_399_6DLSG4J2_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_400_6DLVCC51_.ARC; already backed up 4 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_401_6DMPTGDZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_402_6DN53SQ7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_403_6DNBPQDD_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_404_6DNXNSQK_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_405_6DORDFX3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_406_6DPNJQGZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_407_6DPSL8RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_408_6DPZ407C_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_409_6DQQOW6P_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_410_6DR9ODQ0_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_411_6DS0YS5K_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_412_6DSG0WSO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_413_6DSMWSGN_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_414_6DT12CKT_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_415_6DTVZHTY_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_416_6DVO555M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_417_6DW2HM13_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_418_6DW8D7SM_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_419_6DX1O0J7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_420_6DY4YX2O_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_421_6DYW68KR_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_422_6DYW7O4M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_423_6DZKYR64_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_424_6F0O48W3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_425_6F1JL1T8_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_426_6F1JOWCO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_427_6F1THQ8F_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_428_6F28WM5V_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_429_6F2924RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_430_6F2B7XRK_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_431_6F2BGSRO_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_432_6F2BOOD3_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_433_6F2BV5OH_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_434_6F2FZC4D_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2HGOOW_.ARC; already backed up 1 time(s)
skipping backup set key 1; already backed up 1 time(s)
skipping backup set key 2; already backed up 1 time(s)
skipping backup set key 3; already backed up 1 time(s)
skipping backup set key 4; already backed up 1 time(s)
skipping backup set key 5; already backed up 1 time(s)
skipping backup set key 6; already backed up 1 time(s)
skipping backup set key 13; already backed up 1 time(s)
skipping backup set key 14; already backed up 1 time(s)
skipping backup set key 15; already backed up 1 time(s)
skipping backup set key 16; already backed up 1 time(s)
skipping backup set key 17; already backed up 1 time(s)
skipping backup set key 22; already backed up 1 time(s)
skipping backup set key 23; already backed up 1 time(s)
skipping backup set key 24; already backed up 1 time(s)
skipping backup set key 25; already backed up 1 time(s)
skipping backup set key 30; already backed up 1 time(s)
skipping backup set key 31; already backed up 1 time(s)
channel ORA_SBT_TAPE_1: starting archived log backup set
channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=436 RECID=433 STAMP=734075800
input archived log thread=1 sequence=437 RECID=434 STAMP=734076483
input archived log thread=1 sequence=438 RECID=435 STAMP=734076495
input archived log thread=1 sequence=439 RECID=436 STAMP=734076498
channel ORA_SBT_TAPE_1: starting piece 1 at 03-NOV-10
channel ORA_SBT_TAPE_1: finished piece 1 at 03-NOV-10
piece handle=1jls27la_1_1 tag=TAG20101103T060944 comment=API Version 2.0,MMS Version 10.3.0.2
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:01:55
Finished backup at 03-NOV-10

Starting Control File and SPFILE Autobackup at 03-NOV-10
piece handle=c-3880221928-20101103-08 comment=API Version 2.0,MMS Version 10.3.0.2
Finished Control File and SPFILE Autobackup at 03-NOV-10

RMAN> backup device type sbt_tape recovery files;

Starting backup at 03-NOV-10
using channel ORA_SBT_TAPE_1
specification does not match any datafile copy in the repository
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_387_6DHMHOHV_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_388_6DJHMSCH_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_389_6DKDNSNX_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_390_6DKJOW98_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_391_6DKPBFO0_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_392_6DLCFW7Y_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_393_6DLO7W6O_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_394_6DLO8284_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_395_6DLOZJKC_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_396_6DLP1RR7_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_397_6DLP4C58_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_398_6DLQVS1F_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_399_6DLSG4J2_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_400_6DLVCC51_.ARC; already backed up 4 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_401_6DMPTGDZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_402_6DN53SQ7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_403_6DNBPQDD_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_404_6DNXNSQK_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_405_6DORDFX3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_406_6DPNJQGZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_407_6DPSL8RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_408_6DPZ407C_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_409_6DQQOW6P_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_410_6DR9ODQ0_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_411_6DS0YS5K_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_412_6DSG0WSO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_413_6DSMWSGN_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_414_6DT12CKT_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_415_6DTVZHTY_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_416_6DVO555M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_417_6DW2HM13_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_418_6DW8D7SM_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_419_6DX1O0J7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_420_6DY4YX2O_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_421_6DYW68KR_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_422_6DYW7O4M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_423_6DZKYR64_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_424_6F0O48W3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_425_6F1JL1T8_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_426_6F1JOWCO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_427_6F1THQ8F_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_428_6F28WM5V_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_429_6F2924RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_430_6F2B7XRK_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_431_6F2BGSRO_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_432_6F2BOOD3_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_433_6F2BV5OH_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_434_6F2FZC4D_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2HGOOW_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2HZ807_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_437_6F2JNM41_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_438_6F2JNZBW_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_439_6F2JO2TJ_.ARC; already backed up 1 time(s)
skipping backup set key 1; already backed up 1 time(s)
skipping backup set key 2; already backed up 1 time(s)
skipping backup set key 3; already backed up 1 time(s)
skipping backup set key 4; already backed up 1 time(s)
skipping backup set key 5; already backed up 1 time(s)
skipping backup set key 6; already backed up 1 time(s)
skipping backup set key 13; already backed up 1 time(s)
skipping backup set key 14; already backed up 1 time(s)
skipping backup set key 15; already backed up 1 time(s)
skipping backup set key 16; already backed up 1 time(s)
skipping backup set key 17; already backed up 1 time(s)
skipping backup set key 22; already backed up 1 time(s)
skipping backup set key 23; already backed up 1 time(s)
skipping backup set key 24; already backed up 1 time(s)
skipping backup set key 25; already backed up 1 time(s)
skipping backup set key 30; already backed up 1 time(s)
skipping backup set key 31; already backed up 1 time(s)
Finished backup at 03-NOV-10

RMAN>

RMAN>

RMAN> exit


Recovery Manager complete.

C:\>rman target /

Recovery Manager: Release 11.1.0.7.0 - Production on Wed Nov 3 06:15:08 2010

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database: OSBTEST (DBID=3880221928, not open)

RMAN> restore datafile 4;

Starting restore at 03-NOV-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=156 device type=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=151 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Secure Backup

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
channel ORA_DISK_1: reading from backup piece C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-16LS21MH_1_1
channel ORA_DISK_1: ORA-19870: error while restoring backup piece C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-16LS21MH_1_1
ORA-19505: failed to identify file "C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-16LS21MH_1_1"
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.

channel ORA_DISK_1: failover to duplicate backup on device SBT_TAPE
channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00004 to C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
channel ORA_SBT_TAPE_1: reading from backup piece 16ls21mh_1_2
channel ORA_SBT_TAPE_1: piece handle=16ls21mh_1_2 tag=TAG20101103T042801
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:01:45
Finished restore at 03-NOV-10

RMAN> recover datafile 4;

Starting recover at 03-NOV-10
using channel ORA_DISK_1
using channel ORA_SBT_TAPE_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00004: C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
channel ORA_DISK_1: reading from backup piece C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-1ELS259A_1_1
channel ORA_DISK_1: ORA-19870: error while restoring backup piece C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-1ELS259A_1_1
ORA-19505: failed to identify file "C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\BACKUPSET\OSBTEST-20101103-1ELS259A_1_1"
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.

channel ORA_DISK_1: failover to duplicate backup on device SBT_TAPE
channel ORA_SBT_TAPE_1: starting incremental datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00004: C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
channel ORA_SBT_TAPE_1: reading from backup piece 1els259a_1_2
channel ORA_SBT_TAPE_1: piece handle=1els259a_1_2 tag=TAG20101103T052913
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:01:05

starting media recovery

archived log for thread 1 with sequence 440 is already on disk as file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_440_6F2K0239_.ARC
channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=435
channel ORA_SBT_TAPE_1: reading from backup piece 1hls26c7_1_1
channel ORA_SBT_TAPE_1: piece handle=1hls26c7_1_1 tag=TAG20101103T054751
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:01:05
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2KCN7F_.ARC thread=1 sequence=435
channel default: deleting archived log(s)
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2KCN7F_.ARC RECID=438 STAMP=734077220
channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=436
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=437
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=438
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=439
channel ORA_SBT_TAPE_1: reading from backup piece 1jls27la_1_1
channel ORA_SBT_TAPE_1: piece handle=1jls27la_1_1 tag=TAG20101103T060944
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:01:05
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2KFOXG_.ARC thread=1 sequence=436
channel default: deleting archived log(s)
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2KFOXG_.ARC RECID=439 STAMP=734077286
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_437_6F2KFOS1_.ARC thread=1 sequence=437
channel default: deleting archived log(s)
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_437_6F2KFOS1_.ARC RECID=442 STAMP=734077287
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_438_6F2KFPJ8_.ARC thread=1 sequence=438
channel default: deleting archived log(s)
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_438_6F2KFPJ8_.ARC RECID=441 STAMP=734077286
channel default: deleting archived log(s)
archived log file name=C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_439_6F2KFP0N_.ARC RECID=440 STAMP=734077286
media recovery complete, elapsed time: 00:00:03
Finished recover at 03-NOV-10

RMAN> backup device type sbt_tape recovery files;

Starting backup at 03-NOV-10
released channel: ORA_DISK_1
using channel ORA_SBT_TAPE_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 11/03/2010 06:29:28
RMAN-20021: database not set

RMAN> exit


Recovery Manager complete.

C:\>rman target /

Recovery Manager: Release 11.1.0.7.0 - Production on Wed Nov 3 06:29:31 2010

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database: OSBTEST (DBID=3880221928)

RMAN> backup device type sbt_tape recovery files;

Starting backup at 03-NOV-10
using target database control file instead of recovery catalog
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=152 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Secure Backup
specification does not match any datafile copy in the repository
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_387_6DHMHOHV_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_388_6DJHMSCH_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_389_6DKDNSNX_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_390_6DKJOW98_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_27\O1_MF_1_391_6DKPBFO0_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_392_6DLCFW7Y_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_393_6DLO7W6O_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_394_6DLO8284_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_395_6DLOZJKC_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_396_6DLP1RR7_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_397_6DLP4C58_.ARC; already backed up 6 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_398_6DLQVS1F_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_399_6DLSG4J2_.ARC; already backed up 5 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_400_6DLVCC51_.ARC; already backed up 4 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_401_6DMPTGDZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_402_6DN53SQ7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_28\O1_MF_1_403_6DNBPQDD_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_404_6DNXNSQK_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_405_6DORDFX3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_406_6DPNJQGZ_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_407_6DPSL8RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_29\O1_MF_1_408_6DPZ407C_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_409_6DQQOW6P_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_410_6DR9ODQ0_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_411_6DS0YS5K_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_412_6DSG0WSO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_30\O1_MF_1_413_6DSMWSGN_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_414_6DT12CKT_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_415_6DTVZHTY_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_416_6DVO555M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_417_6DW2HM13_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_10_31\O1_MF_1_418_6DW8D7SM_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_419_6DX1O0J7_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_420_6DY4YX2O_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_421_6DYW68KR_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_01\O1_MF_1_422_6DYW7O4M_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_423_6DZKYR64_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_424_6F0O48W3_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_425_6F1JL1T8_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_426_6F1JOWCO_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_02\O1_MF_1_427_6F1THQ8F_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_428_6F28WM5V_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_429_6F2924RX_.ARC; already backed up 3 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_430_6F2B7XRK_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_431_6F2BGSRO_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_432_6F2BOOD3_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_433_6F2BV5OH_.ARC; already backed up 2 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_434_6F2FZC4D_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2HGOOW_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2HZ807_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_437_6F2JNM41_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_438_6F2JNZBW_.ARC; already backed up 1 time(s)
skipping archived log file C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_439_6F2JO2TJ_.ARC; already backed up 1 time(s)
skipping backup set key 1; already backed up 1 time(s)
skipping backup set key 2; already backed up 1 time(s)
skipping backup set key 3; already backed up 1 time(s)
skipping backup set key 4; already backed up 1 time(s)
skipping backup set key 5; already backed up 1 time(s)
skipping backup set key 6; already backed up 1 time(s)
skipping backup set key 13; already backed up 1 time(s)
skipping backup set key 14; already backed up 1 time(s)
skipping backup set key 15; already backed up 1 time(s)
skipping backup set key 16; already backed up 1 time(s)
skipping backup set key 17; already backed up 1 time(s)
skipping backup set key 22; already backed up 1 time(s)
skipping backup set key 23; already backed up 1 time(s)
skipping backup set key 24; already backed up 1 time(s)
skipping backup set key 25; already backed up 1 time(s)
skipping backup set key 30; already backed up 1 time(s)
skipping backup set key 31; already backed up 1 time(s)
channel ORA_SBT_TAPE_1: starting archived log backup set
channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=440 RECID=437 STAMP=734076850
channel ORA_SBT_TAPE_1: starting piece 1 at 03-NOV-10
channel ORA_SBT_TAPE_1: finished piece 1 at 03-NOV-10
piece handle=1lls28qf_1_1 tag=TAG20101103T062935 comment=API Version 2.0,MMS Version 10.3.0.2
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:02:15
Finished backup at 03-NOV-10

Starting Control File and SPFILE Autobackup at 03-NOV-10
piece handle=c-3880221928-20101103-09 comment=API Version 2.0,MMS Version 10.3.0.2
Finished Control File and SPFILE Autobackup at 03-NOV-10

RMAN>



ALERT LOG
===========================================

Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on. 
IMODE=BR
ILAT =18
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up ORACLE RDBMS Version: 11.1.0.7.0.
Using parameter settings in server-side spfile C:\APP\ORACLE\PRODUCT\11.1.0\DB_1\DATABASE\SPFILEOSBTEST.ORA
System parameters with non-default values:
  processes                = 150
  resource_limit           = TRUE
  nls_territory            = "PHILIPPINES"
  memory_target            = 820M
  control_files            = "C:\ORACLE\ORADATA\OSBTEST\CONTROL01.CTL"
  control_files            = "C:\ORACLE\ORADATA\OSBTEST\CONTROL02.CTL"
  control_files            = "C:\ORACLE\ORADATA\OSBTEST\CONTROL03.CTL"
  db_block_size            = 8192
  compatible               = "11.1.0.0.0"
  log_archive_format       = "ARC%S_%R.%T"
  db_recovery_file_dest    = "\oracle\flash_recovery_area"
  db_recovery_file_dest_size= 40000M
  undo_tablespace          = "UNDOTBS1"
  remote_login_passwordfile= "EXCLUSIVE"
  db_domain                = "epassport.ph"
  dispatchers              = "(PROTOCOL=TCP) (SERVICE=osbtestXDB)"
  audit_file_dest          = "C:\APP\ORACLE\ADMIN\OSBTEST\ADUMP"
  audit_trail              = "NONE"
  db_name                  = "osbtest"
  open_cursors             = 300
  diagnostic_dest          = "C:\APP\ORACLE"
Wed Nov 03 05:56:30 2010
PMON started with pid=2, OS id=77428 
Wed Nov 03 05:56:30 2010
VKTM started with pid=3, OS id=78532 at elevated priority
Wed Nov 03 05:56:30 2010
DIAG started with pid=4, OS id=76744 
VKTM running at (20)ms precision
Wed Nov 03 05:56:30 2010
DBRM started with pid=5, OS id=72456 
Wed Nov 03 05:56:30 2010
PSP0 started with pid=6, OS id=75816 
Wed Nov 03 05:56:30 2010
DIA0 started with pid=7, OS id=77752 
Wed Nov 03 05:56:30 2010
MMAN started with pid=8, OS id=75856 
Wed Nov 03 05:56:30 2010
DBW0 started with pid=9, OS id=78564 
Wed Nov 03 05:56:30 2010
LGWR started with pid=10, OS id=74368 
Wed Nov 03 05:56:30 2010
CKPT started with pid=11, OS id=76372 
Wed Nov 03 05:56:30 2010
SMON started with pid=12, OS id=77512 
Wed Nov 03 05:56:30 2010
RECO started with pid=13, OS id=78396 
Wed Nov 03 05:56:30 2010
MMON started with pid=14, OS id=79724 
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 1 shared server(s) ...
ORACLE_BASE from environment = C:\app\oracle
Wed Nov 03 05:56:30 2010
ALTER DATABASE   MOUNT
Wed Nov 03 05:56:30 2010
MMNL started with pid=15, OS id=73768 
Wed Nov 03 05:56:34 2010
Sweep Incident[6004]: completed
Sweep Incident[5155]: completed
Sweep Incident[5154]: completed
Setting recovery target incarnation to 2
Successful mount of redo thread 1, with mount id 3888072718
Database mounted in Exclusive Mode
Lost write protection disabled
Completed: ALTER DATABASE   MOUNT
Wed Nov 03 05:56:35 2010
ALTER DATABASE OPEN
Sweep Incident[5153]: completed
Beginning crash recovery of 1 threads
 parallel recovery started with 7 processes
Started redo scan
Completed redo scan
 8 redo blocks read, 3 data blocks need recovery
Started redo application at
 Thread 1: logseq 436, block 372
Recovery of Online Redo Log: Thread 1 Group 1 Seq 436 Reading mem 0
  Mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO01.LOG
Completed redo application of 0.00MB
Completed crash recovery at
 Thread 1: logseq 436, block 380, scn 12289793
 3 data blocks read, 3 data blocks written, 8 redo blocks read
LGWR: STARTING ARCH PROCESSES
Wed Nov 03 05:56:38 2010
ARC0 started with pid=21, OS id=77892 
Wed Nov 03 05:56:38 2010
ARC1 started with pid=27, OS id=79792 
Wed Nov 03 05:56:38 2010
ARC2 started with pid=28, OS id=78260 
ARC0: Archival started
Wed Nov 03 05:56:38 2010
ARC3 started with pid=29, OS id=79212 
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 advanced to log sequence 437 (thread open)
Thread 1 opened at log sequence 437
ARC0: Becoming the 'no FAL' ARCH
  Current log# 2 seq# 437 mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO02.LOG
ARC0: Becoming the 'no SRL' ARCH
Successful open of redo thread 1
ARC3: Becoming the heartbeat ARCH
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
SMON: enabling cache recovery
db_recovery_file_dest_size of 40000 MB is 44.23% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Successfully onlined Undo Tablespace 2.
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Database Characterset is AL32UTF8
Opening with internal Resource Manager plan 
Starting background process FBDA
Wed Nov 03 05:56:41 2010
FBDA started with pid=30, OS id=77832 
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
Wed Nov 03 05:56:42 2010
QMNC started with pid=31, OS id=77904 
Wed Nov 03 05:56:56 2010
Completed: ALTER DATABASE OPEN
Stopping background process FBDA
Shutting down instance: further logons disabled
Stopping background process QMNC
Wed Nov 03 05:57:06 2010
Stopping background process MMNL
Stopping background process MMON
Shutting down instance (immediate)
License high water mark = 8
Waiting for dispatcher 'D000' to shutdown
All dispatchers and shared servers shutdown
ALTER DATABASE CLOSE NORMAL
Wed Nov 03 05:57:10 2010
SMON: disabling tx recovery
SMON: disabling cache recovery
Wed Nov 03 05:57:11 2010
Shutting down archive processes
Archiving is disabled
Wed Nov 03 05:57:11 2010
ARCH shutting down
Wed Nov 03 05:57:11 2010
ARCH shutting down
ARC0: Archival stopped
ARC1: Archival stopped
Wed Nov 03 05:57:11 2010
ARCH shutting down
ARC2: Archival stopped
Wed Nov 03 05:57:11 2010
ARCH shutting down
ARC3: Archival stopped
Thread 1 closed at log sequence 437
Successful close of redo thread 1
Completed: ALTER DATABASE CLOSE NORMAL
ALTER DATABASE DISMOUNT
Completed: ALTER DATABASE DISMOUNT
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Wed Nov 03 05:57:16 2010
Stopping background process VKTM: 
Archiving is disabled
Archive process shutdown avoided: 0 active
Wed Nov 03 05:57:18 2010
Instance shutdown complete
Wed Nov 03 05:59:32 2010
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on. 
IMODE=BR
ILAT =18
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up ORACLE RDBMS Version: 11.1.0.7.0.
Using parameter settings in server-side spfile C:\APP\ORACLE\PRODUCT\11.1.0\DB_1\DATABASE\SPFILEOSBTEST.ORA
System parameters with non-default values:
  processes                = 150
  resource_limit           = TRUE
  nls_territory            = "PHILIPPINES"
  memory_target            = 820M
  control_files            = "C:\ORACLE\ORADATA\OSBTEST\CONTROL01.CTL"
  control_files            = "C:\ORACLE\ORADATA\OSBTEST\CONTROL02.CTL"
  control_files            = "C:\ORACLE\ORADATA\OSBTEST\CONTROL03.CTL"
  db_block_size            = 8192
  compatible               = "11.1.0.0.0"
  log_archive_format       = "ARC%S_%R.%T"
  db_recovery_file_dest    = "\oracle\flash_recovery_area"
  db_recovery_file_dest_size= 40000M
  undo_tablespace          = "UNDOTBS1"
  remote_login_passwordfile= "EXCLUSIVE"
  db_domain                = "epassport.ph"
  dispatchers              = "(PROTOCOL=TCP) (SERVICE=osbtestXDB)"
  audit_file_dest          = "C:\APP\ORACLE\ADMIN\OSBTEST\ADUMP"
  audit_trail              = "NONE"
  db_name                  = "osbtest"
  open_cursors             = 300
  diagnostic_dest          = "C:\APP\ORACLE"
Wed Nov 03 05:59:32 2010
PMON started with pid=2, OS id=78020 
Wed Nov 03 05:59:32 2010
VKTM started with pid=3, OS id=77528 at elevated priority
Wed Nov 03 05:59:32 2010
DIAG started with pid=4, OS id=79056 
VKTM running at (20)ms precision
Wed Nov 03 05:59:32 2010
DBRM started with pid=5, OS id=78224 
Wed Nov 03 05:59:32 2010
PSP0 started with pid=6, OS id=79316 
Wed Nov 03 05:59:32 2010
DIA0 started with pid=7, OS id=76608 
Wed Nov 03 05:59:32 2010
MMAN started with pid=8, OS id=78704 
Wed Nov 03 05:59:33 2010
DBW0 started with pid=9, OS id=79276 
Wed Nov 03 05:59:33 2010
LGWR started with pid=10, OS id=78604 
Wed Nov 03 05:59:33 2010
CKPT started with pid=11, OS id=79412 
Wed Nov 03 05:59:33 2010
SMON started with pid=12, OS id=77836 
Wed Nov 03 05:59:33 2010
RECO started with pid=13, OS id=78544 
Wed Nov 03 05:59:33 2010
MMON started with pid=14, OS id=77560 
Wed Nov 03 05:59:33 2010
MMNL started with pid=15, OS id=79556 
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 1 shared server(s) ...
ORACLE_BASE from environment = C:\app\oracle
Wed Nov 03 05:59:33 2010
ALTER DATABASE   MOUNT
Setting recovery target incarnation to 2
Successful mount of redo thread 1, with mount id 3888089029
Database mounted in Exclusive Mode
Lost write protection disabled
Completed: ALTER DATABASE   MOUNT
Wed Nov 03 05:59:37 2010
ALTER DATABASE OPEN
Errors in file c:\app\oracle\diag\rdbms\osbtest\osbtest\trace\osbtest_dbw0_79276.trc:
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
ORA-1157 signalled during: ALTER DATABASE OPEN...
Wed Nov 03 05:59:39 2010
Checker run found 1 new persistent data failures
Wed Nov 03 06:01:14 2010
ALTER DATABASE RECOVER  datafile 4  
Media Recovery Start
Fast Parallel Media Recovery NOT enabled
Wed Nov 03 06:01:14 2010
Errors in file c:\app\oracle\diag\rdbms\osbtest\osbtest\trace\osbtest_dbw0_79276.trc:
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
Media Recovery failed with error 1110
ORA-283 signalled during: ALTER DATABASE RECOVER  datafile 4  ...
Wed Nov 03 06:03:34 2010
Full restore complete of datafile 4 C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF.  Elapsed time: 0:00:01 
  checkpoint is 12264222
Wed Nov 03 06:05:06 2010
Incremental restore complete of datafile 4 C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
  checkpoint is 12268354
Wed Nov 03 06:05:21 2010
alter database recover datafile list clear
Completed: alter database recover datafile list clear
alter database recover if needed
 datafile 4
Media Recovery Start
Fast Parallel Media Recovery NOT enabled
 parallel recovery started with 7 processes
Recovery of Online Redo Log: Thread 1 Group 3 Seq 435 Reading mem 0
  Mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO03.LOG
Recovery of Online Redo Log: Thread 1 Group 1 Seq 436 Reading mem 0
  Mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO01.LOG
Recovery of Online Redo Log: Thread 1 Group 2 Seq 437 Reading mem 0
  Mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO02.LOG
Completed: alter database recover if needed
 datafile 4
Wed Nov 03 06:05:43 2010
alter database open
Wed Nov 03 06:05:43 2010
LGWR: STARTING ARCH PROCESSES
Wed Nov 03 06:05:43 2010
ARC0 started with pid=30, OS id=78636 
Wed Nov 03 06:05:43 2010
ARC1 started with pid=31, OS id=78152 
Wed Nov 03 06:05:43 2010
ARC2 started with pid=32, OS id=79756 
ARC0: Archival started
Wed Nov 03 06:05:43 2010
ARC3 started with pid=33, OS id=78272 
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 opened at log sequence 437
ARC2: Becoming the 'no FAL' ARCH
  Current log# 2 seq# 437 mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO02.LOG
ARC2: Becoming the 'no SRL' ARCH
Successful open of redo thread 1
ARC3: Becoming the heartbeat ARCH
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Wed Nov 03 06:05:44 2010
SMON: enabling cache recovery
Successfully onlined Undo Tablespace 2.
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Database Characterset is AL32UTF8
Opening with internal Resource Manager plan 
Starting background process FBDA
Wed Nov 03 06:05:45 2010
FBDA started with pid=34, OS id=79520 
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
Wed Nov 03 06:05:46 2010
QMNC started with pid=35, OS id=78484 
Wed Nov 03 06:05:50 2010
db_recovery_file_dest_size of 40000 MB is 44.23% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Nov 03 06:06:00 2010
Completed: alter database open
Wed Nov 03 06:08:02 2010
Thread 1 advanced to log sequence 438 (LGWR switch)
  Current log# 3 seq# 438 mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO03.LOG
Wed Nov 03 06:08:15 2010
Thread 1 advanced to log sequence 439 (LGWR switch)
  Current log# 1 seq# 439 mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO01.LOG
Thread 1 cannot allocate new log, sequence 440
Checkpoint not complete
  Current log# 1 seq# 439 mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO01.LOG
Thread 1 advanced to log sequence 440 (LGWR switch)
  Current log# 2 seq# 440 mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO02.LOG
Wed Nov 03 06:09:39 2010
Starting background process CJQ0
Wed Nov 03 06:09:39 2010
CJQ0 started with pid=22, OS id=77120 
Wed Nov 03 06:10:48 2010
Starting background process SMCO
Wed Nov 03 06:10:48 2010
SMCO started with pid=23, OS id=79416 
Wed Nov 03 06:13:38 2010
Shutting down instance (abort)
License high water mark = 12
USER (ospid: 78892): terminating the instance
Instance terminated by USER, pid = 78892
Wed Nov 03 06:13:41 2010
Instance shutdown complete
Wed Nov 03 06:14:00 2010
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on. 
IMODE=BR
ILAT =18
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up ORACLE RDBMS Version: 11.1.0.7.0.
Using parameter settings in server-side spfile C:\APP\ORACLE\PRODUCT\11.1.0\DB_1\DATABASE\SPFILEOSBTEST.ORA
System parameters with non-default values:
  processes                = 150
  resource_limit           = TRUE
  nls_territory            = "PHILIPPINES"
  memory_target            = 820M
  control_files            = "C:\ORACLE\ORADATA\OSBTEST\CONTROL01.CTL"
  control_files            = "C:\ORACLE\ORADATA\OSBTEST\CONTROL02.CTL"
  control_files            = "C:\ORACLE\ORADATA\OSBTEST\CONTROL03.CTL"
  db_block_size            = 8192
  compatible               = "11.1.0.0.0"
  log_archive_format       = "ARC%S_%R.%T"
  db_recovery_file_dest    = "\oracle\flash_recovery_area"
  db_recovery_file_dest_size= 40000M
  undo_tablespace          = "UNDOTBS1"
  remote_login_passwordfile= "EXCLUSIVE"
  db_domain                = "epassport.ph"
  dispatchers              = "(PROTOCOL=TCP) (SERVICE=osbtestXDB)"
  audit_file_dest          = "C:\APP\ORACLE\ADMIN\OSBTEST\ADUMP"
  audit_trail              = "NONE"
  db_name                  = "osbtest"
  open_cursors             = 300
  diagnostic_dest          = "C:\APP\ORACLE"
Wed Nov 03 06:14:00 2010
PMON started with pid=2, OS id=79700 
Wed Nov 03 06:14:00 2010
VKTM started with pid=3, OS id=80292 at elevated priority
Wed Nov 03 06:14:00 2010
DIAG started with pid=4, OS id=77928 
Wed Nov 03 06:14:00 2010
DBRM started with pid=5, OS id=79248 
VKTM running at (20)ms precision
Wed Nov 03 06:14:00 2010
PSP0 started with pid=6, OS id=78088 
Wed Nov 03 06:14:00 2010
DIA0 started with pid=7, OS id=79172 
Wed Nov 03 06:14:00 2010
MMAN started with pid=8, OS id=80988 
Wed Nov 03 06:14:00 2010
DBW0 started with pid=9, OS id=74844 
Wed Nov 03 06:14:01 2010
LGWR started with pid=10, OS id=67128 
Wed Nov 03 06:14:01 2010
CKPT started with pid=11, OS id=80376 
Wed Nov 03 06:14:01 2010
SMON started with pid=12, OS id=78936 
Wed Nov 03 06:14:01 2010
RECO started with pid=13, OS id=76408 
Wed Nov 03 06:14:01 2010
MMON started with pid=14, OS id=80164 
Wed Nov 03 06:14:01 2010
MMNL started with pid=15, OS id=79160 
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 1 shared server(s) ...
ORACLE_BASE from environment = C:\app\oracle
Wed Nov 03 06:14:01 2010
ALTER DATABASE   MOUNT
Setting recovery target incarnation to 2
Successful mount of redo thread 1, with mount id 3888103977
Database mounted in Exclusive Mode
Lost write protection disabled
Completed: ALTER DATABASE   MOUNT
Wed Nov 03 06:14:05 2010
ALTER DATABASE OPEN
Beginning crash recovery of 1 threads
 parallel recovery started with 7 processes
Started redo scan
Completed redo scan
 477 redo blocks read, 144 data blocks need recovery
Started redo application at
 Thread 1: logseq 440, block 3
Recovery of Online Redo Log: Thread 1 Group 2 Seq 440 Reading mem 0
  Mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO02.LOG
Completed redo application of 0.20MB
Completed crash recovery at
 Thread 1: logseq 440, block 480, scn 12311039
 144 data blocks read, 144 data blocks written, 477 redo blocks read
LGWR: STARTING ARCH PROCESSES
Wed Nov 03 06:14:08 2010
ARC0 started with pid=26, OS id=80964 
Wed Nov 03 06:14:08 2010
ARC1 started with pid=27, OS id=80512 
Wed Nov 03 06:14:08 2010
ARC2 started with pid=28, OS id=80416 
ARC0: Archival started
Wed Nov 03 06:14:08 2010
ARC3 started with pid=29, OS id=81440 
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 advanced to log sequence 441 (thread open)
Thread 1 opened at log sequence 441
ARC1: Becoming the 'no FAL' ARCH
  Current log# 3 seq# 441 mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO03.LOG
ARC1: Becoming the 'no SRL' ARCH
Successful open of redo thread 1
ARC0: Becoming the heartbeat ARCH
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
SMON: enabling cache recovery
db_recovery_file_dest_size of 40000 MB is 44.23% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Successfully onlined Undo Tablespace 2.
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Database Characterset is AL32UTF8
Opening with internal Resource Manager plan 
Starting background process FBDA
Wed Nov 03 06:14:11 2010
FBDA started with pid=30, OS id=79456 
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
Wed Nov 03 06:14:12 2010
QMNC started with pid=31, OS id=81280 
Wed Nov 03 06:14:25 2010
Completed: ALTER DATABASE OPEN
Stopping background process FBDA
Shutting down instance: further logons disabled
Stopping background process QMNC
Stopping background process MMNL
Wed Nov 03 06:14:36 2010
Stopping background process MMON
Shutting down instance (immediate)
License high water mark = 8
Waiting for dispatcher 'D000' to shutdown
All dispatchers and shared servers shutdown
ALTER DATABASE CLOSE NORMAL
Wed Nov 03 06:14:39 2010
SMON: disabling tx recovery
SMON: disabling cache recovery
Wed Nov 03 06:14:39 2010
Shutting down archive processes
Archiving is disabled
Wed Nov 03 06:14:39 2010
ARCH shutting down
ARC3: Archival stopped
Wed Nov 03 06:14:39 2010
ARCH shutting down
ARC0: Archival stopped
Wed Nov 03 06:14:39 2010
ARCH shutting down
ARC1: Archival stopped
Wed Nov 03 06:14:39 2010
ARCH shutting down
ARC2: Archival stopped
Thread 1 closed at log sequence 441
Successful close of redo thread 1
Completed: ALTER DATABASE CLOSE NORMAL
ALTER DATABASE DISMOUNT
Completed: ALTER DATABASE DISMOUNT
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Wed Nov 03 06:14:45 2010
Stopping background process VKTM: 
Archive process shutdown avoided: 0 active
Wed Nov 03 06:14:47 2010
Instance shutdown complete
Wed Nov 03 06:14:54 2010
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on. 
IMODE=BR
ILAT =18
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up ORACLE RDBMS Version: 11.1.0.7.0.
Using parameter settings in server-side spfile C:\APP\ORACLE\PRODUCT\11.1.0\DB_1\DATABASE\SPFILEOSBTEST.ORA
System parameters with non-default values:
  processes                = 150
  resource_limit           = TRUE
  nls_territory            = "PHILIPPINES"
  memory_target            = 820M
  control_files            = "C:\ORACLE\ORADATA\OSBTEST\CONTROL01.CTL"
  control_files            = "C:\ORACLE\ORADATA\OSBTEST\CONTROL02.CTL"
  control_files            = "C:\ORACLE\ORADATA\OSBTEST\CONTROL03.CTL"
  db_block_size            = 8192
  compatible               = "11.1.0.0.0"
  log_archive_format       = "ARC%S_%R.%T"
  db_recovery_file_dest    = "\oracle\flash_recovery_area"
  db_recovery_file_dest_size= 40000M
  undo_tablespace          = "UNDOTBS1"
  remote_login_passwordfile= "EXCLUSIVE"
  db_domain                = "epassport.ph"
  dispatchers              = "(PROTOCOL=TCP) (SERVICE=osbtestXDB)"
  audit_file_dest          = "C:\APP\ORACLE\ADMIN\OSBTEST\ADUMP"
  audit_trail              = "NONE"
  db_name                  = "osbtest"
  open_cursors             = 300
  diagnostic_dest          = "C:\APP\ORACLE"
Wed Nov 03 06:14:55 2010
PMON started with pid=2, OS id=80900 
Wed Nov 03 06:14:55 2010
VKTM started with pid=3, OS id=81160 at elevated priority
Wed Nov 03 06:14:55 2010
DIAG started with pid=4, OS id=80624 
VKTM running at (20)ms precision
Wed Nov 03 06:14:55 2010
DBRM started with pid=5, OS id=81604 
Wed Nov 03 06:14:55 2010
PSP0 started with pid=6, OS id=80676 
Wed Nov 03 06:14:55 2010
DIA0 started with pid=7, OS id=81684 
Wed Nov 03 06:14:55 2010
MMAN started with pid=8, OS id=80892 
Wed Nov 03 06:14:55 2010
DBW0 started with pid=9, OS id=80360 
Wed Nov 03 06:14:55 2010
LGWR started with pid=10, OS id=81376 
Wed Nov 03 06:14:55 2010
CKPT started with pid=11, OS id=80732 
Wed Nov 03 06:14:55 2010
SMON started with pid=12, OS id=80852 
Wed Nov 03 06:14:55 2010
RECO started with pid=13, OS id=80636 
Wed Nov 03 06:14:55 2010
MMON started with pid=14, OS id=81796 
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 1 shared server(s) ...
ORACLE_BASE from environment = C:\app\oracle
Wed Nov 03 06:14:55 2010
ALTER DATABASE   MOUNT
Wed Nov 03 06:14:55 2010
MMNL started with pid=15, OS id=80088 
Setting recovery target incarnation to 2
Successful mount of redo thread 1, with mount id 3888118879
Database mounted in Exclusive Mode
Lost write protection disabled
Completed: ALTER DATABASE   MOUNT
Wed Nov 03 06:15:00 2010
ALTER DATABASE OPEN
Errors in file c:\app\oracle\diag\rdbms\osbtest\osbtest\trace\osbtest_dbw0_80360.trc:
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
ORA-1157 signalled during: ALTER DATABASE OPEN...
Wed Nov 03 06:17:27 2010
Full restore complete of datafile 4 C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF.  Elapsed time: 0:00:01 
  checkpoint is 12264222
Wed Nov 03 06:18:14 2010
alter database open
ORA-1113 signalled during: alter database open...
Wed Nov 03 06:18:15 2010
Checker run found 1 new persistent data failures
Wed Nov 03 06:19:14 2010
Incremental restore complete of datafile 4 C:\ORACLE\ORADATA\OSBTEST\USERS01.DBF
  checkpoint is 12268354
Wed Nov 03 06:19:28 2010
alter database recover datafile list clear
Completed: alter database recover datafile list clear
alter database recover if needed
 datafile 4
Media Recovery Start
Fast Parallel Media Recovery NOT enabled
 parallel recovery started with 7 processes
ORA-279 signalled during: alter database recover if needed
 datafile 4
...
Wed Nov 03 06:20:20 2010
db_recovery_file_dest_size of 40000 MB is 44.23% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Nov 03 06:20:35 2010
alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2KCN7F_.ARC'
Media Recovery Log C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2KCN7F_.ARC
ORA-279 signalled during: alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_435_6F2KCN7F_.ARC'...
Wed Nov 03 06:21:40 2010
alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2KFOXG_.ARC'
Media Recovery Log C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2KFOXG_.ARC
ORA-279 signalled during: alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_436_6F2KFOXG_.ARC'...
alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_437_6F2KFOS1_.ARC'
Media Recovery Log C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_437_6F2KFOS1_.ARC
ORA-279 signalled during: alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_437_6F2KFOS1_.ARC'...
alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_438_6F2KFPJ8_.ARC'
Media Recovery Log C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_438_6F2KFPJ8_.ARC
Recovery of Online Redo Log: Thread 1 Group 1 Seq 439 Reading mem 0
  Mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO01.LOG
Recovery of Online Redo Log: Thread 1 Group 2 Seq 440 Reading mem 0
  Mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO02.LOG
Recovery of Online Redo Log: Thread 1 Group 3 Seq 441 Reading mem 0
  Mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO03.LOG
Completed: alter database recover logfile 'C:\ORACLE\FLASH_RECOVERY_AREA\OSBTEST\ARCHIVELOG\2010_11_03\O1_MF_1_438_6F2KFPJ8_.ARC'
Wed Nov 03 06:24:26 2010
alter database open
Wed Nov 03 06:24:27 2010
LGWR: STARTING ARCH PROCESSES
Wed Nov 03 06:24:27 2010
ARC0 started with pid=30, OS id=82388 
Wed Nov 03 06:24:27 2010
ARC1 started with pid=31, OS id=79392 
Wed Nov 03 06:24:27 2010
ARC2 started with pid=32, OS id=83504 
ARC0: Archival started
Wed Nov 03 06:24:27 2010
ARC3 started with pid=33, OS id=80064 
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 opened at log sequence 441
ARC0: Becoming the 'no FAL' ARCH
  Current log# 3 seq# 441 mem# 0: C:\ORACLE\ORADATA\OSBTEST\REDO03.LOG
ARC0: Becoming the 'no SRL' ARCH
Successful open of redo thread 1
ARC3: Becoming the heartbeat ARCH
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Wed Nov 03 06:24:27 2010
SMON: enabling cache recovery
Successfully onlined Undo Tablespace 2.
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Database Characterset is AL32UTF8
Opening with internal Resource Manager plan 
Starting background process FBDA
Wed Nov 03 06:24:28 2010
FBDA started with pid=34, OS id=83160 
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
Wed Nov 03 06:24:29 2010
QMNC started with pid=35, OS id=83316 
Wed Nov 03 06:24:42 2010
Completed: alter database open
Wed Nov 03 06:25:01 2010
Starting background process CJQ0
Wed Nov 03 06:25:01 2010
CJQ0 started with pid=37, OS id=82416 
Wed Nov 03 06:29:31 2010
Starting background process SMCO
Wed Nov 03 06:29:31 2010
SMCO started with pid=20, OS id=83932 
}}}
{{{
[oracle@dbrocaix01 ~]$ obtool 
ob> 
ob> 
ob> lsdev
ob> 
ob> obtool -u admin chhost -r client,admin,mediaserver "dbrocaix01.bayantel.com"
Error: unknown command, obtool
ob> chhost -r client,admin,mediaserver "dbrocaix01.bayantel.com"
Error: can't fetch host dbrocaix01.bayantel.com - name not found
ob> 
ob> 
ob> chhost -r client,admin,mediaserver "dbrocaix01"             


[oracle@dbrocaix01 ~]$ obtool -u admin mkdev -t library -o -S 36 -I 4 -a dbrocaix01:/flash_reco/vlib -v vlib > NULL
Password: 
[oracle@dbrocaix01 ~]$ obtool -u admin mkdev -t tape -o -a dbrocaix01:/flash_reco/vdte1 -v -l vlib -d 1 vdte1 > NULL
Password: 
[oracle@dbrocaix01 ~]$ obtool -u admin mkdev -t tape -o -a dbrocaix01:/flash_reco/vdte2 -v -l vlib -d 2 vdte2 > NULL
Password: 
[oracle@dbrocaix01 ~]$ obtool -u admin mkdev -t tape -o -a dbrocaix01:/flash_reco/vdte3 -v -l vlib -d 3 vdte3 > NULL
Password: 
[oracle@dbrocaix01 ~]$ obtool -u admin mkdev -t tape -o -a dbrocaix01:/flash_reco/vdte4 -v -l vlib -d 4 vdte4 > NULL
Password: 
[oracle@dbrocaix01 ~]$ obtool -u admin mkdev -t library -o -I 4 -a dbrocaix01:/flash_reco/vlib2 -v vlib2 > NULL
Password: 
[oracle@dbrocaix01 ~]$ obtool -u admin mkdev -t tape -o -a dbrocaix01:/flash_reco/vdrive1 -v -l vlib2 -d 1 vdrive1 > NULL
Password: 
[oracle@dbrocaix01 ~]$ obtool -u admin mkdev -t tape -o -a dbrocaix01:/flash_reco/vdrive2 -v -l vlib2 -d 2 vdrive2  > NULL
Password: 
[oracle@dbrocaix01 ~]$ obtool
ob> lsdev
library             vlib             in service          
  drive 1           vdte1            in service          
  drive 2           vdte2            in service          
  drive 3           vdte3            in service          
  drive 4           vdte4            in service          
library             vlib2            in service          
  drive 1           vdrive1          in service          
  drive 2           vdrive2          in service          



ob> lshost
dbrocaix01       admin,mediaserver,client          (via OB)   in service 
ob> 
ob> 
ob> lsdev
library             vlib             in service          
  drive 1           vdte1            in service          
  drive 2           vdte2            in service          
  drive 3           vdte3            in service          
  drive 4           vdte4            in service          
library             vlib2            in service          
  drive 1           vdrive1          in service          
  drive 2           vdrive2          in service          
ob> 
ob> 
ob> insertvol -L vlib -c 250 unlabeled 1-32
ob> insertvol -L vlib2 -c 250 unlabeled 1-14
ob> 
ob> 
ob> lsmf --long
OFFSITE_7Y:
    Keep volume set:        7 years
    Appendable:             yes
    Volume ID used:         unique to this media family
    Comment:                Store for 7 years offsite - for compliance with XYZ law
    UUID:                   00cee284-7185-102d-9cae-000c293b8104
OFFSITE_TEST:
    Keep volume set:        10 minutes
    Appendable:             yes
    Volume ID used:         unique to this media family
    Comment:                Edit the test values later
    UUID:                   319d2c68-7185-102d-9cae-000c293b8104
OSB-CATALOG-MF:
    Write window:           7 days
    Keep volume set:        14 days
    Appendable:             yes
    Volume ID used:         unique to this media family
    Comment:                OSB catalog backup media family
    UUID:                   2bab93d0-717b-102d-b17d-000c293b8104
RMAN-DEFAULT:
    Keep volume set:        content manages reuse
    Appendable:             yes
    Volume ID used:         unique to this media family
    Comment:                Default RMAN backup media family
    UUID:                   2a824562-717b-102d-b17d-000c293b8104


}}}
Thread: Drive or volume on Which mount attempted is unusable
http://forums.oracle.com/forums/thread.jspa?threadID=475197

Thread: Oracle Secure Backup
http://forums.oracle.com/forums/thread.jspa?threadID=672792&start=0&tstart=0

''Error: waiting for snapshot controlfile enqueue''
http://www.dbasupport.com/forums/archive/index.php/t-12492.html  <-- this solved it
http://surachartopun.com/2008/03/rman-waiting-for-snapshot-control-file.html
http://www.symantec.com/business/support/index?page=content&id=TECH18161
http://www.freelists.org/post/oracle-l/ORA00230-during-RMAN-backup,4
{{{
SELECT s.SID, USERNAME AS "User", PROGRAM, MODULE, ACTION, LOGON_TIME "Logon", l.*
FROM V$SESSION s, V$ENQUEUE_LOCK l
WHERE l.SID = s.SID AND l.TYPE = 'CF' AND l.ID1 = 0 AND l.ID2 = 2;
}}}
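A minimal sketch of the usual fix once the query above has identified the session holding the 'CF' enqueue: kill that session (the sid and serial# are placeholders to be filled in from the query result):

{{{
-- kill the holder of the snapshot controlfile enqueue
-- (substitute the SID and SERIAL# of the blocking session)
ALTER SYSTEM KILL SESSION '<sid>,<serial#>';
}}}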

''Thread: Unable to open qlm connection - drive database is corrupted''
http://forums.oracle.com/forums/thread.jspa?messageID=1515577
http://forums.oracle.com/forums/thread.jspa?messageID=4296914
http://forums.oracle.com/forums/thread.jspa?messageID=2436266
http://forums.oracle.com/forums/thread.jspa?threadID=587033&tstart=210

''ORA-600 krbb3crw_inv_blk when compressed backupset is on''
- workaround is to turn off backupset compression
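A hedged sketch of that workaround in RMAN (shown for the disk device type; adjust to match the channel in use):

{{{
# drop the COMPRESSED attribute from the channel default
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET;
# or override it for a single backup instead of AS COMPRESSED BACKUPSET
BACKUP AS BACKUPSET DATABASE;
}}}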


''References''
http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle_Secure_Backup/OSB_10.shtml
http://download.oracle.com/docs/cd/E10317_01/doc/backup.102/e05410/obtool_commands.htm#insertedID41
Backup Recovery Area
http://download.oracle.com/docs/cd/B28359_01/backup.111/b28270/rcmbckad.htm#i1006854
http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/bkscenar002.htm
http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/rpfbdb003.htm#BABCAGIB
http://www.orafaq.com/forum/t/75250/2/



{{{
some obtool commands:
lsdev -lvg                        <-- shows detailed info of the devices
catxcr -fl0 oracle/5.1        <-- shows the error messages
lsvol --library libraryname    <-- shows storage element address
insertvol -L libraryname <storage element range>  <-- inserts volume
inventory libraryname       <-- inventory the library
}}}
https://martincarstenbach.wordpress.com/2018/10/18/little-things-worth-knowing-oswatcher-analyser-dashboard/
https://blog.dbi-services.com/oswatcher-blackbox-analyzer/

When your query takes too long ...
http://forums.oracle.com/forums/thread.jspa?threadID=501834

HOW TO: Post a SQL statement tuning request - template posting
http://forums.oracle.com/forums/thread.jspa?threadID=863295

Basic SQL statement performance diagnosis - HOW TO, step by step instructions
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
http://oracleprof.blogspot.com/2010/11/unsafe-deinstall-using-oracle-univeral.html
{{{

-- Procedure returning an employee name for a given id.
-- The OUT parameter is renamed p_emp_name so it does not
-- shadow the procedure name emp_name.
CREATE OR REPLACE PROCEDURE emp_name (p_id IN NUMBER, p_emp_name OUT VARCHAR2)
IS
BEGIN
   SELECT ENAME INTO p_emp_name
   FROM emp_tbl WHERE EMPNO = p_id;
END;
/

-- Call the procedure for every id in emp_ids and print the result.
set serveroutput on
DECLARE
   empName VARCHAR2(20);
   CURSOR id_cur IS SELECT EMPNO FROM emp_ids;
BEGIN
   FOR emp_rec IN id_cur
   LOOP
      emp_name(emp_rec.EMPNO, empName);
      dbms_output.put_line('The employee ' || empName || ' has id ' || emp_rec.EMPNO);
   END LOOP;
END;
/
}}}
https://www.ovh.com/world/dedicated-servers/all_servers.xml
Looking "Under the Hood" at Networking in Oracle VM Server for x86 http://www.oracle.com/technetwork/articles/servers-storage-admin/networking-ovm-x86-1873548.html
<<showtoc>>


! Reading Execution Plans 
https://docs.oracle.com/database/121/TGSQL/tgsql_interp.htm#TGSQL94618
https://blogs.oracle.com/sql/query-tuning-101:-comparing-execution-plans-and-access-vs-filter-predicates
http://blog.tanelpoder.com/files/Oracle_SQL_Plan_Execution.pdf
14 part series https://jonathanlewis.wordpress.com/explain-plan/


! “Access Predicates” 
* Predicates used to locate rows in an access structure: for example, the start and stop keys of an index range scan, or the probe keys of a hash join

! “Filter Predicates”
* Predicates used to filter rows before producing them.
* Any condition that would throw away/filter rows 
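Both kinds show up in the Predicate Information section of DBMS_XPLAN output; a sketch (table t and the index on its owner column are made up for illustration):

{{{
EXPLAIN PLAN FOR
SELECT * FROM t WHERE owner = 'SYS' AND object_name LIKE '%JOB%';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- with an index on t(owner) the plan shows something like:
--   access("OWNER"='SYS')               locate rows via the index
--   filter("OBJECT_NAME" LIKE '%JOB%')  throw away rows after fetching them
}}}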

! column projection 
<<<
“column projection” is what you extract from the rowsource you are reading.
In set theory you project (subset of columns) and filter (subset of rows),
so for example with “select object_id from table” you are projecting just the “object_id” column
<<<
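DBMS_XPLAN can print the projection per plan line via the +PROJECTION format modifier (a sketch; any table will do):

{{{
EXPLAIN PLAN FOR SELECT object_id FROM t;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(format => 'BASIC +PROJECTION'));
-- the "Column Projection Information" section lists what each
-- rowsource passes up, here just OBJECT_ID
}}}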

! QB name 
* name of the query block, either system-generated or defined by the user with the QB_NAME hint


! gather_plan_statistics columns
<<<
https://www.red-gate.com/simple-talk/sql/oracle/execution-plans-part-11-actuals/ 
here’s a reference to the rest of the columns relating to execution statistics:

    Starts:  The number of times this operation actually occurred
    E-rows:  Estimated rows (per execution of the operation) – i.e. the “Rows” column from a call to display()
    A-rows:  The accumulated number of rows forwarded by this operation
    A-time:  The accumulated time spent in this operation – including time spent in its descendants.
    Buffers:  Accumulated buffer visits made by this operation – including its descendants.
    Reads:  Accumulated number of blocks read from disc by this operation – including its descendants.
    Writes:  Accumulated number of blocks written to disc by this operation – including its descendants.
<<<
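Those columns only appear when rowsource statistics were collected, e.g. with the gather_plan_statistics hint (or statistics_level=all), and displayed with the ALLSTATS LAST format; a sketch:

{{{
SELECT /*+ gather_plan_statistics */ COUNT(*) FROM t;

-- for the last statement executed in the current session
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'ALLSTATS LAST'));
}}}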

! starts
<<<
so when we are trying to assess the accuracy of the optimizer’s predictions we generally need to compare A-Rows with E-rows * Starts and (as we shall see in the next article) we still have to be careful about deciding when that is a useful comparison and when it is meaningless.
<<<




.

Example1 - Online Redefinition - partition example - manually create indexes (no constraints)
http://www.evernote.com/shard/s48/sh/c2ffc788-7d1e-44df-8bd0-c04b62401eb6/48b56fe63e1c28d2e8ee2276c2c0955d

Example 2 - Online Redefinition - Employees Table - all automatic
http://www.evernote.com/shard/s48/sh/8d9633bb-178a-484c-b83f-2fe526d680e7/596829d7f1bd43f60bb3917897df5dcd

Example 3 - Online Redefinition - Employees Table - manually create constraints and indexes
http://www.evernote.com/shard/s48/sh/d80aeaef-03d3-47b8-a6f2-6941c25a75b3/2939053a281a59f6701fc947128293a4


http://asktom.oracle.com/pls/asktom/f?p=100:11:1930891738933501::::P11_QUESTION_ID:7490088329317
Best Practices for Online Table Redefinition [ID 1080969.1]


LOB redefinition http://blog.trivadis.com/b/mathiaszarick/archive/2012/03/05/lob-compression-with-oracle-strange-multiple-physical-reads.aspx



Metadata scripts are here [[dbms_metadata]]
What Are The Possible Ways To Find Out An Oracle Database Patchset/Patch And Download It?
  	Doc ID: 	Note:423016.1


FAQs on OPatch Version : 11.1
  	Doc ID: 	Note:453495.1
  	
How to download and install opatch (generic platform).
  	Doc ID: 	Note:274526.1
  	
How to find whether the one-off Patches will conflict or not?
  	Doc ID: 	Note:458485.1
  	
OPatch version 10.2 - FAQ
  	Doc ID: 	Note:334108.1
  	
How To Do The Prerequisite/Conflicts Checks Using OUI(Oracle Universal Installer) And Opatch Before Applying/Rolling Back A Patch
  	Doc ID: 	Note:459360.1
  	
Location Of Logs For Opatch And OUI
  	Doc ID: 	Note:403212.1
  	
Critical Patch Update - Introduction to Database n-Apply CPUs
  	Doc ID: 	Note:438314.1
  	
Critical Patch Update January 2008 – Database Patch Security Vulnerability Molecule Mapping
  	Doc ID: 	Note:466764.1
  	
SUDO utility in 10gR2 Grid Control
  	Doc ID: 	Note:377934.1
  	
Can Root.Sh Be Run Via SUDO?
  	Doc ID: 	Note:413855.1 	
  	
IS THE ROOT.SH ABSOLUTELY NECESSARY? OR RUN 2ND TIME?
  	Doc ID: 	Note:1007934.6
  	
 	How to setup Linux md devices for CRS and ASM
  	Doc ID: 	Note:343092.1
  	


-- DISK FULL
MetaLink Note 550522.1 (Subject: How To Avoid Disk Full Issues Because OPatch Backups Take Big Amount Of Disk Space.




-- VERIFY 

Good practices applying patches and patchsets
  	Doc ID: 	176311.1

How To Verify The Integrity Of A Patch/Software Download?
  	Doc ID: 	549617.1

What Is The Difference Between ftp'ing An Unzipped File And A Zipped File, From One Machine To Another?
  	Doc ID: 	787775.1



-- DATABASE VAULT

Note 726568.1 How to Install Database Vault Patches on top of 11.1.0.6

How to Install Database Vault Patches on top of 10.2.0.4
  	Doc ID: 	731466.1

How to Install Database Vault Patches on top of 9.2.0.8.1 and 10.2.0.3
  	Doc ID: 	445092.1
https://fbcdn-dragon-a.akamaihd.net/hphotos-ak-ash3/851560_196423357203561_929747697_n.pdf
http://venturebeat.com/2013/09/16/facebook-explains-secrets-of-building-hugely-scalable-sites/
hip hop https://github.com/facebook/hiphop-php
http://apex.oracle.com/pls/apex/f?p=44785:24:0:::24:P24_CONTENT_ID,P24_PREV_PAGE:6613,1#prettyPhoto
http://openvpn.net/
http://openvpn.net/index.php/open-source/documentation/howto.html#install
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch35_:_Configuring_Linux_VPNs
http://www.throx.net/2008/04/13/openvpn-and-centos-5-installation-and-configuration-guide/
http://blog.wains.be/2006/10/08/simple-vpn-tunnel-using-openvpn/
http://blog.laimbock.com/2008/05/27/howto-add-firewall-rules-to-rhel-5-or-centos-5/
* IaaS (on-premise provisioning) - manage your physical infra through the OpenStack API

https://blogs.oracle.com/oem/entry/enterprise_manager_ops_center_using

http://www.gokhanatil.com/2012/04/how-to-install-oracle-ops-center-12c.html
http://www.gokhanatil.com/2011/09/integrating-enterprise-manager-grid.html
<<showtoc>>

! Facebook engineering

flash cache by Domas Mituzas
interesting to see this technology carrying over to Exadata Flash Cache. They call their write-back cache "write-behind caching", which is a similar concept if you read the article
https://www.facebook.com/notes/facebook-engineering/flashcache-at-facebook-from-2010-to-2013-and-beyond/10151725297413920
https://www.facebook.com/notes/facebook-engineering/linkbench-a-database-benchmark-for-the-social-graph/10151391496443920
<<<
Though in most cases tools like ‘iostat’ are useful to understand general system performance, for our needs we needed deeper inspection. We used the ‘blktrace’ facility in Linux to trace every request issued by database software and analyze how it was served by our flash- and disk-based devices. Doing so helped identify a number of areas for improvement, including three major ones: read-write distribution, cache eviction, and write efficiency.
<<<

Jay Parikh on VLDB13 keynote - "Data Infrastructure at Web Scale"
video here http://www.ustream.tv/recorded/37879841 @1:02:14 is the awesome Q&A (resource management, etc.)
https://www.facebook.com/notes/facebook-academics/facebook-makes-big-impact-on-big-data-at-vldb/594819857236092

If you're a database guy you'll love this 2-hour video. Facebook engineers discuss performance focus, server provisioning, automatic server rebuilds, backup & recovery, online schema changes, sharding, HBase and Hadoop. The Q&A at the end is also interesting: at 1:28:46 Mark Callaghan answers why they chose MySQL over commercial databases that already have the features their engineers are hacking. Good stuff!
http://www.livestream.com/fbtechtalks/video?clipId=pla_a3d62538-1238-4202-a3be-e257cd866bb9

corona resource manager vs yarn
https://www.facebook.com/notes/facebook-engineering/under-the-hood-scheduling-mapreduce-jobs-more-efficiently-with-corona/10151142560538920

real time analytics http://gigaom.com/cloud/how-facebook-is-powering-real-time-analytics/

flash memory field study http://users.ece.cmu.edu/~omutlu/pub/flash-memory-failures-in-the-field-at-facebook_sigmetrics15.pdf


! DBHangops
Good stuff, periodic meetup of devops/dbguys and everything about mysql database. Some of these guys come from high transaction web environments so it’s good to get their view of things even in a mysql point of view. They record their google hangouts so you can watch the previous meetups.
https://twitter.com/DBHangops



! Others 
http://highscalability.com/blog/2012/9/19/the-4-building-blocks-of-architecting-systems-for-scale.html
http://lethain.com/introduction-to-architecting-systems-for-scale/#platform_layer

http://highscalability.com/blog/2012/11/15/gone-fishin-justintvs-live-video-broadcasting-architecture.html
http://highscalability.com/youtube-architecture

etsy performance http://codeascraft.etsy.com/category/performance/
scaling pinterest http://www.slideshare.net/eonarts/mysql-meetup-july2012scalingpinterest#btnNext, http://gigaom.com/cloud/pinterest-flipboard-and-yelp-tell-how-to-save-big-bucks-in-the-cloud/

http://gigaom.com/2013/03/28/3-shades-of-latency-how-netflix-built-a-data-architecture-around-timeliness/
http://techblog.netflix.com/2013/03/system-architectures-for.html
http://gigaom.com/2013/03/03/how-and-why-linkedin-is-becoming-an-engineering-powerhouse/
http://gigaom.com/2013/03/05/facebook-kisses-dram-goodbye-builds-memcached-for-flash/

The 10 Deadly Sins Against Scalability http://highscalability.com/blog/2013/6/10/the-10-deadly-sins-against-scalability.html
22 Recommendations For Building Effective High Traffic Web Software http://highscalability.com/blog/2013/12/16/22-recommendations-for-building-effective-high-traffic-web-s.html







Gathering Statistics for the Cost Based Optimizer
  	Doc ID: 	Note:114671.1
  	
ORA-20000 when running DBMS_STATS.GATHER_DATABASE_STATS
  	Doc ID: 	Note:462496.1
  	
Getting ORA-01031 when gathering database stats in 9i using SYSTEM user
  	Doc ID: 	Note:455221.1

Poor performance after gathering statistics
  	Doc ID: 	Note:278020.1
  	
Poor Database Performance after running DBMS_STATS.GATHER_DATABASE_STATS
  	Doc ID: 	Note:223069.1
  	
Monitoring Statistics in 10g
  	Doc ID: 	Note:295249.1
  	
Bug 4706964 - DBMS_STATS.GATHER_DICTIONARY_STATS errors if schema name has special characters
  	Doc ID: 	Note:4706964.8
  	
ERROR:" WARNING: --> Database contains stale optimizer statistics.Refer to the 10g Upgrade Guide for instructions to update"
  	Doc ID: 	Note:437371.1
  	
Script to Check Schemas with Stale Statistics
  	Doc ID: 	Note:560336.1
  	

  	http://www.globusz.com/ebooks/Oracle/00000015.htm
  	http://hungrydba.com/databasestats.aspx
  	http://www.fadalti.com/oracle/database/how_to_statistics.htm
  	http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1154434873552
  	http://www.dbanotes.net/mirrors/www.psoug.org/reference/dbms_stats.html
  	http://www.pafumi.net/Gather_Statistics.html
  	http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:27658118048105
  	http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:60121137844769
  	http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:735625536552
  	http://tonguc.wordpress.com/2007/10/09/oracle-best-practices-part-5/
  	http://www.maroc-it.ma/blogs/fahd/?p=42
  	http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:247162600346210706
  	http://www.dba-oracle.com/t_worst_practices.htm
  	http://structureddata.org/
  	http://structureddata.org/category/oracle/optimizer/
  	http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/
  	http://structureddata.org/2008/01/02/what-are-your-system-statistics/
  	http://structureddata.org/2007/12/05/oracle-optimizer-development-team-starts-a-blog/
  	http://optimizermagic.blogspot.com/2007/11/welcome-to-our-blog.html
  	





-- GATHER STATISTICS FOR SYS

Gather Optimizer Statistics For Sys And System 
  Doc ID:  Note:457926.1 

Gathering Statistics For All fixed Objects In The Data Dictionary. 
  Doc ID:  Note:272479.1 

Is ANALYZE on the Data Dictionary Supported (TABLES OWNED BY SYS)?
  	Doc ID: 	35272.1


  
-- MIGRATE TO CBO

Migrating to the Cost-Based Optimizer
 	Doc ID:	Note:222627.1
 	
Rule Based Optimizer is to be Desupported in Oracle10g
 	Doc ID:	Note:189702.1
 	
Cost Based Optimizer - Common Misconceptions and Issues
 	Doc ID:	Note:35934.1
 	


-- GATHER SYSTEM STATISTICS

System Statistics: Collect and Display System Statistics (CPU and IO) for CBO us
  	Doc ID: 	Note:149560.1 	

System Statistics: Scaling the System to Improve CBO optimizer
  	Doc ID: 	Note:153761.1

Using Actual System Statistics (Collected CPU and IO information)
  	Doc ID: 	470316.1





-- GATHER STATISTICS

How to Move from ANALYZE to DBMS_STATS - Introduction
  	Doc ID: 	237293.1

Gathering Schema or Database Statistics Automatically in 8i and 9i - Examples
  	Doc ID: 	237901.1

Statistics Gathering: Frequency and Strategy Guidelines
  	Doc ID: 	44961.1

What are the Default Parameters when Gathering Table Statistics on 9i and 10g?
  	Doc ID: 	406475.1



http://awads.net/wp/2006/04/17/orana-powered-by-google-and-feedburner/



-- MONITOR STATISTICS

Monitoring Statistics in 10g
  	Doc ID: 	295249.1

How to Automate Change Based Statistic Gathering - Monitoring Tables
  	Doc ID: 	102334.1





-- GATHER STALE

Differences between GATHER STALE and GATHER AUTO
  	Doc ID: 	228186.1

Best Practices to Minimize Downtime during Upgrade
  	Doc ID: 	455744.1






-- HISTOGRAMS

Histograms: An Overview
  	Doc ID: 	1031826.6
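
Sketch of gathering a histogram on a known-skewed column only (SCOTT.EMP and the JOB column are placeholders):

{{{
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'EMP', -
  method_opt => 'FOR COLUMNS job SIZE 254');

-- inspect the result
SELECT column_name, histogram, num_buckets
FROM   dba_tab_col_statistics
WHERE  owner = 'SCOTT' AND table_name = 'EMP';
}}}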






-- DUPLICATE ROWS

http://www.jlcomp.demon.co.uk/faq/duplicates.html


  


-- DISABLE AUTO STATS IN 10G

How to Disable Automatic Statistics Collection in 10G ?
  	Doc ID: 	311836.1
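
Sketch (10g syntax; in 11g+ the job became an autotask):

{{{
-- 10g: the nightly job is a scheduler job
EXEC DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');

-- 11g+ equivalent:
-- EXEC DBMS_AUTO_TASK_ADMIN.DISABLE('auto optimizer stats collection', NULL, NULL);
}}}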






-- CHAINED ROWS

How to Identify, Avoid and Eliminate Chained and Migrated Rows ?
  	Doc ID: 	746778.1

Monitoring Chained Rows on IOTs
  	Doc ID: 	102932.1

Row Chaining and Row Migration
  	Doc ID: 	122020.1

Analyze Table List chained rows Into chained_rows Gives ORA-947
  	Doc ID: 	265707.1
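
Sketch of listing chained rows (scott.emp is a placeholder; the ORA-947 in the last note is typically a missing or mismatched CHAINED_ROWS table definition):

{{{
-- create the CHAINED_ROWS table first
@?/rdbms/admin/utlchain.sql

ANALYZE TABLE scott.emp LIST CHAINED ROWS INTO chained_rows;

SELECT owner_name, table_name, head_rowid FROM chained_rows;
}}}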



-- SET STATISTICS

http://decipherinfosys.wordpress.com/2007/07/31/dbms_statsset_table_stats/
http://www.oracle.com/technology/oramag/oracle/06-may/o36asktom.html
http://www.psoug.org/reference/tuning.html
http://www.freelists.org/post/oracle-l/CBO-Predicate-selectivity,10
http://www.orafaq.com/forum/?t=msg&th=71350/0/
http://www.oracle.com/technology/oramag/oracle/04-sep/o54asktom.html
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:735625536552







http://blogs.oracle.com/optimizer/entry/optimizer_technical_papers1
<<showtoc>>

! collection scripts 
https://github.com/oracle/oracle-db-examples/blob/master/optimizer/compare_ofe/ofe.sql <- nigel 
https://github.com/tanelpoder/tpt-oracle/blob/master/cofef.sql
https://github.com/tanelpoder/tpt-oracle/blob/master/cofep.sql
https://github.com/tanelpoder/tpt-oracle/blob/master/tools/optimizer/optimizer_features_matrix.sql
https://blog.tanelpoder.com/posts/scripts-for-drilling-down-into-unknown-optimizer-changes/
https://flowingdata.com/2018/04/17/visualizing-differences/

! 9i 
!! bind peek 
<<<
allow the optimizer to peek at the value of bind variables and then use a histogram to pick an appropriate plan, just like it would do with literals. The problem with the new feature was that it only looked at the variables once, when the statement was parsed
<<<

! 10g
!! Bind variable capture 
is a feature introduced in 10g. It is just a periodic capture of the underlying values of bind variables
<<<
The bind values peeked can be seen in the OTHER_XML column of V$SQL_PLAN or the BIND_DATA column of V$SQL. The space used for peeked binds in V$SQL_PLAN is determined by the parameter “_xpl_peeked_binds_log_size”, which maxes out at 8192. In some circumstances (e.g., SQL with large in-lists of bind variables) that size may be exceeded and we might not see all the peeked bind values, but that doesn't mean the values were not peeked.

Captured binds are exposed in V$SQL_BIND_CAPTURE. The capture interval is determined by the parameter “_cursor_bind_capture_interval”, and the space used for captured binds by “_cursor_bind_capture_area_size”.
In some circumstances (e.g., SQL with large in-lists of bind variables) the size may be exceeded and we might not see all the captured bind values. In such cases, setting “_cursor_bind_capture_interval” to its maximum value of 3999 helps in seeing most of the captured values.
<<<
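
Sketch of pulling the captured binds for one statement (&sql_id is a placeholder):

{{{
SELECT name, position, datatype_string, value_string
FROM   v$sql_bind_capture
WHERE  sql_id = '&sql_id'
ORDER  BY position;
}}}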

! 11g
!! dynamic sampling - no stats
<<<
tries to fix problems with execution plans as they occur; that is, it kicks in when the objects involved have no statistics
<<<
!! cardinality feedback - store cardinality to fix estimates issues, doesn't work with bind variables
<<<
We just wait for the result of each step in the execution plan, store it in the shared pool, and reference it on subsequent executions, in the hope that the information gives us a good idea of how well we did the last time.

However, remember that the statement needs to execute at least once for the optimizer to store the actual row counts so that it can compare them to the estimated rows. If dynamic sampling has already been used (because it was needed and was not disabled), cardinality feedback will not be used. And because of the problems that bind variables can introduce (especially with skewed data), cardinality feedback will not be used for the parts of the statement that involve bind variables.
<<<
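
Sketch of checking whether cardinality feedback forced a reparse (&sql_id is a placeholder; the USE_FEEDBACK_STATS column appears in 11.2):

{{{
SELECT child_number, use_feedback_stats
FROM   v$sql_shared_cursor
WHERE  sql_id = '&sql_id';
}}}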
!! adaptive cursor sharing (ACS) - bind_aware
<<<
aimed at fixing performance issues due to bind variable peeking. The basic idea is to try to automatically recognize when a statement might benefit from multiple plans. If the optimizer thinks a statement is a candidate, it is marked as bind aware. Subsequent executions will peek at the bind variables, and new cursors with new plans may result.
<<<
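
Sketch of watching ACS state per cursor (&sql_id is a placeholder):

{{{
SELECT child_number, executions, is_bind_sensitive, is_bind_aware, is_shareable
FROM   v$sql
WHERE  sql_id = '&sql_id';
}}}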
!! sql plan management 

! 12cR1 


!! adaptive query optimization 
<<<
check [[12c Adaptive Optimization]], [[12c Adaptive Plans]], [[12cR2 Adaptive Features]]

https://docs.oracle.com/database/121/TGSQL/tgsql_optcncpt.htm#TGSQL221
https://blogs.oracle.com/optimizer/optimizer-adaptive-features-in-oracle-database-12c-release-2
<<<
Adaptive query optimization is a set of capabilities that enables the optimizer to make run-time adjustments to execution plans and discover additional information intended to lead to better query optimization, especially when existing statistics are insufficient to generate an optimal plan. Adaptive query optimization has two major components:


!!! adaptive execution plans (inflection point)
https://blogs.oracle.com/optimizer/optimizer-adaptive-features-in-oracle-database-12c-release-2
Adaptive Plans includes features addressing:
!!!! Join Methods
!!!! Parallel Distribution Methods


!!! Adaptive Statistics address:
!!!! Adaptive Dynamic Statistics
!!!! Automatic Re-optimization
!!!! SQL Plan Directives
!!!! Automatic Extended Statistics (Group Detection and expression stats)


!! Concurrent Statistics Gathering

! 12cR2 
!! Optimizer Statistics Advisor

! 19c 
!! sql plan management enhancements 
<<<
https://mikedietrichde.com/2019/06/03/automatic-sql-plan-management-in-oracle-database-19c/
<<<




! References 
!! https://apex.oracle.com/database-features/
<<<
go to "database overall" -> "optimizer" 
<<<

!! Plan Stability - Apress Book 
https://www.evernote.com/shard/s48/client/snv?noteGuid=013cd51e-e484-49ac-911b-e01bdd54ac06&noteKey=ce780dd4ca02d3d0b72b493acf8c33fd&sn=https%3A%2F%2Fwww.evernote.com%2Fshard%2Fs48%2Fsh%2F013cd51e-e484-49ac-911b-e01bdd54ac06%2Fce780dd4ca02d3d0b72b493acf8c33fd&title=Plan%2BStability%2B-%2BApress%2BBook

!! optimizer papers 
https://www.oracle.com/technetwork/database/bi-datawarehousing/twp-optimizer-with-oracledb-12c-1963236.pdf


!! RWP - Improving Real-World Performance Through Cursor Sharing
https://docs.oracle.com/en/database/oracle/oracle-database/19/tgsql/improving-rwp-cursor-sharing.html#GUID-971F4652-3950-4662-82DE-713DDEED317C



{{{

AWR_PDB_OPTIMIZER_ENV
AWR_PDB_OPTIMIZER_ENV_DETAILS


select * from AWR_PDB_OPTIMIZER_ENV_DETAILS where OPTIMIZER_ENV_HASH_VALUE = 4217826056;


Examining the Optimizer Environment within Which a SQL Statement was Parsed in AWR (Doc ID 2953121.1)
}}}
https://sites.google.com/site/oraclemonitor/optimizer-mistakes
Choosing An Optimal Stats Gathering Strategy
http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/   <-- good stuff

Restoring the statistics – Oracle Database 10g
http://avdeo.com/2010/11/01/restoring-the-statistics-oracle-database-10g


''Poor Quality Statistics:''
{{{
1) Sample size	
	Inadequate sample sizes
	Infrequently collected samples
	No samples on some objects
	Relying on auto sample collections and not checking what has been collected.
2) Histograms
	Collecting histograms when not needed
	Not collecting histograms when needed.
	Collecting very small sample sizes on histograms.
3) Not using more advanced options like extended statistics to set up correlation between related columns.
4) Collecting statistics at the wrong time
}}}
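
For item 3 above, a sketch of extended (column-group) statistics, 11g+ (SCOTT.EMP and the column pair are placeholders):

{{{
-- declare the correlated column group
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SCOTT', 'EMP', '(deptno, job)') FROM dual;

-- gather stats so the group gets its own statistics
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'EMP', -
  method_opt => 'FOR ALL COLUMNS SIZE AUTO FOR COLUMNS (deptno, job) SIZE AUTO');
}}}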




http://blogs.oracle.com/mt/mt-search.cgi?IncludeBlogs=3361&tag=optimizer%20transformations&limit=20

https://apex.oracle.com/odc_activity  
http://www.oracle.com/technetwork/oem/app-test/etest-101273.html


http://radar.oreilly.com/2011/10/oracles-big-data-appliance.html
http://www.oracle.com/us/corporate/press/512001#sf2272790
http://www.oracle.com/us/technologies/big-data/index.html?origref=http://www.oracle.com/us/corporate/press/512001#sf2272790

''Roll your own Big Data Appliance'' http://www.pythian.com/news/30749/roll-your-own-big-data-appliance/


Instructions to Download/Install/Setup Oracle SQL Connector for Hadoop Distributed File System (HDFS) [ID 1519162.1]
NOTE:1492125.1 - Instructions to Download/Install/Setup CDH3 Client to access HDFS on BDA 1.1
NOTE:1506203.1 - Instructions to Download/Install/Setup CDH4 Client to access HDFS on Oracle Big Data Appliance X3-2
NOTE:1519287.1 - Oracle SQL Connector for Hadoop Distributed File System (HDFS) Sample to Publish Data into External Table
Oracle SQL Connector for Hadoop Distributed File System (HDFS) Sample to Create External Table from Hive Table [ID 1557525.1]

''jan 23 oracle cloud policy'' https://www.google.com/search?q=jan+23+oracle+cloud+policy&oq=jan+23+oracle+cloud+policy&aqs=chrome..69i57.5666j0j7&sourceid=chrome&ie=UTF-8
Licensing Oracle Software in the Cloud Computing Environment http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf
Oracle programs are eligible for Authorized Cloud Environments http://www.oracle.com/us/corporate/pricing/authorized-cloud-environments-3493562.pdf
https://oracle-base.com/blog/2017/01/28/oracles-cloud-licensing-change-be-warned/
http://houseofbrick.com/oracle-gives-itself-a-100-raise-in-authorized-cloud-compute-environments/
http://www.redwoodcompliance.com/dealing-oracles-cloud-licensing-policy/
https://www.pythian.com/blog/oracle-new-public-cloud-licensing-policy-good-or-bad/
Oracle Authorized Cloud Environments Overview of Policy Changes http://www.version1.com/getattachment/3cd6ec0f-2e95-42be-b082-25b32afc71da/Version-1-Oracle-Licensing-in-the-Cloud-Policy
http://madora.co.uk/oracle-changes-the-licensing-rules-for-cloud-again/
https://www.linkedin.com/pulse/dealing-oracles-cloud-licensing-policy-mohammad-inamullah
https://awsinsider.net/articles/2017/01/31/oracle-licensing-cost-for-aws.aspx
http://houseofbrick.com/resources/white-papers/




Stéphane Faroult - Oracle DBA tutorial
http://www.youtube.com/watch?v=yk8esAZKz4k&list=PLD33650E97A140FC8
https://docs.oracle.com/en/database/oracle/developer-tools-for-vscode/getting-started/index.html#Oracle%C2%AE-Database
https://docs.oracle.com/en/database/oracle/developer-tools-for-vscode/21.7.1/licensing-guide/index.html#GUID-8272992F-865C-4F05-8FE4-1BF46082F906
https://github.com/oracle/docker-images
! Oracle Exadata Recipes A Problem-Solution Approach by John Clarke
http://www.apress.com/9781430249146

Summary of the topics per chapter of the Exadata recipes book. You can search through this; it's easier than going through the nested table of contents of the PDF.
{{{

####################################
Part1: Exadata Architecture
####################################

CH1: Exadata Hardware
1-1. Identifying Exadata Database Machine Components
1-2. Displaying Storage Server Architecture Details
1-3. Displaying Compute Server Architecture Details
1-4. Listing Disk Storage Details on the Exadata Storage Servers
1-5. Listing Disk Storage Details on the Compute Servers
1-6. Listing Flash Storage on the Exadata Storage Servers
1-7. Gathering Configuration Information for the InfiniBand Switches

CH2: Exadata Software
2-1. Understanding the Role of Exadata Storage Server Software
2-2. Validating Oracle 11gR2 Databases on Exadata
2-3. Validating Oracle 11gR2 Grid Infrastructure on Exadata
2-4. Locating the Oracle Cluster Registry and Voting Disks on Exadata
2-5. Validating Oracle 11gR2 Real Application Clusters Installation and Database Storage on Exadata
2-6. Validating Oracle 11gR2 Real Application Clusters Networking on Exadata

CH3: How Oracle Works on Exadata
3-1. Mapping Physical Disks, LUNs, and Cell Disks on the Storage Servers
3-2. Mapping ASM Disks, Grid Disks, and Cell Disks
3-3. Mapping Flash Disks to Smart Flash Storage
3-4. Identifying Cell Server Software Processes
3-5. Tracing Oracle I/O Requests on Exadata Compute Nodes
3-6. Validating That Your Oracle RAC Interconnect Is Using InfiniBand
3-7. Tracing cellsrv on the Storage Servers

####################################
Part2: Preparing for Exadata
####################################

CH4: Workload Qualification
4-1. Quantifying I/O Characteristics of Your Current Database
4-2. Conducting a Smart Scan Fit Analysis Using AWR
4-3. Conducting a Smart Scan Fit Analysis Using Exadata Simulation
4-4. Performing a Hybrid Columnar Compression Fit Assessment

CH5: Sizing Exadata
5-1. Determining CPU Requirements
5-2. Determining IOPs Requirements
5-3. Determining I/O Bandwidth Requirements
5-4. Determining ASM Redundancy Requirements
5-5. Forecasting Storage Capacity
5-6. Planning for Database Growth
5-7. Planning for Disaster Recovery
5-8. Planning for Backups
5-9. Determining Your Fast Recovery Area and RECO Disk Group Size Requirements

CH6: Preparing for Exadata
6-1. Planning and Understanding Exadata Networking
6-2. Configuring DNS
6-3. Running checkip.sh
6-4. Customizing Your InfiniBand Network Configuration
6-5. Determining Your DATA and RECO Storage Requirements
6-6. Planning for ASM Disk Group Redundancy
6-7. Planning Database and ASM Extent Sizes
6-8. Completing the Pre-Delivery Survey
6-9. Completing the Configuration Worksheet

####################################
Part3: Exadata Administration
####################################

CH7: Administration and Diagnostics Utilities
7-1. Logging in to the Exadata Compute and Storage Cells Using SSH
7-2. Configuring SSH Equivalency
7-3. Locating Key Configuration Files and Directories on the Cell Servers
7-4. Locating Key Configuration Files and Directories on the Compute Nodes
7-5. Starting and Stopping Cell Server Processes
7-6. Administering Storage Cells Using CellCLI
7-7. Administering Storage Cells Using dcli
7-8. Generating Diagnostics from the ILOM Interface
7-9. Performing an Exadata Health Check Using exachk
7-10. Collecting Compute and Cell Server Diagnostics Using the sundiag.sh Utility
7-11. Collecting RAID Storage Information Using the MegaCLI utility
7-12. Administering the Storage Cell Network Using ipconf
7-13. Validating Your InfiniBand Switches with the CheckSWProfile.sh Utility
7-14. Verifying Your InfiniBand Network Topology
7-15. Diagnosing Your InfiniBand Network
7-16. Connecting to Your Cisco Catalyst 4948 Switch and Changing Switch Configuration

CH8: Backup and Recovery
8-1. Backing Up the Storage Servers
8-2. Displaying the Contents of Your CELLBOOT USB Flash Drive
8-3. Creating a Cell Boot Image on an External USB Drive
8-4. Backing Up Your Compute Nodes Using Your Enterprise Backup Software
8-5. Backing Up the Compute Servers Using LVM Snapshots
8-6. Backing Up Your Oracle Databases with RMAN
8-7. Backing Up the InfiniBand Switches
8-8. Recovering Storage Cells from Loss of a Single Disk
8-9. Recovering Storage Cells from Loss of a System Volume Using CELLBOOT Rescue
8-10. Recovering from a Failed Storage Server Patch
8-11. Recovering Compute Server Using LVM Snapshots
8-12. Reimaging a Compute Node
8-13. Recovering Your InfiniBand Switch Configuration
8-14. Recovering from Loss of Your Oracle Cluster Registry and Voting Disks

CH9: Storage Administration
9-1. Building ASM Disk Groups on Exadata
9-2. Properly Configuring ASM Disk Group Attributes on Exadata
9-3. Identifying Unassigned Grid Disks
9-4. Configuring ASM Redundancy on Exadata
9-5. Displaying ASM Partner Disk Relationships on Exadata
9-6. Measuring ASM Extent Balance on Exadata
9-7. Rebuilding Cell Disks
9-8. Creating Interleaved Cell Disks and Grid Disks
9-9. Rebuilding Grid Disks
9-10. Setting smart_scan_capable on ASM Disk Groups
9-11. Creating Flash Grid Disks for Permanent Storage	 

CH10: Network Administration
10-1. Configuring the Management Network on the Compute Nodes
10-2. Configuring the Client Access Network
10-3. Configuring the Private Interconnect on the Compute Nodes
10-4. Configuring the SCAN Listener
10-5. Managing Grid Infrastructure Network Resources
10-6. Configuring the Storage Server Ethernet Network
10-7. Changing IP Addresses on Your Exadata Database Machine

CH11: Patching and Upgrades
11-1. Understanding Exadata Patching Definitions, Alternatives, and Strategies
11-2. Preparing to Apply Exadata Patches
11-3. Patching Your Exadata Storage Servers
11-4. Patching Your Exadata Compute Nodes and Databases
11-5. Patching the InfiniBand Switches
11-6. Patching Your Enterprise Manager Systems Management Software

CH12: Security
12-1. Configuring Multiple Oracle Software Owners on Exadata Compute Nodes
12-2. Installing Multiple Oracle Homes on Your Exadata Compute Nodes
12-3. Configuring ASM-Scoped Security
12-4. Configuring Database-Scoped Security

####################################
Part4: Monitoring Exadata
####################################

CH13: Monitoring Exadata Storage Cells
13-1. Monitoring Storage Cell Alerts
13-2. Monitoring Cells with Active Requests
13-3. Monitoring Cells with Metrics
13-4. Configuring Thresholds for Cell Metrics
13-5. Using dcli with Special Characters
13-6. Reporting and Summarizing metrichistory Using R
13-7. Reporting and Summarizing metrichistory Using Oracle and SQL
13-8. Detecting Cell Disk I/O Bottlenecks
13-9. Measuring Small I/O vs. Large I/O Requests
13-10. Detecting Grid Disk I/O Bottlenecks
13-11. Detecting Host Interconnect Bottlenecks
13-12. Measuring I/O Load and Waits per Database, Resource Consumer Group, and Resource Category

CH14: Host and Database Performance Monitoring
14-1. Collecting Historical Compute Node and Storage Cell Host Performance Statistics
14-2. Displaying Real-Time Compute Node and Storage Cell Performance Statistics
14-3. Monitoring Exadata with Enterprise Manager
14-4. Monitoring Performance with SQL Monitoring
14-5. Monitoring Performance by Database Time
14-6. Monitoring Smart Scans by Database Time and AAS
14-7. Monitoring Exadata with Wait Events
14-8. Monitoring Exadata with Statistics and Counters
14-9. Measuring Cell I/O Statistics for a SQL Statement

####################################
Part5: Exadata Software
####################################

CH15: Smart Scan and Cell Offload
15-1. Identifying Cell Offload in Execution Plans
15-2. Controlling Cell Offload Behavior
15-3. Measuring Smart Scan with Statistics
15-4. Measuring Offload Statistics for Individual SQL Cursors
15-5. Measuring Offload Efficiency
15-6. Identifying Smart Scan from 10046 Trace Files
15-7. Qualifying for Direct Path Reads
15-8. Influencing Exadata’s Decision to Use Smart Scans
15-9. Identifying Partial Cell Offload
15-10. Dealing with Fast Object Checkpoints

CH16: Hybrid Columnar Compression
16-1. Estimating Disk Space Savings for HCC
16-2. Building HCC Tables and Partitions
16-3. Contrasting Oracle Compression Types
16-4. Determining the Compression Type of a Segment
16-5. Measuring the Performance Impact of HCC for Queries
16-6. Direct Path Inserts into HCC Segments
16-7. Conventional Inserts to HCC Segments
16-8. DML and HCC
16-9. Decompression and the Performance Impact

CH17: I/O Resource Management and Instance Caging
17-1. Prioritizing I/O Utilization by Database
17-2. Limiting I/O Utilization for Your Databases
17-3. Managing Resources within a Database
17-4. Prioritizing I/O Utilization by Category of Resource Consumers
17-5. Prioritizing I/O Utilization by Categories of Resource Consumers and Databases
17-6. Monitoring Performance When IORM Is Enabled
17-7. Obtaining IORM Plan Information
17-8. Controlling Smart Flash Cache and Smart Flash Logging with IORM
17-9. Limiting CPU Resources with Instance Caging

CH18: Smart Flash Cache and Smart Flash Logging
18-1. Managing Smart Flash Cache and Smart Flash Logging
18-2. Determining Which Database Objects Are Cached
18-3. Determining What’s Consuming Your Flash Cache Storage
18-4. Determining What Happens When Querying Uncached Data
18-5. Measuring Smart Flash Cache Performance
18-6. Pinning Specific Objects in Smart Flash Cache
18-7. Quantifying Benefits of Smart Flash Logging

CH19: Storage Indexes
19-1. Measuring Performance Impact of Storage Indexes
19-2. Measuring Storage Index Performance with Not-So-Well-Ordered Data
19-3. Testing Storage Index Behavior with Different Query Predicate Conditions
19-4. Tracing Storage Index Behavior
19-5. Tracing Storage Indexes When More than Eight Columns Are Referenced
19-6. Tracing Storage Indexes when DML Is Issued against Tables
19-7. Disabling Storage Indexes
19-8. Troubleshooting Storage Indexes

####################################
Post Implementation Tasks
####################################

CH20: Post-Installation Monitoring Tasks
20-1. Installing Enterprise Manager 12c Cloud Control Agents for Exadata
20-2. Configuring Enterprise Manager 12c Cloud Control Plug-ins for Exadata
20-3. Configuring Automated Service Requests

CH21: Post-Install Database Tasks
21-1. Creating a New Oracle RAC Database on Exadata
21-2. Setting Up a DBFS File System on Exadata
21-3. Configuring HugePages on Exadata
21-4. Configuring Automatic Degree of Parallelism
21-5. Setting I/O Calibration on Exadata
21-6. Measuring Impact of Auto DOP and Parallel Statement Queuing
21-7. Measuring Auto DOP and In-Memory Parallel Execution
21-8. Gathering Optimizer Statistics on Exadata


}}}

Expect Lifetime Support
With Oracle Support, you know up front and with certainty how long your Oracle products
are supported. The Lifetime Support Policy provides access to technical experts for as long
as you license your Oracle products and consists of three support stages: Premier Support,
Extended Support, and Sustaining Support. It delivers maximum value by providing you
with rights to major product releases so you can take full advantage of technology and
product enhancements. Your technology and your business keep moving forward together.
Premier Support provides a standard five-year support policy for Oracle Technology and
Oracle Applications products. You can extend support for an additional three years with
Extended Support for specific releases, or receive indefinite technical support with
Sustaining Support.


Premier Support
As an Oracle customer, you can expect the best with Premier Support, our award-winning,
next-generation support program. Premier Support provides you with maintenance and
support of your Oracle Database, Oracle Fusion Middleware, and Oracle Applications for five
years from their general availability date. You benefit from
• Major product and technology releases
• Technical support
• Updates, fixes, security alerts, data fixes, and critical patch updates
• Tax, legal, and regulatory updates
• Upgrade scripts
• Certification with most new third-party products/versions
• Certification with most new Oracle products


Extended Support
Your technology future is assured with Oracle's Extended Support. Extended Support lets
you stay competitive, with the freedom to upgrade on your timetable. If you take advantage
of Extended Support, it provides you with an extra three years of support for specific Oracle
releases for an additional fee. You benefit from
• Major product and technology releases
• Technical support
• Updates, fixes, security alerts, data fixes, and critical patch updates
• Tax, legal, and regulatory updates
• Upgrade scripts
• Certification with most existing third-party products/versions
• Certification with most existing Oracle products
Extended Support may not include certification with some new third-party
products/versions.


Sustaining Support
Sustaining Support puts you in control of your upgrade strategy. When Premier Support
expires, if you choose not to purchase Extended Support, or when Extended Support expires,
Sustaining Support will be available for as long as you license your Oracle products. With
Sustaining Support, you receive technical support, including access to our online support
tools, knowledgebases, and technical support experts. You benefit from
• Major product and technology releases
• Technical support
• Access to OracleMetaLink/PeopleSoft Customer Connection/Hyperion e-Support
• Fixes, updates, and critical patch updates created during the Premier Support stage
• Upgrade scripts created during the Premier Support stage
Sustaining Support does not include
• New updates, fixes, security alerts, data fixes, and critical patch updates
• New tax, legal, and regulatory updates
• New upgrade scripts
• Certification with new third-party products/versions
• Certification with new Oracle products
For more specifics on Premier Support, Extended Support, and Sustaining Support, please refer to
Oracle's Technical Support Policies.
https://cloud.oracle.com/en_US/paas
https://cloud.oracle.com/management
https://docs.oracle.com/cloud/latest/em_home/index.html
<<<
Oracle Management Cloud uses a broad array of machine learning techniques, including the following:
» Anomaly detection. Flags unusual resource usage and identifies configuration changes.
» Clustering. Filters out signal from noise; aggregates topology-based data.
» Correlation. Groups and alerts on related symptoms; discovers dependencies.
» Prediction. Forecasts outages before they happen; plans capacity and resources.
<<<

http://www.oracle.com/us/solutions/cloud/oracle-management-cloud-brief-2714883.pdf
https://www.forbes.com/sites/oracle/2018/07/23/machine-learning-and-it-jobs-early-lessons-learned-from-system-monitoring/#6ac97dc85ef3
https://www.slideshare.net/DheerajHiremath1/oracle-management-cloud-65816440
http://courtneyllamas.com/category/oracle-management-cloud/
https://cloud.oracle.com/_downloads/eBook_OMC/Oracle_Management_Cloud_eBook.pdf

<<showtoc>>


! documentation 
https://docs.oracle.com/en/database/oracle/oracle-rest-data-services/18.4/index.html


! articles 
https://www.slideshare.net/hillbillyToad/oracle-rest-data-services-options-for-your-web-services
https://www.thatjeffsmith.com/archive/2019/02/ords-architecture-a-common-deployment-overview/
https://oracle-base.com/articles/misc/an-introduction-to-json-support-in-the-oracle-database
https://oracle-base.com/articles/misc/articles-misc#ords
https://twiki.cern.ch/twiki/bin/view/DB/DevelopingOracleRestfulServices

https://blogs.oracle.com/sql/how-to-store-query-and-create-json-documents-in-oracle-database


! youtube 
Oracle REST Data Services Product Walk-through and Demonstration https://www.youtube.com/watch?v=rvxTbTuUm5k   <-- GOOD STUFF

Configure/Install and Trouble shooting ORACLE ORDS , with or Without APEX to Run WEB SERVICES (JSON) https://www.youtube.com/watch?v=d6Dl6Dh4zFc
How To create (get and POST ) webservice in Oracle APEX 5 in less than 10 min step by step https://www.youtube.com/watch?v=fD-o73AhzpQ
Creating and Using a RESTful Web Service in Application Express 4.2 https://www.youtube.com/watch?v=gkCvd6P8_OU 
REST API Programming ; Create one in under 8 minutes with APEX's REST API Creator https://www.youtube.com/watch?v=RGq4KuEKW3Q
Cloud PaaS and IaaS - How To Videos https://www.youtube.com/channel/UCoLZREsDUGWqBIBL_bBM2cg/search?query=REST
Make the RDBMS Relevant Again with RESTful Web Services and JSON https://www.youtube.com/watch?v=PohxnQbwTzA
AskTOM Office Hours: Building REST APIs with Node.js and Oracle - Part 1 https://www.youtube.com/watch?v=BghtqQOFyi4
oracle REST Data Service in 5 easy steps https://www.youtube.com/watch?v=fi8gGwNEO9M
https://www.youtube.com/results?search_query=ORDS+REST+api
Making & Consuming REST Web Services using ORDS & APEX https://www.youtube.com/watch?v=OvCgpKtEYBg



! ORDS REST API for Database
https://docs.oracle.com/en/database/oracle/oracle-database/19/dbrst/op-database-datapump-jobs-post.html




! ORDS performance 

!! ORDS benchmarking 
https://github.com/giltene/wrk2
https://github.com/wg/wrk
https://twitter.com/OracleREST/status/1100171448948346880 "500 requests per second, on a free ORDS and XE stack"
https://telegra.ph/Oracle-XE-184-Free-High-load-02-25-2
https://dsavenko.me/oracledb-apex-ords-tomcat-httpd-centos7-all-in-one-guide-introduction/

!! ORDS load balancing 	
http://krisrice.io/2019-04-17-ORDS-Consul-Fabio/









! tutorials
Machine Learning with R in Oracle Database https://community.oracle.com/docs/DOC-1013840



! Software
R for windows - http://cran.cnr.berkeley.edu/
IDE - http://rstudio.org/


! Use case
''R and BIEE''
{{{
R plugins to Oracle (Oracle R Enterprise  packages)
Oracle - sys.rqScriptCreate, rqRowEval (parallelism applicable to rq.groupEval and rq.rowEval)
BI publisher consumes XML output from sys.rqScriptCreate for graphs
BIP and OBIEE can also execute R scripts and sys.rqScriptCreate
}}}

''R and Hadoop''
{{{
Hadoop - “Technically, Hadoop consists of two key services: reliable data storage using the Hadoop Distributed File  System (HDFS) 
                and high-performance parallel data processing using a technique called MapReduce.”
with R and Hadoop, you can pretty much do everything in R interface
}}}

''R built-in statistical functions in Oracle'' 
* these are the functions that I used for building the [[r2project]] - a regression analysis tool for Oracle workload performance
* these built-in functions are ''__not__'' smart scan offloadable
{{{
SQL> r
  1  select name , OFFLOADABLE from v$sqlfn_metadata
  2* where lower(name) like '%reg%'

NAME                           OFF
------------------------------ ---
REGR_SLOPE                     NO
REGR_INTERCEPT                 NO
REGR_COUNT                     NO
REGR_R2                        NO
REGR_AVGX                      NO
REGR_AVGY                      NO
REGR_SXX                       NO
REGR_SYY                       NO
REGR_SXY                       NO
REGEXP_SUBSTR                  YES
REGEXP_INSTR                   YES
REGEXP_REPLACE                 YES
REGEXP_COUNT                   YES

13 rows selected.
}}}

''R Enterprise packages''
* the sys.rqScriptCreate could be part of the Oracle R Enterprise packages that make use of SELECT SQLs and PARALLEL options (on hints/objects)  & that's how it utilizes the Exadata offloading
{{{
SQL> select name , OFFLOADABLE from v$sqlfn_metadata
  2  where lower(name) like '%rq%';

no rows selected
}}}


! References
http://www.oracle.com/technetwork/database/options/advanced-analytics/r-enterprise/index.html
http://blogs.oracle.com/R

''Oracle By Example'': Oracle R Enterprise Tutorial Series http://goo.gl/IKd6Q

Oracle R Enterprise Training 1 - Getting Started - http://goo.gl/krOrH
Oracle R Enterprise Training 2 - Introduction to R - http://goo.gl/EGEbn
Oracle R Enterprise Training 3 - Transparency Layer - http://goo.gl/vjvu7
Oracle R Enterprise Training 4 - Embedded R Scripts - http://goo.gl/aZXui
Oracle R Enterprise Training 5 - Operationalizing R Scripts - http://goo.gl/JNRFf
Oracle R Enterprise Training 6 - Advanced Topics - http://goo.gl/ziNs1

How to Import Data from External Files in R http://answers.oreilly.com/topic/1629-how-to-import-data-from-external-files-in-r/

Oracle R install http://husnusensoy.wordpress.com/2012/10/25/oracle-r-enterprise-configuration-on-oracle-linux/
Using the R Language with an Oracle Database. http://dbastreet.com/blog/?p=913

Shiny web app http://www.r-bloggers.com/introducing-shiny-easy-web-applications-in-r/

Plotting AWR database metrics using R http://dbastreet.com/blog/?p=946

Coursera 4 week course  http://www.r-bloggers.com/videos-from-courseras-four-week-course-in-r/

Andy Klock's R reference https://www.evernote.com/shard/s242/sh/26a0913e-cead-4574-a253-aaf6c733bdbe/563543114559066dcb8141708c5c89a2

https://blogs.oracle.com/R/entry/r_to_oracle_database_connectivity

Oracle on R http://www.r-bloggers.com/connecting-r-to-an-oracle-database-with-rjdbc/ , http://www.r-bloggers.com/author/michael-j-bommarito-ii/





https://blogs.sap.com/2009/02/09/oracle-real-application-testing-with-sap/
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/database/features/information-management/streams-fov-11g-134280.pdf <-- datasheet

The beauty of Oracle Streams
http://geertdepaep.wordpress.com/2007/11/24/the-beauty-of-oracle-streams/

How To: Setup up of Oracle Streams Replication
http://apunhiran.blogspot.com/2009/07/how-to-setup-up-of-oracle-streams.html

http://www.scribd.com/doc/123218/Oracle-Streams-Step-by-Step-Doc
http://www.scribd.com/doc/123217/Oracle-Streams-Step-by-Step-PPT

http://dbataj.blogspot.com/2008/01/oracle-streams-setup-between-two.html
http://www.oracle-base.com/articles/9i/Streams9i.php

http://prodlife.wordpress.com/2009/03/03/a-year-with-streams/
http://prodlife.wordpress.com/2008/02/21/oracle-streams-replication-example/
http://prodlife.wordpress.com/2009/05/05/streams-on-rac/

Oracle Streams Configuration: Change Data Capture http://it.toolbox.com/blogs/oracle-guide/oracle-streams-configuration-change-data-capture-13501
Advanced Queues and Streams: A Definition in Plain English http://it.toolbox.com/blogs/oracle-guide/advanced-queues-and-streams-a-definition-in-plain-english-3677

http://psoug.org/reference/streams_demo1.html

Implementing Replication with Oracle Streams Ashish Ray
https://docs.google.com/viewer?url=http://www.projects.ed.ac.uk/areas/student/euclid/STU139/Other_documents/StreamsAndReplication.pdf

https://docs.google.com/viewer?url=http://www.nocoug.org/download/2007-05/Streams_Presentation.ppt
https://docs.google.com/viewer?url=http://www.nocoug.org/download/2007-05/Streams_White_Paper.doc

Oracle® Streams Replication Administrator's Guide 11g Release 1 (11.1) http://download.oracle.com/docs/cd/B28359_01/server.111/b28322/best_capture.htm

https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/database/features/availability/311396-2-128440.pdf

http://www.colestock.com/blogs/2006/01/how-to-stream-10g-release-1-example.html

http://rohitsinhago.blogspot.com/2009/05/oracle-streams-performance-tests.html

https://docs.google.com/viewer?url=http://www.go-faster.co.uk/mv.dbmssig.20070717.ppt <-- MViews for replication

Oracle® Streams for Near Real Time Asynchronous Replication https://docs.google.com/viewer?url=http://www.cs.berkeley.edu/~nimar/papers/streams-diddr-05.pdf 

http://www.scribd.com/doc/7979240/Oracle-White-Paper-Using-Oracle-Streams-Advanced-Queueing-Best-Practices



''Oracle Official References'' 

Oracle 11g Streams
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/database/features/data-integration/twp-streams-11gr1-134658.pdf

Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/database/features/availability/maa-10gr2-streams-configuration-132039.pdf

Oracle9i Replication
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/database/features/data-integration/oracle-adv-replication-twp-132415.pdf
https://docs.oracle.com/cd/E87041_01/index.htm
https://docs.oracle.com/cd/E87041_01/PDF/OUA_Developer_Guide_2.7.0.pdf
https://docs.oracle.com/cd/E87041_01/PDF/OUA_Admin_Guide_2.7.0.0.12.pdf
Resource Management as an Enabling Technology for Virtualization
http://www.oracle.com/technetwork/articles/servers-storage-admin/resource-mgmt-for-virtualization-1890711.html
https://github.com/oracle/vagrant-boxes
https://www.oracle.com/technetwork/community/developer-vm/index.html
Secure Database Passwords in an Oracle Wallet
http://www.idevelopment.info/data/Oracle/DBA_tips/Security/SEC_15.shtml
oracle XA and dbms_pipe type applications (old programs, c programs)
* a lot of dbms_pipe went to AQ because AQ supports RAC
global_txn_processes

http://www.oracle-base.com/articles/11g/dbms_xa_11gR1.php

https://aws.amazon.com/blogs/database/how-to-solve-some-common-challenges-faced-while-migrating-from-oracle-to-postgresql/
https://aws.amazon.com/blogs/database/how-to-migrate-your-oracle-database-to-postgresql/
Oracle Database 11g/12c To Amazon Aurora with PostgreSQL Compatibility (9.6.x) https://d1.awsstatic.com/whitepapers/Migration/oracle-database-amazon-aurora-postgresql-migration-playbook.pdf
Best Practices for AWS Database Migration Service https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html

''how to search patents:'' http://timurakhmadeev.wordpress.com/2012/03/22/patents/
''use this:'' https://www.google.com/?tbm=pts&gws_rd=ssl#tbm=pts&q=assignee:oracle

see also - [[ASH]], VisualSQLTuning



! ASH
ASH patent http://www.google.com/patents?id=cQWbAAAAEBAJ&pg=PA2&source=gbs_selected_pages&cad=3#v=onepage&q&f=false

! VST
Dan Tow Memory structure and method for tuning a database statement using a join-tree data structure representation, including selectivity factors, of a master table and detail table http://www.freepatentsonline.com/5761654.html
Mozes, Ari Method and system for sample size determination for database optimizers http://www.freepatentsonline.com/6732085.html

! Adaptive thresholds
http://www.docstoc.com/docs/56167536/Graphical-Display-And-Correlation-Of-Severity-Scores-Of-System-Metrics---Patent-7246043


{{{
JB Patents

# Diagnosing Database Performance Problems Using a Plurality of Wait Classes
United States Patent 7,555,499 B2 Issued June 30, 2009
Inventors: John Beresniewicz, Vipul Shah, Hsiao Su, Kyle Hailey, and others
Covers the method and apparatus for diagnosing database performance problems using breakdown of time spent in database by wait classes, as presented in the Oracle Enterprise Manager Performance and Top Activity screens and the workflows that issue from them.

# Graphical Display and Correlation of Severity Scores of System Metrics
United States Patent 7,246,043 B2 Issued July 17, 2007
Inventors: John Beresniewicz, Amir Najmi, Jonathan Soule
Covers the technique for scoring and graphical display of severity of database system metric values by normalizing over a statistical characterization of expected values such that meaningful abnormalities are emphasized and normal values dampened.

# Automatic Determination of High Significance Alert Thresholds for System Performance Metrics Using an Exponentially Tailed Model
United States Patent 7,225,103 Issued May 29, 2007
Inventors: John Beresniewicz, Amir Najmi
Covers the fitting of an exponential model to the upper percentile subsets of observed values for system performance metrics, accounting for common temporal variations in expected workloads. The model parameters are used to automatically generate, set and adjust alert thresholds for detecting anomalous system behavior.
}}}

8051486	Indicating SQL injection attack vulnerability with a stored value http://www.patentgenius.com/patent/8051486.html
7246043	Graphical display and correlation of severity scores of system metrics http://www.patentgenius.com/patent/7246043.html
7225103	Automatic determination of high significance alert thresholds for system performance metrics using an exponentially tailed model
U.S. Patent Number: 7,246,043 Graphical display and correlation of severity scores of system metrics
U.S. Patent Number: 7,225,103 Automatic determination of high significance alert thresholds for system performance metrics using an exponentially tailed model


''Kevin Closson patents''
http://www.patentgenius.com/inventedby/ClossonKevinAForestGroveOR.html


! Exadata Patents
{{{
Boris Erlikhman http://goo.gl/2LvXU
                smart scan http://goo.gl/chy2s
                flash cache http://goo.gl/YlCA7
                smart flash log http://goo.gl/TwyRx
                write back cache http://goo.gl/2WCmw

Roger Macnicol http://goo.gl/oxxu7
                hcc http://goo.gl/9ptFe, http://goo.gl/3IOSi

Sue Lee http://goo.gl/6WCFw, http://goo.gl/bI0pd
                iorm http://goo.gl/BHIc1
}}}
[img(70%,70%)[ https://i.imgur.com/zbAYaZF.png]]


QOS http://goo.gl/B2XOp
Consolidation Planner http://goo.gl/M45nL

Direct IO https://www.google.com/patents/US8224813
Optimizer COST model https://www.google.com/patents/US6957211
Parallel partition-wise joins https://www.google.com/patents/US6609131
partition pruning https://www.google.com/patents/US6965891
On-line transaction processing (OLTP) compression and re-compression of database data https://www.google.com/patents/US8392382
Storing row-major data with an affinity for columns https://www.google.com/patents/US20130024612
SQL Execution Plan Baselines https://www.google.com/patents/US20090106306


Cecilia Gervasio Grant  compare AWR snapshots http://goo.gl/WPyGm6





11g
https://docs.google.com/viewer?url=http://www.oracle.com/us/products/database/039449.pdf
Differences Between Enterprise, Standard and Standard One Editions on Oracle 11.2 [ID 1084132.1]

10g
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/database/database10g/overview/twp-general-10gdb-product-family-132973.pdf

9i
https://docs.google.com/viewer?url=http://www.magnifix.com/pdf/9idb_features.pdf
To ramp up my Exadata learning I have to make use of various media and do multiple reads/references across them. One useful medium is Oracle By Example, which has tons of video tutorials/demos available. Just go to this site http://goo.gl/Egd1W and copy-paste the topics that are mentioned here http://goo.gl/WGNaw


Advisor Webcast Archived Recordings [ID 740964.1]
Database https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=740964.1#data
OEM https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=740964.1#em
Exadata https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=740964.1#exadata



!
! ''Exadata''
The Magic of Exadata
Configuring DCLI
Installing and Configuring Enterprise Manager Exadata Plug-in (Part 1)
Installing and Configuring Enterprise Manager Exadata Plug-in (Part 2)
Exadata Cell First Boot Initialization
Exadata Calibrate and Cell/Grid Disks Configuration
Configuring ASM Disk Groups for Exadata
IORM and Exadata
Possible Execution Plans with Exadata Offloading http://goo.gl/FT2wj
<<<
{{{

show parameter offload

cell_offload_plan_display
cell_offload_processing
-- for future use params 
cell_partition_large_extents
cell_offload_compaction
cell_offload_parameters


-- possibilities would be
offloading a FTS - table access storage full					/*+ PARALLEL FULL(s) */
offloading a Full index scans - index storage fast full scan    /*+ PARALLEL INDEX_FFS(s mysales_cust_id_indx) */
offload in HASH JOINS 											/*+ PARALLEL */
bloom filter - SYS_OP_BLOOM_FILTER on predicate 				/*+ PARALLEL */
}}}
<<<
Exadata Automatic Reconnect
Exadata Cell Failure Scenario

''-- "tagged as Exadata"''
Check out the series here [[OBE Exadata 1 to 25]] and here [[Exadata Best Practices Series]] ! ! !
Managing Parallel Processing with the Database Resource Manager Demo    19-Nov-10       60 mins
Using Exadata Smart Scan        Video   19-Aug-10       4 mins
Hybrid Columnar Compression     Demo    01-Oct-09       22 mins
Smart Flash Cache Architecture  Demo    01-Oct-09       8 mins
Cell First Boot Demo    01-Sep-09       5 mins
Cell Configuration      Demo    01-Sep-09       10 mins
Smart Scan Scale Out Example    Demo    01-Sep-09       10 mins
Smart Flash Cache Monitoring    Demo    01-Sep-09       25 mins
Configuring DCLI        Demo    01-Jul-07       5 mins
Installing and Configuring Enterprise Manager Exadata Plug-in (Part 2)  Demo    01-Jul-07       30 mins
Installing and Configuring Enterprise Manager Exadata Plug-in (Part 1)  Demo    01-Jul-07       24 mins
Exadata Cell First Boot Initialization  Demo    01-Jul-07       12 mins
Exadata Calibrate and Cell/Grid Disks Configuration     Demo    01-Jul-07       12 mins
Configuring ASM Disk Groups for Exadata Demo    01-Jul-07       8 mins
IORM and Exadata        Demo    01-Jul-07       40 mins
Real Performance Tests with Exadata     Demo    01-Jul-07       42 mins http://goo.gl/roFLK
<<<
{{{
cat ./mon
./dcli -g cells -l root --vmstat="2"

cat test.sh
#!/bin/bash

B=$SECONDS
sqlplus test/test @ss_q1.sql
sqlplus test/test @ss_q2.sql
sqlplus test/test @ss_q3.sql
sqlplus test/test @ss_q4.sql

(( TM = $SECONDS - $B ))
echo "All queries completed in $TM seconds"


cat ss_q1.sql
spool ss_q1
set timing on
spool off
}}}
<<<
Exadata Automatic Reconnect     Demo    01-Jul-07       12 mins
Exadata Cell Failure Scenario   Demo    01-Jul-07       10 mins

!
! ''Manageability:''
Using SQL Baselines
Using Metric Baselines
Transport a tablespace version to another database

!
! ''Automatic Storage Management (ASM):''
Install ASM single instance in its own home
Install ASM single instance in the same home
Migrate a database to ASM
Setup XML DB to access ASM
Access ASM files using ASMCMD
Real Application Clusters (RAC)

!
! ''RAC Deployment Series (Beta):''
Setting Up RAC Storage
Setting Up Openfiler Storage
Setting Up iSCSI On Client Side
Using fdisk to Partition Storage
Setting Up Multipathing On Client Side
Installing and Configuring ASMLib
Setting Up Storage Permissions On Client Side
Installing Oracle Clusterware
Installing Real Application Clusters
Configuring ASM Storage
Installing Oracle Database Single Instance Software (Part I)
Installing Oracle Database Single Instance Software (Part II)
Creating Single Instance Database
Protecting Single Instance Database Using Oracle Clusterware
Converting Single Instance Database to RAC Database
Adding a Node to Your Cluster
Extending Oracle Clusterware to Third Node
Extending RAC Software to Third Node
Extending RAC Database to Third Node
Rolling Upgrade Your Entire Cluster
Creating a RAC Physical Standby Database
Installing and Configuring OCFS2
Setting Up RAC Primary Database in Archivelog Mode
Backing Up RAC Primary Database
Configuring Oracle Network Services on Clustered Standby Site
Creating RAC Physical Standby Database Using OCFS2 Storage
Checking RAC Physical to RAC Standby databases Communication
Converting RAC Physical Standby Database to RAC Logical Standby Database
Rolling Upgrade Oracle Clusterware
Rolling Upgrade Oracle Clusterware on Clustered Primary Site (10.2.0.1 to 10.2.0.2)
Rolling Upgrade Oracle Clusterware on Clustered Standby Site (10.2.0.1 to 10.2.0.2)
Upgrading your RAC Standby Site
Upgrading RAC Standby Database From 10.2.0.1 to 10.2.0.2 (Part I)
Upgrading RAC Standby Database From 10.2.0.1 to 10.2.0.2 (Part II)
Switching Primary and Standby Databases Roles
Upgrading your old RAC Primary Site
Upgrading RAC Old Primary Database From 10.2.0.1 to 10.2.0.2 (Part I)
Upgrading RAC Old Primary Database From 10.2.0.1 to 10.2.0.2 (Part II)
Switching Back Primary and Standby Databases Roles

!
! ''Miscellaneous:''
RAC scale example
RAC speedup example
Use Transparent Application Failover (TAF) with SELECT statements

!
! ''Oracle Clusterware:''
Use Oracle Clusterware to protect the apache application
Use Oracle Clusterware to protect the Xclock application
RAC Voting Disk Multiplexing
Patch Oracle Clusterware in a Rolling Fashion
CSS Diagnostic Case Study
RAC OCR Mirroring

!
! ''Services:''
Runtime Connection Load Balancing example
Basic use of services in your RAC environment

!
! ''Installs and Enterprise Manager:''
Install ASM in its own home in a RAC environment
Convert a single-instance database to a RAC database using Grid Control
Push Management Agent software using Grid Control
Clone Oracle Clusterware to extend your cluster using Grid Control
Clone ASM home to extend your cluster using Grid Control
Clone database home to extend your cluster using Grid Control
Add a database instance to your RAC database using Grid Control

!
! ''RAC Concepts:''
RAC VIP Concepts
RAC Object Affinity Concepts
Rolling Release Upgrade (Beta): 10.2.0.1 to 10.2.0.2:
Upgrading your Standby Site
Upgrading RAC Standby Database From 10.2.0.1 to 10.2.0.2 (Part I)
Upgrading RAC Standby Database From 10.2.0.1 to 10.2.0.2 (Part II)
Switching Primary and Standby Databases Roles
Upgrading your old Primary Site
Upgrading RAC Old Primary Database From 10.2.0.1 to 10.2.0.2 (Part I)
Upgrading RAC Old Primary Database From 10.2.0.1 to 10.2.0.2 (Part II)
Switching Back Primary and Standby Databases Roles


https://cloud.oracle.com

''tutorial'' http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/dbservice/dataload/dataload.html
Oracle Compute Cloud Service Foundations https://www.pluralsight.com/courses/oracle-compute-cloud-service-foundations

Product announcement
http://www.ome-b.nl/2011/09/22/finally-the-oracle-database-appliance/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+orana+%28OraNA%29

Step by Step install and some other screenshots
http://www.evernote.com/shard/s48/sh/0d565394-1a58-4578-9fc8-e53aa52c4eca/8f360ef4395fa999e32d2c77358ee613

Video intros
http://goo.gl/kWlT3
Unloading history - old Oracle7 dictionary
http://www.ora600.be/node/10707

http://oss.oracle.com/ksplice/docs/ksplice-quickstart.pdf

Using Oracle Ksplice to Update Oracle Linux Systems Without Rebooting
http://www.oracle.com/technetwork/articles/servers-storage-admin/ksplice-linux-518455.html

http://www.freelists.org/post/oracle-l/Reinstall-OS-completely-without-reinstalling-Oracle,6
{{{
I thought of this article as a full OS upgrade just like your case.. but it
seems like it is really just the kernel upgrade/patches..

then I tweeted @wimcoekaerts just out of curiosity..   @wimcoekaerts Do I
still have to relink my Oracle Home even I use ksplice for OS upgrade?
goo.gl/R13Op

this is his response  @karlarao <http://twitter.com/karlarao> No. ksplice
isn't really "os upgrade" in the normal sense. it updates the running kernel
(in memory). no need to relink or restart


So on my notes on the section "APPENDIX A - Some FAQs about relinking"
http://docs.google.com/fileview?id5H46jS7ZPdJNGU0NDljZDktMzUwMC00ZWQ4LWIwZDgtNjFlYzNhMzQyMjg0&hl=en
I had this question before:

6) What if I just did a kernel upgrade (2.6.9-old to 2.6.9-newer), and not a
full OS upgrade (from oel4.4 to 4.6), would I still have to relink? The
kernel upgrade just updates the
hardware modules (/lib/modules) which is not related to the gcc binaries or
libraries used to compile the binaries of Oracle? the /usr/lib/gcc-lib is
not affected when you do a
kernel upgrade?

Answer: If you are just upgrading the kernel, no need to relink. If it's
affecting the system libraries, then you have to relink
}}}


''Related ksplice blogs''
http://blogs.oracle.com/ksplice/entry/solving_problems_with_proc
https://blogs.oracle.com/ksplice/entry/8_gdb_tricks_you_should
https://blogs.oracle.com/ksplice/entry/anatomy_of_a_debian_package



oracle prices 2007-2016
https://www.evernote.com/l/ADC4YXJG_DFL96bzb1ZXNHrZ5bJpmuZw4Mo

http://www.oraclelicensestore.com/ar/licensing/tutorial/licensing-tutorial
http://blog.enkitec.com/wp-content/uploads/2010/06/Randy-Hardee-Oracle-Licensing-Guide.pdf


http://benchmarkingblog.wordpress.com/category/power7/
<<<
(1) An 8-core IBM Power 780 (2 chips, 32 threads) with IBM DB2 9.5 is the best 8-core system (1,200,011 tpmC, $.69/tpmC, configuration available 10/13/10) vs. Oracle Database 11g Release 2 Standard Edition One and Oracle Linux on Cisco UCS c250 M2 Extended-Memory Server, 1,053,100 tpmC, $0.58/tpmC, available 12/7/2011.
Source: www.tpc.org. Results current as of 12/16/11.
TPC-C, TPC-H, and TPC-E are trademarks of the Transaction Processing Performance Council (TPC).
<<<

http://blogs.flexerasoftware.com/elo/oracle-software-licensing/
<<<
The cost of the Enterprise edition is currently $47,500 per processor (core) and the Standard Edition $17,500 per processor (socket). If a server or a cluster is equipped with Intel Xeon E7-8870 Processors, supporting up to 10 cores, the calculation for a 4 socket server or cluster is:

Standard Edition: 4 (sockets) x $17,500 = $70,000
Enterprise Edition: 4 (processors) x 10 (cores/processor) x 0.5 (core factor) x $47,500 = $950,000
<<<
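The list-price arithmetic quoted above can be sketched in shell (the 2012 list prices and the 0.5 Intel x86 core factor are taken from the quote; both change over time, so treat this as an illustration, not a pricing tool):

```shell
# hypothetical 4-socket server with 10-core Intel Xeon E7-8870 CPUs
sockets=4
cores_per_socket=10
se_per_socket=17500       # Standard Edition is licensed per occupied socket
ee_per_core=47500         # Enterprise Edition is licensed per core, before the core factor

se_cost=$(( sockets * se_per_socket ))
# core factor 0.5 == halve the core count; integer math is exact here
ee_cost=$(( sockets * cores_per_socket * ee_per_core / 2 ))

echo "Standard Edition:   \$${se_cost}"
echo "Enterprise Edition: \$${ee_cost}"
```

This reproduces the $70,000 vs $950,000 figures in the quote and makes it obvious why core counts (not just socket counts) drive Enterprise Edition cost.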

http://oraclestorageguy.typepad.com/oraclestorageguy/2011/11/oracle-licensing-on-vmware-no-magic.html
http://www.licenseconsulting.eu/2012/08/29/vmworld-richard-garsthagen-oracle-on-licensing-vmware-virtualized-environments/
http://oraclestorageguy.typepad.com/oraclestorageguy/2012/09/oracle-throws-in-the-towel-on-vmware-licensing-reprise.html
http://www.vmware.com/files/pdf/techpaper/vmw-understanding-oracle-certification-supportlicensing-environments.pdf  ''Understanding Oracle Certification, Support and Licensing for VMware Environments''


! 2021 
<<<

Bug 27213224 - Deploying Exadata Software Fails At - Step 12 (Initializing Cluster Software) (Doc ID 2391108.1)

Apply the following patches, in order, on top of BOTH the GI Home and the Oracle RDBMS Home:

a) 27213224
b) 27309269

OR

You can use the below Workaround.

a) Shutdown CRS on both the nodes.

b) Add the following route on both the nodes :-

   # route add -host 169.254.169.254 reject

c) Bring CRS online on node 1.

d) On node 2 run the root.sh.

Note that the route added in the workaround above is not static; it must be re-added every time before CRS is started.

BUG:27213224 - NODES ARE NOT ABLE JOIN TO GRID INFRASTRCTURE CSSD FAILING WITH NO NETWORK HB
BUG:27424049 - REJECTING CONNECTION FROM NODE X AS MULTINODE RAC IS NOT SUPPORTED OR CERTIFIED
<<<
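The workaround steps above can be sketched as a script (the GI_HOME path is an assumption, and the `RUN=echo` dry-run guard is mine; drop it and run as root on the real nodes):

```shell
# assumed Grid Infrastructure home; adjust to your install
GI_HOME=/u01/app/19.0.0/grid

# dry-run guard: commands are printed, not executed; set RUN="" to execute as root
RUN="echo"

# a) shut down CRS on both nodes
$RUN $GI_HOME/bin/crsctl stop crs

# b) blackhole the cloud metadata address on both nodes;
#    this route is not persistent and must be re-added before every CRS start
$RUN route add -host 169.254.169.254 reject

# c) bring CRS online on node 1 (on node 2, run root.sh instead)
$RUN $GI_HOME/bin/crsctl start crs
```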


! 2021 oracle and non-oracle public cloud 
<<<
Oracle Database Support for Non-Oracle Public Cloud Environments (Doc ID 2688277.1)

For the purposes of this document, Non-Oracle Public Cloud Environments are defined as:

    (a) Non-Oracle Public Clouds. Examples: Google Cloud Platform, Amazon AWS, Microsoft Azure, IBM Cloud, Alibaba Cloud, etc.
    or
    (b) Environments that are in any way considered an extension of Non-Oracle Public Clouds including but not limited to running Non-Oracle cloud management software, cloud billing, cloud support, cloud automation, cloud images, or cloud monitoring. Examples: Google Bare Metal Solution, Amazon AWS Outpost, Microsoft Azure Stack, IBM Bluemix Local, Alibaba Hybrid Cloud, etc.

Support Policy for Non-Oracle Public Cloud Environments

Oracle has not certified any of its products on Non-Oracle Public Cloud Environments. Oracle Support will assist customers running Oracle products on Non-Oracle Public Cloud Environments in the following manner: Oracle will only provide support for issues that either are known to occur on an Oracle Certified Platform outside of a non-Oracle Cloud Environment (Oracle Certification Home), or can be demonstrated not to be as a result of running on a Non-Oracle Public Cloud Environment.

If a problem is a known Oracle issue, Oracle support will recommend the appropriate solution on an Oracle Certified Platform outside of a non-Oracle Cloud Environment. If that solution does not work in the Non-Oracle Public Cloud Environment, the customer will be referred to the Non-Oracle Public Cloud vendor for support. When the customer can demonstrate that the Oracle solution does not work when running on an Oracle Certified Platform outside of a non-Oracle Cloud Environment, Oracle will resume support, including logging a bug with Oracle Development for investigation if required.

If the problem is determined not to be a known Oracle issue, we will refer the customer to the Non-Oracle Public Cloud vendor for support. When the customer can demonstrate that the issue occurs when running on an Oracle Certified Platform outside of a non-Oracle Cloud Environment, Oracle will resume support, including logging a bug with Oracle Development for investigation if required.
Support Policy for Oracle Real Application Clusters (RAC)

Oracle does not support Oracle RAC or Oracle RAC One Node running on Non-Oracle Public Cloud Environments.
<<<



<<<
I worked with Andy Klock a bit on the AWS environment in question, and the installation was being done via some AWS-provided automation code.  The issue that they were seeing was with either performing new installations of 19.9 or patching existing clusters up to 19.9 in the AWS environment.  They could get one node up and running, but as soon as a second node with 19.9 tried to come up, ocssd would spin and eventually time out.  Based on this, I would think that a single node Oracle restart environment would not be affected by this.

 

Now that Frits has found the specific functions, this makes the behavior a little more clear to me. 

 

Since I didn't have the budget to try this out in AWS, I wanted to recreate this and see if I could get the same outcomes.  I built a 3-node RAC environment first, using 19.9.  I'd expect this to be classified by Oracle as the kgcs_is_on_premise.  Here's the entry we see in the ocssd.trc file:

 

[     INFO] clssscGetCloudProvider: Value from OSD ctx: 1, value in global ctx: 1

 

That "1" value matches to the first entry Frits mentioned.  Here, the cluster behaves as expected.

 

Now, to get it to act like AWS.  I found an AWS metadata service simulator (https://github.com/aws/amazon-ec2-metadata-mock) and fired it up using the AWS 169.254.169.254 address.  What I found was that if the metadata service was running and accessible, the ocssd process at cluster startup (on the first node) would be successful and log a return value of 3.  If I tried to start additional nodes, they would fail to start ocssd.  Here's what was reported in the ocssd.trc file on all of the nodes:

 

[     INFO] clssscGetCloudProvider: Value from OSD ctx: 3, value in global ctx: 3

 

The first node would run just fine…I just couldn't start CRS on additional nodes.  Blocking the URL or shutting down the simulator wouldn't change anything at this point, because the cluster recognized that it was on a non-Oracle cloud.  This behavior would continue until I ran a full shutdown on the cluster, then restarted after blocking access to the AWS metadata simulator.  At that point, the cluster reverted to kgcs_is_on_premise mode, which allowed multiple nodes to start successfully.  I believe that what AWS has done to get around this is implement an iptables rule blocking access to 169.254.169.254, but Klock would have to be the one to shed light on that.

 

The big takeaway on my side is that I'd be really apprehensive about pursuing a solution that includes running RAC in AWS.  You're running an unsupported platform in the end.  It looks like a cat and mouse game where Oracle will continue to create ways to block certain functionality from running in other clouds, and you're one new patch away from having a completely broken environment.  The Amazon team did come up with a solution, but it was 2+ months after the 19.9 patch release, which could be a massive issue for clients that have security or regulatory requirements to patch within a certain timeframe.
<<<
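The iptables approach speculated about in the quote above might look like the single rule below (an assumption on my part, not confirmed AWS behavior); with the metadata address unreachable, the clusterware cloud probe gets no answer and the stack stays in on-premise mode:

```shell
# run as root: drop outbound packets to the EC2 instance metadata service
# so ocssd's cloud-provider probe times out instead of returning value 3
iptables -A OUTPUT -d 169.254.169.254 -j DROP
```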






https://blogs.oracle.com/OTNGarage/entry/how_the_oracle_linux_update?utm_source=feedly
http://public-yum.oracle.com/

<<showtoc>>

also see [[mulesoft]] for integration patterns 


! salesforce acquisitions
!! tableau
!! heroku
!! mulesoft



! Salesforce Object Query Language (SOQL) or Salesforce Object Search Language (SOSL)
https://www.google.com/search?q=salesforce+SOQL&oq=salesforce+SOQL&aqs=chrome..69i57j0l7.4767j0j1&sourceid=chrome&ie=UTF-8

! Salesforce example data model 
https://mindmajix.com/creating-data-model-in-salesforce#:~:text=In%20Salesforce%2C%20Data%20modelling%20is,different%20relations%20among%20those%20objects.

https://www.google.com/imgres?imgurl=http%3A%2F%2Fforce365.files.wordpress.com%2F2012%2F11%2Fexample-erd1.jpg&imgrefurl=https%3A%2F%2Faudit9.blog%2F2012%2F11%2F14%2Fsalesforce-custom-erd%2F&tbnid=jZgLxzMl_zNLSM&vet=12ahUKEwijk_D52cDrAhVKEd8KHWX_A3QQMygDegUIARDOAQ..i&docid=6dnzlB20V6RRXM&w=1010&h=780&q=salesforce%20data%20model&ved=2ahUKEwijk_D52cDrAhVKEd8KHWX_A3QQMygDegUIARDOAQ

!! salesforce healthcloud data model 
https://www.google.com/search?q=salesforce+health+cloud+data+model&sxsrf=ALeKk01Cq0EsP3pnLLcMSY-OtAoIB6_b6Q:1598875461581&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjcrbWfs8XrAhXWmXIEHTaCBa8Q_AUoAnoECAwQBA&biw=2117&bih=1217#imgrc=eAbVLXinO0jvxM


!!! enterprise clinical operating model humana
https://www.google.com/search?client=firefox-b-1-d&q=enterprise+clinical+operating+model+humana


!! salesforce integration using SALESFORCE CONNECT
https://www.udemy.com/course/salesforce-integration-with-heroku/



! APEX programming 
https://www.udemy.com/course/salesforce-development-for-beginners/learn/lecture/15666198#overview
https://www.udemy.com/course/salesforce-platform-developer-certification/
https://www.udemy.com/course/salesforce-integration-with-heroku/


! INTEGRATION
!! mulesoft salesforce connector 
https://docs.mulesoft.com/salesforce-connector/10.3/

!! salesforce bigquery data sync 
How we made data sync between Salesforce into BigQuery at NestAway https://medium.com/nestaway-engineering/how-we-made-sync-between-salesforce-into-bigquery-at-nestaway-2ec229359e34






Running Oracle Database in Solaris 10 Containers - Best Practices
  	Doc ID: 	Note:317257.1



-- 2GB limit

http://www.sunsolarisadmin.com/general/ufs-maximum-file-size-2gb-restriction-in-sun-solaris/


-- OS TOOLS , SOLARIS

http://developers.sun.com/solaris/articles/tuning_solaris.html

Get Started With Oracle Restart
http://dbatrain.wordpress.com/2010/08/13/get-started-with-oracle-restart/

Data Guard & Oracle Restart in 11gR2
http://uhesse.wordpress.com/2010/09/

Data Guard & Oracle Restart
http://oracleprof.blogspot.com/2012/08/dataguard-and-oracle-restart-how-to.html

* HIPAA
* FIPPS 
* COPPA
* GDPR
* CPNI 
* Data Breaches 
* Reporting 





-- ORACLE SUPPORT

Working Effectively With Global Customer Support
  	Doc ID: 	166650.1

How To Monitor Bugs / Enhancement Requests through Metalink
  	Doc ID: 	602038.1
''Fast, Modern, Reliable: Oracle Linux'' http://www.oracle.com/us/technologies/linux/uek-for-linux-177034.pdf
<<<
''Features and Performance Improvements''
{{{
Latest Infiniband Stack (OFED) 1.5.1
Receive/Transmit Packet Steering and Receive Flow Steering
Advanced support for large NUMA systems
IO affinity
Improved asynchronous writeback performance
SSD detection
Task Control Groups
Hardware fault management
Power management features
Data integrity features
Oracle Cluster File System 2 (OCFS2)
Latencytop
New fallocate() system call
}}}
<<<

http://www.oraclenerd.com/2011/03/oel-6-virtualbox-guest-additions.html


-- some entries on otn forum saying you need to have ULN subscription
https://forums.oracle.com/forums/thread.jspa?threadID=2146476
https://forums.oracle.com/forums/thread.jspa?threadID=2183312

''Playground'' https://blogs.oracle.com/wim/entry/introducing_the_oracle_linux_playground


! OEL6
https://oss.oracle.com/ol6/
uek2 u3 https://oss.oracle.com/ol6/docs/RELEASE-NOTES-UEK2-QU3-en.html
uek2 u2 https://oss.oracle.com/ol6/docs/RELEASE-NOTES-UEK2-QU2-en.html
uek2 https://oss.oracle.com/ol6/docs/RELEASE-NOTES-UEK2-en.html
6.4 https://oss.oracle.com/ol6/docs/RELEASE-NOTES-U4-en.html
6.3 https://oss.oracle.com/ol6/docs/RELEASE-NOTES-U3-en.html
6.2 https://oss.oracle.com/ol6/docs/RELEASE-NOTES-U2-en.html
6.1 https://oss.oracle.com/ol6/docs/RELEASE-NOTES-U1-en.html
6 https://oss.oracle.com/ol6/docs/RELEASE-NOTES-GA-en.html

! Public mirrors
https://wikis.oracle.com/display/oraclelinux/Downloading+Oracle+Linux

! migrate from rhel to oel
http://linux.oracle.com/switch/




http://www.oracle.com/technetwork/server-storage/vm/ovm3-quick-start-guide-wp-516656.pdf
https://www.youtube.com/watch?v=pD54PTPpvYc


http://www.oracle.com/technetwork/server-storage/vm/ovm3-demo-vbox-1680215.pdf
http://www.oracle.com/technetwork/server-storage/vm/template-1482544.html
https://blogs.oracle.com/linux/entry/friday_spotlight_getting_started_with


Underground Book
http://itnewscast.com/chapter-5-oracle-vm-manager-sizing-and-installation#Oracle_VM_Manager_Introduction

''How to Use Oracle VM Templates'' http://www.oracle.com/technetwork/articles/servers-storage-admin/configure-vm-templates-1656261.html

http://www.freelists.org/post/oracle-l/oracle-orion-tool
http://www.freelists.org/post/oracle-l/ORION,1
https://twiki.cern.ch/twiki/bin/view/PSSGroup/HAandPerf
https://twiki.cern.ch/twiki/bin/view/PSSGroup/SwingBench
http://www.freelists.org/post/oracle-l/ORION-num-disks
https://twiki.cern.ch/twiki/bin/view/PDBService/OrionTests
<<<
* Please see below for the details on how to use Orion to measure IO numbers, in particular the small random IOPS (Orion will measure the maximum IOPS obtained 'at saturation' by submitting hundreds of concurrent async IO requests of 8KB blocks). 

* Sequential IO performance is almost inevitably limited by the HBA speed, typically 400 MB/s, or 800 MB/s when multipathing is used.
<<<

<<<
How to read Orion output and common gotchas
----------------------------------------------------------------------
* The summary file for a simple run will report 3 numbers: Maximum Large MBPS, Maximum Small IOPS, Minimum Small Latency
* Plotting metrics against load in Excel (from the Orion csv files) is a better way to understand and read the results
* Maximum MBPS typically saturates to the HBA speed. For a single ported 4Gbps HBA you will see something less than 400 MBPS. If the HBA is dual ported and you are using multipathing the number should be close to 800 MBPS
* IOPS is the most critical number. That is the measurement of the max number of small IO (8KB, i.e. 1 Oracle block) operations per second that the IO subsystem can sustain. It is similar to what is needed for an OLTP-like workload in Oracle (although Orion uses async IO for these tests, unlike typical RDBMS operations)
* The storage array cache can play a very important role in producing bogus results (tested). The -cache_size parameter in Orion tests should be set appropriately (in MB). If you can, run a test with the array cache disabled.
* Average latency is of little use; latency vs load will instead provide a curve that should be flat for load < # of spindles and then start to grow linearly.
* When running read-only tests on a new system an optimization can kick in where unformatted blocks are read very quickly. I advise running at least one write-only test (that is, with -write 100) on a new system.
<<<
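The 400/800 MB/s figures above follow from Fibre Channel's 8b/10b encoding. A rough sketch of the arithmetic (the function name is ours, not Orion's):

```python
def hba_throughput_mb_s(link_gbps, ports=1):
    """Approximate usable Fibre Channel HBA throughput in MB/s.

    FC uses 8b/10b encoding, so each payload byte costs 10 bits on
    the wire: usable bytes/s ~= line rate in bits/s divided by 10.
    """
    return link_gbps * 1e9 / 10 / 1e6 * ports

# a single-ported 4Gbps HBA saturates near 400 MB/s
print(hba_throughput_mb_s(4))           # 400.0
# dual-ported with multipathing: close to 800 MB/s
print(hba_throughput_mb_s(4, ports=2))  # 800.0
```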


http://husnusensoy.wordpress.com/2009/03/31/orion-io-calibration-over-sas-disks/
<<<
To interpret Figure 4, let's assume our storage array is capable of serving only 8K requests. Any larger request will be chopped into 8K pieces; that means a large IO request corresponds to 125 small IO requests. Assume further that the total capacity of our storage array is 2000 small IOPS. Now by simple division this storage array can yield either 2000 small (8K) IOPS or 16 large (1M) IOPS, or somewhere in between.

So as the number of large IO requesters increases, the total IOPS will decrease.

Now assume that sustaining 1500 IOPS requires 10 ms, and 3000 IOPS requires 20 ms service time on average. While we are sustaining 1500 IOPS, we can either move on the large-requester axis, where an addition of 12 large IOPS takes us to 20 ms latency, or move on the small-requester axis, where an addition of 1500 small IOPS takes us to 20 ms latency (we may also choose a third option somewhere in between). As a result, an increase in large IO results in an increase in service time as well.
<<<
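The tradeoff in the quote can be checked with a few lines, using the post's own numbers (a 1MB request counted as 125 8K pieces, i.e. 1000KB/8KB, on an assumed 2000-small-IOPS array):

```python
SMALL_KB = 8
LARGE_KB = 1000                 # the post treats 1MB as 1000KB -> 125 pieces
CAPACITY_SMALL_IOPS = 2000      # assumed total array capacity from the quote

pieces_per_large = LARGE_KB // SMALL_KB                    # 125
max_large_iops = CAPACITY_SMALL_IOPS // pieces_per_large   # 16

def small_iops_left(large_iops):
    """Small IOPS still available once large requesters take their share."""
    return CAPACITY_SMALL_IOPS - large_iops * pieces_per_large

print(pieces_per_large, max_large_iops)  # 125 16
print(small_iops_left(8))                # 1000: 8 large IOPS eat half the array
```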


http://forums.oracle.com/forums/thread.jspa?messageID=2249899
<<<
If you want to emulate 1MB scans then use this:
-run advanced -type rand -testname mytest -num_disks X -matrix point -num_large Y -num_small 0 -duration 300

where X is the number of physical drives and Y is say 2 or 4 times the number of LUNs. This will give you 2 or 4 outstanding (in-flight) IOs per LUN. You can tweak Y as you see fit based on what you see in iostat.

-- 
Regards,

Greg Rahn
http://structureddata.org
<<<


''Outstanding IO''
http://kevinclosson.wordpress.com/2006/12/11/a-tip-about-the-orion-io-generator-tool/
<<<
"With Orion an outstanding I/O is one issued by io_submit(). You can tune the size of the “flurry” of I/O submitted through io_submit() by tuning outstanding I/O. The way it works is everytime I/O completions are processed Orion issues N number more I/Os where N is the number of completions in the reaped batch. It’s just a way to keep constant pressure on the I/O subsystem."

-- Kevin Closson
<<<
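The mechanism Kevin describes (reap a batch of completions, immediately reissue that many IOs) can be sketched as a toy simulation; this is our illustration of the idea, not Orion's actual code:

```python
import random

def constant_pressure_pump(total_ios, outstanding, seed=0):
    """Toy model of Orion's io_submit() loop: keep up to `outstanding`
    IOs in flight by reissuing exactly as many IOs as each reaped
    batch of completions contained."""
    rng = random.Random(seed)
    in_flight = min(outstanding, total_ios)   # initial flurry
    issued = in_flight
    while issued < total_ios:
        reaped = rng.randint(1, in_flight)    # completions in this batch
        in_flight -= reaped
        refill = min(reaped, total_ios - issued)
        issued += refill
        in_flight += refill
        assert in_flight <= outstanding       # pressure never exceeds the cap
    return issued

print(constant_pressure_pump(10_000, 32))
```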

<<<
Stuart,

I don’t understand how your SAN guys can say there is 2GB bandwidth when you are citing the plumbing for your LPAR is 4x2Gb HBAs. That is 800MB/s. Perhaps they mean the entire SAN array can sustain 2GB because maybe it has a total of 10 active 2Gb ports? I don’t know. All that aside, this can only be one of two things I think. Either a) the LPAR you live in has enough RAM to cache all 5GB of your FS files. This seems reasonable as p595s are some real whoppers or b) Orion is failing silently and calculating as if it is doing I/O.

I recommend you monitor sar -b (breads) and sar -d for physical reads. I think the odds are very good that there is no physical I/O.
<<<


Jim Czuprynski
http://www.databasejournal.com/article.php/2237601
Oracle Database I/O Performance Tuning: Capturing Extra-Database I/O Performance Metrics 
http://www.dbasupport.com/oracle/ora11g/Oracle-Database-11gR2-IO-Tuning03.shtml
! The output files 
[img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/TSLtQC4c9VI/AAAAAAAABAI/IGGccAPt89g/s400/OrionGraph.JPG]]

! Supported types of IO
* Small random IO
* Large sequential IO
* Large random IO
* Mixed workloads

Check this out for the details of IO types http://www.evernote.com/shard/s48/sh/7a7a05d2-d08a-4a0c-ac65-de0d8b119f85/4fe95aeed62bd5c0512db073f468f885
http://eval.veritas.com/webfiles/presentations/oracle/ioug-a_odm.pdf
http://www.slideshare.net/WhizBob/io-micro-preso07


! Answer the following questions to properly configure the database storage
__1) Will the I/O requests be primarily single-block or multi-block?__

''DSS'' - multiblock IO operations (''MBPS''), sequential IO throughput issued by multiple users
* parallel queries
* queries on large tables that require table scans
* direct data loads
* backups
* restores
''OLTP'' - single block IO (''IOPS'')


__2) What is your average and peak IOPS requirement? What percentage of this traffic are writes?__

__3) What is your average and peak throughput (in MBPS) requirement? What percentage of this traffic are writes?__

If your database's IO requests are primarily single-block, focus on ensuring that the storage can accommodate your IO request rate (IOPS);
if multiblock, focus on throughput capacity (MBPS)

! SYSSTAT metrics
''reads''
* single-block reads: physical read total IO requests - physical read total multi block requests
* multi-block reads: physical read total multi block requests
* bytes read: physical read total bytes
''writes''
* single-block writes: physical write total IO requests - physical write total multi block requests
* multi-block writes: physical write total multi block requests
* bytes written: physical write total bytes

__other metrics:__
* redo blocks written: redo blocks written
* redo IO requests: redo writes
* backup IO: in v$backup_async_io and v$backup_sync_io, the IO_COUNT field specifies the number of IO req. and the TOTAL_BYTES field specifies the number of bytes read or written. Note that each row of this view corresponds to a data file, the aggregate over all data files, or the output backup piece. 
* flashback log IO: in v$flashback_database_stat, FLASHBACK_DATA, DB_DATA, and REDO_DATA show the number of bytes read or written from the flashback logs, data files and redo logs, respectively, in the given time interval. In SYSSTAT the "flashback log writes" statistic specifies the number of write IO req. to the flashback log. 
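The single-block/multi-block arithmetic above, worked with hypothetical snapshot values (real values come from V$SYSSTAT; the numbers here are made up):

```python
# hypothetical cumulative counters from a V$SYSSTAT snapshot
stats = {
    "physical read total IO requests": 120_000,
    "physical read total multi block requests": 20_000,
    "physical read total bytes": 3_276_800_000,
    "physical write total IO requests": 50_000,
    "physical write total multi block requests": 5_000,
    "physical write total bytes": 819_200_000,
}

# single-block = total requests minus multi-block requests
single_block_reads = (stats["physical read total IO requests"]
                      - stats["physical read total multi block requests"])
single_block_writes = (stats["physical write total IO requests"]
                       - stats["physical write total multi block requests"])

print(single_block_reads)   # 100000
print(single_block_writes)  # 45000
```

For rates (IOPS/MBPS) you would diff two snapshots and divide by the interval.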

! Data Warehouse and Orion 
''run these multiple IO simulations:''

* __Daily workload__ when end-users and/or other applications query the system: ''read-only workload with possibly many individual parallel IOs''
* __Data Load__, when end-users may or may not access the system: ''write workload with possibly parallel reads'' (by the load program and/or by end-users)
* __Index and materialized view builds__, when end-users may or may not access the system: ''read/write workload''
* __Backups__: ''read workload with likely few other processes, but a possible high degree of parallelism''


In a clustered environment you will have to __invoke Orion in parallel on all nodes__ in order to simulate a clustered workload.
For example, ''a typical Data Warehouse workload'' simulates __4 parallel sessions__ (-num_large 4), each running a statement with a degree of __parallelism of 8__ (-num_streamIO 8), and also simulates __raid0 striping__. The internal disks in this case do not have cache.

{{{
./orion -run advanced \
-testname orion14 \
-matrix point \
-num_small 0 \
-num_large 4 \
-size_large 1024 \
-num_disks 4 \
-type seq \
-num_streamIO 8 \
-simulate raid0 \
-cache_size 0 \
-verbose
}}}

------------------------------------------------------------------------------------------------
''num_large'' (# of parallel sessions) 
''num_streamIO'' (# of PARALLEL hint) increase this parameter in order to simulate parallel execution for individual operations. Specify a DOP that you plan to use for your database operations, a good starting point for DOP is ''# of CPU x Parallel threads per CPU''
------------------------------------------------------------------------------------------------

In other words, the maximum throughput for this specific case with that workload is 57.30 MB/sec. In ideal conditions, Oracle will be able to achieve up to 95% of that number. For this particular case, having __4 parallel sessions__ running the following statement would approach the same throughput:

{{{
select /*+ NO_MERGE(sales) */ count(*)
from
   (select /*+ FULL (s) PARALLEL (s,8) */  *
    from all_sales s) sales
/
}}}

In a well-balanced Data Warehouse hardware config, there is __sufficient IO bandwidth to feed the CPUs__. As a starting point, you can use the __basic rule that ''every GHz of CPU power can drive at least 100 MB/sec''__. E.g., for a single server configuration with four 3GHz CPUs, your storage configuration should at least be able to provide 4*3*100 = 1200 MB/s throughput. __This number should be multiplied by the number of nodes in a RAC configuration__.
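The rule of thumb as arithmetic (mb_per_ghz is the 100 MB/s heuristic from the text, not a measured constant):

```python
def min_io_bandwidth_mb_s(cpus_per_node, ghz, nodes=1, mb_per_ghz=100):
    """Rule of thumb: every GHz of CPU power should be fed at least
    ~100 MB/s of IO bandwidth; multiply across RAC nodes."""
    return cpus_per_node * ghz * mb_per_ghz * nodes

# single server, four 3GHz CPUs -> 1200 MB/s
print(min_io_bandwidth_mb_s(4, 3))
# the same box as a 2-node RAC -> 2400 MB/s
print(min_io_bandwidth_mb_s(4, 3, nodes=2))
```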


! Some Orion command errors
Can only specify -num_streamIO with -type seq
Can only specify -stripe with -simulate RAID0
count (this is num_large) * nstream must be < 2048
Must specify -num_small and cannot specify -num_large when specified -matrix col
{{{
Orion does support filesystems and has done so for years.  All you have
to do is create a file that is a multiple of the block size you'll be
testing, e.g.:

Create a 4GB file:

dd if=/dev/zero of=/u01/oracle/mytest.dbf bs=8k count=524288

Then, put this file in your test.lun file:

>cat mytest.lun
/u01/oracle/mytest.dbf

Then, run orion:

orion -run simple -testname mytest -num_disks 1

Regards,

Brandon
}}}
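Brandon's three steps can also be scripted; a sketch that prepares a (deliberately small) test file and .lun file, then only prints the orion command rather than running it. The paths and the 16MB size are placeholders; for a real test use a file much larger than RAM, like the 4GB file above:

```python
import os

testname = "mytest"
test_file = "mytest.dbf"   # stand-in for /u01/oracle/mytest.dbf
size_mb = 16               # placeholder; the email uses 4GB (bs=8k count=524288)

# 1) create a file that is a multiple of the 8K block size (like dd)
with open(test_file, "wb") as f:
    f.write(b"\0" * (size_mb * 1024 * 1024))
assert os.path.getsize(test_file) % 8192 == 0

# 2) list it in <testname>.lun, one path per line
with open(testname + ".lun", "w") as f:
    f.write(test_file + "\n")

# 3) the command you would then run against it
print("orion -run simple -testname %s -num_disks 1" % testname)
```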

{{{
From my experience, it seems that all num_disks does is increase the max
load orion will run up to when you run with "-run simple/normal", or
"-matrix basic/detailed" tests, for example, with num_disks 1 on a
simple run, it will perform tests of single-block IOs at loads of
1,2,3,4,5 and then multi-block IOs at loads of 1 & 2.  If you increase
to num_disks 2, then it will run single-block IOs at loads
1,2,3,4,5,6,7,8,9,10 and multi-block at 1,2,3,4, and it just keeps going
higher as you continue to increase num_disks.  Beware it also takes much
longer since each run takes 1 minute by default, however with the larger
num_disks values, it does begin to skip data points, so, for example,
instead of doing every point between 1-20, it will do something like
1,2,4,6,8,10,12,16,20.  

In the case of an advanced run like you have below with a specific point
of 45 large IOs and 0 small IOs, I don't think the num_disks parameter
does anything, but please let me know if I'm wrong.

Thanks,
Brandon
}}}
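Brandon's observed scaling (an empirical observation from his mail, not documented Orion behavior) can be written down as:

```python
def simple_run_load_points(num_disks):
    """Per the observation above: a simple run tests small IOs at
    loads 1..5*num_disks and large IOs at loads 1..2*num_disks
    (Orion starts skipping intermediate points at higher num_disks)."""
    small = list(range(1, 5 * num_disks + 1))
    large = list(range(1, 2 * num_disks + 1))
    return small, large

print(simple_run_load_points(1))  # ([1, 2, 3, 4, 5], [1, 2])
print(simple_run_load_points(2))  # small up to 10, large up to 4
```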

{{{
No prob - I know the documentation doesn't make it very clear.  One
thing to be careful with - most filesystems are cached, so you'll
probably get unbelievably good numbers from Orion.  The way I usually
work around this is to create files for testing that are much larger than
my RAM, and clear the OS buffer cache prior to testing.  You could also
try playing with the cache_size parameter for Orion, but that never
seemed to do much for me.  Hopefully in a future version of orion,
they'll support using directio on a filesystem where supported by the
OS, just like the Oracle database does (e.g.
filesystemio_options=directio).

One more thing to beware of - if you configure orion to run write tests
(it does read-only by default with the simple/normal type tests), it
will destroy any data in the specified test files - so make sure you
don't have it pointed to anything you want to keep, like a real Oracle
datafile.

Regards,

Brandon
}}}


{{{
I believe num_disks has to do with the number of I/O threads that are
spawned and num_large has to do with the number of outstanding I/Os
that are targeted to be issued.

For what I use the tool for (I/O bandwidth testing) I generally run
the sequential workload to get a best possible data point and then use
the rand workload to get numbers closer to what a PQ workload would
be.


On Fri, Sep 12, 2008 at 10:47 AM, Allen, Brandon
<Brandon.Allen@xxxxxxxxxxx> wrote:
> In the case of an advanced run like you have below with a specific point
> of 45 large IOs and 0 small IOs, I don't think the num_disks parameter
> does anything, but please let me know if I'm wrong.

-- 
Regards,
Greg Rahn
http://structureddata.org

}}}

{{{
You could be right, I'm really not sure and have just come to most of my
current conclusions through trial and error.  One thing I've noticed, at
least on Linux (OEL4 & 5) is that orion seems to return pretty
consistent results regardless of how high I push the load for a single
execution, e.g., even if I run with num_small 50 (I usually focus more
on IOPS since I work with OLTP systems) and/or num_disks 50, I'll get
about the same throughput as if I run with 5 or 10.  I also never see it
spawn multiple processes/threads at the OS level, so it seems to just be
doing AIO from a single process.  I've found that I can push the system
much harder if I run multiple orion processes concurrently, so what I'll
usually do is something like this:

1) Create four 4GB files with dd
2) Create four lun files, e.g. test1.lun, test2.lun, test3.lun and
test4.lun, each pointing to 1 of the 4 test files I created
3) Put four orion commands in a script like this to run four orion
commands in the background:
        orion -run advanced -matrix point -num_large 0 -num_small 5
-testname mytest1 -num_disks 1 &
        orion -run advanced -matrix point -num_large 0 -num_small 5
-testname mytest2 -num_disks 1 &
        orion -run advanced -matrix point -num_large 0 -num_small 5
-testname mytest3 -num_disks 1 &
        orion -run advanced -matrix point -num_large 0 -num_small 5
-testname mytest4 -num_disks 1 &
4) Run the script

I'll repeat the above test, increasing the number of concurrent
executions until I find the peak performance.  Maybe I'm just doing
something wrong with the standard load-setting parameters, but this
seems to be the only way I can get orion to max out my systems.

}}}
{{{
Normally it is set to the number of physical drives, but it can be
adjusted higher or lower depending on how much load you want to drive.

Here are a couple of command lines and summaries that I used on a Sun
Thumper (http://www.sun.com/servers/x64/x4500/) for testing I/O
bandwidth using 1MB reads for a data warehouse workload.

-run advanced -type seq -testname thumper_seq -num_disks 45 -matrix
point -num_large 45 -num_small 0 -num_streamIO 16 -disk_start 0
-disk_end 150 -cache_size 0
Maximum Large MBPS=2668.88 @ Small=0 and Large=45

-run advanced -type rand -testname thumper_rand -num_disks 180 -matrix
point -num_large 720 -num_small 0 -duration 60 -disk_start 0 -disk_end
150 -cache_size 0
Maximum Large MBPS=1758.35 @ Small=0 and Large=720

}}}

http://husnusensoy.wordpress.com/2009/03/31/orion-io-calibration-over-sas-disks/
{{{
[oracle@consol10g orion]$ cat mytest.lun
/dev/dm-2
/dev/dm-3
/dev/dm-4
/dev/dm-5
/dev/dm-6
}}}
Small Random & Large Sequential Read Load
{{{
[oracle@consol10g orion]$ ./orion_lnx -run advanced -testname mytest -num_disks 40 -simulate raid0 -write 0 -type seq -matrix basic -cache_size 67108864 -verbose
}}}
Mixed Read Load
{{{
[oracle@consol10g orion]$ ./orion_lnx -run advanced -testname mytest -num_disks 40 -simulate raid0 -write 0 -type seq -matrix detailed -cache_size 67108864 -verbose
}}}


For MySQL DW
http://www.pythian.com/news/15161/determining-io-throughput-for-a-system/
{{{
./orion -run advanced -testname mytest -num_small 0 -size_large 1024 -type rand -simulate concat -write 0 -duration 60 -matrix col

-num_small is 0 because you don't usually do small transactions in a DW.
-type rand for random IOs, because data warehouse queries usually don't do sequential reads
-write 0 - no writes, because you do not write often to the DW; that is what the ETL is for.
-duration is in seconds
-matrix col shows you how much load you can sustain
}}}
{{{
run			Type of workload to run (simple, normal, advanced, dss, oltp)
			simple - tests random 8K small IOs at various loads,
				 then random 1M large IOs at various loads.
			normal - tests combinations of random 8K small
				 IOs and random 1M large IOs
			advanced - run the workload specified by the user
				   using optional parameters
			dss - run with random 1M large IOs at increasing loads
				to determine the maximum throughput
			oltp - run with random 8K small IOs at increasing loads
				to determine the maximum IOPS
Optional parameters:
testname		Name of the test run
num_disks			Number of disks (physical spindles). Default is
			the number of LUNs in <testname>.lun
size_small		Size of small IOs (in KB) - default 8
size_large		Size of large IOs (in KB) - default 1024

type			Type of large IOs (rand, seq) - default rand
			  rand - Random large IOs
			  seq -  Sequential streams of large IOs
num_streamIO		Number of concurrent IOs per stream (only if type is
			seq) - default 4
simulate		Orion tests on a virtual volume formed by combining the
			provided volumes in one of these ways (default concat):
			  concat - A serial concatenation of the volumes
			  raid0 - A RAID-0 mapping across the volumes

write			Percentage of writes (SEE WARNING ABOVE) - default 0

cache_size		Size *IN MEGABYTES* of the array's cache.
			Unless this option is set to 0, Orion does a number
			of (unmeasured) random IO before each large sequential
			data point.  This is done in order to fill up the array
			cache with random data.  This way, the blocks from one
			data point do not result in cache hits for the next
			data point.  Read tests are preceded with junk reads
			and write tests are preceded with junk writes.  If
			specified, this 'cache warming' is done until
			cache_size worth of IO has been read or written.
			Default behavior: fill up cache for 2 minutes before
			each data point.

duration		Duration of each data point (in seconds) - default 60

num_small		Number of outstanding small IOs (only if matrix is
			point, col, or max) - no default
num_large		For random, number of outstanding large IOs.
			For sequential, number of streams (only if matrix is
			point, row, or max) - no default

matrix			An Orion test consists of data points at various small
			and large IO load levels.  These points can be
			represented as a two-dimensional matrix: Each column
			in the matrix represents a fixed small IO load.  Each
			row represents a fixed large IO load.  The first row
			is with no large IO load and the first column is with
			no small IO load.  An Orion test can be a single point,
			a row, a column or the whole matrix, depending on the
			matrix option setting below (default basic):
			  basic - test the first row and the first column
			  detailed - test the entire matrix
			  point - test at load level num_small, num_large
			  col - varying large IO load with num_small small IOs
			  row - varying small IO load with num_large large IOs
			  max - test varying loads up to num_small, num_large

verbose			Prints tracing information to standard output if set.
			Default -- not set
ORION runs IO performance tests that model Oracle RDBMS IO workloads.
It measures the performance of small (2-32K) IOs and large (128K+) IOs
at various load levels.  Each Orion data point is done at a specific
mix of small and large IO loads sustained for a duration.  Anywhere
from a single data point to a two-dimensional array of data points can
be tested by setting the right options.

An Orion test consists of data points at various small and large IO
load levels.  These points can be represented as a two-dimensional
matrix: Each column in the matrix represents a fixed small IO load.
Each row represents a fixed large IO load.  The first row is with no
large IO load and the first column is with no small IO load.  An Orion
test can be a single point, a row, a column or the whole matrix.

The 'run' parameter is the only mandatory parameter. Defaults
are indicated for all other parameters.  For additional information on
the user interface, see the Orion User Guide.

<testname> is a filename prefix.  By default, it is "orion".  It can be 
specified with the 'testname' parameter.

<testname>.lun should contain a carriage-return-separated list of LUNs
The output files for a test run are prefixed by <testname>_<date> where
date is "yyyymmdd_hhmm".

The output files are: 
<testname>_<date>_summary.txt -  Summary of the input parameters along with 
				 min. small latency, max large MBPS 
				 and/or max. small IOPS.
<testname>_<date>_mbps.csv - Performance results of large IOs in MBPS
<testname>_<date>_iops.csv - Performance results of small IOs in IOPS
<testname>_<date>_lat.csv - Latency of small IOs
<testname>_<date>_tradeoff.csv - Shows large MBPS / small IOPS 
				 combinations that can be achieved at 
				 certain small latencies
<testname>_trace.txt - Extended, unprocessed output

WARNING: IF YOU ARE PERFORMING WRITE TESTS, BE PREPARED TO LOSE ANY DATA STORED
ON THE LUNS.

Examples
For a preliminary set of data
	-run simple 
For a basic set of data
	-run normal 
To evaluate storage for an OLTP database
	-run oltp 
To evaluate storage for a data warehouse
	-run dss 
To generate combinations of 32KB and 1MB reads to random locations: 
	-run advanced 
	-size_small 32 -size_large 1024 -type rand	-matrix detailed
To generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes
	-run advanced 
	-simulate RAID0 -stripe 1024 -write 100 -type seq
	-matrix col -num_small 0
}}}
Here's the HD used
Barracuda 7200 SATA 3Gb/s (375MB/s) interface 1TB Hard Drive
http://www.seagate.com/ww/v/index.jsp?vgnextoid=20b92d0ca8dce110VgnVCM100000f5ee0a0aRCRD#tTabContentOverview

see also LVMalaASM

I have also created a simple toolkit to check whether the existing storage subsystem meets the application's storage performance requirements; it runs the following test scenarios:
--      params_dss_randomwrites
--      params_dss_seqwrites
--      params_dss_randomreads
--      params_dss_seqreads
--      params_oltp_randomwrites
--      params_oltp_seqwrites
--      params_oltp_randomreads
--      params_oltp_seqreads
--      params_dss
--      params_oltp
Get the toolkit here http://karlarao.wordpress.com/scripts-resources/ named ''oriontoolkit.zip''

! Following is the summary of the Orion runs:
------------------------------------------
{{{
+++1 - a run on one datafile created on a filesystem, this is on VMWARE.. mysteriously giving optimistic results
+++2 - a run on the four 1TB hard disks, compare the numbers on the short stroked values!!! whew! way too low!
+++3 - a run on four 1TB hard disks.. but num_disk is 8
+++4 - cool, a raw short-stroked partition (3 GB each disk) not on LVM gives the same performance as the LVM magic! but I notice less IO%, which could be because there is no LVM layer
+++5 - short stroked 4 disks, applied the LVM stripe script trick and turned it into 1 piece of 12 GB LVM
+++6 - a simple orion benchmark on one disk.. not really impressive..
+++7 - 2nd run of a simple orion benchmark on one disk! but this time num_disk = 4
+++8 - 3rd run, this time num_disk = 8
+++9 - 4th run, this time num_disk = 16
+++10 - 5th run, this time num_disk = 32
+++11 - 6th run, this time num disk 64
+++12 - 7th run, this time num disk 128
+++13 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 4, 285.10 MBPS
+++14 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 8, 262.08 MBPS
+++15 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 16, 217.93 MBPS
+++16 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 24, 198.45 MBPS
+++17 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 32, 194.99 MBPS
+++18 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 64, 184.84 MBPS
+++19 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 128, 154.78 MBPS
+++20 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 256, 165.18 MBPS
+++21 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 256, 162.33 MBPS
+++22 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 1, 458.25 MBPS
+++23 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 2, 294.11 MBPS
+++24 DW sequential run, matrix col, raid 0, cache 0, streamio 8, large 0-9, 457.89 MBPS
+++31 DW sequential run, matrix point, duration 300, raid 0, cache 0, streamio 8, large 8, 256.72 MBPS
+++25 FAIL, run normal
+++26 run OLTP, 487 IOPS, 19.99ms lat
+++27 run DSS, 181.19 MBPS
+++28 FAIL, generate combinations of 32KB and 1MB reads to random locations, 340 IOPS, 40 MBPS
+++30 Greg Rahn - emulate 1MB random scans, matrix point, duration 300, CONCAT, cache NE, streamio N/A, large 8, 139.28 MBPS
+++44 Greg Rahn - emulate 1MB random scans, matrix point, duration 300, CONCAT, cache 0, streamio N/A, large 8, 138.71 MBPS
+++45 Greg Rahn - emulate 1MB random scans, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 8, 138.47 MBPS
+++46 Greg Rahn - emulate 1MB random scans, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 256, 151.89 MBPS <<< RANDOM READS 
+++52 Greg Rahn -             random scans, matrix point, duration 60, raid 0, cache 0, streamio N/A, large 720, 160.88 MBPS
+++53 Greg Rahn -             random scans, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 720, 151.22 MBPS <<<
+++32 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, CONCAT, cache NE, streamio 4, large 8, 440.50 MBPS
+++33 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, CONCAT, cache 0, streamio 4, large 8, 441.24 MBPS
+++34 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 4, large 8, 221.29 MBPS
+++35 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 8, large 8, 254.70 MBPS
+++36 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 8, large 256, 157.62 MBPS
+++37 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 3600, RAID0, cache 0, streamio 8, large 256, 159.09 MBPS <<< SEQUENTIAL READS
+++38 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, RAID0, cache NE, streamio 8, large 256, 157.65 MBPS
+++51 Greg Rahn -              sequential scans, matrix point, duration 60, RAID0, cache 0, streamio 16, large 45, 347.92 MBPS
+++54 Greg Rahn -              sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 16, large 45, 358.31 MBPS
+++55 Greg Rahn -              sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 32, large 45, 359.01 MBPS
+++56 Greg Rahn -              sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 45, large 45, 352.05 MBPS <<<
+++49 generate multiple random 1MB write streams, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 8, 147.55 MBPS
+++50 generate multiple random 1MB write streams, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 256, 109.53 MBPS <<< RANDOM WRITES
+++57 generate multiple random 1MB write streams, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 720, 107.31 MBPS <<<
+++47 generate multiple sequential 1MB write streams, matrix col, duration 60, CONCAT, cache NE, streamio 4, large 1-8, 421.80 MBPS
+++29 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix col, cache NE, streamio 4, large 1-8, 370.14 MBPS
+++39 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix col, cache 0, streamio 4, large 1-8, 369.68 MBPS
+++48 generate multiple sequential 1MB write streams, matrix point, duration 60, CONCAT, cache 0, streamio 8, large 8, 419.17 MBPS
+++40 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix col, cache 0, streamio 8, large 1-8, 387.46 MBPS
+++41 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix point, duration 60, cache 0, streamio 8, large 8, 251.69 MBPS
+++42 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix point, duration 300, cache 0, streamio 8, large 8, 249.08 MBPS
+++43 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix point, duration 300, cache 0, streamio 8, large 256, 106.62 MBPS <<< SEQUENTIAL WRITES
+++58 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix point, duration 300, cache 0, streamio 45, large 45, 165.57 MBPS <<<
+++59 FAIL, husnu matrix basic (seq and iops test), stopped at point4, 211.56 MBPS, 365 IOPS, 54.77 lat
+++60 FAIL, husnu matrix detailed (seq and iops test), stopped at point24 out of 189, no MBPS, 370 IOPS, 53.91 lat
+++61 SINGLE DISK RUN seq matrix point, num large 256, streamio 8, raid 0, cache0, duration 300, 53.20 MBPS
+++62 MULTIPLE ORION (4) SESSION RUN seq matrix point, num large 256, streamio 8, raid 0, cache0, duration 300, on the OS, around 200 MBPS
+++63 IOPS - read (random, seq), write (random, seq) 
     +++ observations: seems like when you do OLTP runs, the collectl-all outputs the wsec/s (sector writes) and not the IOPS write.. 
     +++ I've checked it with the iostat output
     +++ params_oltp_randomwrites Maximum Small IOPS=309 @ Small=256 and Large=0 Minimum Small Latency=825.24 @ Small=256 and Large=0
     +++ params_oltp_seqwrites Maximum Small IOPS=312 @ Small=256 and Large=0 Minimum Small Latency=818.04 @ Small=256 and Large=0
     +++ params_oltp_randomreads Maximum Small IOPS=532 @ Small=256 and Large=0 Minimum Small Latency=480.28 @ Small=256 and Large=0
     +++ params_oltp_seqreads Maximum Small IOPS=527 @ Small=256 and Large=0 Minimum Small Latency=485.31 @ Small=256 and Large=0
     +++ params_oltp Maximum Small IOPS=481 @ Small=80 and Large=0 Minimum Small Latency=20.34 @ Small=4 and Large=0
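     +++ on the wsec/s observation above: collectl's wsec/s counts 512-byte sectors, so it has to be scaled by the average request size before it can be compared with iostat's write IOPS. A minimal conversion sketch (the 5000 wsec/s figure is hypothetical, not from these runs):

```python
SECTOR_BYTES = 512  # Linux block-layer sector size used by wsec/s


def wsec_to_write_iops(wsec_per_s, avg_request_kb):
    """Convert collectl's wsec/s into write IOPS, given the average
    write request size (in KB, e.g. from iostat's avgrq-sz)."""
    sectors_per_io = avg_request_kb * 1024 / SECTOR_BYTES
    return wsec_per_s / sectors_per_io


def wsec_to_mbps(wsec_per_s):
    """Convert wsec/s into MB/s of write throughput."""
    return wsec_per_s * SECTOR_BYTES / (1024 * 1024)


# Example: 8 KB ORION small writes arriving at a hypothetical 5000 wsec/s
print(wsec_to_write_iops(5000, 8))  # 312.5 write IOPS
print(wsec_to_mbps(5000))           # ~2.44 MB/s
```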
+++64 FAIL, increasing random writes
+++65 FULL run of oriontoolkit
     +++ params_dss_randomwrites Maximum Large MBPS=108.17 @ Small=0 and Large=256
     +++ params_dss_seqwrites Maximum Large MBPS=111.59 @ Small=0 and Large=256
     +++ params_dss_randomreads Maximum Large MBPS=148.50 @ Small=0 and Large=256
     +++ params_dss_seqreads Maximum Large MBPS=156.24 @ Small=0 and Large=256
     +++ params_oltp_randomwrites Maximum Small IOPS=312 @ Small=256 and Large=0 Minimum Small Latency=816.17 @ Small=256 and Large=0
     +++ params_oltp_seqwrites Maximum Small IOPS=314 @ Small=256 and Large=0 Minimum Small Latency=812.39 @ Small=256 and Large=0
     +++ params_oltp_randomreads Maximum Small IOPS=530 @ Small=256 and Large=0 Minimum Small Latency=482.69 @ Small=256 and Large=0
     +++ params_oltp_seqreads Maximum Small IOPS=526 @ Small=256 and Large=0 Minimum Small Latency=486.29 @ Small=256 and Large=0
     +++ params_dss Maximum Large MBPS=177.65 @ Small=0 and Large=32
     +++ params_oltp Maximum Small IOPS=480 @ Small=80 and Large=0 Minimum Small Latency=20.42 @ Small=4 and Large=0
+++66 Short-stroked disks, 150GB used of 1000GB
     +++ params_dss_randomwrites Maximum Large MBPS=151.57 @ Small=0 and Large=256
     +++ params_dss_seqwrites Maximum Large MBPS=163.09 @ Small=0 and Large=256
     +++ params_dss_randomreads Maximum Large MBPS=192.11 @ Small=0 and Large=256
     +++ params_dss_seqreads Maximum Large MBPS=207.77 @ Small=0 and Large=256
     +++ params_oltp_randomwrites Maximum Small IOPS=431 @ Small=256 and Large=0 Minimum Small Latency=592.28 @ Small=256 and Large=0
     +++ params_oltp_seqwrites Maximum Small IOPS=427 @ Small=256 and Large=0 Minimum Small Latency=597.92 @ Small=256 and Large=0
     +++ params_oltp_randomreads Maximum Small IOPS=792 @ Small=256 and Large=0 Minimum Small Latency=323.08 @ Small=256 and Large=0
     +++ params_oltp_seqreads Maximum Small IOPS=794 @ Small=256 and Large=0 Minimum Small Latency=322.24 @ Small=256 and Large=0
     +++ params_dss Maximum Large MBPS=216.53 @ Small=0 and Large=28
     +++ params_oltp Maximum Small IOPS=711 @ Small=80 and Large=0 Minimum Small Latency=14.32 @ Small=4 and Large=0
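     +++ the short-stroking gain can be quantified directly from the maxima recorded in runs +++65 (full 1 TB disks) and +++66 (150 GB short-stroked) above:

```python
# Results copied from runs +++65 (full disks) and +++66 (150 GB short-stroked)
full = {"dss_mbps": 177.65, "oltp_iops": 480, "oltp_lat_ms": 20.42}
short = {"dss_mbps": 216.53, "oltp_iops": 711, "oltp_lat_ms": 14.32}

# For MBPS and IOPS higher is better; for latency lower is better (ratio < 1)
for metric in full:
    ratio = short[metric] / full[metric]
    print(f"{metric}: {full[metric]} -> {short[metric]} ({ratio:.2f}x)")

# Only the outer fraction of each platter is used (150 GB of 1000 GB),
# which shortens average seeks and keeps IO on the fastest outer tracks.
print(f"stroke fraction: {150 / 1000:.0%}")
```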
+++ a short-stroked single disk
+++ create a regression on OLTP Write and DSS Write
}}}



! Following are the details of the Orion runs:
------------------------------------------
{{{
#################################################################################################################
drwxr-xr-x 2 oracle oracle 4096 Jul  8 12:23 OrionTest1
+++1 - a run on one datafile created on a filesystem; this is on VMware, and it gives suspiciously optimistic results
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 1 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2
Total Data Points: 8

Name: /home/oracle/mytest.dbf   Size: 4294967296
1 FILEs found.

Maximum Large MBPS=181.83 @ Small=0 and Large=2
Maximum Small IOPS=1377 @ Small=5 and Large=0
Minimum Small Latency=0.79 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root   root   4096 Jul 13 20:42 OrionTest2
+++2 - a run on the four 1TB hard disks; compare these numbers with the short-stroked values: way too low!
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 4 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8
Total Data Points: 29

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=143.28 @ Small=0 and Large=8
Maximum Small IOPS=387 @ Small=20 and Large=0
Minimum Small Latency=13.61 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root   root   4096 Jul 14 12:25 OrionTest3
+++3 - a run on the four 1TB hard disks, but with num_disks set to 8
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 8 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8,      9,     10,     11,     12,     13,     14,     15,     16
Total Data Points: 38

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=171.55 @ Small=0 and Large=16
Maximum Small IOPS=456 @ Small=40 and Large=0
Minimum Small Latency=13.63 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root   root   4096 Jul 16 07:42 OrionTest4
+++4 - cool, a raw short-stroked partition (3 GB on each disk), without LVM, matches the performance of the LVM magic! I also notice lower IO%, possibly because there is no LVM layer
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 4 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8
Total Data Points: 29

Name: /dev/sdb1 Size: 3257178624
Name: /dev/sdc1 Size: 3257178624
Name: /dev/sdd1 Size: 3257178624
Name: /dev/sde1 Size: 3257178624
4 FILEs found.

Maximum Large MBPS=232.07 @ Small=0 and Large=8
Maximum Small IOPS=954 @ Small=20 and Large=0
Minimum Small Latency=6.62 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root   root   4096 Dec 20 18:13 OrionTest5
+++5 - short-stroked 4 disks; applied the LVM stripe script trick to turn them into a single 12 GB LVM volume
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 4 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8
Total Data Points: 29

Name: /dev/vgshortstroke/shortstroke    Size: 13514047488
1 FILEs found.

Maximum Large MBPS=232.00 @ Small=0 and Large=8
Maximum Small IOPS=942 @ Small=20 and Large=0
Minimum Small Latency=6.61 @ Small=1 and Large=0
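The "LVM stripe script trick" mentioned above is not shown in the log; a hypothetical reconstruction of the setup, using the partition names from OrionTest4 and the volume name from the output above (stripe size assumed at 1 MB to match the simulated stripe depth; must be run as root):

```shell
# Take the 3 GB short-stroke partitions at the start (outer tracks) of each
# disk and stripe them into one logical volume: 4 stripes, 1 MB stripe size.
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
vgcreate vgshortstroke /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
lvcreate -i 4 -I 1024 -l 100%FREE -n shortstroke vgshortstroke
# Resulting device matches the test target: /dev/vgshortstroke/shortstroke
```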
#################################################################################################################
drwxr-xr-x 2 root   root   4096 Dec 21 16:59 OrionTest6
+++6 - a simple Orion benchmark on one disk; not really impressive
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 1 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2
Total Data Points: 8

Name: /dev/sdb  Size: 1000204886016
1 FILEs found.

Maximum Large MBPS=42.64 @ Small=0 and Large=2
Maximum Small IOPS=103 @ Small=5 and Large=0
Minimum Small Latency=13.62 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root   root   4096 Dec 21 18:15 OrionTest7
+++7 - 2nd run of the simple Orion benchmark on the same single disk, but this time num_disks = 4
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 4 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8
Total Data Points: 29

Name: /dev/sdb  Size: 1000204886016
1 FILEs found.

Maximum Large MBPS=52.80 @ Small=0 and Large=8
Maximum Small IOPS=135 @ Small=20 and Large=0
Minimum Small Latency=13.67 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root   root   4096 Dec 21 19:19 OrionTest8
+++8 - 3rd run, this time num_disks = 8
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 8 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8,      9,     10,     11,     12,     13,     14,     15,     16
Total Data Points: 38

Name: /dev/sdb  Size: 1000204886016
1 FILEs found.

Maximum Large MBPS=56.74 @ Small=0 and Large=16
Maximum Small IOPS=148 @ Small=36 and Large=0
Minimum Small Latency=13.57 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root   root   4096 Dec 21 20:36 OrionTest9
+++9 - 4th run, this time num_disks = 16
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 16 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      4,      6,      8,     10,     12,     14,     16,     18,     20,     22,     24,     26,     28,     30,     32
Total Data Points: 41

Name: /dev/sdb  Size: 1000204886016
1 FILEs found.

Maximum Large MBPS=56.62 @ Small=0 and Large=18
Maximum Small IOPS=154 @ Small=80 and Large=0
Minimum Small Latency=13.62 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root   root   4096 Dec 22 14:04 OrionTest10
+++10 - 5th run, this time num_disks = 32
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 32 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,     10,     15,     20,     25,     30,     35,     40,     45,     50,     55,     60
Total Data Points: 44

Name: /dev/sdb  Size: 1000204886016
1 FILEs found.

Maximum Large MBPS=56.11 @ Small=0 and Large=15
Maximum Small IOPS=159 @ Small=128 and Large=0
Minimum Small Latency=13.69 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root   root    4096 Dec 22 16:51 OrionTest11
+++11 - 6th run, this time num_disks = 64
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 64 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8,      9,     10,     20,     30,     40,     50,     60,     70,     80,     90,    100,    110,    120
Total Data Points: 57

Name: /dev/sdb  Size: 1000204886016
1 FILEs found.

Maximum Large MBPS=55.89 @ Small=0 and Large=20
Maximum Small IOPS=160 @ Small=272 and Large=0
Minimum Small Latency=13.65 @ Small=1 and Large=0
#################################################################################################################
drwxr-xr-x 2 root   root    4096 Dec 23 14:30 OrionTest12
+++12 - 7th run, this time num_disks = 128
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run simple -testname mytest -num_disks 128 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8,      9,     10,     11,     12,     13,     14,     15,     16,     17,     18,     19,     20,     21,     42,     63,     84,    105,    126,    147,    168,    189,    210,    231,    252
Total Data Points: 84

Name: /dev/sdb  Size: 1000204886016
1 FILEs found.

Maximum Large MBPS=56.41 @ Small=0 and Large=18
Maximum Small IOPS=160 @ Small=352 and Large=0
Minimum Small Latency=13.61 @ Small=1 and Large=0
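Runs +++6 through +++12 above sweep -num_disks from 1 to 128 against the same single physical disk. Tabulating the recorded maxima shows the IOPS curve flattening almost immediately: the larger num_disks values only add outstanding IOs (queue depth), not spindles.

```python
# Maximum Small IOPS per -num_disks setting, copied from runs +++6..+++12
iops_by_num_disks = {1: 103, 4: 135, 8: 148, 16: 154, 32: 159, 64: 160, 128: 160}

peak = max(iops_by_num_disks.values())
for n, iops in sorted(iops_by_num_disks.items()):
    print(f"num_disks={n:3d}: {iops} IOPS ({iops / peak:.0%} of peak)")

# Past num_disks=8 the single spindle is saturated: under 10% further gain.
print(f"gain from num_disks 8 -> 128: {160 / 148 - 1:.1%}")
```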
#################################################################################################################
+++13 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 4, 285.10 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 4 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      4
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=285.10 @ Small=0 and Large=4
#################################################################################################################
+++14 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 8, 262.08 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 8 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      8
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=262.08 @ Small=0 and Large=8
#################################################################################################################
+++15 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 16, 217.93 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 16 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,     16
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=217.93 @ Small=0 and Large=16
#################################################################################################################
+++16 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 24, 198.45 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 24 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,     24
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=198.45 @ Small=0 and Large=24
#################################################################################################################
+++17 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 32, 194.99 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 32 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,     32
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=194.99 @ Small=0 and Large=32
#################################################################################################################
+++18 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 64, 184.84 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 64 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,     64
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=184.84 @ Small=0 and Large=64
#################################################################################################################
+++19 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 128, 154.78 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 128 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,    128
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=154.78 @ Small=0 and Large=128
#################################################################################################################
+++20 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 256, 165.18 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 256 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,    256
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=165.18 @ Small=0 and Large=256
#################################################################################################################
+++21 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 256, 162.33 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 256 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,    256
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=162.33 @ Small=0 and Large=256
#################################################################################################################
+++22 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 1, 458.25 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 1 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      1
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=458.25 @ Small=0 and Large=1
#################################################################################################################
+++23 DW sequential run, matrix point, duration 60, raid 0, cache 0, streamio 8, large 2, 294.11 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix point -num_small 0 -num_large 2 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      2
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=294.11 @ Small=0 and Large=2
#################################################################################################################
+++24 DW sequential run, matrix col, raid 0, cache 0, streamio 8, large 0-9, 457.89 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -matrix col -num_small 0 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8
Total Data Points: 9

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=457.89 @ Small=0 and Large=1
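Runs +++13 through +++23 above vary -num_large for the sequential RAID0 simulation on the four disks. Collecting the recorded MBPS per stream count shows throughput falling steadily as concurrent streams contend, since the extra streams force seeks and push the disks toward random-IO behaviour:

```python
# Maximum Large MBPS per num_large, copied from point runs +++13..+++23
mbps_by_num_large = {
    1: 458.25, 2: 294.11, 4: 285.10, 8: 262.08, 16: 217.93,
    24: 198.45, 32: 194.99, 64: 184.84, 128: 154.78, 256: 165.18,
}

for n, mbps in sorted(mbps_by_num_large.items()):
    print(f"num_large={n:3d}: {mbps:7.2f} MBPS")

# One stream streams sequentially; 256 contending streams lose ~64% of that.
drop = 1 - mbps_by_num_large[256] / mbps_by_num_large[1]
print(f"drop from 1 -> 256 streams: {drop:.0%}")
```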
#################################################################################################################
+++31 DW sequential run, matrix point, duration 300, raid 0, cache 0, streamio 8, large 8, 256.72 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -duration 300 -testname mytest -matrix point -num_small 0 -num_large 8 -size_large 1024 -num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose

This maps to this test:
Test: mytest 
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB 
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0 
Large Columns:,      8
Total Data Points: 1 

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=256.72 @ Small=0 and Large=8
#################################################################################################################
+++25 FAIL, run normal
-----------------------------------------------------------------------------------------------------------------

#################################################################################################################
+++26 run OLTP, 487 IOPS, 19.99ms lat
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run oltp -testname mytest

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      4,      8,     12,     16,     20,     24,     28,     32,     36,     40,     44,     48,     52,     56,     60,     64,     68,     72,     76,     80
Large Columns:,      0
Total Data Points: 24

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Small IOPS=487 @ Small=80 and Large=0
Minimum Small Latency=19.99 @ Small=4 and Large=0
#################################################################################################################
+++27 run DSS, 181.19 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run dss -testname mytest 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 240 seconds
Small Columns:,      0
Large Columns:,      4,      8,     12,     16,     20,     24,     28,     32,     36,     40,     44,     48,     52,     56,     60
Total Data Points: 19

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=181.19 @ Small=0 and Large=32
#################################################################################################################
+++28 FAIL, generate combinations of 32KB and 1MB reads to random locations, 340 IOPS, 40 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -size_small 32 -size_large 1024 -type rand -matrix detailed -testname mytest
#################################################################################################################
+++30 Greg Rahn - emulate 1MB random scans, matrix point, duration 300, CONCAT, cache NE, streamio N/A, large 8, 139.28 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type rand -testname mytest -num_disks 4 -matrix point -num_large 8 -num_small 0 -duration 300 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,      8
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=139.28 @ Small=0 and Large=8
#################################################################################################################
+++44 Greg Rahn - emulate 1MB random scans, matrix point, duration 300, CONCAT, cache 0, streamio N/A, large 8, 138.71 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type rand -testname mytest -num_disks 4 -matrix point -num_large 8 -num_small 0 -duration 300 -cache_size 0 -simulate concat

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,      8
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=138.71 @ Small=0 and Large=8
#################################################################################################################
+++45 Greg Rahn - emulate 1MB random scans, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 8, 138.47 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type rand -testname mytest -num_disks 4 -matrix point -num_large 8 -num_small 0 -duration 300 -cache_size 0 -simulate raid0

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,      8
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=138.47 @ Small=0 and Large=8
#################################################################################################################
+++46 Greg Rahn - emulate 1MB random scans, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 256, 151.89 MBPS <<< RANDOM READS 
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type rand -testname mytest -num_disks 4 -matrix point -num_large 256 -num_small 0 -duration 300 -cache_size 0 -simulate raid0

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,    256
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=151.89 @ Small=0 and Large=256
#################################################################################################################
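(Editor's note: every entry in this log ends with the same one-line summary, so the headline numbers can be collated mechanically. A minimal illustrative parser for the `Maximum .../Minimum ...` summary format shown in these logs; the helper name and dict layout are ours, not part of ORION.)

```python
import re

# Matches the ORION summary lines seen in these logs, e.g.
# "Maximum Large MBPS=151.89 @ Small=0 and Large=256"
# "Minimum Small Latency=19.99 @ Small=4 and Large=0"
PAT = re.compile(
    r"(Maximum|Minimum) (Small IOPS|Large MBPS|Small Latency)=([\d.]+)"
    r" @ Small=(\d+) and Large=(\d+)"
)

def parse_summary(line):
    """Return the metric name, value, and load point from one summary line."""
    m = PAT.search(line)
    if not m:
        return None
    _, metric, value, small, large = m.groups()
    return {"metric": metric, "value": float(value),
            "small": int(small), "large": int(large)}

print(parse_summary("Maximum Large MBPS=151.89 @ Small=0 and Large=256"))
```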
+++52 Greg Rahn -             random scans, matrix point, duration 60, CONCAT, cache 0, streamio N/A, large 720, 160.88 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type rand -testname mytest -num_disks 4 -matrix point -num_large 720 -num_small 0 -duration 60 -cache_size 0 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,    720
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=160.88 @ Small=0 and Large=720
#################################################################################################################
+++53 Greg Rahn -             random scans, matrix point, duration 300, CONCAT, cache 0, streamio N/A, large 720, 151.22 MBPS <<<
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type rand -testname mytest -num_disks 4 -matrix point -num_large 720 -num_small 0 -duration 300 -cache_size 0 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,    720
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=151.22 @ Small=0 and Large=720
#################################################################################################################
+++32 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, CONCAT, cache NE, streamio 4, large 8, 440.50 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -testname mytest -num_disks 4 -matrix point -num_large 8 -num_small 0 -duration 300 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,      8
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=440.50 @ Small=0 and Large=8
#################################################################################################################
+++33 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, CONCAT, cache 0, streamio 4, large 8, 441.24 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -testname mytest -num_disks 4 -matrix point -num_large 8 -num_small 0 -duration 300 -cache_size 0 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,      8
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=441.24 @ Small=0 and Large=8
#################################################################################################################
+++34 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 4, large 8, 221.29 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -simulate raid0 -testname mytest -num_disks 4 -matrix point -num_large 8 -num_small 0 -duration 300 -cache_size 0 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,      8
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=221.29 @ Small=0 and Large=8
#################################################################################################################
+++35 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 8, large 8, 254.70 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest -num_disks 4 -matrix point -num_large 8 -num_small 0 -duration 300 -cache_size 0 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,      8
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=254.70 @ Small=0 and Large=8
#################################################################################################################
+++36 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, RAID0, cache 0, streamio 8, large 256, 157.62 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest -num_disks 4 -matrix point -num_large 256 -num_small 0 -duration 300 -cache_size 0 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,    256
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=157.62 @ Small=0 and Large=256
#################################################################################################################
+++37 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 3600, RAID0, cache 0, streamio 8, large 256, 159.09 MBPS <<< SEQUENTIAL READS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest -num_disks 4 -matrix point -num_large 256 -num_small 0 -duration 3600 -cache_size 0 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 3600 seconds
Small Columns:,      0
Large Columns:,    256
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=159.09 @ Small=0 and Large=256
#################################################################################################################
+++38 Greg Rahn - emulate 1MB sequential scans, matrix point, duration 300, RAID0, cache NE, streamio 8, large 256, 157.65 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest -num_disks 4 -matrix point -num_large 256 -num_small 0 -duration 300 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,    256
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=157.65 @ Small=0 and Large=256
#################################################################################################################
+++51 Greg Rahn -              sequential scans, matrix point, duration 60, CONCAT, cache 0, streamio 16, large 45, 347.92 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -testname mytest -num_disks 4 -matrix point -num_large 45 -num_small 0 -num_streamIO 16 -cache_size 0 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 16
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,     45
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=347.92 @ Small=0 and Large=45
#################################################################################################################
+++54 Greg Rahn -              sequential scans, matrix point, duration 300, CONCAT, cache 0, streamio 16, large 45, 358.31 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -testname mytest -num_disks 4 -matrix point -num_large 45 -num_small 0 -num_streamIO 16 -cache_size 0 -duration 300 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 16
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,     45
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=358.31 @ Small=0 and Large=45
#################################################################################################################
+++55 Greg Rahn -              sequential scans, matrix point, duration 300, CONCAT, cache 0, streamio 32, large 45, 359.01 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -testname mytest -num_disks 4 -matrix point -num_large 45 -num_small 0 -num_streamIO 32 -cache_size 0 -duration 300 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 32
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,     45
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=359.01 @ Small=0 and Large=45
#################################################################################################################
+++56 Greg Rahn -              sequential scans, matrix point, duration 300, CONCAT, cache 0, streamio 45, large 45, 352.05 MBPS <<<
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -testname mytest -num_disks 4 -matrix point -num_large 45 -num_small 0 -num_streamIO 45 -cache_size 0 -duration 300 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 45
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,     45
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=352.05 @ Small=0 and Large=45
#################################################################################################################
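(Editor's note: the streamio sweep above plateaus around 359 MBPS across the 4 disks. Dividing aggregate throughput by disk count gives a rough per-spindle figure, useful for comparison against the single-disk run logged later (+++61, 53.20 MBPS). An illustrative sketch; it assumes evenly distributed load, and the function name is ours.)

```python
def per_disk_mbps(total_mbps, num_disks):
    """Aggregate MB/s divided evenly across disks (assumes balanced load)."""
    return total_mbps / num_disks

# Best sequential-read point above: 359.01 MBPS across 4 disks
print(round(per_disk_mbps(359.01, 4), 2))  # 89.75
```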
+++49 generate multiple random 1MB write streams, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 8, 147.55 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type rand -matrix point -num_small 0 -cache_size 0 -num_large 8 -duration 300

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,      8
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=147.55 @ Small=0 and Large=8
#################################################################################################################
+++50 generate multiple random 1MB write streams, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 256, 109.53 MBPS <<< RANDOM WRITES
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type rand -matrix point -num_small 0 -cache_size 0 -num_large 256 -duration 300

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,    256
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=109.53 @ Small=0 and Large=256
#################################################################################################################
+++57 generate multiple random 1MB write streams, matrix point, duration 300, raid 0, cache 0, streamio N/A, large 720, 107.31 MBPS <<<
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type rand -matrix point -num_small 0 -cache_size 0 -num_large 720 -duration 300 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,    720
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=107.31 @ Small=0 and Large=720
#################################################################################################################
+++47 generate multiple sequential 1MB write streams, matrix col, duration 60, CONCAT, cache NE, streamio 4, large 1-8, 421.80 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate concat -write 100 -type seq -matrix col -num_small 0

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 100%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8
Total Data Points: 9

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=421.80 @ Small=0 and Large=5
#################################################################################################################
+++29 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix col, cache NE, streamio 4, large 1-8, 370.14 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type seq -matrix col -num_small 0

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8
Total Data Points: 9

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=370.14 @ Small=0 and Large=1
#################################################################################################################
+++39 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix col, cache 0, streamio 4, large 1-8, 369.68 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type seq -matrix col -num_small 0 -cache_size 0 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8
Total Data Points: 9

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=369.68 @ Small=0 and Large=1
#################################################################################################################
+++48 generate multiple sequential 1MB write streams, matrix point, duration 60, CONCAT, cache 0, streamio 8, large 8, 419.17 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate concat -write 100 -type seq -matrix point -num_small 0 -cache_size 0 -num_streamIO 8 -num_large 8

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: CONCAT
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      8
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=419.17 @ Small=0 and Large=8
#################################################################################################################
+++40 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix col, cache 0, streamio 8, large 1-8, 387.46 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type seq -matrix col -num_small 0 -cache_size 0 -num_streamIO 8 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2,      3,      4,      5,      6,      7,      8
Total Data Points: 9

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=387.46 @ Small=0 and Large=1
#################################################################################################################
+++41 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix point, duration 60, cache 0, streamio 8, large 8, 251.69 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type seq -matrix point -num_small 0 -cache_size 0 -num_streamIO 8 -num_large 8 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      8
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=251.69 @ Small=0 and Large=8
#################################################################################################################
+++42 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix point, duration 300, cache 0, streamio 8, large 8, 249.08 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type seq -matrix point -num_small 0 -cache_size 0 -num_streamIO 8 -num_large 8 -duration 300 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,      8
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=249.08 @ Small=0 and Large=8
#################################################################################################################
+++43 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix point, duration 300, cache 0, streamio 8, large 256, 106.62 MBPS <<< SEQUENTIAL WRITES
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type seq -matrix point -num_small 0 -cache_size 0 -num_streamIO 8 -num_large 256 -duration 300 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,    256
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=106.62 @ Small=0 and Large=256
#################################################################################################################
+++58 generate multiple sequential 1MB write streams, simulating 1MB RAID0 stripes, matrix point, duration 300, cache 0, streamio 45, large 45, 165.57 MBPS <<<
-----------------------------------------------------------------------------------------------------------------
Commandline:
-testname mytest -run advanced -simulate raid0 -stripe 1024 -write 100 -type seq -matrix point -num_small 0 -cache_size 0 -num_streamIO 45 -num_large 45 -duration 300 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 45
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,     45
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
Name: /dev/sdc  Size: 1000204886016
Name: /dev/sdd  Size: 1000204886016
Name: /dev/sde  Size: 1000204886016
4 FILEs found.

Maximum Large MBPS=165.57 @ Small=0 and Large=45
#################################################################################################################
+++59 FAIL, husnu matrix basic (seq and iops test), stopped at point4, 211.56 MBPS, 365 IOPS, 54.77 lat
-----------------------------------------------------------------------------------------------------------------
          1, 454.68
          2, 202.64
          3, 207.06
          4, 211.56

Commandline:
-run advanced -testname mytest -num_disks 4 -simulate raid0 -write 0 -type seq -matrix basic -cache_size 0 -verbose 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
#################################################################################################################
+++60 FAIL, husnu matrix detailed (seq and iops test), stopped at point24 out of 189, no MBPS, 370 IOPS, 53.91 lat
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -testname mytest -num_disks 4 -simulate raid0 -write 0 -type seq -matrix detailed -cache_size 0 -verbose 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 4
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
#################################################################################################################
+++61 SINGLE DISK RUN seq matrix point, num large 256, streamio 8, raid 0, cache0, duration 300, 53.20 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest -num_disks 1 -matrix point -num_large 256 -num_small 0 -duration 300 -cache_size 0 

This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,    256
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
1 FILEs found.

Maximum Large MBPS=53.20 @ Small=0 and Large=256
#################################################################################################################
+++62 MULTIPLE ORION (4) SESSION RUN seq matrix point, num large 256, streamio 8, raid 0, cache0, duration 300, on the OS, around 200 MBPS
-----------------------------------------------------------------------------------------------------------------
Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest1 -num_disks 1 -matrix point -num_large 256 -num_small 0 -duration 300 -cache_size 0 

This maps to this test:
Test: mytest1
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,    256
Total Data Points: 1

Name: /dev/sdb  Size: 1000204886016
1 FILEs found.

Maximum Large MBPS=51.03 @ Small=0 and Large=256

Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest2 -num_disks 1 -matrix point -num_large 256 -num_small 0 -duration 300 -cache_size 0 

This maps to this test:
Test: mytest2
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,    256
Total Data Points: 1

Name: /dev/sdc  Size: 1000204886016
1 FILEs found.

Maximum Large MBPS=48.03 @ Small=0 and Large=256

Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest3 -num_disks 1 -matrix point -num_large 256 -num_small 0 -duration 300 -cache_size 0 

This maps to this test:
Test: mytest3
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,    256
Total Data Points: 1

Name: /dev/sdd  Size: 1000204886016
1 FILEs found.

Maximum Large MBPS=44.43 @ Small=0 and Large=256

Commandline:
-run advanced -type seq -num_streamIO 8 -simulate raid0 -testname mytest4 -num_disks 1 -matrix point -num_large 256 -num_small 0 -duration 300 -cache_size 0 

This maps to this test:
Test: mytest4
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 300 seconds
Small Columns:,      0
Large Columns:,    256
Total Data Points: 1

Name: /dev/sde  Size: 1000204886016
1 FILEs found.

Maximum Large MBPS=41.20 @ Small=0 and Large=256
#################################################################################################################
+++63 IOPS - read (random, seq), write (random, seq) 
     +++ observations: on OLTP runs, collectl-all reports wsec/s (sectors written per second) rather than write IOPS.. 
     +++ I've verified this against the iostat output
-----------------------------------------------------------------------------------------------------------------
     +++ params_oltp_randomwrites Maximum Small IOPS=309 @ Small=256 and Large=0 Minimum Small Latency=825.24 @ Small=256 and Large=0
     +++ params_oltp_seqwrites Maximum Small IOPS=312 @ Small=256 and Large=0 Minimum Small Latency=818.04 @ Small=256 and Large=0
     +++ params_oltp_randomreads Maximum Small IOPS=532 @ Small=256 and Large=0 Minimum Small Latency=480.28 @ Small=256 and Large=0
     +++ params_oltp_seqreads Maximum Small IOPS=527 @ Small=256 and Large=0 Minimum Small Latency=485.31 @ Small=256 and Large=0
     +++ params_oltp Maximum Small IOPS=481 @ Small=80 and Large=0 Minimum Small Latency=20.34 @ Small=4 and Large=0
#################################################################################################################
+++64 FAIL, increasing random writes
-----------------------------------------------------------------------------------------------------------------
-run advanced -testname mytest -type rand -matrix col -simulate raid0 -num_disks 4 -cache_size 0 -num_small 256 -stripe 1024 -write 100 -duration 300
#################################################################################################################
+++65 FULL run of oriontoolkit
-----------------------------------------------------------------------------------------------------------------
     +++ params_dss_randomwrites Maximum Large MBPS=108.17 @ Small=0 and Large=256
     +++ params_dss_seqwrites Maximum Large MBPS=111.59 @ Small=0 and Large=256
     +++ params_dss_randomreads Maximum Large MBPS=148.50 @ Small=0 and Large=256
     +++ params_dss_seqreads Maximum Large MBPS=156.24 @ Small=0 and Large=256
     +++ params_oltp_randomwrites Maximum Small IOPS=312 @ Small=256 and Large=0 Minimum Small Latency=816.17 @ Small=256 and Large=0
     +++ params_oltp_seqwrites Maximum Small IOPS=314 @ Small=256 and Large=0 Minimum Small Latency=812.39 @ Small=256 and Large=0
     +++ params_oltp_randomreads Maximum Small IOPS=530 @ Small=256 and Large=0 Minimum Small Latency=482.69 @ Small=256 and Large=0
     +++ params_oltp_seqreads Maximum Small IOPS=526 @ Small=256 and Large=0 Minimum Small Latency=486.29 @ Small=256 and Large=0
     +++ params_dss Maximum Large MBPS=177.65 @ Small=0 and Large=32
     +++ params_oltp Maximum Small IOPS=480 @ Small=80 and Large=0 Minimum Small Latency=20.42 @ Small=4 and Large=0
#################################################################################################################
+++66 ShortStroked disks 150GB/1000GB 
-----------------------------------------------------------------------------------------------------------------
     +++ params_dss_randomwrites Maximum Large MBPS=151.57 @ Small=0 and Large=256
     +++ params_dss_seqwrites Maximum Large MBPS=163.09 @ Small=0 and Large=256
     +++ params_dss_randomreads Maximum Large MBPS=192.11 @ Small=0 and Large=256
     +++ params_dss_seqreads Maximum Large MBPS=207.77 @ Small=0 and Large=256
     +++ params_oltp_randomwrites Maximum Small IOPS=431 @ Small=256 and Large=0 Minimum Small Latency=592.28 @ Small=256 and Large=0
     +++ params_oltp_seqwrites Maximum Small IOPS=427 @ Small=256 and Large=0 Minimum Small Latency=597.92 @ Small=256 and Large=0
     +++ params_oltp_randomreads Maximum Small IOPS=792 @ Small=256 and Large=0 Minimum Small Latency=323.08 @ Small=256 and Large=0
     +++ params_oltp_seqreads Maximum Small IOPS=794 @ Small=256 and Large=0 Minimum Small Latency=322.24 @ Small=256 and Large=0
     +++ params_dss Maximum Large MBPS=216.53 @ Small=0 and Large=28
     +++ params_oltp Maximum Small IOPS=711 @ Small=80 and Large=0 Minimum Small Latency=14.32 @ Small=4 and Large=0
#################################################################################################################
+++ a short stroked single disk
-----------------------------------------------------------------------------------------------------------------

#################################################################################################################

+++ create regression on OLTP Write and DSS Write
-----------------------------------------------------------------------------------------------------------------

#################################################################################################################
}}}
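A quick sanity check for the collectl-all note in test +++63 above: wsec/s counts sectors written per second, so, assuming standard 512-byte sectors, it converts to write MB/s and to an equivalent IOPS figure at a given IO size. A minimal Python sketch:

```python
# Convert collectl's wsec/s (sectors written per second) into write MB/s
# and into equivalent IOPS at a given IO size, so the numbers can be
# compared against iostat / ORION output.
SECTOR_BYTES = 512  # assumption: standard 512-byte sectors

def wsec_to_mbps(wsec_per_s):
    """Sectors written per second -> megabytes per second."""
    return wsec_per_s * SECTOR_BYTES / (1024 * 1024)

def wsec_to_iops(wsec_per_s, io_size_kb=8):
    """Sectors written per second -> IOPS at the given IO size (default 8 KB)."""
    return wsec_per_s * SECTOR_BYTES / (io_size_kb * 1024)

# Example: 5120 sectors/s = 2.5 MB/s = 320 x 8 KB write IOPS
print(wsec_to_mbps(5120))   # 2.5
print(wsec_to_iops(5120))   # 320.0
```

The figures here are illustrative, not taken from the test runs above.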
{{{
Here is a repeat of a post I made back in November 2005 - in case anyone
is having trouble getting it to work on Windows.  I haven't checked
lately, but at the time, it wasn't clearly documented.  In retrospect,
maybe it should have been more obvious to me that I had to specify a
datafile, but it wasn't obvious at the time:
 
########################################################################
#######
 
In case anyone else wants to use ORION on Windows, I finally figured out
how to get it to work.  Apparently you have to specify an actual Oracle
datafile, not just a directory or empty text file.  I put
"C:\oracle\oradata\orcl\example01.dbf" in my mytest.lun file, and then
ORION worked, giving me the following command-line output:
 
C:\Program Files\Oracle\Orion>orion -run simple -testname mytest
-num_disks 1
ORION: ORacle IO Numbers -- Version 10.2.0.1.0
Test will take approximately 9 minutes
Larger caches may take longer
 
 
And the following results in mytest_summary.txt:
 
ORION VERSION 10.2.0.1.0
 
Commandline:
-run simple -testname mytest -num_disks 1 
 
This maps to this test:
Test: mytest
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:,      0
Large Columns:,      0,      1,      2
Total Data Points: 8
 
Name: C:\oracle\oradata\orcl\example01.dbf Size: 157294592
1 FILEs found.
 
Maximum Large MBPS=9.01 @ Small=0 and Large=2
Maximum Small IOPS=52 @ Small=2 and Large=0
Minimum Small Latency=20.45 @ Small=1 and Large=0
########################################################################
####### 
}}}

https://stackoverflow.com/questions/48462896/out-of-memory-in-hive-tez-with-lateral-view-json-tuple
https://stackoverflow.com/questions/48403972/oom-in-tez-hive/48407044
https://community.cloudera.com/t5/Support-Questions/Trying-to-use-Hive-EXPLODE-function-to-quot-unfold-quot-an/td-p/103694


! lateral view examples
https://community.cloudera.com/t5/Support-Questions/Hive-Explode-Lateral-View-clarification/td-p/167827
https://stackoverflow.com/questions/42403306/hive-lateral-view-explode


! documentation 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+LateralView


! similarities with Xquery on DW environments 
<<<
troubleshooting the XQuery: the hidden parameter needs to be tested and the process re-run. If that does not work, they need to break that query into multiple smaller XQueries to load into a table, then do the join from there.

Basically the issue is the flattening of XML to do reporting on top of it.

I see this issue on newer data warehouses that use newer data structures like JSON, where they run the LATERAL VIEW..EXPLODE function on the marketing data to flatten it https://cwiki.apache.org/confluence/display/Hive/LanguageManual+LateralView

With LATERAL VIEW..EXPLODE the issues encountered are memory related (Java GC issues) when developers try to flatten hundreds of JSON leaves at a time. The usual fix is to tune the Java container memory, and also to lessen the columns to explode or break the query into pieces.

This is kind of similar to our PGA exhaustion issue.
<<<
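For illustration, the row-multiplying effect of Hive's LATERAL VIEW..EXPLODE can be sketched in plain Python: every input row carrying an N-element JSON array becomes N output rows, which is why exploding many wide arrays at once blows up memory. The row and column names here are made up:

```python
import json

# Hypothetical input rows: each has an id and a JSON array column, as in
# the marketing-data example. EXPLODE turns each array element into its
# own output row, duplicating the rest of the row.
rows = [
    {"id": 1, "tags": json.dumps(["a", "b", "c"])},
    {"id": 2, "tags": json.dumps(["d"])},
]

def explode(rows, col):
    """Yield one output row per element of the JSON array in `col`."""
    for row in rows:
        for elem in json.loads(row[col]):
            out = dict(row)   # copy the row, then replace the array column
            out[col] = elem
            yield out

flat = list(explode(rows, "tags"))
print(len(flat))  # 4 rows out of 2: row count multiplies by array length
```

The generator keeps memory flat; materializing the whole flattened set at once (as a query engine must for some plans) is what triggers the GC pressure described above.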



http://blogs.oracle.com/optimizer/2010/07/outerjoins_in_oracle.html

outer joins 101 https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::p11_question_id:5229892958977
<<<
outer join 101: 

you have two tables -- emp and dept. dept has 4 rows (deptno = 10, 20, 30, 40). emp has 14 rows but only 3 distinct values for deptno (10,20,30). 

You need to write a report that shows the DEPTNO and count of employees for ALL departments. This requires an OUTER JOIN since the natural join: 

select d.deptno, count(empno) 
from emp e, dept d 
where e.deptno = d.deptno 
group by d.deptno; 

would "lose" deptno = 40. So, we: 

select d.deptno, count(empno) 
from emp e, dept d 
where e.deptno(+) = d.deptno 
group by d.deptno; 

and that simply means "use DEPT as the driving table, for each row in DEPT, find and count all of the EMPNOS we find. IF there isn't a match in EMP for a given DEPTNO -- then "make up" a record in EMP with all NULL values and join that record -- so we don't drop the dept record" 
<<<
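The emp/dept example above can be reproduced with SQLite (an ANSI LEFT JOIN standing in for Oracle's (+)): the inner join loses deptno 40, while the outer join keeps it with a count of 0.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dept (deptno INTEGER);
CREATE TABLE emp  (empno INTEGER, deptno INTEGER);
""")
conn.executemany("INSERT INTO dept VALUES (?)", [(10,), (20,), (30,), (40,)])
# 14 employees, but only in deptnos 10, 20 and 30
emps = [(i, [10, 20, 30][(i - 1) % 3]) for i in range(1, 15)]
conn.executemany("INSERT INTO emp VALUES (?, ?)", emps)

# Natural (inner) join: deptno 40 is "lost"
inner = conn.execute("""
    SELECT d.deptno, COUNT(e.empno) FROM emp e, dept d
    WHERE e.deptno = d.deptno GROUP BY d.deptno ORDER BY d.deptno""").fetchall()

# Outer join (DEPT drives, as with e.deptno(+) = d.deptno): 40 shows up with 0
outer = conn.execute("""
    SELECT d.deptno, COUNT(e.empno) FROM dept d
    LEFT JOIN emp e ON e.deptno = d.deptno
    GROUP BY d.deptno ORDER BY d.deptno""").fetchall()

print(inner)  # [(10, 5), (20, 5), (30, 4)]
print(outer)  # [(10, 5), (20, 5), (30, 4), (40, 0)]
```

COUNT(e.empno) counts only non-NULL empno values, so the "made up" all-NULL emp row contributes 0 rather than 1.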

<<<
If we needed to use t3.pk = t2.pk(+) -- then t2.pk will be NULL and therefore t1.pk = t2.pk will not be satisfied. HENCE, when we actually outer join t2 to t3 and "make up a row in t2", we also immediately turn around and throw it out. 

THEREFORE, the results of the queries: 

where t1.pk = t2.pk 
and t2.pk (+) = t3.pk 
and tb.c = 'SWE' 


and 

where t1.pk = t2.pk 
and t2.pk = t3.pk 
and tb.c = 'SWE' 


are identical. By adding the (+) to the first one, all you did was remove many different possible execution plans. And given all of your other comments about the performance of the query with and without the (+) you removed the PLAN THAT ACTUALLY WORKS BEST from even being considered. 


Anytime -- anytime -- you see: 


where t1.x = t2.x(+) 
and t2.any_column = any_value 


you know that you can (should, must, be silly not to) remove the (+) from the query. Because if we "make up a NULL row for t2" then we KNOW that t2.any_column cannot be equal to any_value (it is NULL after all!!) 
<<<
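Tom's point that a filter on the outer-joined table cancels the (+) can be checked mechanically (SQLite again, with LEFT JOIN playing the role of (+); tables are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (x INTEGER);
CREATE TABLE t2 (x INTEGER, c TEXT);
INSERT INTO t1 VALUES (1), (2), (3);
INSERT INTO t2 VALUES (1, 'SWE'), (2, 'USA');   -- no row for x = 3
""")

# Outer join alone keeps the made-up NULL row for t1.x = 3 ...
outer = conn.execute(
    "SELECT t1.x, t2.c FROM t1 LEFT JOIN t2 ON t1.x = t2.x ORDER BY t1.x"
).fetchall()

# ... but filtering on t2.c throws that NULL row right back out,
# so the outer join degenerates into a plain inner join.
outer_filtered = conn.execute(
    "SELECT t1.x, t2.c FROM t1 LEFT JOIN t2 ON t1.x = t2.x "
    "WHERE t2.c = 'SWE' ORDER BY t1.x"
).fetchall()
inner_filtered = conn.execute(
    "SELECT t1.x, t2.c FROM t1 JOIN t2 ON t1.x = t2.x "
    "WHERE t2.c = 'SWE' ORDER BY t1.x"
).fetchall()

print(outer)                              # [(1, 'SWE'), (2, 'USA'), (3, None)]
print(outer_filtered == inner_filtered)   # True
```

NULL never compares equal to 'SWE', which is exactly the mechanism the quote describes.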


{{{

----------------					
--ORACLE SYNTAX
----------------					

# CARTESIAN PRODUCT - if join condition is omitted
	
	select * from
	employees a, departments b	(20 x 8 rows = 160 rows)
	
	
 Types of Joins
	Oracle Proprietary 			SQL: 1999
	Joins (8i and prior): 			Compliant Joins:

	- Equijoin 				- Cross joins
	- Non-equijoin 				- Natural joins
	- Outer join 				- Using clause
	- Self join 				- Full or two sided outer joins
						- Arbitrary join conditions for outer joins
						
 Joins comparing SQL:1999 to Oracle Syntax
	Oracle Proprietary: 			SQL: 1999

	- Equijoin 				- Natural / Inner Join
	- Outer Join				- Left Outer Join
	- Self join 				- Join On
	- Non Equijoin 				- Join Using
	- Cartesian Product			- Cross Join


# EQUIJOIN (a.k.a simple join / inner join)

	SELECT last_name, employees.department_id, department_name
	FROM employees, departments
	WHERE employees.department_id = departments.department_id
	AND last_name = 'Matos';
	
	SELECT e.employee_id, e.last_name, e.department_id, d.department_id, d.location_id	<-- WITH ALIAS
	FROM employees e , departments d 
	WHERE e.department_id = d.department_id;
	
	SELECT e.last_name, d.department_name, l.city						<-- JOINING MORE THAN TWO TABLES (n-1)
	FROM employees e, departments d, locations l
	WHERE e.department_id = d.department_id
	AND d.location_id = l.location_id;

	
	--> to know how many join conditions you need, use "n-1" (if you're joining 4 tables then you need 3 joins)
	
	
# NON-EQUIJOIN

	SELECT e.last_name, e.salary, j.grade_level 
	FROM employees e, job_grades j 
	WHERE e.salary 
	BETWEEN j.lowest_sal AND j.highest_sal;
	

# OUTER JOIN (Place the outer join symbol following the name of the column in the table without the matching rows - where you want it NULL)

	SELECT e.employee_id, e.last_name, e.department_id, d.department_id, d.location_id	<-- GRANT DOES NOT HAVE A DEPARTMENT
	FROM employees e , departments d 
	WHERE e.department_id = d.department_id (+);
	
	SELECT e.last_name, d.department_name, l.city						<-- CONTRACTING DEPARTMENT DOES NOT HAVE ANY EMPLOYEES
	FROM employees e, departments d, locations l
	WHERE e.department_id (+) = d.department_id 
	AND d.location_id (+) = l.location_id;
	
	
	--> You use an outer join to also see rows that do not meet the join condition.
	
	--> The outer join operator can appear on only one side of the expression: the side that has information missing. It returns those rows from one table that have no direct match in the other table.
	
	--> A condition involving an outer join cannot use the IN operator or be linked to another condition by the OR operator.

	--> The UNION operator works around the issue of being able to use an outer join operator on one side of the expression. The ANSI full outer join also allows you to have an outer join on both sides of the expression.
	
	
# SELF JOIN

	SELECT worker.last_name || ' works for ' || manager.last_name 
	FROM employees worker, employees manager 
	WHERE worker.manager_id = manager.employee_id;
	

-------------------					
--SQL: 1999 SYNTAX
-------------------

# CROSS JOIN

	select * from employees		<-- result is Cartesian Product
	cross join departments;


# NATURAL JOIN

	select * from employees		<-- selects rows from the two tables that have equal values in all "matched columns" (the same name & data type)
	natural join departments;
	
	
# USING	(similar to equijoin, but shorter code than "ON")

	SELECT e.employee_id, e.last_name, d.location_id
	FROM employees e 
	JOIN departments d
	USING (department_id)
	WHERE e.department_id = 90;	<-- CAN'T DO THIS, do not use a "table name, alias, or qualifier" in the referenced columns ORA-25154: column part of USING clause cannot have qualifier
	
	select * 			<-- three way join
	from employees a
	join departments b
	using (department_id)
	join locations c
	using (location_id);


# ON (similar to equijoin)

	SELECT employee_id, city, department_name	<-- three way join
	FROM employees e
	JOIN departments d
	ON (d.department_id = e.department_id)
	JOIN locations l
	ON (d.location_id = l.location_id);
	
	
# LEFT OUTER JOIN

	SELECT e.last_name, e.department_id, d.department_name
	FROM employees e
	LEFT OUTER JOIN departments d
	ON (e.department_id = d.department_id);

This query retrieves all rows in the EMPLOYEES table (the left table), even if there is no match in the DEPARTMENTS table.
This query was written in earlier releases as follows:
 
   SELECT e.last_name, e.department_id, d.department_name
   FROM   hr.employees e, hr.departments d
   WHERE  e.department_id = d.department_id (+);   -- plus sign will have null, return all emp 
	
# RIGHT OUTER JOIN

	SELECT e.last_name, e.department_id, d.department_name
	FROM employees e
	RIGHT OUTER JOIN departments d
	ON (e.department_id = d.department_id);

This query retrieves all rows in the DEPARTMENTS table (the right table), even if there is no match in the EMPLOYEES table.
This query was written in earlier releases as follows:
 
   SELECT e.last_name, e.department_id, d.department_name
   FROM   hr.employees e, hr.departments d
   WHERE  e.department_id(+) = d.department_id ;   -- plus sign will have null, return all dept

	
	
# FULL OUTER JOIN

	SELECT e.last_name, e.department_id, d.department_name		<-- SQL :1999 Syntax
	FROM employees e
	FULL OUTER JOIN departments d
	ON (e.department_id = d.department_id);
	
	SELECT e.last_name, e.department_id, d.department_name		<-- Oracle Syntax
	FROM employees e, departments d
	WHERE e.department_id (+) = d.department_id
	UNION
	SELECT e.last_name, e.department_id, d.department_name
	FROM employees e, departments d
	WHERE e.department_id = d.department_id (+);

}}}
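The Oracle UNION emulation of FULL OUTER JOIN above translates directly to engines without native full outer join support. A sketch in SQLite with made-up rows (a left outer join UNIONed with the mirror-image left outer join):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees   (last_name TEXT, department_id INTEGER);
CREATE TABLE departments (department_id INTEGER, department_name TEXT);
INSERT INTO employees   VALUES ('Grant', NULL), ('King', 90);
INSERT INTO departments VALUES (90, 'Executive'), (190, 'Contracting');
""")

# Same idea as the Oracle (+) UNION (+) pattern: employees-driving outer
# join UNIONed with the departments-driving outer join gives full outer
# join semantics (UNION also removes the duplicated matched rows).
full = conn.execute("""
    SELECT e.last_name, d.department_name
    FROM employees e LEFT JOIN departments d
      ON e.department_id = d.department_id
    UNION
    SELECT e.last_name, d.department_name
    FROM departments d LEFT JOIN employees e
      ON e.department_id = d.department_id
""").fetchall()

print(sorted(full, key=repr))
# Grant (no dept), King/Executive, and Contracting (no emp) all survive
```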


http://en.wikipedia.org/wiki/PCI_Express#Current_status
http://en.wikipedia.org/wiki/List_of_device_bandwidths
http://www.iphonetechie.com/2010/10/pdanet-4-18-cracked-deb-file-and-installation-tutorial-great-alternative-to-mywi-4-8-3-works-awesome/	
! references
''Bryn Llewellyn'' http://www.oracle.com/technetwork/database/multitenant-wp-12c-1949736.pdf
Oracle multi-tenant in the real world - Working with PDBs in 12c - Mike Dietrich
{{{
https://apex.oracle.com/pls/apex/f?p=202202:2:::::P2_SUCHWORT:multi2013
}}}
Multitenant Database Management http://www.oracle.com/technetwork/issue-archive/2014/14-nov/o64ocp12c-2349447.html
Basics of the Multitenant Container Database http://www.oracle.com/technetwork/issue-archive/2014/14-sep/o54ocp12c-2279221.html

! alter parameter
{{{
alter system set parameter=value container=current|all;
select name, value from v$system_parameter where ispdb_modifiable='TRUE' order by name;
}}}

! create user 
{{{
create user tim container=current|all; 
}}}

! grant user access to different PDBs
https://blog.dbi-services.com/the-privileges-to-connect-to-a-container/
{{{
SQL> create user C##USER1 identified by oracle container=all;
User created.
SQL> grant DBA to C##USER1 container=all;
Grant succeeded.
}}}

! grant select on v$pdbs
http://oracledbpro.blogspot.com/2015/09/cant-view-data-via-common-user-in.html?m=1
{{{
alter user C##TEST set container_data=all container = current;
}}}


! References 
http://oracle-base.com/articles/12c/articles-12c.php
<<<

@@Multitenant@@ : Overview of Container Databases (CDB) and Pluggable Databases (PDB) - This article provides a basic overview of the multitenant option, with links to more detailed articles on the functionality.

@@Multitenant@@ : Create and Configure a Container Database (CDB) in Oracle Database 12c Release 1 (12.1) - Take your first steps with the Oracle Database 12c Multitenant option by creating container databases.

@@GOOD STUFF - Multitenant@@ : Create and Configure a Pluggable Database (PDB) in Oracle Database 12c Release 1 (12.1) - Take your next steps with the Oracle Database 12c Multitenant option by creating pluggable databases.

@@GOOD STUFF - Multitenant@@ : Migrate a Non-Container Database (CDB) to a Pluggable Database (PDB) in Oracle Database 12c Release 1 (12.1) - Learn now to start converting your existing regular databases into pluggable databases in Oracle Database 12c Release 1 (12.1).

@@Multitenant@@ : Connecting to Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1) - This article explains how to connect to container databases (CDB) and pluggable databases (PDB) on Oracle 12c Release 1 (12.1).
{{{
SHOW CON_NAME
ALTER SESSION SET container = pdb1;
ALTER SESSION SET container = cdb$root;
}}}
@@Multitenant@@ : Startup and Shutdown Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1) - Learn how to startup and shutdown container databases (CDB) and pluggable databases (PDB) in Oracle 12c Release 1 (12.1).
{{{
SQL*Plus Command
ALTER PLUGGABLE DATABASE
Pluggable Database (PDB) Automatic Startup
Preserve PDB Startup State (12.1.0.2 onward)
}}}
@@Multitenant@@ : Configure Instance Parameters and Modify Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1) - This article shows how to configure instance parameters and modify the database for container databases (CDB) and pluggable databases (PDB) in Oracle Database 12c Release 1 (12.1).

@@Multitenant@@ : Manage Tablespaces in a Container Database (CDB) and Pluggable Database (PDB) in Oracle Database 12c Release 1 (12.1) - This article demonstrates how to manage tablespaces in a container database (CDB) and pluggable database (PDB) in Oracle Database 12c Release 1 (12.1).

@@Multitenant@@ : Manage Users and Privileges For Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1) - This article shows how to manage users and privileges for container databases (CDB) and pluggable databases (PDB) in Oracle Database 12c Release 1 (12.1).
{{{
Create Common Users
Create Local Users
Create Common Roles
Create Local Roles
Granting Roles and Privileges to Common and Local Users
}}}
@@Multitenant@@ : Backup and Recovery of a Container Database (CDB) and a Pluggable Database (PDB) in Oracle Database 12c Release 1 (12.1) - Learn how backup and recovery is affected by the multitenant option in Oracle Database 12c Release 1 (12.1).
{{{
RMAN Connections
Backup
   Container Database (CDB) Backup
   Root Container Backup
   Pluggable Database (PDB) Backup
Complete Recovery
   Tablespace and Datafile Backups
   Container Database (CDB) Complete Recovery
   Root Container Complete Recovery
   Pluggable Database (PDB) Complete Recovery
   Tablespace and Datafile Complete Recovery
Point In Time Recovery (PITR)
   Container Database (CDB) Point In Time Recovery (PITR)
   Pluggable Database (PDB) Point In Time Recovery (PITR)
   Table Point In Time Recovery (PITR) in PDBs
}}}
@@Multitenant@@ : Flashback of a Container Database (CDB) in Oracle Database 12c Release 1 (12.1) - Identify the restrictions when using flashback database against a container database (CDB) in Oracle 12c.

Multitenant : Resource Manager with Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1) - Control resource allocation between pluggable databases and within an individual pluggable database.

@@Multithreaded Model@@ using THREADED_EXECUTION in Oracle Database 12c Release 1 (12.1) - Learn how to switch the database between the multiprocess and multithreaded models in Oracle Database 12c Release 1 (12.1).

--

@@Multitenant@@ : Clone a Remote PDB or Non-CDB in Oracle Database 12c (12.1.0.2) - Clone PDBs from remote PDBs and Non-CDBs over database links in Oracle Database 12c (12.1.0.2).

@@Multitenant@@ : Database Triggers on Pluggable Databases (PDBs) in Oracle 12c Release 1 (12.1) - With the introduction of the multitenant option, database event triggers can be created in the scope of the CDB or PDB.

@@Multitenant@@ : PDB Logging Clause in Oracle Database 12c Release 1 (12.1.0.2) - The PDB logging clause is used to set the default tablespace logging clause for a PDB in Oracle Database 12c Release 1 (12.1.0.2).

@@Multitenant@@ : Metadata Only PDB Clones in Oracle Database 12c Release 1 (12.1.0.2) - Make structure-only copies of PDBs using the NO DATA clause added in Oracle Database 12c Release 1 (12.1.0.2).

@@Multitenant@@ : PDB CONTAINERS Clause in Oracle Database 12c Release 1 (12.1.0.2) - The PDB CONTAINERS clause allows data to be queried across multiple PDBs in Oracle Database 12c Release 1 (12.1.0.2).

@@Multitenant@@ : PDB Subset Cloning in Oracle Database 12c Release 1 (12.1.0.2) - Use subset cloning to limit the amount of tablespaces you bring across to your new PDB.

@@Multitenant@@ : Remove APEX Installations from the CDB in Oracle Database 12c Release 1 (12.1) - This article describes how to remove APEX from the CDB so you can install it directly in a PDB.

@@Multitenant@@ : Running Scripts in Container Databases (CDBs) and Pluggable Databases (PDBs) in Oracle Database 12c Release 1 (12.1) - This article presents a number of solutions to help transition your shell scripts to work with the multitenant option.
{{{
SET CONTAINER
TWO_TASK
Secure External Password Store
Scheduler
catcon.pl
}}}
<<<



From Yong Huang... 

http://yong321.freeshell.org/oranotes/LargePoolMtsPga.txt
http://yong321.freeshell.org/oranotes/PGA_and_PrivateMemViewedFromOS.txt  <-- good stuff
http://yong321.freeshell.org/oranotes/PGAIncreaseWithPLSQLTable.txt

Hmm... his investigations are awesome, I wonder how DBA_HIST_PGASTAT will be useful for time series analysis


-- PGA Sizing 
http://www.freelists.org/post/oracle-l/SGA-shared-pool-size,3

-- ASH PGA usage (in bytes)
https://bdrouvot.wordpress.com/2013/03/19/link-huge-pga-temp/



<<showtoc>>


! to log the PK/FK errors 

{{{

-- the exceptions table must exist before VALIDATE ... EXCEPTIONS INTO runs
-- (Oracle ships utlexcpt.sql to create one named EXCEPTIONS)
CREATE TABLE MY_EXCEPT_TABLE
(
  ROW_ID      ROWID,
  OWNER       VARCHAR2(30 BYTE),
  TABLE_NAME  VARCHAR2(30 BYTE),
  CONSTRAINT  VARCHAR2(30 BYTE)
);

ALTER TABLE MY_MASTER_TABLE ADD 
CONSTRAINT MY_FK1
 FOREIGN KEY (MY_LOOKUP_TABLE1_ID)
 REFERENCES MY_LOOKUP_TABLE1 (ID)
 ENABLE
 NOVALIDATE;
 
ALTER TABLE MY_MASTER_TABLE ADD 
CONSTRAINT MY_FK2
 FOREIGN KEY (MY_LOOKUP_TABLE2_ID)
 REFERENCES MY_LOOKUP_TABLE2 (ID)
 ENABLE
 VALIDATE
 EXCEPTIONS INTO MY_EXCEPT_TABLE;

}}}
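SQLite has no EXCEPTIONS INTO, but `PRAGMA foreign_key_check` gives the same "list the offending rows" workflow, which makes the pattern easy to try locally. The table and column names below mirror the Oracle sketch and are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_lookup_table1 (id INTEGER PRIMARY KEY);
-- FK declared but not enforced (foreign_keys PRAGMA is off by default),
-- roughly like Oracle's ENABLE NOVALIDATE
CREATE TABLE my_master_table (
    id INTEGER PRIMARY KEY,
    my_lookup_table1_id INTEGER REFERENCES my_lookup_table1(id)
);
INSERT INTO my_lookup_table1 VALUES (1);
INSERT INTO my_master_table  VALUES (100, 1), (101, 999);  -- 999 is an orphan
""")

# The VALIDATE step: list every row violating a declared FK, playing the
# role of Oracle's EXCEPTIONS INTO my_except_table. Each result row is
# (child table, rowid of bad row, parent table, FK index).
violations = conn.execute("PRAGMA foreign_key_check").fetchall()
print(violations)  # one violating row, pointing at rowid 101 in my_master_table
```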


! references
http://www.java2s.com/Code/Oracle/Table/Createtablewithforeignkey.htm
https://apexplained.wordpress.com/2013/04/20/the-emp-and-dept-tables-in-oracle/
https://www.techonthenet.com/oracle/foreign_keys/foreign_keys.php




! the plsql channel
http://tutorials.plsqlchannel.com/public/index.php - subscription good stuff

''Nice short,simple tutorial'' http://plsql-tutorial.com
''pl/sql basics video tutorial'' https://www.youtube.com/watch?v=_qBCjLKB_sM


''Debug PL/SQL''
http://st-curriculum.oracle.com/obe/db/11g/r2/prod/appdev/sqldev/plsql_debug/plsql_debug_otn.htm
http://sueharper.blogspot.com/2006/07/remote-debugging-with-sql-developer_13.html

''PL/SQL: The Scripting Language Liberator'' http://goo.gl/BIcDXL

https://www.quora.com/What-features-of-PL-SQL-should-a-beginner-tackler-first


Top 5 Basic Concept Job Interview Questions for Oracle Database PL/SQL Developers 
http://www.dbasupport.com/oracle/ora11g/Basic-Concept-Interview-Questions.shtml	
Converting a PV vm back into an HVM vm
http://blogs.oracle.com/wim/2011/01/converting_a_pv_vm_back_into_a.html
https://blogs.oracle.com/datawarehousing/entry/partition_wise_joins
Using Parallel Execution [ID 203238.1]
Parallel Execution the Large/Shared Pool and ORA-4031 [ID 238680.1]
What does the parameter parallel_automatic_tuning ? [ID 577869.1]
Master Note Parallel Execution Wait Events [ID 1097154.1]
WAITEVENT: "PX Deq Credit: send blkd" [ID 271767.1]
SELECTING FROM EXTERNAL TABLE WITH CLOB perform very slow and High Wait On 'Px Deq Credit: Send Blkd ' [ID 1300645.1]
Tips to Reduce Waits for "PX DEQ CREDIT SEND BLKD" at Database Level [ID 738464.1]
Old and new Syntax for setting Degree of Parallelism [ID 260845.1]
PARALLEL_EXECUTION_MESSAGE_SIZE Usage [ID 756242.1]


Report for the Degree of Parallelism on Tables and Indexes [ID 270837.1]  <-- AWESOME script.. 

http://fahdmirza.blogspot.com/2011/04/px-deq-credit-send-blkd-tuning.html
http://dbaspot.com/oracle-server/268584-px-deq-credit-send-blkd.html
http://iamsys.wordpress.com/2010/03/24/px-deq-credit-send-blkd-caused-by-ide-sql-developer-toad-plsql-developer/
http://www.dbacomp.com.br/blog/?p=34 <-- GOOD STUFF EXPLANATION
http://oracle-dba-yi.blogspot.com/2011/01/px-deq-credit-send-blkd.html
http://webcache.googleusercontent.com/search?q=cache:UtGFixYN_PEJ:www.asktherealtom.ch/%3Fp%3D8+PX+Deq+Credit:+send+blkd&cd=1&hl=en&ct=clnk&gl=us
http://www.freelists.org/post/oracle-l/PX-Deq-Credit-send-blkd,27
http://www.freelists.org/post/oracle-l/best-way-to-invoke-parallel-in-DW-loads,13
http://www.mail-archive.com/oracle-l@fatcity.com/msg64774.html <-- tuning large pool










http://tobeimpact.blogspot.com/2013/10/parallel-query-errors-out-with-ora.html
https://fred115.wordpress.com/2012/07/30/db-link-with-taf-does-it-auto-fail-over/
! parameters
{{{
-- essentials
parallel_max_servers - (default: automatic) The maximum number of parallel slave processes that may be created on an instance. The default is calculated from system parameters including CPU_COUNT and PARALLEL_THREADS_PER_CPU; on most systems it works out to 20 x CPU_COUNT.
parallel_servers_target - (default: automatic) The upper limit on the number of parallel slaves that may be in use on an instance at any given time when parallel statement queuing is enabled. The default is calculated automatically.
parallel_min_servers - (default: 0) The minimum number of parallel slave processes that are kept running regardless of usage. Usually set to eliminate the overhead of creating and destroying parallel processes.
parallel_threads_per_cpu - (default: 2) Used in various parallel calculations to represent the number of concurrent processes that a CPU can support.

-- knobs
parallel_degree_policy - (default: MANUAL) Controls several parallel features including Automatic Degree of Parallelism (auto DOP), Parallel Statement Queuing and In-memory Parallel Execution
	MANUAL - disables everything
	LIMITED - only enables auto DOP, the PX queueing & in-memory PX remain disabled
	AUTO - enables everything
parallel_execution_message_size - (default: 16384) The size of parallel message buffers in bytes.
parallel_degree_level - New in 12c. The scaling factor for default DOP calculations: a value of 50 multiplies the calculated default DOP by 0.5, cutting it in half.

-- resource mgt
pga_aggregate_limit - New in 12c. Has nothing to do with parallel queries; it limits the total PGA memory usage of the instance.
parallel_force_local - (default: FALSE) Determines whether parallel query slaves will be forced to execute only on the node that initiated the query (TRUE), or whether they will be allowed to spread on to multiple nodes in a RAC cluster (FALSE).
parallel_instance_group - Used to restrict parallel slaves to certain set of instances in a RAC cluster.
parallel_io_cap_enabled - (default: FALSE) Used in conjunction with the DBMS_RESOURCE_MANAGER.CALIBRATE_IO function to limit default DOP calculations based on the I/O capabilities of the system.

-- deprecated / old way
parallel_automatic_tuning - (default: FALSE) Deprecated since 10g. This parameter enabled an automatic DOP calculation on objects for which a parallelism attribute is set.
parallel_min_percent - (default: 0) Old throttling mechanism. It represents the minimum percentage of parallel servers that are needed for a parallel statement to execute.

-- recommended to leave it as it is
parallel_adaptive_multi_user - (default: TRUE) Old mechanism of throttling parallel statements by downgrading: it automatically lowers the degree of parallelism for a given statement based on the workload at the time the query executes. In most cases this parameter should be set to FALSE on Exadata, for reasons we'll discuss later in the chapter. The bigger problem with the downgrade mechanism, though, is that the decision about how many slaves to use is based on a single point in time, the point when the parallel statement starts.
parallel_degree_limit - (default: CPU) This parameter sets an upper limit on the DOP that can be applied to a single statement. The default means that Oracle will calculate a value for this limit based on the system's characteristics.
parallel_min_time_threshold - (default: AUTO) The minimum estimated serial execution time that will trigger auto DOP. The default is AUTO, which translates to 10 seconds. When the PARALLEL_DEGREE_POLICY parameter is set to AUTO or LIMITED, any statement that is estimated to take longer than the threshold established by this parameter will be considered a candidate for auto DOP.
parallel_server - Has nothing to do with parallel queries. Set to true or false depending on whether the database is RAC enabled or not. This parameter was deprecated long ago and has been replaced by the CLUSTER_DATABASE parameter.
parallel_server_instances - Has nothing to do with parallel queries. It is set to the number of instances in a RAC cluster.

-- underscore params
_parallel_statement_queuing - (default: FALSE) related to auto DOP, if set to TRUE this enables PX queueing 
_parallel_cluster_cache_policy - (default: ADAPTIVE) related to auto DOP, if set to CACHE this enables the in-mem PX
_parallel_cluster_cache_pct - (default: 80) determines the percentage of the aggregate buffer cache size that is reserved for In-Memory PX. By default, if a segment is larger than 80% of the aggregate buffer cache, queries using that table will not qualify for In-Memory PX
_optimizer_ignore_hints - (default: FALSE) if set to TRUE will ignore hints
}}}
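
A quick way to eyeball the current settings (a sketch only; assumes SELECT access on v$parameter - note that underscore parameters only show up here if they have been explicitly set):
{{{
-- list the parallel-related parameters, their current values,
-- and whether each one is still at its default
select name, value, isdefault
  from v$parameter
 where name like 'parallel%'
 order by name;
}}}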


! configuration 

See this tiddler for details -> [[Auto DOP]]




also check out tiddlers here [[Parallel]]
Parallel Troubleshooting
http://www.oracledatabase12g.com/archives/checklist-for-performance-problems-with-parallel-execution.html
''XPLAN_ASH'' troubleshooting with ASH http://oracle-randolf.blogspot.com/2012/08/parallel-execution-analysis-using-ash.html

Parallel Processing With Standard Edition
http://antognini.ch/2010/09/parallel-processing-with-standard-edition/

Parallel_degree_limit hierarchy – CPU, IO, Auto or Integer
http://blogs.oracle.com/datawarehousing/2011/01/parallel_degree_limit_hierarch.html




Interval Partitioning and Parallel Query Limit Access Paths http://www.pythian.com/news/34543/interval-partitioning-and-parallel-query-limit-access-paths/ <-- parallel distribution of aggregation and analytic functions; gives a lot of food for thought on how the chosen parallel distribution can influence the performance of operations

''Understanding Parallel Execution - part1'' http://www.oracle.com/technetwork/articles/database-performance/geist-parallel-execution-1-1872400.html
''Understanding Parallel Execution - part2'' http://www.oracle.com/technetwork/articles/database-performance/geist-parallel-execution-2-1872405.html




Parallel Load
{{{
alter table <table_name> parallel;
alter session enable parallel dml;

insert /*+ APPEND */ into parallel_t1
select level, 'x'
from dual
connect by level <= 1000000
;
}}}
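
A couple of sanity checks for the load above (a sketch, not a full recipe): a direct-path insert locks the table until commit, so querying it from the same session before committing raises ORA-12838 - a quick way to confirm the APPEND/parallel DML path was actually taken.
{{{
-- before commit: ORA-12838 here confirms the insert went direct-path
select count(*) from parallel_t1;
commit;

-- per-session PX statistics: "DML Parallelized" should have incremented
select * from v$pq_sesstat where statistic = 'DML Parallelized';
}}}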


Also consider the following illustration.
{{{
  Both tables below have "nologging" set at table level.

  SQL> desc redo1
  Name                                      Null?    Type
  ----------------------------------------- -------- ----------
  X                                                  NUMBER
  Y                                                  NUMBER

  SQL> desc redotesttab
  Name                                      Null?    Type
  ----------------------------------------- -------- -------
  X                                                  NUMBER
  Y                                                  NUMBER

  begin
  for x in 1..10000 loop
  insert into scott.redotesttab values(x,x+1);
  -- or 
  -- insert /*+ APPEND */ into scott.redotesttab values(x,x+1);
  end loop;
  end;

  Note: This will generate redo even if you provide the hint, because a
        row-by-row VALUES insert like this is not a direct-load insert.

Now, consider the following bulk inserts, direct and simple.

  SQL> select name,value from v$sysstat where name like '%redo size%';

  NAME                                                             VALUE
  ----------------------------------------------------------- ----------
  redo size                                                     27556720

  SQL> insert into scott.redo1 select * from scott.redotesttab;
  50000 rows created.

  SQL> select name,value from v$sysstat where name like '%redo size%';

  NAME                                                             VALUE
  ----------------------------------------------------------- ----------
  redo size                                                     28536820

  SQL> insert /*+ APPEND */ into scott.redo1 select * from scott.redotesttab;
  50000 rows created.

  SQL> select name,value from v$sysstat where name like '%redo size%';

  NAME                                                             VALUE
  ----------------------------------------------------------- ----------
  redo size                                                     28539944

You will notice that the conventional insert generated 980100 bytes of redo
(28536820 - 27556720), while the direct-path insert generated only 3124
(28539944 - 28536820).
}}}
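Note that v$sysstat is instance-wide, so deltas measured there can be inflated by other sessions' redo. A sketch of the same measurement scoped to the current session only (assumes access to v$mystat and v$statname):
{{{
-- redo generated by the current session only
select sn.name, ms.value
  from v$mystat ms, v$statname sn
 where ms.statistic# = sn.statistic#
   and sn.name = 'redo size';
}}}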
Obsolete / Deprecated Initialization Parameters in 10G
  	Doc ID: 	Note:268581.1



-- COMPATIBLE

How To Change The COMPATIBLE Parameter And What Is The Significance?
  	Doc ID: 	733987.1


-- CHECK PARAMETER DEPENDENCIES, parameters affecting other parameters
<<<
http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CCUQFjAA&url=http%3A%2F%2Fyong321.freeshell.org%2Fcomputer%2FParameterDependencyAndStatistics.doc&ei=tzpbUJ3NJabY2gWHkYHoBw&usg=AFQjCNEWM-CRPvEED0uXs0pnpxWRltl4Bg
<<<
Master Note for Partitioning [ID 1312352.1]
http://blogs.oracle.com/db/entry/master_note_for_partitioning_id

Top Partition Performance Issues
  	Doc ID: 	Note:166215.1

How to Implement Partitioning in Oracle Versions 8 and 8i
  	Doc ID: 	Note:105317.1

How I Designed Table and Index Partitions Using Analytics
  	Doc ID: 	729847.1



-- PARTITION 

How to partition a non-partitioned table.
  	Doc ID: 	1070693.6

How to Backup Partition of Range Partitioned Table with Local Indexes
  	Doc ID: 	412264.1



http://blogs.sun.com/dlutz/entry/partition_alignment_guidelines_for_unified
A Comprehensive Guide to Oracle Partitioning with Samples
http://noriegaaoracleexpert.blogspot.com/2009/06/comprehensive-guide-to-oracle_16.html

SQL Access Advisor - Partitioning recommendation
http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/11g/r2/11gr2_sqlaccessadv/11gr2_sqlaccessadv_viewlet_swf.html

Compressing Subpartition Segments
http://husnusensoy.wordpress.com/2008/01/23/compressing-subpartition-segments/

From Doug, Randolf, Kerry
http://jonathanlewis.wordpress.com/2010/03/17/partition-stats/

More on Interval Partitioning
http://www.rittmanmead.com/2010/08/07/more-on-interval-partitioning/

non-partitioned to partitioned table
http://www.dbapool.com/articles/031003.html
http://arjudba.blogspot.com/2008/11/how-to-convert-non-partitioned-table-to.html
http://www.oracle-base.com/articles/8i/PartitionedTablesAndIndexes.php


! determine the potential benefit of using partitioning, and the overhead
<<<
Partitioning is first a way to facilitate administration, and only secondly a way to improve performance.
Unfortunately, the performance benefits are not always attainable; they depend on the data and the queries.

If queries select a range of data (not single-block reads), they will probably benefit from partitioning.
If queries select one row at a time (single-block reads), they will probably not benefit from partitioning (and may even get worse).

Each query against a partitioned segment requires a little extra overhead to determine which partition to access.
For hash partitions, the overhead is a mathematical MOD function that determines the partition.
For range and list partitions, a dictionary lookup is required to determine in which partition the data resides.
So the overhead is both logical reads and CPU.

When ranges of rows are selected, the overhead still applies, but normally only once for the requested range.
Index blevel is probably lower for partitioned indexes since there is less data in each partition.
And the hash/range/list partition determination overhead may not be noticeable for ranges (especially larger ranges).

Range partitioning is ideal when the partition key is a date, since most queries on large tables filter by date.
Aligning range partitions with normal data access patterns may turn queries into full partition scans, which can be really good.
So, knowing the data and the data access requirements is key to a successful partitioning effort.

nice writeup by Jack Augustin

<<<
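As a minimal illustration of the pruning behavior described above (all object names made up): with a range partition on a date column, a filter on the partition key lets the optimizer prune to a single partition, visible as PARTITION RANGE SINGLE in the execution plan.
{{{
create table sales_demo (
  sale_date date,
  amount    number
)
partition by range (sale_date) (
  partition p2023 values less than (date '2024-01-01'),
  partition p2024 values less than (date '2025-01-01')
);

-- the filter on the partition key means only p2024 is scanned
select sum(amount)
  from sales_demo
 where sale_date >= date '2024-01-01';
}}}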
https://agilebits.com/home/licenses
http://alternativeto.net/software/1password/
https://lastpass.com
http://keepass.info/features.html
http://keepass.info/download.html

http://www.vilepickle.com/blog/2011/04/19/00105-using-dropbox-and-keepass-synchronize-passwords-while-staying-secure
/***
|''Name:''|PasswordOptionPlugin|
|''Description:''|Extends TiddlyWiki options with non encrypted password option.|
|''Version:''|1.0.2|
|''Date:''|Apr 19, 2007|
|''Source:''|http://tiddlywiki.bidix.info/#PasswordOptionPlugin|
|''Author:''|BidiX (BidiX (at) bidix (dot) info)|
|''License:''|[[BSD open source license|http://tiddlywiki.bidix.info/#%5B%5BBSD%20open%20source%20license%5D%5D ]]|
|''~CoreVersion:''|2.2.0 (Beta 5)|
***/
//{{{
version.extensions.PasswordOptionPlugin = {
	major: 1, minor: 0, revision: 2, 
	date: new Date("Apr 19, 2007"),
	source: 'http://tiddlywiki.bidix.info/#PasswordOptionPlugin',
	author: 'BidiX (BidiX (at) bidix (dot) info',
	license: '[[BSD open source license|http://tiddlywiki.bidix.info/#%5B%5BBSD%20open%20source%20license%5D%5D]]',
	coreVersion: '2.2.0 (Beta 5)'
};

config.macros.option.passwordCheckboxLabel = "Save this password on this computer";
config.macros.option.passwordInputType = "password"; // password | text
setStylesheet(".pasOptionInput {width: 11em;}\n","passwordInputTypeStyle");

merge(config.macros.option.types, {
	'pas': {
		elementType: "input",
		valueField: "value",
		eventName: "onkeyup",
		className: "pasOptionInput",
		typeValue: config.macros.option.passwordInputType,
		create: function(place,type,opt,className,desc) {
			// password field
			config.macros.option.genericCreate(place,'pas',opt,className,desc);
			// checkbox linked with this password "save this password on this computer"
			config.macros.option.genericCreate(place,'chk','chk'+opt,className,desc);			
			// text savePasswordCheckboxLabel
			place.appendChild(document.createTextNode(config.macros.option.passwordCheckboxLabel));
		},
		onChange: config.macros.option.genericOnChange
	}
});

merge(config.optionHandlers['chk'], {
	get: function(name) {
		// is there an option linked with this chk ?
		var opt = name.substr(3);
		if (config.options[opt]) 
			saveOptionCookie(opt);
		return config.options[name] ? "true" : "false";
	}
});

merge(config.optionHandlers, {
	'pas': {
 		get: function(name) {
			if (config.options["chk"+name]) {
				return encodeCookie(config.options[name].toString());
			} else {
				return "";
			}
		},
		set: function(name,value) {config.options[name] = decodeCookie(value);}
	}
});

// need to reload options to load passwordOptions
loadOptionsCookie();

/*
if (!config.options['pasPassword'])
	config.options['pasPassword'] = '';

merge(config.optionsDesc,{
		pasPassword: "Test password"
	});
*/
//}}}
https://blogs.oracle.com/UPGRADE/entry/why_is_every_patchset_now

-- CERTIFICATION MATRIX	
 
Operating System, RDBMS & Additional Component Patches Required for Installation PeopleTools - Master List [ID 756571.1]  <-- go here

                PeopleTools Certifications - Suggested Fixes for PT 8.52 Note:1385944.1  <-- click on here
 
                                Oracle Server - Enterprise Edition (Doc ID 1100831.1)  <-- click on here

                                                Required Interim Patches for the Oracle Database with PeopleSoft [ID 1100831.1]  <-- click on here

PeopleSoft Enterprise PeopleTools Certification Table of Contents [ID 759851.1]




-- PERFORMANCE 

PeopleSoft Enterprise Performance on Oracle 10g Database (Doc ID 747254.1)
E-ORACLE:10g Master Performance Solution for Oracle 10g (Doc ID 656639.1)
EGP8.x: Performance issue while running Paycalc in GP with Oracle 9 and 10 as DB (Doc ID 652910.1)
EGP 8.x:Changing Global Payroll COBOL Process without changing delivered code (Doc ID 652805.1)
http://dbasrus.blogspot.com/2007/09/one-for-peoplesoft-folks.html

Performance issue with On Lines Pages and Batch Processes on Oracle 10G (Doc ID 651774.1)

Performance Issue at Tier Processing (Selection at Database Level) (Doc ID 755402.1)

Activity Batch Assignment Performance: Object Where Clause not filtering correct no records (Doc ID 518178.1)

Performance and Tuning: Oracle 10g R2 Real Application Cluster (RAC) with EnterpriseOne (Doc ID 748353.1)

Performance and Tuning UBE Performance and Tuning (Doc ID 748333.1)

Online Performance Configuration Guidelines for PeopleTools 8.45, 8.46, 8.47, 8.48 and 8.49 (Doc ID 747389.1)

Sizing System Hardware for JD Edwards EnterpriseOne (Doc ID 748339.1)

E- ORA: Is there any documentation on Oracle 10g RAC implemention in PeopleSoft? (Doc ID 663340.1)

E-INST: Does PeopleSoft support Oracle RAC (Real Application Clusters)? (Doc ID 620325.1)

E-ORA: Oracle RAC Clusterware support (Doc ID 663690.1)

How To Set Up Oracle RAC for Siebel Applications (Doc ID 473859.1)

What Are the Supported Oracle Real Application Clusters (RAC) Versions? (Doc ID 478215.1)

Oracle 10g RAC support for Analytics (Doc ID 482330.1)

PeopleTools Certification FAQs - Database Platforms - Oracle (Doc ID 756280.1)

Siebel Recommendation on table logging (Doc ID 730133.1)

What does Siebel recommend for the Oracle parameter "compatible" on 10g database (Doc ID 551979.1)

Oracle cluster (Doc ID 522337.1)

Support Status for Oracle Business Intelligence on VMware Virtualized Environments (Doc ID 475484.1)

E-PIA: Red Paper on Implementing Clustering and High Availability for PeopleSoft (Doc ID 612096.1)


747378.1 Clustering and High Availability for Enterprise Tools 8.4x (Doc ID 747378.1)

747962.1 PeopleSoft EPM Red Paper: PeopleSoft Enterprise Initial Consolidations —04/2007 (Doc ID 747962.1)



Is there a way to automatically kill long running SQL statements (Oracle DB only) at the database after a pre-determined maximum waiting time ? (Doc ID 753941.1)

E-CERT Red Hat Linux 4.0 64 bit certification (Doc ID 656686.1)

PeopleSoft Enterprise PeopleTools Certifications (Doc ID 747587.1)

PeopleSoft Performance on Oracle 10.2.0.2 http://www.freelists.org/post/oracle-l/PeopleSoft-Performance-on-Oracle-10202





-- Hidden Parameters

_disable_function_based_index
http://www.orafaq.com/parms/parm467.htm


-- SECURITY

747524.1 Securing Your PeopleSoft Application Environment (Doc ID 747524.1)



-- PAYROLL 

EPY: Performance issue with work table PS_WRK_SEQ_CHECK (Doc ID 646824.1)



http://dbasrus.blogspot.com/2007/09/more-on-peoplesoft.html
http://dbasrus.blogspot.com/2007/09/one-for-peoplesoft-folks.html


EPY: Performance issue on Pay confirm process PSPEBUPD_S_BENF_NO (Doc ID 660649.1)

EPY: COBOL Performance Issues: Paycalc or other COBOL jobs take too long to run (Doc ID 607905.1)

E-ORACLE:10g Master Performance Solution for Oracle 10g (Doc ID 656639.1)

EPY - Bonus payroll performance slow due to FLSA processing (Doc ID 634806.1)

EPY 8.x:Performance issues on Paycalc/Dedcalc in release 8 SP1 and above (Doc ID 611138.1)

ETL8.8/GP8.8: Poor Performance GP Payroll Process (GPPDPRUN) modified TL Data (Doc ID 661283.1)

EGP: Performance issues with "UNKNOWN" sql statements in timing trace. (Doc ID 657792.1)

PeopleSoft Global Payroll Off-Cycle Payment Processing (Doc ID 704478.1)

EGP8.X: Global Payroll runs to 'Success' but does not process any data (Doc ID 637945.1)

EGP8.x: What are the tables to partition for Global Payroll Stream Processing ? (Doc ID 619386.1)

EGP 8.x:Changing Global Payroll COBOL Process without changing delivered code (Doc ID 652805.1)

EGP 8.9: Running payslip Generation Process using SFTP- Global Payroll (Doc ID 652909.1)

EGP8.x: Global Payroll Process fails on AIX with 105 Memory allocation error. (Doc ID 656695.1)

ETL9.0: AM/TL9.0: AM absence is doubling quantity when processing time admin. (Doc ID 664004.1)

EGP8.x : How to recognize when the Global Payroll is ending in error ? (Doc ID 636120.1)

EGP8.x: Performance issue while running Paycalc in GP with Oracle 9 and 10 as DB (Doc ID 652910.1)

EGP8.9/9.0: Is it possible to enable Commitment Reporting on Global Payroll? (Doc ID 662078.1)

EGP8.x: Global Payroll PayGroup sizing recommendation (Doc ID 639164.1)

PeopleSoft Global Payroll COBOL Array Information (Doc ID 701403.1)

EGP8.3SP1 How Far does Retro go back in history? (Doc ID 618944.1)

EGP: Deadlock when using streams and partitions (Doc ID 642914.1)

E1: 07: Pre-payroll Troubleshooting (Doc ID 625863.1)



-- TRIGGER PERFORMANCE ISSUES
Performance/Deadlock Issues Caused By SYNCID Database Triggers [ID 1059120.1]
E-WF: Database Locking Issue on PSSYSTEMID Table, Because of SYNCID Field in PSWORKLILST [ID 619750.1]
How Is SYNCID On The PS_PROJECT Record Maintained? [ID 1303668.1]
ECRM: Information about the SYNCID field and what is it used for. [ID 614739.1]
PeopleSoft Enterprise DFW Plug-In - SYNCID Database Trigger Diagnostic Check [ID 1074332.1]
TX Transaction and Enq: Tx - Row Lock Contention - Example wait scenarios [ID 62354.1]



-- PERFORMANCE INDEXES
E-AWE: Approval Framework Indexes for 9.1 Applications [ID 1289904.1]
E-AWE: Recommended Indexes for Application Cross Reference (XREF) Tables to Improve Performance of Approval Workflow Engine (AWE) [ID 1328945.1]














http://www.go-faster.co.uk/gp.stored_outlines.pdf
http://blog.psftdba.com/2010/03/oracle-plan-stability-stored-outlines.html
Exadata MAA best practices series
video: http://www.oracle.com/webfolder/technetwork/Exadata/MAA-BestP/Peoplesoft/021511_93782_source/index.htm
slides: http://www.oracle.com/webfolder/technetwork/Exadata/MAA-BestP/Peoplesoft/Peoplesoft.pdf

S317423: Deploying PeopleSoft Enterprise Applications on Exadata Tips, Techniques and Best Practices http://www.oracle.com/us/products/database/s317423-176382.pdf

Oracle PeopleSoft on Oracle Exadata Database Machine feb 2011 http://www.oracle.com/au/products/database/maa-wp-peoplesoft-on-exadata-321604.pdf

! and a bunch of other references when you google "peoplesoft on exadata"
2011 http://www.oracle.com/au/products/database/maa-wp-peoplesoft-on-exadata-321604.pdf
http://www.oracle.com/webfolder/technetwork/Exadata/MAA-BestP/Peoplesoft/Peoplesoft.pdf
2013 http://www.oracle.com/us/products/applications/peoplesoft-enterprise/psft-oracle-engineered-sys-1931256.pdf
best practices http://www.oracle.com/us/products/database/s317423-176382.pdf
2014 http://www.oracle.com/technetwork/database/availability/peoplesoft-maa-2044588.pdf
2013 http://www.oracle.com/us/products/applications/peoplesoft-enterprise/psft-payroll-engineered-sys-1931259.pdf  <-- good stuff








http://hakanbiroglu.blogspot.com/2013/04/installing-peoplesoft-92-pre-build.html#.XF41O2RKjOQ
http://hakanbiroglu.blogspot.com/2013/04/extending-peoplesoft-92-virtual-machine.html#.XF7klmRKjOQ

https://mani2web.wordpress.com/2016/02/17/installing-peopletools-8-55-peoplesoft-hcm-image-16-on-virtualbox-using-dpks-part-2/


"peoplesoft virtualbox vm download"
https://www.youtube.com/watch?v=AXNcL7ZKRVw  <-- good stuff , this is the patch used -16660429


PeopleSoft Update Manager (PUM) Home Page (Doc ID 1641843.2)

https://docs.oracle.com/cd/E91282_01/psft/pdf/Using_the_PeopleSoft_VirtualBox_Images_PeopleSoft_PeopleTools_8.54_Dec2015.pdf







<<<
The attached is a series of SQL I have used in the past to report on nVision activity.   I have used them on PeopleSoft HR and Finance versions 7.x, 8,x, and 9.1 but for 9.x only on tools up to 8.49.   I haven't done hands-on tuning for a few years.    But I can't imagine the process scheduler stuff has changed all that much in regards to nVision.     At the very least, if there are changes the enclosed SQL should make it more easy to adapt to anything new.    The summary SQLs towards the end are pretty good ones to use on a regular basis to monitor overall reporting performance for nVision.   The ones in the middle are handy for looking at what is running now from long execution time and long queue time.   The ones in the beginning are good for finding detailed data for a given time period (i.e.: look at stats for every report run rather than a summary).    Hope you find them useful…often getting lists from SQL is faster than logging into PeopleSoft and looking up stuff in Process Monitor.   I already shot it to Rajiv and Rajesh.
<<<

{{{

set pagesize 50000
set linesize 200
col oprid format a10
col submitted format a11
col report_id format a10
col layout_id format a40
col report_scope format a10
col StartTime format a11
col EndTime format a11
col Status format a10
col QueueTime format 9999999999
col Duration format 9999999999
col servernamerun format a5

SELECT TO_CHAR(r.rundttm,'MM-DD HH24:MI') Submitted,
       n.report_id,
       n.layout_id,
       n.report_scope,
       TO_CHAR(r.begindttm,'MM-DD HH24:MI') StartTime,
       TO_CHAR(r.enddttm,'MM-DD HH24:MI') EndTime,
       ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440) QueueTime,
       ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440) Duration
  FROM ps_nvs_report n,
       psprcsrqst r,
       psprcsparms p
 WHERE r.prcsinstance = p.prcsinstance
   AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
   AND p.origparmlist like '%-NRN%'
/
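
-- Note: the SUBSTR/INSTR expression above digs the nVision report id out of
-- origparmlist, which embeds it between the -NRN flag and the following -NBU
-- flag (the -5 accounts for the 4 characters of '-NRN' plus the space before
-- '-NBU'). A made-up example of the same parse:
--
--   SELECT SUBSTR(s, INSTR(s,'-NRN',1,1)+4,
--                 INSTR(s,'-NBU',1,1)-INSTR(s,'-NRN',1,1)-5) report_id
--     FROM (SELECT '-NRNGL_RPT01 -NBUUS001' s FROM dual);   -- GL_RPT01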







SELECT TO_CHAR(r.rundttm,'MM-DD HH24:MI')||','||
       r.oprid||','||
       n.report_id||','||
       n.layout_id||','||
       n.report_scope||','||
       TO_CHAR(r.begindttm,'MM-DD HH24:MI')||','||
       TO_CHAR(r.enddttm,'MM-DD HH24:MI')||','||
       ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440)||','||
       ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440)||','||
       DECODE(r.runstatus,'9','Success','7','Processing','8','Cancelled','3','Error','5','Queued',runstatus)||','
  FROM ps_nvs_report n,
       psprcsrqst r,
       psprcsparms p
 WHERE r.prcsinstance = p.prcsinstance
   AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
   AND p.origparmlist like '%-NRN%'
/






SELECT TO_CHAR(r.rundttm,'MM-DD HH24:MI') Submitted,
       r.oprid,
       n.report_id,
       n.layout_id,
       n.report_scope,
       TO_CHAR(r.begindttm,'MM-DD HH24:MI') StartTime,
       TO_CHAR(r.enddttm,'MM-DD HH24:MI') EndTime,
       DECODE(r.runstatus,'9','Success','7','Processing','8','Cancelled','3','Error','5','Queued',runstatus) Status,
       trunc((86400*(r.begindttm-r.rundttm))/60)-60*(trunc(((86400*(r.begindttm-r.rundttm))/60)/60)) QueueTime,
       trunc((86400*(r.enddttm-r.begindttm))/60)-60*(trunc(((86400*(r.enddttm-r.begindttm))/60)/60)) Duration
  FROM ps_nvs_report n,
       psprcsrqst r,
       psprcsparms p
 WHERE r.prcsinstance = p.prcsinstance
   AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
   AND p.origparmlist like '%-NRN%'
/




--- SQL to pull jobs executing longer than 30 minutes
SELECT ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440) QueueTime,
       ROUND(TO_NUMBER(SYSDATE-r.begindttm)*1440) Duration,
       r.prcsinstance,
       r.oprid,
       n.report_id,
       n.layout_id,
       n.report_scope,
       TO_CHAR(r.begindttm,'MM-DD HH24:MI') StartTime,
       TO_CHAR(r.enddttm,'MM-DD HH24:MI') EndTime
  FROM ps_nvs_report n,
       psprcsrqst r,
       psprcsparms p
 WHERE r.begindttm IS NOT NULL
   AND r.enddttm IS NULL
   AND r.runstatus IN ('6','7')
   AND ROUND(TO_NUMBER(SYSDATE-r.begindttm)*1440) >= 30
   AND r.prcsinstance = p.prcsinstance
   AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
   AND p.origparmlist like '%-NRN%'
ORDER BY Duration desc
/



--- CSV reports w/ processinstance during stress test
SELECT TO_CHAR(r.rundttm,'MM-DD HH24:MI')||','||
       r.prcsinstance||','||
       r.oprid||','||
       n.report_id||','||
       n.layout_id||','||
       n.report_scope||','||
       TO_CHAR(r.begindttm,'MM-DD HH24:MI')||','||
       TO_CHAR(r.enddttm,'MM-DD HH24:MI')||','||
       ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440)||','||
       ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440)||','||
       DECODE(r.runstatus,'9','Success','7','Processing','8','Cancelled','3','Error','5','Queued',runstatus)||','
  FROM ps_nvs_report n,
       psprcsrqst r,
       psprcsparms p
 WHERE r.rundttm>=TO_DATE('19-DEC-2006 09:00','DD-MON-YYYY HH24:MI')
   AND r.rundttm <=TO_DATE('19-DEC-2006 12:00','DD-MON-YYYY HH24:MI')
   AND r.prcsinstance = p.prcsinstance
   AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
   AND p.origparmlist like '%-NRN%'
/



--- reports w/ prcsinstance  during stress test

SELECT ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440) Duration,
       ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440) QueueTime,
       r.prcsinstance,
       r.oprid,
       n.report_id,
       n.layout_id,
       n.report_scope,
       DECODE(r.runstatus,'9','Success','7','Processing','8','Cancelled','3','Error','5','Queued',runstatus) Status,
       TO_CHAR(r.rundttm,'MM-DD HH24:MI') submitted,
       TO_CHAR(r.begindttm,'MM-DD HH24:MI') StartTime,
       TO_CHAR(r.enddttm,'MM-DD HH24:MI') EndTime
  FROM ps_nvs_report n,
       psprcsrqst r,
       psprcsparms p
 WHERE r.rundttm>=TO_DATE('19-DEC-2006 09:00','DD-MON-YYYY HH24:MI')
   AND r.rundttm <=TO_DATE('19-DEC-2006 12:00','DD-MON-YYYY HH24:MI')
   AND r.prcsinstance = p.prcsinstance
   AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
   AND p.origparmlist like '%-NRN%'
 ORDER BY Duration desc
/


--- reports w/ prcsinstance  for today
SELECT ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440) Duration,
       ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440) QueueTime,
       r.prcsinstance,
       r.oprid,
       n.report_id,
       n.layout_id,
       n.report_scope,
       TO_CHAR(r.rundttm,'MM-DD HH24:MI') submitted,
       TO_CHAR(r.begindttm,'MM-DD HH24:MI') StartTime,
       TO_CHAR(r.enddttm,'MM-DD HH24:MI') EndTime
  FROM ps_nvs_report n,
       psprcsrqst r,
       psprcsparms p
 WHERE r.rundttm>=TRUNC(SYSDATE)
   AND r.prcsinstance = p.prcsinstance
   AND r.runstatus = '9'
   AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
   AND p.origparmlist like '%-NRN%'
 ORDER BY Duration 
/


--- CSV w/ prcsinstance for today
SELECT ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440)||','||
       ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440)||','||
       r.prcsinstance||','||
       r.oprid||','||
       n.report_id||','||
       n.layout_id||','||
       n.report_scope||','||
       TO_CHAR(r.rundttm,'MM-DD HH24:MI')||','||
       TO_CHAR(r.begindttm,'MM-DD HH24:MI')||','||
       TO_CHAR(r.enddttm,'MM-DD HH24:MI')||','
  FROM ps_nvs_report n,
       psprcsrqst r,
       psprcsparms p
 WHERE r.rundttm>=TRUNC(SYSDATE)
   AND r.prcsinstance = p.prcsinstance
   AND r.runstatus = '9'
   AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
   AND p.origparmlist like '%-NRN%'
/

-- things currently processing, longest at top
SELECT ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440) QueueTime,
       ROUND(TO_NUMBER(SYSDATE-r.begindttm)*1440) Duration,
       r.prcsinstance,
       r.oprid,
       n.report_id,
       n.layout_id,
       n.report_scope,
       TO_CHAR(r.begindttm,'MM-DD HH24:MI') StartTime,
       TO_CHAR(r.enddttm,'MM-DD HH24:MI') EndTime
  FROM ps_nvs_report n,
       psprcsrqst r,
       psprcsparms p
 WHERE r.begindttm IS NOT NULL
   AND r.enddttm IS NULL
   AND r.prcsinstance = p.prcsinstance
   AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
   AND p.origparmlist like '%-NRN%'
ORDER BY Duration desc
/


--- things in queue, longest at bottom
SELECT ROUND(TO_NUMBER(SYSDATE-r.rundttm)*1440) QueueTime,
       r.prcsinstance,
       r.oprid,
       n.report_id,
       n.layout_id,
       n.report_scope,
       TO_CHAR(r.begindttm,'MM-DD HH24:MI') StartTime,
       TO_CHAR(r.enddttm,'MM-DD HH24:MI') EndTime
  FROM ps_nvs_report n,
       psprcsrqst r,
       psprcsparms p
 WHERE r.begindttm IS NULL
   AND r.prcsinstance = p.prcsinstance
   AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
   AND p.origparmlist like '%-NRN%'
ORDER BY QueueTime
/


--- SUMMARY SQLS ------------------------------------------------------------
--- counts by layout, duration
SELECT n.layout_id,
       ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440) Duration,
       COUNT(*)
  FROM ps_nvs_report n,
       psprcsrqst r,
       psprcsparms p
 WHERE r.rundttm>=TRUNC(SYSDATE)
   AND r.prcsinstance = p.prcsinstance
   AND r.runstatus = '9'
   AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
   AND p.origparmlist like '%-NRN%'
 GROUP BY n.layout_id, ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440)
 ORDER BY Duration 
/

--- counts by duration
SELECT ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440) Duration,
       COUNT(*)
  FROM ps_nvs_report n,
       psprcsrqst r,
       psprcsparms p
 WHERE r.rundttm>=TRUNC(SYSDATE)
   AND r.prcsinstance = p.prcsinstance
   AND r.runstatus = '9'
   AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
   AND p.origparmlist like '%-NRN%'
 GROUP BY ROUND(TO_NUMBER(r.enddttm-r.begindttm)*1440)
 ORDER BY Duration 
/

--- counts by OPRID
SELECT r.oprid,
       COUNT(*)
  FROM ps_nvs_report n,
       psprcsrqst r,
       psprcsparms p
 WHERE r.rundttm>=TRUNC(SYSDATE)
   AND r.prcsinstance = p.prcsinstance
   AND r.runstatus = '9'
   AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
   AND p.origparmlist like '%-NRN%'
 GROUP BY r.oprid
 ORDER BY count(*) 
/

--- counts by queue time
SELECT ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440) QueueTime,
       COUNT(*)
  FROM ps_nvs_report n,
       psprcsrqst r,
       psprcsparms p
 WHERE r.rundttm>=TRUNC(SYSDATE)
   AND r.prcsinstance = p.prcsinstance
   AND r.runstatus = '9'
   AND SUBSTR(p.origparmlist,(INSTR(p.origparmlist,'-NRN',1,1)+4),(INSTR(p.origparmlist,'-NBU',1,1)-INSTR(p.origparmlist,'-NRN',1,1)-5)) = n.report_id
   AND p.origparmlist like '%-NRN%'
 GROUP BY ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440)
 ORDER BY ROUND(TO_NUMBER(r.begindttm-r.rundttm)*1440) 
/








ROUND(TO_NUMBER(end_date - start_date)*1440) = elapsed minutes


SQL> desc ps_nvs_report
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 BUSINESS_UNIT                             NOT NULL VARCHAR2(5)
 REPORT_ID                                 NOT NULL VARCHAR2(8)
 LAYOUT_ID                                 NOT NULL VARCHAR2(50)
 REPORT_SCOPE                              NOT NULL VARCHAR2(10)
 NVS_DIR_TEMPLATE                          NOT NULL VARCHAR2(254)
 NVS_DOC_TEMPLATE                          NOT NULL VARCHAR2(254)
 NVS_LANG_TEMPLATE                         NOT NULL VARCHAR2(50)
 NVS_EMAIL_TEMPLATE                        NOT NULL VARCHAR2(254)
 NVS_DESCR_TEMPLATE                        NOT NULL VARCHAR2(254)
 NVS_AUTH_TEMPLATE                         NOT NULL VARCHAR2(254)
 OUTDESTTYPE                               NOT NULL VARCHAR2(3)
 OUTDESTFORMAT                             NOT NULL VARCHAR2(3)
 REQ_BU_ONLY                               NOT NULL VARCHAR2(1)
 NPLODE_DETAILS                            NOT NULL VARCHAR2(1)
 TRANSLATE_LEDGERS                         NOT NULL VARCHAR2(1)
 DESCR                                     NOT NULL VARCHAR2(30)
 EFFDT_OPTN                                NOT NULL VARCHAR2(1)
 TREE_EFFDT                                         DATE
 AS_OF_DT_OPTION                           NOT NULL VARCHAR2(1)
 AS_OF_DATE                                         DATE


SQL> desc psprcsrqst
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PRCSINSTANCE                              NOT NULL NUMBER(38)
 JOBINSTANCE                               NOT NULL NUMBER(38)
 PRCSJOBSEQ                                NOT NULL NUMBER(38)
 PRCSJOBNAME                               NOT NULL VARCHAR2(12)
 PRCSTYPE                                  NOT NULL VARCHAR2(30)
 PRCSNAME                                  NOT NULL VARCHAR2(12)
 RUNLOCATION                               NOT NULL VARCHAR2(1)
 OPSYS                                     NOT NULL VARCHAR2(1)
 DBTYPE                                    NOT NULL VARCHAR2(1)
 DBNAME                                    NOT NULL VARCHAR2(8)
 SERVERNAMERQST                            NOT NULL VARCHAR2(8)
 SERVERNAMERUN                             NOT NULL VARCHAR2(8)
 RUNDTTM                                            DATE
 RECURNAME                                 NOT NULL VARCHAR2(30)
 OPRID                                     NOT NULL VARCHAR2(30)
 PRCSVERSION                               NOT NULL NUMBER(38)
 RUNSTATUS                                 NOT NULL VARCHAR2(2)
 RQSTDTTM                                           DATE
 LASTUPDDTTM                                        DATE
 BEGINDTTM                                          DATE
 ENDDTTM                                            DATE
 RUNCNTLID                                 NOT NULL VARCHAR2(30)
 PRCSRTNCD                                 NOT NULL NUMBER(38)
 CONTINUEJOB                               NOT NULL NUMBER(38)
 USERNOTIFIED                              NOT NULL NUMBER(38)
 INITIATEDNEXT                             NOT NULL NUMBER(38)
 OUTDESTTYPE                               NOT NULL VARCHAR2(3)
 OUTDESTFORMAT                             NOT NULL VARCHAR2(3)
 ORIGPRCSINSTANCE                          NOT NULL NUMBER(38)
 GENPRCSTYPE                               NOT NULL VARCHAR2(1)
 RESTARTENABLED                            NOT NULL VARCHAR2(1)
 TIMEZONE                                  NOT NULL VARCHAR2(9)


SQL> desc psprcsparms
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PRCSINSTANCE                              NOT NULL NUMBER(38)
 CMDLINE                                   NOT NULL VARCHAR2(127)
 PARMLIST                                  NOT NULL VARCHAR2(254)
 WORKINGDIR                                NOT NULL VARCHAR2(127)
 OUTDEST                                   NOT NULL VARCHAR2(127)
 ORIGPARMLIST                              NOT NULL VARCHAR2(254)
 ORIGOUTDEST                               NOT NULL VARCHAR2(127)
 PRCSOUTPUTDIR                             NOT NULL VARCHAR2(254)
}}}
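The elapsed-minutes conversion used throughout the queries above (Oracle DATE subtraction yields days; times 1440 gives minutes) can be sanity-checked outside SQL. A minimal Python sketch with hypothetical timestamps:

```python
from datetime import datetime

# Hypothetical begindttm/enddttm for one process request
begin = datetime(2006, 12, 19, 9, 0)
end = datetime(2006, 12, 19, 10, 30)

# Mirrors ROUND(TO_NUMBER(end_date - start_date)*1440):
# date difference in days, times 1440 minutes/day
days = (end - begin).total_seconds() / 86400
duration_min = round(days * 1440)
print(duration_min)  # → 90
```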
http://blog.orapub.com/20140110/Creating-A-Tool-Detailing-Oracle-Database-Process-CPU-Consumption.html

google for "fulltime.sh" script

{{{
#!/bin/sh
#set -x
# Note: This script calls load_sess_stats.sql twice for data collection.
#
# Set the key variables
#
# use this for virtualised hosts:
PERF_SAMPLE_METHOD='-e cpu-clock'
# use this for physical hosts:
#PERF_SAMPLE_METHOD='-e cycles' 

refresh_time=5
uid=system
pwd=oracle
workdir=$PWD
perf_file=perf_report.txt
# perf for non-root
if [ $(cat /proc/sys/kernel/perf_event_paranoid) != 0 ]; then
	echo "Error: set perf_event_paranoid to 0 to allow non-root perf usage"
	echo "As root: echo 0 > /proc/sys/kernel/perf_event_paranoid"
	exit 1
fi
# perf sample method
echo "The perf sample method is set to: $PERF_SAMPLE_METHOD"
echo "Use cpu-clock for virtualised hosts, cycles for physical hosts"
# ctrl_c routine
ctrl_c() {
sqlplus -S / as sysdba <<EOF0 >& /dev/null
drop table op_perf_report;
drop table op_timing;
drop directory ext_dir;
EOF0
echo "End."
exit
}
trap ctrl_c SIGINT
#
sqlplus -S / as sysdba <<EOF1
set termout off echo off feed off
select /*perf profile*/
	    substr(a.spid,1,9) pid,
	    substr(b.sid,1,5) sid,
	    substr(b.serial#,1,5) serial#,
	    substr(b.machine,1,20) machine,
	    substr(b.username,1,10) username,
	    b.server,
	    substr(b.osuser,1,15) osuser,
	    substr(b.program,1,30) program
from v\$session b, v\$process a, v\$mystat c
where
b.paddr = a.addr
and b.sid != c.sid
and c.statistic# = 0
and type='USER'
order by spid
/
EOF1
read -p "Enter PID to profile: " ospid
echo "Sampling..."
# Setup, to be done once.
#
# As Oracle user
#

# Everything in this entire script is expected to be run
# from the below directory.
#
if ! ps -p $ospid >/dev/null; then ctrl_c; fi
sqlplus / as sysdba <<EOF2 >& /dev/null
set echo on feedback on verify on
create or replace directory ext_dir as '$workdir';
drop table op_perf_report;
create table op_perf_report (
  overhead      number,
  command       varchar2(100),
  shared_obj    varchar2(100),
  symbol        varchar2(100)
)
organization external (
  type              oracle_loader
  default directory ext_dir
  access parameters (
    records delimited  by newline
    nobadfile nodiscardfile nologfile
    fields  terminated by ','
    OPTIONALLY ENCLOSED BY '\\"' LDRTRIM
    missing field values are null
  )
  location ('$perf_file')
)
reject limit unlimited
/
  drop table op_timing;
  create table op_timing (
    time_seq number,
    item     varchar2(100),
    time_s   number
  );  
EOF2

while [ $refresh_time -gt 0 ]; do

if ! ps -p $ospid >/dev/null; then ctrl_c; fi
sqlplus / as sysdba <<EOF3 >& /dev/null
def ospid=$ospid
def timeseq=0

declare
  sid_var number;
  tot_cpu_s_var number;
  curr_wait number;
  curr_event varchar2(100);
begin

  select s.sid
  into   sid_var
  from   v\$process p,
         v\$session s
  where  p.addr = s.paddr
    and  p.spid = &ospid;

  
  select sum(value/1000000)
  into   tot_cpu_s_var
  from   v\$sess_time_model
  where  stat_name in ('DB CPU','background cpu time')
    and  sid = sid_var;

  insert into op_timing values (&timeseq , 'Oracle CPU sec' , tot_cpu_s_var );

  insert into op_timing
    select &timeseq, event, time_waited_micro/1000000
    from   v\$session_event
    where  sid = sid_var;

  select wait_time_micro/1000000, event into curr_wait, curr_event from v\$session_wait where sid=sid_var;
  insert into op_timing values ( 2, curr_event, curr_wait);
end;
/
--  select * from op_timing;
EOF3
if ! ps -p $ospid >/dev/null; then ctrl_c; fi
perf record -f $PERF_SAMPLE_METHOD -p $ospid >& /dev/null &
perf record -f $PERF_SAMPLE_METHOD -g -o callgraph.pdata -p $ospid >& /dev/null &
sleep $refresh_time
kill -INT %2 %1
clear 
if ! ps -p $ospid >/dev/null; then ctrl_c; fi
sqlplus / as sysdba <<EOF4 >& /dev/null
def ospid=$ospid
def timeseq=1

declare
  sid_var number;
  tot_cpu_s_var number;
  diff number;
  curr_wait number; 
  curr_event varchar2(100);
  
begin

  select s.sid
  into   sid_var
  from   v\$process p,
         v\$session s
  where  p.addr = s.paddr
    and  p.spid = &ospid;

  
  select sum(value/1000000)
  into   tot_cpu_s_var
  from   v\$sess_time_model
  where  stat_name in ('DB CPU','background cpu time')
    and  sid = sid_var;

  insert into op_timing values (&timeseq , 'Oracle CPU sec' , tot_cpu_s_var );

  insert into op_timing
    select &timeseq, event, time_waited_micro/1000000
    from   v\$session_event
    where  sid = sid_var;

  select count(*)
    into diff
    from op_timing a, op_timing b
    where a.time_seq=0 and b.time_seq=1 and a.item=b.item and a.time_s<>b.time_s;

  if diff = 0 then
    select a.wait_time_micro/1000000-b.time_s, a.event into curr_wait, curr_event from v\$session_wait a, op_timing b
    where a.sid=sid_var and b.time_seq=2;
    update op_timing set time_s = time_s + curr_wait where time_seq = 1 and item = curr_event;
  end if;

end;
/
EOF4

#perf report -t, 2> /dev/null | grep $ospid | grep -v [g]rep > $perf_file 
perf report -t, > $perf_file 2>/dev/null

if ! ps -p $ospid >/dev/null; then ctrl_c; fi
sqlplus -S / as sysdba <<EOF5
set termout off echo off feed off
variable tot_cpu_s_var number;
variable tot_wait_s_var number;
begin
  select end.time_s-begin.time_s
  into   :tot_cpu_s_var
  from   op_timing end,
         op_timing begin
  where  end.time_seq   = 1
    and  begin.time_seq = 0
    and  end.item = begin.item
    and  end.item = 'Oracle CPU sec';
  select sum(end.time_s-begin.time_s)
  into   :tot_wait_s_var
  from   op_timing end,
         op_timing begin
  where  end.time_seq   = 1
    and  begin.time_seq = 0
    and  end.item = begin.item
    and  end.item != 'Oracle CPU sec';
end;
/
set echo off heading off
select 'PID: '||p.spid||' SID: '||s.sid||' SERIAL: '||s.serial#||' USERNAME: '||s.username,
'CURRENT SQL: '||substr(q.sql_text,1,70)
from v\$session s, v\$process p, v\$sql q
where s.paddr=p.addr
and s.sql_id=q.sql_id (+)
and s.sql_child_number = q.child_number (+)
and p.spid=$ospid
/
set heading on
set serveroutput on
col raw_time_s format 99990.000  heading 'Time|secs'
col item       format a60        heading 'Time Component'
col perc       format 999.00     heading '%'
select 
       'cpu : '||rpt.symbol item,
       (rpt.overhead/100)*:tot_cpu_s_var raw_time_s,
       ((rpt.overhead/100)*:tot_cpu_s_var)/(:tot_wait_s_var+:tot_cpu_s_var)*100 perc
from   op_perf_report rpt
where  rpt.overhead > 2.0
union
select 
       'cpu : [?] sum of funcs consuming less than 2% of CPU time' item,
       sum((rpt.overhead/100)*:tot_cpu_s_var) raw_time_s,
       sum((rpt.overhead/100)*:tot_cpu_s_var)/(:tot_wait_s_var+:tot_cpu_s_var)*100 perc
from   op_perf_report rpt
where  rpt.overhead <= 2.0
group by 1,3
union
select 'wait: '||end.item, 
       end.time_s-begin.time_s raw_time_s,
       (end.time_s-begin.time_s)/(:tot_wait_s_var+:tot_cpu_s_var)*100 perc
from   op_timing end,
       op_timing begin
where  end.time_seq   = 1
  and  begin.time_seq = 0
  and  end.item = begin.item
  and  end.time_s-begin.time_s > 0
  and  end.item != 'Oracle CPU sec'
order by raw_time_s desc
/
set serverout off feed off echo off
truncate table op_timing;
EOF5
done
#perf report -g -i callgraph.pdata > callgraph.txt 2>/dev/null
#echo "The Call Graph file is callgraph.txt"

}}}
http://www.solarisinternals.com/wiki/index.php/Performance_Antipatterns
! 1) From awr_genwl.sql
''AWR CPU and IO Workload Report''

__''Tables used are:''__
- dba_hist_snapshot
- dba_hist_osstat
- dba_hist_sys_time_model
- dba_hist_sysstat

__''Comparison of methods''__
comparison-LAG_WITH_comparison.txt https://www.dropbox.com/s/z33yjepi71ja3jw/comparison-LAG_WITH_comparison.txt
comparison-s0.snap_id,absolutevalue-explanation.sql https://www.dropbox.com/s/jhz3b5f0z4fs1kv/comparison-s0.snap_id%2Cabsolutevalue-explanation.sql

__''Enhancements that could be done:''__
-- I could also make use of Note 422414.1, which uses the following tables:
dba_hist_sysmetric_summary <-- the network bytes stat is interesting (Network Traffic Volume Per Sec = Network_bytes_per_sec)... Update: it is possible to add this to awr_genwl.sql; the thing is, metrics are different from sysstat values. In sysstat you just get the delta and the rate; in the metric views the sampling is different. Say the snap duration is 10 mins = (intsize/100)/60: the metric samples on a 60-sec interval (num_interval) and records the max, min, avg, and std_dev of those samples. Keep that in mind when using these values.
-- DBA_HIST_SERVICE_STAT
-- For memory usage, I'll put in the sysstat metric "session pga memory"; that way I'll have a rough estimate of the memory requirements for the sessions
-- Then for network usage, I'll put in "bytes sent via SQL*Net to client" and "bytes sent via SQL*Net to dblink", each in a separate column. This way I'll know the network requirements (transfer rate) of specific workloads, which will be useful for determining the right network capacity (on the hardware and on the wire - bandwidth). It could also be useful in a WAN setup, but I still have to do some tests.

!! CPU Capacity
<<<
!!!"Snap|ID"
{{{
s0.snap_id id,
}}}
- This is the beginning snap value from dba_hist_snapshot; it is your marker when you want to drill down into that particular period by creating an AWR report using awrrpt.sql

The objective of the tool/script: the "start and end SNAP_ID" you feed in when running @?/rdbms/admin/awrrpt.sql
should be the same "start and end SNAP_ID" you see here in a time series manner. That way, when you find a peak period, you are good to go on drilling down with the larger reports (awrrpt.sql)

You can see an example AWR report here (http://karlarao.tiddlyspot.com/#%5B%5BAWR%20Sample%20-%2010.2.0.3%5D%5D) which covers SNAP_ID 338-339... we usually get this report by running awrrpt.sql

Then, viewed in a time series manner, the values you see in the long report are the same as what you see for SNAP_ID 338... look at the DB Time here (http://lh3.ggpht.com/_F2x5WXOJ6Q8/S2hR6V8NjCI/AAAAAAAAAo0/YM_c7VhFKiI/dba_hist3.png): 1324.58÷60 = 22.08... that is the beauty of the script..

Example using LAG
{{{
select * from 
  (
  select 
     lag(a.snap_id) over(order by a.snap_id) as id,
     b.value-lag(b.value) over(order by a.snap_id) delta
  from dba_hist_snapshot a, dba_hist_osstat b
  where 
      a.dbid = b.dbid 
  and a.instance_number = b.instance_number 
  and a.snap_id = b.snap_id
  and b.stat_name='BUSY_TIME'
  order by a.snap_id
  )
where id = 338

        ID      DELTA
---------- ----------
       338      46982
}}}

// NOTE:
- Before, I had issues using the LAG function because it made this column use s1.snap_id, which is wrong.. but I finally figured out how to make sense of LAG.
- The s0.snap_id must be used as a column when doing the SQL trick "e.snap_id = s0.snap_id + 1" (see the old version of the scripts)
//
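The LAG trick can also be sketched outside SQL: pair each snapshot row with the next one and attribute the delta to the earlier row's snap_id. A minimal Python sketch with hypothetical BUSY_TIME values:

```python
# Hypothetical (snap_id, BUSY_TIME) rows, ordered by snap_id
rows = [(337, 100000), (338, 146982), (339, 190000)]

# lag(value) over (order by snap_id): each delta is attributed to the
# *previous* row's snap_id, matching lag(a.snap_id) in the query above
deltas = {prev_id: curr_val - prev_val
          for (prev_id, prev_val), (_curr_id, curr_val) in zip(rows, rows[1:])}
print(deltas)  # → {337: 46982, 338: 43018}
```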

!!!"Snap|Start|Time"
{{{
  TO_CHAR(s0.END_INTERVAL_TIME,'YY/MM/DD HH24:MI') tm,
}}}
- This is the time value associated with the SNAP_ID

!!!"i|n|s|t|#"
{{{
  s0.instance_number inst,
}}}
- The instance number; in a RAC environment you have to run the script on each of the nodes

!!!"Snap|Dur|(m)"
{{{
  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) dur,
}}}
- This is the "Elapsed" value that you see on the AWR report. The delta value of Begin and End Snaps.
- The unit is in minutes, the long AWR report usually shows it in minutes

!!!"C|P|U"
{{{
  s3t1.value AS cpu,
}}}
- From the Oracle perspective, this is the number of CPUs available to your database.
- Based on the dba_hist_osstat value s3t1.stat_name = 'NUM_CPUS'

!!!"***|Total|CPU|Time|(s)"
{{{
  (round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value cap,
}}}
- The formula is 
''(Snap Dur minutes * 60) * NUM_CPUS''
- The unit is in seconds
- Essentially this is how many seconds of CPU time you can have in a particular snap period. Remember that CPU cycles are finite, but you can wait endlessly on WAIT time. With the usual 10-min snap duration, that would be 600 seconds per CPU.. if in a particular period you incurred a total of 500 seconds of CPU (see the requirements section) then you are most likely at 83% CPU utilization (500 sec / 600 sec)
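To make the capacity arithmetic concrete, a small Python sketch with a hypothetical 10-minute snap on a single-CPU host:

```python
# Hypothetical: 10-minute snap period on a 1-CPU host
snap_dur_min = 10
num_cpus = 1

# (Snap Dur minutes * 60) * NUM_CPUS = CPU seconds available (capacity)
cap_s = snap_dur_min * 60 * num_cpus  # 600 CPU seconds
used_s = 500                          # hypothetical CPU seconds consumed
util_pct = used_s / cap_s * 100
print(round(util_pct))  # → 83
```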

<<<
!! CPU requirements
<<<
!!!"DB|Time"
{{{
  (s5t1.value - s5t0.value) / 1000000 as dbt,
}}}
!!!"DB|CPU"
{{{
(s6t1.value - s6t0.value) / 1000000 as dbc,
}}}
!!!"Bg|CPU"
{{{
  (s7t1.value - s7t0.value) / 1000000 as bgc,
}}}
!!!"RMAN|CPU"
{{{
  round(DECODE(s8t1.value,null,'null',(s8t1.value - s8t0.value) / 1000000),2) as rman,
}}}
!!!"A|A|S"
{{{
  ((s5t1.value - s5t0.value) / 1000000)/60 /  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,
- - - - - -
AAS = DB Time/Elapsed Time
= (1871.36/60)/10.06
= 3.100331345
}}}
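The AAS arithmetic above can be checked directly (figures taken from the sample: DB Time 1871.36 s over a 10.06-minute snap):

```python
# From the sample above: DB Time 1871.36 s, snap duration 10.06 min
db_time_s = 1871.36
snap_dur_min = 10.06

# AAS = DB Time / Elapsed Time, both expressed in minutes
aas = (db_time_s / 60) / snap_dur_min
print(round(aas, 4))  # → 3.1003
```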
!!!"***|Total|Oracle|CPU|(s)"
{{{
  round(((s6t1.value - s6t0.value) / 1000000) + ((s7t1.value - s7t0.value) / 1000000),2) totora,
}}}
!!!"OS|Load"
{{{
  round(s2t1.value,2) AS load,
}}}
!!!"***|Total|OS|CPU|(s)"
{{{
  (s1t1.value - s1t0.value)/100 AS totos,
}}}
<<<

!! Memory requirements
<<<
!!!"Physical|Memory|(mb)"
{{{
  s4t1.value/1024/1024 AS mem, 
}}}
<<<

!! IO requirements 
<<<
!!!"IOPs|r"
{{{
   ((s15t1.value - s15t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as IORs, 
}}}
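All the rate columns in this section have the same shape: a sysstat delta divided by the snap duration in seconds. A quick Python sketch with hypothetical numbers:

```python
# Hypothetical: physical read IO requests delta over one snap
reads_delta = 18600
snap_dur_min = 10.05   # the "Snap Dur (m)" column

# IOPS = delta / (snap duration minutes * 60)
iops_r = reads_delta / (snap_dur_min * 60)
print(round(iops_r, 1))  # → 30.8
```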
!!!"IOPs|w"
{{{
   ((s16t1.value - s16t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as IOWs, 
}}}
!!!"IOPs|redo"
{{{
   ((s13t1.value - s13t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as IORedo, 
}}}
!!!"IO r|(mb)/s"
{{{
   (((s11t1.value - s11t0.value)* &_blocksize)/1024/1024)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60) 
      as IORmbs, 
}}}
!!!"IO w|(mb)/s"
{{{
   (((s12t1.value - s12t0.value)* &_blocksize)/1024/1024)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60) 
      as IOWmbs, 
}}}
!!!"Redo|(mb)/s"
{{{
   ((s14t1.value - s14t0.value)/1024/1024)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
     as redosizesec, 
}}}
<<<

!! some SYSSTAT delta values 
<<<
!!!"Sess"
{{{
     s9t0.value logons, 
}}}
!!!"Exec|/s"
{{{
   ((s10t1.value - s10t0.value)  / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                  + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                  + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                  + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2))*60)
    ) as exs, 
}}}
<<<

!! CPU Utilization
<<<
!!!"Oracle|CPU|%"
{{{
  ((round(((s6t1.value - s6t0.value) / 1000000) + ((s7t1.value - s7t0.value) / 1000000),2)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oracpupct,
}}}
!!!"RMAN|CPU|%"
{{{
  ((round(DECODE(s8t1.value,null,'null',(s8t1.value - s8t0.value) / 1000000),2)) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as rmancpupct,
}}}
!!!"OS|CPU|%"
{{{
  (((s1t1.value - s1t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpupct,
}}}
!!!"U|S|R|%"
{{{
  (((s17t1.value - s17t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpuusr,
}}}
!!!"S|Y|S|%"
{{{
  (((s18t1.value - s18t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpusys,
}}}
!!!"I|O|%"
{{{
  (((s19t1.value - s19t0.value)/100) / ((round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
                                                                                              + EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
                                                                                              + EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
                                                                                              + EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2)*60)*s3t1.value))*100 as oscpuio
}}}
<<<

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

! 2) From awr_topevents.sql
''AWR Top Events Report, a version of "Top 5 Timed Events" but across SNAP_IDs with AAS metric''
{{{
Sample output:
														   AWR Top Events Report

			     i
			     n
	   Snap 	     s	     Snap												     A
	   Start	     t	      Dur					   Event			  Time	  Avgwt DB Time      A
   SNAP_ID Time 	     #	      (m) Event 				    Rank	  Waits 	   (s)	   (ms)       %      S Wait Class
---------- --------------- --- ---------- ---------------------------------------- ----- -------------- -------------- -------- ------- ------ ---------------
       338 10/01/17 06:50    1	    10.05 CPU time				       1	   0.00 	435.67	   0.00      33    0.7 CPU
       338 10/01/17 06:50    1	    10.05 db file sequential read		       2       18506.00 	278.94	  15.07      21    0.5 User I/O
       338 10/01/17 06:50    1	    10.05 PX Deq Credit: send blkd		       3       79918.00 	177.36	   2.22      13    0.3 Other
       338 10/01/17 06:50    1	    10.05 direct path read			       4      374300.00 	148.74	   0.40      11    0.2 User I/O
       338 10/01/17 06:50    1	    10.05 log file parallel write		       5	2299.00 	 82.60	  35.93       6    0.1 System I/O
}}}
{{{
														   AWR Top Events Report

			     i
			     n
	   Snap 	     s	     Snap												     A
	   Start	     t	      Dur					   Event			  Time	  Avgwt DB Time      A
   SNAP_ID Time 	     #	      (m) Event 				    Rank	  Waits 	   (s)	   (ms)       %      S Wait Class
---------- --------------- --- ---------- ---------------------------------------- ----- -------------- -------------- -------- ------- ------ ---------------
       336 10/01/17 06:30    1	    10.12 direct path read			       1       49893.00 	955.83	  19.16      51    1.6 User I/O
       336 10/01/17 06:30    1	    10.12 db file sequential read		       2	9477.00 	472.07	  49.81      25    0.8 User I/O
       336 10/01/17 06:30    1	    10.12 db file parallel write		       3	3776.00 	286.48	  75.87      15    0.5 System I/O
       336 10/01/17 06:30    1	    10.12 log file parallel write		       4	2575.00 	163.31	  63.42       9    0.3 System I/O
       336 10/01/17 06:30    1	    10.12 log file sync 			       5	1564.00 	156.64	 100.15       8    0.3 Commit
}}}
__''Tables used are:''__
- dba_hist_snapshot
- dba_hist_system_event
- dba_hist_sys_time_model
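A minimal sketch (hedged, not the actual awr_topevents.sql, which also ranks events and folds in CPU time from dba_hist_sys_time_model) of how these views join -- the cumulative counters in dba_hist_system_event are turned into per-snapshot deltas with LAG:
{{{
-- hedged sketch only; snapshot boundaries come from dba_hist_snapshot,
-- event times from dba_hist_system_event (cumulative since startup)
select s.snap_id,
       to_char(s.begin_interval_time, 'MM/DD/YY HH24:MI') snap_start,
       e.event_name,
       e.wait_class,
       (e.time_waited_micro
          - lag(e.time_waited_micro) over
              (partition by e.event_name, e.instance_number
               order by s.snap_id)) / 1000000 time_s
  from dba_hist_snapshot s,
       dba_hist_system_event e
 where e.snap_id         = s.snap_id
   and e.dbid            = s.dbid
   and e.instance_number = s.instance_number
 order by s.snap_id, time_s desc nulls last;
}}}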


<<<
!!!# "Snap|Start|Time"
!!!# "Snap|ID"
!!!# "i|n|s|t|#"
!!!# "Snap|Dur|(m)"
!!!# "C|P|U"
!!!# "A|A|S"
{{{
AAS = DB Time/Elapsed Time

Begin Snap:       338 17-Jan-10 06:50:58        31       2.9
  End Snap:       339 17-Jan-10 07:01:01        30       2.2

01/17/10 06:50:58
01/17/10 07:01:01

   Elapsed (SnapDur):               10.05 (mins) = 603    (sec)
   DB Time:                         22.08 (mins) = 1324.8 (sec)
   AAS = 2.197014925						<-- ADDM AAS is 2.2,  ASHRPT AAS is 2.7

-- THIS IS DB CPU / DB TIME... TO GET % OF DB CPU ON DB TIME ON TOP 5 TIMED EVENTS SECTION
((round ((s6t1.value - s6t0.value) / 1000000, 2)) / ((s5t1.value - s5t0.value) / 1000000))*100 as pctdbt,     

-- THIS IS DB CPU (min) / SnapDur (min) TO GET THE % OF AAS
(round ((s6t1.value - s6t0.value) / 1000000, 2))/60 /  round(EXTRACT(DAY FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 1440 
+ EXTRACT(HOUR FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) * 60 
+ EXTRACT(MINUTE FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) 
+ EXTRACT(SECOND FROM s1.END_INTERVAL_TIME - s0.END_INTERVAL_TIME) / 60, 2) aas,     

------ FROM AWR ... TOTAL AAS is 1.8.. 2.3 if you include the other events at the bottom
													A
									     Time    Avgwt DB Time	A
   SNAP_ID Event					     Waits	      (s)     (ms)	 %	S Wait Class
---------- ---------------------------------------- -------------- -------------- -------- ------- ------ ---------------
       338 CPU time					      0.00	   435.67     0.00	33    0.7
       338 db file sequential read			  18506.00	   278.94    15.07	21    0.5 User I/O
       338 PX Deq Credit: send blkd			  79918.00	   177.36     2.22	13    0.3 Other
       338 direct path read				 374300.00	   148.74     0.40	11    0.2 User I/O
       338 log file parallel write			   2299.00	    82.60    35.93	 6    0.1 System I/O


------ FROM ASHRPT ... TOTAL AAS is 1.99.. 2.47 if you include the other events at the bottom

Top User Events                    DB/Inst: IVRS/ivrs  (Jan 17 06:50 to 07:01)

                                                               Avg Active
Event                               Event Class     % Activity   Sessions
----------------------------------- --------------- ---------- ----------
CPU + Wait for CPU                  CPU                  36.20       0.98
PX Deq Credit: send blkd            Other                12.88       0.35
db file sequential read             User I/O             12.27       0.33
direct path read                    User I/O              7.36       0.20
PX qref latch                       Other                 4.91       0.13
          -------------------------------------------------------------

Top Background Events              DB/Inst: IVRS/ivrs  (Jan 17 06:50 to 07:01)

                                                               Avg Active
Event                               Event Class     % Activity   Sessions
----------------------------------- --------------- ---------- ----------
db file sequential read             User I/O              6.75       0.18
db file parallel write              System I/O            3.68       0.10
log file parallel write             System I/O            3.68       0.10
control file parallel write         System I/O            1.84       0.05
log file sequential read            System I/O            1.84       0.05
          -------------------------------------------------------------
}}}
!!!# "Event"
!!!# "Waits"
!!!# "Time|(s)"
!!!# "Avgwt|(ms)"
!!!# "Idle"
!!!# "DB Time|%"
!!!# "Wait Class"
<<<
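The AAS math above (DB time / elapsed) can be pulled straight from the AWR views. A hedged sketch, assuming a single instance and contiguous SNAP_IDs:
{{{
-- hedged sketch: AAS = delta('DB time') / snapshot elapsed seconds
-- e.g. snap 338: 1324.8 s DB time / 603 s elapsed ~= 2.2 AAS
select s0.snap_id,
       round( ((t1.value - t0.value) / 1000000)
            / ( extract(day    from s1.end_interval_time - s0.end_interval_time) * 86400
              + extract(hour   from s1.end_interval_time - s0.end_interval_time) * 3600
              + extract(minute from s1.end_interval_time - s0.end_interval_time) * 60
              + extract(second from s1.end_interval_time - s0.end_interval_time) ), 2) aas
  from dba_hist_snapshot s0, dba_hist_snapshot s1,
       dba_hist_sys_time_model t0, dba_hist_sys_time_model t1
 where s1.dbid            = s0.dbid
   and s1.instance_number = s0.instance_number
   and s1.snap_id         = s0.snap_id + 1      -- assumes no gaps in snap_ids
   and t0.snap_id = s0.snap_id and t0.dbid = s0.dbid
   and t0.instance_number = s0.instance_number and t0.stat_name = 'DB time'
   and t1.snap_id = s1.snap_id and t1.dbid = s1.dbid
   and t1.instance_number = s1.instance_number and t1.stat_name = 'DB time'
 order by s0.snap_id;
}}}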

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

! 3) RAC stuff


Global Cache Load Profile

-- Estimated Interconnect traffic
        ROUND(((RPT_PARAMS(STAT_DBBLK_SIZE) *
                (RPT_STATS(STAT_GC_CR_RV) + RPT_STATS(STAT_GC_CU_RV) +
                 RPT_STATS(STAT_GC_CR_SV) + RPT_STATS(STAT_GC_CU_SV))) +
               (200 *
                (RPT_STATS(STAT_GCS_MSG_RCVD) + RPT_STATS(STAT_GES_MSG_RCVD) +
                 RPT_STATS(STAT_GCS_MSG_SNT)  + RPT_STATS(STAT_GES_MSG_SNT))))
               / 1024 / RPT_STATS(STAT_ELAPSED), 2);
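Plugging representative numbers into the estimate above (all input values are made up for illustration):
{{{
-- assumed inputs: 8192-byte block size, 10,000 CR+CURRENT blocks
-- received+served, 50,000 GCS+GES messages sent+received, 600 s elapsed
--   (8192*10,000 + 200*50,000) / 1024 / 600
-- = (81,920,000 + 10,000,000) / 1024 / 600
-- =  89,765.6 KB / 600 s
-- ~= 149.6 KB/s estimated interconnect traffic
}}}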


Global Cache Efficiency Percentages - Target local+remote 100%

Global Cache and Enqueue Services - Workload Characteristics

Global Cache and Enqueue Services - Messaging Statistics

-- More RAC Statistics
-- RAC Report Summary

Global CR Served Stats

Global CURRENT Served Stats

Global Cache Transfer Stats

Global Enqueue Statistics

Segments by Global Cache Buffer Busy <-- possible

Global Cache Transfer Stats <-- possible
{{{
New interactive report for analyzing AWR data 
* Performance Hub report generated from SQL*Plus  
* @$ORACLE_HOME/rdbms/admin/perfhubrpt.sql 
* OR calling the dbms_perf.report_perfhub(….) function 
* Single view of DB performance 
* ADDM, SQL Tuning, Real-Time SQL Monitoring, ASH Analytics  
* Switch between ASH Analytics, workload view, ADDM findings and SQL monitoring seamlessly 
* Supports both real-time & historical mode 
* Historical view of SQL Monitoring reports 
}}}
http://www.oracle.com/technetwork/oem/db-mgmt/con8450-sqltuning-expertspanel-2338901.pdf

''DBMS_PERF'' http://docs.oracle.com/database/121/ARPLS/d_perf.htm#ARPLS75006
''Oracle Database 12c: EM Express Performance Hub'' http://www.oracle.com/technetwork/database/manageability/emx-perfhub-1970118.html
''Oracle Database 12c: EM Express Active Reports'' http://www.oracle.com/technetwork/database/manageability/emx-activerep-1970119.html


! usage
RDBMS 12.1.0.2 & Cell 12.1.2.1.0 expose detailed Exadata statistics in the historical perfhub report https://twitter.com/karlarao/status/573025645254479872

{{{

-- active
set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
spool perfhub_active.html
select dbms_perf.report_perfhub(is_realtime=>1,type=>'active') from dual;
spool off

-- historical
set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
spool perfhub_active2.html
select dbms_perf.report_perfhub(is_realtime=>0,type=>'active') from dual;
spool off

-- historical, without explicitly specifying the "type"
set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
spool perfhub_active3.html
select dbms_perf.report_perfhub(is_realtime=>0) from dual;
spool off

}}}


! 11.2.0.4 vs 12c 
''11204''
{{{
Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type: html
}}}
''12c''
{{{
Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
AWR reports can be generated in the following formats.  Please enter the
name of the format at the prompt.  Default value is 'html'.

'html'          HTML format (default)
'text'          Text format
'active-html'   Includes Performance Hub active report

Enter value for report_type: active-html
}}}




! 19c 
https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_PERF.html#GUID-290C18B9-A2EF-468D-9D6E-B31D717082BB
How To Get Historical SQL Monitor Report For SQL Statements (Doc ID 2555350.1)

{{{


--8wff5yszg5kc0
-- Execution Started Feb 15, 2024 5:34:37 PM GMT-00:00
-- Ended Feb 16, 2024 5:33:39 AM GMT-00:00

--3b7bqnza70gjx
-- Execution Started Feb 16, 2024 10:26:33 PM GMT-00:00


set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
spool perfhub_history_8wff5yszg5kc0.html
select dbms_perf.report_perfhub(is_realtime=>0,type=>'active',selected_start_time=>to_date('15-FEB-2024 17:00:00','dd-MON-YYYY hh24:mi:ss'),selected_end_time=>to_date('16-FEB-2024 06:00:00','dd-MON-YYYY hh24:mi:ss')) from dual;
spool off

set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
spool sql_details_history_8wff5yszg5kc0.html
select dbms_perf.report_perfhub(sql_id=>'8wff5yszg5kc0', type=>'active', is_realtime=>0, outer_start_time=>to_date('15-FEB-2024 17:00:00','dd-MON-YYYY hh24:mi:ss'), outer_end_time=>to_date('16-FEB-2024 06:00:00','dd-MON-YYYY hh24:mi:ss'), selected_start_time=>to_date('15-FEB-2024 17:00:00','dd-MON-YYYY hh24:mi:ss'), selected_end_time=>to_date('16-FEB-2024 06:00:00','dd-MON-YYYY hh24:mi:ss')) from dual;
spool off


set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
spool perfhub_history_3b7bqnza70gjx.html
select dbms_perf.report_perfhub(is_realtime=>0,type=>'active',selected_start_time=>to_date('16-FEB-2024 22:00:00','dd-MON-YYYY hh24:mi:ss'),selected_end_time=>to_date('17-FEB-2024 23:00:00','dd-MON-YYYY hh24:mi:ss')) from dual;
spool off

set pages 0 linesize 32767 trimspool on trim on long 1000000 longchunksize 10000000
spool sql_details_history_3b7bqnza70gjx.html
select dbms_perf.report_perfhub(sql_id=>'3b7bqnza70gjx', type=>'active', is_realtime=>0, outer_start_time=>to_date('16-FEB-2024 22:00:00','dd-MON-YYYY hh24:mi:ss'), outer_end_time=>to_date('17-FEB-2024 23:00:00','dd-MON-YYYY hh24:mi:ss'), selected_start_time=>to_date('16-FEB-2024 22:00:00','dd-MON-YYYY hh24:mi:ss'), selected_end_time=>to_date('17-FEB-2024 23:00:00','dd-MON-YYYY hh24:mi:ss')) from dual;
spool off


}}}






Here's my investigation on the topic and the reason why it's 24 cores on an x2-2 box... and some quirks in the graphing of the "CPU cores line"
http://www.evernote.com/shard/s48/sh/c7f8b7b5-4ceb-40e3-b877-9d00380749af/d76f3f66364a6454a9adafc2ae24c798
http://blogs.oracle.com/rtd/entry/performance_tips
http://www.red-gate.com/products/oracle-development/deployment-suite-for-oracle/webinars/webinar-archive

! Session/System level perf monitoring
* Perfsheet (Performance Visualization) – For Session Monitoring, uses excel sheet
* Ashmon (Active Session Monitoring) – for monitoring database sessions; Ashmon on 64-bit http://db-optimizer.blogspot.com/2010/10/ashmon-on-64bit-oracle-11gr2.html, by Marcin at GitHub https://github.com/pioro/orasash/
* DB Optimizer - the production version of Ashmon, with cool Visual SQL Tuning! (just like Dan Tow has envisioned)
* ASH Viewer by Alexander Kardapolov http://j.mp/dNidrB, http://ronr.blogspot.com/2012/10/ash-for-standard-edition-or-without.html
* Lab128 (trial software) – Tool for Oracle Tuning, Monitoring and trace SQL/Stored procedures transactions http://www.lab128.com/lab128_download.html http://www.lab128.com/lab128_new_features.html http://www.lab128.com/lab128_rg/html/contents.html Lab128 has automated the pstack sampling, os_explain, & reporting. Good tool to know where the query was spending time http://goo.gl/fyH5x
* Mumbai (freeware) - Performance monitoring tool that integrated Snapper, Orasrp, Statspack viewer, alert log viewer, nice session level profiling, and lots of good stuff! https://marcusmonnig.wordpress.com/mumbai/ 
* EMlight by Obzora http://obzora.com/home.html - a lightweight web based EM
* Google Chrome AWR Formatter by Tyler Muth - http://tylermuth.wordpress.com/2011/04/20/awr-formatter/ - when you want to drill down on AWR statistics for a specific SNAP_ID this tool can be very helpful. This works only on the HTML format of AWR. I would use it together with the Firefighting Diagnosis excel template of Craig Shallahamer to quickly account for RT = ST + QT (response time = service time + queue time)
* Snapper (Oracle Session Snapper) - Reports Oracle session level performance counter and wait information in real time http://tech.e2sn.com/oracle-scripts-and-tools/session-snapper - doesn't require Diag&Tuning pack
* MOATS - http://blog.tanelpoder.com/2011/03/29/moats-the-mother-of-all-tuning-scripts/ , http://www.oracle-developer.net/utilities.php
* RAC-aware MOATS - http://jagjeet.wordpress.com/2012/05/13/sqlplus-dashboard-for-rac/ has a cool AAS dashboard with Exadata metrics (smart scans, flash cache, etc.) - this requires Diag&Tuning Pack
* oratop (MOS 1500864.1) - near real-time monitoring of databases, RAC and Single Instance, much like RAC-aware MOATS - doesn't require Diag&Tuning pack, no cool AAS dashboard
* Oracle LTOM (Oracle Lite Onboard Monitor) – Provides automatic session tracing
* Orapub's OSM scripts - A toolkit for database monitoring and workload characterization
* JL references http://jonathanlewis.wordpress.com/2009/06/23/glossary/ , http://jonathanlewis.wordpress.com/2009/12/18/simple-scripts/ , http://jonathanlewis.wordpress.com/statspack-examples/ , http://jonathanlewis.wordpress.com/2010/03/17/partition-stats/
* List of end-user monitoring tools http://www.real-user-monitoring.com/the-complete-list-of-end-user-experience-monitoring-tools/ , http://www.alexanderpodelko.com/PerfManagement.html
* [[ASH masters, AWR masters]] - a collection of ASH and AWR scripts I've been using for years to do session level profiling and workload characterization
* orachk collection manager http://www.fuadarshad.com/2015/02/exadata-12c-new-features-rmoug-slides.html
* [[report_sql_monitor_html.sql]] sql monitor reports
* [[Performance Hub report]] performance hub reports

! SQL Tuning
* SQLTXPLAIN (Oracle Extended Explain Plan Statistics) – Provides details about all schema objects on which the SQL statement depends.
* Orasrp (Oracle Session Resource Planner) – Builds complete detailed session profile
* gxplan - Visualization of explain plan
* 10053 viewer - http://jonathanlewis.wordpress.com/2010/04/30/10053-viewer/

! Forecasting
* r2toolkit - http://karlarao.tiddlyspot.com/#r2project This is a performance toolkit that uses AWR data and linear regression to identify which metric/statistic is driving the database server’s workload. The data points can be very useful for capacity planning, giving you informed decisions and completely avoiding guesswork!









Kyle's notes 
https://sites.google.com/site/oraclemonitor/notes
<<showtoc>>


{{{
network diagnostic tools
		ping
		traceroute
		host
		dig
		netstat
		gnome-nettool (GUI)

verify ip connectivity
		ping				<-- packet loss & latency measurement tool (sends ICMP - internet control message protocol, default is 64 bytes)
		traceroute			<-- displays network path to a destination (uses UDP frames to probe the path)
		mtr					<-- a tool that combines ping & traceroute

New and Modified utilities
			ping6
			traceroute6
			tracepath6
			ip -6
			host -t AAAA hostname6.domain6

        ip
	    route -n 		<-- display routing table
	    traceroute <ip>	<-- diagnose routing problems
}}}


! step by step 
http://www.ateam-oracle.com/testing-latency-and-throughput/
<<<
The following is a simple list of steps to collect throughput and latency data.

Run MTR to see general latency and packet loss between servers.
Execute a multi-stream iperf test to see total throughput.
Execute UDP/jitter test if your setup will be using UDP between servers.
Execute jmeter tests against application/rest endpoint(s).
<<<
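The steps above can be sketched as commands (hostnames, ports, and the jmeter test plan are placeholders; iperf3 and jmeter flags are from their standard CLIs):
{{{
# 1) latency & packet loss between servers
mtr --report --report-cycles 60 dbhost

# 2) total throughput with multiple parallel streams (server side: iperf3 -s)
iperf3 -c dbhost -P 8 -t 30

# 3) UDP/jitter test
iperf3 -c dbhost -u -b 100M -t 30

# 4) load test against the application/REST endpoint
jmeter -n -t plan.jmx -l results.jtl
}}}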



! latency and hops
https://www.digitalocean.com/community/tutorials/how-to-use-traceroute-and-mtr-to-diagnose-network-issues
https://www.thegeekdiary.com/how-to-use-qperf-to-measure-network-bandwidth-and-latency-performance-in-linux/
https://arjanschaaf.github.io/is-the-network-the-limit/
http://paulbakker.io/docker/docker-cloud-network-performance/
How to use qperf to measure network bandwidth and latency performance https://access.redhat.com/solutions/2122681
https://www.opsdash.com/blog/network-performance-linux.html
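A hedged qperf sketch for the links above (the bare qperf daemon must be running on the remote side first; it listens on its default control port):
{{{
qperf                           # on the server: listen for tests
qperf dbhost tcp_bw tcp_lat     # on the client: TCP bandwidth and latency
qperf dbhost udp_bw udp_lat     # UDP bandwidth and latency
}}}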


! Using Traceroute, Ping, MTR, and PathPing, qperf
https://www.pluralsight.com/blog/it-ops/troubleshoot-ping-traceroute
https://www.cisco.com/en/US/docs/internetworking/troubleshooting/guide/tr1907.html#wp1020813

Displaying Routing Information With the traceroute Command https://docs.oracle.com/cd/E23824_01/html/821-1453/ipv6-admintasks-72.html
Troubleshoot network performance issues with ping and traceroute https://help.salesforce.com/s/articleView?id=000326878&type=1
Using Traceroute, Ping, MTR, and PathPing https://www.clouddirect.net/knowledge-base/KB0011455/using-traceroute-ping-mtr-and-pathping
https://www.howtogeek.com/134132/how-to-use-traceroute-to-identify-network-problems/


! latency and throughput example commands 
https://www.ateam-oracle.com/testing-latency-and-throughput




! bandwidth (iperf)
Oracle Cloud Infrastructure: Bandwidth iperf test https://www.youtube.com/watch?v=z6aGcy25gX8
https://github.com/esnet/iperf/issues/547
https://oracle-randolf.blogspot.com/2017/02/oracle-database-cloud-dbaas-performance.html , https://oracle-randolf.blogspot.com/search/label/DBaaS





! references 
Network Troubleshooting Tools https://learning.oreilly.com/library/view/network-troubleshooting-tools/059600186X/
Network Maintenance and Troubleshooting Guide: Field-Tested Solutions for Everyday Problems, Second Editon https://learning.oreilly.com/library/view/network-maintenance-and/9780321647672/ch11.html
DevOps Troubleshooting for Linux Server: Is the Server Down? Tracking Down the Source of Network Problems https://learning.oreilly.com/library/view/devops-troubleshooting-for/9780133258813/ch05.html#ch05lev1sec7









! System level OS perf monitoring
* kSar - a SAR grapher - http://sourceforge.net/projects/ksar/ , https://www.linux.com/news/visualize-sar-data-ksar , https://www.thomas-krenn.com/en/wiki/Linux_Performance_Analysis_using_kSar
{{{
export LC_ALL=C
sar -A -f /var/log/sysstat/sa15 > sardata.txt
cat /var/log/sysstat/sar?? > /tmp/sar.all     <- merge multiple days
}}}
* OSWatcher (Oracle OS Watcher) - Reports CPU, RAM and Network stress, and is a new alternative for monitoring Oracle servers (includes session level ps)
* Oracle Cluster Health Monitor - http://goo.gl/UZqS5 (includes session level ps)
* nmon
* Dynamic Tracing Tools - ''DTrace'' - Solaris,Linux   ''ProbeVue'' - AIX
* top, vmstat, mpstat - http://smartos.org/2011/05/04/video-the-gregg-performance-series/
* turbostat.c http://developer.amd.com/Assets/51803A_OpteronLinuxTuningGuide_SCREEN.pdf, http://manpages.ubuntu.com/manpages/precise/man8/turbostat.8.html, http://lxr.free-electrons.com/source/tools/power/x86/turbostat/, http://stuff.mit.edu/afs/sipb/contrib/linux/tools/power/x86/turbostat/turbostat.c
* vm performance and CPU contention [[esxtop, vmstat, top, mpstat steal]]

! Session level OS perf monitoring
* iotop http://guichaz.free.fr/iotop/ , for RHEL http://people.redhat.com/jolsa/iotop/ , topio Solaris http://yong321.freeshell.org/freeware/pio.html
* atop alternative to iotop on RHEL4 http://www.atoptool.nl/index.php
* collectl http://collectl.sourceforge.net/ , http://collectl-utils.sourceforge.net/ , detailed process accounting (you can also do ala ''iotop'') http://collectl.sourceforge.net/Process.html
* prstat Solaris
{{{
Memory per process accounting: collectl -sZ -i:1 --procopts m
IO per process accounting: collectl -sZ -i:1
}}}
* iodump http://www.xaprb.com/blog/2009/08/23/how-to-find-per-process-io-statistics-on-linux/  <-- I'm a bit dubious about this..done a test case comparing to collectl.. it can't get the top processes doing the io.. related links: http://goo.gl/NwUcs , http://goo.gl/zVEFE , http://goo.gl/eQg3d
* perf top http://anton.ozlabs.org/blog/2009/09/04/using-performance-counters-for-linux/ <-- kernel profiling tool for linux, much like dtrace probe on syscall, ''wiki'' https://perf.wiki.kernel.org/index.php/Tutorial#Live_analysis_with_perf_top
* Digger - the tool for tracing of unix processes http://alexanderanokhin.wordpress.com/tools/digger/
* per-process level cpu scheduling - latency.c http://eaglet.rain.com/rick/linux/schedstat/
* vtune http://software.intel.com/en-us/intel-vtune-amplifier-xe
* BPF https://blog.memsql.com/bpf-linux-performance/
* perf Basic usage of perf command ( tracing tool ) (Doc ID 2174289.1)
* cputrack (solaris) - process level CPU counters	- starting solaris 8 http://www.scalingbits.com/performance/tracing

! Network 
* uperf http://www.uperf.org/, http://www.uperf.org/manual.html
* rds-stress   http://oss.oracle.com/pipermail/rds-devel/2007-November/000237.html, http://oss.oracle.com/~okir/rds/2008-Feb-29/scalability/
* pingplotter http://www.pingplotter.com/
* netem WAN performance simulator http://www.linuxfoundation.org/collaborate/workgroups/networking/netem , http://www.oracle.com/technetwork/articles/wartak-rac-vm-3-096492.html#9a
* network speed test without flash http://openspeedtest.com/results/5244312

! Storage/IO
* EMC ControlCenter (ECC)
* asm_metric.pl 



Orion - see tiddlers below
SQLIO (for SQL Server) - http://sqlserverpedia.com/wiki/SAN_Performance_Tuning_with_SQLIO
ASMIOSTAT Script to collect iostats for ASM disks Doc ID: 	437996.1
''asmcmd''
{{{
asmcmd iostat -et --io --region -G DATA 5
}}}
also see [[asm_metrics.pl]]








Customer Knowledge Exchange
https://metalink2.oracle.com/metalink/plsql/f?p=130:14:6788425522391793279::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,375443.1,1,1,1,helvetica

''* Master Note: Database Performance Overview [ID 402983.1]''

''Performance Tools Quick Reference Guide 	Doc ID:	Note:438452.1''
<<<
* Query Tuning
Enterprise Manager (SQL Tuning Advisor)
AWR SQL Report
SQLTXPLAIN
TRCANLZR
PL/SQL Profiler
LTOM (Session Trace Collector)
OPDG
SQL Tuning Health-Check Script [ID 1366133.1]

* OS Data
OS_Watcher

* Database Tuning
Enterprise Manager ADDM
ADDM Report
STATSPACK
AWR Report
OPDG

* Hang, Locking, and Transient Issues
ASH Report
LTOM (Hang Detector, Data Recorder)
HangFG

* Error/Crash Issues
Stackx
ORA-600/ORA-7445 Troubleshooter

* RAC
RDA
RACcheck - RAC Configuration Audit Tool [ID 1268927.1]  - sample report http://dl.dropbox.com/u/25153503/Oracle/raccheck.html

* ASM tools used by Support : KFOD, KFED, AMDU [ID 1485597.1]
<<<


Oracle Performance Diagnostic Guide (OPDG)
 	Doc ID:	Note:390374.1

Performance Improvement Tips for Oracle on UNIX
  	Doc ID: 	Note:1005636.6

How to use OS commands to diagnose Database Performance issues?
  	Doc ID: 	Note:224176.1

Introduction to Tuning Oracle7 / Oracle8 / 8i / 9i 
  Doc ID:  Note:61998.1 



-- DATABASE HEALTH CHECK

How to Perform a Healthcheck on the Database
  	Doc ID: 	122669.1

My Oracle Support Health Checks Catalog [ID 868955.1]

Avoid Known Problems and Improve Stability - New Database, Middleware, E-Business Suite, PeopleSoft, Siebel & JD Edwards Health Checks Released! [ID 1206734.1]



-- ANALYSIS

Yet Another Performance Profiling Method (Or YAPP-Method) (Doc ID 148518.1)

Some Reasons for Poor Performance at Database,Network and Client levels
  	Doc ID: 	Note:242495.1

Performance Improvement Tips for Oracle on UNIX
  	Doc ID: 	1005636.6

CHECKLIST-What else can influence the Performance of the Database
  	Doc ID: 	148462.1

Abrupt Spikes In Number Of Sessions Causing Slow Performance.
  	Doc ID: 	736635.1

TROUBLESHOOTING: Advanced Query Tuning
  	Doc ID: 	163563.1

Note 233112.1 START HERE> Diagnosing Query Tuning Problems Using a Decision Tree

Note 372431.1 TROUBLESHOOTING: Tuning a New Query
Note 179668.1 TROUBLESHOOTING: Tuning Slow Running Queries
Note 122812.1 Tuning Suggestions When Query Cannot be Modified
Note 67522.1 Diagnosing Why a Query is Not Using an Index

Note 214106.1 Using TKProf to compare actual and predicted row counts

What is the Oracle Diagnostic Methodology (ODM)?
  	Doc ID: 	312789.1



-- ORACLE SUPPORT CASE STUDIES, COE

-- chris warticki
http://blogs.oracle.com/support/

Case Study Master (Doc ID 342534.1)
https://metalink2.oracle.com/metalink/plsql/f?p=130:14:4157667604321941359::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,342534.1,1,1,1,helvetica

Case Study: Diagnosing Another Buffer Busy Waits Issue
 	Doc ID:	Note:358303.1

Freelist Management with Oracle 8i
 	Doc ID:	Note:157250.1

Network Performance Considerations in Designing Client/Server Applications
  	Doc ID: 	76412.1

Database Writer and Buffer Management
  	Doc ID: 	91062.1

http://netappdb.blogspot.com/

Determining CPU Resource Usage for Linux and Unix
  	Doc ID: 	Note:466996.1

Measuring Memory Resource Usage for Linux and Unix
  	Doc ID: 	Note:467018.1

Linux Kernel: The SLAB Allocator
 	Doc ID:	Note:434351.1

Best Practices for Load Testing
  	Doc ID: 	Note:466452.1

What is DARV
  	Doc ID: 	391153.1




-- CUSTOM APPS TUNING
http://blogs.oracle.com/theshortenspot/entry/troubleshooting_in_a_nutshell
Performance Troubleshooting Guides For Oracle Utilities CCB, BI & Oracle ETM [ID 560382.1]



-- STATSPACK

Systemwide Tuning using UTLESTAT Reports in Oracle7/8
  	Doc ID: 	Note:62161.1

Note 228913.1 Systemwide Tuning using STATSPACK Reports

New system statistics in Oracle 8i bstat/estat report
  	Doc ID: 	134346.1

Statistics Package (STATSPACK) Guide
  	Doc ID: 	394937.1

FAQ- Statspack Complete Reference
  	Doc ID: 	94224.1

Using AWR/Statspack reports to help solve some Portal Performance Problems scenarios
  	Doc ID: 	565812.1

Two Types of Automatic Statistics Collected in 10g
  	Doc ID: 	559029.1

Creating a StatsPack performance report
  	Doc ID: 	149124.1

Gathering a StatsPack snapshot
  	Doc ID: 	149121.1

What is StatsPack and where are the READMEs?
  	Doc ID: 	149115.1

Systemwide Tuning using STATSPACK Reports
  	Doc ID: 	228913.1

Sharing StatsPack snapshot data between two or more databases
  	Doc ID: 	149122.1

What We Did to Track and Detect Init Parameter Changes in our Database
  	Doc ID: 	436776.1

Oracle Database 10g Migration/Upgrade: Known Issues and Best Practices with Self-Managing Database
  	Doc ID: 	332889.1

How To Integrate Statspack with EM 10G
  	Doc ID: 	274436.1

Installing and Using Standby Statspack in 11gR1
  	Doc ID: 	454848.1


-- DISABLE AWR
Package for disabling AWR without a Diagnostic Pack license in Oracle
  	Doc ID: 	436386.1


-- AWR
Solving Convertible or Lossy data in Data Dictionary objects when changing the NLS_CHARACTERSET
  	Doc ID: 	258904.1

Although AWR snapshot is dropped, WRH$_SQLTEXT still shows some relevant entries
  	Doc ID: 	798526.1

High Storage Consumption for LOBs in SYSAUX Tablespace
  	Doc ID: 	396502.1


-- AWR BASELINE
How to Generate an AWR Report and Create Baselines [ID 748642.1]


-- AWR ERRORS
OERR: ORA-13711 Some snapshots in the range [%s, %s] are missing key statistic [ID 287886.1]
Troubleshooting: AWR Snapshot Collection issues [ID 1301503.1]
ORA-12751 cpu time or run time policy violation [ID 761298.1]      <-- usually happens when you are on high CPU, high SYS CPU
AWR or STATSPACK Snapshot collection extremely slow in 11gR2 [ID 1392603.1]
Bug 13372759: AWR SNAPSHOTS HANGING
Bug 13257247 - AWR Snapshot collection hangs due to slow inserts into WRH$_TEMPSTATXS. [ID 13257247.8]   <-- BUGG!!!! will cause a bloated IO MB/s number


-- EXPORT IMPORT AWR
http://gavinsoorma.com/2009/07/exporting-and-importing-awr-snapshot-data/
How to Transport AWR Data [ID 872733.1]
http://dboptimizer.com/2011/04/16/importing-multiple-databases-awr-repositories/





-- EVENTS

What is the "WF - Contention'' Enqueue ?
  	Doc ID: 	Note:358208.1

Consistent gets - examination
http://www.dba-oracle.com/m_consistent_gets.htm







-- SGA

FREQUENT RESIZE OF SGA
  	Doc ID: 	742599.1





-- BUFFER CACHE

Understanding and Tuning Buffer Cache and DBWR
  	Doc ID: 	Note:62172.1

Note 1022293.6 HOW A TABLE CAN BE CACHED IN MEMORY BUFFER CACHE

How to Identify The Segment Associated with Buffer Busy Waits
 	Doc ID:	Note:413931.1

Resolving Intense and "Random" Buffer Busy Wait Performance Problems
 	Doc ID:	Note:155971.1

Case Study: Diagnosing Another Buffer Busy Waits Issue
 	Doc ID:	Note:358303.1

DB_WRITER_PROCESSES or DBWR_IO_SLAVES? 
  Doc ID:  Note:97291.1 

Database Writer and Buffer Management 
  Doc ID:  Note:91062.1 

STATISTIC "cache hit ratio" - Reference Note
  	Doc ID: 	Note:33883.1

Oracle9i NF: Dynamic Buffer Cache Advisory
  	Doc ID: 	Note:148511.1

How To Identify a Hot Block Within The Database Buffer Cache.
  	Doc ID: 	Note:163424.1

What is "v$bh"? How should it be used?
  	Doc ID: 	73582.1



-- BUFFER BUSY WAITS

How To Identify a Hot Block Within The Database Buffer Cache.
  	Doc ID: 	163424.1

Difference Between 'Buffer Busy Waits' and 'Latch: Cache Buffers Chains"?
  	Doc ID: 	833303.1

Abrupt Spikes In Number Of Sessions Causing Slow Performance.
  	Doc ID: 	736635.1

How to Identify Which Latch is Associated with a "latch free" wait
  	Doc ID: 	413942.1

New system statistics in Oracle 8i bstat/estat report
  	Doc ID: 	134346.1

ACTIVE: DML HANGING - BUFFER BUSY WAITS
  	Doc ID: 	1061802.6

How to Identify The Segment Associated with Buffer Busy Waits
  	Doc ID: 	413931.1





-- BUFFER POOL

Oracle Multiple Buffer Pools Feature
  	Doc ID: 	135223.1

ORACLE8.X: HOW TO MAKE SMALL FREQUENTLY USED TABLES STAY IN MEMORY
  	Doc ID: 	1059295.6

Multiple BUFFER subcaches: What is the total BUFFER CACHE size?
  	Doc ID: 	138226.1

HOW A TABLE CAN BE CACHED IN MEMORY/BUFFER CACHE <-- oracle 7
  	Doc ID: 	1022293.6





-- LARGE POOL 

Fundamentals of the Large Pool (Doc ID 62140.1)




-- SHARED POOL 

Using the Oracle DBMS_SHARED_POOL Package
  	Doc ID: 	Note:61760.1

How to Pin a Cursor in the Shared Pool
  	Doc ID: 	Note:726780.1

90+percent of the shared pool memory though no activity on the database
  	Doc ID: 	Note:552391.1

How to Pin SQL Statements in Memory Using DBMS_SHARED_POOL
  	Doc ID: 	Note:152679.1

90+percent of the shared pool memory though no activity on the database
  	Doc ID: 	552391.1

HOW TO FIND THE SESSION HOLDING A LIBRARY CACHE LOCK
  	Doc ID: 	122793.1

Dump In msqsub() When Querying V$SQL_PLAN
  	Doc ID: 	361342.1

Troubleshooting and Diagnosing ORA-4031 Error
  	Doc ID: 	396940.1

When Cursor_Sharing=Similar/Force do not Share Cursors When Literals are Used?
  	Doc ID: 	364845.1

Handling and resolving unshared cursors/large version_counts
  	Doc ID: 	296377.1

How to Identify Resource Intensive SQL for Tuning
  	Doc ID: 	232443.1

How using synonyms may affect database performance and scalability
  	Doc ID: 	131272.1

Example "Top SQL" queries from V$SQLAREA
  	Doc ID: 	235146.1

ORA-4031 Common Analysis/Diagnostic Scripts
  	Doc ID: 	430473.1

Understanding and Tuning the Shared Pool
  	Doc ID: 	62143.1




-- SHARED POOL PIN

How to Automate Pinning Objects in Shared Pool at Database Startup
  	Doc ID: 	101627.1

PINNING ORACLE APPLICATIONS OBJECTS INTO THE SHARED POOL
  	Doc ID: 	69925.1

How To Use SYS.DBMS_SHARED_POOL In a PL/SQL Stored procedure To Pin objects in Oracle's Shared Pool.
  	Doc ID: 	305529.1

How to Pin a Cursor in the Shared Pool
  	Doc ID: 	726780.1

How to Pin SQL Statements in Memory Using DBMS_SHARED_POOL
  	Doc ID: 	152679.1



-- HARD/SOFT PARSE

How to work out how many of the parse count are hard/soft?
  	Doc ID: 	34433.1



-- COMMIT

Does Auto-Commit Perform Commit On Select?
  	Doc ID: 	371984.1






-- FREELISTS & FREELISTS GROUPS

Freelist Management with Oracle 8i
 	Doc ID:	Note:157250.1

How To Solve High ITL Waits For Given Segments.
 	Doc ID:	Note:464041.1






-- EBS

Troubleshooting Oracle Applications Performance Issues
  	Doc ID: 	Note:169935.1 	

MRP Core/Mfg Performance Tuning and Troubleshooting Guide
  	Doc ID: 	100956.1








-- LATCH

What are Latches and What Causes Latch Contention 
  Doc ID:  Note:22908.1 
  
How to Match a Row Cache Object Child Latch to its Row Cache
  	Doc ID: 	Note:468334.1
  	




-- CHECKPOINT

Manual Log Switching Causing "Thread 1 Cannot Allocate New Log" Message in the Alert Log 
  Doc ID:  Note:435887.1 

Checkpoint Tuning and Troubleshooting Guide 
  Doc ID:  Note:147468.1 

Alert Log Messages: Private Strand Flush Not Complete 
  Doc ID:  Note:372557.1 

DB Redolog Archive Once A Minute 
  Doc ID:  Note:370151.1 

Automatic Checkpoint Tuning in 10g 
  Doc ID:  Note:265831.1 

WHY REDO LOG SPACE REQUESTS ALWAYS INCREASE AND NEVER DECREASE? 
  Doc ID:  Note:1025593.6 




-- OS LEVEL (Linux - Puschitz)

Oracle MetaLink Note:200266.1
Oracle MetaLink Note:225751.1
Oracle MetaLink Note:249213.1
Oracle MetaLink Note:260152.1
Oracle MetaLink Note:262004.1
Oracle MetaLink Note:265194.1
Oracle MetaLink Note:270382.1
Oracle MetaLink Note:280463.1
Oracle MetaLink Note:329378.1
Oracle MetaLink Note:344320.1

http://www.oracle.com/technology/pub/notes/technote_rhel3.html
http://www.redhat.com/whitepapers/rhel/OracleonLinux.pdf
http://www.redhat.com/magazine/001nov04/features/vm/			<-- Understanding Virtual Memory by Norm Murray and Neil Horman
http://kerneltrap.org/node/2450						<-- Feature: High Memory In The Linux Kernel
http://www.redhat.com/whitepapers/rhel/AdvServerRASMpdfRev2.pdf








-- hang

What To Do and Not To Do When 'shutdown immediate' Hangs
  	Doc ID: 	Note:375935.1

Bug:5057695: Shutdown Immediate Very Slow To Close Database.
  	Doc ID: 	Note:428688.1

Diagnosing Database Hanging Issues
  	Doc ID: 	Note:61552.1

Bug No. 	5057695 SHUTDOWN IMMEDIATE SLOW TO CLOSE DOWN DATABASE WITH INACTIVE JDBC THIN SESSIONS 

How to Debug Hanging Sessions?
  	Doc ID: 	178721.1

ORA-00054: When Dropping or Truncating Table, When Creating or Rebuilding Index 
  Doc ID:  117316.1 

Connection To / As Sysdba and Shutdown Immediate Hang 
  Doc ID:  314365.1 

How To Use Truss With Opatch?
  	Doc ID: 	470225.1

How to Trace Unix System Calls
  	Doc ID: 	110888.1

TECH: Getting a Stack Trace from a CORE file
  	Doc ID: 	1812.1

TECH: Using Truss / Trace on Unix
  	Doc ID: 	28588.1

How to Process an Express Core File Using dbx, dbg, dde, gdb or ladebug
  	Doc ID: 	118252.1

How to Process an Express Server Core File Using gdb
  	Doc ID: 	189760.1

Procwatcher: Script to Monitor and Examine Oracle and CRS Processes
  	Doc ID: 	459694.1

Interpreting HANGANALYZE trace files to diagnose hanging and performance problems
  	Doc ID: 	215858.1

CASE STUDY: Using Real-Time Diagnostic Tools to Diagnose Intermittent Database Hangs
  	Doc ID: 	370363.1

HANGFG User Guide
  	Doc ID: 	362094.1

No Response from the Server, Does it Hang or Spin?
  	Doc ID: 	68738.1

Diagnosing Webforms Hanging
  	Doc ID: 	179612.1

Database Performance FAQ
  	Doc ID: 	402983.1

Steps to generate HANGANALYZE trace files
  	Doc ID: 	175006.1

How To Display Information About Processes on SUN Solaris
  	Doc ID: 	70609.1





-- INTERNALS

Database Internals (Events, Blockdumps)
https://metalink2.oracle.com/metalink/plsql/f?p=130:14:4157667604321941359::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,267951.1,1,1,1,helvetica




-- SPA 

SQL PERFORMANCE ANALYZER 10.2.0.x to 10.2.0.y EXAMPLE SCRIPTS (Doc ID 742644.1)





-- TRACE

Interpreting Raw SQL_TRACE and DBMS_SUPPORT.START_TRACE output
  Metalink Note 39817.1
  This is the event used to implement the DBMS_SUPPORT trace, which is a
  superset of Oracle's SQL_TRACE facility. At level 4, bind calls are included in the
  trace output; at level 8, wait events are included, which is the default level for
  DBMS_SUPPORT; and at level 12, both binds and waits are included. See the
  excellent Oracle Note 39817.1 for a detailed explanation of the raw information in
  the trace file.
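
The level numbers described above compose additively: binds contribute 4, waits contribute 8, so level 12 = 4 + 8 captures both. A trivial sketch building the standard 10046 event string from that arithmetic:

```shell
# 10046 trace levels are additive: binds (4) + waits (8) = 12
level=$((4 + 8))
echo "alter session set events '10046 trace name context forever, level ${level}';"
```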

How to Obtain Tracing of Optimizer Computations (EVENT 10053)
 	Doc ID:	Note:225598.1

Recommended Method for Obtaining 10046 trace for Tuning
 	Doc ID:	Note:376442.1

EVENT: 10046 "enable SQL statement tracing (including binds/waits)"
 	Doc ID:	Note:21154.1

Tracing Oracle Applications using Event 10046
 	Doc ID:	Note:171647.1

Troubleshooting (Tracing)
 	Doc ID:	Note:117820.1

Note 246821.1   trace.sql                - Traces a sql statement ensuring that the rows column will be populated
Note 156969.1   coe_trace.sql            - SQL Tracing Apps online transactions with Event 10046 (11.5)
Note 156970.1   coe_trace_11.sql         - SQL Tracing Apps online transactions with Event 10046 (11.0)
Note 156971.1   coe_trace_all.sql        - Turns SQL Trace ON for all open DB Sessions (8.0-9.0)
Note 156966.1   coe_event_10046.sql      - SQL Tracing online transactions using Event 10046 7.3-9.0
Note 171647.1                            - Tracing Oracle Applications using Event 10046
Note 179848.1   bde_system_event_10046.sql - SQL Trace any transaction with Event 10046 8.1-9.0
Note 224270.1   TRCANLZR.sql - Trace Analyzer - Interpreting Raw SQL Traces generated by EVENT 10046
Note 296559.1   FAQ: Common Tracing Techniques within the Oracle Applications 11i


Introduction to Trace Analyzer and SQLTXPLAIN For System Admins and DBAs (Doc ID 864002.1)


Tracing Sessions in Oracle Using the DBMS_SUPPORT Package
  	Doc ID: 	62160.1

Tracing sessions: waiting on an enqueue
  	Doc ID: 	102925.1

Cannot Read User Trace File Even ''_trace_files_public''=True In 10G RAC
  	Doc ID: 	283379.1






How to Turn on Tracing of Calls to Database
 	Doc ID:	Note:187913.1

Note 1058210.6 HOW TO ENABLE SQL TRACE FOR ANOTHER SESSION USING ORADEBUG

Getting 10046 Trace for Export and Import
 	Doc ID:	Note:258418.1

Library Cache Latch Waits Cause Database Slowdown On Tracing Sessions With Event 10046
 	Doc ID:	Note:311105.1

How To Display The Values Of A Bind Variable In A SQL Statement
 	Doc ID:	Note:1068973.6

Introduction to ORACLE Diagnostic EVENTS
 	Doc ID:	Note:218105.1

When Conventional Thinking Fails: A Performance Case Study in Order Management Workflow customization
 	Doc ID:	Note:431619.1

How to Set SQL Trace on with 10046 Event Trace which Provides the Bind Variables
 	Doc ID:	Note:160124.1

Diagnostics for Query Tuning Problems
 	Doc ID:	Note:68735.1

Master note for diagnosing Portal/Database Performance Issues
 	Doc ID:	Note:578806.1

Debug and Validate Invalid Objects
 	Doc ID:	Note:300056.1

How to Submit a Testcase to Oracle Support for Reproducing an Execution Plan
 	Doc ID:	Note:390610.1

How to Run SQL Testcase Builder from ADRCI [Video] [ID 1174105.1]  <-- new stuff


How to Log a Good Performance Service Request
 	Doc ID:	Note:210014.1

Index Rebuild Is Hanging Or Taking Too Long
 	Doc ID:	Note:272762.1

Tracing session created through dblink
 	Doc ID:	Note:258754.1

Overview Reference for SQL_TRACE, TKProf and Explain Plan
 	Doc ID:	Note:199081.1


12099 ?? <-- what?



-- PL/SQL PROFILER

Implementing and Using the PL/SQL Profiler (Doc ID 243755.1)





-- DBMS_SUPPORT

The DBMS_SUPPORT Package
  	Doc ID: 	Note:62294.1



-- DBMS_APPLICATION_INFO

PACKAGE DBMS_APPLICATION_INFO Specification
  	Doc ID: 	Note:30366.1






-- RMAN 

RMAN Performance Tuning Diagnostics
 	Doc ID:	Note:311068.1




-- CPU 

How to Diagnose high CPU usage problems 
  Doc ID:  352648.1 

Diagnosing High CPU Utilization
  	Doc ID: 	Note:164768.1

http://www.freelists.org/post/oracle-l/CPU-used-by-this-Session-and-Wait-time




-- V$OSSTAT

Aix 5.3 On Power 5: Does Oracle Recommend 'SMT' Be Enabled Or Not (And Why) (Doc ID 308393.1)
MMNL Process Consuming High CPU (Doc ID 460127.1)
Bug 6164409 - v$osstat shows wrong values for load data (Doc ID 6164409.8)
Bug 6417713 - Linux PowerPC: Dump during startup / during select from V$OSSTAT (Doc ID 6417713.8)
Difference In V$OSSTAT xxx_TICKS and xxx_TIME between 10.1 and 10.2 (Doc ID 433937.1)
Bug 4527873 - Linux: V$OSSTAT view may return no rows (Doc ID 4527873.8)
Bug 3559340 - V$OSSTAT may contain no data on some platforms with large number of CPUs (Doc ID 3559340.8)
Bug 8777336 - multiple kstat calls while getting socket count and core count for v$osstat (Doc ID 8777336.8)
Very large value for OS_CPU_WAIT_TIME FROM V$OSSTAT / AWR Report (Doc ID 889396.1)
Bug 7447648 - HPUX: OS_CPU_WAIT_TIME value from V$OSSTAT is incorrect on HPUX (Doc ID 7447648.8)
KSUGETOSSTAT FAILED: OP = PSTAT_GETPROCESSOR, LOCATION = SLSGETACTIVE () (Doc ID 375860.1)
ADDM reports ORA-13711 error in OEM on HP-UX Itanium (Doc ID 845668.1)
Bug 5010657 - HPUX-Itanium: No rows from V$OSSTAT / incorrect CPU_COUNT (Doc ID 5010657.8)
ORA-7445 (ksbnfy) (Doc ID 753033.1)


-- IO WAIT
PROBLEM : "CPU I/O WAIT" metric has values > 100% on Linux, HP-UX and AIX Hosts [ID 436855.1]





-- IO

I/O Tuning with Different RAID Configurations
 	Doc ID:	Note:30286.1

CHECKLIST-What else can influence the Performance of the Database
 	Doc ID:	Note:148462.1	Type:	

Avoiding I/O Disk Contention
 	Doc ID:	Note:148342.1

Tuning I/O-related waits
 	Doc ID:	Note:223117.1


 	
 	
 	
-- COE ORACLE SUPPORT TOOLS


Doc ID:	Note:301137.1 OS Watcher User Guide
Doc ID:	Note:433472.1 OS Watcher For Windows (OSWFW) User Guide

OSW System Profile - Sample
 	Doc ID:	Note:461054.1
 	
LTOM System Profiler - Sample Output
 	Doc ID:	Note:461052.1
 	
OS Watcher Graph (OSWg) User Guide
 	Doc ID:	Note:461053.1
 	
 	
Performance Tools Quick Reference Guide
 	Doc ID:	Note:438452.1
 	
LTOM - The On-Board Monitor User Guide
 	Doc ID:	Note:352363.1
 	
 	
Linux sys_checker.sh O/S Shell script to gather critical O/S at periodic intervals
 	Doc ID:	Note:278072.1
 	
Linux Kernel: The SLAB Allocator
 	Doc ID:	Note:434351.1
 	
 	
How To Start OSWatcher Every System Boot
 	Doc ID:	Note:580513.1
 	
Diagnostic Tools Catalog
  	Doc ID: 	559339.1

Doc ID 459694.1 Procwatcher Script to Monitor and Examine Oracle and CRS Processes

Script to Collect RAC Diagnostic Information (racdiag.sql)
  	Doc ID: 	135714.1

Script to Collect OPS Diagnostic Information (opsdiag.sql)
  	Doc ID: 	205809.1

STACKX User Guide
  	Doc ID: 	362791.1





-- ADVISORS

PERFORMANCE TUNING USING 10g ADVISORS AND MANAGEABILITY FEATURES
  	Doc ID: 	276103.1



-- LGWR

LGWR and Asynchronous I/O
  	Doc ID: 	422058.1



-- INDEX

Poor IO performance doing index rebuild online after migrating to another storage
  	Doc ID: 	258907.1


-- MULTIBLOCK READ COUNT

SSTIOMAX AND DB_FILE_MULTIBLOCK_READ_COUNT IN ORACLE 7 AND 8
  	Doc ID: 	131530.1



-- INITRANS

INITRANS relationship with DB_BLOCK_SIZE.
  	Doc ID: 	151473.1



-- DUMP

How to Dump Redo Log File Information
  	Doc ID: 	1031381.6

How to Obtain a Segment Header Dump
  	Doc ID: 	249814.1

How To Determine The Block Header Size
  	Doc ID: 	1061465.6

Obtaining systemstate dumps or 10046 traces at master site during snapshot refresh hang 
  Doc ID:  273238.1 





-- WAIT EVENTS

How I Monitor WAITS to help tune long running queries
  	Doc ID: 	431447.1


-- enq: HW - contention
'enq HW - contention' For Busy LOB Segment [ID 740075.1]
How To Analyze the Wait Statistic: 'enq: HW - contention' [ID 419348.1]
Thread: enq: HW - contention waits http://forums.oracle.com/forums/thread.jspa?threadID=644850&tstart=44
http://www.freelists.org/post/oracle-l/enq-HW-contention-waits
http://forums.oracle.com/forums/thread.jspa?threadID=892508
http://orainternals.wordpress.com/2008/05/16/resolving-hw-enqueue-contention/
{{{
http://www.orafaq.com/forum/t/164483/0/
--to allocate extent to the table
alter table emp allocate extent;
--the table has columns named col1 and col2 which are clob
--to allocate extents to the columns 
alter table emp modify lob (col1) (allocate extent (size 10m))
/ 
alter table emp modify lob (col2) (allocate extent (size 10m))
/ 
>> alter table theBLOBtable modify lob (theBLOBcolumn) (allocate extent (instance 1));
>> Remember to include the "instance 1" so space is added below HWM, even if
you're not using RAC (ignore documentation's caution: only use it on RAC).
}}}


-- resmgr: become active
The session is waiting for a resource manager active session slot. This event occurs when the resource manager is enabled and the number of active sessions in the session's current consumer group exceeds the current resource plan's active session limit for the consumer group. To reduce the occurrence of this wait event, increase the active session limit for the session's current consumer group.
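
The fix described above (raising the active session limit) is done through DBMS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE. A sketch that emits the PL/SQL; the plan name, consumer group name, and limit of 20 are hypothetical placeholders:

```shell
# Sketch: emit PL/SQL raising the active session pool for a consumer group.
# MY_PLAN / MY_GROUP and the limit 20 are hypothetical placeholders.
cat <<'EOF'
begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.update_plan_directive(
    plan                    => 'MY_PLAN',
    group_or_subplan        => 'MY_GROUP',
    new_active_sess_pool_p1 => 20);
  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
end;
/
EOF
```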

High "Resmgr:Cpu Quantum" Wait Events In 11g Even When Resource Manager Is Disabled [ID 949033.1]
NOTE:786346.1 - Resource Manager and Sql Tunning Advisory DEFAULT_MAINTENANCE_PLAN
NOTE:756734.1 - 11g: Scheduler Maintenance Tasks or Autotasks
NOTE:806893.1 - Large Waits With The Wait Event "Resmgr:Cpu Quantum"
NOTE:392037.1 - Database Hangs. Sessions wait for 'resmgr:cpu quantum'
No Database User Can Login Except Sys And System because Resource Manager Internal_Quiesce Plan Enabled [ID 396970.1]
Thread: ALTER SYSTEM SUSPEND https://forums.oracle.com/forums/thread.jspa?threadID=852356
<<<
The ALTER SYSTEM SUSPEND - statement halts all input and output (I/O) to datafiles (file header and file data) and control files. The suspended state lets you back up a database without I/O interference. When the database is suspended all preexisting I/O operations are allowed to complete and any new database accesses are placed in a queued state.

ALTER SYSTEM QUIESCE RESTRICTED - Non-DBA active sessions will continue until they become inactive. An active session is one that is currently inside of a transaction, a query, a fetch, or a PL/SQL statement; or a session that is currently holding any shared resources (for example, enqueues). No inactive sessions are allowed to become active. For example, If a user issues a SQL query in an attempt to force an inactive session to become active, the query will appear to be hung. When the database is later unquiesced, the session is resumed, and the blocked action is processed
<<<





-- LOB

LOB Performance Guideline
  	Doc ID: 	268476.1




-- OS TRACE

How to use truss command on IBM AIX 
  Doc ID:  245350.1 

TECH: Using Truss / Trace on Unix 
  Doc ID:  28588.1 

How to Trace Unix System Calls 
  Doc ID:  110888.1 

How to Trace the Forms Runtime Process Using TRUSS/STRACE 
  Doc ID:  275510.1 

Troubleshooting Tips For Spinning/Hanging F60WEBMX Processes 
  Doc ID:  457381.1 

Diagnosing Webforms Hanging 
  Doc ID:  179612.1 

How To Capture A Truss Of F60WEBMX When There Is No Process ID (PID) 
  Doc ID:  438913.1 

How to Run Truss 
  Doc ID:  146428.1 

QREF: Trace commands Summary 
  Doc ID:  16782.1 


Database Startup, Shutdown Or New Connections Hang With Truss Showing OS Failing Semtimedop Call With Err#11 EAGAIN 
  Doc ID:  760968.1 

How To Verify Whether DIRECTIO is Being Used 
  Doc ID:  555601.1 

How To Perform System Tracing For All Forms Runtime Processes? 
  Doc ID:  400144.1 

ALERT: Hang During Startup/Shutdown on Unix When System Uptime > 248 Days 
  Doc ID:  118228.1 

How To Use Truss With Opatch? 
  Doc ID:  470225.1 


How to Troubleshoot Spinning / Runaway Web Deployed Forms Runtime Processes? 
  Doc ID:  206681.1 

ORA-7445[ksuklms] After Upgrade To 10.2.0.4 
  Doc ID:  725951.1 







-- QMN

Queue Monitor Process: Architecture and Known Issues 
  Doc ID:  305662.1 

Queue Monitor Coordinator Process delays Database Opening due to Replication Queue Tables with Large HighWaterMark 
  Doc ID:  564663.1 

'IPC Send Timeout Detected' errors between QMON Processes after RAC reconfiguration 
  Doc ID:  458912.1 

Queue Monitor Coordinator Process consuming 100% of 1 cpu 
  Doc ID:  604246.1 



-- OS TOOLS , SOLARIS

http://developers.sun.com/solaris/articles/tuning_solaris.html




-- SPACE MANAGEMENT

BMB versus Freelist Segment: DBMS_SPACE.UNUSED_SPACE and DBA_TABLES.EMPTY_BLOCKS (Doc ID 149516.1)
Automatic Space Segment Management in RAC Environments (Doc ID 180608.1)
How to Deallocate Unused Space from a Table, Index or Cluster. (Doc ID 115586.1)
When to use DBMS_SPACE.UNUSED_SPACE or DBMS_SPACE.FREE_BLOCKS Procedures (Doc ID 116565.1)




-- LOCKS, ENQUEUES, DEADLOCKS

FAQ about Detecting and Resolving Locking Conflicts
  	Doc ID: 	15476.1

The Performance Impact of Deadlock Detection
  	Doc ID: 	285270.1

What to do with "ORA-60 Deadlock Detected" Errors
  	Doc ID: 	62365.1

Understanding and Reading Systemstates
  	Doc ID: 	423153.1

Tracing sessions: waiting on an enqueue
  	Doc ID: 	102925.1

WAITEVENT: "enqueue" Reference Note
  	Doc ID: 	34566.1

VIEW: "V$LOCK" Reference Note
  	Doc ID: 	29787.1

TX Transaction locks - Example wait scenarios
  	Doc ID: 	62354.1

ORA-60 DEADLOCK DETECTED ON CONCURRENT DML INITRANS/MAXTRANS
  	Doc ID: 	115467.1

OERR: ORA 60 "deadlock detected while waiting for resource"
  	Doc ID: 	18251.1

ORA-60 / Deadlocks Most Common Causes
  	Doc ID: 	164661.1

Credit Card Authorization Slow And 'row lock contention'
  	Doc ID: 	431084.1

Deadlock Error Not in Alert.log and No Trace File Generated on OPS or RAC
  	Doc ID: 	262226.1

How to Interpret the Different Types of Locks in Lock Manager 1.6
  	Doc ID: 	75705.1









{{{
This is a thorough and systematic performance review and a comprehensive report will be given. 
No changes or tuning will be done during the activity. From the detailed report we could do another engagement acting on the bottlenecks found. 

--------------------------------------------------------------------------------
The Tuning Document

1) Infrastructure Overview

2) Recommendations

3) Performance Summary

4) Operating System Performance Analysis
    - CPU
    - Memory
    - Swap
    - Storage
    - Network

5) Oracle Performance Analysis
  Database Bottlenecks - this includes but is not limited to the following:
    - Stress on the database server's components (CPU, IO, Memory, Network) during low and peak periods using Linear Regression Analysis
    - ETL period / Ad hoc reports  affecting database server performance
    - Issues on particular wait events
    - Configuration issues, example would be Parallelism parameters
    - Long running SQLs
    - etc.

6) Application Performance Analysis
  Top SQLs
    - Top SQLs - SELECT
    - Top SQLs - INSERT
    - Top SQLs - UPDATE
    - Top SQLs - MERGE
    - Top SQLs - PARALLEL
    - Unstable execution plans

7) References and Metalink Notes


--------------------------------------------------------------------------------
Things needed prior and during the activity

Below are the documents we need before the activity:
1) Most recent RDA of the database 
2) Hardware, Storage, and network architecture that includes the Database, Application Server, BI environment 
3) Hardware, Storage (raw and usable), and network make and model (plus specs)
4) Workload period of the following (day and time of the month):
- work hours
- peak and off peak
- ETL period
- reports period
- (OLTP) transaction processing
- backup (RMAN, filesystem copy, tape, SAN mirroring)


Here are the things that we need during the tuning activity:
1) It is critical to have AWR/Statspack data, ideally it should represent the following workload periods:
- work hours
- peak and off peak
- ETL period
- reports period
- (OLTP) transaction processing
- backup (RMAN, filesystem copy, tape, SAN mirroring)

The snap period (interval) should be at least 15 minutes, and the retention should be at least 30 days to have enough data samples for workload characterization.
AWR requires the Diagnostics Pack license; Statspack is a free tool. Either one should be installed.

2) SAR data of the database server


Below are some of the tools that will be used during the activity:
•	OSWatcher (Oracle OS Watcher) - Reports CPU, RAM, and network stress; an alternative for monitoring Oracle servers
•	Perfsheet (Performance Visualization) – For session monitoring; uses an Excel sheet
•	Ashmon (Active Session Monitoring) – For monitoring database sessions
•	Lab 128 (trial software) – Tool for Oracle tuning and monitoring, and for tracing SQL/stored procedure transactions
•	SQLTXPLAIN (Oracle Extended Explain Plan Statistics) – Provides details about all schema objects on which the SQL statement depends
•	OraSRP (Oracle Session Resource Profiler) – Builds a complete, detailed session profile
•	Snapper (Oracle Session Snapper) - Reports Oracle session-level performance counters and wait information in real time
•	Oracle LTOM (Oracle Lite Onboard Monitor) – Provides automatic session tracing
•	AWR r2toolkit - A toolkit for workload characterization and forecasting
•	gxplan - Visualization of explain plans
}}}

References:
Total Performance Management http://www.allenhayden.com/cgi/getdoc.pl?file=perfmgmt.pdf
https://www.evernote.com/shard/s48/sh/e654bbad-d3e9-4ea4-b162-be9f2e7f736e/e8201e4222b04cba01ead70347fc5c77  ''<-- details snapper''

! another good format

{{{

1) Overview	
	* State the problem

2) Database and Workload Overview

		Configuration
			* Describe the platform/configuration/environment
		Workload
			* workload patterns, specific jobs during day/night

3) Performance Summary	

		itemize the findings
			* State what you found. List things in order of importance (impact). Be concise. Show 1 graph that illustrates your observation even though you may have 3. These items should all be measurable, ie. # connect/disconnect per hour, IO’s per second, etc.

4) Recommendations	

		short term
			* High impact items that can be done within a 1 week to 1 month. Or items that are so easy to implement that it makes sense to just get them done and checked off the list. Items that must be done before a good reading may be collected, or higher impact items may be done.
		near term
			* Items that will take 2-3 months to implement.
		long term
			* Items that will take an extended time to implement due to the size of the effort or the development cycle.

5) Conclusion
	* Summarize your report in 1-2 paragraphs. 

6) Appendix A - details on specific issues
	* Here is where you will put all the details. You can tell your stories here. Reference your stories, graphs, etc. from the body of your report (Observations, Recommendations)
	...	
   Appendix B - References	

}}}


! another good format 
{{{
EXECUTIVE SUMMARY	- Not all reports deserve a Table of Contents (ToC) and an Executive Summary (ES). Use both only if your report is becoming too lengthy; otherwise remove both the ToC and the ES.
An Executive Summary should be detailed enough for someone to read and have a good idea of what is going on. At the same time it must be brief, factual, and fluid. Some people will only read this section. Proofread it as much as you can.
	FINDINGS - Include in this section your findings at the highest level possible. The list below is just an example; you will have a different list. The order of this list of findings loosely matches the high and medium impact items of your Report	
	RECOMMENDATIONS - There should be almost a one to one relationship between Findings and Recommendations bullets. But sometimes one Finding may spawn two Recommendations, or two Findings can be solved with one Recommendation	
OVERVIEW - State the problem. This section should be short. Try to keep it that way. Describe the WHY this engagement and WHAT are the goals. This section is usually less than one page long. Anything between half a page and two pages is fine. 
Putting one graph is OK, if it provides some high level view of what is this engagement about. 
This section is like a high-level situation summary. It tells the story that brought us here, but it is brief.	
SYSTEM CONFIGURATION - Describe the platform/configuration/environment. This section describes hardware and software but not the applications. It includes version of database and size.  If there is anything installed on the system other than the database, it is briefly described here. 
It includes CPU, Memory and Storage characteristics. On Exadata systems this section defines:  Exadata, CRS and Database versions; System model (X2-?, X3-?, X4-?); size of Rack; ASM. In single instance systems, this section may take less than ½ page. On RAC it may take more than ½. And in Exadata it may take one full page or even more. Please try not go over 2 pages. Use a simple Word Table, or just Tabs.	
APPLICATIONS - Description of the applications on this database. What they do, users, amount of data, growth, major interfaces. List concerns if any. Sometimes DBAs know little about the applications, so you may need to talk to the Developers, or to some users.	
FINDINGS	
	HIGH IMPACT	- Most important finding goes here. If you end up with 20 items as high, 10 as medium and 5 as low, that is fine. Ideally, you want to balance these 3 lists, but do not force this balance
	MEDIUM IMPACT - First finding that is important but not “that” important to make it on the first list. We do not want everything on one “everything-is-urgent” list
	LOW IMPACT - First finding that may need a change, but if we don’t implement it that is ok. For example: number of sessions is kind of high. If sessions were very high we may list it under medium
	NO IMPACT - First finding that is clean or simply does not affect the system. For example: system statistics are not collected and have default values. Or, redo log is healthy
RECOMMENDATIONS	
	SHORT TERM	 - Items that can and should be implemented soon (usually within a week or a month). Or items that are so easy to implement that it makes sense to just get them done and checked off the list. Items that must be done before a good reading may be collected, or other higher impact items may be done. It thus is possible to have a medium or low impact item with a recommendation on this list, and pushing a high impact finding to the near term list if the former is a requirement for the latter. 
For the most part, use your common sense. You want in this list those items that can be, or that should be implemented sooner than the rest.
	NEAR TERM - Items that may take longer to implement (usually a month or more). It is common to list here those items that have to be implemented soon, but may require first the implementation of some of the “short term” items. Or items that need some coordination, like changes to the OS.
	LONG TERM - Items that may take an extended time to implement due to the size of the effort or the development cycle (could be two or three months). For example: an upgrade	
CONCLUSION - Summarize your report in less than one page if possible. Be positive in your closing remarks (and be factual all the time).
This section is like a high-level action plan. It tells the story of what needs to be done in order to improve the health of the system, but it is brief.
	
APPENDICES	- Here is where you will put all the details and most of your graphs and cut&paste pieces. You can also tell more about your stories here. Use “Heading 2” style for each section of your appendices. For code or trace text, use font courier size 8 dark blue to make it more readable, while reducing footprint. 
You may reference into here all your stories, graphs, etc. from the body of your report (Findings and Recommendations). 
If you make a reference to a document provided by customer, you can cut and paste that particular piece, then place the actual document under a folder “Sources”. Refer to that document using its actual file name, for example awrrpt_1_10708_10709.txt.
Directory Structure Example:
•	Folder: Customer Name Health-Check Report and Supporting Docs v09
o	File: Customer Name Health-Check Report v09.docx
o	File: Customer Name Health-Check Report v09.pdf
o	Folder: Sources
	Zip: alert_logs.zip
	Zip: ash.zip
	Zip: awr_tool_kit.zip
	Zip: exachk.zip
	File: Some_reference_document.pdf 
A good size for a report is anywhere between 10 and 30 pages. For a one-week health-check, producing a report that is 10 to 20 pages long is fine. For a 3 weeks engagement, a report that is 20 to 30 pages long is normal. If you find yourself with a report that is 50+ pages then most probably you want to remove some big chunks and make them separate files under the Sources directory. Then have a short summary and a reference in your report instead. 
Avoid cluttering your report since it makes it harder for everyone to read it. So, keep it simple, factual, bullet oriented and with a nice and natural flow from paragraph to paragraph, and from section to section.

}}}


! another good format I used before

[img[ https://i.imgur.com/dCSgpMF.png ]]









! commands 

{{{


@snapper out 1 120 "select sid from v$session where status = 'ACTIVE'"
@snapper all 1 5 qc=276
@snapper ash=sql_id+sid+event+wait_class+module+service,stats 5 5 qc=138
@snapper ash=sql_id+sid+event+wait_class+module+service,stats 5 1 sid=2164
@snapper ash=sql_id+sid+event+wait_class+module+service,stats 5 5 "select sid from v$session where program like 'nqsserver%'"
@snapper ash=event+wait_class,stats,gather=ts,tinclude=CPU,sinclude=redo|reads|writes 5 5 "select sid from v$session where username like 'DBFS%'"   <-- get sysstat values
@snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session"  <-- ALL PROCESSES - start with this!
@snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats,gather=a 5 5 "select sid from v$session"  <-- ALL PROCESSES - with BUFG and LATCH
@snapper ash=event+wait_class,stats,gather=tsw,tinclude=CPU,sinclude=redo|reads|writes 5 5 "select sid from v$session where username like 'USER%' or program like '%DBW%' or program like '%CKP%' or program like '%LGW%'"    <-- get ASM redundancy/parity test case
@snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session where username = 'DBFS' or program like '%SMC%' or program like '%W00%'"                                         <-- get DBFS and other background processes
@snapper ash=sql_id+sid+event+wait_class+module+service,stats 5 5 ALL
@snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 1374
@snapper ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session where username = 'DBFS'"
-- the snapperloop, copy the snapperloop file in the same directory then do a spool then run any of the commands below
@snapperloop ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 263
@snapperloop ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session where username = 'SYSADM' and module = 'EX_APPROVAL'"
@snapperloop ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session where username = 'SYSADM'"
@snapperloop ash=sql_id+sid+event+wait_class+module+service+blocking_session+p2+p3,stats 5 5 "select sid from v$session where username = 'SYSADM' and sid = 821"
@snapper "stats,gather=s,sinclude=total IO requests|physical.*total bytes" 10 1 all <-- get the total IO MB/s


-- snapper manual begin and end
select sid from v$session where sql_id = '8gqt47kymkn6u'
set serveroutput on
var snapper refcursor
@snapper4.sql all,begin 5 1 1538
@snapper4.sql all,end 5 1 1538


-- for non-exadata
vmstat 2 100000000 | while read line; do echo "`date +%T`" "$line" ; done  >> vmstat_1.txt
iostat -xcd 1 100000 |  while read line; do echo "`date +%T`" "$line" ; done >> iostat_1.txt
mpstat -P ALL 1 100000 | while read line; do echo "`date +%T`" "$line" ; done  >> mpstat_1.txt
while : ; do top -c -n 45; echo "--"; sleep 1; done | while read line; do echo "`date +%T`" "$line" ; done  >> top_1.txt

-- sort the output by IO latency
$ less iostat_1.txt  | sort -rnk11 | less
$ less snapper1.txt | grep -i "db file" | sort -rnk5 -t','  | less
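-- the sort -rnk trick above is just a numeric reverse sort on one field; self-contained demo (sample data is made up, field numbers vary by iostat format):

```shell
# Demo of the numeric reverse sort used above.
# Sample lines are made up: device, iops, await(ms); sort descending on field 3.
printf 'sda 10 5.2\nsdb 20 9.8\nsdc 15 1.1\n' | sort -rnk3
```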

-- for exadata
while : ; do dcli -l root -g all_group uptime >> uptime.txt; echo "-"; sleep 2 ; done
dcli -l root -g all_group --vmstat 2 >> vmstat.txt
less /opt/oracle.oswatcher/osw/archive/oswtop/pd01db03.us.cbre.net_top_12.03.08.1600.dat.bz2 | grep -A20 "load average:" > loadspike.txt
-- metric_iorm.pl with timestamps (spools with a timestamp, e.g. 08:35:51)
while : ; do ./metric_iorm.pl >> metriciorm_1.txt; echo "--"; sleep 10; done | while read line; do echo "`date +%T`" "$line" ; done 
-- iorm.sh wrapper around metric_iorm.pl
[root@enkcel04 ~]# cat iorm.sh
dcli -l root -g /root/cell_group -x metric_iorm.pl | while read line; do echo "`date +%T`" "$line" ; done
while : ; do ./iorm.sh ; echo "---" ; sleep 10; done >> iorm.txt
cat iorm.txt | grep "Total Disk Throughput"
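To turn the collected "Total Disk Throughput" lines into a single number, an awk one-liner works. A sketch with synthetic stand-in lines, since I'm assuming the value is the second-to-last field of each line (as in `Total Disk Throughput: 400 MB/s`):

```shell
# synthetic stand-ins for two metric_iorm.pl samples collected in iorm.txt
printf 'Total Disk Throughput: 400 MB/s\nTotal Disk Throughput: 600 MB/s\n' > iorm.txt

# average the throughput across samples; $(NF-1) is the numeric field
grep "Total Disk Throughput" iorm.txt | awk '{sum += $(NF-1); n++} END {print sum/n " MB/s average"}'
# → 500 MB/s average
```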


-- solaris
vmstat 1 100000 | while read line; do echo "`date +%T`" "$line" ; done  >> vmstat_1.txt
iostat -xnc 1 100000 |  while read line; do echo "`date +%T`" "$line" ; done >> iostat_1.txt
while : ; do top -c -n 45; echo "--"; sleep 1; done | while read line; do echo "`date +%T`" "$line" ; done  >> top_1.txt
prstat -mL |  while read line; do echo "`date +%T`" "$line" ; done >> prstat_1.txt
mpstat 1 100000 | while read line; do echo "`date +%T`" "$line" ; done  >> mpstat_1.txt


-- solaris lockstat and prstat
lockstat -o lockstat.out5 -C -i 997 -s10 -D20 -n 1000000 sleep 60
lockstat -C sleep 5 > lockstat-C.out
lockstat -H sleep 5 > lockstat-H.out
lockstat -kIW sleep 5 > lockstat-kIW.out
lockstat -kgIW sleep 5 > lockstat-kgIW.out
lockstat -I sleep 5 > lockstat-I.out
/usr/bin/prstat -Z -n 1 5 40 > prstatz.out2
/usr/bin/prstat -mL 5 40 >prstatml.out2

less lockstat.out5 | grep "% " | sort -rnk1 | less
less lockstat-C.out | grep -B1 -A1 Hottest | sort -nk1
less lockstat-H.out | grep -B1 -A1 Hottest | sort -nk1
less lockstat-kIW.out | grep -B1 -A1 Hottest | sort -nk1
less lockstat-kgIW.out | grep -B1 -A3 Hottest | sort -nk1
less prstatz.out2
less prstatml.out2

-- solaris ASM troubleshooting 
truss -aefo asm.out asmcmd ls DATASBX/TGR
export DBI_TRACE=1
asmcmd
http://www.brendangregg.com/DTrace/iotop
./iotop -CP 5 10





-- aix 
lparstat 10 1000000 | while read line; do echo "`date +%T`" "$line" ; done >> lparstat.txt &
-- then sort the output:
cat lparstat.txt | sort -rnk6 | more
iostat -DRTl 10 100
iostat -st 10 100


-- quickly kill a session
* top -c
* copy the output to kill.txt
* on the command line run:
Karl@Karl-LaptopDell ~/home
$ cat kill.txt | grep "biprd2 (" | awk '{print "kill -9 " $1}'
kill -9 14349
kill -9 15425
kill -9 3735
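The same pipeline can write the generated commands to a script for review before anything is actually killed; a sketch with synthetic stand-in lines for the saved top output (the `biprd2 (` pattern comes from the session above):

```shell
# synthetic stand-ins for the top output saved in kill.txt
printf '14349 oracle biprd2 (LOCAL=NO)\n3735 oracle biprd2 (LOCAL=NO)\nother line\n' > kill.txt

# generate the kill commands; review kill.sh first, then run it with: sh kill.sh
grep "biprd2 (" kill.txt | awk '{print "kill -9 " $1}' > kill.sh
cat kill.sh
```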



}}}





http://books.perl.org/topx
http://use.perl.org/~Ovid/journal/29332
http://www.perlmonks.org/?node_id=543480
http://www.amazon.com/Only-the-best-Perl-books/lm/1296HDTC2HVBH




-- perl pattern matching
http://www.tjhsst.edu/~dhyatt/perl/exA.html
http://www.addedbytes.com/cheat-sheets/regular-expressions-cheat-sheet/
http://xenon.stanford.edu/~xusch/regexp/analyzer.html
http://stackoverflow.com/questions/8286796/how-to-debug-perl-within-a-bash-wrapper
http://www.thegeekstuff.com/2010/05/perl-debugger/
http://www.mail-archive.com/beginners@perl.org/msg92480.html



{{{
$ cat dbd-test.pl
#!/u01/app/oracle/product/11.2.0/dbhome_1/perl/bin/perl

use DBI;

$dbname   = 'paprd1';
$user     = 'system';
$password = 'Ske1et0n';
$dbd      = 'Oracle';
$conn     = "dbi:$dbd:$dbname";

print "Connecting to database\n";
$dbh = DBI->connect( $conn,$user,$password);
$cur = $dbh->prepare('select tablespace_name from dba_tablespaces');
$cur->execute();
while (($tablespace_name) = $cur->fetchrow) {
   print "$tablespace_name\n";
}

}}}

Logical Reads vs Physical Reads
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6643159615303

Do fast full index scans do physical disk reads?
http://www.mail-archive.com/oracle-l@fatcity.com/msg23688.html
* smartd is a really useful tool and there's good documentation here http://smartmontools.sourceforge.net/badblockhowto.html, plus a calculator you can use http://homepage2.nifty.com/cars/misc/chs2lba.html
* it all boils down to replacing the HD.. but first you need to tie the failed block device to its physical serial number and location on the motherboard.. 

http://forums.fedoraforum.org/showthread.php?t=122196
http://www.linuxjournal.com/content/know-when-your-drives-are-failing-smartd
http://serverfault.com/questions/64239/physically-identify-the-failed-hard-drive
http://www.linuxquestions.org/questions/linux-general-1/how-do-physically-identify-a-failed-raid-disk-561021/

http://www.techrepublic.com/blog/opensource/using-smartctl-to-get-smart-status-information-on-your-hard-drives/1389
''smart gui tool'' http://unixfoo.blogspot.com/2009/03/gsmartcontrol-gui-for-smartctl.html
http://morebigdata.blogspot.com/2012/09/pignalytics-pigs-eat-anything-reading.html
https://software.intel.com/en-us/articles/pin-a-dynamic-binary-instrumentation-tool
http://moluccan.co.uk/Joomla/index.php/crib-sheet/287-pinning-cluster-nodes
http://www.filibeto.org/sun/lib/nonsun/oracle/11.2.0.1.0/E11882_01/install.112/e10816/postinst.htm#BABGIJDH
http://blog.ronnyegner-consulting.de/2010/03/11/creating-a-oracle-10g-release-2-or-11g-release-1-database-on-a-11g-release-2-cluster/
<<showtoc>> 

<<<
@@
Starting with version 12.1.0.2, the "Plan Line ID" was introduced in SQL Monitor - Plan Tab. For versions below 12.1.0.2, use SQL Developer 4.2 to get the "Plan Line ID" on the Plan Tab
@@
<<<

<<<
@@ What's important about the Plan Line ID ? @@

On SQL Monitor reports the first two sections are the Plan Statistics and Plan tabs. The way you correlate these two pieces of information is through the Plan Line ID. 
* Plan Statistics - is where you check where the time is being spent on the execution plan
* Plan Tab - contains the critical information for drilling down further (see wiki [[“Object Name”, “Access Predicates”, “Filter Predicates”, “Projection”]]) on the specific part of the SQL text  
@@Imagine ''__without__'' the Plan Line ID on the Plan Tab section of SQL Monitor: you are reading a 1000-line SQL execution plan and the bottleneck is on line 342. What would you do? Well, you start from line 1, hit the down arrow key, and count 342 times until you reach that specific plan line.@@ As a consultant dealing with remote databases, if this is the only info you have it's not a good troubleshooting experience, and you are better off getting this info using DBMS_XPLAN.DISPLAY_CURSOR (Column Projection and Predicate filter/access Information sections)

On SQL Developer 4.2 there's an enhanced "Real Time SQL Monitoring viewer" where the Plan Tab section's Plan Line ID is aligned with “Obj”, “Access”, “Filter”, “Projection”, “QBlock” - all of that info exposed in every row, no more clicking on every line (yes, you have to click on every line of the SQL Monitor html report to expose the filter/access predicates). This improvement is very powerful and makes troubleshooting very easy. 

In SQL Monitor or SQL Developer's "Real Time SQL Monitoring viewer" you have to flip back and forth between the Plan Statistics and Plan tabs. I personally like having the "time spent" and "other plan info" shown all in one line. That's why I use @@DB Optimizer - my go-to SQL profiling/tuning tool (see the bottom section of this post)@@ whenever I'd like to dig deeper into the business logic behind the SQL. It makes it easy to go through hundreds of lines of exec plan with all info in one line, VST and SQL_TEXT (with dynamic highlighting) on the top, and system level and session level profiling on another tab. And all of this can be saved offline! :)  

<<<



! 12.1.0.2
!! yes Plan Line ID on SQL Monitor Plan Tab was introduced
<<<
Plan Statistics    
[img(95%,95%)[http://i.imgur.com/azLlOxk.png]]
Plan Tab
[img(95%,95%)[http://i.imgur.com/MGU1JYt.png]]
<<<

! 12.1.0.1
!! no Plan Line ID on SQL Monitor Plan Tab 
<<<
Plan Statistics
[img(95%,95%)[http://i.imgur.com/fojDlku.png]]
Plan Tab
[img(95%,95%)[http://i.imgur.com/Z0E9WyY.png]]
<<<

! @@ How to get Plan Tab - Plan Line ID from 11.2 (possibly even 11.1, when SQL_PLAN_LINE_ID was introduced) to 12.1.0.1 ??? @@ <- Use SQL Developer 4.2 
On version 4.2 of SQL Developer they enhanced the "Real Time SQL Monitoring viewer" http://www.oracle.com/technetwork/developer-tools/sql-developer/sqldev-newfeatures-v42-3211987.html 
!! 12.1.0.1 SQL Developer -> Tools -> Real Time SQL Monitor
<<<
Plan Statistics
[img(95%,95%)[http://i.imgur.com/cQ5ctHG.png]]
Plan Tab
[img(95%,95%)[http://i.imgur.com/zWrVlDL.png]]
<<<

!! 11.2.0.4 SQL Developer -> Tools -> Real Time SQL Monitor
<<<
Plan Statistics
[img(95%,95%)[http://i.imgur.com/AnPiIE3.png]]
Plan Tab
[img(95%,95%)[http://i.imgur.com/HcMYsfA.png]]
<<<

! DB Optimizer - my go-to SQL profiling/tuning tool 
<<<
@@DB Optimizer - Tuning Tab@@

* Here I can quickly correlate where the time is being spent vs the logic behind the SQL (VST - Visual SQL Tuning Diagram) vs the SQL TEXT. When I click on the SKEW (B) object, the corresponding sections in the SQL TEXT with any association to SKEW (B) get highlighted. The VST Diagram area also shows the row counts and filter ratios/rows between joined tables, as well as the join method used and execution path (dark green - START, red - FINISH). Plus the object details (indexes, tables, stats, histograms) around the SQL_ID. 
* Here we are doing a many-to-many join of the SKEW table (aliases A to D). As we join step by step from A to D, you can see in the "Actual Statistics" section's "CR Buffer Gets" column that the number increases linearly per table join: from 1.67M (A->B) to 3.3M (B->C) to 4.9M (C->D). That's why this is a CPU/LIO intensive SQL. 
* It produces these row source stats by injecting the gather_plan_statistics hint when you run the SQL. So yes, the SQL has to finish to get the "Actual Statistics" data, and here I modified the SQL from 10000000 to 1000 rows. In cases where I can't easily do this "trick" to produce the "Actual Statistics" and the SQL would run for hours, I would just grab the VST diagram and Execution Plan from DB Optimizer and correlate that info with SQL Monitor. The pain here is that DB Optimizer doesn't show the Plan Line ID, arghh. But it allows wildcard search on the execution plan, so I can just key in the specific operation, and from there highlight and get to my Plan Line ID and do the correlation. This issue is not a deal breaker for me, but I hope they fix it in the next release :) 

[img(95%,95%)[http://i.imgur.com/5bY7S3R.png]]

While doing SQL tuning on one tab, you can also do system level and session level profiling on another tab. This interface looks like OEM, and it's very powerful, fast, and RAC-aware. Drilling down from System to Session level is very easy. And I can read the PL/SQL packages/procedures right away to see where the SQL_ID bottleneck is coming from. 


@@DB Optimizer - Profiling Tab@@
[img(95%,95%)[http://i.imgur.com/ptkNUEL.png]]

And the beauty of this is that I can save my profiling and SQL tuning sessions offline, together with my screenshots, SQL Monitor, and SQLD360 files. 
[img(50%,50%)[http://i.imgur.com/TQYkUF4.png]]

<<<


! the testcase SQL I used 
{{{
create table hr.skew as select * from dba_objects; 

select /*+ monitor ordered
        use_nl(b) use_nl(c) use_nl(d)
        full(a) full(b) full(c) full(d) */
    count(*)
from
    hr.skew a,
    hr.skew b,
    hr.skew c,
    hr.skew d
where
    a.object_id = b.object_id
and b.object_id = c.object_id
and c.object_id = d.object_id
and rownum <= 10000000;
}}}
















''MindMap - Plan Stability'' http://www.evernote.com/shard/s48/sh/727c84ca-a25e-4ffa-89f9-4d1e96c471c4/dcad83781f8a07f8983e26fbb8c066a3

''Plan Stability - Apress Book (bind peek, ACS, dynamic sampling, cardinality feedback)'' - https://www.evernote.com/shard/s48/sh/013cd51e-e484-49ac-911b-e01bdd54ac06/ce780dd4ca02d3d0b72b493acf8c33fd


http://coskan.wordpress.com/2011/01/26/plan-stability-through-upgrade-to-11g-introduction/
<<<
1-Introduction
2-Building the test
3-Why is my plan changed?-bugfixes : how you can find which bug fix may have caused your plan change
4-Why is my plan changed?-new optimizer parameters : how you can find which parameter change/addition may have caused your plan change
5-Why is my plan changed?-extra nested loop : the new nested loop step you will see after an 11G upgrade
6-Why is my plan changed?-stats : I will try to explain how to work out whether your stats are the problem
7-Why is my plan changed?-adaptive cursor sharing : I will talk a “little” about adaptive cursor sharing, which may cause different plans for binded sqls after upgrade
8-Opening plan change case on MOS-SQLT : I will try to save you the time you spend with Oracle Support when you raise a call for post-upgrade performance degradation
9-Plan Baselines-Introduction : What plan baselines are and how they work
10-Plan Baselines-Using SQL Tuning sets : How to create plan baselines from a tuning set?
11-Plan Baselines-Using SQL Cache : How to create plan baselines from the SQL Cache?
12-Plan Baselines-Moving Baselines : How to move your plan baselines between databases?
13-Plan Baselines-Faking Baselines : How to fake plan baselines?
14-Plan Baselines-Capturing Baselines : How to capture baselines?
15-Plan Baselines-Management : How to manage your baselines?
16-Testing Statistics with Pending Stats : I’ll go through how you can use pending statistics during upgrades
17-Comparing Statistics : I’ll explain comparing the statistics
18-Cardinality Feedback Feature : I’ll go through the new built-in cardinality feedback feature, which may cause problems
19-Where is the sqlid of active session ? : I’ll show you how you can find your sql_id when it is null
20-Testing hintless database : I’ll explain how you can get rid of hints
21-Upgrade Day/Week : What needs to be ready for a smooth upgrade?
22-Before after analysis-mining problems : How you can spot possible problems by comparing tuning sets
23-Before after analysis-graphs to sell : Using perfsheet to sell your work
24-Further Reading : Compilation of references I used during the series and some helpful links
25-Tools used : Index of the tools I used during the series
<<<

Part 1 - http://avdeo.com/2011/06/02/oracle-sql-plan-management-part-1/
Part 2 - http://avdeo.com/2011/06/07/oracle-sql-plan-management-%e2%80%93-part-2/
Part 3 - http://avdeo.com/2011/08/07/oracle-sql-plan-management-%E2%80%93-part-3/

{{{
http://kerryosborne.oracle-guy.com/2008/09/sql-tuning-advisor/
http://kerryosborne.oracle-guy.com/2008/10/unstable-plans/
http://kerryosborne.oracle-guy.com/2008/10/explain-plan-lies/
  http://kerryosborne.oracle-guy.com/2008/12/oracle-outlines-aka-plan-stability/
  http://kerryosborne.oracle-guy.com/2009/03/bind-variable-peeking-drives-me-nuts/
  http://kerryosborne.oracle-guy.com/2009/04/oracle-sql-profiles/
  http://kerryosborne.oracle-guy.com/2009/04/do-sql-plan-baselines-use-hints/
  http://kerryosborne.oracle-guy.com/2009/04/do-sql-plan-baselines-use-hints-take-2/
  http://kerryosborne.oracle-guy.com/2009/05/awr-dbtime-script/


http://oracle-randolf.blogspot.com/2009/03/plan-stability-in-10g-using-existing.html
http://jonathanlewis.wordpress.com/2008/03/06/dbms_xplan3/

http://antognini.ch/papers/SQLProfiles_20060622.pdf
}}}



hourim.wordpress.com series 
https://hourim.wordpress.com/2021/07/28/why-my-execution-plan-has-not-been-shared-part-7/
https://hourim.wordpress.com/2020/05/10/why-my-execution-plan-has-not-been-shared-part-6/
https://blog.toadworld.com/2017/06/13/why-my-execution-plan-has-not-been-shared-part-v
https://blog.toadworld.com/2017/05/05/why-my-execution-plan-has-not-been-shared-part-iv
https://blog.toadworld.com/why-my-execution-plan-has-not-been-shared-part-iii
https://blog.toadworld.com/why-my-execution-plan-has-not-been-shared-part-ii
https://blog.toadworld.com/why-my-execution-plan-has-not-been-shared-part-i




ACS - adaptive cursor sharing 





http://www.nocoug.org/download/2012-05/Kevin_Closson_Modern_Platform_Topics.pdf

actual presentation at OakTable World http://www.youtube.com/watch?v=S8Ih1NpOlNI#start=0:00;end=37:47;cycles=-1;autoreplay=false;showoptions=false
{{{
CBS -> Newest Full Episodes
ESPN3
Fox News -> Latest News
LiveNews -> Bloomberg
Youtube -> Most Popular
MTV -> Shows
}}}
http://en.wikipedia.org/wiki/Polynomial_regression
http://office.microsoft.com/en-gb/help/choosing-the-best-trendline-for-your-data-HP005262321.aspx
http://newtonexcelbach.wordpress.com/2011/02/04/fitting-high-order-polynomials/
http://stats.stackexchange.com/questions/19555/finding-degree-of-polynomial-in-regression-analysis
http://www.purplemath.com/modules/polyends.htm
http://www.algebra.com/algebra/homework/Polynomials-and-rational-expressions/Polynomials-and-rational-expressions.faq.question.538327.html
How To Setup ASM (10.2 & 11.1) On An Active/Passive Cluster (Non-RAC). [ID 1319050.1]  <-- a different variety..
How To Setup ASM (11.2) On An Active/Passive Cluster (Non-RAC). [ID 1296124.1]

http://blogs.oracle.com/xpsoluxdb/entry/clusterware_11gr2_setting_up_an_activepassive_failover_configuration  <-- GOOD STUFF using ACTION_SCRIPT


<<showtoc>>


! tour 
https://www.pluralsight.com/courses/tekpub-postgres



! workflow 
! installation  and upgrade
! commands
! performance and troubleshooting
!! sizing and capacity planning
!! benchmark
!! troubleshooting 
https://www.slideshare.net/SvetaSmirnova/performance-schema-for-mysql-troubleshooting-75654421

! high availability 
! security

















.
<<showtoc>>

! merge 
https://wiki.postgresql.org/wiki/MergeTestExamples
https://blog.dbi-services.com/postgres-vs-oracle-access-paths-0/
<<<
Postgres vs. Oracle access paths – intro
Postgres vs. Oracle access paths I – Seq Scan
Postgres vs. Oracle access paths II – Index Only Scan
Postgres vs. Oracle access paths III – Partial Index
Postgres vs. Oracle access paths IV – Order By and Index
Postgres vs. Oracle access paths V – FIRST ROWS and MIN/MAX
Postgres vs. Oracle access paths VI – Index Scan
Postgres vs. Oracle access paths VII – Bitmap Index Scan
Postgres vs. Oracle access paths VIII – Index Scan and Filter
Postgres vs. Oracle access paths IX – Tid Scan
Postgres vs. Oracle access paths X – Update
Postgres vs. Oracle access paths XI – Sample Scan
<<<
''18000 mAh - .4 kilos'' http://www.buy.com/prod/energizer-xp18000-emergency-power-for-notebooks/q/loc/111/212003408.html
''6,000 mAh - .2 kilos'' http://www.zagg.com/accessories/zaggsparq.php

http://www.energizerpowerpacks.com/us/products/xp8000/
http://www.energizerpowerpacks.com/us/products/xp18000/
http://pc.mmgn.com/Forums/social/Energizer-XP-18000-Universal-P
emc118561 Sistina LVM2 is reporting duplicate PV on RHEL
emc120281 How to set up a Linux host to use emcpower devices in LVM

Configuring Oracle ASMLib on Multipath Disks on Linux [ID 394956.1]   <-- not detailed EMC Powerpath
Configuring Oracle ASMLib on Multipath Disks [ID 309815.1]  <-- detailed EMC Powerpath
ORA-15072 when creating a diskgroup with external redundancy [ID 396015.1] <-- If EMC based storage but use the normal Linux multipath driver is used, then the following map settings should be set in /etc/sysconfig/oracleasm
How to List the Single Path Devices for an EMC PowerPath Multipathing Device [ID 420839.1] <-- EMC Powerpath on 2.4 kernel
How To Setup ASM on Linux Using ASMLIB Disks, Raw Devices or Block Devices? [ID 580153.1] <-- mentions 10gR2 and 11gR2 configuration
ASM 11.2 Configuration KIT (ASM 11gR2 Installation & Configuration, Deinstallation, Upgrade, ASM Job Role Separation. [ID 1092213.1]  <-- ASM 11gR2
Oracle ASM and Multi-Pathing Technologies [ID 294869.1]  <-- details all the multipathing technologies!!!
http://www.oracle.com/technetwork/database/asm.pdf <-- another guide more comprehensive that details multipathing technologies!!!
http://www.oracle.com/technetwork/topics/linux/multipath-097959.html Configuring Oracle ASMLib on Multipath Disks 
http://www.oracle.com/technetwork/database/device-mapper-udev-asm.pdf Configuring udev and device mapper for Oracle RAC 10g Release 2 on SLES9
http://www.emcstorageinfo.com/2007/07/emc-powerpath-pseudo-devices.html <-- nice visualization of EMC Powerpath
http://goo.gl/xCjAi <-- Powerpath install guide
http://www.oracle.com/technetwork/database/netapp-asm3329-129196.pdf <-- Netapp ASM

Master Note for Automatic Storage Management (ASM) [ID 1187723.1]
Consolidated Reference List Of Notes For Migration / Upgrade Service Requests [ID 762540.1]  <-- migration consolidated SRs

ASMLIB Interacting with persistent names generated by udev or devlabel [ID 372783.1] <-- ASMLIB uses file /proc/partitions, mentions ORACLEASM_SCANORDER=emcpower
FAQ ASMLIB CONFIGURE,VERIFY, TROUBLESHOOT [ID 359266.1] <-- mentions ORACLEASM_SCANORDER=emcpower
http://www.james.labocki.com/?p=155 <-- Configuring Oracle ASM on Enterprise Linux 5
http://jcnarasimhan.blogspot.com/2009/08/managing-asm-disk-discovery.html <-- nice guide
http://forums.oracle.com/forums/thread.jspa?threadID=910819&tstart=0 <-- the forum
http://it.toolbox.com/blogs/surachart/check-the-device-asmlib-on-multipath-32222
http://www.freelists.org/post/oracle-l/Can-ASMLib-and-EMC-PowerPath-work-together
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/App_Networking/Oracle11gR2_RAC_B200-M1_8_Node_Certification.pdf <-- Deploying Oracle 11gR2 RAC on the Cisco Unified Computing  System with EMC CLARiiON Storage
http://www.oracle.com/technetwork/database/asm-on-emc-5-3-134797.pdf   <-- Using Oracle Database 10g’s Automatic Storage  Management with EMC Storage Technology 
http://www.ardentperf.com/2008/02/13/oracle-clusterware-on-rhel5oel5-with-udev-and-multipath/
http://blog.capdata.fr/index.php/installation-asm-sur-suse-10-en-64-bits-avec-multipathing-emc-powerpath/
http://gjilevski.wordpress.com/2008/05/07/automatic-storage-management-asm-faq-for-oracle-10g-and-11g-r1/   <--ASM FAQ




LVM on multipath http://christophe.varoqui.free.fr/faq.html
MDADM multipath http://www.linuxtopia.org/online_books/rhel5/installation_guide/rhel5_s2-s390info-multipath.html
DM-Multipath http://willsnotes.wordpress.com/2010/10/13/linux-rhel-5-configuring-multipathing-with-dm-multipath/
Comparison of Powerpath vs dm-multipath http://blog.thilelli.net/post/2009/02/09/Comparison%3A-EMC-PowerPath-vs-GNU/Linux-dm-multipath
http://blog.thilelli.net/post/2007/12/01/Nifty-Tool-For-Querying-Heterogeneous-SCSI-Devices 


Microsoft script center http://technet.microsoft.com/en-us/scriptcenter/bb410849
-- coolmaster 800W
http://www.youtube.com/watch?v=IE-cO2mqTGQ

-- Sun servers power calculators
http://www.oracle.com/us/products/servers-storage/sun-power-calculators/index.html



http://laurentschneider.com/wordpress/2010/10/whats-your-favorite-shell-in-windows.html
Using DTrace to understand mpstat and vmstat output http://prefetch.net/articles/dtracecookbook.html
Top Ten DTrace (D) Scripts http://prefetch.net/articles/solaris.dtracetopten.html
Observing I/O Behavior With The DTraceToolkit http://prefetch.net/articles/observeiodtk.html
http://arup.blogspot.com/2011/01/what-makes-great-presentation.html
/***
|Name:|PrettyDatesPlugin|
|Description:|Provides a new date format ('pppp') that displays times such as '2 days ago'|
|Version:|1.0 ($Rev: 3646 $)|
|Date:|$Date: 2008-02-27 02:34:38 +1000 (Wed, 27 Feb 2008) $|
|Source:|http://mptw.tiddlyspot.com/#PrettyDatesPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
!!Notes
* If you want to you can rename this plugin. :) Some suggestions: LastUpdatedPlugin, RelativeDatesPlugin, SmartDatesPlugin, SexyDatesPlugin.
* Inspired by http://ejohn.org/files/pretty.js
***/
//{{{
Date.prototype.prettyDate = function() {
	var diff = (((new Date()).getTime() - this.getTime()) / 1000);
	var day_diff = Math.floor(diff / 86400);

	if (isNaN(day_diff))      return "";
	else if (diff < 0)        return "in the future";
	else if (diff < 60)       return "just now";
	else if (diff < 120)      return "1 minute ago";
	else if (diff < 3600)     return Math.floor(diff/60) + " minutes ago";
	else if (diff < 7200)     return "1 hour ago";
	else if (diff < 86400)    return Math.floor(diff/3600) + " hours ago";
	else if (day_diff == 1)   return "Yesterday";
	else if (day_diff < 7)    return day_diff + " days ago";
	else if (day_diff < 14)   return  "a week ago";
	else if (day_diff < 31)   return Math.ceil(day_diff/7) + " weeks ago";
	else if (day_diff < 62)   return "a month ago";
	else if (day_diff < 365)  return "about " + Math.ceil(day_diff/31) + " months ago";
	else if (day_diff < 730)  return "a year ago";
	else                      return Math.ceil(day_diff/365) + " years ago";
}

Date.prototype.formatString_orig_mptw = Date.prototype.formatString;

Date.prototype.formatString = function(template) {
	return this.formatString_orig_mptw(template).replace(/pppp/,this.prettyDate());
}

// for MPTW. otherwise edit your ViewTemplate as required.
// config.mptwDateFormat = 'pppp (DD/MM/YY)'; 
config.mptwDateFormat = 'pppp'; 

//}}}
To prevent certain IP addresses from connecting to the database, add 2 parameters to the SQLNET.ORA file of your 
database and then restart the listener.
The 2 parameters are: 
tcp.validnode_checking = yes
tcp.excluded_nodes = (155.23.0.100)
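Valid node checking also works as an allow-list via tcp.invited_nodes. A minimal sqlnet.ora sketch (the IP addresses and host name are placeholders):

```
tcp.validnode_checking = yes
tcp.excluded_nodes = (155.23.0.100, 155.23.0.101)
# or instead, allow only known application/admin hosts:
# tcp.invited_nodes = (155.23.0.50, appserver1)
```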
http://answers.microsoft.com/en-us/windows/forum/windows_7-windows_install/reactivating-windows-7-after-moving-to-a-different/3d5e2cdd-e4c6-4951-ae8a-d25c0c3db0a0
http://answers.microsoft.com/en-us/windows/forum/windows_7-windows_install/a-problem-in-activating-a-geographically/af737349-797a-e011-9b4b-68b599b31bf5

http://sourceforge.net/projects/oraresprof/

http://www.dbasupport.com/oracle/ora10g/open_source1.shtml
http://www.dbasupport.com/oracle/ora10g/open_source2.shtml

http://www.sqltools-plusplus.org:7676/links.html

http://sourceforge.net/projects/hotsos-ilo/
http://sourceforge.net/projects/hotsos-ilo/#item3rd-1

http://www.oracledba.ru/orasrp/

http://sourceforge.net/projects/etprof

Carry forms Method-R
http://www.prweb.com/releases/2008/05/prweb839554.htm

Alex
http://www.pythian.com/blogs/author/alex




source code of ORASRP
https://twiki.cern.ch/twiki/bin/view/PSSGroup/SQLTraceAnalysis

simple profiler
http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html

How to set trace for others sessions, for your own session and at instance level
http://www.petefinnigan.com/ramblings/how_to_set_trace.htm

pete downloads
http://www.petefinnigan.com/tools.htm

appsdba.com
http://www.appsdba.com/blog/?p=24

performance as a service
http://carymillsap.blogspot.com/2008/06/performance-as-service.html

https://orainternals.files.wordpress.com/2012/04/2012_327_riyaj_pstack_truss_doc.pdf
https://blogs.oracle.com/myoraclediary/entry/how_to_backup_putty_session
{{{
How to Backup PuTTY Session
By user782636 on Feb 16, 2012

1). On the source computer, open up a command prompt and run: 

regedit /ea puTTY.reg HKEY_CURRENT_USER\Software\SimonTatham\PuTTY

The above will create a file named puTTY.reg in the present working directory

2). Copy puTTY.reg onto the new computer 

3). On the target computer, open up a command prompt and run:

regedit /s puTTY.reg

OR 

Just double-click the puTTY.reg file. This will add the registry contents to the Windows registry after asking for confirmation.
}}}
http://linux-sxs.org/networking/openssh.putty.html
http://www.vanemery.com/Linux/VNC/vnc-over-ssh.html
http://www.windowstipspage.com/2010/06/configure-putty-connection-manager.html    <-- must read for config settings
http://www.thegeekstuff.com/2009/03/putty-extreme-makeover-using-putty-connection-manager/

http://www.thegeekstuff.com/2009/07/10-practical-putty-tips-and-tricks-you-probably-didnt-know/	  <-- migrate to another machine 
http://dag.wieers.com/blog/content/improving-putty-settings-on-windows   <-- save putty sessions

http://mxu.wikia.com/wiki/Cannot_bring_back_PuttyCM_windows_after_it_is_minimized   <- application already started error "disable hide when minimized" (tools -> options)
* ''QoS uses ClusterHealthMonitor and analyses every minute''
* QoS makes use of the following technologies:
** policy managed databases 
** and utilizes server pools to have that "true grid layer" to be able to automatically stand up instances in any available server/host
** QoS policies
** QoS metrics
** QoS recommendations
* good presentations on QoS: 
** QoS Management in a Consolidated Environment OOW 2011 Presentation http://www.oracle.com/technetwork/products/clusterware/qos-management-oow11-1569557.pdf, mixed workload QoS http://www.soug.ch/fileadmin/user_upload/Downloads_public/Breysse_Exadata_Workload_Management.pdf
** QOS ppt http://www.slideshare.net/prassinos/oracle-quality-of-service-management-meeting-slas-in-a-grid-environment

''official documentation''
New Feature on 11.2.0.2 http://download.oracle.com/docs/cd/E11882_01/server.112/e17128/chapter1_2.htm
QOS FAQ paper http://www.oracle.com/technetwork/database/exadata/faq-qosmanagement-511893.pdf
QOS OTN front page http://www.oracle.com/technetwork/database/clustering/overview/qosmanageent-508184.html
Introduction to Oracle Database QoS Management http://download.oracle.com/docs/cd/E11882_01/server.112/e24611/apqos_intro.htm#APQOS109
Installing and Enabling Oracle Database QoS Management - http://download.oracle.com/docs/cd/E11882_01/server.112/e24611/install_config.htm#APQOS151 <-- hhmmm it utilizes RAC server pools



-- qos 
https://docs.oracle.com/cd/E11882_01/server.112/e24611/apqos_admin.htm#APQOS158
http://docs.oracle.com/cd/E11882_01/server.112/e24611/apqos_intro.htm#APQOS317
https://docs.oracle.com/database/121/RILIN/srvpool.htm#RILIN1247
http://docs.oracle.com/cd/E11882_01/server.112/e24611/apqos_admin.htm#APQOS158




https://www.dwavesys.com/

.
Query tuning by eliminating throwaway - Martin Berg
@@http://focalpoint.altervista.org/throwaway2.pdf@@

http://www.orafaq.com/maillist/oracle-l/2004/02/10/0119.htm
<<<
I've downloaded and read the paper, thanks to Mogens and Cary, and my impressions are that the paper has some good ideas but is neither revolutionary nor overly applicable. Basically, the basic idea of the paper is that people usually process many more rows than necessary to produce the desired output. The excess rows are called "throwaway", thus the name of the article. The author then analyzes the throwaway caused by each access method (as of oracle 8.0.4, with the notable exceptions of bitmap methods, star schema and hash) and comes to the conclusion that the only cure is to properly index tables, so that predicates are resolved by using index scans. The problem is, in my opinion, directly the opposite: how to design the database schema in order to be able to write queries that execute quickly. To that end, I found more useful material in Ralph Kimball's book "The Data Warehouse Toolkit" and in Jonathan's and Tom Kyte's books than in Martin's article. I must confess that the whole debate made me very curious about Dan's book and that I ordered it from Barnes & Noble, but as I am busy with the 10g, it will have to wait at least 4 to 6 weeks. 
I am not writing this to denigrate Martin's effort, but to basically point out that Anjo's, Cary's and Jonathan's method based on the wait interface, together with the business knowledge (one must understand what it is that he or she wants to accomplish, in the first place), is the ultimate in SQL tuning. There is no easy method that will take a horrendous query, which would justify capital punishment for the author, insert it into a "method", and then, following a few easy steps, end up with a missile which will execute in milliseconds. If someone wants to find the number of AT&T subscribers per state, he or she will have to do something like:

        SELECT COUNT(*) FROM SUBSCRIBERS GROUP BY STATE;

Given the number of subscribers, it will be a big query and there is no room for improvement. What I see as my role is to prevent a design which would result in splitting the subscriber entity into several tables, therefore turning the query above into a join. If that happens, no amount of methodical SQL tuning will help.
<<<
{{{
alter session set NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
select sysdate from dual;
SELECT SESSIONTIMEZONE FROM DUAL;
SELECT current_timestamp FROM DUAL;
SELECT dbtimezone FROM DUAL;
}}}



! exercise 
{{{

select to_timestamp(localtimestamp) from dual;
select from_tz(to_timestamp(localtimestamp), 'UTC') at time zone 'America/New_York' from dual;



-- considers DST, in my case the column needs to be DATE instead of TIMESTAMP
select CAST(FROM_TZ(CAST(localtimestamp AS TIMESTAMP), 'UTC')  at time zone 'America/New_York' AS Date) as local_time from dual;

	05:20:32 SYS@cdb1> create table test_tz (ts_col date);

	Table created.

	05:20:40 SYS@cdb1> insert into test_tz select CAST(FROM_TZ(CAST(localtimestamp AS TIMESTAMP), 'UTC')  at time zone 'America/New_York' AS Date) as local_time from dual;

	1 row created.

	05:20:45 SYS@cdb1> select * from test_tz;

	TS_COL
	--------------------
	14-NOV-2019 00:20:45

	05:20:51 SYS@cdb1> select CAST(FROM_TZ(CAST(localtimestamp AS TIMESTAMP), 'UTC')  at time zone 'America/New_York' AS Date) as local_time from dual;

	LOCAL_TIME
	--------------------
	14-NOV-2019 00:21:03


-- playing with TIMESTAMP 

	05:02:42 SYS@cdb1> !date
	Thu Nov 14 05:04:07 UTC 2019

	05:04:07 SYS@cdb1> select to_timestamp(localtimestamp) from dual;

	TO_TIMESTAMP(LOCALTIMESTAMP)
	---------------------------------------------------------------------------
	14-NOV-19 05.04.09.526736 AM

	05:04:09 SYS@cdb1> !date
	Thu Nov 14 05:07:40 UTC 2019

	05:07:40 SYS@cdb1> select CAST(FROM_TZ(CAST(localtimestamp AS TIMESTAMP), 'UTC')  at time zone 'America/New_York' AS Date) as DESIRED_FIELD_NAME from dual;

	DESIRED_FIELD_NAME
	--------------------
	14-NOV-2019 00:07:42

	05:07:42 SYS@cdb1> select CAST(FROM_TZ(CAST(localtimestamp AS TIMESTAMP), 'UTC')  at time zone 'America/New_York' AS Date) as local_time from dual;

	LOCAL_TIME
	--------------------
	14-NOV-2019 00:08:22

	05:08:22 SYS@cdb1> create table test_tz (timestamp(3)) ;
	create table test_tz (timestamp(3))
	                               *
	ERROR at line 1:
	ORA-00902: invalid datatype


	05:16:46 SYS@cdb1> create table test_tz (ts_col timestamp(3));

	Table created.

	05:17:27 SYS@cdb1>
	05:17:28 SYS@cdb1> insert into test_tz values ('14-NOV-2019 00:08:22');
	insert into test_tz values ('14-NOV-2019 00:08:22')
	                            *
	ERROR at line 1:
	ORA-01849: hour must be between 1 and 12


	05:17:50 SYS@cdb1> insert into test_tz select CAST(FROM_TZ(CAST(localtimestamp AS TIMESTAMP), 'UTC')  at time zone 'America/New_York' AS Date) as local_time from dual;

	1 row created.

	05:18:09 SYS@cdb1> select * from test_tz;

	TS_COL
	---------------------------------------------------------------------------
	14-NOV-19 12.18.09.000 AM

	05:18:18 SYS@cdb1> drop table test_tz purge;

	Table dropped.

	05:19:18 SYS@cdb1> create table test_tz (ts_col timestamp);

	Table created.

	05:19:23 SYS@cdb1> insert into test_tz select CAST(FROM_TZ(CAST(localtimestamp AS TIMESTAMP), 'UTC')  at time zone 'America/New_York' AS Date) as local_time from dual;

	1 row created.

	05:19:28 SYS@cdb1> select * from test_tz;

	TS_COL
	---------------------------------------------------------------------------
	14-NOV-19 12.19.28.000000 AM

	05:19:31 SYS@cdb1> drop table test_tz purge;

	Table dropped.

	05:19:50 SYS@cdb1> create table test_tz (ts_col timestamp(0));

	Table created.

	05:20:00 SYS@cdb1> insert into test_tz select CAST(FROM_TZ(CAST(localtimestamp AS TIMESTAMP), 'UTC')  at time zone 'America/New_York' AS Date) as local_time from dual;

	1 row created.

	05:20:05 SYS@cdb1> select * from test_tz;

	TS_COL
	---------------------------------------------------------------------------
	14-NOV-19 12.20.05 AM

	05:20:10 SYS@cdb1> drop table test_tz purge;

	Table dropped.



 -- get TIMEZONE OFFSET

 	select * from  V$TIMEZONE_NAMES where lower(tzname) like '%mountain%';

 	SELECT TZ_OFFSET('US/Eastern') FROM DUAL;
 	SELECT TZ_OFFSET('US/Mountain') FROM DUAL;

	 05:43:15 SYS@cdb1> select * from  V$TIMEZONE_NAMES where tzname like '%New%';

	TZNAME                                                           TZABBREV                                                             CON_ID
	---------------------------------------------------------------- ---------------------------------------------------------------- ----------
	America/New_York                                                 LMT                                                                       0
	America/New_York                                                 EST                                                                       0
	America/New_York                                                 EDT                                                                       0
	America/New_York                                                 EWT                                                                       0
	America/New_York                                                 EPT                                                                       0
	America/North_Dakota/New_Salem                                   LMT                                                                       0
	America/North_Dakota/New_Salem                                   MST                                                                       0
	America/North_Dakota/New_Salem                                   MDT                                                                       0
	America/North_Dakota/New_Salem                                   MWT                                                                       0
	America/North_Dakota/New_Salem                                   MPT                                                                       0
	America/North_Dakota/New_Salem                                   CST                                                                       0

	TZNAME                                                           TZABBREV                                                             CON_ID
	---------------------------------------------------------------- ---------------------------------------------------------------- ----------
	America/North_Dakota/New_Salem                                   CDT                                                                       0
	Canada/Newfoundland                                              LMT                                                                       0
	Canada/Newfoundland                                              NST                                                                       0
	Canada/Newfoundland                                              NDT                                                                       0
	Canada/Newfoundland                                              NWT                                                                       0
	Canada/Newfoundland                                              NPT                                                                       0
	Canada/Newfoundland                                              NDDT                                                                      0
	US/Pacific-New                                                   LMT                                                                       0
	US/Pacific-New                                                   PST                                                                       0
	US/Pacific-New                                                   PDT                                                                       0
	US/Pacific-New                                                   PWT                                                                       0

	TZNAME                                                           TZABBREV                                                             CON_ID
	---------------------------------------------------------------- ---------------------------------------------------------------- ----------
	US/Pacific-New                                                   PPT                                                                       0

	23 rows selected.

	05:43:29 SYS@cdb1> SELECT TZ_OFFSET('US/Eastern') FROM DUAL;

	TZ_OFFS
	-------
	-05:00

	05:45:44 SYS@cdb1> select * from  V$TIMEZONE_NAMES where tzname like '%Eastern%';

	TZNAME                                                           TZABBREV                                                             CON_ID
	---------------------------------------------------------------- ---------------------------------------------------------------- ----------
	Canada/Eastern                                                   LMT                                                                       0
	Canada/Eastern                                                   EST                                                                       0
	Canada/Eastern                                                   EDT                                                                       0
	Canada/Eastern                                                   EWT                                                                       0
	Canada/Eastern                                                   EPT                                                                       0
	US/Eastern                                                       LMT                                                                       0
	US/Eastern                                                       EST                                                                       0
	US/Eastern                                                       EDT                                                                       0
	US/Eastern                                                       EWT                                                                       0
	US/Eastern                                                       EPT                                                                       0

	10 rows selected.

	05:45:57 SYS@cdb1> select tz_offset('America/New_York') from dual;

	TZ_OFFS
	-------
	-05:00


-- CONVERT back and forth 

	18:08:00 SYS@cdb1> !date
	Tue Nov 19 18:08:24 UTC 2019

	18:08:24 SYS@cdb1> SELECT FROM_TZ(TIMESTAMP '2019-11-19 18:08:24', 'UTC') AT TIME ZONE 'US/Eastern' from dual;

	FROM_TZ(TIMESTAMP'2019-11-1918:08:24','UTC')ATTIMEZONE'US/EASTERN'
	---------------------------------------------------------------------------
	19-NOV-19 01.08.24.000000000 PM US/EASTERN

	18:08:53 SYS@cdb1> -- 1:09 PM is my laptop clock
	18:09:15 SYS@cdb1>
	18:09:18 SYS@cdb1> SELECT FROM_TZ(TIMESTAMP '2019-11-19 01:08:24', 'US/Eastern') AT TIME ZONE 'UTC' from dual;

	FROM_TZ(TIMESTAMP'2019-11-1901:08:24','US/EASTERN')ATTIMEZONE'UTC'
	---------------------------------------------------------------------------
	19-NOV-19 06.08.24.000000000 AM UTC

	18:10:15 SYS@cdb1> -- 18:08:24 is the server clock (06.08.24)
	18:10:44 SYS@cdb1>
	18:10:44 SYS@cdb1> -- it added 5 hours
	18:11:00 SYS@cdb1>
	18:11:00 SYS@cdb1> select * from  V$TIMEZONE_NAMES where tzname like '%Eastern%';

	TZNAME                                                           TZABBREV                                                             CON_ID
	---------------------------------------------------------------- ---------------------------------------------------------------- ----------
	Canada/Eastern                                                   LMT                                                                       0
	Canada/Eastern                                                   EST                                                                       0
	Canada/Eastern                                                   EDT                                                                       0
	Canada/Eastern                                                   EWT                                                                       0
	Canada/Eastern                                                   EPT                                                                       0
	US/Eastern                                                       LMT                                                                       0
	US/Eastern                                                       EST                                                                       0
	US/Eastern                                                       EDT                                                                       0
	US/Eastern                                                       EWT                                                                       0
	US/Eastern                                                       EPT                                                                       0

	10 rows selected.

	18:11:11 SYS@cdb1> SELECT TZ_OFFSET('US/Eastern') FROM DUAL;

	TZ_OFFS
	-------
	-05:00


}}}
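Outside the database, the same UTC to America/New_York round trip and the TZ_OFFSET lookups can be sanity-checked with a short Python sketch (zoneinfo is in the standard library from Python 3.9; the timestamps reuse the session transcript above):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

NY = ZoneInfo("America/New_York")

# FROM_TZ(TIMESTAMP '2019-11-19 18:08:24', 'UTC') AT TIME ZONE 'US/Eastern'
utc_ts = datetime(2019, 11, 19, 18, 8, 24, tzinfo=timezone.utc)
local = utc_ts.astimezone(NY)
print(local)  # 2019-11-19 13:08:24-05:00, i.e. 01.08.24 PM as in the transcript

# ...and back: FROM_TZ(..., 'US/Eastern') AT TIME ZONE 'UTC'
back = local.astimezone(timezone.utc)
assert back == utc_ts  # the round trip is lossless

# TZ_OFFSET('US/Eastern') analogue: the offset depends on DST,
# -05:00 (EST) in November but -04:00 (EDT) in July
winter = datetime(2019, 11, 19, 12, 0, tzinfo=NY).utcoffset()
summer = datetime(2019, 7, 19, 12, 0, tzinfo=NY).utcoffset()
print(winter, summer)
```

This also shows why casting the converted value to DATE, as in the transcript, is enough to honor DST: the DST decision is made during the zone conversion, before the cast drops the zone information.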
/***
|Name:|QuickOpenTagPlugin|
|Description:|Changes tag links to make it easier to open tags as tiddlers|
|Version:|3.0.1 ($Rev: 3861 $)|
|Date:|$Date: 2008-03-08 10:53:09 +1000 (Sat, 08 Mar 2008) $|
|Source:|http://mptw.tiddlyspot.com/#QuickOpenTagPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
***/
//{{{
config.quickOpenTag = {

	dropdownChar: (document.all ? "\u25bc" : "\u25be"), // the little one doesn't work in IE?

	createTagButton: function(place,tag,excludeTiddler) {
		// little hack so we can do this: <<tag PrettyTagName|RealTagName>>
		var splitTag = tag.split("|");
		var pretty = tag;
		if (splitTag.length == 2) {
			tag = splitTag[1];
			pretty = splitTag[0];
		}
		
		var sp = createTiddlyElement(place,"span",null,"quickopentag");
		createTiddlyText(createTiddlyLink(sp,tag,false),pretty);
		
		var theTag = createTiddlyButton(sp,config.quickOpenTag.dropdownChar,
                        config.views.wikified.tag.tooltip.format([tag]),onClickTag);
		theTag.setAttribute("tag",tag);
		if (excludeTiddler)
			theTag.setAttribute("tiddler",excludeTiddler);
    		return(theTag);
	},

	miniTagHandler: function(place,macroName,params,wikifier,paramString,tiddler) {
		var tagged = store.getTaggedTiddlers(tiddler.title);
		if (tagged.length > 0) {
			var theTag = createTiddlyButton(place,config.quickOpenTag.dropdownChar,
                        	config.views.wikified.tag.tooltip.format([tiddler.title]),onClickTag);
			theTag.setAttribute("tag",tiddler.title);
			theTag.className = "miniTag";
		}
	},

	allTagsHandler: function(place,macroName,params) {
		var tags = store.getTags(params[0]);
		var filter = params[1]; // new feature
		var ul = createTiddlyElement(place,"ul");
		if(tags.length == 0)
			createTiddlyElement(ul,"li",null,"listTitle",this.noTags);
		for(var t=0; t<tags.length; t++) {
			var title = tags[t][0];
			if (!filter || (title.match(new RegExp('^'+filter)))) {
				var info = getTiddlyLinkInfo(title);
				var theListItem =createTiddlyElement(ul,"li");
				var theLink = createTiddlyLink(theListItem,tags[t][0],true);
				var theCount = " (" + tags[t][1] + ")";
				theLink.appendChild(document.createTextNode(theCount));
				var theDropDownBtn = createTiddlyButton(theListItem," " +
					config.quickOpenTag.dropdownChar,this.tooltip.format([tags[t][0]]),onClickTag);
				theDropDownBtn.setAttribute("tag",tags[t][0]);
			}
		}
	},

	// todo fix these up a bit
	styles: [
"/*{{{*/",
"/* created by QuickOpenTagPlugin */",
".tagglyTagged .quickopentag, .tagged .quickopentag ",
"	{ margin-right:1.2em; border:1px solid #eee; padding:2px; padding-right:0px; padding-left:1px; }",
".quickopentag .tiddlyLink { padding:2px; padding-left:3px; }",
".quickopentag a.button { padding:1px; padding-left:2px; padding-right:2px;}",
"/* extra specificity to make it work right */",
"#displayArea .viewer .quickopentag a.button, ",
"#displayArea .viewer .quickopentag a.tiddlyLink, ",
"#mainMenu .quickopentag a.tiddlyLink, ",
"#mainMenu .quickopentag a.tiddlyLink ",
"	{ border:0px solid black; }",
"#displayArea .viewer .quickopentag a.button, ",
"#mainMenu .quickopentag a.button ",
"	{ margin-left:0px; padding-left:2px; }",
"#displayArea .viewer .quickopentag a.tiddlyLink, ",
"#mainMenu .quickopentag a.tiddlyLink ",
"	{ margin-right:0px; padding-right:0px; padding-left:0px; margin-left:0px; }",
"a.miniTag {font-size:150%;} ",
"#mainMenu .quickopentag a.button ",
"	/* looks better in right justified main menus */",
"	{ margin-left:0px; padding-left:2px; margin-right:0px; padding-right:0px; }", 
"#topMenu .quickopentag { padding:0px; margin:0px; border:0px; }",
"#topMenu .quickopentag .tiddlyLink { padding-right:1px; margin-right:0px; }",
"#topMenu .quickopentag .button { padding-left:1px; margin-left:0px; border:0px; }",
"/*}}}*/",
		""].join("\n"),

	init: function() {
		// we fully replace these builtins. can't hijack them easily
		window.createTagButton = this.createTagButton;
		config.macros.allTags.handler = this.allTagsHandler;
		config.macros.miniTag = { handler: this.miniTagHandler };
		config.shadowTiddlers["QuickOpenTagStyles"] = this.styles;
		store.addNotification("QuickOpenTagStyles",refreshStyles);
	}
}

config.quickOpenTag.init();

//}}}
! official docs 
R style guide http://r-pkgs.had.co.nz/style.html
searchable documentation http://www.rdocumentation.org/
introduction to R http://cran.r-project.org/doc/manuals/R-intro.html

! open courses
https://www.datacamp.com/community/open-courses

! structured learning

Introduction to R
https://www.datacamp.com/courses/introduction-to-r

Try R
@@http://tryr.codeschool.com/levels/1/challenges/1 <- ''vectors, matrices, summary statistics''@@

R programming fundamentals
@@http://www.pluralsight.com/courses/table-of-contents/r-programming-fundamentals  <- ''functions, flow control, packages, import data, exploring data with R''@@


data.table
https://www.datacamp.com/courses/data-analysis-the-data-table-way

Data Analysis and Statistical Inference
https://www.datacamp.com/courses/data-analysis-and-statistical-inference_mine-cetinkaya-rundel-by-datacamp

Introduction to Computational Finance and Financial Econometrics
https://www.datacamp.com/courses/introduction-to-computational-finance-and-financial-econometrics

dplyr 
https://www.datacamp.com/courses/dplyr

Build Web Apps in R with Shiny
https://www.udemy.com/build-web-apps-in-r-with-shiny/?dtcode=JB9Dbu61IsjQ

https://www.coursera.org/course/statistics?utm_content=bufferc229d&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer



! presentations
Bigvis http://www.meetup.com/nyhackr/events/112271042/ , http://files.meetup.com/1406240/bigvis.pdf , http://goo.gl/0DTE4a



<<showtoc>>

! R data formats: RData, Rda, Rds, robj, etc
http://stackoverflow.com/questions/21370132/r-data-formats-rdata-rda-rds-etc

! .rda vs. .RData
http://r.789695.n4.nabble.com/rda-vs-RData-td4581445.html	
https://stackoverflow.com/questions/8345759/how-to-save-a-data-frame-in-r
{{{
save(foo,file="data.Rda")
load("data.Rda")
}}}

! saving-and-loading-r-objects
http://www.fromthebottomoftheheap.net/2012/04/01/saving-and-loading-r-objects/
https://www.r-bloggers.com/a-better-way-of-saving-and-loading-objects-in-r/

! Saving Data into R Data Format: RDS and RDATA
http://www.sthda.com/english/wiki/saving-data-into-r-data-format-rds-and-rdata

! robj
http://stackoverflow.com/questions/34192038/how-to-open-robj-file-in-rstudio-or-r

! Creating Classes in R: S3, S4, R5 (RC), or R6
http://stackoverflow.com/questions/27219132/creating-classes-in-r-s3-s4-r5-rc-or-r6
Introduction to R6 classes https://rpubs.com/wch/24456



https://www.futurelearn.com/courses/business-analytics-forecasting/1/todo/5986
statistics.com
the practical forecasting book 2nd Ed
https://machinelearningmastery.com/introduction-to-time-series-forecasting-with-python/


! different forecasting trends 
http://jcflowers1.iweb.bsu.edu/rlo/trends.htm
[img[ http://i.imgur.com/CY7siAi.png]]


! automated forecasting tools
ETS and ARIMA from forecast package
http://www.forecastpro.com/
http://www.thrivetech.com/inventory-forecasting-software/






http://www.burns-stat.com/documents/books/the-r-inferno/
http://www.jason-french.com/blog/2013/03/11/installing-r-in-linux/
https://cran.r-project.org/doc/manuals/r-release/R-admin.html
https://idre.ucla.edu/
http://www.ats.ucla.edu/stat/

http://www.ats.ucla.edu/stat/r/library/matrix_alg.htm
code examples http://www.ats.ucla.edu/stat/r/library/matrix.txt

! coding style guide 
http://google-styleguide.googlecode.com/svn/trunk/Rguide.xml

! free data sets
http://r4stats.com/
http://www.census.gov 
Free large data sets http://stackoverflow.com/questions/2674421/free-large-datasets-to-experiment-with-hadoop


! REFERENCES:

http://gallery.r-enthusiasts.com/thumbs.php
http://www.rstudio.com/ide/

-- time series
http://www.packtpub.com/article/creating-time-series-charts-r
Time Series in R with ggplot2 http://stackoverflow.com/questions/4973031/time-series-in-r-with-ggplot2
Displaying time-series data: Stacked bars, area charts or lines…you decide! http://vizwiz.blogspot.com/2012/08/displaying-time-series-data-stacked.html
R Lattice Plot Beats Excel Stacked Area Trend Chart http://chartsgraphs.wordpress.com/2008/10/05/r-lattice-plot-beats-excel-stacked-area-trend-chart/
rainbow: An R Package for Visualizing Functional Time Series http://journal.r-project.org/archive/2011-2/RJournal_2011-2_Lin~Shang.pdf
Using R for Time Series Analysis http://a-little-book-of-r-for-time-series.readthedocs.org/en/latest/src/timeseries.html
Plotting Time Series data using ggplot2 http://www.r-bloggers.com/plotting-time-series-data-using-ggplot2/
Simple time series plot using R : Part 1 http://programming-r-pro-bro.blogspot.com/2011/09/simple-plot-using-r.html
How to convert a daily times series into an averaged weekly? http://stackoverflow.com/questions/11892063/how-to-convert-a-daily-times-series-into-an-averaged-weekly
Plotting AWR database metrics using R http://dbastreet.com/blog/?p=946
R Programming Language Connectivity to Oracle http://dbastreet.com/blog/?p=913
Scripted Collection of OS Watcher Files on Exadata http://tylermuth.wordpress.com/2012/11/02/scripted-collection-of-os-watcher-files-on-exadata/
Using R for Advanced Charts http://processtrends.com/toc_r.htm#Excel Stacked Chart

-- stacked area
http://stackoverflow.com/questions/5030389/getting-a-stacked-area-plot-in-r
How do I create a stacked area plot with many areas, or where the legend “points” at the respective areas? http://stackoverflow.com/questions/6275895/how-do-i-create-a-stacked-area-plot-with-many-areas-or-where-the-legend-points
Stacked Area Histogram in R http://stackoverflow.com/questions/2241290/stacked-area-histogram-in-r
Grayscale stacked area plot in R http://stackoverflow.com/questions/6071990/grayscale-stacked-area-plot-in-r

-- bar chart 
http://stackoverflow.com/questions/6437080/bar-chart-of-constant-height-for-factors-in-time-series

-- scatter plot IOPS
http://dboptimizer.com/2013/01/04/r-slicing-and-dicing-data/ , https://sites.google.com/site/oraclemonitor/r-slicing-and-dicing-data , http://datavirtualizer.com/r-data-structures/
   http://www.statmethods.net/input/datatypes.html
   http://nsaunders.wordpress.com/2010/08/20/a-brief-introduction-to-apply-in-r/
   http://stackoverflow.com/questions/2545228/converting-a-dataframe-to-a-vector-by-rows
   http://stat.ethz.ch/R-manual/R-patched/library/base/html/colSums.html
http://dboptimizer.com/2013/01/02/no-3d-charts-in-excel-try-r/ , http://www.oaktable.net/content/no-3d-charts-excel-try-r

-- bubble charts
http://flowingdata.com/2010/11/23/how-to-make-bubble-charts/

-- google vis in R - motion chart
http://cran.r-project.org/web/packages/googleVis/vignettes/googleVis.pdf

-- device utilization
http://dtrace.org/blogs/brendan/2011/12/18/visualizing-device-utilization/
https://blogs.oracle.com/dom/entry/visualising_performance

-- google searches
stacked time series area chart R http://goo.gl/BvINM
stacked area chart R http://goo.gl/fbHJO


http://www.r-bloggers.com/from-spreadsheet-thinking-to-r-thinking






! headfirst R 

{{{
source("http://www.headfirstlabs.com/books/hfda/hfda.R")
employees
hist(employees$received, breaks=50)

> sd(employees$received)
[1] 2.432138
> summary(employees$received)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
 -1.800   4.600   5.500   6.028   6.700  25.900 

 
head(employees,n=30)
plot(employees$requested[employees$negotiated==TRUE], employees$received[employees$negotiated==TRUE])
plot(employees$received,employees$requested)
cor(employees$requested[employees$negotiated==TRUE], employees$received[employees$negotiated==TRUE])
cor(employees$received,employees$requested)


graphing packages:
ggplot2
lattice


ggplot2 terminologies:
	
	• The data is what we want to visualize. It consists of variables, which are stored as
	columns in a data frame.
	• Geoms are the geometric objects that are drawn to represent the data, such as bars,
	lines, and points.
	• Aesthetic attributes, or aesthetics, are visual properties of geoms, such as x and y
	position, line color, point shapes, etc.
	• There are mappings from data values to aesthetics.
	• Scales control the mapping from the values in the data space to values in the aesthetic
	space. A continuous y scale maps larger numerical values to vertically higher positions
	in space.
	• Guides show the viewer how to map the visual properties back to the data space.
	The most commonly used guides are the tick marks and labels on an axis.
	

install.packages("ggplot2")
install.packages("gcookbook")

OR execute this 

install.packages(c("ggplot2", "gcookbook"))
	
The downloaded packages are in
        C:\Users\Karl\AppData\Local\Temp\Rtmpmy2Jzo\downloaded_packages
The downloaded packages are in
        C:\Users\Karl\AppData\Local\Temp\Rtmpmy2Jzo\downloaded_packages
        
-- run this on each R session if you want to use ggplot2
library(ggplot2)

for the book do this 

library(ggplot2)
library(gcookbook)

** The primary repository for distributing R packages is called CRAN (the Comprehensive R Archive Network).

data <- read.csv("datafile.csv")
data <- read.csv("datafile.csv", header=FALSE)
names(data) <- c("Column1","Column2","Column3") # Manually assign the header names
data <- read.csv("datafile.csv", sep="\t") 	# sep=" " if space delimited, sep="\t" if tab delimited
data <- read.csv("datafile.csv", stringsAsFactors=FALSE)		# don't convert strings as factors


-- help
?hist
}}}
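The sd/summary/cor calls above have straightforward stdlib analogues outside R; a small Python sketch with made-up numbers (the `received`/`requested` values here are illustrative stand-ins, not the real Head First Data Analysis data set):

```python
import statistics

# Illustrative stand-ins for employees$received / employees$requested
# (made-up numbers, NOT the real HFDA employees data)
received  = [-1.8, 4.6, 5.5, 5.5, 6.0, 6.7, 6.7, 25.9]
requested = [ 0.0, 5.0, 6.0, 5.5, 6.5, 7.0, 7.2, 30.0]

# sd(employees$received) analogue: sample standard deviation
print(statistics.stdev(received))

# summary(employees$received) analogue: Min / Median / Mean / Max
print(min(received), statistics.median(received),
      statistics.mean(received), max(received))

# cor(x, y) analogue: Pearson correlation, written out by hand so it
# needs no packages (statistics.correlation only exists in Python 3.10+)
def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(pearson(requested, received))
```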







Stan for the beginners [Bayesian inference] in 6 mins (close captioned) https://www.youtube.com/watch?v=tLprFqSWS1w
A visual guide to Bayesian thinking https://www.youtube.com/watch?v=BrK7X_XlGB8
http://andrewgelman.com/2014/01/21/everything-need-know-bayesian-statistics-learned-eight-schools/
http://andrewgelman.com/2014/01/17/think-statistical-evidence-statistical-evidence-cant-conclusive/
An Introduction to Bayesian Inference using R Interfaces to Stan http://user2016.org/tutorials/15.html
http://andrewgelman.com/2012/08/30/a-stan-is-born/
http://mc-stan.org/interfaces/rstan.html
https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started
video https://icerm.brown.edu/video_archive/#/play/1107 	Scalable Bayesian Inference with Hamiltonian Monte Carlo - Michael Betancourt, University of Warwick
Scalable Bayesian Inference with Hamiltonian Monte Carlo https://www.youtube.com/watch?v=VnNdhsm0rJQ
Efficient Bayesian inference with Hamiltonian Monte Carlo -- Michael Betancourt (Part 1) https://www.youtube.com/watch?v=pHsuIaPbNbY
Hamiltonian Monte Carlo and Stan -- Michael Betancourt (Part 2) https://www.youtube.com/watch?v=xWQpEAyI5s8
https://cran.r-project.org/web/packages/rstan/vignettes/rstan.html
https://rpubs.com/pviefers/CologneR
http://astrostatistics.psu.edu/su14/lectures/Daniel-Lee-Stan-1.pdf

! others 
http://stackoverflow.com/questions/31409591/difference-between-forecast-and-predict-function-in-r
http://stackoverflow.com/questions/28695076/parallel-predict
https://www.r-bloggers.com/parallel-r-model-prediction-building-and-analytics/

{{{
  brew cask install 'xquartz'
  brew cask install 'r-app'
  r
  brew cask install 'rstudio'
}}}

https://www.google.com/search?q=install+rstudio+for+mac&oq=install+rstudio+for+mac&aqs=chrome..69i57j0l5.4464j1j1&sourceid=chrome&ie=UTF-8
https://stackoverflow.com/questions/20457290/installing-r-with-homebrew
http://macappstore.org/rstudio/
http://mdzhang.com/posts/osx-r-rstudio/ <- GOOD STUFF


! 2022 
* read this https://techrah.github.io/posts/build-openmp-macos-catalina-complete
* download software 
https://www.rstudio.com/products/rstudio/download/#download
https://cloud.r-project.org/
* install the downloaded software 
* modify zshrc
{{{
cat .zshrc 

### python pyenv
# https://stackoverflow.com/questions/10574684/where-to-place-path-variable-assertions-in-zsh
eval "$(pyenv init -)"


### r install 
disable r
alias r="/Library/Frameworks/R.framework/Versions/Current/Resources/bin/R"
alias R=r
}}}







http://www.burns-stat.com/documents/books/tao-te-programming/
http://www.r-statistics.com/2013/03/updating-r-from-r-on-windows-using-the-installr-package/
http://www.r-statistics.com/2011/04/how-to-upgrade-r-on-windows-7/
http://www.r-statistics.com/2010/04/changing-your-r-upgrading-strategy-and-the-r-code-to-do-it-on-windows/
http://stackoverflow.com/questions/1401904/painless-way-to-install-a-new-version-of-r-on-windows
http://www.evernote.com/shard/s48/sh/377737c7-4ebc-46b1-bc35-3ecc718b871b/50cea23088bd9102903f413e18615628
http://www.evernote.com/shard/s48/sh/bbf96104-f3b2-467d-b98f-6adcf6d0cf04/4a3dad99cbf84cc639ec6cb00dba99e9

http://www.techradar.com/news/software/applications/7-of-the-best-linux-remote-desktop-clients-716346?artc_pg=2
http://www.nomachine.com/screenshots.php
http://remmina.sourceforge.net/downloads.shtml
http://www.evernote.com/shard/s48/sh/799368fe-07f0-4ebf-8a92-8b295e9bcf0d/61f0bb8e887507684925fad01d3f9245
http://www.evernote.com/shard/s48/sh/287ae327-a298-4b86-8b41-b50ad0ec8666/815212754838567a1e72396c2d4dc730
http://www.evernote.com/shard/s48/sh/821bb643-57e9-4278-b659-890680aab8c0/75558365366c24074eea581ecf104e47
! Types of networking per OS: 
Linux:         Host-Only, Bridged, or BOTH
Windows:   NAT (only)

Legend:
HOST -> host VM, or your server, or your base OS
CLIENT -> guest VM
http://www.evernote.com/shard/s48/sh/4908cf1d-c5fa-4e8e-ba72-391356994634/afda9f7757b016297efc78b79c767d4e

! Network setup explanation at oracle-l

http://www.freelists.org/post/oracle-l/Oracle-Virtualbox-Networking-Multiple-VMS-question-for-advanced-Virtualbox-users,4
https://mail.google.com/mail/u/0/#search/oracle-l+R%26D+server/13809eba19a37b01

<<showtoc>>

''IPs and Installation Matrix'' https://docs.google.com/spreadsheet/ccc?key=0ApH46jS7ZPdJdFpVNlgzdGpBUHJ6U1ZZZzl1bmxtT1E&hl=en_US#gid=0
''MindMap - DesktopServer'' http://www.evernote.com/shard/s48/sh/c3a94bff-007a-4df4-906e-e5079aa8c5cf/ea124916fb8a86d12cb99645e31f3867

! Build photos of the R&D Server
''DesktopServer1'' - contains pre build photos - https://www.evernote.com/shard/s48/sh/c12754e2-e166-4c43-8073-0701ef865a04/cd8c8bb240092da5d368bfa2b126b5ee
''DesktopServer2'' - contains build photos,wiring it,anaconda photo - https://www.evernote.com/shard/s48/sh/d1f64502-ca17-40cc-b33e-e0cfe578700f/f8f52a3dcb5079aba3edf5bcc4db9bd8
''DesktopServer3'' - contains the device mapping - https://www.evernote.com/shard/s48/sh/1bb03446-c3d9-4bef-86a0-1dc1ab3dca67/674b39c5f83409906534ee1d3235503a

! Disk layout, config, and benchmark
[[LVM config history]] shows how I configured the devices and the idea/reasoning behind it, also showing the partition table and layout
[[LVMstripesize,AUsize,UEKkernel]] IO config options test case comparison
''udev ASM - single path'' - https://www.evernote.com/shard/s48/sh/485425bc-a16f-4446-aebd-988342e3c30e/edc860d713dd4a66ff57cbc920b4a69c
''load before and after using UEK kernel'' - https://www.evernote.com/shard/s48/sh/d8e45c77-a6b8-4923-9e7a-f7a008af30cc/72c6522f6937a00ea8e6145e9af1af4e , faster and lower load due to OEL rq_affinity enabled
[[R&D server IO performance]] explains Calibrate IO, Short Stroking, Stripe size, UEK vs Regular Kernel, ASM redundancy / SAN redundancy, effect of ASM redundancy on read/write IOPS – SLOB test case
[[R&D cpu_speed]] 

! Networking
[[R&D Server VirtualBox networking]] architecture explanation at evernote and oracle-l
[[R&D DNS]]
[[R&D Headless Autostart]]
[[R&D Mail Server]]
[[R&D Server Daily Report]]
[[R&D Server Samba]]


! Backups 
[[R&D rsync]]

! HW failures
[[R&D drive fail]]

! Others
SSD hybrid drive option:
http://www.amazon.com/Seagate-Momentus-Solid-Hybrid-ST95005620AS/dp/B003NSBF32
small two port router: 
http://ttcshelbyville.wordpress.com/tag/smallest-router/




! 2020 update 
{{{
that machine was retired a few years ago. 
these days I only need a few "always on" environments, costing me $22/month. 
it's more expensive than what you were getting w/ a long-term on-prem machine: $1800 for 12 VMs and a bare-metal DB + tons of storage. 
but you get the flexibility, and all the overhead costs are tucked in 


the new cloud "always-on" environments are ($22/month): 

2 Digital Ocean Droplets w/ 512M memory, 20GB storage, 1CPU @ $5/month each 
both configured w/ a 1GB swapfile to trick the OS into thinking it has more memory 

        1st droplet is the DB server (12.1.0.2); I ran a Swingbench workload here for a month with no issues 
                [root@karldevfedora ~]# uptime
                22:21:52 up 966 days,  5:10,  1 user,  load average: 0.00, 0.03, 0.05

        2nd droplet is dev box (runs apache, etc.)
                root@karldevubuntu:~# uptime
                17:21:48 up 967 days,  2:51,  1 user,  load average: 0.08, 0.02, 0.01

1 win-vps bronze1 plan w/ 4GB memory, 75GB storage, 1 CPU @ $12/month 
I also created a swapfile/pagefile there for performance 


the $22 gets you the cheapest always-on cloud offerings out there, and they are pretty stable: just look at the uptimes :-)

then if I need a hadoop cluster to test stuff, I can just Vagrant-script one onto my laptop's virtualbox, or onto google compute or digital ocean 


}}}
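The 1GB swapfile mentioned above takes only a few commands on Linux. A minimal sketch, with assumptions: the file is created under $HOME here purely for illustration (a real one usually lives at /swapfile), and the root-only activation steps are left as comments:

```shell
# create a 1GB file of zeroes to use as swap
SWAPFILE="${SWAPFILE:-$HOME/swapfile}"   # real-world path is usually /swapfile
dd if=/dev/zero of="$SWAPFILE" bs=1M count=1024
chmod 600 "$SWAPFILE"                    # swap files must not be world-readable

# root-only steps: format, enable, and persist across reboots
#   mkswap "$SWAPFILE"
#   swapon "$SWAPFILE"
#   echo "$SWAPFILE none swap sw 0 0" >> /etc/fstab
```

Once `swapon` runs, `free -m` should show the extra 1024 MB of swap.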








{{{
processor       : 7
vendor_id       : GenuineIntel
cpu family      : 6
model           : 42
model name      : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping        : 7
cpu MHz         : 1600.000
cache size      : 8192 KB
physical id     : 0
siblings        : 8
core id         : 3
cpu cores       : 4
apicid          : 7
initial apicid  : 7
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes xsave avx lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips        : 6821.79
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:


OraPub CPU speed statistic is 613.404
Other statistics: stdev=6.161 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=1770)

PL/SQL procedure successfully completed.


       LIO        PIO   DURATION
---------- ---------- ----------
   1090320          0        180
   1085634          0        177
   1085634          0        176
   1085634          0        178
   1085634          0        180
   1085634          0        175
   1085634          0        176

7 rows selected.

Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux


HOST_NAME                      INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local            dw           11.2.0.3.0

1 row selected.


NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count                            integer     8
parallel_threads_per_cpu             integer     2
resource_manager_cpu_allocation      integer     8


..........................................................................
OraPub CPU speed statistic is 557.797
Other statistics: stdev=32.06 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=1951.667)

PL/SQL procedure successfully completed.


       LIO        PIO   DURATION
---------- ---------- ----------
   1090320          0        199
   1085634          0        206
   1085634          0        207
   1085634          0        203
   1085634          0        184
   1085634          0        185
   1085634          0        186

7 rows selected.

Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux


HOST_NAME                      INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local            dw           11.2.0.3.0

1 row selected.


NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count                            integer     8
parallel_threads_per_cpu             integer     2
resource_manager_cpu_allocation      integer     8

OraPub CPU speed statistic is 573.428
Other statistics: stdev=4.555 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=1893.333)

PL/SQL procedure successfully completed.


       LIO        PIO   DURATION
---------- ---------- ----------
   1090320          0        199
   1085634          0        191
   1085634          0        188
   1085634          0        190
   1085634          0        191
   1085634          0        188
   1085634          0        188

7 rows selected.

Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux


HOST_NAME                      INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local            dw           11.2.0.3.0

1 row selected.


NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count                            integer     8
parallel_threads_per_cpu             integer     2
resource_manager_cpu_allocation      integer     8

OraPub CPU speed statistic is 561.588
Other statistics: stdev=5.951 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=1933.333)

PL/SQL procedure successfully completed.


       LIO        PIO   DURATION
---------- ---------- ----------
   1090320          0        196
   1085634          0        192
   1085634          0        193
   1085634          0        193
   1085634          0        197
   1085634          0        194
   1085634          0        191

7 rows selected.

Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux


HOST_NAME                      INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local            dw           11.2.0.3.0

1 row selected.


NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count                            integer     8
parallel_threads_per_cpu             integer     2
resource_manager_cpu_allocation      integer     8


OraPub CPU speed statistic is 569.407
Other statistics: stdev=3.587 PIOs=0 pio/lio=0 avg lios/test=1085634 avg time(ms)/test=1906.667)

PL/SQL procedure successfully completed.


       LIO        PIO   DURATION
---------- ---------- ----------
   1090320          0        195
   1085634          0        190
   1085634          0        190
   1085634          0        190
   1085634          0        190
   1085634          0        191
   1085634          0        193

7 rows selected.

Linux desktopserver.local 2.6.32-200.13.1.el5uek #1 SMP Wed Jul 27 21:02:33 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux


HOST_NAME                      INSTANCE_NAM VERSION
------------------------------ ------------ -----------------
desktopserver.local            dw           11.2.0.3.0

1 row selected.


NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
cpu_count                            integer     8
parallel_threads_per_cpu             integer     2
resource_manager_cpu_allocation      integer     8
}}}
! Database
<<<
!! database app dev
http://www.oracle.com/technetwork/database/enterprise-edition/databaseappdev-vm-161299.html
	OTN_Developer_Day_VM.ova

!! oem r4 vm template
http://www.oracle.com/technetwork/oem/enterprise-manager/downloads/oem-templates-1741850.html
	V45533-01.zip
	V45532-01.zip
	V45531-01.zip
	V45530-01.zip

!! 18084575	EXADATA 12.1.1.1.1 (MOS NOTE 1667407.1) (Patch)
	p18084575_121111_Linux-x86-64.zip
	V46534-01.zip

!! database template for oem12cR4
11.2.0.3_Database_Template_for_EM12_1_0_4_Linux_x64.zip
<<<


! tableau 
	TableauDesktop-32bit.exe
	TableauDesktop-64bit.exe
	TableauServer-32bit.exe


! OS 
<<<
! oracle linux
http://www.oracle.com/technetwork/server-storage/linux/downloads/vm-for-hol-1896500.html
	OracleLinux65.ova

! solaris 11 vbox
http://www.oracle.com/technetwork/server-storage/solaris11/downloads/vm-templates-2245495.html
	sol-11_2-vbox.ova

!! oel 7 64bit 
	V46135-01.iso

!! oel 5.7 64bit 
	V27570-01.zip

!! fedora 20 
	http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

!! Oracle VM 3.2.4 Manager & Server VMs
http://www.oracle.com/technetwork/community/developer-vm/index.html#ovm
http://www.oracle.com/technetwork/server-storage/vm/template-1482544.html
http://www.oracle.com/technetwork/server-storage/vm/learnmore/index.html
OracleVMServer3.2.4-b525.ova
OracleVMManager3.2.4-b524.ova
<<<


! Storage 
<<<
!! zfs simulator 
http://www.oracle.com/technetwork/server-storage/sun-unified-storage/downloads/sun-simulator-1368816.html
OracleZFSStorageVM-OS8.2.zip
<<<


! Hadoop
<<<
!! cloudera quickstart vm
http://www.cloudera.com/content/support/en/downloads/quickstart_vms/cdh-5-1-x1.html
cloudera-quickstart-vm-5.1.0-1-virtualbox.7z

!! Big Data Lite Virtual Machine
http://www.oracle.com/technetwork/database/bigdata-appliance/oracle-bigdatalite-2104726.html
2f362b3937220c4a4b95a3ac0b1ac2a1 bigdatalite-3.0.zip.001
83d728dfbc68a84d84797048c44c001c bigdatalite-3.0.zip.002
ffae62b6469f57266bc9dfbb18e0626c bigdatalite-3.0.zip.003
4c70364e8257b14069e352397c5af49e bigdatalite-3.0.zip.004
872006a378dfa9bbba53edb9ea89ab1f bigdatalite-3.0.zip.005
693f4750563445f2613739af8bbf9574 bigdatalite-3.0.zip.006
<<<




''The BIOS does not detect or recognize the ATA / SATA hard drive''
http://knowledge.seagate.com/articles/en_US/FAQ/168595en


https://ask.fedoraproject.org/question/7231/how-to-triage-comreset-failed-error-at-startup/
http://www.tomshardware.com/forum/250403-32-disk-drive-sudden-death
[[rsync]] commands here

{{{
* note that when you pull the plug on or restart the router and the DHCP IPs get shuffled, all you 
have to do is delete known_hosts, create an empty known_hosts file, and then edit the /etc/hosts entries 
with the new IPs of the devices.. of course, before that, run nmap -sP 192.168.203.* to check which devices are "alive"

* after that, you should be able to log in to the devices passwordlessly and start the rsync
}}}
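Sketched as a script; the subnet, the backupuser login, and the mybooklive host are assumptions from my setup, and the network and root-only steps are left as comments:

```shell
#!/bin/sh
# recover passwordless ssh + rsync after the router hands out new DHCP IPs
SUBNET="192.168.203.*"
KNOWN_HOSTS="${KNOWN_HOSTS:-$HOME/.ssh/known_hosts}"

# 1) find which devices are "alive" on the new leases
#    (newer nmap spells -sP as -sn):
#      nmap -sP "$SUBNET"

# 2) the cached host keys now point at the wrong IPs:
#    back them up and start with an empty known_hosts
mkdir -p "$(dirname "$KNOWN_HOSTS")"
if [ -f "$KNOWN_HOSTS" ]; then
    mv "$KNOWN_HOSTS" "$KNOWN_HOSTS.bak"
fi
: > "$KNOWN_HOSTS"

# 3) edit /etc/hosts with the new IPs (root), verify passwordless
#    login works again, then restart the rsync, e.g.:
#      ssh backupuser@mybooklive true && rsync -av ~/files/ backupuser@mybooklive:/backup/
```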

[[WD-MyBookLive]]
[[rsync]] shows how I back up my files and iPhone/iPad to the WD MyBookLive 
http://www.freelists.org/post/oracle-l/IO-performance,13
https://mail.google.com/mail/u/0/#search/oracle-l+short+stroke/137cab643fef1ed4
* Text Processing in Python https://www.amazon.com/Text-Processing-Python-David-Mertz/dp/0321112547/ref=sr_1_fkmr2_3?ie=UTF8&qid=1486829301&sr=8-3-fkmr2&keywords=python+text+processing+tricks
* Python 3 Text Processing with NLTK 3 Cookbook  https://www.amazon.com/Python-Text-Processing-NLTK-Cookbook/dp/1782167854/ref=sr_1_fkmr2_1?ie=UTF8&qid=1486829301&sr=8-1-fkmr2&keywords=python+text+processing+tricks
* Text Analytics with Python: A Practical Real-World Approach to Gaining Actionable Insights from your Data https://www.amazon.com/Text-Analytics-Python-Real-World-Actionable/dp/148422387X/ref=pd_rhf_se_s_cp_1?_encoding=UTF8&pd_rd_i=148422387X&pd_rd_r=BQ961BNXXCRJC8FF1M0T&pd_rd_w=nXtOu&pd_rd_wg=f3MpB&psc=1&refRID=BQ961BNXXCRJC8FF1M0T
* Automate the Boring Stuff with Python: Practical Programming for Total Beginners https://www.amazon.com/Automate-Boring-Stuff-Python-Programming/dp/1593275994/ref=pd_rhf_se_s_cp_2?_encoding=UTF8&pd_rd_i=1593275994&pd_rd_r=BQ961BNXXCRJC8FF1M0T&pd_rd_w=nXtOu&pd_rd_wg=f3MpB&psc=1&refRID=BQ961BNXXCRJC8FF1M0T
* Violent Python: A Cookbook for Hackers, Forensic Analysts, Penetration Testers and Security Engineers  https://www.amazon.com/Violent-Python-Cookbook-Penetration-Engineers/dp/1597499579/ref=pd_rhf_se_s_cp_6?_encoding=UTF8&pd_rd_i=1597499579&pd_rd_r=BQ961BNXXCRJC8FF1M0T&pd_rd_w=nXtOu&pd_rd_wg=f3MpB&psc=1&refRID=BQ961BNXXCRJC8FF1M0T
* Data Wrangling with Python: Tips and Tools to Make Your Life Easier https://www.amazon.com/Data-Wrangling-Python-Tools-Easier/dp/1491948817/ref=pd_rhf_se_s_cp_5?_encoding=UTF8&pd_rd_i=1491948817&pd_rd_r=BQ961BNXXCRJC8FF1M0T&pd_rd_w=nXtOu&pd_rd_wg=f3MpB&psc=1&refRID=BQ961BNXXCRJC8FF1M0T
* Text Mining with R: A tidy approach https://www.amazon.com/Text-Mining-R-tidy-approach/dp/1491981652/ref=sr_1_1?ie=UTF8&qid=1486829376&sr=8-1&keywords=R+text+processing
* Mastering Text Mining with R https://www.amazon.com/Mastering-Text-Mining-Ashish-Kumar/dp/178355181X/ref=pd_sbs_14_1?_encoding=UTF8&pd_rd_i=178355181X&pd_rd_r=QJR23QWXME6RM0EX70S9&pd_rd_w=1qsnW&pd_rd_wg=Qb4hQ&psc=1&refRID=QJR23QWXME6RM0EX70S9
* An Introduction to Information Theory: Symbols, Signals and Noise (Dover Books on Mathematics) https://www.amazon.com/Introduction-Information-Theory-Symbols-Mathematics/dp/0486240614/ref=sr_1_8?ie=UTF8&qid=1486829376&sr=8-8&keywords=R+text+processing
* Automated Data Collection with R: A Practical Guide to Web Scraping and Text Mining https://www.amazon.com/Automated-Data-Collection-Practical-Scraping-ebook/dp/B014T25K5O/ref=sr_1_7?ie=UTF8&qid=1486829376&sr=8-7&keywords=R+text+processing
* Pro Bash Programming: Scripting the Linux Shell (Expert's Voice in Linux)  https://www.amazon.com/Pro-Bash-Programming-Scripting-Experts/dp/1430219971/ref=sr_1_fkmr0_2?ie=UTF8&qid=1486857179&sr=8-2-fkmr0&keywords=bash+text+processing
* Python Scripting for Computational Science: 3 (Texts in Computational Science and Engineering) https://www.amazon.com/Python-Scripting-Computational-Science-Engineering-ebook/dp/B001NLKSSO/ref=mt_kindle?_encoding=UTF8&me=
* Network Programmability and Automation: Skills for the Next-Generation Network Engineer  https://www.amazon.com/Network-Programmability-Automation-Next-Generation-Engineer/dp/1491931256/ref=sr_1_17?ie=UTF8&qid=1487107643&sr=8-17&keywords=python+scripting
* Taming Text: How to Find, Organize, and Manipulate It https://www.safaribooksonline.com/library/view/taming-text-how/9781933988382/


! bash 
* Wicked Cool Shell Scripts, 2nd Edition https://www.safaribooksonline.com/library/view/wicked-cool-shell/9781492018322/



! ruby 
* Text Processing with Ruby https://www.safaribooksonline.com/library/view/text-processing-with/9781680501575/
* Practical Ruby for System Administration  https://www.amazon.com/Practical-System-Administration-Experts-Source/dp/1590598210/ref=sr_1_304?ie=UTF8&qid=1487114901&sr=8-304&keywords=python+cloud
* Everyday Scripting with Ruby: For Teams, Testers, and You 1st Ed https://www.amazon.com/Everyday-Scripting-Ruby-Teams-Testers/dp/0977616614/ref=pd_sim_14_1?_encoding=UTF8&pd_rd_i=0977616614&pd_rd_r=09RA80YPVNPHFX6W9HRC&pd_rd_w=fpkVD&pd_rd_wg=xTaj2&psc=1&refRID=09RA80YPVNPHFX6W9HRC
* Wicked Cool Ruby Scripts: Useful Scripts that Solve Difficult Problems 1st Ed https://www.amazon.com/Wicked-Cool-Ruby-Scripts-Difficult/dp/1593271824/ref=pd_sim_14_4?_encoding=UTF8&pd_rd_i=1593271824&pd_rd_r=3NVCPNJ5PT5MGVZ7XB8X&pd_rd_w=pDh4w&pd_rd_wg=7Nm4C&psc=1&refRID=3NVCPNJ5PT5MGVZ7XB8X


! python 
* Python Quick Start for Linux System Administrators https://app.pluralsight.com/library/courses/python-linux-system-administrators/table-of-contents	
* The Hitchhiker's Guide to Python: Best Practices for Development https://www.amazon.com/Hitchhikers-Guide-Python-Practices-Development/dp/1491933178/ref=pd_sim_14_12?_encoding=UTF8&pd_rd_i=1491933178&pd_rd_r=KEAW97J3NQEXG21CKWBY&pd_rd_w=QIwyC&pd_rd_wg=pzB4O&psc=1&refRID=KEAW97J3NQEXG21CKWBY
* Python Data Science Handbook: Essential Tools for Working with Data https://www.amazon.com/Python-Data-Science-Handbook-Essential/dp/1491912057/ref=pd_sim_14_3?_encoding=UTF8&pd_rd_i=1491912057&pd_rd_r=KEAW97J3NQEXG21CKWBY&pd_rd_w=QIwyC&pd_rd_wg=pzB4O&psc=1&refRID=KEAW97J3NQEXG21CKWBY
* Data Visualization with Python and JavaScript: Scrape, Clean, Explore & Transform Your Data https://www.amazon.com/Data-Visualization-Python-JavaScript-Transform/dp/1491920513/ref=pd_sim_14_4?_encoding=UTF8&pd_rd_i=1491920513&pd_rd_r=WAW4GKP39JZJPXKYZ54H&pd_rd_w=2BBbQ&pd_rd_wg=oYNJw&psc=1&refRID=WAW4GKP39JZJPXKYZ54H
* Mining the Social Web: Data Mining Facebook, Twitter, LinkedIn, Google+, GitHub, and More https://www.amazon.com/Mining-Social-Web-Facebook-LinkedIn/dp/1449367615/ref=sr_1_160?ie=UTF8&qid=1487114754&sr=8-160&keywords=python+cloud
* Web Scraping with Python: Collecting Data from the Modern Web https://www.amazon.com/Web-Scraping-Python-Collecting-Modern/dp/1491910291/ref=pd_bxgy_14_img_2?_encoding=UTF8&pd_rd_i=1491910291&pd_rd_r=4J8EDG8PV293H5RYSV8E&pd_rd_w=piDKd&pd_rd_wg=aqY1t&psc=1&refRID=4J8EDG8PV293H5RYSV8E
* Building Data Pipelines with Python https://www.safaribooksonline.com/library/view/building-data-pipelines/9781491970270/
* Fluent Python: Clear, Concise, and Effective Programming https://www.amazon.com/Fluent-Python-Concise-Effective-Programming/dp/1491946008/ref=pd_bxgy_14_img_2?_encoding=UTF8&pd_rd_i=1491946008&pd_rd_r=BDT30TJXXE3E6CNWNQ87&pd_rd_w=Sf9I5&pd_rd_wg=HjtNo&psc=1&refRID=BDT30TJXXE3E6CNWNQ87   <- nicee!
* Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython https://www.amazon.com/Python-Data-Analysis-Wrangling-IPython/dp/1449319793/ref=sr_1_4?ie=UTF8&qid=1487114575&sr=8-4&keywords=python+cloud
* Python for Data Science For Dummies https://www.amazon.com/Python-Data-Science-Dummies-Computers-ebook/dp/B00TWK3RHW/ref=mt_kindle?_encoding=UTF8&me=
*  Python: Visual QuickStart Guide, Third Edition https://www.safaribooksonline.com/library/view/python-visual-quickstart/9780133435160/ch11.html
* Python Essential Reference (4th Edition)  https://www.amazon.com/Python-Essential-Reference-David-Beazley/dp/0672329786/ref=pd_bxgy_14_img_3?_encoding=UTF8&pd_rd_i=0672329786&pd_rd_r=QJ4H03MEJM726QZG56QZ&pd_rd_w=ZKBa5&pd_rd_wg=Btaz5&psc=1&refRID=QJ4H03MEJM726QZG56QZ
* Python in Practice: Create Better Programs Using Concurrency, Libraries, and Patterns (Developer's Library) https://www.amazon.com/Python-Practice-Concurrency-Libraries-Developers/dp/0321905636/ref=sr_1_106?ie=UTF8&qid=1487114691&sr=8-106&keywords=python+cloud
* Data Science from Scratch: First Principles with Python https://www.amazon.com/Data-Science-Scratch-Principles-Python/dp/149190142X/ref=pd_bxgy_14_img_3?_encoding=UTF8&pd_rd_i=149190142X&pd_rd_r=WAW4GKP39JZJPXKYZ54H&pd_rd_w=0j9ZK&pd_rd_wg=oYNJw&psc=1&refRID=WAW4GKP39JZJPXKYZ54H
* Data Wrangling with Python: Tips and Tools to Make Your Life Easier - REST vs Streaming API https://www.amazon.com/Data-Wrangling-Python-Tools-Easier/dp/1491948817/ref=pd_sim_14_6?_encoding=UTF8&pd_rd_i=1491948817&pd_rd_r=KEAW97J3NQEXG21CKWBY&pd_rd_w=QIwyC&pd_rd_wg=pzB4O&psc=1&refRID=KEAW97J3NQEXG21CKWBY

* Python for Unix and Linux System Administration 1st Ed https://www.amazon.com/Python-Unix-Linux-System-Administration/dp/0596515820/ref=sr_1_4?ie=UTF8&qid=1487708727&sr=8-4&keywords=python+system+administration
* Pro Python System Administration 1st Ed https://www.amazon.com/Python-System-Administration-Experts-Source/dp/1430226056/ref=sr_1_3?ie=UTF8&qid=1487708727&sr=8-3&keywords=python+system+administration
* Pro Python System Administration 2nd ed https://www.amazon.com/Python-System-Administration-Rytis-Sileika/dp/148420218X/ref=sr_1_1?ie=UTF8&qid=1487708727&sr=8-1&keywords=python+system+administration
* Foundations of Python Network Programming 3rd ed https://www.amazon.com/Foundations-Python-Network-Programming-Brandon/dp/1430258543/ref=pd_bxgy_14_img_2?_encoding=UTF8&pd_rd_i=1430258543&pd_rd_r=MGTKMKGFA3SGN0Z8XW2T&pd_rd_w=ppXt3&pd_rd_wg=gKBwL&psc=1&refRID=MGTKMKGFA3SGN0Z8XW2T



! hadoop
* Data Analytics with Hadoop: An Introduction for Data Scientists https://www.amazon.com/Data-Analytics-Hadoop-Introduction-Scientists/dp/1491913703/ref=pd_sim_14_18?_encoding=UTF8&pd_rd_i=1491913703&pd_rd_r=KEAW97J3NQEXG21CKWBY&pd_rd_w=QIwyC&pd_rd_wg=pzB4O&psc=1&refRID=KEAW97J3NQEXG21CKWBY




! spark 
* Advanced Analytics with Spark: Patterns for Learning from Data at Scale https://www.amazon.com/Advanced-Analytics-Spark-Patterns-Learning/dp/1491912766/ref=pd_sim_14_27?_encoding=UTF8&pd_rd_i=1491912766&pd_rd_r=KEAW97J3NQEXG21CKWBY&pd_rd_w=QIwyC&pd_rd_wg=pzB4O&psc=1&refRID=KEAW97J3NQEXG21CKWBY



! regex
* Regular Expression Recipes: A Problem-Solution Approach https://www.amazon.com/Regular-Expression-Recipes-Problem-Solution-Approach/dp/159059441X/ref=sr_1_189?ie=UTF8&qid=1487114792&sr=8-189&keywords=python+cloud



! bad data
* Bad Data Handbook: Cleaning Up The Data So You Can Get Back To Work https://www.amazon.com/Bad-Data-Handbook-Cleaning-Back/dp/1449321887/ref=sr_1_272?ie=UTF8&qid=1487114866&sr=8-272&keywords=python+cloud
















<<showtoc>>

! python data cleaning 
* Practical Data Cleaning with Python https://www.safaribooksonline.com/live-training/courses/practical-data-cleaning-with-python/0636920152798/#schedule
** code repo: https://resources.oreilly.com/live-training/practical-data-cleaning-with-python
* Data Wrangling with Python: Tips and Tools to Make Your Life Easier - https://www.amazon.com/Data-Wrangling-Python-Tools-Easier/dp/1491948817/ref=pd_sim_14_6?_encoding=UTF8&pd_rd_i=1491948817&pd_rd_r=KEAW97J3NQEXG21CKWBY&pd_rd_w=QIwyC&pd_rd_wg=pzB4O&psc=1&refRID=KEAW97J3NQEXG21CKWBY


! python and text
* video: Natural Language Text Processing with Python https://www.safaribooksonline.com/library/view/natural-language-text/9781491976487/
* NLP applied - Applied Text Analysis with Python https://www.safaribooksonline.com/library/view/applied-text-analysis/9781491963036/
* Python for Secret Agents vol1 - https://www.safaribooksonline.com/library/view/python-for-secret/9781783980420/ , https://www.amazon.com/Python-Secret-Agents-Steven-Lott-ebook/dp/B00N2RWMMW/ref=asap_bc?ie=UTF8
* Python for Secret Agents vol2 - https://www.safaribooksonline.com/library/view/python-for-secret/9781785283406/ , https://www.amazon.com/gp/product/B017XSFKHY/ref=dbs_a_def_rwt_bibl_vppi_i3
* Text Processing in Python - https://www.safaribooksonline.com/library/view/text-processing-in/0321112547/ , https://www.amazon.com/Text-Processing-Python-David-Mertz/dp/0321112547/ref=sr_1_fkmr2_3?ie=UTF8&qid=1486829301&sr=8-3-fkmr2&keywords=python+text+processing+tricks
* NLP - Text Analytics with Python: A Practical Real-World Approach to Gaining Actionable Insights from your Data https://www.safaribooksonline.com/library/view/text-analytics-with/9781484223871/ ,  https://www.amazon.com/Text-Analytics-Python-Real-World-Actionable/dp/148422387X/ref=pd_rhf_se_s_cp_1?_encoding=UTF8&pd_rd_i=148422387X&pd_rd_r=BQ961BNXXCRJC8FF1M0T&pd_rd_w=nXtOu&pd_rd_wg=f3MpB&psc=1&refRID=BQ961BNXXCRJC8FF1M0T
* NLP - Taming Text: How to Find, Organize, and Manipulate It - https://www.safaribooksonline.com/library/view/taming-text-how/9781933988382/


! python and ML 
* Data Mining for Business Analytics: Concepts, Techniques, and Applications in R https://www.amazon.com/Data-Mining-Business-Analytics-Applications/dp/1118879368/ref=asap_bc?ie=UTF8
* Introduction to Machine Learning with Python: A Guide for Data Scientists https://www.amazon.com/Introduction-Machine-Learning-Python-Scientists/dp/1449369413/ref=sr_1_3?ie=UTF8&qid=1526456824&sr=8-3&keywords=Introduction+to+Machine+Learning+with+Python&dpID=51ZPksI0E9L&preST=_SX218_BO1,204,203,200_QL40_&dpSrc=srch
* Mastering Machine Learning with Python in Six Steps: A Practical Implementation Guide to Predictive Data Analytics Using Python https://www.safaribooksonline.com/library/view/mastering-machine-learning/9781484228661/



! python and R
* Python for R Users: A Data Science Approach 1st Edition https://www.amazon.com/Python-Users-Data-Science-Approach/dp/1119126762/ref=sr_1_10?s=books&ie=UTF8&qid=1527090532&sr=1-10&keywords=python+cloud+computing



! visualization 
* Making data visual https://www.safaribooksonline.com/library/view/making-data-visual/9781491960493/ch08.html#casestudies_fruitfly
** code repo and examples: https://makingdatavisual.github.io/figurelist.html#fourviews
** https://github.com/MakingDataVisual/makingdatavisual.github.io
** https://resources.oreilly.com/examples/0636920041320




! python - definitive guides
* https://learnxinyminutes.com/docs/python/
* https://learnxinyminutes.com/docs/python3/
* https://learnxinyminutes.com/docs/pythonstatcomp/
* https://learnxinyminutes.com/docs/r/
* https://learnxinyminutes.com/docs/ruby/
* https://learnxinyminutes.com/docs/javascript/
* https://learnxinyminutes.com/docs/json/
* https://learnxinyminutes.com/docs/bash/
* Programming Python, 3rd Edition https://www.safaribooksonline.com/library/view/programming-python-3rd/0596009259/
** code repo: https://resources.oreilly.com/examples/9780596009250
* Fluent Python: Clear, Concise, and Effective Programming - https://www.amazon.com/Fluent-Python-Concise-Effective-Programming/dp/1491946008/ref=pd_bxgy_14_img_3?_encoding=UTF8&pd_rd_i=1491946008&pd_rd_r=7FEQ86HMR2M56ZD7ANWF&pd_rd_w=xkHOG&pd_rd_wg=uUBgD&psc=1&refRID=7FEQ86HMR2M56ZD7ANWF
* The Hitchhiker's Guide to Python: Best Practices for Development - https://www.amazon.com/Hitchhikers-Guide-Python-Practices-Development/dp/1491933178/ref=pd_sim_14_12?_encoding=UTF8&pd_rd_i=1491933178&pd_rd_r=KEAW97J3NQEXG21CKWBY&pd_rd_w=QIwyC&pd_rd_wg=pzB4O&psc=1&refRID=KEAW97J3NQEXG21CKWBY
* Mastering Object-oriented Python https://www.amazon.com/Mastering-Object-oriented-Python-Steven-Lott/dp/1783280972/ref=pd_bxgy_14_img_2?_encoding=UTF8&pd_rd_i=1783280972&pd_rd_r=7FEQ86HMR2M56ZD7ANWF&pd_rd_w=xkHOG&pd_rd_wg=uUBgD&psc=1&refRID=7FEQ86HMR2M56ZD7ANWF
* Python Pocket Reference: Python In Your Pocket https://www.amazon.com/Python-Pocket-Reference-Your-OReilly/dp/1449357016/ref=pd_sbs_14_3?_encoding=UTF8&pd_rd_i=1449357016&pd_rd_r=FPF805WNWTGN2YRE8GND&pd_rd_w=513QW&pd_rd_wg=QCK4v&psc=1&refRID=FPF805WNWTGN2YRE8GND
* Python Crash Course: A Hands-On, Project-Based Introduction to Programming https://www.amazon.com/Python-Crash-Course-Hands-Project-Based/dp/1593276036/ref=pd_sbs_14_1?_encoding=UTF8&pd_rd_i=1593276036&pd_rd_r=FPF805WNWTGN2YRE8GND&pd_rd_w=513QW&pd_rd_wg=QCK4v&psc=1&refRID=FPF805WNWTGN2YRE8GND
* A Smarter Way to Learn Python: Learn it faster. Remember it longer https://www.amazon.com/Smarter-Way-Learn-Python-Remember-ebook/dp/B077Z55G3B/ref=tmm_kin_swatch_0?_encoding=UTF8&qid=&sr=
* Python Playground: Geeky Projects for the Curious Programmer https://www.amazon.com/Python-Playground-Projects-Curious-Programmer/dp/1593276044/ref=pd_sbs_14_5?_encoding=UTF8&pd_rd_i=1593276044&pd_rd_r=FPF805WNWTGN2YRE8GND&pd_rd_w=513QW&pd_rd_wg=QCK4v&psc=1&refRID=FPF805WNWTGN2YRE8GND
* Learn Python 3 the Hard Way: A Very Simple Introduction to the Terrifyingly Beautiful World of Computers and Code https://www.amazon.com/Learn-Python-Hard-Way-Introduction/dp/0134692888/ref=pd_sbs_14_12?_encoding=UTF8&pd_rd_i=0134692888&pd_rd_r=FPF805WNWTGN2YRE8GND&pd_rd_w=513QW&pd_rd_wg=QCK4v&psc=1&refRID=FPF805WNWTGN2YRE8GND


! python and databases
* python flyby randy johnson https://github.com/dallasdba/dbascripts
* python for pl/sql developers http://arup.blogspot.com/2017/01/python-for-plsql-developers-series.html
* mysql for python https://www.safaribooksonline.com/library/view/mysql-for-python/9781849510189/
** code repo: https://github.com/mythstack/MySQL-for-Python-Example-Code
** https://resources.oreilly.com/examples/9781849510189/tree/master



! python and hadoop
https://www.amazon.com/Hadoop-Python-Donald-Miner-ebook/dp/B07D1MP4HS/ref=sr_1_7?ie=UTF8&qid=1526432174&sr=8-7&keywords=python+hadoop&dpID=512eUcTjkTL&preST=_SY445_QL70_&dpSrc=srch



! python and network
Foundations of Python Network Programming https://www.amazon.com/Foundations-Python-Network-Programming-Brandon/dp/1430258543/ref=pd_bxgy_14_img_2?_encoding=UTF8&pd_rd_i=1430258543&pd_rd_r=MGTKMKGFA3SGN0Z8XW2T&pd_rd_w=ppXt3&pd_rd_wg=gKBwL&psc=1&refRID=MGTKMKGFA3SGN0Z8XW2T



! python and sysad 
* video: Python Quick Start for Linux System Administrators
* Pro Python System Administration 2nd ed. Edition https://www.amazon.com/Python-System-Administration-Rytis-Sileika/dp/148420218X/ref=sr_1_1?ie=UTF8&qid=1487708727&sr=8-1&keywords=python+system+administration
* Python for Unix and Linux System Administration 1st Edition https://www.amazon.com/Python-Unix-Linux-System-Administration/dp/0596515820/ref=sr_1_4?ie=UTF8&qid=1487708727&sr=8-4&keywords=python+system+administration



! python and cloud 
* Network Programmability and Automation: Skills for the Next-Generation Network Engineer https://www.amazon.com/Network-Programmability-Automation-Next-Generation-Engineer/dp/1491931256/ref=pd_cp_14_1?_encoding=UTF8&pd_rd_i=1491931256&pd_rd_r=2c30dcee-5ea3-11e8-895a-49ce3940778b&pd_rd_w=agUgD&pd_rd_wg=5rhXw&pf_rd_i=desktop-dp-sims&pf_rd_m=ATVPDKIKX0DER&pf_rd_p=80460301815383741&pf_rd_r=PPAFW1MPB01E7178K92N&pf_rd_s=desktop-dp-sims&pf_rd_t=40701&psc=1&refRID=PPAFW1MPB01E7178K92N  
* Practical Network Automation: Leverage the power of Python and Ansible to optimize your network https://www.amazon.com/Practical-Network-Automation-Leverage-optimize/dp/1788299469/ref=pd_bxgy_14_img_3?_encoding=UTF8&pd_rd_i=1788299469&pd_rd_r=2c30dcee-5ea3-11e8-895a-49ce3940778b&pd_rd_w=D611W&pd_rd_wg=5rhXw&pf_rd_i=desktop-dp-sims&pf_rd_m=ATVPDKIKX0DER&pf_rd_p=3914568618330124508&pf_rd_r=PPAFW1MPB01E7178K92N&pf_rd_s=desktop-dp-sims&pf_rd_t=40701&psc=1&refRID=PPAFW1MPB01E7178K92N
** Extending Ansible https://www.amazon.com/Extending-Ansible-Rishabh-Das-ebook/dp/B01BSTEDL8/ref=sr_1_4?ie=UTF8&qid=1527091700&sr=8-4&keywords=python+ansible&dpID=515EZNRMhZL&preST=_SX342_QL70_&dpSrc=srch
* Mastering Python Networking: Your one stop solution to using Python for network automation, DevOps, and SDN https://www.amazon.com/Mastering-Python-Networking-solution-automation/dp/1784397008/ref=sr_1_5_sspa?ie=UTF8&qid=1527091545&sr=8-5-spons&keywords=cloud+automation+python&psc=1
* Programming Google App Engine with Python: Build and Run Scalable Python Apps on Google's Infrastructure https://www.amazon.com/Programming-Google-Engine-Python-Infrastructure/dp/1491900253/ref=sr_1_7?s=books&ie=UTF8&qid=1527090532&sr=1-7&keywords=python+cloud+computing
* Cloud Native Python: Build and deploy resilient applications on the cloud using microservices, AWS, Azure and more https://www.amazon.com/Cloud-Native-Python-applications-microservices/dp/1787129314/ref=sr_1_1_sspa?s=books&ie=UTF8&qid=1527090532&sr=1-1-spons&keywords=python+cloud+computing&psc=1



! python and PL/SQL
* Introduction to Python for PL/SQL Developers - Full Series https://community.oracle.com/docs/DOC-1005069
* Learning PYTHON for PLSQL Developers https://www.youtube.com/watch?v=FbssyLrfkzo





! bash and sysad 
* Command Line Kung Fu: Bash Scripting Tricks, Linux Shell Programming Tips, and Bash One-liners  https://www.amazon.com/Command-Line-Kung-Programming-One-liners-ebook/dp/B00JRGCFLA/ref=sr_1_4?s=books&ie=UTF8&qid=1527090482&sr=1-4&keywords=bash+system+administration&dpID=41PVsjWk4OL&preST=_SY445_QL70_&dpSrc=srch







11.2.0.2 Grid infrastructure, private interconnect bonding new feature HAIP http://dbastreet.com/blog/?p=515
http://www.cyberciti.biz/tips/linux-bond-or-team-multiple-network-interfaces-nic-into-single-interface.html <-- nice linux guide
http://www.pythian.com/blog/changing-hostnames-in-oracle-rac/
{{{

------------------------------------------------
Change IP Step by Step: 
------------------------------------------------

Scenario:

      A multinational company has two subsidiaries (Company A and Company B) located in one building, with their servers residing in one data center. 
      Company A was acquired by another private company, and because of this Company B has to change its subnet from 192.168.203 to 172.168.203.
      Below are the old entries of the /etc/hosts file of Company B: 

	    [root@racnode1 ~]# cat /etc/hosts
	    # Do not remove the following line, or various programs
	    # that require network functionality will fail.
	    127.0.0.1	localhost.localdomain	localhost

	    # Public Network (eth0)
	    192.168.203.11	racnode1.us.oracle.com	racnode1
	    192.168.203.12	racnode2.us.oracle.com	racnode2

	    # Public VIP
	    192.168.203.111	racnode1-vip.us.oracle.com	racnode1-vip
	    192.168.203.112	racnode2-vip.us.oracle.com	racnode2-vip

	    # Private Interconnect
	    10.10.10.11	racnode1-priv.us.oracle.com	racnode1-priv
	    10.10.10.12	racnode2-priv.us.oracle.com	racnode2-priv

      Below will be the new entries of /etc/hosts file of Company B: 

	    [root@racnode1 ~]# cat /etc/hosts
	    # Do not remove the following line, or various programs
	    # that require network functionality will fail.
	    127.0.0.1	localhost.localdomain	localhost

	    # Public Network (eth0)
	    172.168.203.11	racnode1.us.oracle.com	racnode1
	    172.168.203.12	racnode2.us.oracle.com	racnode2

	    # Public VIP
	    172.168.203.111	racnode1-vip.us.oracle.com	racnode1-vip
	    172.168.203.112	racnode2-vip.us.oracle.com	racnode2-vip

	    # Private Interconnect
	    10.10.10.11	racnode1-priv.us.oracle.com	racnode1-priv
	    10.10.10.12	racnode2-priv.us.oracle.com	racnode2-priv

Considerations:
  
      - There will be no data center movement; only physical rewiring will happen. Some servers of Company B were already moved to 172.168.203; only the RAC
	servers were left. A route going to 192.168.203 still existed, which is why those servers could still be reached
      - The EMC CX500 storage is assigned to the 192.168.203 subnet together with the management console. The EMC engineer said there will be no problems
	with the IP address change on the RAC servers
      - Since the IP addresses will be changed, the Net Services entries have to be modified
      - The database links going to the RAC servers also have to be modified to reflect the new IPs
      - DNS entries on the 172.168.203 subnet have to be created
      - DNS entries on the 192.168.203 subnet have to be deleted
      - NFS mountpoints on the servers should be noted; edit /etc/exports on the source servers to reflect the new IPs

So here it goes...

1) Shut down everything except the CRS stack (execute on racnode1)

      a) verify the status

	    [oracle@racnode1 ~]$ crs_stat2
	    HA Resource                                   Target     State             
	    -----------                                   ------     -----             
	    ora.orcl.db                                   ONLINE     ONLINE on racnode1
	    ora.orcl.orcl1.inst                           ONLINE     ONLINE on racnode1
	    ora.orcl.orcl2.inst                           ONLINE     ONLINE on racnode2
	    ora.orcl.orcl_service.cs                      ONLINE     ONLINE on racnode1
	    ora.orcl.orcl_service.orcl1.srv               ONLINE     ONLINE on racnode1
	    ora.orcl.orcl_service.orcl2.srv               ONLINE     ONLINE on racnode2
	    ora.racnode1.ASM1.asm                         ONLINE     ONLINE on racnode1
	    ora.racnode1.LISTENER_RACNODE1.lsnr           ONLINE     ONLINE on racnode1
	    ora.racnode1.gsd                              ONLINE     ONLINE on racnode1
	    ora.racnode1.ons                              ONLINE     ONLINE on racnode1
	    ora.racnode1.vip                              ONLINE     ONLINE on racnode1
	    ora.racnode2.ASM2.asm                         ONLINE     ONLINE on racnode2
	    ora.racnode2.LISTENER_RACNODE2.lsnr           ONLINE     ONLINE on racnode2
	    ora.racnode2.gsd                              ONLINE     ONLINE on racnode2
	    ora.racnode2.ons                              ONLINE     ONLINE on racnode2
	    ora.racnode2.vip                              ONLINE     ONLINE on racnode2


      b) stop the services, instances, ASM, and nodeapps (execute on racnode1)

	    [oracle@racnode1 ~]$ srvctl stop service -d orcl
	    [oracle@racnode1 ~]$ srvctl stop database -d orcl
	    [oracle@racnode1 ~]$ srvctl stop asm -n racnode1
	    [oracle@racnode1 ~]$ srvctl stop asm -n racnode2
	    [oracle@racnode1 ~]$ srvctl stop nodeapps -n racnode1
	    [oracle@racnode1 ~]$ srvctl stop nodeapps -n racnode2

	    [oracle@racnode1 ~]$ crs_stat2
	    HA Resource                                   Target     State             
	    -----------                                   ------     -----             
	    ora.orcl.db                                   OFFLINE    OFFLINE           
	    ora.orcl.orcl1.inst                           OFFLINE    OFFLINE           
	    ora.orcl.orcl2.inst                           OFFLINE    OFFLINE           
	    ora.orcl.orcl_service.cs                      OFFLINE    OFFLINE           
	    ora.orcl.orcl_service.orcl1.srv               OFFLINE    OFFLINE           
	    ora.orcl.orcl_service.orcl2.srv               OFFLINE    OFFLINE           
	    ora.racnode1.ASM1.asm                         OFFLINE    OFFLINE           
	    ora.racnode1.LISTENER_RACNODE1.lsnr           OFFLINE    OFFLINE           
	    ora.racnode1.gsd                              OFFLINE    OFFLINE           
	    ora.racnode1.ons                              OFFLINE    OFFLINE           
	    ora.racnode1.vip                              OFFLINE    OFFLINE           
	    ora.racnode2.ASM2.asm                         OFFLINE    OFFLINE           
	    ora.racnode2.LISTENER_RACNODE2.lsnr           OFFLINE    OFFLINE           
	    ora.racnode2.gsd                              OFFLINE    OFFLINE           
	    ora.racnode2.ons                              OFFLINE    OFFLINE           
	    ora.racnode2.vip                              OFFLINE    OFFLINE           


2) Backup OCR and Voting Disk (execute on racnode1)

      a) Query OCR location

	    [oracle@racnode1 ~]$ ocrcheck
	    Status of Oracle Cluster Registry is as follows :
		    Version                  :          2
		    Total space (kbytes)     :     262144
		    Used space (kbytes)      :       4592
		    Available space (kbytes) :     257552
		    ID                       : 1841304007
		    Device/File Name         : /u02/oradata/orcl/OCRFile
						Device/File integrity check succeeded

						Device/File not configured

		    Cluster registry integrity check succeeded

      b) Query Voting Disk location

	    [oracle@racnode1 ~]$ crsctl query css votedisk
	    0.     0    /u02/oradata/orcl/CSSFile

	    located 1 votedisk(s).

      c) Back up the files using "dd"

	    dd if=/u02/oradata/orcl/OCRFile of=/u03/flash_recovery_area/OCRFile_backup
	    dd if=/u02/oradata/orcl/CSSFile of=/u03/flash_recovery_area/CSSFile_backup
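It is worth verifying the dd copies by checksum before proceeding. A minimal sketch, using /tmp stand-in files instead of the real OCR and voting disk paths:

```shell
# Hedged sketch: copy a file with dd and confirm the backup matches the
# source via cksum. Substitute the real OCRFile/CSSFile paths on the cluster.
src=/tmp/OCRFile.demo
dst=/tmp/OCRFile_backup.demo
printf 'demo cluster registry\n' > "$src"   # stand-in for the real OCR
dd if="$src" of="$dst" 2>/dev/null
# identical checksums mean the backup is byte-for-byte intact
[ "$(cksum < "$src")" = "$(cksum < "$dst")" ] && echo "backup verified"
```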


3) Change the public interface

      a) Verify first the interface (both nodes)

	    [oracle@racnode1 ~]$ $ORA_CRS_HOME/bin/oifcfg getif 
	    eth0  192.168.203.0  global  public
	    eth1  10.10.10.0  global  cluster_interconnect

	    [oracle@racnode2 ~]$ $ORA_CRS_HOME/bin/oifcfg getif 
	    eth0  192.168.203.0  global  public
	    eth1  10.10.10.0  global  cluster_interconnect

      b) View the available interface names on each node by running the command (both nodes)

	    [oracle@racnode1 ~]$ oifcfg iflist
	    eth0  192.168.203.0
	    eth1  10.10.10.0

	    [oracle@racnode2 ~]$ oifcfg iflist
	    eth0  192.168.203.0
	    eth1  10.10.10.0

      c) In our case the interface eth0 has to be changed. There is no modify command, so we have to delete and redefine the interface. 
	  When you execute "oifcfg", the changes are also reflected on the other nodes. (execute on racnode1)

	    [oracle@racnode1 ~]$ oifcfg delif -global eth0

	    [oracle@racnode1 ~]$ $ORA_CRS_HOME/bin/oifcfg getif 
	    eth1  10.10.10.0  global  cluster_interconnect

	    [oracle@racnode2 ~]$ $ORA_CRS_HOME/bin/oifcfg getif 
	    eth1  10.10.10.0  global  cluster_interconnect

	    [oracle@racnode1 ~]$ oifcfg setif -global eth0/172.168.203.0:public

	  The CRS installation user (oracle) must be used for this command, otherwise you'll get the following errors

	    [karao@racnode1 bin]$ ./oifcfg delif -global eth0
	    PRIF-4: OCR error while deleting the configuration for the given interface

	    [karao@racnode1 bin]$ ./oifcfg setif -global eth0/172.168.203.0:public
	    PROC-5: User does not have permission to perform a cluster registry operation on this key. Authentication error [User does not have permission to perform this operation] [0]
	    PRIF-11: cluster registry error

      d) Verify the change (both nodes)

	    [oracle@racnode1 ~]$ $ORA_CRS_HOME/bin/oifcfg getif 
	    eth0  172.168.203.0  global  public
	    eth1  10.10.10.0  global  cluster_interconnect

	    [oracle@racnode2 ~]$ $ORA_CRS_HOME/bin/oifcfg getif
	    eth0  172.168.203.0  global  public
	    eth1  10.10.10.0  global  cluster_interconnect
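Note that oifcfg takes the network address (172.168.203.0), not a host IP. If in doubt, the network address can be derived from any host IP and its netmask; a sketch using only shell arithmetic (the function name net_addr is my own):

```shell
# Hedged sketch: compute the subnet argument oifcfg expects by ANDing each
# octet of a host IP with the matching octet of the netmask.
net_addr() {
  # split "a.b.c.d e.f.g.h" into 8 positional parameters
  set -- $(printf '%s %s' "$1" "$2" | tr '.' ' ')
  echo "$(( $1 & $5 )).$(( $2 & $6 )).$(( $3 & $7 )).$(( $4 & $8 ))"
}
net_addr 172.168.203.11 255.255.255.0   # -> 172.168.203.0
```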


4) Modify the VIP address

      a) Verify current VIP (execute on racnode1)

	    [oracle@racnode1 ~]$ srvctl config nodeapps -n racnode1 -a 
	    VIP exists.: /racnode1-vip.us.oracle.com/192.168.203.111/255.255.255.0/eth0

	    [oracle@racnode1 ~]$ srvctl config nodeapps -n racnode2 -a 
	    VIP exists.: /racnode2-vip.us.oracle.com/192.168.203.112/255.255.255.0/eth0

	    Below is the summary of the output:
		  VIP Hostname is 'racnode1-vip.us.oracle.com'
		  VIP IP address is '192.168.203.111'
		  VIP subnet mask is '255.255.255.0'
		  Interface Name used by the VIP is called 'eth0'

      b) Verify that the VIP is no longer running by executing the 'ifconfig' (both nodes)

	    [oracle@racnode1 ~]$ /sbin/ifconfig 
	    [oracle@racnode2 ~]$ /sbin/ifconfig 

      c) Change the VIP (we modified the Public IP so we must change the VIP to the same subnet as well)
	Below are some notes to remember: 
	      # The root user should be used for this action, otherwise you'll get the error below
		  [oracle@racnode1 ~]$ srvctl modify nodeapps -n racnode1 -A 172.168.203.111/255.255.255.0/eth0
		  PRKO-2117 : This command should be executed as the system privilege user.
	      # The variable ORACLE_HOME must be initialised, otherwise you'll get the error below
		  ****ORACLE_HOME environment variable not set!
		      ORACLE_HOME should be set to the main
		      directory that contains Oracle products.
		      Set and export ORACLE_HOME, then re-run.

	    You can specify either an IP address or a hostname on the "srvctl" command. In my case, I want the output of the "srvctl config nodeapps -n racnode1 -a" 
	    command to show the VIP hostname (Option 1). Below are two ways to do it: 

	    First set ORACLE_HOME
		  [root@racnode1 ~]# export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1

	    Option 1 (execute on racnode1):

		  ** Modify /etc/hosts to contain new VIP IPs on both nodes
		  # Public VIP
		  172.168.203.111	racnode1-vip.us.oracle.com	racnode1-vip
		  172.168.203.112	racnode2-vip.us.oracle.com	racnode2-vip

		  [root@racnode1 ~]# /u01/app/oracle/product/10.2.0/db_1/bin/srvctl modify nodeapps -n racnode1 -A racnode1-vip.us.oracle.com/255.255.255.0/eth0

		  [oracle@racnode1 ~]$ srvctl config nodeapps -n racnode1 -a
		  VIP exists.: /racnode1-vip.us.oracle.com/172.168.203.111/255.255.255.0/eth0

		  [root@racnode1 ~]# /u01/app/oracle/product/10.2.0/db_1/bin/srvctl modify nodeapps -n racnode2 -A racnode2-vip.us.oracle.com/255.255.255.0/eth0

		  [oracle@racnode1 bin]$ srvctl config nodeapps -n racnode2 -a
		  VIP exists.: /racnode2-vip.us.oracle.com/172.168.203.112/255.255.255.0/eth0

	    Option 2 (execute on racnode1):

		  No modifications on /etc/hosts yet

		  [root@racnode1 ~]# /u01/app/oracle/product/10.2.0/db_1/bin/srvctl modify nodeapps -n racnode1 -A 172.168.203.111/255.255.255.0/eth0

		  [oracle@racnode1 ~]$ srvctl config nodeapps -n racnode1 -a
		  VIP exists.: /172.168.203.111/172.168.203.111/255.255.255.0/eth0

		  [root@racnode1 ~]# /u01/app/oracle/product/10.2.0/db_1/bin/srvctl modify nodeapps -n racnode2 -A 172.168.203.112/255.255.255.0/eth0

		  [oracle@racnode1 ~]$ srvctl config nodeapps -n racnode2 -a
		  VIP exists.: /172.168.203.112/172.168.203.112/255.255.255.0/eth0

      d) Verify the change (execute on racnode1)

	    [oracle@racnode1 bin]$ srvctl config nodeapps -n racnode1 -a
	    VIP exists.: /racnode1-vip.us.oracle.com/172.168.203.111/255.255.255.0/eth0

	    [oracle@racnode1 bin]$ srvctl config nodeapps -n racnode2 -a
	    VIP exists.: /racnode2-vip.us.oracle.com/172.168.203.112/255.255.255.0/eth0
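The Option 1 commands follow a fixed pattern per node, so they can be generated for review before root runs them. A sketch (the function name vip_modify_cmds is my own; the ORACLE_HOME path, domain, netmask, and interface are the ones from this scenario):

```shell
# Hedged sketch: print the "srvctl modify nodeapps" command (hostname form)
# for each node, assuming the <node>-vip naming convention used here.
vip_modify_cmds() {
  oh=$1; shift
  for n in "$@"; do
    echo "$oh/bin/srvctl modify nodeapps -n $n -A ${n}-vip.us.oracle.com/255.255.255.0/eth0"
  done
}
vip_modify_cmds /u01/app/oracle/product/10.2.0/db_1 racnode1 racnode2
```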


5) Shut down CRS (both nodes)

	    [root@racnode1 bin]# ./crsctl stop crs
	    Stopping resources.
	    Successfully stopped CRS resources 
	    Stopping CSSD.
	    Shutting down CSS daemon.
	    Shutdown request successfully issued.

	    [root@racnode2 bin]# ./crsctl stop crs
	    Stopping resources.
	    Successfully stopped CRS resources 
	    Stopping CSSD.
	    Shutting down CSS daemon.
	    Shutdown request successfully issued.


6) Modify the IP addresses at the OS level (/etc/hosts), in the Net Services files (tnsnames.ora, listener.ora), in OCFS2 (if available), etc. (both nodes)
   Back up the files first before modifying them
   This is when the network engineers can rewire the servers

      OS level: 
	    /etc/hosts
	    /etc/sysconfig/network
	    /etc/resolv.conf
	    /etc/sysconfig/network-scripts/ifcfg-eth0

      Net Services:
	    tnsnames.ora
	    listener.ora

      OCFS2 (change to the new IP):
	    /etc/ocfs2/cluster.conf

      NTP server address
	    /etc/ntp.conf
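Since the same old-subnet prefix appears across all of these files, the edits can be done with one sed substitution per file, after taking a backup copy. A sketch against a /tmp stand-in (on the real nodes, loop over the files listed above):

```shell
# Hedged sketch: replace the old subnet prefix with the new one in a config
# file. Dots are escaped so sed matches them literally, and a .bak copy is
# kept first. A temp-file + mv is used instead of "sed -i" for portability.
f=/tmp/hosts.demo
printf '192.168.203.11\tracnode1\n192.168.203.111\tracnode1-vip\n' > "$f"
cp "$f" "$f.bak"
sed 's/192\.168\.203\./172.168.203./g' "$f" > "$f.new" && mv "$f.new" "$f"
grep -c '172\.168\.203\.' "$f"   # both lines should now carry the new prefix
```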


7) Restart the servers, verify the RAC components

	    [oracle@racnode1 ~]$ crs_stat2
	    HA Resource                                   Target     State             
	    -----------                                   ------     -----             
	    ora.orcl.db                                   ONLINE     ONLINE on racnode1
	    ora.orcl.orcl1.inst                           ONLINE     ONLINE on racnode1
	    ora.orcl.orcl2.inst                           ONLINE     ONLINE on racnode2
	    ora.orcl.orcl_service.cs                      ONLINE     ONLINE on racnode1
	    ora.orcl.orcl_service.orcl1.srv               ONLINE     ONLINE on racnode1
	    ora.orcl.orcl_service.orcl2.srv               ONLINE     ONLINE on racnode2
	    ora.racnode1.ASM1.asm                         ONLINE     ONLINE on racnode1
	    ora.racnode1.LISTENER_RACNODE1.lsnr           ONLINE     ONLINE on racnode1
	    ora.racnode1.gsd                              ONLINE     ONLINE on racnode1
	    ora.racnode1.ons                              ONLINE     ONLINE on racnode1
	    ora.racnode1.vip                              ONLINE     ONLINE on racnode1
	    ora.racnode2.ASM2.asm                         ONLINE     ONLINE on racnode2
	    ora.racnode2.LISTENER_RACNODE2.lsnr           ONLINE     ONLINE on racnode2
	    ora.racnode2.gsd                              ONLINE     ONLINE on racnode2
	    ora.racnode2.ons                              ONLINE     ONLINE on racnode2
	    ora.racnode2.vip                              ONLINE     ONLINE on racnode2
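Rather than eyeballing the table, the verification can be scripted: fail if any resource row shows OFFLINE. A sketch shown against a canned sample (the function name all_online is my own; pipe real crs_stat output into it on the cluster):

```shell
# Hedged sketch: scan crs_stat-style output, skipping the two header lines,
# and exit non-zero if any resource's State column starts with OFFLINE.
all_online() {
  awk 'NR>2 && $3=="OFFLINE" {bad++} END {exit bad>0}'
}
printf '%s\n' \
  'HA Resource                Target     State' \
  '-----------                ------     -----' \
  'ora.orcl.db                ONLINE     ONLINE on racnode1' \
  | all_online && echo "all resources ONLINE"
```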


8) Application testing


-------------


Fallback procedure:

1) Shut down everything plus the CRS stack (execute on racnode1)

      a) Shutdown RAC components

	    [oracle@racnode1 ~]$ srvctl stop service -d orcl
	    [oracle@racnode1 ~]$ srvctl stop database -d orcl
	    [oracle@racnode1 ~]$ srvctl stop asm -n racnode1
	    [oracle@racnode1 ~]$ srvctl stop asm -n racnode2
	    [oracle@racnode1 ~]$ srvctl stop nodeapps -n racnode1
	    [oracle@racnode1 ~]$ srvctl stop nodeapps -n racnode2

	    [oracle@racnode1 ~]$ crs_stat2
	    HA Resource                                   Target     State             
	    -----------                                   ------     -----             
	    ora.orcl.db                                   OFFLINE    OFFLINE           
	    ora.orcl.orcl1.inst                           OFFLINE    OFFLINE           
	    ora.orcl.orcl2.inst                           OFFLINE    OFFLINE           
	    ora.orcl.orcl_service.cs                      OFFLINE    OFFLINE           
	    ora.orcl.orcl_service.orcl1.srv               OFFLINE    OFFLINE           
	    ora.orcl.orcl_service.orcl2.srv               OFFLINE    OFFLINE           
	    ora.racnode1.ASM1.asm                         OFFLINE    OFFLINE           
	    ora.racnode1.LISTENER_RACNODE1.lsnr           OFFLINE    OFFLINE           
	    ora.racnode1.gsd                              OFFLINE    OFFLINE           
	    ora.racnode1.ons                              OFFLINE    OFFLINE           
	    ora.racnode1.vip                              OFFLINE    OFFLINE           
	    ora.racnode2.ASM2.asm                         OFFLINE    OFFLINE           
	    ora.racnode2.LISTENER_RACNODE2.lsnr           OFFLINE    OFFLINE           
	    ora.racnode2.gsd                              OFFLINE    OFFLINE           
	    ora.racnode2.ons                              OFFLINE    OFFLINE           
	    ora.racnode2.vip                              OFFLINE    OFFLINE     

      b) Shut down CRS (both nodes)

	    [root@racnode1 bin]# ./crsctl stop crs
	    Stopping resources.
	    Successfully stopped CRS resources 
	    Stopping CSSD.
	    Shutting down CSS daemon.
	    Shutdown request successfully issued.

	    [root@racnode2 bin]# ./crsctl stop crs
	    Stopping resources.
	    Successfully stopped CRS resources 
	    Stopping CSSD.
	    Shutting down CSS daemon.
	    Shutdown request successfully issued.

2) Put back the OCR and Voting Disk using "dd" (execute on racnode1)
      
      a) Use "dd" to restore

	    [root@racnode1 ~]# dd if=/u03/flash_recovery_area/OCRFile_backup of=/u02/oradata/orcl/OCRFile
	    9640+0 records in
	    9640+0 records out

	    [root@racnode1 ~]# dd if=/u03/flash_recovery_area/CSSFile_backup of=/u02/oradata/orcl/CSSFile
	    20000+0 records in
	    20000+0 records out

      b) Change permissions and ownership

	    [root@racnode1 ~]# chown root:oinstall /u02/oradata/orcl/OCRFile
	    [root@racnode1 ~]# chown oracle:oinstall /u02/oradata/orcl/CSSFile
	    [root@racnode1 ~]# chmod 640 /u02/oradata/orcl/OCRFile
	    [root@racnode1 ~]# chmod 644 /u02/oradata/orcl/CSSFile
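The restored modes can be double-checked with stat before bringing CRS back up. A sketch against a /tmp stand-in ("stat -c" is GNU coreutils syntax, as on the Linux nodes here):

```shell
# Hedged sketch: confirm a restored file carries the intended octal mode.
f=/tmp/OCRFile.perm-demo
touch "$f"
chmod 640 "$f"          # the mode set on the OCR file above
stat -c '%a' "$f"       # prints the octal permission bits
```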

      c) Verify the restore. Notice that "oifcfg iflist" still outputs the 172.168.203 subnet; after reconfiguring the interfaces and restarting, it will output 192.168.203.0

	    [oracle@racnode1 ~]$ $ORA_CRS_HOME/bin/oifcfg getif
	    eth0  192.168.203.0  global  public
	    eth1  10.10.10.0  global  cluster_interconnect
	    [oracle@racnode1 ~]$ oifcfg iflist
	    eth0  172.168.203.0
	    eth1  10.10.10.0

	    [oracle@racnode2 ~]$ $ORA_CRS_HOME/bin/oifcfg getif
	    eth0  192.168.203.0  global  public
	    eth1  10.10.10.0  global  cluster_interconnect
	    [oracle@racnode2 ~]$ oifcfg iflist
	    eth0  172.168.203.0
	    eth1  10.10.10.0


3) Put back the OS-level files (/etc/hosts), the Net Services files (tnsnames.ora, listener.ora), OCFS2 (if available), etc. (both nodes)
   Also put back the old wiring configuration

      OS level: 
	    /etc/hosts
	    /etc/sysconfig/network
	    /etc/resolv.conf
	    /etc/sysconfig/network-scripts/ifcfg-eth0

      Net Services:
	    tnsnames.ora
	    listener.ora

      OCFS2 (change back to the old IP):
	    /etc/ocfs2/cluster.conf

      NTP server address
	    /etc/ntp.conf

4) Restart the server, check the CRS and RAC components

      a) Check the interfaces and VIP (both nodes)

	    [oracle@racnode1 ~]$ $ORA_CRS_HOME/bin/oifcfg getif
	    eth0  192.168.203.0  global  public
	    eth1  10.10.10.0  global  cluster_interconnect

	    [oracle@racnode1 ~]$ oifcfg iflist
	    eth0  192.168.203.0
	    eth1  10.10.10.0

	    [oracle@racnode1 ~]$ srvctl config nodeapps -n racnode1 -a
	    VIP exists.: /racnode1-vip.us.oracle.com/192.168.203.111/255.255.255.0/eth0

	    [oracle@racnode2 ~]$ $ORA_CRS_HOME/bin/oifcfg getif
	    eth0  192.168.203.0  global  public
	    eth1  10.10.10.0  global  cluster_interconnect

	    [oracle@racnode2 ~]$ oifcfg iflist
	    eth0  192.168.203.0
	    eth1  10.10.10.0

	    [oracle@racnode2 ~]$ srvctl config nodeapps -n racnode2 -a
	    VIP exists.: /racnode2-vip.us.oracle.com/192.168.203.112/255.255.255.0/eth0

      b) Check RAC components

	    [oracle@racnode1 ~]$ crs_stat2
	    HA Resource                                   Target     State             
	    -----------                                   ------     -----             
	    ora.orcl.db                                   ONLINE     ONLINE on racnode1
	    ora.orcl.orcl1.inst                           ONLINE     ONLINE on racnode1
	    ora.orcl.orcl2.inst                           ONLINE     ONLINE on racnode2
	    ora.orcl.orcl_service.cs                      ONLINE     ONLINE on racnode1
	    ora.orcl.orcl_service.orcl1.srv               ONLINE     ONLINE on racnode1
	    ora.orcl.orcl_service.orcl2.srv               ONLINE     ONLINE on racnode2
	    ora.racnode1.ASM1.asm                         ONLINE     ONLINE on racnode1
	    ora.racnode1.LISTENER_RACNODE1.lsnr           ONLINE     ONLINE on racnode1
	    ora.racnode1.gsd                              ONLINE     ONLINE on racnode1
	    ora.racnode1.ons                              ONLINE     ONLINE on racnode1
	    ora.racnode1.vip                              ONLINE     ONLINE on racnode1
	    ora.racnode2.ASM2.asm                         ONLINE     ONLINE on racnode2
	    ora.racnode2.LISTENER_RACNODE2.lsnr           ONLINE     ONLINE on racnode2
	    ora.racnode2.gsd                              ONLINE     ONLINE on racnode2
	    ora.racnode2.ons                              ONLINE     ONLINE on racnode2
	    ora.racnode2.vip                              ONLINE     ONLINE on racnode2















































root@karl:/home/karao/Documents/VirtualMachines/vmware-update-2.6.27-5.5.7-2# ./runme.pl 
Updating /usr/bin/vmware-config.pl ... already patched
Updating /usr/bin/vmware ... No patch needed/available
Updating /usr/bin/vmnet-bridge ... No patch needed/available
Updating /usr/lib/vmware/bin/vmware-vmx ... No patch needed/available
Updating /usr/lib/vmware/bin-debug/vmware-vmx ... No patch needed/available
VMware modules in "/usr/lib/vmware/modules/source" has been updated.

Before running VMware for the first time after update, you need to configure it 
for your running kernel by invoking the following command: 
"/usr/bin/vmware-config.pl". Do you want this script to invoke the command for 
you now? [yes] 

Making sure services for VMware Server are stopped.

Stopping VMware services:
   Virtual machine monitor                                             done
   Bridged networking on /dev/vmnet0                                   done
   DHCP server on /dev/vmnet1                                          done
   Host-only networking on /dev/vmnet1                                 done
   DHCP server on /dev/vmnet8                                          done
   NAT service on /dev/vmnet8                                          done
   Host-only networking on /dev/vmnet8                                 done
   Virtual ethernet                                                    done

Configuring fallback GTK+ 2.4 libraries.

In which directory do you want to install the mime type icons? 
[/usr/share/icons] 

What directory contains your desktop menu entry files? These files have a 
.desktop file extension. [/usr/share/applications] 

In which directory do you want to install the application's icon? 
[/usr/share/pixmaps] 

/usr/share/applications/vmware-server.desktop: warning: value "vmware-server.png" for key "Icon" in group "Desktop Entry" is an icon name with an extension, but there should be no extension as described in the Icon Theme Specification if the value is not an absolute path
/usr/share/applications/vmware-console-uri-handler.desktop: warning: value "vmware-server.png" for key "Icon" in group "Desktop Entry" is an icon name with an extension, but there should be no extension as described in the Icon Theme Specification if the value is not an absolute path
Trying to find a suitable vmmon module for your running kernel.

None of the pre-built vmmon modules for VMware Server is suitable for your 
running kernel.  Do you want this program to try to build the vmmon module for 
your system (you need to have a C compiler installed on your system)? [yes] 

Using compiler "/usr/bin/gcc". Use environment variable CC to override.

What is the location of the directory of C header files that match your running
kernel? [/lib/modules/2.6.27-11-generic/build/include] 

Extracting the sources of the vmmon module.

Building the vmmon module.

Building for VMware Server 1.0.0.
Using 2.6.x kernel build system.
make: Entering directory `/tmp/vmware-config0/vmmon-only'
make -C /lib/modules/2.6.27-11-generic/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. modules
make[1]: Entering directory `/usr/src/linux-headers-2.6.27-11-generic'
  CC [M]  /tmp/vmware-config0/vmmon-only/linux/driver.o
  CC [M]  /tmp/vmware-config0/vmmon-only/linux/driverLog.o
  CC [M]  /tmp/vmware-config0/vmmon-only/linux/hostif.o
/tmp/vmware-config0/vmmon-only/linux/hostif.c: In function ‘HostIF_SetFastClockRate’:
/tmp/vmware-config0/vmmon-only/linux/hostif.c:3441: warning: passing argument 2 of ‘send_sig’ discards qualifiers from pointer target type
  CC [M]  /tmp/vmware-config0/vmmon-only/common/comport.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/cpuid.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/hash.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/memtrack.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/phystrack.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/task.o
cc1plus: warning: command line option "-Werror-implicit-function-declaration" is valid for C/ObjC but not for C++
cc1plus: warning: command line option "-Wdeclaration-after-statement" is valid for C/ObjC but not for C++
cc1plus: warning: command line option "-Wno-pointer-sign" is valid for C/ObjC but not for C++
cc1plus: warning: command line option "-Wstrict-prototypes" is valid for Ada/C/ObjC but not for C++
In file included from /tmp/vmware-config0/vmmon-only/common/task.c:1195:
/tmp/vmware-config0/vmmon-only/common/task_compat.h: In function ‘void Task_Switch_V45(VMDriver*, Vcpuid)’:
/tmp/vmware-config0/vmmon-only/common/task_compat.h:2667: warning: ‘sysenterState.SysenterStateV45::validEIP’ may be used uninitialized in this function
/tmp/vmware-config0/vmmon-only/common/task_compat.h:2667: warning: ‘sysenterState.SysenterStateV45::cs’ may be used uninitialized in this function
/tmp/vmware-config0/vmmon-only/common/task_compat.h:2667: warning: ‘sysenterState.SysenterStateV45::rsp’ may be used uninitialized in this function
/tmp/vmware-config0/vmmon-only/common/task_compat.h:2667: warning: ‘sysenterState.SysenterStateV45::rip’ may be used uninitialized in this function
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciContext.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciDatagram.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciDriver.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciDs.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciGroup.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciHashtable.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciProcess.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciResource.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciSharedMem.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmx86.o
  CC [M]  /tmp/vmware-config0/vmmon-only/vmcore/compat.o
  CC [M]  /tmp/vmware-config0/vmmon-only/vmcore/moduleloop.o
  LD [M]  /tmp/vmware-config0/vmmon-only/vmmon.o
  Building modules, stage 2.
  MODPOST 1 modules
WARNING: modpost: module vmmon.ko uses symbol 'init_mm' marked UNUSED
  CC      /tmp/vmware-config0/vmmon-only/vmmon.mod.o
  LD [M]  /tmp/vmware-config0/vmmon-only/vmmon.ko
make[1]: Leaving directory `/usr/src/linux-headers-2.6.27-11-generic'
cp -f vmmon.ko ./../vmmon.o
make: Leaving directory `/tmp/vmware-config0/vmmon-only'
The module loads perfectly in the running kernel.

This program previously created the file /dev/vmmon, and was about to remove 
it.  Somebody else apparently did it already.

This program previously created the file /dev/parport0, and was about to remove
it.  Somebody else apparently did it already.

This program previously created the file /dev/parport1, and was about to remove
it.  Somebody else apparently did it already.

This program previously created the file /dev/parport2, and was about to remove
it.  Somebody else apparently did it already.

This program previously created the file /dev/parport3, and was about to remove
it.  Somebody else apparently did it already.

You have already setup networking.

Would you like to skip networking setup and keep your old settings as they are?
(yes/no) [yes] no

Do you want networking for your virtual machines? (yes/no/help) [yes] 

Would you prefer to modify your existing networking configuration using the 
wizard or the editor? (wizard/editor/help) [wizard] 

The following bridged networks have been defined:

. vmnet0 is bridged to eth0

Do you wish to configure another bridged network? (yes/no) [no] 

Do you want to be able to use NAT networking in your virtual machines? (yes/no)
[yes] 

Configuring a NAT network for vmnet8.

The NAT network is currently configured to use the private subnet 
192.168.203.0/255.255.255.0.  Do you want to keep these settings? [yes] no

Do you want this program to probe for an unused private subnet? (yes/no/help) 
[yes] no

What will be the IP address of your host on the private 
network? 172.168.203.0

What will be the netmask of your private network? 255.255.255.0

The following NAT networks have been defined:

. vmnet8 is a NAT network on private subnet 172.168.203.0.

Do you wish to configure another NAT network? (yes/no) [no] 

Do you want to be able to use host-only networking in your virtual machines? 
[yes]   

Configuring a host-only network for vmnet1.

The host-only network is currently configured to use the private subnet 
10.10.10.0/255.255.255.0.  Do you want to keep these settings? [yes] 

The following host-only networks have been defined:

. vmnet1 is a host-only network on private subnet 10.10.10.0.

Do you wish to configure another host-only network? (yes/no) [no] 

Extracting the sources of the vmnet module.

Building the vmnet module.

Building for VMware Server 1.0.0.
Using 2.6.x kernel build system.
make: Entering directory `/tmp/vmware-config0/vmnet-only'
make -C /lib/modules/2.6.27-11-generic/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. modules
make[1]: Entering directory `/usr/src/linux-headers-2.6.27-11-generic'
  CC [M]  /tmp/vmware-config0/vmnet-only/driver.o
  CC [M]  /tmp/vmware-config0/vmnet-only/hub.o
  CC [M]  /tmp/vmware-config0/vmnet-only/userif.o
  CC [M]  /tmp/vmware-config0/vmnet-only/netif.o
  CC [M]  /tmp/vmware-config0/vmnet-only/bridge.o
  CC [M]  /tmp/vmware-config0/vmnet-only/filter.o
  CC [M]  /tmp/vmware-config0/vmnet-only/procfs.o
  CC [M]  /tmp/vmware-config0/vmnet-only/smac_compat.o
  CC [M]  /tmp/vmware-config0/vmnet-only/smac_linux.x86_64.o
  LD [M]  /tmp/vmware-config0/vmnet-only/vmnet.o
  Building modules, stage 2.
  MODPOST 1 modules
WARNING: modpost: missing MODULE_LICENSE() in /tmp/vmware-config0/vmnet-only/vmnet.o
see include/linux/module.h for more information
  CC      /tmp/vmware-config0/vmnet-only/vmnet.mod.o
  LD [M]  /tmp/vmware-config0/vmnet-only/vmnet.ko
make[1]: Leaving directory `/usr/src/linux-headers-2.6.27-11-generic'
cp -f vmnet.ko ./../vmnet.o
make: Leaving directory `/tmp/vmware-config0/vmnet-only'
The module loads perfectly in the running kernel.

Please specify a port for remote console connections to use [902] 

 * Stopping internet superserver xinetd                                                                                                                   [ OK ] 
 * Starting internet superserver xinetd                                                                                                                   [ OK ] 
Configuring the VMware VmPerl Scripting API.

Building the VMware VmPerl Scripting API.

Using compiler "/usr/bin/gcc". Use environment variable CC to override.

Installing the VMware VmPerl Scripting API.

The installation of the VMware VmPerl Scripting API succeeded.

Do you want this program to set up permissions for your registered virtual 
machines?  This will be done by setting new permissions on all files found in 
the "/etc/vmware/vm-list" file. [no] 

Generating SSL Server Certificate

In which directory do you want to keep your virtual machine files? 
[/home/karao/Documents/VirtualMachines] 

Do you want to enter a serial number now? (yes/no/help) [no] 

Starting VMware services:
   Virtual machine monitor                                             done
   Virtual ethernet                                                    done
   Bridged networking on /dev/vmnet0                                   done
   Host-only networking on /dev/vmnet1 (background)                    done
   Host-only networking on /dev/vmnet8 (background)                    done
   NAT service on /dev/vmnet8                                          done
   Starting VMware virtual machines...                                 done

The configuration of VMware Server 1.0.8 build-126538 for Linux for this 
running kernel completed successfully.





---------------------------
Resources
---------------------------

Note 276434.1 Modifying the VIP or VIP Hostname of a 10g Oracle Clusterware Node
Note 283684.1 How to Change Interconnect/Public Interface IP Subnet in a 10g Cluster
Note 271121.1 - How to change VIP and VIP/Hostname in 10g
Bug: 4500688 - THE INTERFACE NAME SHOULD BE SPECIFY WHEN EXECUTING 'SRVCTL MODIFY NODEAPPS'

---------------

Oracle® Clusterware Administration and Deployment Guide 11g Release 1 (11.1)
2 Administering Oracle Clusterware

* Changing Network Addresses

----------------

http://forums.oracle.com/forums/thread.jspa?threadID=339447
http://surachartopun.com/2007/01/i-want-to-change-ip-address-on-oracle.html
http://www.ikickass.com/changeoracle10gracvip
http://orcl-experts.info/index.php?name=FAQ&id_cat=9
http://www.db-nemec.com/RAC_IP_Change.html

-----------------




put the NAT subnet back to 192.168.203.0


root@karl:/home/karao/Documents/VirtualMachines/vmware-update-2.6.27-5.5.7-2# ./runme.pl 
Updating /usr/bin/vmware-config.pl ... already patched
Updating /usr/bin/vmware ... No patch needed/available
Updating /usr/bin/vmnet-bridge ... No patch needed/available
Updating /usr/lib/vmware/bin/vmware-vmx ... No patch needed/available
Updating /usr/lib/vmware/bin-debug/vmware-vmx ... No patch needed/available
VMware modules in "/usr/lib/vmware/modules/source" has been updated.

Before running VMware for the first time after update, you need to configure it 
for your running kernel by invoking the following command: 
"/usr/bin/vmware-config.pl". Do you want this script to invoke the command for 
you now? [yes] 

Making sure services for VMware Server are stopped.

Stopping VMware services:
   Virtual machine monitor                                             done
   Bridged networking on /dev/vmnet0                                   done
   DHCP server on /dev/vmnet1                                          done
   Host-only networking on /dev/vmnet1                                 done
   DHCP server on /dev/vmnet8                                          done
   NAT service on /dev/vmnet8                                          done
   Host-only networking on /dev/vmnet8                                 done
   Virtual ethernet                                                    done

Configuring fallback GTK+ 2.4 libraries.

In which directory do you want to install the mime type icons? 
[/usr/share/icons] 

What directory contains your desktop menu entry files? These files have a 
.desktop file extension. [/usr/share/applications] 

In which directory do you want to install the application's icon? 
[/usr/share/pixmaps] 

/usr/share/applications/vmware-server.desktop: warning: value "vmware-server.png" for key "Icon" in group "Desktop Entry" is an icon name with an extension, but there should be no extension as described in the Icon Theme Specification if the value is not an absolute path
/usr/share/applications/vmware-console-uri-handler.desktop: warning: value "vmware-server.png" for key "Icon" in group "Desktop Entry" is an icon name with an extension, but there should be no extension as described in the Icon Theme Specification if the value is not an absolute path
Trying to find a suitable vmmon module for your running kernel.

None of the pre-built vmmon modules for VMware Server is suitable for your 
running kernel.  Do you want this program to try to build the vmmon module for 
your system (you need to have a C compiler installed on your system)? [yes] 

Using compiler "/usr/bin/gcc". Use environment variable CC to override.

What is the location of the directory of C header files that match your running
kernel? [/lib/modules/2.6.27-11-generic/build/include] 

Extracting the sources of the vmmon module.

Building the vmmon module.

Building for VMware Server 1.0.0.
Using 2.6.x kernel build system.
make: Entering directory `/tmp/vmware-config0/vmmon-only'
make -C /lib/modules/2.6.27-11-generic/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. modules
make[1]: Entering directory `/usr/src/linux-headers-2.6.27-11-generic'
  CC [M]  /tmp/vmware-config0/vmmon-only/linux/driver.o
  CC [M]  /tmp/vmware-config0/vmmon-only/linux/driverLog.o
  CC [M]  /tmp/vmware-config0/vmmon-only/linux/hostif.o
/tmp/vmware-config0/vmmon-only/linux/hostif.c: In function ‘HostIF_SetFastClockRate’:
/tmp/vmware-config0/vmmon-only/linux/hostif.c:3441: warning: passing argument 2 of ‘send_sig’ discards qualifiers from pointer target type
  CC [M]  /tmp/vmware-config0/vmmon-only/common/comport.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/cpuid.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/hash.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/memtrack.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/phystrack.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/task.o
cc1plus: warning: command line option "-Werror-implicit-function-declaration" is valid for C/ObjC but not for C++
cc1plus: warning: command line option "-Wdeclaration-after-statement" is valid for C/ObjC but not for C++
cc1plus: warning: command line option "-Wno-pointer-sign" is valid for C/ObjC but not for C++
cc1plus: warning: command line option "-Wstrict-prototypes" is valid for Ada/C/ObjC but not for C++
In file included from /tmp/vmware-config0/vmmon-only/common/task.c:1195:
/tmp/vmware-config0/vmmon-only/common/task_compat.h: In function ‘void Task_Switch_V45(VMDriver*, Vcpuid)’:
/tmp/vmware-config0/vmmon-only/common/task_compat.h:2667: warning: ‘sysenterState.SysenterStateV45::validEIP’ may be used uninitialized in this function
/tmp/vmware-config0/vmmon-only/common/task_compat.h:2667: warning: ‘sysenterState.SysenterStateV45::cs’ may be used uninitialized in this function
/tmp/vmware-config0/vmmon-only/common/task_compat.h:2667: warning: ‘sysenterState.SysenterStateV45::rsp’ may be used uninitialized in this function
/tmp/vmware-config0/vmmon-only/common/task_compat.h:2667: warning: ‘sysenterState.SysenterStateV45::rip’ may be used uninitialized in this function
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciContext.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciDatagram.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciDriver.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciDs.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciGroup.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciHashtable.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciProcess.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciResource.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmciSharedMem.o
  CC [M]  /tmp/vmware-config0/vmmon-only/common/vmx86.o
  CC [M]  /tmp/vmware-config0/vmmon-only/vmcore/compat.o
  CC [M]  /tmp/vmware-config0/vmmon-only/vmcore/moduleloop.o
  LD [M]  /tmp/vmware-config0/vmmon-only/vmmon.o
  Building modules, stage 2.
  MODPOST 1 modules
WARNING: modpost: module vmmon.ko uses symbol 'init_mm' marked UNUSED
  CC      /tmp/vmware-config0/vmmon-only/vmmon.mod.o
  LD [M]  /tmp/vmware-config0/vmmon-only/vmmon.ko
make[1]: Leaving directory `/usr/src/linux-headers-2.6.27-11-generic'
cp -f vmmon.ko ./../vmmon.o
make: Leaving directory `/tmp/vmware-config0/vmmon-only'
The module loads perfectly in the running kernel.

This program previously created the file /dev/vmmon, and was about to remove 
it.  Somebody else apparently did it already.

You have already setup networking.

Would you like to skip networking setup and keep your old settings as they are?
(yes/no) [yes] no

Do you want networking for your virtual machines? (yes/no/help) [yes] 

Would you prefer to modify your existing networking configuration using the 
wizard or the editor? (wizard/editor/help) [wizard] 

The following bridged networks have been defined:

. vmnet0 is bridged to eth0

Do you wish to configure another bridged network? (yes/no) [no] 

Do you want to be able to use NAT networking in your virtual machines? (yes/no)
[yes] 

Configuring a NAT network for vmnet8.

The NAT network is currently configured to use the private subnet 
172.168.203.0/255.255.255.0.  Do you want to keep these settings? [yes] no

Do you want this program to probe for an unused private subnet? (yes/no/help) 
[yes] no   

What will be the IP address of your host on the private 
network? 192.168.203.0

What will be the netmask of your private network? 255.255.255.0

The following NAT networks have been defined:

. vmnet8 is a NAT network on private subnet 192.168.203.0.

Do you wish to configure another NAT network? (yes/no) [no] 

Do you want to be able to use host-only networking in your virtual machines? 
[yes] 

Configuring a host-only network for vmnet1.

The host-only network is currently configured to use the private subnet 
10.10.10.0/255.255.255.0.  Do you want to keep these settings? [yes] 

The following host-only networks have been defined:

. vmnet1 is a host-only network on private subnet 10.10.10.0.

Do you wish to configure another host-only network? (yes/no) [no] 

Extracting the sources of the vmnet module.

Building the vmnet module.

Building for VMware Server 1.0.0.
Using 2.6.x kernel build system.
make: Entering directory `/tmp/vmware-config0/vmnet-only'
make -C /lib/modules/2.6.27-11-generic/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. modules
make[1]: Entering directory `/usr/src/linux-headers-2.6.27-11-generic'
  CC [M]  /tmp/vmware-config0/vmnet-only/driver.o
  CC [M]  /tmp/vmware-config0/vmnet-only/hub.o
  CC [M]  /tmp/vmware-config0/vmnet-only/userif.o
  CC [M]  /tmp/vmware-config0/vmnet-only/netif.o
  CC [M]  /tmp/vmware-config0/vmnet-only/bridge.o
  CC [M]  /tmp/vmware-config0/vmnet-only/filter.o
  CC [M]  /tmp/vmware-config0/vmnet-only/procfs.o
  CC [M]  /tmp/vmware-config0/vmnet-only/smac_compat.o
  CC [M]  /tmp/vmware-config0/vmnet-only/smac_linux.x86_64.o
  LD [M]  /tmp/vmware-config0/vmnet-only/vmnet.o
  Building modules, stage 2.
  MODPOST 1 modules
WARNING: modpost: missing MODULE_LICENSE() in /tmp/vmware-config0/vmnet-only/vmnet.o
see include/linux/module.h for more information
  CC      /tmp/vmware-config0/vmnet-only/vmnet.mod.o
  LD [M]  /tmp/vmware-config0/vmnet-only/vmnet.ko
make[1]: Leaving directory `/usr/src/linux-headers-2.6.27-11-generic'
cp -f vmnet.ko ./../vmnet.o
make: Leaving directory `/tmp/vmware-config0/vmnet-only'
The module loads perfectly in the running kernel.

Please specify a port for remote console connections to use [902] 

 * Stopping internet superserver xinetd                                                                                               [ OK ] 
 * Starting internet superserver xinetd                                                                                               [ OK ] 
Configuring the VMware VmPerl Scripting API.

Building the VMware VmPerl Scripting API.

Using compiler "/usr/bin/gcc". Use environment variable CC to override.

Installing the VMware VmPerl Scripting API.

The installation of the VMware VmPerl Scripting API succeeded.

Do you want this program to set up permissions for your registered virtual 
machines?  This will be done by setting new permissions on all files found in 
the "/etc/vmware/vm-list" file. [no] 

Generating SSL Server Certificate

In which directory do you want to keep your virtual machine files? 
[/home/karao/Documents/VirtualMachines] 

Do you want to enter a serial number now? (yes/no/help) [no] 

Starting VMware services:
   Virtual machine monitor                                             done
   Virtual ethernet                                                    done
   Bridged networking on /dev/vmnet0                                   done
   Host-only networking on /dev/vmnet1 (background)                    done
   Host-only networking on /dev/vmnet8 (background)                    done
   NAT service on /dev/vmnet8                                          done
   Starting VMware virtual machines...                                 done

The configuration of VMware Server 1.0.8 build-126538 for Linux for this 
running kernel completed successfully.

root@karl:/home/karao/Documents/VirtualMachines/vmware-update-2.6.27-5.5.7-2# 



}}}

http://oraclue.com/2010/11/01/issue-with-oracle-11-2-0-2-new-redundant-interconnect/

Troubleshooting case study for 9i RAC ..PRKC-1021 : Problem in the clusterware https://blogs.oracle.com/gverma/entry/troubleshooting_case_study_for
Troubleshooting done to make root.sh work after a 10gR2 CRS (10.2.0.1) installation on HP-UX PA RISC 64-bit OS https://blogs.oracle.com/gverma/entry/troubleshooting_done_to_make_r
crsctl start crs does not work in 10gR2 https://blogs.oracle.com/gverma/entry/crsctl_start_crs_does_not_work
Considerations for virtual IP setup before doing the 10gR2 CRS install https://blogs.oracle.com/gverma/entry/considerations_for_virtual_ip
10gR2 CRS case study: CRS would not start after reboot - stuck at /etc/init.d/init.cssd startcheck https://blogs.oracle.com/gverma/entry/10gr2_crs_case_study_crs_would

CRS would not start on Exadata http://www.evernote.com/shard/s48/sh/f107ae7b-be88-44f4-8b18-dca7e9e7f1f6/2af0a58a0f24d726b8c7c15ff1e4cdc7
RAC Reference
http://morganslibrary.org/reference/rac.html

RAC Health Check
http://oraexplorer.com/2009/05/rac-assessment-from-oracle/
Oracle RACOne Node -- Changes in 11.2.0.2 [ID 1232802.1]
Administering Oracle RAC One Node http://docs.oracle.com/cd/E11882_01/rac.112/e16795/onenode.htm#BABGAJGH
Oracle RAC One Node http://docs.oracle.com/cd/E11882_01/server.112/e17157/unplanned.htm#BABICFCD
Using Oracle Universal Installer to Install Oracle RAC One Node http://docs.oracle.com/cd/E11882_01/install.112/e24660/racinstl.htm#CIHGGAAE

{{{
1) Verifying an existing Oracle RAC One Node Database
srvctl config database -d <db_name>
srvctl status database -d <db_name>
srvctl config database -d racone
srvctl status database -d racone


2) Performing an online migration
srvctl relocate database -d <db_unique_name> {[-n <target>] [-w <timeout>] | -a [-r]} [-v]
srvctl relocate database -d racone -n harac1 -w 15 -v

3) Converting an Oracle RAC One Node Database to Oracle RAC or vice versa
To convert a database from Oracle RAC One Node to Oracle RAC:
  srvctl convert database -d <db_unique_name> -c RAC [-n <node>]
  srvctl convert database -d racone -c RAC -n harac1
  Add more instances on other nodes as required:
  [oracle@harac2 bin]$ srvctl add instance -d racone -i racone_1 -n harac1
  [oracle@harac2 bin]$ srvctl add instance -d racone -i racone_3 -n lfmsx3

To convert a database from Oracle RAC to Oracle RAC One Node:
  During the RAC to RACOne conversion, please ensure that the additional instances are removed using DBCA before running the "srvctl convert database" command.
  srvctl convert database -d <db_unique_name> -c RACONENODE -i <inst prefix> -w <timeout>
  Eg:  srvctl convert database -d racone -c RACONENODE -w 30 -i racone


4) Upgrading an Oracle RAC One Node database from 11.2.0.1 to 11.2.0.2
see Oracle RACOne Node -- Changes in 11.2.0.2 [ID 1232802.1]
}}}


! some things you should know about instance relocation
* if you explicitly relocate an instance, the instance name changes from inst_1 to inst_2
* if the instance just suddenly shuts down, it stays inst_1: it was inst_1 on node1 and comes up as inst_1 on node2

behavior of relocation to sessions
* it's seamless
killing the pmon
* it will require relogin

and this behavior sucks because it will also mess up your DBFS mounting


! start stop rac one node
{{{
$ cat pmoncheck
dcli -l oracle -g /home/oracle/dbs_group ps -ef | grep pmon | grep -v grep | grep -v ASM


$ cat stopall.sh
srvctl stop listener

 srvctl stop database -d testdb
 srvctl stop database -d soltst
 srvctl stop database -d solprd
 srvctl stop database -d solnpi
 srvctl stop database -d soldev
 srvctl stop database -d reltst
 srvctl stop database -d relprd
 srvctl stop database -d reldev
 srvctl stop database -d jira
 srvctl stop database -d ifstst
 srvctl stop database -d ifsprd
 srvctl stop database -d ifsnpi
 srvctl stop database -d ifsdev
 srvctl stop database -d gl91
 srvctl stop database -d adminrep


$ cat startall.sh
 srvctl start listener

srvctl start database -d adminrep
srvctl start database -d reltst  -n haioda1
srvctl start database -d testdb  -n haioda1
srvctl start database -d gl91    -n haioda1
srvctl start database -d relprd  -n haioda1
srvctl start database -d solprd  -n haioda1
srvctl start database -d testdb2 -n haioda1
srvctl start database -d reldev  -n haioda1

srvctl start database -d soldev  -n haioda2
srvctl start database -d ifsprd  -n haioda2
srvctl start database -d ifsdev  -n haioda2
srvctl start database -d solnpi  -n haioda2
srvctl start database -d ifsnpi  -n haioda2
srvctl start database -d ifstst  -n haioda2
srvctl start database -d soltst  -n haioda2
srvctl start database -d jira    -n haioda2

}}}







https://blogs.oracle.com/XPSONHA/entry/installation_procedure_rac_nod
INSTANCE_GROUPS and PARALLEL_INSTANCE_GROUP

http://christianbilien.wordpress.com/2007/09/12/strategies-for-rac-inter-instance-parallelized-queries-part-12/
http://christianbilien.wordpress.com/2007/09/14/strategies-for-parallelized-queries-across-rac-instances-part-22/
http://www.oraclemagician.com/white_papers/par_groups.pdf

http://www.oracledatabase12g.com/archives/checklist-for-performance-problems-with-parallel-execution.html   <-- CHECKLIST!

http://satya-racdba.blogspot.com/2009/12/srvctl-commands.html
http://yong321.freeshell.org/oranotes/SingleClientAccessName.txt
http://www.mydbspace.com/?p=324 how scan works

{{{
[pd01db01:oracle:dbm1] /home/oracle
> srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node pd01db04
SCAN VIP scan2 is enabled
SCAN VIP scan2 is not running
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node pd01db01

[pd01db01:oracle:dbm1] /home/oracle
>

[pd01db01:oracle:dbm1] /home/oracle
>

[pd01db01:oracle:dbm1] /home/oracle
> srvctl start scan

[pd01db01:oracle:dbm1] /home/oracle
> srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node pd01db04
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node pd01db02
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node pd01db01
}}}



! connect to a specific instance 
{{{

# get the running SCAN listener on the node
ps -ef | grep -i listen 

# check listener IPs and registered PDBs
lsnrctl status
select name, guid, open_mode from v$pdbs

# validate IP of listener on that node 
ifconfig


# use host 

[oracle@ka-dbcs-rac1 ~]$ lsnrctl status | grep -i host
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.123.67.213)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.123.9.75)(PORT=1521)))

[oracle@ka-dbcs-rac1 ~]$ host 10.123.9.75
domain name pointer ka-dbcs-rac1
[oracle@ka-dbcs-rac1 ~]$ host 10.123.67.213 
domain name pointer ka-dbcs-rac1-vip


# for port forwarding use the VIP 
# on your vm do the following 
# 10.139.67.213 is the VIP 
# 123.123.12.92 is the RAC public IP

export ORACLE_HOME=/home/opc/instantclient_19_15
export SQLPATH=$ORACLE_HOME
export LD_LIBRARY_PATH=$ORACLE_HOME
export TNS_ADMIN=/home/opc
export PATH="$ORACLE_HOME:$PATH"
ssh -N -f -L localhost:1523:10.139.67.213:1521 oracle@123.123.12.92
sqlplus admin/e#1@localhost:1523/fa60a3e61.a.phx.oraclevcn.com


# use the IP on EZ connect
karao@YY.YY.YY.YY:1521/05327e6da9840a5ee06387c1d60a2bd9
}}}
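another option (just a sketch; the alias, host, and service names below are made up) is a dedicated tnsnames.ora entry that pins the connection to one instance via INSTANCE_NAME instead of chasing listener IPs:

{{{
# hypothetical tnsnames.ora entry -- node1-vip, oltp, and oltp1 are placeholders
oltp_inst1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = oltp)
      (INSTANCE_NAME = oltp1)
    )
  )
}}}

then connect with sqlplus user/pwd@oltp_inst1 -- the listener hands you the named instance even though the service runs on several nodes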






in a failover scenario you have to manually bring the SCAN back to the preferred node
https://twitter.com/martinberx/status/681445522050256896
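to move a SCAN VIP back without a full stop/start, srvctl also has a relocate command (the ordinal and node name below are just examples matching the earlier status output):

{{{
srvctl relocate scan -i 2 -n pd01db02
srvctl status scan
}}}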

<<showtoc>>


! the setup and initial test case
{{{

/u01/app/oracle/product/11.2.0.3/dbhome_1/

srvctl add service -d oltp -s oltp_srvc -r oltp1,oltp2
srvctl start service -d oltp -s oltp_srvc
srvctl stop service -d oltp -s oltp_srvc
srvctl remove service -d oltp -s oltp_srvc

### disable route 
srvctl disable service -d oltp -s oltp_srvc -i oltp2      <-- this wont stop the service, you have to manually stop it
srvctl stop service -d oltp -s oltp_srvc -i oltp2                 

srvctl enable service -d oltp -s oltp_srvc -i oltp1,oltp2       <-- this wont start the service, you have to manually start it
srvctl start service -d oltp -s oltp_srvc                 <-- this will start the service

### modify route 
srvctl modify service -d oltp -s oltp_srvc -n -i oltp1 -a oltp2     <-- this sets the preferred and available instances without stopping, removes the oltp2 on crsctl

srvctl modify service -d oltp -s oltp_srvc -n -i oltp1,oltp2      <-- adds the oltp2 back but not started
srvctl start service -d oltp -s oltp_srvc                       <-- have to start manually after 



------------------------------------------------------------------------------------------------------------------------

1) the current state

    $ srvctl config service -d oltp
    Service name: oltp_srvc
    Service is enabled
    Server pool: oltp_oltp_srvc
    Cardinality: 1
    Disconnect: false
    Service role: PRIMARY
    Management policy: AUTOMATIC
    DTP transaction: false
    AQ HA notifications: false
    Failover type: NONE
    Failover method: NONE
    TAF failover retries: 0
    TAF failover delay: 0
    Connection Load Balancing Goal: LONG
    Runtime Load Balancing Goal: NONE
    TAF policy specification: NONE
    Edition:
    Preferred instances: oltp1
    Available instances:

    $ crsctl stat res -t | less
    ora.oltp.db
          1        ONLINE  ONLINE       enkx3db01                Open,STABLE
          2        ONLINE  ONLINE       enkx3db02                Open,STABLE
    ora.oltp.oltp_srvc.svc
          1        ONLINE  ONLINE       enkx3db01                STABLE

2) Add the 2nd instance to the service
* this adds the service to the config as available node 
* also, if you try executing "srvctl start service" it will error with PRCC-1014 : oltp_srvc was already running, because the preferred instance is already running and the available instance won't kick in unless the preferred instances are gone or it's modified to be a preferred instance

    srvctl modify service -d oltp -s oltp_srvc -n -i oltp1 -a oltp2 

    $ srvctl config service -d oltp
    Service name: oltp_srvc
    Service is enabled
    Server pool: oltp_oltp_srvc
    Cardinality: 1
    Disconnect: false
    Service role: PRIMARY
    Management policy: AUTOMATIC
    DTP transaction: false
    AQ HA notifications: false
    Failover type: NONE
    Failover method: NONE
    TAF failover retries: 0
    TAF failover delay: 0
    Connection Load Balancing Goal: LONG
    Runtime Load Balancing Goal: NONE
    TAF policy specification: NONE
    Edition:
    Preferred instances: oltp1
    Available instances: oltp2

    ora.oltp.db
          1        ONLINE  ONLINE       enkx3db01                Open,STABLE
          2        ONLINE  ONLINE       enkx3db02                Open,STABLE
    ora.oltp.oltp_srvc.svc
          1        ONLINE  ONLINE       enkx3db01                STABLE

3) Expand the resources 

    srvctl modify service -d oltp -s oltp_srvc -n -i oltp1,oltp2      <-- adds the oltp2 back but not started

        $ srvctl config service -d oltp
        Service name: oltp_srvc
        Service is enabled
        Server pool: oltp_oltp_srvc
        Cardinality: 2
        Disconnect: false
        Service role: PRIMARY
        Management policy: AUTOMATIC
        DTP transaction: false
        AQ HA notifications: false
        Failover type: NONE
        Failover method: NONE
        TAF failover retries: 0
        TAF failover delay: 0
        Connection Load Balancing Goal: LONG
        Runtime Load Balancing Goal: NONE
        TAF policy specification: NONE
        Edition:
        Preferred instances: oltp1,oltp2
        Available instances:

        ora.oltp.db
              1        ONLINE  ONLINE       enkx3db01                Open,STABLE
              2        ONLINE  ONLINE       enkx3db02                Open,STABLE
        ora.oltp.oltp_srvc.svc
              1        ONLINE  ONLINE       enkx3db01                STABLE
              2        OFFLINE OFFLINE                               STABLE

    srvctl start service -d oltp -s oltp_srvc                       <-- have to start manually after 

        Service name: oltp_srvc
        Service is enabled
        Server pool: oltp_oltp_srvc
        Cardinality: 2
        Disconnect: false
        Service role: PRIMARY
        Management policy: AUTOMATIC
        DTP transaction: false
        AQ HA notifications: false
        Failover type: NONE
        Failover method: NONE
        TAF failover retries: 0
        TAF failover delay: 0
        Connection Load Balancing Goal: LONG
        Runtime Load Balancing Goal: NONE
        TAF policy specification: NONE
        Edition:
        Preferred instances: oltp1,oltp2
        Available instances:

        ora.oltp.db
              1        ONLINE  ONLINE       enkx3db01                Open,STABLE
              2        ONLINE  ONLINE       enkx3db02                Open,STABLE
        ora.oltp.oltp_srvc.svc
              1        ONLINE  ONLINE       enkx3db01                STABLE
              2        ONLINE  ONLINE       enkx3db02                STABLE

4) Reduce the resources

    srvctl modify service -d oltp -s oltp_srvc -n -i oltp1 -a oltp2

    $ srvctl config service -d oltp
    Service name: oltp_srvc
    Service is enabled
    Server pool: oltp_oltp_srvc
    Cardinality: 1
    Disconnect: false
    Service role: PRIMARY
    Management policy: AUTOMATIC
    DTP transaction: false
    AQ HA notifications: false
    Failover type: NONE
    Failover method: NONE
    TAF failover retries: 0
    TAF failover delay: 0
    Connection Load Balancing Goal: LONG
    Runtime Load Balancing Goal: NONE
    TAF policy specification: NONE
    Edition:
    Preferred instances: oltp1
    Available instances: oltp2

    ora.oltp.db
          1        ONLINE  ONLINE       enkx3db01                Open,STABLE
          2        ONLINE  ONLINE       enkx3db02                Open,STABLE
    ora.oltp.oltp_srvc.svc
          1        ONLINE  ONLINE       enkx3db01                STABLE

  5) Create two dbms_scheduler jobs for Expand and Reduce of nodes 

  --Expand job
    srvctl modify service -d oltp -s oltp_srvc -n -i oltp1,oltp2      <-- adds the oltp2 back but not started
    srvctl start service -d oltp -s oltp_srvc                       <-- have to start manually after 

  --Reduce job 
    srvctl modify service -d oltp -s oltp_srvc -n -i oltp1 -a oltp2

}}}

! scheduling/automating it 
!! cron job 
{{{
vi expand.sh 
#!/bin/bash
srvctl modify service -d oltp -s oltp_srvc -n -i oltp1,oltp2
srvctl start service -d oltp -s oltp_srvc


vi reduce.sh 
#!/bin/bash
srvctl modify service -d oltp -s oltp_srvc -n -i oltp1 -a oltp2
}}}
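the two scripts above could then be wired into cron for the oracle user, e.g. expand during business hours and reduce at night (the schedule and log paths here are just illustrative):

{{{
# crontab -e as oracle (illustrative schedule: weekdays 07:00 expand, 19:00 reduce)
00 07 * * 1-5 /home/oracle/dba/karao/scripts/expand.sh > /tmp/expand.log 2>&1
00 19 * * 1-5 /home/oracle/dba/karao/scripts/reduce.sh > /tmp/reduce.log 2>&1
}}}

note the scripts would need the oracle environment (ORACLE_HOME, PATH to srvctl) set inside them, since cron runs with a minimal environment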

!! dbms_scheduler - doesn't seem to work, seems to have some environment issues
{{{

begin
dbms_scheduler.create_credential(
 credential_name => '"SYSTEM"."ORACLE_CRED"',
  username => 'oracle',
  password => 'enk1tec');
 end;
 /

SELECT u.name CREDENTIAL_OWNER, O.NAME CREDENTIAL_NAME, C.USERNAME,
  DBMS_ISCHED.GET_CREDENTIAL_PASSWORD(O.NAME, u.name) pwd
FROM SYS.SCHEDULER$_CREDENTIAL C, SYS.OBJ$ O, SYS.USER$ U
WHERE U.USER# = O.OWNER#
  AND C.OBJ#  = O.OBJ# ;

begin
DBMS_SCHEDULER.create_job (
job_name => '"SYSTEM"."EXPAND_SHRINK_SERVICE"',
JOB_TYPE => 'EXECUTABLE',
JOB_ACTION => '/home/oracle/dba/karao/scripts/expand.sh',
repeat_interval => 'FREQ=MINUTELY;BYSECOND=0',
start_date => SYSTIMESTAMP,
number_of_arguments => 0
);

dbms_scheduler.set_attribute('"SYSTEM"."EXPAND_SHRINK_SERVICE"','credential_name','"SYSTEM"."ORACLE_CRED"');

dbms_Scheduler.enable('"SYSTEM"."EXPAND_SHRINK_SERVICE"');
END;
/

exec dbms_scheduler.run_job('"SYSTEM"."EXPAND_SHRINK_SERVICE"',FALSE);


select additional_info from dba_scheduler_job_run_details where job_name like '%EXPAND_SHRINK_SERVICE%';



BEGIN
    SYS.DBMS_SCHEDULER.DROP_JOB(job_name => '"SYSTEM"."EXPAND_SHRINK_SERVICE"',
                                defer => false,
                                force => true);
END;
/

}}}








http://allthingsoracle.com/an-introduction-to-11-2-rac-server-pools/
OBE Grid list http://apex.oracle.com/pls/apex/f?p=44785:2:0:FORCE_QUERY::2,CIR,RIR:P2_PRODUCT_ID,P2_RELEASE_ID:2011,71
OBE - policy managed databases http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/11g/r2/grid_rac/10_rac_dbca/rac_dbca_viewlet_swf.html
Server Pool experiments in RAC 11.2 http://martincarstenbach.wordpress.com/2010/02/12/server-pool-experiments-in-rac-11-2/
Policy managed databases http://martincarstenbach.wordpress.com/2010/01/26/policy-managed-databases/

http://cgswong.blogspot.com/2010/12/oracle11gr2-rac-faq.html



-- server pool
http://oracleinaction.com/server-pools/
http://www.hhutzler.de/blog/managing-server-pools/
policyset https://docs.oracle.com/database/121/CWADD/pbmgmt.htm#CWADD92636
policyset example http://blog.dbi-services.com/oracle-policy-managed-databases-policies-and-policy-sets/



How To Test Application Continuity Using A Standalone Java Program (Doc ID 1602233.1)
TAF ENABLED GLOBAL SERVICE in GDS ENVIRONMNET 12C. (Doc ID 2283193.1)
Application Continuity Throws Exception No more data to read from socket For Commits After Failover (Doc ID 2197029.1)



{{{
How To Test Application Continuity Using A Standalone Java Program (Doc ID 1602233.1)

In this Document
	Goal
	Solution

Applies to:
JDBC - Version 12.1.0.1.0 and later
Information in this document applies to any platform.
Goal

 The document provides step-by-step instructions and a simple standalone Java Program which can be used to test the Application Continuity feature in the 12c JDBC driver.
Solution

 Application Continuity in the 12c JDBC driver can be tested with a simple standalone java program and a RAC 12C Database Cluster.

 

Step1 -  Creating a Database Service for Application Continuity on a RAC database

 

1) Add a service, say acservice, using srvctl on both instances of a two node RAC cluster, orcl1 and orcl2.
srvctl add service -s acservice -d orcl -r orcl1,orcl2

 

2) Start up the Service
 srvctl start service -s acservice -d orcl

 

3) Enable the service for Application Continuity
srvctl modify service -d ORCL -s acservice -failovertype TRANSACTION -replay_init_time 300 -failoverretry 30 -failoverdelay 3 -notification TRUE -commit_outcome TRUE

 
If the service is not AC-enabled, the exception java.lang.ClassCastException: oracle.jdbc.driver.T4CConnection cannot be cast to oracle.jdbc.replay.ReplayableConnection would be generated.
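As a quick sanity check before running the Java test, the AC attributes can be confirmed from `srvctl config service` output. The sketch below works on a captured sample rather than a live cluster; the sample text, exact attribute labels (which vary between 12c releases), and the `is_ac_enabled` helper are illustrative assumptions, not part of the MOS note:

```shell
#!/bin/bash
# Hypothetical sample of `srvctl config service -d orcl -s acservice` output;
# real labels vary between 12c releases.
sample='Service name: acservice
Failover type: TRANSACTION
Commit Outcome: true
Replay Initiation Time: 300'

# A service is ready for Application Continuity replay when the failover
# type is TRANSACTION and commit outcome tracking is enabled.
is_ac_enabled() {
  printf '%s\n' "$1" | grep -q 'Failover type: TRANSACTION' &&
  printf '%s\n' "$1" | grep -q 'Commit Outcome: true'
}

if is_ac_enabled "$sample"; then
  echo "acservice is AC-enabled"
fi
```

On a live cluster you would feed the real `srvctl config service -d orcl -s acservice` output into the same check.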


Step 2 - Running the Sample AcTest.java Program

 

1) Create and run the AcTest.java program, provided below, with the 12.1.0.1 JDBC driver. Ensure the source file is named AcTest.java (note the "A" and "T" in uppercase).
If the connection is successful, the following output is seen:
You are Connected to RAC Instance -  ORCL1

 

2) When the program pauses, shut down ORCL1 RAC instance

   
srvctl stop instance -i ORCL1 -d ORCL

 

3) If the replay is successful, the connection will be re-established on the ORCL2 RAC instance

  
After Replay Connected to RAC Instance - ORCL2

 

AcTest.java

========

import java.sql.*;
import oracle.jdbc.*;

public class AcTest
{
  public static void main(String[] args) throws SQLException, InterruptedException
  {
    oracle.jdbc.replay.OracleDataSource acDataSource =
        oracle.jdbc.replay.OracleDataSourceFactory.getOracleDataSource();
    acDataSource.setURL("jdbc:oracle:thin:@<HOST>:<PORT>/acservice");
    acDataSource.setUser("<USER>");
    acDataSource.setPassword("<PASSWORD>");

    Connection conn = acDataSource.getConnection();
    conn.setAutoCommit(false);

    // Show which instance we are connected to before the failover
    PreparedStatement stmt = conn.prepareStatement("select instance_name from v$instance");
    ResultSet rset = stmt.executeQuery();
    while (rset.next())
    {
      System.out.println("You are Connected to RAC Instance - " + rset.getString(1));
    }

    // Pause so the current instance can be shut down with srvctl
    Thread.sleep(60000);

    // Mark the start of a replayable request
    ((oracle.jdbc.replay.ReplayableConnection) conn).beginRequest();

    PreparedStatement stmt1 = conn.prepareStatement("select instance_name from v$instance");
    ResultSet rset1 = stmt1.executeQuery();
    while (rset1.next())
    {
      System.out.println("After Replay Connected to RAC Instance - " + rset1.getString(1));
    }

    // End the request before closing the connection (not after, as in
    // some copies of this note -- endRequest on a closed connection fails)
    ((oracle.jdbc.replay.ReplayableConnection) conn).endRequest();

    rset.close();
    stmt.close();
    rset1.close();
    stmt1.close();
    conn.close();
  }
}

}}}
12c https://github.com/ardentperf/racattack-vagrantfile
11gR2 https://github.com/ardentperf/racattack
<<showtoc>>


! info
https://gerardnico.com/db/oracle/transaction_table
https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-enqueues#TOC-enq:-TX---allocate-ITL-entry
http://yong321.freeshell.org/computer/deadlocks.txt
https://antognini.ch/2013/05/itl-deadlocks-script/
https://karlarao.github.io/karlaraowiki/index.html#ITL


! related articles
Reading deadlock trace files
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1528515465282

INITRANS Cause of deadlock
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:6042872196732

DEADLOCK DETECTED - DELETE statement - how/why is it waiting in SHARE mode
https://community.oracle.com/thread/934554?start=15&tstart=0

https://www.google.com/search?q=Global+Enqueue+Services+Deadlock+detected+ITL&oq=Global+Enqueue+Services+Deadlock+detected+ITL&aqs=chrome..69i57j33l2.1528j0j0&sourceid=chrome&ie=UTF-8
https://www.google.com/search?q=oracle+delete+deadlock+ITL&oq=oracle+delete+deadlock+ITL&aqs=chrome..69i57j69i64l3.6839j1j0&sourceid=chrome&ie=UTF-8
https://www.google.com/search?q=oracle+Global+Enqueue+Services+Deadlock+detected&oq=oracle+Global+Enqueue+Services+Deadlock+detected&aqs=chrome..69i57j69i64l3.1124j0j0&sourceid=chrome&ie=UTF-8

http://arup.blogspot.com/2011/01/more-on-interested-transaction-lists.html

https://asktom.oracle.com/pls/asktom/asktom.search?tag=deadlock-enq-tx-allocate-itl-entry

http://russellcurtis.blogspot.com/2010/06/global-enqueue-services-deadlock.html

Global Enqueue Services Deadlock detected https://community.oracle.com/thread/3986541


! mos notes 
How to Diagnose Different ORA-00060 Deadlock Types Using Deadlock Graphs in Trace (Doc ID 1559695.1)
Troubleshooting "ORA-00060 Deadlock Detected" Errors (Doc ID 62365.1)
How to Identify ORA-00060 Deadlock Types Using Deadlock Graphs in Trace (Doc ID 1507093.1)
Top 5 Database and/or Instance Performance Issues in RAC Environment (Doc ID 1373500.1)
EM 12c, EM 13c: Using The Deadlock Parser Tool For Gathering EM Repository Deadlock Information (Doc ID 2222769.1)
Troubleshooting "Global Enqueue Services Deadlock detected" (Doc ID 1443482.1)









! MOS 
Troubleshooting "Global Enqueue Services Deadlock detected" (Doc ID 1443482.1)
{{{
1. TX deadlock in Exclusive(X) mode
2. TX deadlock in Share(S) mode
3. TM deadlock
4. Single resource deadlock for TX , TM, IV or LB
5. LB deadlock
6. Known Issues
7. Further Diagnosis
8. Deadlock Parser Tool (Enterprise Manager)
}}}
ORA-60 DEADLOCK DUE TO BITMAP INDEX IN RAC (Doc ID 1496403.1)
Global Enqueue Services Deadlock Detected During Table Statistics Gatherings and High Execution of Update sys.col_usage$ (Doc ID 2347644.1)
Goldengate Have A Lot Of "ORA-00060: Deadlock Detected" Errors In BODS(Exadata) Production Database (Doc ID 1634549.1)
Does Logminer Show All SQL Statements Involved In An ORA-60 Deadlock Error? (Doc ID 1108508.1)
https://www.hhutzler.de/blog/ges-locks-and-deadlocks/
https://recurrentnull.wordpress.com/2014/04/19/deadlock-parser-parsing-lmd0-trace-files/
https://jonathanlewis.wordpress.com/2013/02/22/deadlock-detection/
Bug 17165204 - Self deadlock while updating HCC compressed tables (Doc ID 17165204.8)
Global Enqueue Services Deadlock detected - Single resource deadlock: blocking enqueue which blocks itself, f 1 (Doc ID 973178.1)
deadlock on RAC https://community.oracle.com/thread/841950
https://oracle-base.com/articles/misc/deadlocks
https://groups.google.com/forum/#!topic/comp.databases.oracle.server/oemqz8ThbiU
http://olashowunmi.blogspot.com/2015/07/ora-00060-deadlock-detected-while.html


! articles 

!! ASH XID 
http://oraama.blogspot.com/2014/07/who-changed-data-in-table.html
	Bug 5998048 - Deadlock on COMMIT updating AUD$ / Performance degradation when FGA is enabled (Doc ID 5998048.8)
ges deadlock xid https://www.google.com/search?q=ges+deadlock+xid&oq=ges+deadlock+xid&aqs=chrome..69i57j33.2542j0j0&sourceid=chrome&ie=UTF-8
Global Enqueue Services Deadlock detected https://community.oracle.com/thread/2585677

!! enqueue mode
https://logicalread.com/diagnosing-oracle-wait-for-tx-enqueue-mode-6-mc01/#.XQ1ir2RKjOQ
https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-enqueues#TOC-enq:-TX---allocate-ITL-entry

!! KJUSERPR
Script to Collect RAC Diagnostic Information (racdiag.sql) (Doc ID 135714.1)
{{{
set numwidth 5
column state format a16 tru;
column event format a30 tru;
select dl.inst_id, s.sid, p.spid, dl.resource_name1,
decode(substr(dl.grant_level,1,8),'KJUSERNL','Null','KJUSERCR','Row-S (SS)',
'KJUSERCW','Row-X (SX)','KJUSERPR','Share','KJUSERPW','S/Row-X (SSX)',
'KJUSEREX','Exclusive',grant_level) as grant_level,
decode(substr(dl.request_level,1,8),'KJUSERNL','Null','KJUSERCR','Row-S (SS)',
'KJUSERCW','Row-X (SX)','KJUSERPR','Share','KJUSERPW','S/Row-X (SSX)',
'KJUSEREX','Exclusive',request_level) as request_level,
decode(substr(dl.state,1,8),'KJUSERGR','Granted','KJUSEROP','Opening',
'KJUSERCA','Canceling','KJUSERCV','Converting') as state,
s.sid, sw.event, sw.seconds_in_wait sec
from gv$ges_enqueue dl, gv$process p, gv$session s, gv$session_wait sw
where blocker = 1
and (dl.inst_id = p.inst_id and dl.pid = p.spid)
and (p.inst_id = s.inst_id and p.addr = s.paddr)
and (s.inst_id = sw.inst_id and s.sid = sw.sid)
order by sw.seconds_in_wait desc;
}}}
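The DECODE above translates raw KJUSER* GES level codes into readable lock-mode names. The same mapping as a tiny shell helper (the function name `ges_mode` is my own) can be handy when eyeballing raw gv$ges_enqueue or LMD trace output:

```shell
#!/bin/bash
# Map a KJUSER* GES level code to the lock-mode name used by racdiag.sql.
# Unknown codes are echoed back unchanged, like the DECODE default.
ges_mode() {
  case "${1:0:8}" in
    KJUSERNL) echo "Null" ;;
    KJUSERCR) echo "Row-S (SS)" ;;
    KJUSERCW) echo "Row-X (SX)" ;;
    KJUSERPR) echo "Share" ;;
    KJUSERPW) echo "S/Row-X (SSX)" ;;
    KJUSEREX) echo "Exclusive" ;;
    *)        echo "$1" ;;
  esac
}

ges_mode KJUSERPR   # prints: Share
ges_mode KJUSEREX   # prints: Exclusive
```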


!! select for update 
https://hoopercharles.wordpress.com/2011/11/21/select-for-update-in-what-order-are-the-rows-locked/
























-- job class
http://www.resolvinghere.com/sof/14337075.shtml
https://oracleexamples.wordpress.com/2009/05/03/run-jobs-in-a-particular-instance-using-services/
http://www.dba-oracle.com/job_scheduling/job_classes.htm
https://books.google.com/books?id=NaKoglGZoDwC&pg=PA259&lpg=PA259&dq=oracle+rac+job+class+tim+hall&source=bl&ots=9R2oiBljiw&sig=JERYlcNluxQ5YoTd7H0Nro5xczQ&hl=en&sa=X&ved=0ahUKEwij0Lygm-bLAhWCLB4KHba7AmUQ6AEILjAD#v=onepage&q=oracle%20rac%20job%20class%20tim%20hall&f=false
http://www.ritzyblogs.com/OraTalk/PostID/108/Using-FAN-callouts-relocate-a-service-back

automatic service relocation
http://www.oracledatabase12g.com/wp-content/uploads/2009/08/Session6.pdf
http://jarneil.wordpress.com/2010/11/05/11gr2-database-services-and-instance-shutdown/
https://forums.oracle.com/message/10479460
http://indico.cern.ch/getFile.py/access?resId=1&materialId=slides&confId=135581
http://bdrouvot.wordpress.com/2012/12/13/rac-one-node-avoid-automatic-database-relocation/
http://ilmarkerm.blogspot.com/2012/05/scipt-to-automatically-move-rac-11gr2.html   <-- this is the script
http://www.ritzyblogs.com/OraTalk/PostID/108/Using-FAN-callouts-relocate-a-service-back

{{{
#!/bin/bash
#
# GI callout script to catch INSTANCE up event from clusterware and relocate services to preferred instance
# Copy or symlink this script to $GRID_HOME/racg/usrco
# Tested on Oracle Linux 5.8 with 11.2.0.3 Oracle Grid Infrastructure and 11.2.0.2 & 11.2.0.3 Oracle Database Enterprise Edition
# 2012 Ilmar Kerm <ilmar.kerm@gmail.com>
#

LOGFILE=/u02/app/oracle/grid_callout/log.txt
SCRIPTDIR=`dirname $0`

# Determine grid home
if [[ "${SCRIPTDIR:(-11)}" == "/racg/usrco" ]]; then
  CRS_HOME="${SCRIPTDIR:0:$(( ${#SCRIPTDIR} - 11 ))}"
  export CRS_HOME
fi

# Only execute script for INSTANCE events
if [ "$1" != "INSTANCE" ]; then
  exit 0
fi

STATUS=""
DATABASE=""
INSTANCE=""

# Parse input arguments
args=("$@")
for arg in ${args[@]}; do
  if [[ "$arg" == *=* ]]; then
    KEY=${arg%=*}
    VALUE=${arg#*=}
    
    case "$KEY" in
      status)
        STATUS="$VALUE"
        ;;
      database)
        DATABASE="$VALUE"
        ;;
      instance)
        INSTANCE="$VALUE"
        ;;
    esac
    
  fi
done

# If database, status and instance values are not set, then exit
# status must be up
if [[ -z "$DATABASE" || -z "$INSTANCE" || "$STATUS" != "up" ]]; then
  exit 0
fi

echo "`date`" >> "$LOGFILE"
echo "[$DATABASE][`hostname`] Instance $INSTANCE up" >> "$LOGFILE"

#
# Read database software home directory from clusterware
#
DBCONFIG=`$CRS_HOME/bin/crsctl status res ora.$DATABASE.db -f | grep "ORACLE_HOME="`
if [ -z "$DBCONFIG" ]; then
  exit 0
fi
declare -r "$DBCONFIG"
echo "ORACLE_HOME=$ORACLE_HOME" >> "$LOGFILE"

# Array function
in_array() {
    local hay needle=$1
    shift
    for hay; do
        [[ $hay == $needle ]] && return 0
    done
    return 1
}

#
# Read information about services
#
for service in `$CRS_HOME/bin/crsctl status res | grep -E "ora\.$DATABASE\.(.+)\.svc" | sed -rne "s/NAME=ora\.$DATABASE\.(.+)\.svc/\1/gip"`; do
  SERVICECONFIG=`$ORACLE_HOME/bin/srvctl config service -d $DATABASE -s $service`
  echo "Service $service" >> "$LOGFILE"
  if [[ "$SERVICECONFIG" == *"Service is enabled"* ]]; then
    echo " enabled" >> "$LOGFILE"
    PREFERRED=( `echo "$SERVICECONFIG" | grep "Preferred instances:" | sed -rne "s/.*\: ([a-zA-Z0-9]+)/\1/p" | tr "," "\n"` )
    #
    # Check if current instance is preferred for this service
    #
    if in_array "$INSTANCE" "${PREFERRED[@]}" ; then
      echo " preferred" >> "$LOGFILE"
      #
      # Check if service is already running on current instance
      #
      SRVSTATUS=`$ORACLE_HOME/bin/srvctl status service -d $DATABASE -s $service`
      if [[ "$SRVSTATUS" == *"is not running"* ]]; then
          #
          # if service is not running, then start it
          #
        echo " service stopped, starting" >> "$LOGFILE"
        $ORACLE_HOME/bin/srvctl start service -d "$DATABASE" -s "$service" >> "$LOGFILE"
      else
        #
        # Service is running, but is it running on preferred instance?
        #
        RUNNING=( `echo "$SRVSTATUS" | sed -rne "s/.* ([a-zA-Z0-9]+)/\1/p" | tr "," "\n"` )
        #echo "${RUNNING[@]} = ${PREFERRED[@]}"
        if ! in_array "$INSTANCE" "${RUNNING[@]}" ; then
          echo " not running on preferred $INSTANCE" >> "$LOGFILE"
          #
          # Find the first non-preferred running instance
          #
          CURRENT=""
          for inst in "${RUNNING[@]}"; do
            if ! in_array "$inst" "${PREFERRED[@]}" ; then
              CURRENT="$inst"
              break
            fi
          done
          #
          # Relocate
          #
          if [[ -n "$CURRENT" ]]; then
            echo " relocate $CURRENT -> $INSTANCE" >> "$LOGFILE"
            $ORACLE_HOME/bin/srvctl relocate service -d "$DATABASE" -s "$service" -i "$CURRENT" -t "$INSTANCE" >> "$LOGFILE"
          fi
        else
          #
          # Service is already running on preferred instance, no need to do anything
          #
          echo " running on preferred $INSTANCE" >> "$LOGFILE"
        fi
      fi
    fi
  fi
done
}}}
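The argument handling in the callout above boils down to picking status/database/instance out of the KEY=VALUE pairs clusterware passes after the event type. A stripped-down, testable sketch of just that parsing (the function name is mine; variable names mirror the script above):

```shell
#!/bin/bash
# Parse clusterware callout arguments: the first argument is the event
# type (e.g. INSTANCE), the rest are KEY=VALUE pairs. Sets the globals
# STATUS / DATABASE / INSTANCE, as the relocation script above does.
parse_callout() {
  STATUS="" DATABASE="" INSTANCE=""
  for arg in "$@"; do
    case "$arg" in
      status=*)   STATUS="${arg#*=}" ;;
      database=*) DATABASE="${arg#*=}" ;;
      instance=*) INSTANCE="${arg#*=}" ;;
    esac
  done
}

parse_callout INSTANCE database=orcl instance=orcl1 status=up
echo "instance $INSTANCE of $DATABASE is $STATUS"
# prints: instance orcl1 of orcl is up
```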
http://www.freelists.org/post/oracle-l/monitor-rac-database-services,7   <-- nice scripts
http://coskan.wordpress.com/2010/12/29/how-to-monitor-services-on-11gr2/   <-- 11gR2
http://yong321.freeshell.org/oranotes/Service.txt  <-- 10gR2, 11gR1


''rac11gr2_mon.pl''  http://db.tt/nKIlmSlV

Oracle RAC Database aware Applications - A Developer’s Checklist  http://www.oracle.com/technetwork/database/availability/racdbawareapplications-1933522.pdf
Node Evictions on RAC, what to do and what to collect
http://www.linkedin.com/groupAnswers?viewQuestionAndAnswers=&discussionID=16757466&gid=2922607&trk=EML_anet_qa_cmnt-cDhOon0JumNFomgJt7dBpSBA

11.1 OCR Backup Management - Best Practice Advice?
http://www.linkedin.com/groupItem?view=&srchtype=discussedNews&gid=2922607&item=28217802&type=member&trk=EML_anet_qa_cmnt-cDhOon0JumNFomgJt7dBpSBA

Can we have VIP's on all public network interfaces with diff network masks.
http://www.linkedin.com/groupAnswers?viewQuestionAndAnswers=&discussionID=30266596&gid=2922607&trk=EML_anet_qa_cmnt-cDhOon0JumNFomgJt7dBpSBA

How does one ensure basic compliance with best practices for a Grid stack ?
http://www.linkedin.com/groupAnswers?viewQuestionAndAnswers=&discussionID=16817767&gid=2922607&trk=EML_anet_qa_cmnt-cDhOon0JumNFomgJt7dBpSBA
Oracle Support Master Note for Real Application Clusters (RAC), Oracle Clusterware and Oracle Grid Infrastructure (Doc ID 1096952.1)
11gR2 Clusterware and Grid Home - What You Need to Know (Doc ID 1053147.1)

RAC Assurance Support Team: RAC Starter Kit and Best Practices (Generic)
  	Doc ID: 	810394.1

RAC Assurance Support Team: RAC Starter Kit (Windows)
  	Doc ID: 	811271.1

http://www.oracle.com/technology/products/database/clustering/certify/tech_generic_linux_new.html

RAC: Frequently Asked Questions
  	Doc ID: 	Note:220970.1

Smooth the Transition to Real Application Clusters
  	Doc ID: 	Note:206037.1
  	
Step-By-Step Install of RAC with OCFS on Windows 2003 (9i)
  	Doc ID: 	Note:178882.1
  	
How To Check The Certification Matrix for Real Application Clusters
  	Doc ID: 	Note:184875.1 	



-- PLANNING

Smooth the Transition to Real Application Clusters
  	Doc ID: 	206037.1

RAC Assurance Support Team: RAC Starter Kit and Best Practices (Generic)
  	Doc ID: 	810394.1



-- SETUP GUIDES

 Metalink Note#: 178882.1    Step-By-Step Install of RAC with OCFS on Windows 2000
 Metalink Note#: 236155.1    Step-By-Step Install of RAC with RAW Datafiles on Windows 2000
 Metalink Note#: 254815.1    Step-By-Step Install of 9i RAC on Veritas DBE/AC and Solaris
 Metalink Note#: 247216.1    Step-By-Step Install of RAC on Fujitsu PrimePower with PrimeCluster
 Metalink Note#: 184821.1    Step-By-Step Install of 9.2.0.4 RAC on Linux
 Note 184821.1 		     Step-By-Step Installation of 9.2.0.5 RAC on Linux
 Metalink Note#: 182177.1    Step-By-Step Install of RAC on HP-UX
 Metalink Note#: 175480.1    Step-By-Step Install of RAC on HP Tru64 Unix Cluster
 Metalink Note#: 180012.1    Step-By-Step Install of RAC on HP OpenVMS Cluster
 Metalink Note#: 199457.1    Step-By-Step Install of RAC on IBM AIX (RS/6000)

Where to find Step-By-Step RAC setup guides:
RAC Step-By-Step Installation on IBM RS/6000 see Note 199457.1
RAC Step-By-Step Installation on LINUX see Note 184821.1
RAC Step-By-Step Installation on COMPAQ OPEN VMS see Note 180012.1
RAC Step-By-Step Installation on SUN CLUSTER V3 see Note 175465.1
RAC Step-By-Step Installation on WINDOWS 2000 or NT see Note 178882.1
RAC Step-By-Step Installation on HP TRU64 UNIX CLUSTER see Note 175480.1
RAC Step-By-Step Installation on HP-UX see Note 182177.1



-- GRID INFRASTRUCTURE

Oracle Support Master Note for Real Application Clusters (RAC), Oracle Clusterware and Oracle Grid Infrastructure (Doc ID 1096952.1)
11gR2 Clusterware and Grid Home - What You Need to Know (Doc ID 1053147.1)
11gR2 Install (Non-RAC): Understanding New Changes With All New 11.2 Installer [ID 884232.1]
11gR2 Clusterware and Grid Home - What You Need to Know [ID 1053147.1]



-- RAC ON WINDOWS
Oracle RAC Clusterware Installation on Windows Commonly Missed / Misunderstood Prerequisites (Doc ID 388730.1)



-- TROUBLESHOOTING

Remote Diagnostic Agent (RDA) 4 - RAC Cluster Guide (Doc ID 359395.1) - RDA RAC
CRS 10gR2/ 11gR1/ 11gR2 Diagnostic Collection Guide [ID 330358.1]

RAC Survival Kit: Troubleshooting a Hung Database
  	Doc ID: 	Note:206567.1

Data Gathering for Troubleshooting RAC Issues
  	Doc ID: 	Note:556679.1 	

RAC: Ave Receive Time for Current Block is Abnormally High in Statspack
  	Doc ID: 	243593.1

Doc ID: 563566.1 gc lost blocks diagnostics

POOR RAC-INTERCONNECT PERFORMANCE AFTER UPGRADE FROM RHEL3 TO RHEL4/OEL4
  	Doc ID: 	400959.1

EXCESSIVE GETS FOR SHARED POOL SIMULATOR LATCH causing hang/performance problem
  	Doc ID: 	563149.1

Rac Database Is Slow on Windows
  	Doc ID: 	271254.1

Note 213416.1 - RAC: Troubleshooting Windows NT/2000 Service Hangs

Intermittent high elapsed times reported on wait events in AMD-Based systems Or using NTP
  	Doc ID: 	828523.1

'Diag Dummy Wait' On Rac Instance
  	Doc ID: 	360815.1



-- CLUSTER HEALTH MONITOR

Introducing Cluster Health Monitor (IPD/OS) (Doc ID 736752.1)

How to Monitor, Detect and Analyze OS and RAC Resource Related Degradation and Failures on Windows
  	Doc ID: 	810915.1

How to install Oracle Cluster Health Monitor (former IPD/OS) on Windows
  	Doc ID: 	811151.1

How to Collect 'Cluster Health Monitor' (former IPD/OS) Data on Windows Platform for Oracle Support (Doc ID 847485.1)






-- COE TOOLS

Subject: 	Procwatcher: Script to Monitor and Examine Oracle and CRS Processes
  	Doc ID: 	Note:459694.1 	Type: 	BULLETIN




-- PERFORMANCE

Oracle RAC Tuning Tips by Joel Goodman
http://oukc.oracle.com/static05/opn/oracle9i_database/49466/040908_49466_source/index.htm

Understanding RAC Internals by Barb Lundhild
http://oukc.oracle.com/static05/opn/oracle9i_database/40168/053107_40168_source/index.htm

http://www.oracle.com/technology/tech/java/newsletter/articles/oc4j_data_sources/oc4j_ds.htm



-- MULTIPATHING 

Subject: 	Oracle ASM and Multi-Pathing Technologies
  	Doc ID: 	Note:294869.1 	Type: 	WHITE PAPER
  	Last Revision Date: 	17-JAN-2008 	Status: 	PUBLISHED




-- ebusiness suite

Configuring Oracle Applications Release 12 with 10g R2 RAC
  	Doc ID: 	Note:388577.1 	



-- AIX

Status of Certification of Oracle Clusterware with HACMP 5.3 & 5.4
  	Doc ID: 	Note:404474.1



-- ADD NODE

Adding a Node to a 10g RAC Cluster (10g R1)
  	Doc ID: 	Note:270512.1

Unable To Start Asm Instance After Adding Node To Rac Cluster
  	Doc ID: 	Note:399889.1
  	
  	


-- DELETE NODE

Removing a Node from a 10g RAC Cluster (only applicable to 10gR1)
  	Doc ID: 	Note:269320.1

		# On step B.7, removing nodeapps will not cleanly remove the VIP.
		# On step B.12, run it on all the remaining RAC nodes.


How To Remove a 10g RAC Node On Windows?
  	Doc ID: 	Note:603637.1



-- CLONE

Manually Cloning Oracle Applications Release 11i with 10g or 11g RAC
  	Doc ID: 	760637.1




-- OCR / VOTING DISK

How to recreate OCR/Voting disk accidentally deleted
  	Doc ID: 	Note:399482.1

How to move the OCR location?
      - Stop the CRS stack on all nodes using "init.crs stop"
      - Edit /var/opt/oracle/ocr.loc on all nodes and set ocrconfig_loc=<new OCR device>
      - Restore from one of the automatic physical backups using "ocrconfig -restore"
      - Run "ocrcheck" to verify
      - Reboot to restart the CRS stack
      - Additional information can be found at

How to Restore a Lost Voting Disk in 10g (Doc ID 279793.1)
OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE), including moving from RAW Devices to Block Devices. (Doc ID 428681.1)
RAC on Windows: How To Reinitialize the OCR and Vote Disk (without a full reinstall of Oracle Clusterware) [ID 557178.1]



-- REINSTALL
How to Reinstall CRS Without Disturbing Installed Oracle RDBMS Home(s) [ID 456021.1]
How To clean up after a Failed (or successful) Oracle Clusterware Installation on Windows [ID 341214.1]
RAC on Windows: How To Reinitialize the OCR and Vote Disk (without a full reinstall of Oracle Clusterware) [ID 557178.1]
WIN: Manually Removing all Oracle Components on Microsoft Windows Platforms [ID 124353.1]




-- CLUSTERWARE

Note 337737.1 Oracle Clusterware - ASM - Database Version Compatibility
Note 363254.1 Applying one-off Oracle Clusterware patches in a mixed version home environment

10g RAC: How to Clean Up After a Failed CRS Install
  	Doc ID: 	Note:239998.1

10g RAC: Troubleshooting CRS Root.sh Problems
  	Doc ID: 	Note:240001.1

Oracle Clusterware: Components installed.
  	Doc ID: 	556976.1



-- VIP

Oracle 10g VIP (Virtual IP) changes in Oracle 10g 10.1.0.4
  	Doc ID: 	Note:296878.1

How to Configure Virtual IPs for 10g RAC
  	Doc ID: 	Note:264847.1
  	
VIPCA cannot be run under RHEL/OEL 5
  	Doc ID: 	Note:577298.1
  	
Modifying the VIP or VIP Hostname of a 10g Oracle Clusterware Node
  	Doc ID: 	Note:276434.1
  	
Should the Database Instance Be Brought Down after VIP service crashes?
  	Doc ID: 	Note:391454.1
  	

-- SCAN 
How to Setup SCAN Listener and Client for TAF and Load Balancing [Video] (Doc ID 1188736.1)
11gR2 Grid Infrastructure Single Client Access Name (SCAN) Explained (Doc ID 887522.1)
http://oracle-dba-yi.blogspot.com/2011/04/11gr2-scan-faq.html


-- SCAN add another listener 
How to Configure A Second Listener on a Separate Network in 11.2 Grid Infrastructure [ID 1063571.1]


-- SCAN just started one listener - ADD SCAN LISTENER
How to start the SCAN listener on new 11Gr2 install?
http://kr.forums.oracle.com/forums/thread.jspa?threadID=1120482
How to add SCAN LISTENER in 11gR2 - http://learnwithme11g.wordpress.com/2010/09/03/how-to-add-scan-listener-in-11gr2-2/
How to update the IP address of the SCAN VIP resources (ora.scan<n>.vip) (Doc ID 952903.1)
How to Modify SCAN Setting or SCAN Listener Port after Installation (Doc ID 972500.1)


-- SCAN name resolution
PRVF-4664 PRVF-4657: Found inconsistent name resolution entries for SCAN name (Doc ID 887471.1)


-- SCAN reset to 1521 default port
WebLogic Server and Oracle 11gR2 JDBC Driver SCAN feature [ID 1304816.1]
How to integrate a 10g/11gR1 RAC database with 11gR2 clusterware (SCAN) [ID 1058646.1]
How to Configure A Second Listener on a Separate Network in 11.2 Grid Infrastructure [ID 1063571.1]
Changing Default Listener Port Number [ID 359277.1]
How to Create Multiple Oracle Listeners and Multiple Listener Addresses [ID 232010.1]
Listening Port numbers [ID 99721.1]
How to Modify SCAN Setting or SCAN Listener Port after Installation [ID 972500.1]
Using the TNS_ADMIN variable and changing the default port number of all Listeners in an 11.2 RAC for an 11.2, 11.1, and 10.2 Database [ID 1306927.1] <-- GOOD STUFF
How to update the IP address of the SCAN VIP resources (ora.scan.vip) [ID 952903.1]
How to Troubleshoot Connectivity Issue with 11gR2 SCAN Name [ID 975457.1]
ORA-12545 or ORA-12537 While Connecting to RAC through SCAN name [ID 970619.1]
Tracing Techniques for Listeners in 11.2 RAC Environments [ID 1325284.1]
SCAN Address Cannot Resolve Instance Name ORA-12521 [ID 1235773.1]
Top 5 Issues That Cause Troubles with Scan VIP and Listeners [ID 1373350.1]
ORA-12541 intermittently with DBLinks using SCAN listener [ID 1269630.1]
Remote Clients Receive ORA-12160 or ORA-12561 Errors Connecting To 11GR2 RAC Via SCAN Listeners [ID 1291985.1]
11gR2 Grid Infrastructure Single Client Access Name (SCAN) Explained [ID 887522.1]
Problem: RAC Metrics: Unable to get E-mail Notification for some metrics against Cluster Databases [ID 403886.1]
Thread: Multiple listener on RAC 11.2 -> https://forums.oracle.com/forums/thread.jspa?threadID=972062
https://sites.google.com/site/connectassysdba/oracle-rac-11-2-multiple-listener
http://myoracle4u.blogspot.com/2011/07/configure-scan-listener-in-11gr2-rac.html
How to Add SCAN Listener in 11gR2 RAC http://myoracle4u.blogspot.com/2011/07/configure-scan-listener-in-11gr2-rac.html


-- SCAN PERFORMANCE
Scan Listener, Queuesize, SDU, Ports [ID 1292915.1]
  	


-- JUMBO FRAMES

Recommendation for the Real Application Cluster Interconnect and Jumbo Frames
  	Doc ID: 	341788.1

Tuning Inter-Instance Performance in RAC and OPS
  	Doc ID: 	181489.1



-- RDS / INFINIBAND

Doc ID: 751343.1 RAC Support for RDS Over Infiniband 
Doc ID: 368464.1 How to Setup IPMP as Cluster Interconnect
Doc ID: 283107.1 Configuring Solaris IP Multipathing (IPMP) for the Oracle 10g VIP



  	
-- INTERCONNECT

How to Change Interconnect/Public Interface IP Subnet in a 10g Cluster
  	Doc ID: 	Note:283684.1

Recommendation for the Real Application Cluster Interconnect and Jumbo Frames
  	Doc ID: 	341788.1

Tuning Inter-Instance Performance in RAC and OPS
  	Doc ID: 	181489.1

How To Track Dead Connection Detection(DCD) Mechanism Without Enabling Any Client/Server Network Tracing
  	Doc ID: 	438923.1





-- CHANGE IP ADDRESS

How to Change Interconnect/Public Interface IP or Subnet in Oracle Clusterware
  	Doc ID: 	283684.1

Modifying the VIP or VIP Hostname of a 10g or 11g Oracle Clusterware Node
  	Doc ID: 	276434.1

Considerations when Changing the Database Server Name or IP
  	Doc ID: 	734559.1

Preparing For Changing the IP Addresses Of Oracle Database Servers
  	Doc ID: 	363609.1

Instance Not Coming Up On Second Node In RAC 'Timeout when connecting'
  	Doc ID: 	351914.1

Warning  Could Not Be Translated To A Network Address
  	Doc ID: 	464986.1

The Sqlnet Files That Need To Be Changed/Checked During Ip Address Change Of Database Server
  	Doc ID: 	274476.1

EMCA
http://download.oracle.com/docs/cd/B19306_01/em.102/b40002/structure.htm#sthref92

APPLICATION SERVER
http://download.oracle.com/docs/cd/B10464_05/core.904/b10376/host.htm#sthref513



-- CHANGE HOSTNAME

http://www.pythian.com/news/482/changing-hostnames-in-oracle-rac
RAC on Windows: Oracle Clusterware Services Do Not Start After Changing Username or Domain [ID 557273.1]





-- BONDING

Configuring Linux for the Oracle 10g VIP or private interconnect using bonding driver
  	Doc ID: 	298891.1

Setting Up Bonding in SLES 9
  	Doc ID: 	291962.1

Setting Up Bonding in Suse SLES8
  	Doc ID: 	291958.1







-- MIGRATION

Migrating to RAC using Data Guard
 	Doc ID:	Note:273015.1




-- CRS_STAT2 

CRS and 10g Real Application Clusters
 	Doc ID:	Note:259301.1
 	
WINDOWS CRS_STAT SCRIPT TO DISPLAY LONG NAMES CORRECTLY
 	Doc ID:	Note:436067.1



-- 


Bug 5128575 - RAC install of 10.2.0.2 does not update libknlopt.a on all nodes
 	Doc ID:	Note:5128575.8

TROUBLESHOOTING - ASM disk not found/visible/discovered issues
 	Doc ID:	Note:452770.1

Unable To Mount Or Drop A Diskgroup, Fails With Ora-15032 And Ora-15063
 	Doc ID:	Note:353423.1

ASM Diskgroup Failed to Mount On Second Node ORA-15063
 	Doc ID:	Note:731075.1

Diskgroup Was Not Mounted After Created ORA-15063 and ORA-15032
 	Doc ID:	Note:467702.1

Disk has been offline In Asm Diskgroup and has 2 entries in v$asm_disk
 	Doc ID:	Note:393958.1

Adding The Label To ASMLIB Disk Using 'oracleasm renamedisk' Command
 	Doc ID:	Note:280650.1

Ora-15063: Asm Discovered An Insufficient Number Of Disks For Diskgroup using NetApp Storage
 	Doc ID:	Note:577526.1

Cannot Start Asm Ora-15063/ORA-15183
 	Doc ID:	Note:340519.1

NEW CREATED DISKGROUP IS NOT VISIBLE ON SECOND NODE - USING NFS AND ASMLIB
 	Doc ID:	Note:372276.1



Cannot Find Exact Kernel Version Match For ASMLib (Workaround using oracleasm_debug_link tool)
 	Doc ID:	Note:462618.1

Heartbeat/Voting/Quorum Related Timeout Configuration for Linux, OCFS2, RAC Stack to avoid unnessary node fencing, panic and reboot
 	Doc ID:	Note:395878.1

Reconfiguring the CSS disktimeout of 10gR2 Clusterware for Proper LUN Failover of the Dell MD3000i iSCSI Storage
 	Doc ID:	Note:462616.1

10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout
 	Doc ID:	Note:284752.1

CSS Timeout Computation in Oracle Clusterware
 	Doc ID:	Note:294430.1

How to Increase CSS Misscount in single instance ASM installations
 	Doc ID:	Note:729878.1

Configuring raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5
 	Doc ID:	Note:564580.1

Steps to Create Test RAC Setup On Oracle VM
 	Doc ID:	Note:742603.1

Requirements For Installing Oracle 10gR2 On RHEL/OEL 5 (x86)
 	Doc ID:	Note:419646.1

Prerequisite Checks Fail When Installing 10.2 On Red Hat 5 (RHEL5)
 	Doc ID:	Note:456634.1

Additional steps to install 10gR2 RAC on IBM zSeries Based Linux (SLES10)
 	Doc ID:	Note:471165.1

10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA / SRVCTL / OUI Failures)
 	Doc ID:	Note:414163.1

Oracle Clusterware (formerly CRS) Rolling Upgrades
 	Doc ID:	Note:338706.1

10.2.0.X CRS Bundle Patch Information
 	Doc ID:	Note:405820.1
 	
 	
 	
-- CSS MISCOUNT

How to Increase CSS Misscount in single instance ASM installations
 	Doc ID:	Note:729878.1
 	
10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout
 	Doc ID:	Note:284752.1

Subject: 	CSS Timeout Computation in RAC 10g (10g Release 1 and 10g Release 2)
  	Doc ID: 	Note:294430.1




-- CONVERT SINGLE INSTANCE TO RAC

How to Convert 10g Single-Instance database to 10g RAC using Manual Conversion procedure
  	Doc ID: 	747457.1
How To Convert A Single Instance Database To RAC In A Cluster File System Configuration (Doc ID 208375.1)
http://avdeo.com/2010/02/22/converting-a-single-instance-database-to-rac-manually-oracle-rac-10g/
http://jaffardba.blogspot.com/2011/03/converting-your-single-instance.html
http://onlineappsdba.com/index.php/2009/06/24/single-instance-to-rac-conversion/
http://download.oracle.com/docs/cd/B28359_01/install.111/b28264/cvrt2rac.htm#BABBBDDB <-- 11.1
http://download.oracle.com/docs/cd/E11882_01/install.112/e17214/cvrt2rac.htm#BABBAHCH <-- 11.2



-- CONVERT TO SEPARATE NODES

Converting a RAC Environment to Separate Node Environments
  	Doc ID: 	377347.1



-- SHARED HOME
RAC: How To Move From Shared To Non-Shared Homes [ID 605640.1]




-- LOCAL, REMOTE LISTENER

How To Find Out The Example of The LOCAL_LISTENER and REMOTE_LISTENER Defined In The init.ora When configuring the 11i or R12 on RAC ?
  	Doc ID: 	744508.1

Check LOCAL_LISTENER if you run RAC!
http://tardate.blogspot.com/2007/06/check-locallistener-if-you-run-rac.html



-- RAC ASM

How to Convert a Single-Instance ASM to Cluster ASM
  	Doc ID: 	452758.1


-- CLEAN UP ASM INSTALL, UNINSTALL

How to cleanup ASM installation (RAC and Non-RAC)
  	Doc ID: 	311350.1



-- RMAN RAC backup

HowTo Restore RMAN Disk backups of RAC Database to Single Instance On Another Node
  	Doc ID: 	415579.1





-- TAF, FCF

How To Configure Server Side Transparent Application Failover [ID 460982.1]
How to Configure Client Side Transparent Application Failover with Preconnect Option [ID 802434.1]
Understanding Transparent Application Failover (TAF) and Fast Connection Failover (FCF) [ID 334471.1]



Fast Connection Failover (FCF) Test Client Using 11g JDBC Driver and 11g RAC Cluster
  	Doc ID: 	566573.1

Oracle 10g VIP (Virtual IP) changes in Oracle 10g 10.1.0.4
  	Doc ID: 	296878.1

Can the JDBC Thin Driver Do Failover by Specifying FAILOVER_MODE?
  	Doc ID: 	465423.1

Does JBOSS Support Fast Connection Failover (FCF) to a 10g RAC cluster?
  	Doc ID: 	738122.1

How To Verify And Test Fast Connection Failover (FCF) Setup From a JDBC Thin Client Against a 10.2.x RAC Cluster
  	Doc ID: 	433827.1

Failover Issues and Limitations [Connect-time failover and TAF]
  	Doc ID: 	97926.1

How To Use TAF With Instant Client
  	Doc ID: 	428515.1

Troubleshooting TAF Issues in 10g RAC
  	Doc ID: 	271297.1

What is the Overhead when using TAF Failover Select Type?
  	Doc ID: 	119537.1

Which Oracle Client versions will connect to and work against which version of the Oracle Database?
  	Doc ID: 	172179.1

Configuration of Load Balancing and Transparent Application Failover
  	Doc ID: 	226880.1

Oracle Net80 TAF Enabled Alias Fails With ORA-12197
  	Doc ID: 	284273.1 	Type: 	PROBLEM

Client Load Balancing and Failover Using Description and Address_List
  	Doc ID: 	69010.1

ADDRESS_LISTs and Oracle Net Failover
  	Doc ID: 	67136.1

Load Balancing and DESCRIPTION_LISTs
  	Doc ID: 	67137.1



-- USER EQUIVALENCE

How to Configure SSH for User Equivalence
  	Doc ID: 	372795.1

Configuring Ssh For Rac Installations
  	Doc ID: 	308898.1

How To Configure SSH for a RAC Installation
  	Doc ID: 	300548.1
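Condensed from the SSH notes above -- a minimal sketch of user equivalence setup; racnode1/racnode2 are placeholder hostnames:

```shell
# as the oracle user on each node: generate RSA and DSA keys
ssh-keygen -t rsa
ssh-keygen -t dsa

# gather every node's ~/.ssh/*.pub into one authorized_keys file,
# distribute it to ~/.ssh/authorized_keys on all nodes, then verify
# each hop works without a password prompt:
ssh racnode1 date
ssh racnode2 date
```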




-- NTP
Thread: PRVF-5424 : Clock time offset check failed
https://forums.oracle.com/forums/thread.jspa?threadID=2148827
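A common resolution for PRVF-5424 (a sketch, assuming Linux with /etc/sysconfig/ntpd): run ntpd with the -x slew option so CVU accepts the time sync setup, then re-verify:

```shell
# /etc/sysconfig/ntpd -- add the -x (slew) flag
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# restart ntpd and re-run the clock sync check
service ntpd restart
cluvfy comp clocksync -n all -verbose
```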






-- RAC MICROSOFT - BLUE SCREEN

Why do we get a Blue Screen Caused By Orafencedrv.sys
  	Doc ID: 	337784.1

ORACLE PROCESSES ENCOUNTERING (OS 1117) ERRORS ON WINDOWS 2003
  	Doc ID: 	444803.1

http://www.orafaq.com/forum/t/120114/2/

http://forums11.itrc.hp.com/service/forums/bizsupport/questionanswer.do?admit=109447626+1255362432406+28353475&threadId=999460



-- CRS REBOOTS, NODE EVICTION

http://www.rachelp.nl/index_kb.php?menu=articles&actie=show&id=25

Troubleshooting CRS Reboots
  	Doc ID: 	265769.1

Data Gathering for Troubleshooting RAC Issues
  	Doc ID: 	556679.1

CSS Timeout Computation in Oracle Clusterware
  	Doc ID: 	294430.1

Corrupt Packets on the Network causes CSS to REBOOT NODE
  	Doc ID: 	400778.1

10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout
  	Doc ID: 	284752.1

Using Diagwait as a diagnostic to get more information for diagnosing Oracle Clusterware Node evictions  <--- SETUP THIS ON ALL RAC ENV, REQUIRED!
  	Doc ID: 	559365.1
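Since this should be set up on all RAC environments anyway -- a sketch of the diagwait setup from Note 559365.1 (run as root with CRS stopped on all nodes; 13 is the value the note recommends):

```shell
$CRS_HOME/bin/crsctl set css diagwait 13 -force
$CRS_HOME/bin/crsctl get css diagwait
```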

Hangcheck-Timer Module Requirements for Oracle 9i, 10g, and 11g RAC on Linux
  	Doc ID: 	726833.1

Frequent Instance Eviction in 9i and/or Node Eviction in 10g
  	Doc ID: 	461662.1

ORA-27506 Results in ORA-29740 and Instance Evictions on Windows
  	Doc ID: 	342708.1

My references - a client running a 3-node RAC that had node evictions
{{{
Common reasons for OCFS2 o2net Idle Timeout (Doc ID 734085.1)	<-- cause of the restart
Troubleshooting 10g and 11.1 Clusterware Reboots (Doc ID 265769.1)	<-- if then else
OCFS2 Fencing, Network, and Disk Heartbeat Timeout Configuration (Doc ID 457423.1)
OCFS2 - FREQUENTLY ASKED QUESTIONS (Doc ID 391771.1)
Root.sh Unable To Start CRS On Second Node (Doc ID 369699.1)
Troubleshooting TAF Issues in 10g RAC (Doc ID 271297.1)
RAC instabilities due to firewall (netfilter/iptables) enabled on the cluster interconnect (Doc ID 554781.1)
Troubleshooting Oracle Clusterware Root.sh Problems (Doc ID 240001.1)
Corrupt Packets on the Network causes CSS to REBOOT NODE (Doc ID 400778.1)
Linux: RAC Instance Halts For Several Minutes When Rebooting Other Node (Doc ID 263477.1)
Irregular ClssnmPollingThread Missed Checkins Messages in CSSD log (Doc ID 372463.1)
Ocssd.Bin Process Consumes 100% Cpu (Doc ID 730148.1)
CRS DOES NOT STARTUP WITHIN 600 SECONDS AFTER 10.2.0.3 BUNDLE3 (Doc ID 744573.1)
Frequent Instance Eviction in 9i and/or Node Eviction in 10g (Doc ID 461662.1)
Resolving Instance Evictions on Windows Platforms (Doc ID 297498.1)
Using Diagwait as a diagnostic to get more information for diagnosing Oracle Clusterware Node evictions (Doc ID 559365.1)
CSS Timeout Computation in Oracle Clusterware (Doc ID 294430.1)
Node Eviction with IPCSOCK_SEND FAILED WITH STATUS: 10054 Errors (Doc ID 243547.1)
How to Collect 'Cluster Health Monitor' (former IPD/OS) Data on Windows Platform for Oracle Support (Doc ID 847485.1)
Linux: OCSSD Reboots Nodes Randomly After Application of 10.2.0.4 Patchset and in 11g Environments (Doc ID 731599.1)
10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout (Doc ID 284752.1)
Using Bonded Network Device Can Cause OCFS2 to Detect Network Outage (Doc ID 423183.1)
}}}




-- RAC SPFILE

Recreating the Spfile for RAC Instances Where the Spfile is Stored in ASM
  	Doc ID: 	554120.1
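A rough sketch of the round trip (the +DATA path and SID are placeholders -- follow Note 554120.1 for the real steps):

```shell
sqlplus / as sysdba <<EOF
CREATE PFILE='/tmp/initorcl1.ora' FROM SPFILE;
CREATE SPFILE='+DATA/orcl/spfileorcl.ora' FROM PFILE='/tmp/initorcl1.ora';
EOF
```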





-- RAC ORA-12545, ORA-12533, ORA-12514, Ora-12520

Troubleshooting ORA-12545 / TNS-12545 Connect failed because target host or object does not exist	<-- good stuff, step by step
  	Doc ID: 	553328.1

Troubleshooting Guide TNS-12535 or ORA-12535 or ORA-12170 Errors
  	Doc ID: 	119706.1

RAC Connection Redirected To Wrong Host/IP ORA-12545
  	Doc ID: 	364855.1

How to Set the LOCAL_LISTENER Parameter to Resolve the ORA-12514
  	Doc ID: 	362787.1

Intermittent TNS - 12533 Error While Connecting to 10g RAC Database
  	Doc ID: 	472394.1

Ora-12520 When listeners on VIP in 10g RAC Setup
  	Doc ID: 	342419.1

Dispatchers Are Not Registered With Listener Running On Default Port 1521
  	Doc ID: 	465881.1

RAC Instance Status Shows Ready Zero Handlers For The Service
  	Doc ID: 	419824.1

Database Will Not Register With Listener configured on IP instead of Hostname ORA-12514
  	Doc ID: 	365314.1

How MTS and DNS are related, MTS_DISPATCHER and ORA-12545
  	Doc ID: 	131658.1

TNS-12500 errors using 64-bit Listener for 32-bit instance
  	Doc ID: 	121091.1

Ora-12545 Frequent Client Connection Failure - 10g Standard Rac
  	Doc ID: 	333159.1








-- LOCAL LISTENER

Init.ora Parameter "LOCAL_LISTENER" Reference Note
  	Doc ID: 	47339.1

How To Find Out The Example of The LOCAL_LISTENER and REMOTE_LISTENER Defined In The init.ora When configuring the 11i or R12 on RAC ?
  	Doc ID: 	744508.1

DATABASE WON'T START AFTER CHANGING THE LOCAL_LISTENER
  	Doc ID: 	415362.1

RAC Instance Status Shows Ready Zero Handlers For The Service
  	Doc ID: 	419824.1

Dispatchers Are Not Registering With the Listener When LOCAL_LISTENER is Set Correctly
  	Doc ID: 	465888.1

How to Configure Local_Listener Parameter Without Mentioning the Host Name or IP Address in Cluster Environment
  	Doc ID: 	285191.1

LOCAL_LISTENER SPECIFICATION FAILS ON STARTUP OF RAC
  	Doc ID: 	253075.1
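A minimal sketch of pointing LOCAL_LISTENER at the node VIP and forcing re-registration (node1-vip and orcl1 are placeholders):

```shell
sqlplus / as sysdba <<EOF
ALTER SYSTEM SET LOCAL_LISTENER=
  '(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))'
  SCOPE=BOTH SID='orcl1';
ALTER SYSTEM REGISTER;
EOF
```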




-- LOAD BALANCING AND TAF

10g & 11g :Configuration of TAF(Transparent Application Failover) and Load Balancing	<-- the one I used for Oberthur
  	Doc ID: 	453293.1

Understanding and Troubleshooting Instance Load Balancing	<-- good stuff, with shell scripts included
  	Doc ID: 	263599.1

Troubleshooting TAF Issues in 10g RAC	<-- good stuff, with RELOCATE command
  	Doc ID: 	271297.1

Note 259301.1 - CRS and 10g Real Application Clusters

Configuration of Load Balancing and Transparent Application Failover
  	Doc ID: 	226880.1
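For reference, the shape of a client-side TAF alias with connect-time load balancing, as described in notes like 453293.1 -- hostnames and service name are placeholders:

```
ORCL_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5)
      )
    )
  )
```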




-- SERVICES

Issues Affecting Automatic Service Registration
  	Doc ID: 	235562.1

Service / Instance Not Registering with Listener
  	Doc ID: 	433693.1









http://mvallath.wordpress.com/2010/04/29/coexist-10gr2-and-11gr2-rac-db-on-the-same-cluster-stumbling-blocks-2/


RAID5 and RAID10 comparison on EMC VNX storage
https://www.evernote.com/shard/s48/sh/e5f58be7-309f-42a9-974e-f67fd20ad4d1/9f2f546330bb300d40ce04c54c3ccac2


All tests produced by oriontoolkit 
https://www.dropbox.com/s/jzcl5ydt29mvw69/PerformanceAndTroubleshooting/oriontoolkit.zip
How the relative file number and block number are calculated from the RDBA (relative data block address)
http://translate.google.com/translate?sl=auto&tl=en&u=http://blogs.oracle.com/toddbao/2010/11/rdba.html
https://blogs.oracle.com/fmwinstallproactive/entry/how_to_use_rda_to
taking thread dumps http://middlewaremagic.com/weblogic/?p=823
./rda.sh -M            <-- RDA man page
./rda.sh -M RAC    <-- RDA man page for each individual module

./rda.sh -cv           <-- check if directory structure is intact
perl -v                  <-- verify perl version

./rda.sh -L Test     <-- List the test modules
./rda.sh -T ssh      <-- test the ssh connectivity

./rda.sh -L profile   <-- List the RDA profiles
./rda.sh -p Rac      <-- runs the RDA RAC profile


! RDA for multinode collection - RAC
1) RSA and DSA keys must be loaded; otherwise setup_cluster will still prompt for passwords
2) ./rda.sh -vX Remote setup_cluster    <-- remote data collection initial setup
3) ./rda.sh -vX Remote list             <-- list the nodes that have been configured
4) ./rda.sh -v -e REMOTE_TRACE=1        <-- run the RDA multinode collection; REMOTE_TRACE shows more detail on screen
5) ''to re-run, re-execute all commands (1,2,3,4)''

! Sample output of multinode collection - RAC
''GOOD OUTPUT''
{{{
[oracle@racnode1 rda]$ ./rda.sh -vX Remote setup_cluster
------------------------------------------------------------------------------
Requesting common information
------------------------------------------------------------------------------
Where RDA should be installed on the remote nodes?
Hit 'Return' to accept the default (/u01/rda/rda)
> 

Where setup files and reports should be stored on the remote nodes?
Hit 'Return' to accept the default (/u01/rda/rda)
> 

Should an alternative login be used to execute remote requests (Y/N)?
Hit 'Return' to accept the default (N)
> 

Enter an Oracle User ID (userid only) to view DBA_ and V$ tables. If RDA will
be run under the Oracle software owner's ID, enter a '/' here, and select Y at
the SYSDBA prompt to avoid being prompted for the database password at
runtime.
Hit 'Return' to accept the default (system)
> /

Is '/' a sysdba user (will connect as sysdba) (Y/N)?
Hit 'Return' to accept the default (N)
> y

------------------------------------------------------------------------------
Requesting information for node racnode1
------------------------------------------------------------------------------
Enter the Oracle Home to be analyzed on the node racnode1
Hit 'Return' to accept the default (/u01/app/oracle/product/10.2.0/db_1)
> 

Enter the Oracle SID to be analyzed on the node racnode1
Hit 'Return' to accept the default (orcl1)
> 

------------------------------------------------------------------------------
Requesting information for node racnode2
------------------------------------------------------------------------------
Enter the Oracle Home to be analyzed on the node racnode2
Hit 'Return' to accept the default (/u01/app/oracle/product/10.2.0/db_1)
> 

Enter the Oracle SID to be analyzed on the node racnode2
Hit 'Return' to accept the default (orcl2)
> 

------------------------------------------------------------------------------
RAC Setup Summary
------------------------------------------------------------------------------
Nodes:
. NOD001  racnode1/orcl1
. NOD002  racnode2/orcl2
2 nodes found
-------------------------------------------------------------------------------
S909RDSP: Produces the Remote Data Collection Reports
-------------------------------------------------------------------------------
        Updating the setup file ...
[oracle@racnode1 rda]$ 
[oracle@racnode1 rda]$ 
[oracle@racnode1 rda]$ 
[oracle@racnode1 rda]$ ./rda.sh -vX Remote list
Nodes:
. NOD001  racnode1/orcl1
. NOD002  racnode2/orcl2
2 nodes found





[oracle@racnode1 rda]$ ./rda.sh -v -e REMOTE_TRACE=1
        Collecting diagnostic data ...
-------------------------------------------------------------------------------
RDA Data Collection Started 26-Nov-2010 11:14:01 AM
-------------------------------------------------------------------------------
Processing Initialization module ...
Processing CFG module ...
Processing OCM module ...
Processing REXE module ...
NOD001>         Setting up ...
NOD002> bash: /u01/rda/rda/rda.sh: No such file or directory
NOD001>         Collecting diagnostic data ...
NOD001> -------------------------------------------------------------------------------
NOD001> RDA Data Collection Started 26-Nov-2010 11:14:07
NOD001> -------------------------------------------------------------------------------
NOD001> Processing Initialization module ...
NOD001> Processing CFG module ...
NOD001> Processing Sampling module ...
NOD001> Processing OCM module ...
NOD001> Processing OS module ...
NOD002>         Setting up ...
NOD002>         Collecting diagnostic data ...
NOD002> -------------------------------------------------------------------------------
NOD002> RDA Data Collection Started 26-Nov-2010 11:14:14 AM
NOD002> -------------------------------------------------------------------------------
NOD002> Processing Initialization module ...
NOD002> Processing CFG module ...
NOD002> Processing Sampling module ...
NOD002> Processing OCM module ...
NOD002> Processing OS module ...
NOD002> Processing PROF module ...
NOD002> Processing PERF module ...
NOD001> Processing PROF module ...
NOD001> Processing PERF module ...
NOD002> Processing NET module ...
NOD002> Processing ONET module ...
NOD002> Listener checks may take a few minutes. please be patient...
NOD002>   Processing listener LISTENER_RACNODE2
NOD002> Processing Oracle installation module ...
NOD002> Processing RDBMS module ...
NOD001> Processing NET module ...
NOD001> Processing ONET module ...
NOD001> Listener checks may take a few minutes. please be patient...
NOD001>   Processing listener LISTENER_RACNODE1
NOD002> Processing RDBMS Memory module ...
NOD001> Processing Oracle installation module ...
NOD001> Processing RDBMS module ...
NOD002> Processing LOG module ...
NOD002> Processing Cluster module ...
NOD002> Processing RDSP module ...
NOD002> Processing LOAD module ...
NOD002> Processing End module ...
NOD002> -------------------------------------------------------------------------------
NOD002> RDA Data Collection Ended 26-Nov-2010 11:16:08 AM
NOD002> -------------------------------------------------------------------------------
NOD002>         Generating the reports ...
NOD002>                 - RDA_PERF_top_sql.txt ...
NOD002>                 - RDA_PERF_autostats.txt ...
NOD002>                 - RDA_LOG_udump4_orcl2_ora_17965_trc.dat ...
NOD002>                 - RDA_ONET_dynamic_dep.txt ...
NOD002>                 - RDA_END_report.txt ...
NOD002>                 - RDA_INST_oracle_home.txt ...
NOD002>                 - RDA_DBA_init_ora.txt ...
NOD002>                 - RDA_DBM_spresmal.txt ...
NOD002>                 - RDA_NET_udp_settings.txt ...
NOD002>                 - RDA_LOG_bdump3_orcl2_arc2_18143_trc.dat ...
NOD002>                 - RDA_INST_make_report.txt ...
NOD002>                 - RDA_RAC_srvctl.txt ...
NOD002>                 - RDA_LOG_bdump7_orcl2_lgwr_16518_trc.dat ...
NOD002>                 - RDA_OS_kernel_info.txt ...
NOD002>                 - RDA_RAC_css_log.txt ...
NOD002>                 - RDA_PROF_dot_bashrc.txt ...
NOD002>                 - RDA_RAC_init.txt ...
NOD002>                 - RDA_INST_inventory_xml.txt ...
NOD002>                 - RDA_LOG_bdump9_orcl2_lmd0_16485_trc.dat ...
NOD002>                 - RDA_OS_misc_linux_info.txt ...
NOD002>                 - RDA_CFG_database.txt ...
NOD002>                 - RDA_LOG_bdump2_orcl2_arc0_8802_trc.dat ...
NOD002>                 - RDA_PERF_lock_data.txt ...
NOD002>                 - RDA_INST_oratab.txt ...
NOD002>                 - RDA_OS_etc_conf.txt ...
NOD002>                 - RDA_LOG_bdump4_orcl2_arc2_8806_trc.dat ...
NOD002>                 - RDA_OS_ntpstatus.txt ...
NOD002>                 - RDA_PROF_dot_bash_profile.txt ...
NOD002>                 - RDA_OS_linux_release.txt ...
NOD002>                 - RDA_DBM_sgastat.txt ...
NOD002>                 - RDA_RAC_cluster_net.txt ...
NOD002>                 - RDA_RAC_crs_stat.txt ...
NOD002>                 - RDA_RAC_crs_log.txt ...
NOD002>                 - RDA_LOG_bdump13_orcl2_lms0_16500_trc.dat ...
NOD002>                 - RDA_DBA_database_properties.txt ...
NOD002>                 - RDA_PERF_cbo_trace.txt ...
NOD002>                 - RDA_DBA_text.txt ...
NOD002>                 - RDA_LOG_udump1_orcl2_ora_16222_trc.dat ...
NOD002>                 - RDA_NET_ifconfig.txt ...
NOD002>                 - RDA_DBM_hwm.txt ...
NOD002>                 - RDA_PROF_profiles.txt ...
NOD002>                 - RDA_LOG_bdump8_orcl2_lgwr_7809_trc.dat ...
NOD002>                 - RDA_PROF_env.txt ...
NOD002>                 - RDA_DBM_sgacomp.txt ...
NOD002>                 - RDA_PERF_addm_report.txt ...
NOD002>                 - RDA_DBA_vsystem_event.txt ...
NOD002>                 - RDA_DBA_sga_info.txt ...
NOD002>                 - RDA_RAC_logs.txt ...
NOD002>                 - RDA_ONET_hs_inithsodbc_ora.txt ...
NOD002>                 - RDA_RAC_ocrconfig.txt ...
NOD002>                 - RDA_LOG_bdump.txt ...
NOD002>                 - RDA_INST_orainst_loc.txt ...
NOD002>                 - RDA_LOG_bdump5_orcl2_diag_16448_trc.dat ...
NOD002>                 - RDA_DBA_ses_procs.txt ...
NOD002>                 - RDA_DBM_lchitrat.txt ...
NOD002>                 - RDA_DBA_tablespace.txt ...
NOD002>                 - RDA_OS_disk_info.txt ...
NOD002>                 - RDA_LOG_last_errors.txt ...
NOD002>                 - RDA_LOG_udump.txt ...
NOD002>                 - RDA_DBA_jvm_info.txt ...
NOD002>                 - RDA_OS_tracing.txt ...
NOD002>                 - RDA_LOG_udump3_orcl2_ora_18539_trc.dat ...
NOD002>                 - RDA_DBA_vfeatureinfo.txt ...
NOD002>                 - RDA_RAC_alert_log.txt ...
NOD002>                 - RDA_END_system.txt ...
NOD002>                 - RDA_OS_cpu_info.txt ...
NOD002>                 - RDA_INST_orainventory_logdir.txt ...
NOD002>                 - RDA_RAC_ipc.txt ...
NOD002>                 - RDA_OS_java_version.txt ...
NOD002>                 - RDA_DBA_replication.txt ...
NOD002>                 - RDA_RAC_ocrcheck.txt ...
NOD002>                 - RDA_DBA_nls_parms.txt ...
NOD002>                 - RDA_DBA_vresource_limit.txt ...
NOD002>                 - RDA_DBA_partition_data.txt ...
NOD002>                 - RDA_ONET_sqlnet_listener_ora.txt ...
NOD002>                 - RDA_LOG_bdump17_orcl2_smon_7813_trc.dat ...
NOD002>                 - RDA_OS_memory_info.txt ...
NOD002>                 - RDA_DBA_vspparameters.txt ...
NOD002>                 - RDA_INST_oraInstall20080321_085143PM_out.dat ...
NOD002>                 - RDA_LOG_bdump15_orcl2_mmon_16580_trc.dat ...
NOD002>                 - RDA_LOG_log_trace.txt ...
NOD002>                 - RDA_LOG_udump2_orcl2_ora_24872_trc.dat ...
NOD002>                 - RDA_RAC_ocrdump.txt ...
NOD002>                 - RDA_ONET_sqlnet_sqlnet_ora.txt ...
NOD002>                 - RDA_OS_nls_env.txt ...
NOD002>                 - RDA_DBM_libcache.txt ...
NOD002>                 - RDA_OS_packages.txt ...
NOD002>                 - RDA_DBA_datafile.txt ...
NOD002>                 - RDA_DBA_security_files.txt ...
NOD002>                 - RDA_LOG_bdump1_orcl2_arc0_18044_trc.dat ...
NOD002>                 - RDA_NET_etc_files.txt ...
NOD002>                 - RDA_RAC_crs_inventory.txt ...
NOD002>                 - RDA_RAC_evm_log.txt ...
NOD002>                 - RDA_DBA_vcontrolfile.txt ...
NOD002>                 - RDA_DBA_security.txt ...
NOD002>                 - RDA_LOG_bdump14_orcl2_lms0_7793_trc.dat ...
NOD002>                 - RDA_DBA_spatial.txt ...
NOD002>                 - RDA_LOG_bdump16_orcl2_qmnc_9268_trc.dat ...
NOD002>                 - RDA_DBA_undo_info.txt ...
NOD002>                 - RDA_PERF_ash_report.txt ...
NOD002>                 - RDA_DBA_vlicense.txt ...
NOD002>                 - RDA_INST_comps_xml.txt ...
NOD002>                 - RDA_DBA_voption.txt ...
NOD002>                 - RDA_DBA_jobs.txt ...
NOD002>                 - RDA_RAC_client_log.txt ...
NOD002>                 - RDA_DBA_vfeatureusage.txt ...
NOD002>                 - RDA_LOG_error2_orcl2_arc0_8802_trc.dat ...
NOD002>                 - RDA_PERF_overview.txt ...
NOD002>                 - RDA_PROF_etc_profile.txt ...
NOD002>                 - RDA_ONET_lstatus.txt ...
NOD002>                 - RDA_DBM_respool.txt ...
NOD002>                 - RDA_DBA_vparameters.txt ...
NOD002>                 - RDA_PROF_ulimit.txt ...
NOD002>                 - RDA_LOG_bdump10_orcl2_lmd0_7791_trc.dat ...
NOD002>                 - RDA_OS_sysdef.txt ...
NOD002>                 - RDA_RAC_cluster_status_file.txt ...
NOD002>                 - RDA_ONET_sqlnetsqlnet_log.txt ...
NOD002>                 - RDA_OS_system_error_log.txt ...
NOD002>                 - RDA_LOG_bdump11_orcl2_lmon_16460_trc.dat ...
NOD002>                 - RDA_ONET_sqlnet_tnsnames_ora.txt ...
NOD002>                 - RDA_LOG_udump5_orcl2_ora_16811_trc.dat ...
NOD002>                 - RDA_INST__link_homes.txt ...
NOD002>                 - RDA_DBA_vcompatibility.txt ...
NOD002>                 - RDA_RAC_racg_dump.txt ...
NOD002>                 - RDA_LOG_bdump12_orcl2_lmon_7789_trc.dat ...
NOD002>                 - RDA_PERF_latch_data.txt ...
NOD002>                 - RDA_CFG_homes.txt ...
NOD002>                 - RDA_DBA_latch_info.txt ...
NOD002>                 - RDA_LOG_error1_orcl2_arc0_18044_trc.dat ...
NOD002>                 - RDA_LOG_bdump6_orcl2_diag_7752_trc.dat ...
NOD002>                 - RDA_RAC_crs_status.txt ...
NOD002>                 - RDA_NET_netperf.txt ...
NOD002>                 - RDA_INST_orainventory_files.txt ...
NOD002>                 - RDA_RAC_racOnOff.txt ...
NOD002>                 - RDA_NET_tcpip_settings.txt ...
NOD002>                 - RDA_DBA_CPU_Statistic.txt ...
NOD002>                 - RDA_ONET_adapters.txt ...
NOD002>                 - RDA_LOG_alert_log.txt ...
NOD002>                 - RDA_CFG_oh_inv.txt ...
NOD002>                 - RDA_DBA_vsession_wait.txt ...
NOD002>                 - RDA_LOG_udump6_orcl2_ora_16173_trc.dat ...
NOD002>                 - RDA_DBA_log_info.txt ...
NOD002>                 - RDA_INST_oraInstall20080321_033734PM_out.dat ...
NOD002>                 - RDA_PROF_umask.txt ...
NOD002>                 - RDA_OS_services.txt ...
NOD002>                 - RDA_OS_libc.txt ...
NOD002>                 - RDA_DBM_subpool.txt ...
NOD002>                 - RDA_DBA_aq_data.txt ...
NOD002>                 - RDA_INST__link_oh_inv.txt ...
NOD002>                 - RDA_PERF_awr_report.txt ...
NOD002>                 - RDA_INST_oracle_install.txt ...
NOD002>                 - RDA_DBA_versions.txt ...
NOD002>                 - RDA_DBA_vHWM_Statistic.txt ...
NOD002>                 - RDA_DBA_dba_registry.txt ...
NOD002>                 - RDA_ONET_netenv.txt ...
NOD002>                 - Report index ...
NOD002>         Packaging the reports ...
NOD002>                 RDA_NOD002.zip created for transfer
NOD002>         Updating the setup file ...
NOD001> Processing RDBMS Memory module ...
NOD001> Processing LOG module ...
NOD001> Processing Cluster module ...
NOD001> Processing RDSP module ...
NOD001> Processing LOAD module ...
NOD001> Processing End module ...
NOD001> -------------------------------------------------------------------------------
NOD001> RDA Data Collection Ended 26-Nov-2010 11:16:46
NOD001> -------------------------------------------------------------------------------
NOD001>         Generating the reports ...
NOD001>                 - RDA_PERF_top_sql.txt ...
NOD001>                 - RDA_PERF_autostats.txt ...
NOD001>                 - RDA_ONET_dynamic_dep.txt ...
NOD001>                 - RDA_END_report.txt ...
NOD001>                 - RDA_INST_oracle_home.txt ...
NOD001>                 - RDA_DBA_init_ora.txt ...
NOD001>                 - RDA_LOG_bdump13_orcl1_lgwr_22435_trc.dat ...
NOD001>                 - RDA_DBM_spresmal.txt ...
NOD001>                 - RDA_LOG_bdump7_orcl1_ckpt_7844_trc.dat ...
NOD001>                 - RDA_NET_udp_settings.txt ...
NOD001>                 - RDA_INST_make_report.txt ...
NOD001>                 - RDA_RAC_srvctl.txt ...
NOD001>                 - RDA_LOG_bdump6_orcl1_cjq0_7862_trc.dat ...
NOD001>                 - RDA_OS_kernel_info.txt ...
NOD001>                 - RDA_RAC_css_log.txt ...
NOD001>                 - RDA_PROF_dot_bashrc.txt ...
NOD001>                 - RDA_LOG_bdump19_orcl1_lms0_22392_trc.dat ...
NOD001>                 - RDA_RAC_init.txt ...
NOD001>                 - RDA_INST_inventory_xml.txt ...
NOD001>                 - RDA_OS_misc_linux_info.txt ...
NOD001>                 - RDA_CFG_database.txt ...
NOD001>                 - RDA_LOG_bdump22_orcl1_smon_7856_trc.dat ...
NOD001>                 - RDA_PERF_lock_data.txt ...
NOD001>                 - RDA_LOG_bdump15_orcl1_lmd0_22347_trc.dat ...
NOD001>                 - RDA_INST_oratab.txt ...
NOD001>                 - RDA_OS_etc_conf.txt ...
NOD001>                 - RDA_OS_ntpstatus.txt ...
NOD001>                 - RDA_PROF_dot_bash_profile.txt ...
NOD001>                 - RDA_LOG_bdump2_orcl1_arc0_25404_trc.dat ...
NOD001>                 - RDA_OS_linux_release.txt ...
NOD001>                 - RDA_DBM_sgastat.txt ...
NOD001>                 - RDA_RAC_cluster_net.txt ...
NOD001>                 - RDA_RAC_crs_stat.txt ...
NOD001>                 - RDA_LOG_bdump3_orcl1_arc1_23210_trc.dat ...
NOD001>                 - RDA_INST_oraInstall20090831_114713AM_out.dat ...
NOD001>                 - RDA_RAC_crs_log.txt ...
NOD001>                 - RDA_DBA_database_properties.txt ...
NOD001>                 - RDA_PERF_cbo_trace.txt ...
NOD001>                 - RDA_DBA_text.txt ...
NOD001>                 - RDA_LOG_bdump4_orcl1_arc2_9360_trc.dat ...
NOD001>                 - RDA_NET_ifconfig.txt ...
NOD001>                 - RDA_DBM_hwm.txt ...
NOD001>                 - RDA_PROF_profiles.txt ...
NOD001>                 - RDA_PROF_env.txt ...
NOD001>                 - RDA_DBM_sgacomp.txt ...
NOD001>                 - RDA_PERF_addm_report.txt ...
NOD001>                 - RDA_DBA_vsystem_event.txt ...
NOD001>                 - RDA_DBA_sga_info.txt ...
NOD001>                 - RDA_RAC_logs.txt ...
NOD001>                 - RDA_ONET_hs_inithsodbc_ora.txt ...
NOD001>                 - RDA_RAC_ocrconfig.txt ...
NOD001>                 - RDA_LOG_bdump1_orcl1_arc0_9341_trc.dat ...
NOD001>                 - RDA_LOG_bdump.txt ...
NOD001>                 - RDA_LOG_bdump14_orcl1_lmd0_7821_trc.dat ...
NOD001>                 - RDA_INST_orainst_loc.txt ...
NOD001>                 - RDA_INST_installActions20090831_073133AM_log.dat ...
NOD001>                 - RDA_DBA_ses_procs.txt ...
NOD001>                 - RDA_DBM_lchitrat.txt ...
NOD001>                 - RDA_INST_oraInstall20090831_073524AM_err.dat ...
NOD001>                 - RDA_DBA_tablespace.txt ...
NOD001>                 - RDA_OS_disk_info.txt ...
NOD001>                 - RDA_LOG_last_errors.txt ...
NOD001>                 - RDA_LOG_udump.txt ...
NOD001>                 - RDA_DBA_jvm_info.txt ...
NOD001>                 - RDA_LOG_bdump17_orcl1_lmon_22339_trc.dat ...
NOD001>                 - RDA_OS_tracing.txt ...
NOD001>                 - RDA_DBA_vfeatureinfo.txt ...
NOD001>                 - RDA_INST_installActions20080321_022900PM_log.dat ...
NOD001>                 - RDA_RAC_alert_log.txt ...
NOD001>                 - RDA_END_system.txt ...
NOD001>                 - RDA_OS_cpu_info.txt ...
NOD001>                 - RDA_INST_orainventory_logdir.txt ...
NOD001>                 - RDA_RAC_ipc.txt ...
NOD001>                 - RDA_OS_java_version.txt ...
NOD001>                 - RDA_DBA_replication.txt ...
NOD001>                 - RDA_LOG_bdump21_orcl1_reco_7858_trc.dat ...
NOD001>                 - RDA_RAC_ocrcheck.txt ...
NOD001>                 - RDA_DBA_nls_parms.txt ...
NOD001>                 - RDA_DBA_vresource_limit.txt ...
NOD001>                 - RDA_DBA_partition_data.txt ...
NOD001>                 - RDA_ONET_sqlnet_listener_ora.txt ...
NOD001>                 - RDA_OS_memory_info.txt ...
NOD001>                 - RDA_DBA_vspparameters.txt ...
NOD001>                 - RDA_LOG_bdump20_orcl1_mmnl_7895_trc.dat ...
NOD001>                 - RDA_LOG_log_trace.txt ...
NOD001>                 - RDA_LOG_bdump10_orcl1_j000_17866_trc.dat ...
NOD001>                 - RDA_RAC_ocrdump.txt ...
NOD001>                 - RDA_ONET_sqlnet_sqlnet_ora.txt ...
NOD001>                 - RDA_INST_installActions20090831_114713AM_log.dat ...
NOD001>                 - RDA_OS_nls_env.txt ...
NOD001>                 - RDA_DBM_libcache.txt ...
NOD001>                 - RDA_OS_packages.txt ...
NOD001>                 - RDA_DBA_datafile.txt ...
NOD001>                 - RDA_DBA_security_files.txt ...
NOD001>                 - RDA_LOG_udump1_orcl1_ora_5922_trc.dat ...
NOD001>                 - RDA_LOG_bdump9_orcl1_diag_22327_trc.dat ...
NOD001>                 - RDA_NET_etc_files.txt ...
NOD001>                 - RDA_LOG_bdump11_orcl1_lck0_7963_trc.dat ...
NOD001>                 - RDA_RAC_crs_inventory.txt ...
NOD001>                 - RDA_RAC_evm_log.txt ...
NOD001>                 - RDA_INST_oraInstall20090831_114713AM_err.dat ...
NOD001>                 - RDA_DBA_vcontrolfile.txt ...
NOD001>                 - RDA_DBA_security.txt ...
NOD001>                 - RDA_DBA_spatial.txt ...
NOD001>                 - RDA_DBA_undo_info.txt ...
NOD001>                 - RDA_LOG_bdump8_orcl1_diag_7815_trc.dat ...
NOD001>                 - RDA_LOG_udump3_orcl1_ora_5277_trc.dat ...
NOD001>                 - RDA_PERF_ash_report.txt ...
NOD001>                 - RDA_DBA_vlicense.txt ...
NOD001>                 - RDA_INST_comps_xml.txt ...
NOD001>                 - RDA_DBA_voption.txt ...
NOD001>                 - RDA_DBA_jobs.txt ...
NOD001>                 - RDA_RAC_client_log.txt ...
NOD001>                 - RDA_DBA_vfeatureusage.txt ...
NOD001>                 - RDA_PERF_overview.txt ...
NOD001>                 - RDA_PROF_etc_profile.txt ...
NOD001>                 - RDA_ONET_lstatus.txt ...
NOD001>                 - RDA_DBM_respool.txt ...
NOD001>                 - RDA_INST_installActions20080321_033734PM_log.dat ...
NOD001>                 - RDA_DBA_vparameters.txt ...
NOD001>                 - RDA_PROF_ulimit.txt ...
NOD001>                 - RDA_INST_installActions20090831_073524AM_log.dat ...
NOD001>                 - RDA_LOG_bdump16_orcl1_lmon_7819_trc.dat ...
NOD001>                 - RDA_OS_sysdef.txt ...
NOD001>                 - RDA_LOG_bdump23_orcl1_smon_22439_trc.dat ...
NOD001>                 - RDA_RAC_cluster_status_file.txt ...
NOD001>                 - RDA_ONET_sqlnetsqlnet_log.txt ...
NOD001>                 - RDA_INST_oraInstall20080321_022900PM_out.dat ...
NOD001>                 - RDA_OS_system_error_log.txt ...
NOD001>                 - RDA_ONET_sqlnet_tnsnames_ora.txt ...
NOD001>                 - RDA_LOG_bdump18_orcl1_lms0_7830_trc.dat ...
NOD001>                 - RDA_INST__link_homes.txt ...
NOD001>                 - RDA_DBA_vcompatibility.txt ...
NOD001>                 - RDA_RAC_racg_dump.txt ...
NOD001>                 - RDA_PERF_latch_data.txt ...
NOD001>                 - RDA_CFG_homes.txt ...
NOD001>                 - RDA_DBA_latch_info.txt ...
NOD001>                 - RDA_LOG_udump2_orcl1_ora_10349_trc.dat ...
NOD001>                 - RDA_LOG_error1_orcl1_arc0_9341_trc.dat ...
NOD001>                 - RDA_RAC_crs_status.txt ...
NOD001>                 - RDA_NET_netperf.txt ...
NOD001>                 - RDA_INST_orainventory_files.txt ...
NOD001>                 - RDA_RAC_racOnOff.txt ...
NOD001>                 - RDA_NET_tcpip_settings.txt ...
NOD001>                 - RDA_DBA_CPU_Statistic.txt ...
NOD001>                 - RDA_LOG_bdump12_orcl1_lgwr_7839_trc.dat ...
NOD001>                 - RDA_ONET_adapters.txt ...
NOD001>                 - RDA_LOG_alert_log.txt ...
NOD001>                 - RDA_CFG_oh_inv.txt ...
NOD001>                 - RDA_DBA_vsession_wait.txt ...
NOD001>                 - RDA_DBA_log_info.txt ...
NOD001>                 - RDA_INST_oraInstall20080321_033734PM_out.dat ...
NOD001>                 - RDA_PROF_umask.txt ...
NOD001>                 - RDA_OS_services.txt ...
NOD001>                 - RDA_OS_libc.txt ...
NOD001>                 - RDA_DBM_subpool.txt ...
NOD001>                 - RDA_DBA_aq_data.txt ...
NOD001>                 - RDA_LOG_bdump5_orcl1_arc2_23248_trc.dat ...
NOD001>                 - RDA_INST__link_oh_inv.txt ...
NOD001>                 - RDA_PERF_awr_report.txt ...
NOD001>                 - RDA_INST_oracle_install.txt ...
NOD001>                 - RDA_DBA_versions.txt ...
NOD001>                 - RDA_DBA_vHWM_Statistic.txt ...
NOD001>                 - RDA_DBA_dba_registry.txt ...
NOD001>                 - RDA_LOG_udump4_orcl1_ora_10747_trc.dat ...
NOD001>                 - RDA_ONET_netenv.txt ...
NOD001>                 - RDA_INST_installActions20080321_085143PM_log.dat ...
NOD001>                 - Report index ...
NOD001>         Packaging the reports ...
NOD001>                 RDA_NOD001.zip created for transfer
NOD001>         Updating the setup file ...
Processing RDSP module ...
Processing LOAD module ...
Processing End module ...
-------------------------------------------------------------------------------
RDA Data Collection Ended 26-Nov-2010 11:16:52 AM
-------------------------------------------------------------------------------
        Generating the reports ...
                - RDA_END_report.txt ...
                - RDA_RDSP_overview.txt ...
                - RDA_S909RDSP.txt ...
                - RDA_END_system.txt ...
                - RDA_RDSP_results.txt ...
                - RDA_CFG_homes.txt ...
                - RDA_CFG_oh_inv.txt ...
                - Report index ...
        Packaging the reports ...

 You can review the reports by transferring the contents of the
 /u01/rda/rda/output directory to a location where you have web-browser
 access. Then, point your browser at this file to display the reports:
   RDA__start.htm

 Based on your server configuration, some possible alternative approaches are:
 - If your client computer with a browser has access to a web shared
   directory, copy the /u01/rda/rda/output directory to the web shared
   directory and visit this URL:
    http://machine:port/web_shared_directory/RDA__start.htm
   or
 - If your client computer with a browser has FTP access to the server
   computer with the /u01/rda/rda/output directory, visit this URL:
    ftp://root@racnode1.us.oracle.com//u01/rda/rda/output/RDA__start.htm

 If this file was generated to assist in resolving a Service Request, please
 send /u01/rda/rda/output/RDA.RDA_racnode1.zip to Oracle Support by uploading
 the file via My Oracle Support. If ftp'ing the file, please be sure to ftp in
 BINARY format.

        Updating the setup file ...




[oracle@racnode1 output]$ unzip -l RDA.RDA_racnode1.zip 
Archive:  RDA.RDA_racnode1.zip
  Length     Date   Time    Name
 --------    ----   ----    ----
      121  11-26-10 11:14   RDA_0CFG.fil
     2507  11-26-10 11:16   RDA.log
        0  11-26-10 11:14   RDA_0REXE.fil
     1533  11-26-10 11:16   RDA_END_report.txt
      634  11-26-10 11:16   RDA_RDSP_overview.txt
      469  11-26-10 11:16   RDA_S909RDSP.htm
      236  11-26-10 11:14   RDA_S010CFG.toc
     4361  11-26-10 11:16   RDA_END_report.htm
      147  11-26-10 11:16   RDA_S909RDSP.txt
      412  11-26-10 11:16   RDA_END_system.txt
     3407  11-26-10 11:05   RDA_rda.css
      486  11-26-10 11:16   RDA__index.htm
      251  11-26-10 11:16   RDA_S010CFG.txt
    19970  11-26-10 11:16   RDA_CFG_oh_inv.htm
      161  11-26-10 11:16   RDA__index.txt
      179  11-26-10 11:16   RDA__blank.htm
     2223  11-26-10 11:16   RDA_RDSP_overview.htm
      915  11-26-10 11:16   RDA_CFG_homes.htm
      138  11-26-10 11:16   RDA_S909RDSP.toc
      388  11-26-10 11:16   RDA_RDSP_results.txt
      604  11-26-10 11:16   RDA_S010CFG.htm
      814  11-26-10 11:16   RDA__start.htm
      358  11-26-10 11:14   RDA_CFG_homes.txt
     5984  11-26-10 11:14   RDA_CFG_oh_inv.txt
     1153  11-26-10 11:16   RDA_RDSP_results.htm
     1291  11-26-10 11:16   RDA_END_system.htm
  1289027  11-26-10 11:16   remote/RDA_NOD001.zip         <---- THIS SHOULD EXIST
   884638  11-26-10 11:16   remote/RDA_NOD002.zip         <---- THIS SHOULD EXIST
 --------                   -------
  2222407                   28 files
}}}

''BAD OUTPUT''
{{{

[oracle@racnode1 rda]$ ./rda.sh -v -e REMOTE_TRACE=1
        Collecting diagnostic data ...
-------------------------------------------------------------------------------
RDA Data Collection Started 26-Nov-2010 11:12:20 AM
-------------------------------------------------------------------------------
Processing Initialization module ...
Processing CFG module ...
Processing OCM module ...
Processing REXE module ...
Processing RDSP module ...
Processing LOAD module ...
Processing End module ...
-------------------------------------------------------------------------------
RDA Data Collection Ended 26-Nov-2010 11:12:26 AM
-------------------------------------------------------------------------------
        Generating the reports ...
                - RDA_END_report.txt ...
                - RDA_RDSP_overview.txt ...
                - RDA_END_system.txt ...
                - RDA_RDSP_results.txt ...
                - RDA_CFG_homes.txt ...
                - RDA_CFG_oh_inv.txt ...
                - Report index ...
        Packaging the reports ...

 You can review the reports by transferring the contents of the
 /u01/rda/rda/output directory to a location where you have web-browser
 access. Then, point your browser at this file to display the reports:
   RDA__start.htm

 Based on your server configuration, some possible alternative approaches are:
 - If your client computer with a browser has access to a web shared
   directory, copy the /u01/rda/rda/output directory to the web shared
   directory and visit this URL:
    http://machine:port/web_shared_directory/RDA__start.htm
   or
 - If your client computer with a browser has FTP access to the server
   computer with the /u01/rda/rda/output directory, visit this URL:
    ftp://root@racnode1.us.oracle.com//u01/rda/rda/output/RDA__start.htm

 If this file was generated to assist in resolving a Service Request, please
 send /u01/rda/rda/output/RDA.RDA_racnode1.zip to Oracle Support by uploading
 the file via My Oracle Support. If ftp'ing the file, please be sure to ftp in
 BINARY format.

        Updating the setup file ...


[oracle@racnode1 output]$ unzip -l RDA.RDA_racnode1.zip 
Archive:  RDA.RDA_racnode1.zip
  Length     Date   Time    Name
 --------    ----   ----    ----
      121  11-26-10 11:12   RDA_0CFG.fil
     1611  11-26-10 11:12   RDA.log
        0  11-26-10 11:12   RDA_0REXE.fil
     1533  11-26-10 11:12   RDA_END_report.txt
      634  11-26-10 11:12   RDA_RDSP_overview.txt
      469  11-26-10 11:12   RDA_S909RDSP.htm
      236  11-26-10 11:12   RDA_S010CFG.toc
     4361  11-26-10 11:12   RDA_END_report.htm
      147  11-26-10 11:12   RDA_S909RDSP.txt
      412  11-26-10 11:12   RDA_END_system.txt
     3407  11-26-10 11:05   RDA_rda.css
      486  11-26-10 11:12   RDA__index.htm
      251  11-26-10 11:12   RDA_S010CFG.txt
    19970  11-26-10 11:12   RDA_CFG_oh_inv.htm
      161  11-26-10 11:12   RDA__index.txt
      179  11-26-10 11:12   RDA__blank.htm
     2223  11-26-10 11:12   RDA_RDSP_overview.htm
      915  11-26-10 11:12   RDA_CFG_homes.htm
      138  11-26-10 11:12   RDA_S909RDSP.toc
      308  11-26-10 11:12   RDA_RDSP_results.txt
      604  11-26-10 11:12   RDA_S010CFG.htm
      814  11-26-10 11:12   RDA__start.htm
      358  11-26-10 11:12   RDA_CFG_homes.txt
     5984  11-26-10 11:12   RDA_CFG_oh_inv.txt
     1067  11-26-10 11:12   RDA_RDSP_results.htm
     1291  11-26-10 11:12   RDA_END_system.htm
 --------                   -------
    47680                   26 files
}}}


! Related Notes
330362.1 RDA Troubleshooting Guide
Remote Diagnostic Agent (RDA) 4 - RAC Cluster Guide (Doc ID 359395.1)
Maclean's notes http://goo.gl/OVvnZ
http://coding-geek.com/how-databases-work/


! complete list of databases 
https://dbdb.io/browse





http://community.vsl.co.at/forums/p/22659/154067.aspx
http://jpaul.me/?p=1078
http://www.tomshardware.com/forum/268964-30-what-diffrence-rdimms-udimms
ORACLE® DATABASE 10G WITH RAC AND RELIABLE DATAGRAM SOCKETS CONFIGURATION GUIDE
http://www.filibeto.org/sun/lib/blueprints/821-0802.pdf

Using Reliable Datagram Sockets Over InfiniBand for Oracle Database 10g Clusters
http://www.dell.com/downloads/global/power/ps2q07-20070279-Mahmood.pdf

http://www.freelists.org/post/oracle-l/RAC-declustering,7

http://www.google.com.ph/search?hl=tl&safe=active&q=oracle+kcfis&oq=oracle+kcfis&aq=f&aqi=&aql=&gs_sm=e&gs_upl=10987l11538l0l7l4l0l0l0l0l0l0ll0



! updated 2019 
Oracle Clusterware and RAC Support for RDS Over Infiniband (Doc ID 751343.1)
https://maxfilatov.wordpress.com/2018/12/06/sad-story-about-oracle-rds-and-infiniband-relationship/

! pre 12.2
{{{
To verify which protocol RAC is using, look in the alert log of the ASM and database instances during startup:

In pre-12.2 versions, the alert.log for both ASM and database shows:

Cluster communication is configured to use the following interface(s) for this instance
172.x.x.109
cluster interconnect IPC version:Oracle RDS/IP (generic)
IPC Vendor 1 proto 3
Version 3.0

In the UDP case it would say "UDP/IP" instead of "RDS/IP".
}}}
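A quick way to check which protocol an instance came up with is to grep its alert log for the IPC lines quoted above. A minimal sketch; the `proto_of` helper and the temp-file demo are illustrative, not an Oracle tool, and in practice you would point it at your alert_<SID>.log under the diag destination:

```shell
#!/bin/sh
# proto_of: print RDS, UDP, or unknown based on the IPC line in an alert.log
# (hypothetical helper; the real check is just grepping the alert log)
proto_of() {
    if grep -q "RDS/IP" "$1"; then echo "RDS"
    elif grep -q "UDP/IP" "$1"; then echo "UDP"
    else echo "unknown"
    fi
}

# demo with the excerpt quoted in the note above
tmp=$(mktemp)
printf 'cluster interconnect IPC version:Oracle RDS/IP (generic)\n' > "$tmp"
proto_of "$tmp"     # prints: RDS
rm -f "$tmp"
```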

! 12.2 onwards 

{{{
From 12.2 onward, RDS is supported only for databases running on engineered systems; databases on non-engineered systems always use UDP, and the database alert.log will show UDP instead of RDS.  This is true even if RDS is linked into the oracle binary.

In 12.2+, the ASM alert.log shows:

cluster interconnect IPC version: Oracle RDS/IP (generic)
IPC Vendor 1 proto 3
Version 4.1
 
However, the database alert.log shows:

cluster interconnect IPC version: [IPCLW over RDS(mode 2) ]
IPC Vendor 1 proto 2
 
IPCLW is a new lightweight IPC implementation used by the 12.2 database.
It is an optimized version of the IPC used in 12.1 and earlier.
}}}

! my examples here 
oracle regexp output left hand side https://gist.github.com/karlarao/4a456e9865247b07d1c7654116801214
oracle one column to rows https://gist.github.com/karlarao/9eb0d05fdb680db4bb6153e4a23c9bac



! references
oracle split string after keyword https://www.google.com/search?biw=1436&bih=796&ei=T0dqW8-ACozt5gKztqyABw&q=oracle+split+string+after+keyword&oq=oracle+split+string+after+keyword&gs_l=psy-ab.3...1867.1867.0.2171.1.1.0.0.0.0.74.74.1.1.0....0...1.1.64.psy-ab..0.0.0....0.8DSEO_Qp9-0
https://stackoverflow.com/questions/36015847/extract-string-after-character-and-before-final-full-stop-period-in-sql
https://stackoverflow.com/questions/45165587/how-to-get-string-after-character-oracle
https://stackoverflow.com/questions/28674778/oracle-need-to-extract-text-between-given-strings
https://www.experts-exchange.com/questions/28349313/Oracle-SQL-Extract-rightmost-word-in-string.html
https://lalitkumarb.wordpress.com/2017/02/17/regexp_substr-extract-everything-after-specific-character/ <-- good stuff
https://lalitkumarb.wordpress.com/2018/07/20/regexp_substr-extract-everything-before-specific-character/

https://stackoverflow.com/questions/4389571/how-to-select-a-substring-in-oracle-sql-up-to-a-specific-character
https://basitaalishan.com/2014/02/23/removing-part-of-string-before-and-after-specific-character-using-transact-sql-string-functions/
https://community.toadworld.com/platforms/sql-server/b/weblog/archive/2014/02/23/removing-part-of-string-before-and-after-specific-character-using-transact-sql-string-functions
https://www.google.com/search?q=substr+and+instr+in+oracle&oq=substr+and+instr&aqs=chrome.1.69i57j0l5.3768j1j4&sourceid=chrome&ie=UTF-8
https://stackoverflow.com/questions/39405528/using-substr-and-instr-in-sql
http://oraclemine.com/substr-and-instr-in-oracle/ <-- good stuff
http://www.java2s.com/Code/Oracle/Char-Functions/CombineINSTRandSUBSTRtogether.htm
https://www.google.com/search?q=oracle+substr+instr+on+keyword&ei=Y0tqW_OaPOmO0gKK64-4BQ&start=10&sa=N&biw=1436&bih=796
https://www.google.com/search?q=oracle+substr+instr+until+the+end+of+string&oq=oracle+substr+instr+until+the+end+of+string&aqs=chrome..69i57j69i64l3.12272j1j1&sourceid=chrome&ie=UTF-8
https://stackoverflow.com/questions/30820143/using-substr-and-instr-find-end-of-string   <-- good stuff
https://stackoverflow.com/questions/14621357/oracle-get-substring-before-a-space <-- good stuff
https://stackoverflow.com/questions/15614751/if-statement-in-select-oracle









{{{

RH033

[ ] UNIT 1 - LINUX IDEAS AND HISTORY
	open source definition
		www.opensource.org/docs/definition.php
		www.gnu.org/philosophy/free-sw.html
	gnu public license
		www.gnu.org/copyleft/gpl.html

		
[ ] UNIT 2 - LINUX USAGE BASICS
	x window system
	passwords
	root, sudo
	vim, nano
	/etc/issue for the custom message

	
[ ] UNIT 3 - RUNNING COMMANDS AND GETTING HELP
	levels of help
		whatis
		--help
		man (divided into pages), info (divided into nodes)
			manual sections
				1	user commands
				2	system calls
				3	library calls
				4	special files
				5	file formats
				6	games
				7	miscellaneous
				8	administrative commands
		/usr/share/doc
		redhat documentation
                http://en.wikipedia.org/wiki/List_of_Unix_programs
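The help levels above in practice (a small sketch; `whatis` needs the man index built, so it may print nothing on a fresh box):

```shell
#!/bin/sh
# whatis: one-line summary from the man index (may require `mandb` run first)
whatis ls 2>/dev/null

# --help: quick usage summary from the command itself
ls --help 2>&1 | head -n 3

# man with an explicit section: 5 = file formats (see the table above)
man 5 passwd 2>/dev/null | head -n 5
```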

		
[ ] UNIT 4 - BROWSING THE FILESYSTEM
	file system hierarchy standard - http://proton.pathname.com/fhs
	
	home directories:		/root, /home/<username>
	user executables:		(essential user binaries)	/bin,	(non-essential binaries such as graphical environments, office tools) 	/usr/bin, 	(software compiled from source) 	/usr/local/bin
	system executables:		(essential system binaries)	/sbin,	(non-essential system binaries)		/usr/sbin, 	(software compiled from source)		/usr/local/sbin
	other mountpoints:		/media, /mnt
	configuration:			/etc
	temporary files:		/tmp
	kernels and bootloader:	/boot
	server data:			/var, /srv
	system information:		/proc, /sys
	shared libraries:		/lib, /usr/lib, /usr/local/lib

		
[ ] UNIT 5 - USERS, GROUPS, PERMISSIONS

	who		operator	permissions
	u		+			r
	g		-			w
	o		=			x
	a					s "set user id bit or group"
						t "sticky bit (for directories)"
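	The who/operator/permission fields above combine directly on the chmod command line, for example:

```shell
#!/bin/sh
# demo: symbolic chmod built from the who/operator/permission table above
f=$(mktemp)
chmod 644 "$f"            # start from rw-r--r--
chmod u+x,g-w,o=r "$f"    # user +execute, group -write, other =read
ls -l "$f"                # now rwxr--r--
rm -f "$f"
```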

	chattr +i		<-- add immutable property, only on ext2/3 filesystems 
	chattr -i		<-- remove immutable property
	lsattr			<-- list immutable property
						
	newgrp
		- the primary group can be temporarily changed with this command; it creates a new session, and to return to the original group just type EXIT
		- if you are not a member of the group you will be prompted for a password (see "/etc/group"); members are not prompted (membership added with "gpasswd -a oracle karao")
		- if the group has no password and you try to NEWGRP into it, you will be denied
		- if the user is added to the group, you'll see a new group when you do an "id <username>"; it can be removed with "usermod -G <group list>" or "gpasswd -d <user> <group>"
		- if a user is granted ADMINISTRATOR (-A) privilege then you'll see a new entry on the "/etc/gshadow" --> karao:0jEuOBLJ51YK2:oracle  but this user is not seen on the "/etc/group" unless you also add him on the group
		- if a user is granted (-M) privilege then you'll see a new entry on the "/etc/gshadow" --> karao:0jEuOBLJ51YK2:oracle:oracle also you see a new entry on the "/etc/group"
		- a user's private (primary) group should not be shared with other users: when the user is deleted, the group is deleted along with it
		
	gpasswd
		- there is no direct way to revoke the (-A) privilege from a user; you can only reassign the administrator list (e.g. back to root) with another -A call
		
	file
		r	- you can copy
		w	- you can't copy and edit if only this
		x	- you can't copy if only this
	
	directory
		r	- you can copy if only this
		w	- you can't copy if only this
		x	- you can't read and copy if only this

			
[ ] UNIT 6 - USING THE BASH SHELL

	$(hostname)
	file{1,2,3}
	mkdir -p folder/{inbox,outbox}/{trash,save}
	
	!1003
	
	#!/bin/bash			<-- shebang, this tells the OS what interpreter to use in order to execute the script
	
	to see the available shells, look in "/etc/shells"
	to change your default shell, look for the command "chsh"

		
[ ] UNIT 7 - STANDARD I/O AND PIPES

	linux provides three I/O channels to programs:
		STDIN	- keyboard by default 									(file descriptor # 0)
		STDOUT	- terminal window by default - 1st output data stream	(file descriptor # 1)
		STDERR	- terminal window by default - 2nd output data stream	(file descriptor # 2)
		
	redirecting output to a file
		>	redirect STDOUT to a file
		2>	redirect STDERR to a file
		&>	redirect all output to a file
		
	common redirection operators
		command > file
		command >> file
		command < file		- send FILE as an input to COMMAND
		command 2> file
		command 2>> file
				
		[oracle@centos5-11g ~]$ ls -ltr karlarao.txt install2008-05-11_15-41-12.log &>> error.txt		<-- NOT ALLOWED in this bash 3.x shell (bash 4+ accepts &>>)
		-bash: syntax error near unexpected token `>'
	
		/dev/null	<-- is a black hole for data, so that you dont waste storage for the STDERR output file
	
		sample:
			redirecting to two files
				find /etc/ -iname passwd > find.out 2> /dev/null
				
			redirecting all to a file
				find /etc -iname passwd &> find.all
			
			piping to less (send all output to a pipe)
				find /etc -iname passwd 2>&1 | less
				
			subshell - to print output of two commands
				(cal 2007; cal 2008) | less
		
	piping:
		ls -C | tr 'a-z' 'A-Z'		<-- translate or delete characters
		
	redirecting to multiple targets (tee)
		useful for saving output at various stages in a long sequence of pipes; this will actually create the *.out files:
			ls -l /etc | tee stage1.out | sort | tee stage2.out | uniq -c | tee stage3.out | sort -r | tee stage4.out | less	
	
	sending multiple lines to STDIN (mail) - the heredoc terminates when the delimiter word (here, END) appears on a line by itself
		[oracle@centos5-11g ~]$ mail -s "please call" karlarao@gmail.com << END
		> helo
		> that's it!
		> END

	SCRIPTING: 
	
	(for loops)
	
		for NAME in JOE JANE JULIE
		do 
			ADDRESS="$NAME@gmail.com"
			MESSAGE='Projects are due today!'
			echo $MESSAGE | mail -s Reminder $ADDRESS
		done

		
		-- other for-loop list sources (the alive.sh below pings IP addresses using a sequence):
		for USER in $(grep bash /etc/passwd)
		for FILE in *txt
		for NUM in $(seq 1 10)
		for NUM in $(seq 1 2 10)		<-- increments of 2
		for LETTER in {a..z}			<-- seq is numeric only; use brace expansion for letters
		
		
			#!/bin/bash
			# alive.sh
			# pings machines
			
			for i in $(seq 1 20)
			do
			        host=172.16.126.$i
			        ping -c1 $host &> /dev/null
			        if [ $? = 0 ]; then
			                echo "$host is up!"
			        else
			                echo "$host is down!"
			        fi
			done
		
		COULD ALSO BE
			
			#!/bin/bash
			# alive.sh
			# pings machines
			
			for i in {1..20}; do
			        host=172.16.126.$i
			        ping -c1 $host &> /dev/null
			        if [ $? = 0 ]; then
			                echo "$host is up!"
			        else
			                echo "$host is down!"
			        fi
			done

			
[ ] UNIT 8 - TEXT PROCESSING TOOLS

	CUT
		/sbin/ifconfig | grep 'inet addr' | cut -d : -f2 | cut -d ' ' -s -f1
		
	SORT
		cut -d : -f 3,1 /etc/passwd | sort -t : -k 2 -n		<-- t (delimiter), k (field of sort), n (numerical sort)
		
	UNIQ
		cut -d : -f7 /etc/passwd | sort | uniq
		
	DIFF (to do side-by-side mode, -y)
		[oracle@centos5-11g ~]$ diff -y lao.txt tzu.txt
	     The Way that can be told of is not the eternal Way;      |     The Nameless is the origin of Heaven and Earth;
	     The name that can be named is not the eternal name.      |      The named is the mother of all things.
	     The Nameless is the origin of Heaven and Earth;          |
	     The Named is the mother of all things.                   <
	     Therefore let there always be non-being,                        Therefore let there always be non-being,
	       so we may see their subtlety,                                   so we may see their subtlety,
	     And let there always be being,                                  And let there always be being,
	       so we may see their outcome.                                    so we may see their outcome.
	     The two are the same,                                           The two are the same,
	     But after they are produced,                                    But after they are produced,
	       they have different names.                                      they have different names.
	                                                              >      They both may be called deep and profound.
	                                                              >      Deeper and more profound,
	                                                              >      The door of all subtleties!
		
	PATCH (make the 1st file the same as 2nd file, propagating the changes)
		step 1)	$ diff -u lao.txt tzu.txt > patch_lao.txt			<-- unified format; shows changes as + and - lines
		step 2)	$ patch -b lao.txt patch_lao.txt
		step 3) $ diff -y lao.txt tzu.txt
		
		to reverse the effect, use the -R switch, or restore the .orig file
			$ patch -R lao.txt patch_lao.txt
		
		to restore the file's SELinux context (so the patched file remains usable)
			$ restorecon /etc/issue
			
	ASPELL
		interactive:
			$ aspell check letter.txt
		non-interactive:
			$ aspell list < letter.txt		<-- on STDIN
			
	LOOK (quick lookup of words)
		look <word>
		
	SED
		$ sed 's/The/Is/gi' lao.txt					<-- search globally, case insensitive
		$ sed '1,2s/The/Is/g' lao.txt				<-- lines 1 to 2
		$ sed '/digby/,/duncan/s/dog/cat/g' pets	<-- start on digby and continuing on duncan
		$ sed -e 's/dog/cat/' -e 's/hi/lo/' pets	<-- multiple edits with -e
		$ sed -f myedits pets						<-- for large edits, place them in a file then reference
	
	REGULAR EXPRESSIONS
		^		beginning of the line
		$		end of line
		[xyz]	character that is x,y,z
		[^xyz]	character that is not x,y,z
		
	grep -l root /etc/* 2> /dev/null				<-- look for files that contain the word "root"

		
[ ] UNIT 9 - VIM: AN ADVANCED TEXT EDITOR
	three modes:
		command mode
		insert mode
		ex mode
		
	A	append to end of line
	a	insert data after cursor
	I	insert at beginning of line
	i	insert data before cursor
	o	insert a new line (below)
	O	insert a new line (above)
	
	5, Right Arrow		move right five characters
	w,b					move by word
	),(					move by sentence
	},{					move by paragraph
	10G					jump to line 10
	G					jump to the last line of the file
	
	/,n,N							search
	:%s/\/dev\/hda/\/dev\/sda/g		search/replace
	
						change		delete		yank
						(replace)	(cut)		(copy)
	line				cc			dd			yy
	letter				cl			dl			yl
	word				cw			dw			yw
	sentence ahead		c)			d)			y)
	sentence behind		c(			d(			y(
	paragraph above		c{			d{			y{
	paragraph below		c}			d}			y}
		
	p		paste
	u		undo
	U		undo current line
	CTRL-r	redo
	
	visual mode:
		v			character oriented visual mode
		V			line oriented visual mode
		CTRL-v		block oriented visual mode
	
	multiple windows (must have -o switch):
		vi -o lao.txt tzu.txt
		
		CTRL-w, s		split horizontal
		CTRL-w, v		split vertical
		CTRL-w, arrow	move to another window
	
	configuring vi and vim
		on the fly
			:set or :set all
		permanently
			~/.vimrc (primary) or ~/.exrc (for older)
			
			[oracle@centos5-11g ~]$ cat .vimrc 
			:set nu
			:set wrapmargin=10
			
		:help option-list
		
	learn more
		:help
		vimtutor
		
	visudo		<-- opens the /etc/sudoers in vim
	vipw		<-- edits the password file with necessary locks

		
[ ] UNIT 10 - BASIC SYSTEM CONFIGURATION TOOLS
	important network settings:
		ip configuration
		device activation
		dns configuration
		default gateway
	
	less /usr/share/doc/initscripts-8.45.14.EL/sysconfig.txt		<-- complete list of options of configuration
	
	ifup
	ifdown
	ifconfig
	
	network configuration files:
		ETHERNET DEVICES
			/etc/sysconfig/network-scripts/ifcfg-eth0
			
			configuration options:
				DEVICE=eth0				<-- for DHCP config
				HWADDR=<mac address>	<-- for DHCP config
				BOOTPROTO=none|dhcp		<-- for DHCP config
				IPADDR
				NETMASK
				GATEWAY
				ONBOOT=yes				<-- for DHCP config
				USERCTL=no
				TYPE=Ethernet|Wireless	<-- for DHCP config
		
		GLOBAL NETWORK SETTINGS (rather than per-interface basis)
			/etc/sysconfig/network		<-- many may be provided by DHCP, GATEWAY can be overridden in ifcfg file
			
				NETWORKING=yes
				GATEWAY=<ip add>		<-- this can also be set in ifcfg file, if the gateway is defined here & in ifcfg, the gateway defined in the most recently activated ifcfg file will be used
				HOSTNAME=<hostname>
			
		DNS CONFIGURATION (DNS translates hostnames to network addresses)
			/etc/resolv.conf			<-- local DNS configuration
				
				search example.com cracker.org		<-- specify domains that should be tried when an incomplete DNS name is given to a command
				nameserver 192.168.0.254			<-- ip add of the DNS server, pick the fastest
				nameserver 192.168.1.254
				
	PRINTING IN LINUX:
		configuration tools:
			system-config-printer
			web based: http://localhost:631
			lpadmin
		
		configuration files:
			/etc/cups/cupsd.conf
			/etc/cups/printers.conf
			
		cups-lpd			<-- available for backward compatibility with older LPRng client systems
		
		setup printer:
			1) new printer
			2) serial port1
			3) generic
			4) postscript printer
			
		supported printer connections:
			local (parallel, serial or usb)
			unix/linux print server
			windows print server
			netware print server
			hp jetdirect
			
		printing commands:
			lpr (accepts ASCII, postscript, pdf, others)
				$ lpr -P accounting -#5 report.ps			<-- prints five copies to the accounting printer; without -P it prints to the default printer
			lpq
				$ lpq -a									<-- shows all jobs, without -P will show jobs from default printer
			lprm <job number>
			
		system V printing commands:
			lp
			lpstat -a										<-- shows all configured printers
			cancel <job number>
			
		printing utilities:
			enscript, a2ps			<-- convert text to postscript
			evince					<-- pdf viewer
			ps2pdf					<-- postscript to pdf
			pdf2ps					<-- pdf to ps
			pdftotext				<-- pdf to plain text 
			mpage					<-- prints ascii or ps input with text reduced in size so it could appear on 1 paper
			
	DATE:
		date format [MMDDhhmm[[CC]YY][.ss]]
		
		date 080820002008.05
		date -s "08/08/2008 20:00:05"
		
	NTP:	
		stratum1 to stratum16
		local clock is stratum10
		
		stratum (1,2)	<-- two ntp servers (2,3)	<-- clients
		
		ntpq -np					<-- query
		
		if you don't want to sync against your local clock, comment out the following lines (both the server and fudge entries)
		# Undisciplined Local Clock. This is a fake driver intended for backup
		# and when no outside source of synchronized time is available.
		# server  127.127.1.0     # local clock
		# fudge   127.127.1.0 stratum 10
	
		/var/lib/ntp/drift			<-- drift file
		
		
		
	SCRIPTING:
	
	(positional parameters)
		$0		is the program
		$*		all command-line arguments
		$#		holds the number of command-line arguments
	
		sample:
			[oracle@centos5-11g ~]$ cat positionaltester.sh2 
			#!/bin/bash
			echo "the program name is $0"
			echo "the first argument is $1 and the second is $2" 
			echo "All command line parameters are $*"
			echo "all parameters are $#"
			
			[oracle@centos5-11g ~]$ ./positionaltester.sh2 red hat enterprise linux
			the program name is ./positionaltester.sh2
			the first argument is red and the second is hat
			All command line parameters are red hat enterprise linux
			all parameters are 4		
	
	(read)
		-p	prompt to display	
	
		sample:
			[oracle@centos5-11g ~]$ cat input.sh 
			#!/bin/bash
			read -p "Enter name (first last):" FIRST LAST
			echo "your first name is $FIRST and your last name is $LAST"
	
			[oracle@centos5-11g ~]$ ./input.sh 
			Enter name (first last):karl arao
			your first name is karl and your last name is arao
			

[ ] UNIT 11 - INVESTIGATING AND MANAGING PROCESSES
	uid, gid, selinux context determines filesystem access
	
	/proc/<pid>		<-- tracks every aspect of a process by its PID
	
	LISTING PROCESS:
			
		?				<-- daemon processes
		
		-a				<-- processes on all terminals
		-x				<-- includes processes not attached to terminals
		-u				<-- process owner info
		-f				<-- process parentage
		-o 
		
		process states (do a "man ps" for the complete list):
			running
			sleeping
			uninterruptable sleep
			zombie

	FINDING PROCESS:
		ps axo comm,tty | grep ttyS0
		pgrep -U root			<-- user
		pgrep -G student		<-- group 
		pidof bash				<-- find process id of a program
		
	SIGNALS ("man 7 signal" for the complete list):
		signal 15, term (default)	terminate cleanly
		signal 9, kill				terminate immediately
		signal 1, hup				re-read configuration files
		
		sending signals to processes:
			by PID		kill [signal] pid
			by name		killall [signal] comm
			by pattern	pkill [-signal] pattern
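		The PID-based style above, exercised against a throwaway background job (the sleep job is just a stand-in target):

```shell
#!/bin/sh
# demo: send SIGTERM (15, the default) to a background process by PID
sleep 60 &
pid=$!
kill -15 "$pid"           # same as plain `kill $pid`
wait "$pid" 2>/dev/null   # reap it; exit status reflects the signal
kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```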
			
	SCHEDULE PRIORITY (nice)
		-20 to 19, default 0	<-- a lower value means higher CPU priority
		
		when starting a process (only root can set negative nice values; once an ordinary user raises a value, it cannot be lowered again)
			nice -n -15 vi ~oracle/lao.txt
		
		after starting
			renice 5 <pid>
			
	INTERACTIVE PROCESS MGT TOOLS:
		top
		gnome-system-monitor
		
	JOB CONTROL

		firefox &		<-- run process in the background
		CTRL-z			<-- temporarily halt a running program
		jobs			<-- list jobs
		bg <job#>		<-- resume in background, you can't stop it, you must fg it first then CTRL-z
		fg <job#>		<-- resume in foreground
		kill %<job#>	<-- kills the job
		
		sample:		
			[oracle@centos5-11g ~]$ jobs
			[1]-  Stopped                 find / -iname "*.conf" 2>/dev/null	<-- "-" marks the previous job
			[2]   Stopped                 find / -iname "oracle" 2>/dev/null
			[3]   Stopped                 find / -iname "root" 2>/dev/null
			[4]+  Stopped                 find / -iname "conf" 2>/dev/null		<-- "+" marks the current default job
		
	AT - one time jobs
		root can modify jobs for other users by getting a login shell (su - <username>)
		
					at				crontab
		create 		at <time>		crontab -e
		list		at -l			crontab -l
		details		at -c <job#>	n/a
		remove		at -d <job#>	crontab -r
		edit		n/a				crontab -e
	
	
	CRON - recurring jobs; the cron daemon checks for due jobs every minute
		root can modify jobs for any user with "crontab -u <username> -l|-e|-r"
		
		see "man 5 crontab" for details on time

# (Use to post in the top of your crontab)
# ------------- minute (0 - 59)
# | ----------- hour (0 - 23)
# | | --------- day of month (1 - 31)
# | | | ------- month (1 - 12)
# | | | | ----- day of week (0 - 6) (Sunday=0)
# | | | | |
# * * * * * command to be executed
		
	EXIT STATUS
		0		success
		1-255	fail
		$?		determine exit status
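		The $? variable above in action (grep on /dev/null is just a convenient "no match" case):

```shell
#!/bin/sh
# demo: the exit status of the previous command lands in $?
true
echo "true exited with $?"     # 0 = success
false
echo "false exited with $?"    # 1 = failure
grep -q no_such_string /dev/null
echo "grep exited with $?"     # 1 = no match
```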
		
	SCRIPTING:
	
	(conditional execution parameters, based on the exit status of the previous command)
	
		&&	--> AND THEN, the 2nd command will only run if the 1st exits successfully
		||	--> OR ELSE, the 2nd command will only run if the 1st fail
	
	
		sample 1:
			
			$ grep -q no_such_user /etc/passwd && echo 'user exists' || echo 'no such user'	<-- "-q" is quiet mode: no output, only an exit status (0 or 1)
		
			$ ping -c1 -W2 centos5-11g &> /dev/null \
			&& echo "station is up"					\
			|| { echo "station is unreachable"; exit 1; }
		
			for x in $(seq 1 10); do
			echo adding test$x
			(
				echo -ne "test$x\t"
				useradd test$x 2>&1 > /dev/null && mkpasswd test$x
			) >> /tmp/userlog
			done
			echo 'cat /tmp/userlog to see new passwords'
		
	(test, evaluates boolean statements, 0 true, 1 false)
		
		long form:
		test "$A" = "$B" && echo "Strings are equal"
		test "$A" -eq "$B" && echo "Integers are equal"
	
		short form:
		[ "$A" = "$B" ] && echo "Strings are equal"
		[ "$A" -eq "$B" ] && echo "Integers are equal"
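A quick demo of both forms (A, B, C, D are arbitrary demo variables):

```shell
A=5; B=5
test "$A" -eq "$B" && echo "Integers are equal"   # long form
[ "$A" -eq "$B" ] && echo "Integers are equal"    # short form; [ is the same command, it just requires a closing ]
C=abc; D=abc
[ "$C" = "$D" ] && echo "Strings are equal"
[ "$A" -lt 10 ] && echo "5 is less than 10"       # other integer operators: -ne -lt -le -gt -ge
```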
	
	(file tests, test existence of files)
		
		[ -f issue.patch ] && echo "regular file"
		
		some of the supported file tests are:
			-d <file>	true if the file is a directory
			-e			true if the file exists
			-f			true if the file exists and is a regular file
			-h			true if the file is a symbolic link
			-L			true if the file is a symbolic link
			-r			true if the file exists and is readable by you
			-s			true if the file exists and is not empty
			-w			true if the file exists and is writable by you
			-x			true if the file exists and is executable by you
			-O			true if the file is effectively owned by you
			-G			true if the file is effectively owned by your group
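A sketch exercising a few of these tests on throwaway files (paths come from mktemp, so nothing real is touched):

```shell
DIR=$(mktemp -d)                  # throwaway directory for the demo
touch "$DIR/empty"                # zero-length regular file
echo data > "$DIR/full"           # non-empty regular file
ln -s "$DIR/full" "$DIR/link"     # symbolic link

[ -f "$DIR/empty" ] && echo "empty is a regular file"
[ -s "$DIR/empty" ] || echo "empty has zero size"
[ -s "$DIR/full" ]  && echo "full is not empty"
[ -h "$DIR/link" ]  && echo "link is a symlink"
[ -d "$DIR" ]       && echo "DIR is a directory"

rm -rf "$DIR"
```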
		
	(if then else)
		
		# pings my station
		if ping -c1 -W2 centos5-11g &> /dev/null; then
		        echo "station is up"
		elif grep "centos5-11g" ~/maintenance.txt &> /dev/null; then
		        echo "station is undergoing maintenance"
		else echo "station is unexpectedly down"
		        exit 1
		fi

		
		# test ping command
		if test -x /bin/ping6; then
			ping6 -c1 ::1 &> /dev/null && echo "ipv6 stack is up"
		elif test -x /bin/ping; then
			ping -c1 127.0.0.1 &> /dev/null && echo "no ipv6, ipv4 stack is up"
		else 
			echo "oops! this should not happen"
			exit 255
		fi
		
		
		# test if target is up or down, with positional parameters
		#!/bin/bash
		TARGET=$1
		
		ping -c1 -w2 $TARGET &> /dev/null
		RESULT=$?
		
		if [ $RESULT -ne 0 ]
		then
		        echo "$TARGET is down"
		else
		        echo "$TARGET is up"
		fi
		exit $RESULT


		# use reach.sh on AT to ping a station
		at now + 5 minutes
		for x in $(seq 1 40); do
		reach.sh station$x
		done
		CTRL-d
		
		
		# output the head os ps with formatting descending
		ps axo pid,comm,pcpu --sort=-pcpu | head -n2

		# good for finding processes order by CPU PERCENT, RSS (physical memory), CPU TIME (time)
		ps axo pid,comm,pcpu,size,rss,vsz,cputime,stat --sort=-pcpu | head -n10
		
		
[ ] UNIT 12 - CONFIGURING THE BASH SHELL
	
	2 types of variables
		local variables
		environment variables
	
	set | less		<-- all variables
	env | less		<-- environment variables
	echo $HOME		<-- single value
	
	alias rm="rm -i"	<-- make rm interactive by default
	\rm -r Junk			<-- (backslash) bypasses the alias and runs the real "rm" command
	
	PREVENTING EXPANSION:
		echo your cost: \$5.00			<-- (backslash) makes next character literal
		
		'								<-- (single quote) inhibit all expansion
		"								<-- (double quote) inhibit all except: 
																		$ (dollar) variable expansion
																		` (backquotes) command substitution
																		\ (backslash) single char inhibition
																		! (ex point) history substitution
		
			[oracle@centos5-11g ~]$ find . -iname pos\*			<-- or you could do "find . -iname 'pos*'"
			./positionaltester.sh2
			./positionaltester.sh
			./pos
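The three quoting behaviors side by side (COST is a demo variable):

```shell
COST=5.00
echo "your cost: \$COST"      # backslash: the $ is literal, prints: your cost: $COST
echo 'your cost: $COST'       # single quotes inhibit everything, prints: your cost: $COST
echo "your cost: $COST"       # double quotes allow $ expansion, prints: your cost: 5.00
echo "uname says: $(uname)"   # command substitution also survives double quotes
```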

	LOGIN vs NON-LOGIN SHELLS - (where startup scripts are configured)
	
		login shells
			any shell created at login (includes x login)
			su -
		non login shells
			su
			graphical terminals
			executed scripts
			any other bash instances
			
			
		global files
			/etc/profile
			/etc/profile.d
			/etc/bashrc
		user files
			~/.bash_profile
			~/.bashrc
			~/.bash_logout		<-- when a login shell exits, for auto backups and cleanup temp files
				
			
		login shells (order)
			1) "/etc/profile" ---which calls---> "/etc/profile.d"
			2) ~/.bash_profile ---calls---> ~/.bashrc ---calls---> /etc/bashrc
		
		non login shells (order)
			1) ~/.bashrc ---calls---> /etc/bashrc ---calls---> /etc/profile.d (called by bashrc only for non login shells)
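One way to check which kind of shell you are in (bash-specific; shopt is a bash builtin):

```shell
# bash turns on the login_shell option only for login shells;
# a script (like this one) always runs in a non-login shell
if shopt -q login_shell; then
    echo "this is a login shell"
else
    echo "this is a non-login shell"
fi
```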
	
			
	SCRIPTING

		ls -laptr
	
		#!/bin/bash
		# script for backing up any directory
		# 1st: the directory to be backed up
		# 2nd: the location to backup to
		
		ORIG=$1
		BACK=~/backups/$(basename $ORIG)-$(date +%Y%m%d%H%M)					<-- used the "basename" command to get the SYSCONFIG word
		
		if [ -e $BACK ]
		then
		        echo "warning: $BACK exists"
		        read -p "Press CTRL-c to exit or ENTER to continue"
		fi
		
		cp -av $ORIG $BACK
		echo "backup of $ORIG to $BACK finished at: $(date +%Y%m%d%H%M)"

		
[ ] UNIT 13 - FINDING AND PROCESSING FILES
	
	locate -i <filename>		<-- case insensitive search
	updatedb					<-- must be run as root; the database is otherwise updated daily by cron
	
	find
		-ok																	<-- will prompt before executing command
			sample:
				find /u01/app/oracle/oradata/ -size 10M -ok gzip {} \;		<-- will prompt to gzip for each file found
		-exec
			sample:
				find /u01/app/oracle/oradata/ -size 10M -exec gzip {} \;	<-- will not prompt, and will gzip each file found
		
		-user			<-- search for files owned by user & group
		-group	

	find, logical operators (-o = OR, -not = NOT; multiple criteria are ANDed by default)
		-o		OR
		-not	NOT
		
			sample:
				find -user joe -not -group joe
				find -user joe -o -user jane
				find . -not \( -user oracle -o -user ken \) -exec ls -l {} \;

	find, permissions
		-uid	UID of user
		-gid
		-perm	permission
					find -perm 755	matches if mode is exactly 755
					find -perm +222	matches if anyone (user, group, or other) can write
					find -perm -222	matches if everyone can write
					find -perm -002	matches if other can write
		
	find, numerical criteria
		-size
			find -size 1M
			find -size +1M
			find -size -1M
		-links		number of links to the file
	
	find, access time
		# DAYS
		-atime			when file was last read
		-mtime			when file data last changed
		-ctime			when file data or metadata last changed
		
			samples:
				find -ctime	10		<-- exact 10 days
				find -ctime -10		<-- within 10 days
				find -ctime +10		<-- more than 10 days
				
		
		# MINUTES
		-amin
		-mmin
		-cmin
		
		# MATCH ACCESS TIMES RELATIVE TO THE TIMESTAMP OF OTHER FILES
		-anewer
		-newer			find -newer recent_file.txt
		-not -newer		find -not -newer recent_file.txt
		-cnewer
		
	find, execution
		find -name "*conf" -exec cp {} {}.orig \;			<-- backup config files, adding a .orig extension
		find /tmp -ctime +3 -user joe -ok rm {} \;			<-- prompt to remove joe's tmp files that are over 3 days old
		find ~ -perm -022 -exec chmod o-w {} \;				<-- fix other-writable files in your home directory
		find /var -user root -group mail 2> /dev/null -ls	<-- ls -l style listing "-ls"
		find -type l -ls									<-- list symbolic links "ls style"
		find -type f -ls									<-- list regular files
		find /bin /usr/bin -perm -4000						<-- list all files under /bin /usr/bin that have SetUID bit set
		find /bin /usr/bin -perm -u+s						<-- list all files under /bin /usr/bin that have SetUID bit set
		
		
[ ] UNIT 14 - NETWORK CLIENTS

	firefox
		engine plugins		mycroft.mozdev.org
		plugins				plugindoc.mozdev.org
	
	non-gui web browser
		links http://www.redhat.com
		links -dump http://www.redhat.com		<-- dumps the rendered text of the page to STDOUT
		links -source http://www.redhat.com		<-- dumps all the html source
		
	wget (retrieve a single file via HTTP or FTP, also mirror a website)
		wget <link or html file>
		wget --recursive --level=1 --convert-links http://www.site.com		<-- mirror a site
		
	email and messaging
	
		email protocol
			pickup
				imap/pop (most popular are imaps & pop3s which encrypts data over the wire)
			delivery
				smtp, esmtp
	
		evolution
			- supports gpg (gnu privacy guard)
		thunderbird
		mutt
			mutt -f imaps://user@server			<-- specify the mailbox you wish to start in
			c 									<-- to change mailbox
		gaim
			http://gaim.sourceforge.net/plugins.php
				
	OpenSSH: secure remote shell
		ssh
		
		scp					<-- secure replacement for rcp
				[user@]host:/<path to file>
				
				-r	recursion 
				-p 	preserve times and permissions
				-C 	to compress datastream
			
		sftp				<-- similar to ftp, remote host's sshd needs to have support for sftp in order to work
		
		rsync (uses remote update protocol)
				-e				specify the rsh-compatible program to connect with (usually ssh)
				-a				recursive, preserve
				-r				recursive, not preserve
				--partial		continues partially downloaded files
				--progress		prints progress bar
				-P				same as --partial --progress
				
				http://everythinglinux.org/rsync/
				
					sample:
					
						rsync --verbose  --progress --stats --compress --rsh=/usr/bin/ssh --recursive --times --perms --links --delete *txt oracle@192.168.203.11:/u01/app/oracle/rsync
						OR
						rsync -e ssh *txt oracle@192.168.203.11:/u01/app/oracle/rsync/
					
					to setup rsync server:
						# make the file /etc/rsyncd.conf
						motd file = /etc/rsyncd.motd
						log file = /var/log/rsyncd.log
						pid file = /var/run/rsyncd.pid
						lock file = /var/run/rsync.lock
						
						[karlarao]
						   path = /u01/app/oracle/rsync
						   comment = test_rsync_server
						   read only = no
						   list = yes
						   auth users = oracle
						   secrets file = /etc/rsyncd.scrt
	
						 # then the /etc/rsyncd.scrt
						 oracle:oracle
						 
						 # to use it
						 rsync --verbose  --progress --stats --compress --rsh=/usr/bin/ssh --recursive --times --perms --links --delete *txt 192.168.203.11:/u01/app/oracle/rsync
					 
		PASSWORDLESS AUTHENTICATION: KEY-BASED AUTHENTICATION
			
			ssh-keygen -t rsa		<-- creates rsa public private keys
			ssh-keygen -t dsa		<-- creates dsa public private keys
			
			ssh-add -l				<-- query list of stored keys
			ssh-copy-id				<-- copy public key to destination system, on older systems you may not have this.. have to manually create authorized_keys
				
			ssh-agent $SHELL		<-- agent authenticates on behalf of user
			ssh-add					<-- add the keys, will ask passphrase		
					
				step by step:
					1) ssh-keygen -t rsa										<-- generate private public keys
					   ssh-keygen -t dsa
					2) ssh-copy-id -i id_rsa.pub oracle@192.168.203.26			<-- copy public key to remote host
					   ssh-copy-id -i id_dsa.pub oracle@192.168.203.26
					3) 					
						[oracle@centos5-11g .ssh]$ ssh-agent $SHELL				<-- load identities
						[oracle@centos5-11g .ssh]$ ssh-add
						Enter passphrase for /home/oracle/.ssh/id_rsa:
						Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
						Enter passphrase for /home/oracle/.ssh/id_dsa:
						Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
					4) ssh 192.168.203.26 date									<-- test
		
	FTP CLIENTS
		lftp
		gFTP
		
	Xorg Clients (XTERM - X11 forwarding)
		ssh -X <user>@<host>
		xterm &
		
	network diagnostic tools
		ping
		traceroute
		host
		dig
		netstat
		gnome-nettool (GUI)
		
	smbclient (FTP-like client to access SMB/CIFS resources)
		smbclient -L server1									<-- list shares on server1
		smbclient -U student //server1/homes					<-- access a share
			
			-W	workgroup or domain
			-U	username
			-N	suppress password prompt (otherwise you will be asked for a password)
	
	nautilus file transfer
	

[ ] UNIT 15 - ADVANCED TOPICS IN USERS, GROUPS, AND PERMISSIONS
	/etc/passwd
	/etc/shadow
	/etc/group
	/etc/gshadow
	
	user management tools
		system-config-users
		
	command line
		useradd
		usermod
		userdel [-r]
	
	1 - 499				<-- system users and groups
	
	MONITORING LOGINS
		last | less		<-- shows login, logout, reboot history
		lastb			<-- shows bad logins
		w 				<-- show who is logged on and what they are doing, shows load average, cpu info, etc.
		echo $$			<-- show your current process ID
		
	DEFAULT PERMISSIONS
		666				<-- base permissions for new files (bits set in the umask are removed from this)
		777				<-- base permissions for new directories
		
		002				<-- default umask for ordinary users
		022				<-- default umask for root
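New-file permissions are the base permissions with the umask bits masked out; a quick way to see it (uses a throwaway mktemp directory):

```shell
DIR=$(mktemp -d)
(
  cd "$DIR"
  umask 022          # clear the group/other write bits on everything created
  touch newfile      # 666 masked by 022 -> 644 (rw-r--r--)
  mkdir newdir       # 777 masked by 022 -> 755 (rwxr-xr-x)
  stat -c '%a %n' newfile newdir
)
rm -rf "$DIR"
```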
		
	SPECIAL PERMISSIONS FOR EXECUTABLES (executable regular files, also 4-2-1)	SUID, SGID
		(4)suid
				-rwSr--r--  1 oracle oinstall      0 Aug 11 12:40 test2.txt		<-- "S" on the owner if the execute bit is not set
				-rwsr--r--  1 oracle oinstall      0 Aug 11 12:40 test2.txt     <-- "s" on the owner if the execute bit is set
		(2)sgid
				-rw-r-Sr--  1 oracle oinstall      0 Aug 11 12:39 test.txt		<-- "S" on the group if the execute bit is not set
				-rw-rwsr--  1 oracle oinstall      0 Aug 11 12:39 test.txt		<-- "s" on the group if the execute bit is set
		   stickybit
		   		-rwx-----T  1 oracle oinstall      0 Aug 11 12:41 test3.txt		<-- "T" on others if the execute bit is not set
		   		-rwxr--r-t  1 oracle oinstall      0 Aug 11 12:41 test3.txt     <-- "t" on others if the execute bit is set
		
	SPECIAL PERMISSIONS FOR DIRECTORIES		STICKY BIT, SGID
		   suid
				drwsr-xr-x  2 oracle oinstall   4096 Aug 11 12:44 test3			<-- "s" on the owner
		   sgid
				drwxr-sr-x  2 oracle oinstall   4096 Aug 11 12:37 test			<-- "s" on the group
		(1)stickybit
				drwxr-xr-t  2 oracle oinstall   4096 Aug 11 12:38 test2			<-- "t" on the others
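The special bits above can be set numerically by prefixing the mode with 4, 2, or 1; a demo on throwaway files (the file names are arbitrary):

```shell
DIR=$(mktemp -d)
touch "$DIR/prog"

chmod 4755 "$DIR/prog"        # 4 = setuid
stat -c '%A' "$DIR/prog"      # prints: -rwsr-xr-x  ("s" in the owner slot)

chmod 2755 "$DIR/prog"        # 2 = setgid (a numeric mode replaces the old bits)
stat -c '%A' "$DIR/prog"      # prints: -rwxr-sr-x  ("s" in the group slot)

chmod 1777 "$DIR"             # 1 = sticky bit, as on /tmp
stat -c '%A' "$DIR"           # prints: drwxrwxrwt  ("t" in the others slot)

rm -rf "$DIR"
```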

	FOR COLLABORATION (SECURED):				
		as root user, make a group and directory.. then grant
		chmod 3770 <directory>
			
		drwxrws--T 2 oracle collaboration  4096 Aug 16 22:45 collaboration			<-- viewable only by collaboration members; each member can delete only their own
																							files (except root & oracle, the owner of the folder).. also umask should be 022
																							so that files created in the collaboration folder are read-only to other members
	SCRIPTING:
	
		#!/bin/bash
		# create all users defined in userlist file
		# just add -x if you have problems
		
		for NAME in $(cat ~/bin/userlist)
		do
		        /usr/sbin/useradd $NAME
		        PASSWORD=$(openssl rand -base64 10)
		        echo $PASSWORD | passwd --stdin $NAME
		        echo "username: $NAME, password: $PASSWORD" | mail -s "Account Info" root@localhost
		done


[ ] UNIT 16 - LINUX FILESYSTEM IN-DEPTH
	ext2 and msdos		<-- typically used for floppies, ext2 (since 1993)
	ext3 				<-- features such as extended attributes & posix access control lists (ACLs)
	GFS & GFS2			<-- for SANs
	
	disk partition
	filesystem
	inode table 													<-- for ext2 and ext3 filesystems
	inode (index node) which is reference by its inode number		<-- contains metadata about files such as (unique within the filesystem):
																			- file type, permissions, uid, gid
																			- the link count (count of path names pointing to this file)
																			- file size and various time stamps
																			- pointers to the file's data blocks on disk
																			- other data about the file
	
	computers		reference for a file is inode number
	humans			reference for a file is by file name																	
	directory		is mapping between file names and inode numbers
							when a filename is referenced by a command, 
							linux references the directory in which the file resides,
							determines the inode number associated with the filename, 
							looks up the inode information in the inode table
							if user has permission..returns the contents of the file
	
	cp		<-- creates new inode
	mv		<-- untouched when on the same filesystem
	rm		<-- makes the inode free, but the data untouched..would be overwritten once reused
	
	hard links (ln)
		- only on the same filesystem
		- not allowed on directories
		- who created it will be the UID/GID
		
			88724 -rwxr-xr-x 2 root root  244 Aug 11 13:12 create_users.sh		<-- the same inode, and file count is 2
			88724 -rwxr-xr-x 2 root root  244 Aug 11 13:12 create_HL
	
	soft links (ln -s)
		- can span filesystems
		- who created it will be the UID/GID
		- specify the fully qualified path
		- the 25 is the number of characters
		- filetype is "l"
		
			175729 lrwxrwxrwx 1 root root   25 Aug 11 14:08 create_link -> /root/bin/create_users.sh
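Both link types can be compared side by side in a throwaway directory (file names here are arbitrary):

```shell
DIR=$(mktemp -d)
echo data > "$DIR/orig"

ln "$DIR/orig" "$DIR/hard"     # hard link: a second name for the same inode
ln -s "$DIR/orig" "$DIR/soft"  # soft link: a new inode whose data is the target path

stat -c '%i' "$DIR/orig"       # same inode number...
stat -c '%i' "$DIR/hard"       # ...for both names
stat -c '%h' "$DIR/orig"       # prints: 2  (the link count went up)
readlink "$DIR/soft"           # prints the stored target path

rm -rf "$DIR"
```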
	
	SEVEN FUNDAMENTAL FILETYPES
		-	regular file
		d	directory
		l	symbolic link
		b	block special file		<-- used to communicate with hardware a block of data at a time 512bytes,1024bytes,2048bytes
		c	character special file	<-- used to communicate with hardware one character at a time
		p	named pipe				<-- file that passes data between processes
		s	socket					<-- stylized mechanism for inter process communication
		
		
	df
	du
	baobab (GUI)
	
	removable media
		mount /dev/fd0 /mnt/floppy		<-- floppy
		mount /dev/cdrom /mnt/cdrom		<-- cdrom
		mtools
		
	cds and dvds
	
	usb media (detected by kernel as scsi devices)
		/dev/sdax or /dev/sdbx
		/media/disk
		
	floppy disks
	
	ARCHIVING FILES AND COMPRESSING ARCHIVES
		tar			<-- (tape archive) natively supports compression using gzip-gunzip, bzip2-bunzip2 (newer)
		
		-c	create
		-t	list
		-x	extract
		-f	<archivename>	name of the tar file
		
		-z	gzip		tar.gz
		-j	bzip2		tar.bz2
		-v	verbose
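The flags above combine into the usual create/list/extract cycle; a self-contained demo on a throwaway directory ("project" and the file names are arbitrary):

```shell
DIR=$(mktemp -d)
mkdir "$DIR/project"
echo hello > "$DIR/project/notes.txt"

tar -czf "$DIR/project.tar.gz" -C "$DIR" project   # -c create, -z gzip, -f archive name
tar -tzf "$DIR/project.tar.gz"                     # -t list: project/ and project/notes.txt

mkdir "$DIR/restore"
tar -xzf "$DIR/project.tar.gz" -C "$DIR/restore"   # -x extract into another directory
cat "$DIR/restore/project/notes.txt"               # prints: hello

rm -rf "$DIR"
```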
		
	ARCHIVING: other tools
		zip, unzip			<-- compatible with pkzip archives
		file-roller
		
	SCRIPTING:
		
		#!/bin/bash
		# script for backing up any directory
		# 1st: the directory to be backed up
		# 2nd: the location to backup to
		
		ORIG=$1
		BACK=~/backups/$(basename $ORIG)-$(date +%Y%m%d%H%M).tar.bz2
		
		if [ -e $BACK ]
		then
		        echo "warning: $BACK exists"
		        read -p "Press CTRL-c to exit or ENTER to continue"
		fi
		
		tar -cjvpf $BACK $ORIG
		echo "backup of $ORIG to $BACK finished at: $(date +%Y%m%d%H%M)"


[ ] UNIT 17 - ESSENTIAL SYSTEM ADMINISTRATION TOOLS
	
	check hardware compatibility
		http://hardware.redhat.com/hwcert
	check release notes
	
	installer can be started from (boot.iso)
		cdrom 
		usb
		network (PXE)		<-- ethernet and bios must support this
	
	supported installation sources:
		network server (ftp, http, nfs)
		cdrom
		hard disk
	
	managing services	
		managed by:
			System V scripts
			init
			xinetd super server
			GUI, command line
				
		system-config-services (GUI)
		
		command line
			/sbin/service  		start,stop,status,restart,reload
			/sbin/chkconfig
	
	managing software
		rpm		
		name-version-release.architecture.rpm		<-- VERSION is open source version of the project, RELEASE refers to redhat internal patches to the open source code
		
		yum							<-- replacing UP2DATE
		/etc/yum.conf	
		/etc/yum.repos.d/
			yum	install 
			yum remove
			yum update
			yum list available
			yum list installed
		
		pup				<-- software updater
		pirut			<-- add/remove software
		
	securing the system
		system level network security:
			1) application level network security	(tcp_wrappers)
			2) kernel level network security		(iptables, SELinux)
			
		SELinux 
			- all processes & files have a context
			- implements MAC - mandatory access control (the unix default is DAC - discretionary access control, under which users may make their own files world-writable)
			- targeted "policy" by default (web, dns, dhcp, proxy, database, logging, etc.)
			- users may change the contexts of files that they own, but not alter or override the underlying SElinux policy
			
			to disable
				make it PERMISSIVE, which logs policy violations but does not actually prevent prohibited actions from taking place
			
			covered in these classes
				RH133
				RHS427
				RHS429
	
			packet filtering (system-config-securitylevel, simple interface to the kernel level firewall.. NETFILTER)
				TCP/IP transaction divided into packets
				packets contains a header (destination-source address,protocol specific info) and payload
					ip address
					port number
				TCP & UDP use distinct port spaces even though they share the same numbers
}}}
{{{
RH131

	
[ ] UNIT 1 - SYSTEM INITIALIZATION
	
	boot sequence overview:
		bios initialization
		boot loader
		kernel initialization
		"init" starts and enters desired run level by executing:
			/etc/rc.d/rc.sysinit
			/etc/rc.d/rc & /etc/rc.d/rc?.d/
			/etc/rc.d/rc.local
		X display manager if appropriate
	
	bootloader components
		bootloader
			1st stage	small, resides in the MBR or boot sector (the first 512 bytes of the disk)... IPL (initial program loader) for GRUB is just the 1st stage
							primary task is to locate the 2nd stage which does most of the work to boot the system
			2nd stage	loaded from boot partition
	
		two ways to configure boot loader
			primary boot loader
			secondary boot loader (first stage boot loader into the boot sector of some partition)
		
	GRUB and grub.conf (read at boot time)
		supported filesystems:
			ext2/ext3
			reiserfs
			jfs
			fat
			minix
			ffs
		/boot/grub/grub.conf			<-- changes takes effect immediately
		/sbin/grub-install /dev/sda		<-- if GRUB is corrupted, reinstall.. if this command fails do this
												1) type "grub"
												2) type "root (hd0,0)"
												3) type "setup (hd0)"
												4) type "quit"
		if GRUB can't find the grub.conf then it will default to GRUB command line
		info grub
	
	Kernel initialization
		kernel boot time functions:
			device detection
			device driver initialization			<-- device drivers compiled into the kernel are loaded when device is found
															else if essential (needed for boot) drivers have been compiled as modules then they must be included in the INITRD image
															which is temporarily mounted by the kernel on a RAM disk to make the modules available for the initialization process
			mounts root filesystem read only		<-- after essential drivers are loaded, mounts the root filesystem (/) read-only
			loads initial process (INIT)			<-- after loading, control is passed from the kernel to that process (INIT)
	
		less /var/log/dmesg				<-- all bootup messages taken just after control is passed to INIT
		dmesg
		
	INIT initialization
		init reads its config /ETC/INITTAB		<-- contains the information on how init should setup the system in every run level, also contains default runlevel
													if lost or corrupted, you'll not be able to boot to any standard run levels
			initial run level
			system initialization scripts
			run level specific script directories
			trap certain key sequences
			define UPS power fail/restore scripts
			spawn gettys on virtual consoles
			initialize X in run level 5
	
			run levels
				0 				halt (Do NOT set initdefault to this)
				1 				Single user mode
				2 				Multiuser, without NFS (The same as 3, if you do not have networking)
				3 				Full multiuser mode
				4 				unused
				5 				X11
				6 				reboot (Do NOT set initdefault to this)
				s,S,single		alternate single user mode
				emergency		bypass rc.sysinit, sulogin
				
				/sbin/runlevel
		
		/etc/rc.d/rc.sysinit
			important tasks include:
				activate udev & SELinux
				set kernel parameters /etc/sysctl.conf
				system clock
				loads keymaps
				swap partitions
				hostname
				root filesystem check and remount
				activate RAID and LVM devices
				enables disk quotas
				check & mount other filesystems
				cleans up stale locks & PID files
				
		/etc/rc.d/rc			<-- responsible for starting/stopping when runlevel changes, also initiates default runlevel as per /etc/inittab "initdefault"
		
		system V run levels
			/etc/rc.d/rcX.d		<-- each runlevel has a corresponding directory; symbolic links in run level directories call the init.d scripts with START (S) or STOP (K) argument
			/etc/rc.d/init.d	<-- System V init scripts
		
		/etc/rc.d/rc.local
			- common place for custom scripts
			- run every runlevel
	
	controlling services
		control services startup
			system-config-services
			ntsysv
			chkconfig
			
		control services manually
			service
			chkconfig	<-- (together with system-config-services) will start or stop an xinetd-managed service as soon as you configure it on or off
							standalone service will not start or stop until the system is rebooted or you use the service command
							
							
[ ] UNIT 2 - PACKAGE MANAGEMENT

	rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" | less

	rpm package manager
		/var/lib/rpm		<-- database is stored in here
	
	rpm installation and removal
		-i		install
		-U		upgrade		original package will be removed (except config files which will be saved ".rpmsave"), default config files from new version might have ".rpmnew"
							will act as -i if package is not yet installed
		-F		freshen		identical to upgrading, except the package will be ignored if not already installed
		-e		erase
	
	updating a KERNEL RPM (do not use rpm -U or rpm -F)
		- kernel modules are version specific, and an upgrade will remove all modules that your present kernel is using, leaving the system unable to dynamically load
			device drivers or other modules
		/etc/sysconfig/kernel		<-- alter kernel addition to GRUB
		
	rpm queries
		-qa					all installed packages
		-qf					
		-ql
		-qi
		
		-qpi
		-qpl
		
		-q --requires			package prerequisites
		-q --provides			capabilities provided by package
		-q --scripts			scripts run upon installation removal
		-q --changelog			package revision history
		-q --queryformat		format custom-formatted information

		rpm --querytags		for a list of query formats
	
		rpm -qa --queryformat '%{name}-%{version}-%{release}: [%{provides} ]\n' | grep postfix		<-- list capabilities
		rpm -q --provides postfix

	rpm verification
		installed package file verification:
			rpm -V <package name>					<-- verifies the installed package against the RPM database, has the file changed since the last install?
			rpm -Vp <package_file>.i386.rpm			<-- verifies the installed package against the package file
			rpm -Va									<-- verifies all installed RPMS against the database
			
		signature verification BEFORE package install:
			rpm --import <RPM-GPG-KEY-redhat-release>
			rpm -K <package_file>.i386.rpm					<-- check signature of RPM
			rpm -qa gpg-pubkey								<-- queries all the GPG keys imported
			rpm --checksig <rpm>							<-- check integrity of package files
			
		/etc/pki/rpm-gpg					<-- GPG key can also be found here 
			
	YUM (Yellowdog Updater, Modified)
			- replacement for UP2DATE
			- based on repositories that hold RPMs and repodata file list
			- can call upon several repositories for dependency resolution, fetch the RPMs, install needed packages
	
	yum installation and removal
		yum install
		yum remove
		yum update
	
	yum queries
		searching packages
			yum search <searchterm>
			yum list (all | available | extras | installed | recent | updates)
			yum info <package name>
	
		searching files
			yum whatprovides <filename>
	
	configuring additional repositories
		/etc/yum.repos.d/							<-- put the new repo file here, you could make use of $releasever and $basearch variables for repository declaration
		yum clean dbcache | all						<-- repository information is cached; "yum clean" clears it
	
		sample repo file (should be at /etc/yum.repos.d/server1.repo):
			[GLS]
			name=private repository
			baseurl=http://server1.example.com/pub/gls/RPMS
			enabled=1
			gpgcheck=1
			
			[centos511g]
			name=private karl arao
			baseurl=http://192.168.203.25/install/centos/CentOS								<-- this will look for the repodata folder inside it
			gpgcheck=1
			gpgkey=http://192.168.203.25/install/centos/RPM-GPG-KEY-CentOS-5				<-- this will prompt you to install the gpgkey
			
		sample repo file for DVD media installation
			[root@centos5-11g ~]# cat /etc/yum.repos.d/CentOS-Media.repo
			# CentOS-Media.repo
			#
			# This repo is used to mount the default locations for a CDROM / DVD on
			#  CentOS-5.  You can use this repo and yum to install items directly off the
			#  DVD ISO that we release.
			#
			# To use this repo, put in your DVD and use it with the other repos too:
			#  yum --enablerepo=c4-media [command]
			#
			# or for ONLY the media repo, do this:
			#
			#  yum --disablerepo=\* --enablerepo=c4-media [command]							<-- use this command for the installation..
			
			[c5-media]
			name=CentOS-$releasever - Media
			baseurl=file:///media/CentOS_5.0_Final/
			gpgcheck=1
			enabled=0
			gpgkey=file:///media/CentOS_5.0_Final/RPM-GPG-KEY-CentOS-5

	creating a private repository
		- create a directory to hold your packages
		- make this directory available by http/ftp
		- install the "createrepo" rpm
		- run "createrepo -v /<package-directory>"
		- this will create a "repodata" subdirectory and the needed support files
		- to support anaconda on the same server:
			cp /<package-directory>/repodata/comps*.xml /tmp
			createrepo -g /tmp/comps*.xml /<package-directory>
			
			createrepo -g comps.xml /path/to/rpms <--example of a repository with a groups file. Note that the groups file should be in the same directory as the rpm packages (i.e. /path/to/rpms/comps.xml)
			
		createrepo				<-- creates the support files necessary for a yum repository, support files will be put into the "repodata" subdirectory
										the addition and deletion of files within the repository requires createrepo to be run again
		
		files:
			repomd.xml			<-- contains timestamps & checksum values for the other 3 files
			primary.xml.gz		<-- contains list of all RPMS in the repository, as well as dependency info, used by "rpm -qpl"
			filelists.xml.gz	<-- contains list of all files in all the RPMs, used by "yum whatprovides"
			other.xml.gz		<-- contains additional info, including the change logs for the RPMs
			comps.xml			<-- (optional) contains info about package groups, allows group installation
			
	redhat network
		up2date
		
	redhat network server
		rhn proxy server
		rhn satellite server
		rhn accounts
	
	rhn entitlements
		software channels
			base channel
			child channels
		define level of service
			update
			management
			provisioning
			monitoring
			
	rhn client
		

[ ] UNIT 3 - KERNEL SERVICES
	
	the linux kernel (core part of linux OS)
		kernel duties:
			- system initialization
			- process scheduling
			- memory management
			- security
			- provides buffers & caches to speed up hardware access
			- implements standard network protocols & filesystem formats
	kernel images & variants
			/boot/vmlinuz-*
			architectures supported:
				x86
				x86_64
				ia64/itanium
				powerpc64
				s390x
			(3) three kernel versions available for x86
				regular (supports SMP)
					memory support limited to 	4GB
					memory limit per process	3GB
				PAE
					memory support limited to 	16GB (on processors that supports PAE, almost all except some early Pentium M)
					memory limit per process	4GB (virtual memory space)... 3GB (of which available to user-space code & data)
				Xen	(Dom0..DomU(3))
					each domain limited to RAM	16GB
					physical machine RAM limit	64GB		
			
			NOTE:
				HUGEMEM kernel, not available on RHEL5.. must switch to x86-64, then you'll have following supported:
					processors					64	
					memory support limited to 	256GB
					memory limit per process	512GB
		
	kernel modules
		/lib/modules/$(uname -r)
	kernel modules utilities
		/etc/modprobe.conf
		lsmod
		modprobe
		modprobe -r
		modinfo
	initrd (specified in grub.conf.. must match the exact filename)
		to rebuild the initrd so that module "usb_storage" will be loaded early on boot:
			# mkinitrd --with=usb_storage /boot/initrd-$(uname -r).img $(uname -r)
	/dev
	managing /dev with udev
		determine:
		- filenames
		- permissions
		- owners and groups
		- commands to execute when a new device shows up	
	
		/etc/udev/rules.d
			add this line to the new file "99-usb.rules":
				KERNEL=="sdc1", NAME="myusbkey", SYMLINK="usbstorage"
		
		mknod /dev/myusbkey b 8 0				<-- not persistent
		MAKEDEV
	/proc
	/etc/sysctl.conf
		sysctl -a
		sysctl -p
		sysctl -w
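Every sysctl key maps onto a file under /proc/sys (dots become slashes), so the same value can be read either way; a small sketch (writes via sysctl -w require root):

```shell
# kernel.ostype  <->  /proc/sys/kernel/ostype
cat /proc/sys/kernel/ostype     # prints: Linux
# the equivalent read via the sysctl tool would be:  sysctl kernel.ostype
# a persistent setting goes in /etc/sysctl.conf and is applied with sysctl -p
```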
	exploring hardware devices
		hal-device
		hal-device-manager
		lspci
		lsusb
	monitoring processes & resources
	
	
[ ] UNIT 4 - SYSTEM SERVICES

	network time protocol
		/etc/ntp.conf
		system-config-date
		ntpdate					<-- reset clock manually
		
		- ntp clients should use 3 time servers, which lets clients reject bogus synchronization messages if one of the servers' NTP daemons or clocks malfunctions
		- NTP counters the drift by manipulating the length of a second
	
	system logging
		centralized logging daemons: 
			SYSLOGD (system logging)
			KLOGD 	(intercepts kernel messages & pass it to syslogd)
			
		/etc/rc.d/init.d/syslog		<-- System V script SYSLOG controls both the syslogd & klogd daemons
		
		/etc/syslog.conf			<-- configures system logging; each message has an associated facility & severity
		/etc/sysconfig/syslog		<-- sets switches used when starting syslogd & klogd from the System V initialization script
		
		messages can be logged to:
			- files
			- broadcast to connected users
			- written to console
			- transmitted to remote logging daemons across the network
							setup remote logging:
								on the logging server edit /etc/sysconfig/syslog.. SYSLOGD_OPTIONS="-r -m 0"
									restart service
									
									on client edit /etc/syslog.conf
									add this.. user.* @<ip of log server>
									restart service
									logger -i -t oracle "this is a test"
									check /var/log/messages on log server
									
		log format (has four main entries):
			date & time
			hostname where the message came from
			name of application or subsystem where the message came from
			actual message
			
	XOrg: the X11 server
			- open source implementation of X11
			- XOrg consists of one core server with dynamically loaded modules
				drivers: 	ati, nv, mouse, keyboard, etc.
				extensions:	dri, glx, extmod
			- font rendering
				native server:	xfs 		(a separate service)
				fontconfig/xft libraries	(more efficient; implemented within the XOrg core server, will soon replace xfs)
			
			www.x.org (x consortium)				<-- creates reference implementation of X under an open source license
			xorg.freedesktop.org					<-- adds hardware drivers for a variety of video cards & input devices, along with several software extensions
			wiki.x.org
			
			CLIENT --> X --> VIDEO CARD				<-- x provides a standard way in which applications, x clients, may display & write on the screen
			
			/var/log/Xorg.0.log						<-- logfile
			
			/usr/share/fonts & $HOME/.fonts			<-- to add non-default fonts, xft spawns "fc-cache" & reads the contents
			
			"no-listen = tcp"						<-- comment out this line in the xfs config file to accept network connections (refusing them is the default)
														- network font servers listen on TCP port 7100
										
		XOrg server configuration
			system-config-display					<-- best results while in runlevel 3; to run an X client displayed on a remote system, no local server config is necessary
				--noui
				--reconfig
			/etc/X11/xorg.conf
			
		XOrg in runlevel 3
			/usr/X11R6/bin/xinit			<-- two methods to establish the environment
			/usr/X11R6/bin/startx
		
			environment configuration (runlevel 3):
				/etc/X11/xinit/xinitrc 	& ~/.xinitrc
				/etc/X11/xinit/Xclients	& ~/.Xclients
				/etc/sysconfig/desktop
				
			XOrg in runlevel 3:
				1) startx will pass control of X session to "/etc/X11/xinit/xinitrc" unless "~/.xinitrc" exists
						reads additional system & user config files:
							resource files:
								/etc/X11/Xresources & $HOME/.Xresources
							input devices:
								/etc/X11/Xkbmap		& $HOME/.Xkbmap
								/etc/X11/Xmodmap	& $HOME/.Xmodmap
						xinitrc then runs all shell scripts in 
							/etc/X11/xinit/xinitrc.d
						xinitrc then turns over control of the X session to ~/.Xclients or, if that does not exist, /etc/X11/xinit/Xclients
				2) /etc/X11/xinit/Xclients reads /etc/sysconfig/desktop
						if unset then it will attempt to run the following in order:
							Gnome
							KDE
							twm (failsafe mode - xclock, xterm, mozilla)

						Example input on file (/etc/sysconfig/desktop): 
							DISPLAYMANAGER="GNOME"
							DESKTOP="GNOME"
		
			environment configuration (runlevel 5)
				/etc/inittab
				/etc/sysconfig/desktop
				/etc/X11/xdm/Xsession				
							
			XOrg in runlevel 5
				1) if /etc/inittab is runlevel 5, then /sbin/init will run /etc/X11/prefdm (invokes the X server & the display manager chosen in /etc/sysconfig/desktop)
						when display manager is started:
							/etc/X11/xdm/Xsetup_0, before display manager presents a login widget
				2) once authenticated, /etc/X11/xdm/Xsession is run (similar to startx in runlevel 3)
					
	Remote X sessions (X protocol is unencrypted)
		host-based sessions:	implemented through xhost 
		user-based sessions:	implemented through Xauthority mechanism
		sshd may automatically install xauth keys on remote machine
		
		xhost +trustedhost
		xhost -friendlyhost
		xhost +					<-- this is dangerous
		
		$HOME/.Xauthority		<-- contains the magic cookies (keys) that grant access to the local display
		
		ssh -Y remote-host		<-- tunnel SSH, user-based session
		
	SSH: Secure Shell
		can tunnel X11 and other TCP based network traffic
		
		# ssh -L 8080:remote-server:80 user@ssh-server				<-- tunnel TCP traffic between the SSH server & client, redirect port 8080 of the local system to port 80
																		of the remote server, by pointing your web browser to http://localhost:8080 you will access the webpage
																		on remote-server:80.. you can also do this on VNC
																		
	VNC: Virtual Network Computing
		uses less bandwidth than pure remote X desktops
		server can automatically be started via /etc/init.d/vncserver
		
		vncserver
		runs $HOME/.vnc/xstartup
		
		vncviewer host:screen
		unique screen numbers distinguish between multiple VNC servers on the same host
		
		SSH tunneling:		vncviewer -via user@host <localhost>:1
		
		the first client can allow multiple connections:	-Shared
		can also be "view-only" for demos
		
	CRON
			crond deamon
			man 5 crontab
			
			/etc/cron.allow
			/etc/cron.deny			<-- only this one exists on my system
			
			cron access control:
				if neither cron.allow nor cron.deny exists 		only root is allowed to install a new crontab
				if only cron.deny exists, 						all users except those listed in cron.deny can install crontab files
				if only cron.allow exists, 						root and all listed users can install crontab files
				if both files exist,							cron.deny is ignored
				
				NOTE: denying a user via cron.allow & cron.deny does not disable their currently installed crontab
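			crontab entry format (see man 5 crontab).. e.g. a job running at 02:30 every Monday (the script path is a hypothetical example):

			```
			# min  hour  day-of-month  month  day-of-week  command
			  30   2     *             *      1            /usr/local/bin/backup.sh
			```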
		
		system crontab files
			/etc/crontab			<-- master system crontab file
			
			/etc/cron.hourly
			/etc/cron.daily
			/etc/cron.weekly
			/etc/cron.monthly
			
			/etc/cron.d/			<-- contains additional system crontab files
			
				sample for oracle logfiles (will retain 7 logfiles of more than 100MB):
					[root@centos5-11g logrotate.d]# pwd
					/etc/logrotate.d

					[root@centos5-11g logrotate.d]# cat oracle
					
					/u01/app/oracle/diag/rdbms/ora11/ora11/alert/*xml {
					daily
					rotate 7
					missingok
					size 100M
					}
			
			run-parts <directory>	<-- command that runs all scripts on a directory
			
		daily cron jobs
			tmpwatch				<-- cleans old files in /tmp
			logrotate				<-- rotates logs, /etc/logrotate.conf
			logwatch				<-- system log analyzer and reporter
			
	ANACRON
		- runs cron jobs that did not run while the computer was down
	
		/etc/anacrontab

		contents:		
			1 		65    cron.daily      run-parts /etc/cron.daily
			7 		70    cron.weekly     run-parts /etc/cron.weekly
			30      75    cron.monthly    run-parts /etc/cron.monthly

			field 1: if the job has not been run in this many days..
			field 2: wait this number of minutes after reboot and then run it
			field 3: job identifier
			field 4: job to run
		
		how it works:
			when /etc/crontab runs cron jobs.. 0anacron is run first & sets a timestamp in /var/spool/anacron/* noting the time it was last run
			when a server that was down for X days starts up again, anacron reads /etc/anacrontab.. then compares each job's field 1 against the timestamp in /var/spool/anacron/*
			if the job is overdue.. anacron runs it after the delay in minutes given in field 2 of /etc/anacrontab
			
	CUPS (uses internet printing protocol)
		- allows remote browsing of printer queues
		- based on HTTP/1.1
		- uses PPD files to describe printers
		- only members of the SYS group can access the web-based administration interface
		
		/etc/cups/cupsd.conf
		/etc/cups/printers.conf				<-- automatically generated by printer tools
		
		system-config-printer
		web based: 	localhost:631
		cli:		lpadmin
		
		documentation: /usr/share/doc/<cups>
		
			
[ ] UNIT 5 - USER ADMINISTRATION

	adding a new user account
		useradd
		passwd
		
		newusers		<-- add users in batch; drawback is that user home directories are not populated with files from /etc/skel
		chpasswd

    user private groups
    modifying/deleting user accounts
    	usermod
    
    group administration
    	groupadd
    	groupmod -n staff employee				<-- renames the group employee to staff; affected users & files pick up the new name
    	groupdel
    	
    password aging policies
    	- by default passwords never expire.. you can edit /etc/login.defs to adjust the defaults
    
    	chage
    	lchage
    	
		[root@centos5-11g ~]# date
		Fri Aug 15 09:21:36 PHT 2008
		[root@centos5-11g ~]# chage -M 3 -m 2 -W 2 -I 2 -E 2008-08-21 kathy
		Last password change        : Aug 15, 2008
		Password expires            : Aug 18, 2008
		Password inactive           : Aug 20, 2008
		Account expires             : Aug 21, 2008
		Minimum number of days between password change    : 2
		Maximum number of days between password change    : 3
		Number of days of warning before password expires : 2

	network users
		info about users may be stored & managed on a remote server
		two types of info must always be provided for each user account
			account info			<-- controlled by NSS	(NAME SERVICE SWITCH)
			authentication			<-- controlled by PAM	(PLUGGABLE AUTHENTICATION MODULES), encrypts the password given at login & compares it to the password provided by NSS
			
	authentication configuration
		system-config-authentication
		authconfig-tui (text based)
		authconfig-gtk (GUI)
		
		SUPPORTED ACCOUNT INFORMATION SERVICES:
			(local files)
			NIS						<-- gets info from database maps stored on NIS server
			LDAP					<-- entries on LDAP directory server
			Hesiod					<-- stores info as special resources in a DNS name server, its use is relatively uncommon
			Winbind					<-- uses winbindd to automatically map accounts stored in Windows domain controller to Linux by storing SID to UID/GID mappings in a
											database & automatically generating any other NSS info that is required
		
		SUPPORTED AUTHENTICATION MECHANISMS:
			(NSS)
			kerberos				<-- authenticates by requesting a ticket (from the server); if the user's password decrypts the ticket.. he is authenticated
			ldap					<-- username, password on LDAP directory server
			smartcards				<-- use smartcards, also to lock the system
			smb						<-- uses Windows domain controller
			Winbind					<-- uses Windows domain controller
			
		Example: NIS CONFIGURATION (not encrypted)
			RPMS:
				ypserv (server)
				ypbind (client)
				yp-tools
				portmap
			
			system-config-authentication
			
			ypserv (running on server)
				rpc.yppasswdd											<-- allows NIS clients to update the passwords on NIS
			ypbind (running on clients to share info with server)
			portmap
			
			what does this actually do? (five text files changed)
				/etc/sysconfig/network			<-- specify NIS domain
				/etc/yp.conf					<-- specify which server to use for NIS domain
				/etc/nsswitch.conf				<-- specify NIS as source of info for password, shadow, group
				/etc/sysconfig/authconfig		<-- specify "USENIS=yes"
				/etc/pam.d/system-auth-ac		<-- password changes for NIS accounts will be sent to rpc.yppasswdd (running on master)
				
			NIS is relatively insecure.. can be used with KERBEROS
			alternative is LDAP protected with TLS (SSL)..
			
			############################ NIS AUTOMOUNTER CONFIGURATION STEP BY STEP - START ############################
				RPMS:
					ypserv (server)
					ypbind (client)
					yp-tools
					portmap
			
				NFS SERVER	
					1) configure NFS server, edit /etc/exports
					
							/rhome/station12 172.24.0.12(rw,sync)
		
					2) # exportfs -a
					3) Make sure the required NFS, NFSLOCK, AND PORTMAP are there & started
				
				NFS CLIENT
					1) Make sure the required NETFS, NFSLOCK, AND PORTMAP daemons are there & started
					2) test mounting the remote home directory
							
							mount -t nfs 172.24.254.254:/rhome/station12 /rhome
					
					3) edit the auto.master file that will refer to auto.home
							
							#/etc/auto.master
							/rhome      /etc/auto.home --timeout=60
							
					4) edit auto.home
						
							#/etc/auto.home
							nisuser12   172.24.254.254:/rhome/station12/&   -nosuid
		
					5) start autofs
						# service autofs start
							
				NIS SERVER
					1) edit /etc/sysconfig/network
					
							[root@server1 ~]# cat /etc/sysconfig/network
							NETWORKING=yes
							HOSTNAME=server1.example.com
							GATEWAY=172.24.254.254
							NISDOMAIN=RHCE

					2) edit /etc/yp.conf
							# /etc/yp.conf - ypbind configuration file
							ypserver 127.0.0.1
							
					3) restart necessary daemons
							portmap  	The foundation RPC daemon upon which NIS runs.  
							yppasswdd  	Lets users change their passwords on the NIS server from NIS clients  
							ypserv  	Main NIS server daemon  
 

					4) make sure daemons are running 
							# rpcinfo -p localhost
							
					5) initialize NIS domain
							# /usr/lib/yp/ypinit -m
			
					6) restart ypbind and ypxfrd
							ypbind  	Main NIS client daemon  
							ypxfrd  	Used to speed up the transfer of very large NIS maps 
							
					7) make sure daemons are running 
							# rpcinfo -p localhost
							
					8) add new users

							[root@server1 yp]# useradd -d /rhome/station12/nisuser12 nisuser12
							
							[root@server1 yp]# usermod -d /rhome/nisuser12 nisuser12
							
							[root@server1 ~]# ypcat passwd
							nisuser12:$1$1C1UkauJ$ASV7yuHKhMsspBx6SVhpO/:500:500::/rhome/nisuser12:/bin/bash

							[root@server1 ~]# getent passwd nisuser12
							nisuser12:x:500:500::/rhome/nisuser12:/bin/bash

							[root@server1 ~]# ypmatch nisuser12 passwd
							nisuser12:$1$1C1UkauJ$ASV7yuHKhMsspBx6SVhpO/:500:500::/rhome/nisuser12:/bin/bash

				NIS CLIENT
					1) system-config-authentication 
						- will create yp.conf
						- define /etc/sysconfig/network NISDOMAIN
						- updates /etc/nsswitch.conf, place NIS
					
					2) start necessary daemons
							portmap
							ypbind
							
					3) make sure daemons are running 
							# rpcinfo -p localhost
				
					4) configure /etc/hosts, include both servers
					
					5) test NIS access to the NIS server
					
							[root@station12 ~]# ypcat passwd
							nisuser12:$1$1C1UkauJ$ASV7yuHKhMsspBx6SVhpO/:500:500::/rhome/nisuser12:/bin/bash

							[root@station12 ~]# getent passwd nisuser12
							nisuser12:x:500:500::/rhome/nisuser12:/bin/bash

							[root@station12 ~]# ypmatch nisuser12 passwd
							nisuser12:$1$1C1UkauJ$ASV7yuHKhMsspBx6SVhpO/:500:500::/rhome/nisuser12:/bin/bash
							
					6) edit /etc/nsswitch.conf..arrange the nis value
					
					7) restart sshd
					
					8) test login
							# ssh -l nisuser12 172.24.0.12
							
			############################ NIS AUTOMOUNTER CONFIGURATION STEP BY STEP - END ############################
					
		Example: LDAP CONFIGURATION (recommended to use TLS (SSL))
			RPMS:
				nss_ldap
				openldap
			
			what does this actually do? (five text files changed)
				/etc/ldap.conf					<-- specify location of LDAP, & TLS used
				/etc/openldap/ldap.conf			<-- specify location of LDAP
				/etc/nsswitch.conf				<-- source of info for password, shadow, group
				/etc/sysconfig/authconfig		<-- specify "USELDAPAUTH=yes", "USELDAP=yes"
				/etc/pam.d/system-auth-ac		<-- PAM will use directory to authenticate
				
			ldapsearch -x -Z								<-- if the server is reachable, will dump user info in LDIF format
			openssl s_client -connect 192.168.203.26:636	<-- TLS can be tested by openssl s_client
	
	switching accounts
		su - 
		su - root -c "free -m"		<-- run a command
		
		sudo
			/etc/sudoers
			visudo					<-- to edit the /etc/sudoers
			
			sample:
				User_Alias	LIMITEDTRUST=student1,student2
				Cmnd_Alias	MINIMUM=/etc/rc.d/init.d/httpd
				
				LIMITEDTRUST	ALL=MINIMUM				<-- student1,student2 can use sudo with commands listed in MINIMUM
				
	SUID & SGID EXECUTABLES
		for security reasons, SUID & SGID are not honored when set on non-compiled programs..such as shell scripts
		
	SGID DIRECTORIES
		
	STICKY BIT
		/tmp has sticky bit.. users can only delete their respective files..
	
	FOR COLLABORATION (SECURED):				
		as root user, make a group and directory.. then grant
		chmod 3770 <directory>
			
		drwxrws--T 2 oracle collaboration  4096 Aug 16 22:45 collaboration			<-- viewable by collaboration members; each member can only delete his own
																							files, except root & oracle (owner of the folder).. also umask should be 022
																							so that files created in the collaboration folder are read-only to other users
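	the setup above as a runnable sketch (the group & directory names are arbitrary examples; groupadd & chgrp need root, so they are allowed to fail here):

	```shell
	groupadd collaboration 2>/dev/null || true                  # needs root; ignore if it exists
	mkdir -p /tmp/collab_demo
	chgrp collaboration /tmp/collab_demo 2>/dev/null || true    # harmless if group creation failed
	chmod 3770 /tmp/collab_demo        # setgid + sticky + rwx for owner & group
	stat -c %a /tmp/collab_demo        # prints 3770
	ls -ld /tmp/collab_demo            # mode column shows drwxrws--T
	```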
	DEFAULT FILE PERMISSIONS
		umask for root & any system account (uid < 100)															022
		for regular user (uid > 99), provided the primary group is the user private group, else 022				002
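		the effect of umask on newly created files & directories can be verified directly:

		```shell
		umask 022
		touch /tmp/umask_demo_file        # files start from 666, minus the mask
		mkdir -p /tmp/umask_demo_dir      # directories start from 777, minus the mask
		stat -c %a /tmp/umask_demo_file   # prints 644
		stat -c %a /tmp/umask_demo_dir    # prints 755
		```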
		
	ACCESS CONTROL LISTS (ACLs)
		- very useful if you want to allow additional access to other groups or particular users on a particular directory or file
		- if you set "rx" on /depts/tech and set a default of "rw" for a user on /depts/tech.. he'll not be able to create new files, but can edit existing files in that directory
	
		drwxrws---+ 2 root hr    4096 Aug 16 23:51 tech			<-- it will show + sign if contains ACL
		-rw-rw----+ 1 manager hr  0 Aug 16 23:51 test
		
		NOTE: filesystems created during installation are automatically mounted with the ACL option; filesystems created after installation must be specifically mounted with the ACL option
	
		mount -o remount,acl /home							<-- to enable ACL on a filesystem
		getfacl /home/schedule.txt							<-- view ACL
		setfacl -m u:visitor:rx /home/schedule.txt			<-- grant visitor rx access to file
		setfacl -x u:visitor /home/schedule.txt				<-- remove visitor's ACL entry
		setfacl -m d:u:visitor:rw /home/share/project		<-- set default ACL on a directory
		setfacl -m u:visitor:--- /home/share/project		<-- to not have read/write/execute access to a file
		
	SELINUX
			NSA (National Security Agency).. their first implementation of MAC was a system called Mach..
				later.. they implemented it in the Linux kernel as patches, which became known as SELinux
			
			MAC	mandatory access control
				Type Enforcement (assign values to files, directories, resources, users, processes)
			DAC	discretionary access control
			
			policy							<-- rule set, defines which resources a restricted process is allowed to access; any action not explicitly allowed is denied by default
			restricted/unconfined			<-- processes category
			security context				<-- all files & processes have this
			
			elements of context:
				user
				role
				type
				sensitivity
				category
				
			ls -Z <filename>			<-- to view security context of a file
			ls -Zd <directory>
			
			ps -eZ						<-- view entire process stack
			ps Zax			
			
			RHEL4	protecting 13 processes
			RHEL5	protecting 88 processes
		
		SELinux targeted policy
			most local processes are unconfined
			
			chcon -t tmp_t /etc/hosts		<-- security context can be changed
			restorecon /etc/hosts			<-- restore default
			
			strict policy
			targeted policy
		
			SELINUX= can take one of these three values:
			      enforcing - SELinux security policy is enforced.
			      permissive - SELinux prints warnings instead of enforcing.
			      disabled - SELinux is fully disabled.
			
			SELINUXTYPE= type of policy in use. Possible values are:
			      targeted - Only targeted network daemons are protected.			<-- DEFAULT
			      strict - Full SELinux protection.

		SELinux management
			getenforce
			system-config-securitylevel			<-- disabling requires reboot
			system-config-selinux
			
			/var/log/audit/audit.log			<-- default logfile for SELinux
			setroubleshootd
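			a typical check-and-toggle sequence (setenforce changes the mode only until reboot & needs root; denials are logged with type AVC):

			```shell
			getenforce                                 # Enforcing / Permissive / Disabled
			setenforce 0                               # switch to permissive until reboot
			setenforce 1                               # back to enforcing
			grep AVC /var/log/audit/audit.log | tail   # recent SELinux denials
			```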

			
[ ] UNIT 6 - FILESYSTEM MANAGEMENT

	overview: adding new filesystems to filesystem tree
		identify device
		partition device
		make filesystem
		label filesystem
		create entry in /etc/fstab
		mount new filesystem
		
	device recognition
		MBR contains:
			- executable code to load operating system
			- contains structure describing the hard drive partitions
				partition id or type
				starting cylinder for partition
				number of cylinders for partition
		
		four primary partitions, one of which may be an extended partition (which has separate partition descriptors in the first sector of the partition)
		
		some linux partition types:	
			5 or f		extended
			82			linux swap
			83			linux
			8e			linux LVM
			fd			linux RAID auto

	disk partitioning
		total max number of partitions supported by the kernel:
				IDE devices		63
				SCSI devices	15
										
		/usr/share/doc/kernel-doc-2.6.18/Documentation/devices.txt		<-- list of devices
		
		why partition devices?
			containment
			performance
			quota				<-- implemented on filesystem level
			recovery
			
	managing partitions
		fdisk
		sfdisk					<-- more accurate
		GNU parted
		partprobe				<-- at system bootup, the kernel makes its own in-memory copy of the partition tables from disk.. fdisk edits the on-disk copy of the partition tables;
										to update the in-memory copies.. run this
										
	making filesystems
		mkfs									<-- front end or wrapper to various filesystem creation programs; if -t is ext3.. then it will look for mkfs.ext3.. and so on
		mkfs.ext2, mkfs.ext3, mkfs.msdos
		mke2fs									<-- when you do "man mkfs.ext3" this is called
			-L	to add label
			
		mkfs.ext3 -L opt -b 2048 -i 4096 <device>	<-- creates ext3 filesystem on a new partition, 
															use 2KB sized blocks
															& one inode per every 4KB of disk space (should not be lower than block size)
															& label of "opt"
			
	filesystem labels
		e2label <device> <label>
		mount <options> LABEL=<fs label>
		blkid									<-- can be used to see labels and filesystem type of all devices
				
			sample:
				[root@centos5-11g ~]# blkid /dev/sda3
				/dev/sda3: LABEL="/" UUID="d86726ee-0f6c-455f-b6ff-af20fef3c941" SEC_TYPE="ext2" TYPE="ext3"
				
				[root@centos5-11g ~]# blkid /dev/mapper/vgsystem-lvu01
				/dev/mapper/vgsystem-lvu01: UUID="18602cdd-8219-4377-bac3-7617ec090d8d" SEC_TYPE="ext2" TYPE="ext3"

				e2label /dev/mapper/vgsystem-lvu01 u01
				
				e2label /dev/mapper/vgsystem-lvu01
				
				mount LABEL=u01 /u01
		
	tune2fs (adjust filesystem parameters)				<-- can also be used to add journal to ext2 filesystem first created with mke2fs
		reserved blocks
		default mount options
		fsck frequency
		
		tune2fs -m 10 /dev/sda1							<-- modify percentage of reserved blocks
		tune2fs -o acl,user_xattr /dev/sda1				<-- modify mount options
		tune2fs -i0 -c0 /dev/sda1						<-- modify filesystem checks
		
		dumpe2fs										<-- view current settings of a filesystem
	
	MOUNT POINTS AND /ETC/FSTAB
		used to create the filesystem hierarchy on boot up
		contains six fields per line
		floppy & cd-rom have noauto as an option		<-- Can only be mounted explicitly (i.e., the -a option will not cause the file system to be mounted)
		
		fields in fstab:
			device
			mount point
			fs type
			mount options
			dump freq
														NOTE by Charlie:				
														Determines whether the dump command (used for backup) needs to
														backup the filesystem. 1 for yes, zero (0) for no. The dump command is
													for ext2 file systems only. Do not use it for ext3 file systems.. Linus
													Torvalds himself discourages the use of the dump command on ext3
													filesystems due to certain technical issues.
				1 daily
				2 every other day
				
			fsck order									<-- NFS and cd-rom should be ignored
				0 	ignore
				1 	first (must for /)
				2-9	second						
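		a sample /etc/fstab line showing the six fields (label & mount point reuse the u01 example from earlier; the acl option is from the ACL section):

		```
		# device      mount point  fs type  options         dump  fsck
		LABEL=u01     /u01         ext3     defaults,acl    1     2
		```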
			
	mounting filesystems
		mount
			-t 
			-o
		
		default option is: rw,suid,dev,exec,async
		
		reads /etc/mtab if invoked w/o arguments..		<-- display currently mounted filesystems
		
		mount options for EXT3:
			rw
			suid						<-- suid or sgid file modes are honored
			dev							<-- devices files permitted
			exec						<-- permit execution of binaries
			async						<-- file changes managed asynchronously
			
			acl							<-- POSIX ACLs are honored
			uid=henry, gid=henry		<-- all files on the mounted filesystem appear owned by this user & group
			loop						<-- using a loopback device
			owner						<-- similar to user option, but in this case the mount request and the device, or special file, must be owned by the same EUID
			
	unmounting filesystems
		umount -a						<-- references /etc/mtab
		
		fuser -v <mount point>			<-- to show user accessing the mount point
		ps -aux | grep \/u01\/app		<-- another way
		
		fuser -km <mount point>			<-- send kill signal to the process
		kill <process>					<-- dangerous
		
		mount -o remount,ro /u01		<-- remounts to read only
		
	MOUNT BY EXAMPLE
		mount -t ext3 -o noexec /dev/hda7 /home						<-- for security, denying permission to execute files
		mount -t iso9660 -o loop /iso/documents.iso /mnt/cdimage	<-- mount cd drive
		mount -t vfat -o uid=515,gid=520 /dev/hdc2 /mnt/projx		<-- mount vfat, owner is 515
		mount -t ext3 -o noatime /dev/hda2 /data					<-- Do not update inode access times on this file system (e.g, for faster access on the news spool to speed up news servers)
		mount --bind /u01 /u02										<-- Since Linux 2.4.0 it is possible to remount part of the file hierarchy somewhere else
		
	handling SWAP partitions and files (supplement to system RAM)
		step by step:
			1) create a swap partition or file
			2) make the partition type swap (for partitions only)
			3) write a special signature using.. mkswap
			4) add an entry to /etc/fstab
			5) activate swap.. swapon -a 
			
		setting up swap file
			dd if=/dev/zero of=swapfile bs=1024 count=X				<-- X is the file size in 1KB blocks; bs is the block size in bytes (could also use bs=1M)
			then.. mkswap
			then.. add to /etc/fstab
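		a small end-to-end sketch (8 MB only for illustration, real swap files are usually much larger; swapon & the fstab entry need root, so they are left commented):

		```shell
		dd if=/dev/zero of=/tmp/swapfile bs=1024 count=8192 2>/dev/null   # 8192 x 1KB blocks
		chmod 600 /tmp/swapfile
		mkswap /tmp/swapfile             # write the swap signature
		# swapon /tmp/swapfile           # activate (root only)
		# /etc/fstab entry:  /tmp/swapfile  swap  swap  defaults  0 0
		```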
	
	MOUNTING NFS FILESYSTEMS
		make remote filesystem as though it were a local filesystem
		/etc/fstab for persistent network mounts
                        <server>:</path/of/dir> </local/mnt/point> nfs <options> 0 0
		/etc/init.d/netfs											<-- NFS shares are mounted at boot time
		exports can be mounted manually
		
		1) check NFS service on host server
		2) edit /etc/exports file on host server.. /var/ftp/pub 192.168.203.25(rw)
		3) service nfs reload
		4) mount -t nfs <host server>:/var/ftp/pub /mnt/server1
		
		some nfs mount options:
			rsize=8192,wsize=8192			will speed up NFS throughput
			soft							return with an error on a failed I/O attempt
			hard							will block a process that tries to access an unreachable share
			intr							interrupt or kill if server is unreachable
			nolock							disable file locking (lockd), & allow inter operation with older NFS servers
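		putting the options into a persistent /etc/fstab entry (server name & paths reuse the earlier examples):

		```
		server1:/var/ftp/pub  /mnt/server1  nfs  rsize=8192,wsize=8192,hard,intr  0 0
		```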
			
	AUTOMOUNTER (autofs)
		/etc/auto.master		<-- provides directory /misc
		/etc/auto.misc			<-- configuration file listing the filesystem to be mounted under the directory
		"autofs" deamon
		
		- filesystems automatically unmounted after a specified interval of inactivity
		- enable the special map "-hosts" to browse all NFS exports on the network
		- supports wildcard directory names
		
		sample:
			add this on /etc/auto.misc
				server1         -ro,intr,hard           192.168.203.26:/var/ftp/pub
			then
				service autofs reload
			then 
				cd /misc/server1
			then
				[oracle@centos5-11g server1]$ ls -l
				total 12
				-rw-r--r-- 1 root      root       9 Aug 17  2008 test1
				-rw-r--r-- 1 nfsnobody nfsnobody  9 Aug 17  2008 test2
				-rw-r--r-- 1 root      root      13 Aug 17  2008 test3

				Wildcard Key
			       A map key of * denotes a wild-card entry. This entry is consulted if the specified key does not exist in the map.  A typical wild-card
			       entry looks like this:
			
			         *         server:/export/home/&
			
			       The special character '&' will be replaced by the provided key.  So, in the example above, a lookup for the key 'foo' would yield a
			       mount of server:/export/home/foo.
	
		DIRECT MAPS (absolute path names)
			- does not obscure local directory structure
			- referenced in /etc/auto.master
			
			on /etc/auto.master
				/-	/etc/auto.direct
				
			on /etc/auto.direct
				/foo			server1:/export/foo
				/usr/local/		server1:/usr/local
				
		GNOME-MOUNT
			gnome-mount
			- automatically mounts removable devices
			- integrated with HAL (hardware abstraction layer)
			- replaces fstab-sync (RHEL4)
			
			
[ ] UNIT 7 - ADVANCED FILESYSTEM MANAGEMENT

	configure QUOTA system
		- implemented within the kernel
		- enabled per filesystem basis
		- individual policies for groups or users
			limit number of blocks or inodes
			implement soft & hard limit
					
	step by step implementation:
				1) LABEL=/home /home ext3 defaults,usrquota,grpquota 1 2			<-- edit fstab	
				2) mount -o remount -v /home										<-- remount
				3) # quotacheck -cug /home											<-- create quota files
				4) # quotaon -vug /home												<-- activate quota
				5) # edquota <username>												<-- edit user's quota
			
																						Filesystem specifies in which quota-enabled filesystem the quota would be
																						set. The blocks column specifies the number of blocks, in kilobytes, that lisa
																						currently owns. The soft field specifies the block soft limit. The hard field
																						specifies the block hard limit. The inodes column specifies the number of inodes
																						that are owned by lisa. The soft field specifies the inode soft limit. The hard field
																						specifies the inode hard limit.
																						
																						The settings shown will give a soft block limit of 10MB and a hard block limit of
																						12MB to lisa. Soft limits may be exceeded for a certain grace period. Hard limits
																						may not be exceeded.
																						
				6) # edquota -t														<-- To modify the grace period for users,
																						As we can observe, the default grace period for users is 7 days. The countdown for
																						the grace period is initiated as soon as the soft limit is breached. After the grace
																						period, the user will be forced to free space so that his utilization falls below the
																						soft limit.
				7) # edquota -p lisa tony rose										<-- To make lisa's quota settings be the prototype for other users
				
			GROUP QUOTAS
				8) # edquota -g training											<-- To assign group quotas
				9) # edquota -tg													<-- To modify the grace period for group quotas
				10) # edquota -g -p training finance accounting						<-- To make the training group's quota settings be the prototype for other groups
				
			Summarizing Quotas for a Filesystem
				11) # repquota -aug | less											<-- 
																						We are presented with two (2) tables. The table on top is the summary for
																						user quotas. It specifies which filesystem it covers. It also specifies the grace
																						period for both block and inode limits.
																						The first column is the user name.
																						
																						The next two (2) columns can each be a plus (+) or a minus (-). A + on the
																						left indicates that the block soft limit has been breached. A + on the right
																						indicates that the inode soft limit has been breached.
																						The next four (4) columns show disk utilization. 'used' specifies the
																						number of blocks currently used. 'soft' specifies the soft limit. 'hard'
																						specifies the hard limit. 'grace' specifies the remaining time from the grace
																						period.
																						The table on the lower part of the screen is for the group quota summary.

			Keeping Quota Information Accurate	(put script in /etc/rc.local)
				12) 
					#!/bin/bash
					# File name: /etc/cron.daily/quotacheck.sh or /home/oracle/bin/quotacheck.sh
					#
					# This script performs a quotacheck
					/sbin/quotaoff -vug -a &> /home/oracle/offerror.txt; cat /home/oracle/offerror.txt | mail -s "quotaoff done" root@localhost
					/sbin/quotacheck -vugm -a &> /home/oracle/checkerror.txt; cat /home/oracle/checkerror.txt | mail -s "quotacheck done" root@localhost
					/sbin/quotaon -vug -a &> /home/oracle/onerror.txt; cat /home/oracle/onerror.txt | mail -s "quotaon done" root@localhost

			
																						It is important to run quotacheck after the filesystem has been unmounted
																						uncleanly, as after a system crash. quotacheck should also be run every
																						time the system boots.
			reporting:
				user inspection: 
					quota
				quota overviews:
					repquota
				miscellaneous utilities:
					warnquota					<-- mail to users that reached their soft limit

	FSCK - file system check, MUST BE UNMOUNTED

		> fsck.ext3 -cv /dev/vgsystem/lvtmp	<--- check ONLY for bad blocks (verbose), then press "Y" to confirm repairs; add the -p switch if you want it to auto-repair instead
			-p autorepair 
			-c check bad blocks
	
		> fsck.vfat -av /mnt/fat32			<--- check for bad blocks and verbose on FAT filesystem
	

	SOFTWARE RAID (mdadm)
		- multiple disks grouped together into "arrays" to provide better performance, redundancy, both
		- raid levels supported:
			raid 0						<-- stripe
			raid 1						<-- mirror, the only raid level on which the /boot partition can be placed
			raid 5						<-- 3 or more disks, with 0 or more hot spares.. not good for databases
			raid 6						<-- striping with dual (duplicated) distributed parity.. similar to raid 5 except that it improves fault tolerance by tolerating
												the failure of any two drives in the array..
												protects against data loss during the recovery of a single disk failure, giving the administrator additional time to rebuild
		- spare disks add redundancy
		- all the disks should be identical, size & speed
		- partition type Linux RAID
		/proc/mdstat

		#create raid partitions (one per disk, partition type "fd" Linux raid autodetect)
			> fdisk /dev/sdb			(create /dev/sdb1 as a raid partition)
			> fdisk /dev/sdc			(create /dev/sdc1 as a raid partition)
			> fdisk /dev/sdd			(create /dev/sdd1 as a raid partition)
		
		#create raid array
			> mdadm --create /dev/md0 -a yes -l 1 -n 2 /dev/sdb1 /dev/sdc1	<--- create raid1 array, "-a yes" instructs udev to create the md device file if it doesn't already exist
			> mdadm --misc --detail /dev/md0								<--- to view the detail of your raid
			> mkfs.ext3 -v /dev/md0											<--- then format it
			> mkdir -p /mnt/md0												<--- make mountpoint
			> edit fstab and add /mnt/md0

			also.. create /dev/md0 as raid5 with 3 disks and 2 spares
			> mdadm --create /dev/md0 -l 5 -n 3 /dev/sdd1 /dev/sde1 /dev/sdf1 -x 2 /dev/sdg1 /dev/sdh1
					
			other options:
				--chunk=64
				mke2fs -j -b 4096 -E stride=16 /dev/md0		<-- make ext3, "-E stride" can improve performance; it's the software raid device's chunk size expressed in filesystem blocks
																	for example, with an ext3 filesystem that has a 4KB block size on a raid device with a chunk size
																	of 64KB, the stride should be set to 16.. so determine what chunk size you want, then divide by the block size..
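			the stride arithmetic above can be checked quickly in the shell (chunk and block sizes in KB are the example figures from the note):

```shell
# stride = RAID chunk size / filesystem block size (both in KB here)
chunk_kb=64     # mdadm --chunk=64
block_kb=4      # mke2fs -b 4096
stride=$((chunk_kb / block_kb))
echo "mke2fs -j -b 4096 -E stride=$stride /dev/md0"   # stride=16 for this pair
```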
			
		#to add a hot spare
			> mdadm --manage /dev/md0 -a /dev/sdd1
			> mdadm --misc --detail /dev/md0
			
		#to re-add a device that was recently removed from an array 
			> mdadm --manage /dev/md0 --re-add /dev/sdd1
		
		#to fail a device in an array
			> mdadm --manage /dev/md0 -f /dev/sdb1
			> mdadm --misc --detail /dev/md0
			
		#to get an overview of all the raid array
			> cat /proc/mdstat
			
		#then the hot spare kicks in; the faulty device is unusable (must unregister it from the array)
			#to remove the faulty device
			> mdadm --manage /dev/md0 -r /dev/sdb1
		
		#to dismember /dev/md0 
			> umount the filesystem
			> erase the entry in fstab
			> mdadm --manage --stop /dev/md0
			> mdadm --manage /dev/md0 -r /dev/sdb1 /dev/sdc1				<-- remove each member device (could be needed in RHEL4)
			> mdadm --misc --zero-superblock /dev/sdd1						<-- erase the MD superblock from a device.. do this on all member devices (RHEL5)
																					because if you don't, the raid will still show up w/o a valid partition
		
	LVM, logical volume management
		- physical devices can be added & removed with relative ease
		- partition type Linux LVM
		
									logical volumes
					lvcreate
									volume groups
					vgcreate	
									physical volumes
					pvcreate
									linux partitions
						
		#create physical volumes
		> pvcreate /dev/hda3
		
		#assign physical volumes to volume group, you can also extend existing volume group.. vgextend
		> vgcreate vgsystem /dev/hda3
		
		#create logical volume
		> lvcreate -l 83 -n u01 vgsystem
		> lvcreate -L 500M -n u01 vgsystem
			
		#"STRIPE like RAID0" logical volumes across physical volumes, ideal if PVs are contained on separate disks
		> lvcreate -i 2 -L 1G -n u01 vgsystem			<-- stripes across two (2) PVs, with the default stripe size
		
			NOTE: 
				a striped logical volume may be extended later, but only with extents from the original PVs
				also, as an alternative.. you can choose which physical volume you want LV to be assigned.. see manpage
									
		#display the logical volumes and allocated extents
		> lvdisplay -vm <logical_volume>				<-- to show what are the used physical volumes
		> ext2online -C d /dev/vgsystem02/lvu01			<-- grow a mounted ext3 filesystem online
			
		#GROW in "extents" & "size"
		> lvextend -l +83 /dev/vgsystem02/lvu01			<-- (grow logical volume) lvextend is extend, lvreduce to reduce	
		> lvextend -L +500M /dev/vgsystem/u01			<-- binary is in /usr
		> umount <filesystem>
		> resize2fs -p /dev/vgsystem/u01			<-- (grow filesystem), binary is in /sbin
		
		#SHRINK LVM (shrink the filesystem first, then the LV; the LV must never drop below the filesystem size)
		> umount /u01									<-- must be unmounted
		> e2fsck -f /dev/vgsystem02/lvu01				<-- force checking even if the filesystem seems clean
		> resize2fs -p /dev/vgsystem02/lvu01 840M		<-- (shrink filesystem) target ~= used space (760M) * 1.1
		> lvreduce -L 900M /dev/vgsystem02/lvu01		<-- (shrink logical volume) target ~= used space * 1.2
		> dumpe2fs /dev/vgsystem02/lvu01				<-- to view the info about the LVM or partition
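		how the two margins work out, as shell arithmetic (760 MB used is the example figure from the notes; the command sizes above were rounded by hand to 840M and 900M):

```shell
# Rule of thumb from the notes: shrink the filesystem to ~used*1.1 first,
# then the LV to ~used*1.2, so the LV never drops below the filesystem.
# 760 MB used is the example figure; integer arithmetic, results in MB.
used_mb=760
fs_target=$((used_mb * 110 / 100))   # 836 -> rounded to 840M in the notes
lv_target=$((used_mb * 120 / 100))   # 912 -> rounded to 900M in the notes
echo "resize2fs -p /dev/vgsystem02/lvu01 ${fs_target}M"
echo "lvreduce -L ${lv_target}M /dev/vgsystem02/lvu01"
```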

http://askubuntu.com/questions/196125/how-can-i-resize-an-lvm-partition-i-e-physical-volume
http://microdevsys.com/wp/linux-lvm-resizing-partitions/
		
		#
		> e2fsadm										<-- counterpart of resize2fs and lvresize in RedHat3
		
		#to move the logical data to another PV 
		> vgextend 
		> pvmove -v /dev/sdb1 /dev/sdd1					<-- move the data from source to destination
		> vgreduce
		
			NOTE: 
				- can indicate extents to move
				- a certain LV
				- can continue when canceled
			
		#to remove the PV in the VG
		> vgreduce -v vgsystem02 /dev/sdb1
		
		#to remove VG
		> vgchange -an <vg>								<-- deactivate the VG first
		> vgremove
		
		#LVM on top of RAID1
		> create two raid partition
		> hot swap
		> pvcreate /dev/md0
		> vgextend and add the md0
		
			NOTE: (in creating 2 volume groups)
				- one VG for internal
				- one VG for external

	LVM SNAPSHOTS
		- a special LV that is an exact copy of an existing LV at the time the snapshot is created
		- perfect for backups & other operations where a temporary copy of an existing dataset is needed
		- only consumes space where they are different from the original LV
				* snapshots are allocated space at creation but do not use it until changes are made to the original LV or the snapshot
				* when data is changed on the original LV the older data is copied to the snapshot
				* snapshots contain only data that has changed on the original LV or the snapshot since the snapshot was created
				
		common uses:
			backup of live data
				- for database put it in quiesce mode first..
			application testing
			hosting of virtualized machines
			
		#create snapshot of existing LV
			lvcreate -l 64 -s -n datasnap /dev/vgsystem/lvu01			<-- in extents
			lvcreate -L 512M -s -n datasnap /dev/vgsystem/lvu01			<-- in MB, the extents will be pulled from the volume group where the LV resides
			
					  --- Logical volume ---
					  LV Name                /dev/vgraid/vgraidopt
					  VG Name                vgraid
					  LV UUID                0KDodV-JlvI-95E6-bvjb-CXBi-rM6u-iI1n2j
					  LV Write Access        read/write
					  LV snapshot status     source of
					                         /dev/vgraid/raidoptsnap [active]
					  LV Status              available
					  # open                 1
					  LV Size                1000.00 MB
					  Current LE             250
					  Segments               1
					  Allocation             inherit
					  Read ahead sectors     0
					  Block device           253:4
					
					  --- Logical volume ---
					  LV Name                /dev/vgraid/raidoptsnap
					  VG Name                vgraid
					  LV UUID                FwW2n9-MGML-LZ6h-TZH1-WFbK-BYMc-rYQ8fA
					  LV Write Access        read/write
					  LV snapshot status     active destination for /dev/vgraid/vgraidopt
					  LV Status              available
					  # open                 0
					  LV Size                1000.00 MB
					  Current LE             250
					  COW-table size         1000.00 MB
					  COW-table LE           250
					  Allocated to snapshot  0.00%
					  Snapshot chunk size    8.00 KB
					  Segments               1
					  Allocation             inherit
					  Read ahead sectors     0
					  Block device           253:5

		#mount snapshot
			mkdir -p /mnt/datasnap
			mount -o ro /dev/vgsystem/datasnap /mnt/datasnap
			
		#remove snapshot
			umount /mnt/datasnap
			lvremove /dev/vgsystem/datasnap
			
		#check used space.. see "Allocated to snapshot"
			lvdisplay /dev/vgsystem/datasnap
			
		#GROW snapshots
			lvextend -L +500M /dev/vgsystem/datasnap					<-- can be expanded as other LVs
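		the whole create/mount/backup/remove cycle can be strung together. A sketch that only prints the command sequence (VG, LV and backup paths are example names; nothing here touches a real volume):

```shell
#!/bin/bash
# Print (not run) a backup-via-snapshot command sequence.
# vg/lv/snap/size/mnt are example values; review, then pipe to sh to execute.
vg=vgsystem; lv=lvu01; snap=datasnap; size=512M; mnt=/mnt/$snap

plan=$(cat <<EOF
lvcreate -L $size -s -n $snap /dev/$vg/$lv
mkdir -p $mnt
mount -o ro /dev/$vg/$snap $mnt
tar -cf /backup/$lv.tar -C $mnt .
umount $mnt
lvremove -f /dev/$vg/$snap
EOF
)
echo "$plan"
```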
			
	TAR
		tar will extract all of the extended attributes that were archived.. to not extract them, use the "--no-*" switches
		use "rmt" to write to a remote tape device
	
       --preserve
              like --preserve-permissions --same-order

       --acls this option causes tar to store each file's ACLs in the archive.

       --selinux
              this option causes tar to store each file's SELinux security context information in the archive.

       --xattrs
              this option causes tar to store each file's extended attributes in the archive. This option also
              enables --acls and --selinux if they haven't been set already, due to the fact that the data for
              those are stored in special xattrs.

       --no-acls
              this option causes tar not to store each file's ACLs in the archive and not to extract any ACL
              information in an archive.

       --no-selinux
              this option causes tar not to store each file's SELinux security context information in the archive
              and not to extract any SELinux information in an archive.

       --no-xattrs
              this option causes tar not to store each file's extended attributes in the archive and not to
              extract any extended attributes in an archive. This option also enables --no-acls and --no-selinux
              if they haven't been set already.

	Archiving tools: DUMP, RESTORE

		DUMP
			- backup & restore ext2/3 filesystems (does not work with other filesystems)
			- should only be used on unmounted or read-only mounted filesystems
			- full, incremental backups
			
			#do a level 0 backup
				dump -0u -f /dev/nst0 /home
				dump -0u -f /dev/nst0 /dev/hda2
				
					NOTE:
						"u" option will update the /etc/dumpdates, which will record dump info for future use by dump..
						after the level 0 backup, dump will perform an incremental backup every day on active filesystems listed in /etc/fstab
					
			#do an incremental update
				dump -4u -f /dev/nst0 /home
				
					NOTE:
						will perform an incremental update of all files that have changed since the last backup of level 4 or lower..
						as recorded in /etc/dumpdates			
		
			#perform remote backup to tape
				dump -0uf joe@<server>:/dev/nst0 /home
			
					NOTE:
						perform remote backup using rmt, ssh can be used as a transport layer when $RSH is set to ssh
						
					In  the event of a catastrophic disk event, the time required to restore all the necessary backup tapes or files to
					disk can be kept to a minimum by staggering the incremental dumps. An efficient method  of  staggering  incremental
					dumps to minimize the number of tapes follows:
					
					•      Always start with a level 0 backup, for example:
					              /sbin/dump -0u -f /dev/st0 /usr/src
					
					       This should be done at set intervals, say once a month or once every two months, and on a set of fresh tapes
					       that is saved forever.
					
					•      After a level 0, dumps of active file systems are taken on a daily basis, using a modified  Tower  of  Hanoi
					       algorithm, with this sequence of dump levels:
					              3 2 5 4 7 6 9 8 9 9 ...
					
					       For  the  daily  dumps,  it should be possible to use a fixed number of tapes for each day, used on a weekly
					       basis. Each week, a level 1 dump is taken, and the daily Hanoi sequence repeats beginning with 3. For weekly
					       dumps, another fixed set of tapes per dumped file system is used, also on a cyclical basis.
					
					After  several  months  or  so, the daily and weekly tapes should get rotated out of the dump cycle and fresh tapes
					brought in.
					
					(The 4.3BSD option syntax is implemented for backward compatibility but is not documented here.)
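					The "changed since the last dump of a lower level" rule gives each dump in the Hanoi sequence a well-defined base. A sketch that prints which earlier dump each incremental is taken against (pure bookkeeping, no tapes involved):

```shell
#!/bin/bash
# For each dump in the Hanoi-style sequence, report its base: the most
# recent earlier dump whose level is strictly lower.
hanoi_bases() {
    local levels=(0 3 2 5 4 7 6 9 8)
    local -a hist            # hist[i] = level of the dump taken on day i
    local day lvl i base
    for day in "${!levels[@]}"; do
        lvl=${levels[day]}
        base="full dump"
        # walk back through earlier dumps, newest first
        for ((i = day - 1; i >= 0; i--)); do
            if (( hist[i] < lvl )); then
                base="changes since level ${hist[i]} (day $i)"
                break
            fi
        done
        hist[day]=$lvl
        echo "day $day: level $lvl, $base"
    done
}
hanoi_bases
```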
			
		RESTORE
			
			#restore backup
				restore -rf /dev/st0

					-r     Restore (rebuild) a file system. The target file system should be made pristine with mke2fs(8), mounted, and
					       the  user  cd'd into the pristine file system before starting the restoration of the initial level 0 backup.
					       If the level 0 restores successfully, the -r flag may be used to restore any necessary  incremental  backups
					       on  top of the level 0. The -r flag precludes an interactive file extraction and can be detrimental to one's
					       health (not to mention the disk) if not used carefully. An example:
					
					              mke2fs /dev/sda1
					
					              mount /dev/sda1 /mnt
					
					              cd /mnt
					
					              restore rf /dev/st0
					              
					
					       Note that restore leaves a file restoresymtable in the root directory to pass information between  incremen-
					       tal restore passes.  This file should be removed when the last incremental has been restored.

		RSYNC
			
			#rsync on another server
				rsync --verbose  --progress --stats --compress --rsh=/usr/bin/ssh --recursive --times --perms --links --delete *txt oracle@192.168.203.11:/u01/app/oracle/rsync
				OR
				rsync -e ssh *txt oracle@192.168.203.11:/u01/app/oracle/rsync/
		

[ ] UNIT 8 - NETWORK CONFIGURATION

	network interfaces
		ifconfig -a					<-- will show all interfaces, active & inactive
		ip link
		
	driver selection
		/etc/modprobe.conf			<-- RHEL compiles network cards as kernel modules, module is loaded based on alias..
											if there is more than one card utilizing one module.. then the mapping will be based on HW address
											
	speed & duplex settings (configured to autonegotiate, by DEFAULT)
		ethtool <interface>												<-- display or change ethernet card settings; if you alter settings it's best when the interface is not in use
																				also turn off autonegotiation before forcing a manual setting
		ETHTOOL_OPTS													<-- put this in ifcfg-ethX, to be persistent
		"options" OR "install" in /etc/modprobe.conf					<-- for older interface modules
		
		#to manually force 100Mbps full duplex operation on eth1
			ifdown eth1
			ethtool -s eth1 autoneg off speed 100 duplex full
			ifup eth1
			
			ETHTOOL_OPTS="autoneg off speed 100 duplex full"			<-- to make persistent, add it in ifcfg-eth1
			
	ipv4 addresses
		ifconfig
		ip addr
		
	DHCP - dynamic ipv4 configuration
		/etc/sysconfig/network-scripts/ifcfg-ethX
			BOOTPROTO=dhcp
		
		zeroconf					<-- if there is no DHCP server configured, then an address on the 169.254.0.0/16 network is automatically assigned
											these addresses are non-routable
			NOZEROCONF=yes

		dhclient daemon				<-- will negotiate a lease from a DHCP server
		ppd daemon
		
	STATIC ipv4 configuration
		/etc/sysconfig/network-scripts/ifcfg-ethX
			BOOTPROTO=none
			IPADDR=<address>
			NETMASK=<netmask>
			
		ifup 
		ifdown
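		a full interface file for the static case might look like this (the address values are examples, not from a real host):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.203.25
NETMASK=255.255.255.0
GATEWAY=192.168.203.2
```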
		
	DEVICE ALIASES
		- useful for virtual hosting, hosting multiple web or ftp sites on a single server.. separate ip addresses are generally required for each
			website that supports SSL or when defining multiple FTP sites
		- bind multiple addresses to a single NIC, e.g. three (3) logical network addresses
			eth1:1
			eth1:2
			eth1:3
		- create a separate interface config file for each device alias, must use static networking
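		each alias gets its own file, named after the alias device (all values below are example figures):

```
# /etc/sysconfig/network-scripts/ifcfg-eth1:1
DEVICE=eth1:1
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.203.30
NETMASK=255.255.255.0
```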
		
	ROUTING TABLE
		
		#to view table
			route
			netstat -r
			ip route 
			
			[root@centos5-11g ~]# route
			Kernel IP routing table
			Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
			192.168.203.0   *               255.255.255.0   U     0      0        0 eth0			
			169.254.0.0     *               255.255.0.0     U     0      0        0 eth0
			default         192.168.203.2   0.0.0.0         UG    0      0        0 eth0
			
			[root@centos5-11g ~]# netstat -r
			Kernel IP routing table
			Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
			192.168.203.0   *               255.255.255.0   U         0 0          0 eth0
			169.254.0.0     *               255.255.0.0     U         0 0          0 eth0
			default         192.168.203.2   0.0.0.0         UG        0 0          0 eth0
			
			[root@centos5-11g ~]# ip route
			192.168.203.0/24 dev eth0  proto kernel  scope link  src 192.168.203.25		<-- "LOCAL"..packet would be sent physically to the destination address, out the device eth0
			169.254.0.0/16 dev eth0  scope link
			default via 192.168.203.2 dev eth0											<-- "REMOTE"..packet would be sent physically to the router at 192.168.203.2, out the device eth0
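			the LOCAL/REMOTE decision is a longest-prefix match: the most specific matching entry wins, with default (/0) as the fallback. A toy lookup against the same three-entry table (pure arithmetic; no real routing is touched):

```shell
#!/bin/bash
# Illustrate kernel route selection: longest matching prefix wins.
# Table entries mirror the `ip route` output shown above; "link" means
# directly reachable, otherwise the value is the next-hop router.
ip2int() {
    local IFS=. a b c d
    read a b c d <<< "$1"
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

route_lookup() {
    local dst=$(ip2int "$1") best_len=-1 best_via=""
    local table="192.168.203.0/24:link 169.254.0.0/16:link 0.0.0.0/0:192.168.203.2"
    local entry net len via
    for entry in $table; do
        net=${entry%%/*}
        len=${entry#*/}; via=${len#*:}; len=${len%%:*}
        local mask=$(( len == 0 ? 0 : (0xffffffff << (32 - len)) & 0xffffffff ))
        if (( (dst & mask) == ($(ip2int "$net") & mask) )) && (( len > best_len )); then
            best_len=$len; best_via=$via
        fi
    done
    echo "$1 -> /$best_len ($best_via)"
}

route_lookup 192.168.203.40   # LOCAL: delivered on the link
route_lookup 8.8.8.8          # REMOTE: sent via the default gateway
```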
			
	DEFAULT GATEWAY (router)
		- specifies where ip packets should be sent when there is no "more specific match" found in the routing table
			also generally used when there is only "one way out" of the local network
		- if using DHCP, the DHCP server will serve the address of the default gateway.. 
			the dhclient will get the value & set it in the routing table
		
		/etc/sysconfig/network-scripts/ifcfg-ethX		<-- set per interface (will override global if set)
		/etc/sysconfig/network							<-- set globally
	
	CONFIGURING ROUTES
		- control traffic flow when there is more than one router, or more than one interface each attached to different routers, we may want
			to selectively control which traffic goes through which router by configuring additional routes
		
		- static routes defined per interface
			/etc/sysconfig/network-scripts/route-ethX
			ip route add...
			
			#sample command
				ip route add 192.168.22.0/24 via 10.53.0.253
				
				cat /etc/sysconfig/network-scripts/route-ethX
				192.168.22.0/24 via 10.53.0.253
				
		- dynamic routes learned via daemons (the challenge with dynamic routing is reacting when the network changes)
			quagga 
				package that supports RIP - routing information protocol	<-- smaller networks
									OSPF - open shortest path first			<-- enterprise networks
									BGP - border gateway protocol			<-- ISPs
									
	verify ip connectivity
		ping				<-- packet loss & latency measurement tool (sends ICMP - internet control message protocol, default is a 64-byte packet)
		traceroute			<-- displays the network path to a destination (uses UDP datagrams to probe the path)
		mtr					<-- a tool that combines ping & traceroute
		
	defining the local hostname
		/etc/sysconfig/network
		
		- might PULL from the network
			dhclient
			"reverse DNS lookup".. will be done by /etc/rc.d/init.d/network
			
	local resolver
		resolver performs forward & reverse lookups
			forward lookup		looks up the number when we have the name
			reverse lookup		looks up the name when we have the number
				
		/etc/hosts			<-- at a minimum, your hostname should be here, normally checked before DNS
		
	remote resolvers
		/etc/resolv.conf
			- domains to search
			- strict order of name servers to use (DNS)
			- may be updated by dhclient
	
		entries:			
			search 
			domain
			nameserver
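		put together, a typical resolv.conf might read (domain and server addresses are example values):

```
# /etc/resolv.conf
search example.com
nameserver 192.168.203.2
nameserver 8.8.8.8
```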
			
		PEERDNS=no				<-- the dhclient will automatically obtain a list of nameservers from the DHCP server unless the interface config file contains this
		
		/etc/nsswitch.conf		<-- precedence of DNS versus /etc/hosts
		
	verify DNS connectivity
		nslookup (deprecated)
		host
		dig
		
		bind-utils (package)
	
	NETWORK CONFIGURATION UTILITIES
		system-config-network
		
		profile selection:
			system-config-network-cmd --profile <profilename> --activate			<-- switch profiles
			netprofile (kernel argument)											<-- on boot time, choose a profile
			
		transparent dynamic configuration
			networkmanager (package)				<-- for too many profiles..
			nm-applet
		
	IMPLEMENTING IPv6
			enabling/disabling ipv6, set this in /etc/modprobe.conf
				alias net-pf-10 off
				alias ipv6 off
				
			# ip -6 addr

		IPv6 DHCP - dynamic interface configuration
			two ways to dynamically configure ipv6:
				1) router advertisement deamon
						- runs on (Linux) default gateway - radvd
						- only specifies prefix & default gateway
						- enabled with configuration: IPV6_AUTOCONF=yes in /etc/sysconfig/network.. global.. or on interface config
						- interface ID automatically generated based on the MAC address of the system
						- RFC 3041 was developed to protect privacy from the EUI-64 interface ID, enabled with configuration: IPV6_PRIVACY=rfc3041 on the local interface config
				2) DHCP version6
						dhcp6 supports more configuration options		<-- does not listen for broadcasts.. but rather subscribes to the multicast address ff02::16
						enabled with configuration: DHCPV6C=yes on interface config
						
		IPv6 STATIC configuration
			- enabled with configuration: IPV6ADDR												<-- first Global Unicast Address
			- no need for device alias..enabled with configuration: IPV6ADDR_SECONDARIES 		<-- additional Global Unicast Address
			
		IPv6 routing configuration
			Default gateway
				- dynamically from radvd or dhcpv6s
				- manually specify in /etc/sysconfig/network
					configuration: 
						IPV6_DEFAULTGW
						IPV6_DEFAULTDEV				<-- only valid on point-to-point interfaces
				
			Static Routes
				- defined on interface config /etc/sysconfig/network-scripts/route6-ethX
				- or use "ip -6 route add"
		
		New and Modified utilities
			ping6
			traceroute6
			tracepath6
			ip -6
			host -t AAAA hostname6.domain6
			
		
[ ] UNIT 9 - INSTALLATION
	
	anaconda: different modes
		kickstart
		upgrade
		rescue
		
		consists of two stages:
			first stage				<-- boots the system & performs initialization of the system
			second stage			<-- performs the installation
		
			
[ ] UNIT 10 - VIRTUALIZATION WITH XEN
		



[ ] UNIT 11 - TROUBLESHOOTING

	method of fault analysis:
		characterize the problem
		reproduce the problem
		find further information
		eliminate possible causes
		try the easy things first
		backup config files before changing
		
	fault analysis: gathering data
		useful commands:
			history
			grep
			diff
			find / -cmin -60
			strace <command>
			tail -f <logfile>
		generate additional info
			*.debug		/var/log/debug
			--debug option in application
			
	X11: things to check
		never debug X while in runlevel5
		when changing hardware, try system-config-display first..
		X -probeonly						<-- performs all tasks necessary to start the X server w/o actually starting it
													check /usr/share/hwdata/Cards
		/home or /tmp full, quota?
		is XFS running?						<-- once in a while the font indexes in a font directory may be corrupt..run "mkfontdir" to recreate them
													also try commenting out font paths in /etc/X11/fs/config..then run XFS to determine which directories
													have problems
		change hostname? 					<-- exit runlevel 5 first..
		
	NETWORKING
		hostname resolution
			dig <fq hostname>
		ip configuration
			ifconfig
		default gateway
			route -n
		module specification
		device activation
		
	ORDER OF BOOT PROCESS: REVIEW
		bootloader configuration
		kernel
		/sbin/init
			starting init
		/etc/rc.d/rc.sysinit
		/etc/rc.d/rc.. and /etc/rc.d/rc[1,3,5].d/
			entering runlevel X
		/etc/rc.d/rc.local
		X
		
		POSSIBLE ISSUES:
				1) issue: no bootloader splash screen on prompt appears
						cause:
							grub is misconfigured
							boot sector is corrupt
							bios setting such as disk addressing scheme has been modified since the boot sector was written
							
				2) issue: kernel does not load at all, or loads partially before a panic occurs
						cause:
							corrupt kernel image
							incorrect parameters passed to the kernel by the bootloader
							
				3) issue: kernel loads completely, but panics or fails when it tries to mount root filesystem and run /sbin/init
						cause:
							bootloader is misconfigured
							/sbin/init is corrupted
							/etc/fstab is misconfigured
							root filesystem is damaged and unmountable
					
				4) issue: kernel loads completely, and /etc/rc.d/rc.sysinit is started but interrupted
						cause:
							/bin/bash is missing or corrupted
							/etc/fstab may have an error, evident when filesystems are mounted or fsck'd
							errors in software raid or quota specifications
							corrupted non-root filesystem (due to a failed disk)
							
				5) issue: run level errors (typically services)
						cause:
							another service required by a failing service was not configured for a given runlevel
							service-specific configuration errors
							misconfigured X or related services in runlevel5
							
	FILESYSTEM PROBLEMS DURING BOOT
		rc.sysinit attempts to mount local filesystems
		upon failure, user is dropped to a root shell, root in read-only
			fsck to repair
			but before fsck, check /etc/fstab for mistakes
				mount -o remount,rw /..... before editing
			manually test mounting filesystems
			
	RECOVERY RUN-LEVELS (pass run-level to init)
		runlevel 1
			process rc.sysinit & rc1.d scripts
		runlevel s,S,or single
			process only rc.sysinit
		emergency
			run sulogin only..much like a failed disk
	
	RESCUE ENVIRONMENT
		required when root filesystem is unavailable
		non-system specific
		boot from CDROM (boot.iso or CD#1)..then type linux rescue
		boot from diskboot.img on USB device.. then linux rescue
	
		rescue environment utilities
			disk maintenance
			networking
			miscellaneous
			logging: 
				/tmp/syslog				<-- system logging info
				/tmp/anaconda.log		<-- booting info
				/tmp					<-- some more config files are there..
		
		rescue environment details
			filesystem reconstruction									<-- will try to reconstruct the hard disk's filesystem under /mnt/sysimage
				anaconda will ask if filesystems should be mounted
					/mnt/sysimage/*
					/mnt/source
					$PATH includes hard drive's directories
			filesystem nodes
				system-specific device files provided
				"mknod" knows major/minor #'s			<-- for floppies, in order to access them
			
			linux rescue nomount						<-- a corrupted partition table will appear to hang the rescue environment
																ALT-F2 has shell with fdisk
															this command will disable automatic mounting of filesystems & circumvents
															the hanging caused by bad partition tables

	-------------
	# TEST CASES:
	-------------
	
		#REINSTALL GRUB
			prepare the environment:
				# dd if=/dev/zero of=/dev/sda bs=256 count=1 && reboot
		
			option 1) 
						1. just do a /sbin/grub-install <boot device.."/dev/sda"> in rescue mode
							- this will recreate a new folder "grub" but will not recreate grub.conf
					
						2. when you have a separate boot partition which is mounted at /boot, since grub is
							a boot loader, it doesn't know anything about mountpoints at all
							
							# fdisk -l
							# grub-install --root-directory=/boot /dev/hda
							
								NOTE: 
									how to specify a file?						
										(hd0,0)/vmlinuz						<-- normally this is the case.. because you create 100MB separate mount point
																					means that the file name 'vmlinuz', found on the first partition of the first
																					hard disk drive. the argument completion works with file names too.
									
									what else to look for?
										device.map							<-- the content of this should be the disk where the MBR resides.. (hd0) /dev/sda
										grub.conf							<-- this is not created when you do grub-install, either you get a copy from backup
																					or manually recreate it.. DONT FORGET THE "root=LABEL=/"
																					
																					default=0
																					timeout=5
																					splashimage=(hd0,0)/grub/splash.xpm.gz
																					hiddenmenu
																					password --md5 $1$wIn2KEYl$pjKQtiDuiRlqO/8QKkS0X0
																					title CentOS (2.6.18-8.el5)
																					        root (hd0,0)
																					        kernel /vmlinuz-2.6.18-8.el5 ro root=LABEL=/ rhgb quiet
																					        initrd /initrd-2.6.18-8.el5.img
			option 2) if above fails.. then do this..
						1. in rescue mode, type the command "grub" & press enter
						2. type "root (hd0,0)"
						3. type "setup (hd0)"
						4. quit
		
		#RECREATE MKINITRD
			1. take note of the kernel version by typing
					# uname -r
					# uname -a
			2. then, create the initrd image
					# mkinitrd /boot/initrd-$(uname -r).img $(uname -r)				<-- if using non-xen kernel
					# mkinitrd /boot/initrd-$(uname -r)xen.img $(uname -r)xen		<-- if using xen kernel
					
					NOTE: 
						the image name doesn't have to be the same as before.. grub.conf references this file 
						so also check the contents of grub.conf
						
		#ROOT FILESYSTEM READ ONLY, IMMUTABLE PROPERTY ON FSTAB
		
		
		#ROOT FILESYSTEM READ ONLY, RESIZED THE FILESYSTEM TO A SMALLER VALUE (LVM)
		
		
		#CORRUPTED MOUNT COMMAND
			prepare the environment:	
				# cp /bin/date /bin/mount
		
			solution			
				1. load rescue environment
				2. chroot /mnt/sysimage
					rpm -qf /bin/mount
					rpm -V util-linux
					exit
				3. mount the installer through NFS
				4. rpm -ivh --force --root /mnt/sysimage util-linux*
}}}
{{{
###################################################################################################
[ ] UNIT 1 - SYSTEM PERFORMANCE AND SECURITY
###################################################################################################

      System Resources as Services
	    ** Computing infrastructure is comprised of roles
	      systems that serve
	      systems that request
	    ** System infrastructure is comprised of roles
	      processes that serve
	      processes that request
	    ** Processing infrastructure is comprised of roles
	      accounts that serve
	      accounts that request
	    ** System resources, and their use, must be accounted for as policy of securing the system

      Security in Principle
	    Security Domains (this course will focus on Local and Remote)
	    Physical
	    Local 
	    Remote
	    Personnel

      Security in Practice
	    Host only services you must, and only to those you must
	    A service is characterized by its "listening" for an event, like "GET" request on IP port 80

      Security Policy: the People
	    Managing human activities
	    includes Security Policy maintenance
	    The policy is the objective reference against which one can measure

      Security Policy: the System
	    Managing system activities
	    ** Regular system monitoring
	    Log to an external server in case of compromise
	    Monitor logs with logwatch
	    Monitor bandwidth usage inbound and outbound
	    ** Regular backups of system data

      Response Strategies
	    ** Assume suspected system is untrustworthy
	    Do not run programs from the suspected system
	    Boot from trusted media to verify breach

		rpm -V --root=/mnt/sysimage --define '_dbpath /path/to/backup' procps	<-- compares size, md5sum, ownership, etc. of the files
                                                                                            on disk against the read-only backup of the RPM database

	    Analyze logs of remote logger and "local" logs
	    Check file integrity against read-only backup of rpm
	    database
	    ** Make an image of the machine for further
	    analysis/evidence-gathering
	    ** Wipe the machine, re-install and restore
	    from backup

      System Faults and Breaches
	    ** Both affect system performance
	    ** System performance is the security
	    concern
	    a system fault yields an infrastructure void
	    an infrastructure void yields opportunity for
	    alternative resource access
	    an opportunity for alternative resource access yields
	    unaccountable resource access
	    an unaccountable resource access is a breach of security policy

	    "It is therefore essential to monitor system activity, or "behavior", to establish a norm, and prescribe methods to 
	    reinstate this norm should a fault occur. It is also important to implement methods to explain the effects of changes
	    to a system while altering configuration of its resource access"

	    "MATRIX" of access controls
	    ----------------------------------------------------------
	    Access Control          Implementation
	    ----------------------------------------------------------
	    Application             configuration file parameters
	    PAM                     as linked to, and configured in /etc/pam.d/programname
	    xinetd                  as configured in /etc/xinetd.d/service
	    libwrap                 as linked to libwrap.so, or managed by so linked
	    SELinux                 as per SELinux implemented policy
	    Netfilter, IPv6         as configured in /etc/sysconfig/ip6tables
	    Netfilter               as configured in /etc/sysconfig/iptables

      Method of Fault Analysis
	    ** Characterize the problem
	    ** Reproduce the problem
	    ** Find further information

      Fault Analysis: Hypothesis
	    ** Form a series of hypotheses
	    ** Pick a hypothesis to check
	    ** Test the hypothesis
	    ** Note the results, then reform or test a new
	    hypothesis if needed
	    ** If the easier hypotheses yield no positive
	    result, further characterize the problem

      Fault Analysis: Gathering Data
	    ** strace command

		strace -o karl.txt ls /var/lib		<-- trace the system calls made by the command
		grep ' E.' karl.txt			<-- which calls returned errors (ENOENT, EACCES, ...)
		grep 'open' karl.txt			<-- which open() calls were made, i.e. which files the command tried to open

	    ** tail -f logfile

	    ** *.debug in syslog

		/etc/syslog.conf:
		*.debug		/var/log/debug

	    ** --debug option in application

		vi /etc/sysconfig/xinetd
		EXTRAOPTIONS="-d"
		service xinetd restart

      Benefits of System Monitoring
	    ** System performance and security may be maintained with regular system monitoring
	    ** System monitoring includes:
	    Network monitoring and analysis
	    File system monitoring
	    Process monitoring
	    Log file analysis

      Network Monitoring Utilities
	    ** Network interfaces (ip)
	    Show what interfaces are available on a system
	    ** Port scanners (nmap)
	    Show what services are available on a system
	    ** Packet sniffers (tcpdump, wireshark)
	    Store and analyze all network traffic visible to the
	    "sniffing" system

      Networking, a Local view
	    ** The ip utility
	     Called by initialization scripts
	     Greater capability than ifconfig
	    ** Use netstat -ntaupe for a list of:
	     active network servers
	     established connections

	    netstat -tupln		<-- to get all services listening on localhost

      Networking, a Remote view

	    nmap -sS -sU -sR -P0 -A -v station1   <-- performs a TCP SYN scan (-sS), UDP scan (-sU), and RPC/portmap scan (-sR)
                                                       with OS and service version detection (-A) on station1. It prints verbose
                                                       diagnostic information (-v) and does not ping the system before scanning (-P0)

	    nmap -sP 192.168.234.*		<-- scan the whole subnet 

	    nmap <remotehost> | grep tcp	<-- to test which services you can reach on the remote host

	    nmapfe	<-- GUI tool frontend

      File System Analysis

	    df, du
	    stat	<-- reports file size, timestamps, inode and ownership details

	    find ~ -type f -mmin -90 | xargs ls -l	<-- list files modified within the last 90 minutes
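	    A quick sketch of these utilities working together; the /tmp scratch file and its name are purely illustrative:

```shell
# Create a scratch file, then inspect it (GNU coreutils/findutils assumed)
echo -n "12345" > /tmp/scratch.txt
# stat reads size and timestamps straight from the inode
stat -c '%s bytes, modified %y' /tmp/scratch.txt
# find locates it because it was modified within the last 90 minutes
find /tmp -maxdepth 1 -type f -mmin -90 -name scratch.txt
```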

      Typical Problematic Permissions
	    ** Files without known owners may indicate
	    unauthorized access:

	    find / \( -nouser -o -nogroup \)	<-- Locate files and directories whose owner is not in /etc/passwd or whose group is not in /etc/group

	    ** Files/Directories with "other" write
	    permission (o+w) may indicate a problem
 
	    find / -type f -perm -002		<-- Locate other-writable files

	    find / -type d -perm -2		<-- Locate other-writable directories

      Monitoring Processes
	    ** Monitoring utilities
	     top
	     gnome-system-monitor
	     sar

      Process Monitoring Utilities

      System Activity Reporting
	    sysstat RPM

      Managing Processes by Account
	    ** Use PAM to set controls on account resource limits:
	     pam_access.so  <-- can be used to limit access by account and location /etc/security/access.conf
	     pam_time.so    <-- can be used to limit access by day and time         /etc/security/time.conf
	     pam_limits.so  <-- can be used to limit resources available to process /etc/security/limits.conf
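	    As a sketch, limits enforced by pam_limits.so are written one per line in /etc/security/limits.conf;
	    the @students group and the values below are illustrative, not recommendations:

```
# /etc/security/limits.conf    format: <domain> <type> <item> <value>
@students   hard   nproc   50     # members of group students: at most 50 processes
ftp         hard   nproc   0      # the ftp account may not spawn processes at all
*           soft   core    0      # no core dumps by default; users may raise the soft limit
```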

      System Log Files
	    ** Logging Services:
	     syslogd   <-- most daemons send their messages here
	     klogd     <-- handles kernel messages

	     /etc/syslog.conf	<-- configuration file

	    /var/log/messages		<-- most system messages
	    /var/log/audit/audit.log	<-- audit subsystem and SELinux messages
	    /var/log/secure			<-- authentication messages, xinetd services
	    /var/log/xferlog		<-- FTP (vsftpd) transactions
	    /var/log/maillog		<-- mail transactions

      syslogd and klogd Configuration
	    <facility>.<priority>	<loglocation>		<-- format
	    mail.info	/dev/tty8				<-- example
	    kern.info   /var/log/kernel				<-- example

	    Facility                                     Priority
	    -------------------------------------------  -----------------
	    authpriv    security/authorization messages  debug    debugging information
	    cron        clock daemons (atd and crond)    info     general informative messages
	    daemon      other daemons                    notice   normal, but significant, condition
	    kern        kernel messages                  warning  warning messages
	    local[0-7]  reserved for local use           err      error condition
	    lpr         printing system                  crit     critical condition
	    mail        mail system                      alert    immediate action required
	    news        news system                      emerg    system no longer available
	    syslog      internal syslog messages
	    user        generic user level messages

	    CENTRALIZED HOST LOGGING: 
	      1) on the remote host setup syslogd to accept remote messages
		    edit /etc/sysconfig/syslog
		    SYSLOGD_OPTIONS="-r -m 0"
	      2) restart syslogd
	      3) setup syslogd on the source host
		    vi /etc/syslog.conf
		    user.* @remotehost
	      4) restart syslogd
	      5) test it using the "logger" command
		    logger -i -t yourname "This is a test"
	      6) view the log files on the remote and source host

      Log File Analysis
	    logwatch RPM

      Virtualization with Xen

      Xen Domains
	    /etc/xen/<domain>		<-- Dom-U configuration files are stored in this directory on Dom-0
	    virt-manager (GUI) or xm console (text) 	<-- front-end consoles
	    xmdomain.cfg(5)		<-- help file

      Xen Configuration
	    xenbr0		<-- network by default is mapped to this interface
	    xendomains		<-- starts, at boot, the domains whose Xen configuration files are linked in /etc/xen/auto
                                    create a symbolic link /etc/xen/auto/<name of Dom-U> to auto-start a domain

      Domain Management with xm
	    "xm" tool sends commands to "Xend" which relays the commands to the Hypervisor

	    Controlling domains:
	    ---------------------
	    xm <create | destroy> domain
	    xm <pause | unpause> domain
	    xm <save | restore> domain filename
	    xm <shutdown | reboot> domain

	    Monitoring domains:
	    ---------------------
	    xm list
	    xm top
	    xm console domain



###################################################################################################
[ ] UNIT 2 - SYSTEM SERVICE ACCESS CONTROLS
###################################################################################################

      System Resources Managed by init

	    ** Services listening for serial
		protocol connections
		a serial console
		a modem
	    ** Configured in /etc/inittab
	    ** Calls the command rc to spawn initialization scripts
	    ** Calls a script to start the X11 Display Manager
	    ** Provides respawn capability
	    co:23:respawn:/sbin/agetty -f /etc/issue.serial 19200 ttyS1

      System Initialization and Service Management

	    ** Commonly referred to as "System V" or
	    "SysV"
	    Many scripts organized by file system directory
	    semantics
	    Resource services are either enabled or disabled
	    ** Several configuration files are often used
	    ** Most services start one or more processes
	    ** Commands are "wrapped" by scripts
	    ** Services are managed by these scripts,
	    found in /etc/init.d/
	    ** Examples:
	    /etc/init.d/network status
	    service network status

      chkconfig

	    ** Manages service definitions in run levels
	    ** To start the cups service on boot:
	    chkconfig cups on
	    ** Does not modify current run state of System
	    V services
	    ** Used for standalone and transient services
	    ** Called by other applications, including
	    system-config-services
	    ** To list run level assignments, run chkconfig
	    --list

      Initialization Script Management
	    chkconfig --list   <-- provides a listing of all services that are started via initialization scripts or xinetd
                                -- it only maintains the symbolic links in /etc/rcX.d/ and the xinetd configuration. It does not 
                                   start or stop the services or control the behavior of other services

      The /etc/sysconfig/ files

	    * Some services are configured for how they run
	      - named
	      - sendmail
	      - dhcpd
	      - samba
	      - init
	      - syslog

	    /etc/sysconfig/  <-- many files under this directory describe hardware configuration
			      -- some of them configure service run-time parameters!!! and "configure the manner" of daemon execution!!!

	    /etc/init.d/     <-- files under here are executables that "configure the conditions" of daemon execution

	    /usr/share/doc/initscripts-9.02/sysconfig.txt  <-- this is where the /etc/sysconfig/ files are documented

      XINETD MANAGED SERVICES

	    ** Transient services are managed by the xinetd service	<-- transient services are not configured for a given runlevel
                                                                            but whether xinetd should manage the port and connections to these services

	      /etc/services	<-- port-to-service management list used by xinetd

              xinetd provides the following: 
              ------------------------------
		- host-based authentication
		- resource logging
		- timed access
		- address redirection
		- etc.

	    ** Incoming requests are brokered by xinetd
	    ** Configuration files: 

	      /etc/xinetd.conf         <-- config files
	      /etc/xinetd.d/<service>

	    ** Linked with libwrap.so; services compiled with this will first apply the hosts_access(5) rules when a service is requested.
              If those rules allow access, then xinetd's internal access control policies are checked

	      [root@karl ~]# ldd /usr/sbin/xinetd
		      linux-vdso.so.1 =>  (0x00007fff4b9ff000)
		      libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f58b3c3b000)
		      libwrap.so.0 => /lib64/libwrap.so.0 (0x00007f58b3a31000)		<-- xinetd is linked to libwrap.so
		      libnsl.so.1 => /lib64/libnsl.so.1 (0x00007f58b3818000)
		      libm.so.6 => /lib64/libm.so.6 (0x00007f58b3594000)
		      libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f58b335d000)
		      libc.so.6 => /lib64/libc.so.6 (0x00007f58b2fe5000)
		      libdl.so.2 => /lib64/libdl.so.2 (0x00007f58b2de1000)
		      /lib64/ld-linux-x86-64.so.2 (0x00007f58b3e59000)
		      libfreebl3.so => /usr/lib64/libfreebl3.so (0x00007f58b2b82000)

	    ** Services controlled with chkconfig:
	    chkconfig tftp on

      XINETD DEFAULT CONTROLS

	    ** Top-level configuration file
	    
	    # /etc/xinetd.conf
	    defaults
	    {
	    instances = 60
	    log_type = SYSLOG authpriv
	    log_on_success = HOST PID
	    log_on_failure = HOST
	    cps = 25 30
	    }
	    includedir /etc/xinetd.d

	    * Can be overridden or appended-to in service-specific configuration files in /etc/xinetd.d

	    man xinetd.conf  <-- all the xinetd configuration parameters are documented, Extended Internet Services Daemon configuration file

      XINETD SERVICE CONFIGURATION

	    ** Service specific configuration
	    /etc/xinetd.d/<service>

	    yum install tftp-server

	    /etc/xinetd.d/tftp:		<-- All service config utilities will edit the appropriate xinetd service config files by calling "chkconfig".
                                         -- When xinetd is started, each enabled service is called when a connection is attempted on a specific network port
	    # default: off
	    service tftp
	    {
	    disable = yes			<-- determines whether or not xinetd will accept connections for the service
	    socket_type = dgram
	    protocol = udp
	    wait = yes
	    user = root
	    server = /usr/sbin/in.tftpd		<-- the binary used to run the service, used by libwrap.so (tcp_wrappers)
	    server_args = -c -s /tftpboot
	    per_source = 11
	    cps = 100 2
	    flags = IPv4
	    }

      XINETD ACCESS CONTROLS

	    ** Syntax
	    Allow with only_from = host_pattern
	    Deny with no_access = host_pattern
	    The most exact specification is authoritative
	    ** Example
	    only_from = 192.168.0.0/24		<-- if nothing is specified then it defaults to ALL hosts
	    no_access = 192.168.0.1

	    service telnet			<-- this allows access to the telnet service only from the 192.168.0.0/24 network,
						and of those hosts, 192.168.0.1 is denied access
	    {
	    disable = yes
	    flags = REUSE
	    socket_type = stream
	    wait = no
	    user = root
	    only_from = 192.168.0.0/24
	    no_access = 192.168.0.1
	    server = /usr/bin/in.telnetd
	    log_on_failure += USERID
	    }


http://www.cyberciti.biz/faq/how-do-i-turn-on-telnet-service-on-for-a-linuxfreebsd-system/
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch16_:_Telnet,_TFTP,_and_xinetd#.Ua-SuGRATbA



      Host Pattern Access Controls

	    ** Host masks for xinetd may be:
		- numeric address (192.168.1.0)		<-- full or partial; rightmost zeros are treated as wildcards
                                                         -- example: 192.168.1.0

		- network name (from /etc/networks)	<-- network names from /etc/networks or NIS
                                                         -- does not work together with usernames
                                                         -- example: @mynetwork

		- hostname or domain (.domain.com)	<-- performs a reverse lookup every time a client connects
                                                         -- example: .example.com (all hosts in the example.com domain)

		- IP address/netmask range (192.168.0.0/24)	<-- must specify the complete network address and netmask
                                                                 -- example: 192.168.0.0/24

	    ** Number of simultaneous connections
		Syntax: per_source = 2			<-- limits the number of simultaneous connections per IP address, Cannot exceed maximum instances

      SERVICE AND APPLICATION ACCESS CONTROLS

	    ** Service-specific configuration
		Daemons like httpd, smbd, squid, etc. provide service-specific security mechanisms

	    ** General configuration
		All programs linked with libwrap.so use common configuration files
		Because xinetd is linked with libwrap.so, its services are affected
		Checks for host and/or remote user name

	    /etc/hosts.allow	<-- when a client connects to a "tcp wrapped" service, these files are examined, then choose to accept or drop the connection
	    /etc/hosts.deny

	    all processes controlled by XINETD automatically use libwrap.so (tcp_wrappers)

	    Here are the standalone daemons linked with libwrap.so:
		- sendmail
		- slapd
		- sshd
		- stunnel
		- xinetd
		- gdm
		- gnome-session
		- vsftpd
		- portmap

      TCP_WRAPPERS CONFIGURATION

	    libwrap.so implements a "STOP ON FIRST MATCH" policy!!!
	
	    changes to the access files are effective immediately for all new connections!!!

	    ** Three stages of access checking
		Is access explicitly permitted?
		Otherwise, is access explicitly denied?
		Otherwise, BY DEFAULT, PERMIT ACCESS!

	    ** Configuration stored in two files:
		Permissions in /etc/hosts.allow
		Denials in /etc/hosts.deny

	    ** Basic syntax:
	    daemon_list: client_list [:options]

      Daemon Specification

	    "nfs and nis" uses "portmap" service for RPC messages <-- http://querieslinux.blogspot.com/2009/08/what-is-port-map-why-is-it-required.html
                                                                   -- block the underlying "portmap" for RPC based services like NFS and NIS

	    ** Daemon name:
		Applications pass name of their executable
		Multiple services can be specified
		Use wildcard ALL to match all daemons
		Limitations exist for certain daemons

	    ** Advanced Syntax:
		daemon@host: client_list ...

	    EXAMPLES:
	    in.telnetd:	192.168.0.1
	    sshd, gdm:	192.168.0.1	<-- comma delimited list of daemons

	    in.telnetd@192.168.0.254:	192.168.0.	<-- if your host has two interface cards and if you want different policies for each, do this!
	    in.telnetd@192.168.1.254:	192.168.1.

      Client Specification

	    ** Host specification
	    by IP address (192.168.0.1,10.0.0.)
	    by name (www.redhat.com, .example.com)
	    by netmask (192.168.0.0/255.255.255.0)
	    by network name

      Macro Definitions (also known as WILDCARDS)

	    ** Host name macros
	    LOCAL		- all hosts without a dot in their name
	    KNOWN		- all hostnames that can be resolved
	    UNKNOWN		- all hostnames that cannot be resolved
	    PARANOID		- all hostnames where forward and reverse lookup do not match, or resolve
	    ** Host and service macro
	      ALL		- always matches all hosts and all services
	    ** EXCEPT		- exclude some hosts from your match
	    Can be used for client and service list
	    Can be nested

	    ----------------------------------
	    /etc/hosts.allow
		sshd: ALL EXCEPT .cracker.org EXCEPT trusted.cracker.org

	    /etc/hosts.deny
		sshd: ALL
	    ----------------------------------
	    "Because of the catch-all rule in hosts.deny this ruleset would allow only those who have been explicitly granted access to ssh into the system.
            In hosts.allow we granted access to everyone except for hosts in the cracker.org domain, but to this rule we make an exception: 
            we will allow the host trusted.cracker.org to ssh in despite the ban on cracker.org"

      Extended Options

	    man 5 hosts_options	<-- documentation

	    ** Syntax:
		daemon_list: client_list [:opt1 :opt2...]

	    ** spawn
		Can be used to start additional programs
		Special expansions are available (%c, %s)
		  %c client information
		  %s server information
		  %h the client's hostname
		  %p server PID

	    ** Example:
		in.telnetd: ALL : spawn echo "login attempt from %c to %s" \
		| mail -s warning root

	    ** DENY
		Can be used as an option in hosts.allow

	    ** Example:
		ALL: ALL: DENY

      A TCP_WRAPPERS EXAMPLE

	    * CONSIDER THE FOLLOWING EXAMPLE FOR THE MACHINE 192.168.0.254 ON A CLASS C NETWORK
	    -----------------------------------------------
	    # /etc/hosts.allow
	    vsftpd:			192.168.0.
	    in.telnetd, portmap: 	192.168.0.8

	    # /etc/hosts.deny
	    ALL: .cracker.org EXCEPT trusted.cracker.org
	    vsftpd, portmap:	ALL
	    sshd: 192.168.0.	EXCEPT 192.168.0.4
	    -----------------------------------------------
	    OBSERVATIONS:
	      1) only stations on the local network can ftp to the machine
	      2) only station8 could NFS-mount a directory from the machine (remember that NFS relies on portmap)
	      3) all hosts in cracker.org, except trusted.cracker.org are denied access to any tcp-wrapped services
	      4) only host 192.168.0.4 is able to ssh in from the local network

	    QUESTIONS:
	    1) what stations from the local network can initiate a telnet connection to this machine?
	    2) can machines in the cracker.org network access the web server?
	    3) what tcp-wrapped services are available to a system from someother.net? what's wrong with these rules from the perspective
	      of a security policy? 

	    * A REALISTIC EXAMPLE - USING A "MOSTLY CLOSED APPROACH"
	    -----------------------------------------------
	    # /etc/hosts.allow
	    ALL:    127.0.0.1 [::1]
	    vsftpd: 192.168.0.
	    in.telnetd, sshd: .example.com 192.168.2.5
	    
	    # /etc/hosts.deny
	    ALL: ALL
	    -----------------------------------------------
	    The above example denies access to all tcp-wrapped services for everyone, except those hosts which are explicitly allowed. 
	    In this case "ftp" access is allowed to all hosts in the 192.168.0. subnet, while "telnet and ssh" are allowed for everyone
	    in the example.com domain as well as host 192.168.2.5. 
	    Additionally, all services are available via the loopback adapter. This is a better method for tightening down a system!
	    It is a simpler, more direct approach and is much easier to maintain.

      XINETD AND TCP_WRAPPERS

	    ** xinetd provides its own set of access control functions
		host-based
		time-based
	    ** tcp_wrappers is still used (still authoritative)
		xinetd is compiled with libwrap support
		If libwrap.so allows the connection, then xinetd security configuration is evaluated

	    THE WORKFLOW:
	    --------------
	    1) tcp_wrappers is checked
	    2) If access is granted, xinetd's security directives are evaluated
	    3) If xinetd accepts the connection, then the requested service is called and its service-specific security is evaluated

      SELinux

	    ** Mandatory Access Control (MAC) -vs- Discretionary Access Control (DAC)
	    ** A rule set called the policy determines how strict the control
	    ** Processes are either restricted or unconfined
	    ** The policy defines what resources restricted processes are allowed to access
	    ** Any action that is not explicitly allowed is, by default, denied

      SELinux Security Context

	    ** All files and processes have a security context

	    ** The context has several elements, depending on the security needs
		user:role:type:sensitivity:category
		user_u:object_r:tmp_t:s0:c0
		Not all systems will display s0:c0

	    ** ls -Z					<-- View security context of a file
	      ls -Zd					<-- View security context of a directory

	    ** ps -Z					<-- View security context of a process, determines if a process is protected
              ps -ZC bash				 -- any type with "unconfined_t" is not yet restricted by SELinux!!!
		Usually paired with other options, such as -e

	    * To SELinux, everything is an object and access is controlled by "security elements" stored in the inode's extended attribute fields
	    * Collectively the "elements" are called the "security context"

	    * There are 5 supported elements
	    1) USER	- indicates the type of user that is logged into the system; it stays the same even if privileges are elevated. Processes have a value of system_u
	    2) ROLE	- defines the purpose of the particular file, process, or user
	    3) TYPE	- used by "Type Enforcement" to specify the nature of the data in a file or process. Rules within the policy say which process
			      types can access which file types
	    4) SENSITIVITY - a security classification sometimes used by government agencies
	    5) CATEGORY - similar to group, but can block root's access to confidential data

	    RHEL4			<-- protects 13 processes
	    RHEL5 (initial release)	<-- protects 88 processes (still increasing), with 624 TYPE elements

      SELinux: Targeted Policy

	    ** The targeted policy is loaded at install time
	    ** Most local processes are unconfined
	    ** Principally uses the type element for type enforcement
	    ** The security context can be changed with chcon

		chcon -t tmp_t /etc/hosts			<-- change the security context

		chcon --reference /etc/shadow anaconda-ks.cfg	<-- takes the security context from one object and apply it to another

		restorecon /etc/hosts				<-- the policy determines and applies the object's default context (safer)

      SELinux: Management

	    ** Modes: Enforcing (default), Permissive, Disabled
		/etc/sysconfig/selinux
		system-config-securitylevel
		getenforce and setenforce 0 | 1		<-- see/change current mode
		Disable from GRUB with selinux=0		<-- disable needs reboot!
	    ** Policy adjustments: Booleans, file contexts, ports, etc.
		system-config-selinux (from policycoreutils-gui package)
		getsebool and setsebool
		semanage
	    ** Troubleshooting
		Advises on how to avoid errors, not ensure security!
		setroubleshootd and sealert -b

	    /var/log/audit/audit.log	<-- where SELinux logs errors, if auditd is running
	    /var/log/messages		<-- secondary logging

      SELinux: semanage (modular - targeted policy)

	    ** Some features controlled by semanage
	    ** Recompiles small portions of the policy
	    ** semanage function -l
	    ** Most useful in high security environments

	    6 functions that semanage manipulates: 
	    ----------------------------------------
	    1) login       assigns clearances to users at login
	    2) user        assigns role transitions for users, allows for multiple privilege tiers between traditional user and root
	    3) port        allows confined daemons to bind to non-standard ports
	    4) interface   used to assign a security clearance to a network interface
	    5) fcontext    defines the file contexts used by restorecon
	    6) translation translates sensitivity and categories into names

      SELinux: File Types

	    ** A managed service type is called its domain
	    ** Allow rules in the policy define what file types a domain may access
	    ** The policy is stored in a binary format, obscuring the rules from casual viewing
	    ** Types can be viewed with semanage

		semanage fcontext -l	<-- list of types decompiled into human readable output

		cat /etc/selinux/targeted/contexts/files/file_contexts | grep named	<-- list of types decompiled into human readable output

		ps -ZC named	<-- generally a daemon runs with a type value similar to its binary name, and can access files
                                    of a similar type

		semanage fcontext -l | cut -d: -f3 | sort -u | grep "named.*_t"

	    ** public_content_t	<-- special type that may be available for data that would be shared by several daemons



###################################################################################################
[ ] UNIT 3 - SECURING DATA
###################################################################################################

      The Need For Encryption

	    ** Susceptibility of unencrypted traffic
		password/data sniffing
		data manipulation
		authentication manipulation
		equivalent to mailing on postcards
	    ** Insecure traditional protocols
		telnet, FTP, POP3, etc. : insecure passwords
		sendmail, NFS, NIS, etc.: insecure information
		rsh, rcp, etc.: insecure authentication

      Cryptographic Building Blocks

	    ** Random Number Generator
	    ** One Way Hashes
	    ** Symmetric Algorithms
	    ** Asymmetric (Public Key) Algorithms
	    ** Public Key Infrastructures
	    ** Digital Certificates

	    ** Two implementations of Cryptographic services for RHEL:
		1) openssl, 
		2) gpg (Gnu Privacy Guard)

      Random Number Generator

	    ** Pseudo-Random Numbers and Entropy (movements - mouse, disk io, etc.)
		Sources
		keyboard and mouse events
		block device interrupts

	    ** Kernel provides sources (reads the entropy pool)
		/dev/random:
		  - best source
		  - blocks when entropy pool exhausted
		/dev/urandom:
		  - draws from entropy pool until depleted
		  - falls back to pseudo-random generators

	    ** openssl rand [ -base64 ] num
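	    A quick sketch of both sources (assumes openssl and coreutils are installed; the output differs on every run):

```shell
# 16 random bytes, base64-encoded (24 characters of output)
openssl rand -base64 16
# The same idea reading the kernel's pool directly; urandom never blocks
head -c 16 /dev/urandom | base64
```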

      One-Way Hashes (used to check software that was downloaded)

	    ** Arbitrary data reduced to small "fingerprint"
		arbitrary length input
		fixed length output
		If data changed, fingerprint changes ("collision free")
		data cannot be regenerated from fingerprint ("one way")
	    ** Common Algorithms
		md2, md5, mdc2, rmd160, sha, sha1
	    ** Common Utilities
		sha1sum [ --check ] file
		md5sum [ --check ] file
		openssl, gpg
		rpm -V
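	    The download-verification workflow looks like this; /tmp/payload is a stand-in for the downloaded file:

```shell
# Record the fingerprint, then verify the file has not changed
echo "important data" > /tmp/payload
sha1sum /tmp/payload > /tmp/payload.sha1
sha1sum --check /tmp/payload.sha1        # prints "/tmp/payload: OK"
```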

      Symmetric Encryption (used for passphrase from plain text to "ciphertext" - vice versa)

	    ** Based upon a single Key
		used to both encrypt and decrypt
	    ** Common Algorithms
		DES, 3DES, Blowfish, RC2, RC4, RC5, IDEA, CAST5
	    ** Common Utilities
		passwd (modified DES)
		gpg (3DES, CAST5, Blowfish)
		openssl
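	    A minimal round trip with openssl, using a modern cipher (aes-256-cbc) rather than the older algorithms listed above;
	    the passphrase and file paths are illustrative:

```shell
echo "secret message" > /tmp/plain.txt
# Encrypt and decrypt with the same single key (passphrase): that is what "symmetric" means
openssl enc -aes-256-cbc -salt -in /tmp/plain.txt -out /tmp/plain.enc -pass pass:mypassphrase
openssl enc -d -aes-256-cbc -in /tmp/plain.enc -out /tmp/roundtrip.txt -pass pass:mypassphrase
cmp /tmp/plain.txt /tmp/roundtrip.txt && echo "round trip OK"
```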

      Asymmetric Encryption I (public and private key - then distribute public key)

	    ** Based upon public/private key pair
		What one key encrypts, the other decrypts
	    ** Protocol I: Encryption without key synchronization
		Recipient
		  - generates public/private key pair: P and S
		  - publishes public key P, guards private key S
		Sender
		  - encrypts message M with recipient's public key
		  - sends P(M) to recipient
		Recipient
		  - decrypts with secret key to recover: M = S(P(M))

      Asymmetric Encryption II (public and private key - combines encryption and digital signature)

	    ** Protocol II: Digital Signatures
		Sender
		  - generates public/private key pair: P and S
		  - publishes public key P, guards private key S
		  - encrypts message M with private key S
		  - sends recipient S(M)
	    Recipient
		  - decrypts with sender's public key to recover M = P(S(M))
	    ** Combined Signature and Encryption
	    ** Detached Signatures
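	    A detached-signature round trip can be sketched with openssl (the key and file names are illustrative):

```shell
# Sender: generate the key pair P/S, publish the public half
openssl genrsa -out /tmp/sign.key 2048 2>/dev/null
openssl rsa -in /tmp/sign.key -pubout -out /tmp/sign.pub 2>/dev/null
# Sender: sign a sha256 hash of the message with the private key S
echo "release contents" > /tmp/artifact
openssl dgst -sha256 -sign /tmp/sign.key -out /tmp/artifact.sig /tmp/artifact
# Recipient: verify with the public key P; prints "Verified OK"
openssl dgst -sha256 -verify /tmp/sign.pub -signature /tmp/artifact.sig /tmp/artifact
```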

      Public Key Infrastructures

	    ** Asymmetric encryption depends on public key integrity
	    ** Two approaches discourage rogue public keys:
		Publishing Key fingerprints
		Public Key Infrastructure (PKI)
		  - distributed web of trust
		  - hierarchical Certificate Authorities
	    ** Digital Certificates

      Digital Certificates (Third Party)

	    ** Certificate Authorities
	    ** Digital Certificate
		Owner: Public Key and Identity
		Issuer: Detached Signature and Identity
		Period of Validity
	    ** Types
		Certificate Authority Certificates
		Server Certificates
	    ** Self-Signed certificates

      Generating Digital Certificates

	    ** X.509 Certificate Format 					<-- The Standard FORMAT

	    ** Generate a public/private key pair and define identity

	      openssl genrsa -out server1.key.pem 1024			<-- 1st step

	    ** Two Options:						<-- 2nd step
		1) Use a Certificate Authority
		  ▪ generate a certificate signing request (CSR)

		      openssl req -new -key server1.key.pem -out server1.csr.pem

		  ▪ send the CSR to the CA
		  ▪ receive the signed certificate from the CA

		2) Self-Signed Certificates				<-- 2nd step alternative (self-signed)
                                                                            the owner is also the issuer. Such certificates are appropriate
                                                                            for root-level CAs, or in situations where encryption is desired
                                                                            but authenticating identity is not necessary
		  ▪ sign your own public key

		      openssl req -new -key server1.key.pem -out server1.crt.pem -x509

	    http://www.cacert.org/					<-- CERTificate authorities that do not require payment!
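The two steps above can also be collapsed into one command, and the result inspected with openssl; a sketch, where the -subj fields are placeholder values:

```shell
# one-step alternative: key and self-signed certificate together, non-interactively
# (the -subj fields are placeholder values)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server1.key.pem -out server1.crt.pem \
    -subj "/C=US/O=Example/CN=server1.example.com"

# inspect the result: subject, issuer and validity period
openssl x509 -in server1.crt.pem -noout -subject -issuer -dates
# for a self-signed certificate, subject and issuer are identical
```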

	    [root@karl ~]# cd /etc/pki/tls/certs 			<-- the DIRECTORY (RHEL5) where you create your certificates
                                                                         -- you may also create self signed certificate here!

			    /usr/share/ssl/certs			<-- the DIRECTORY on RHEL4

		[root@karl certs]# ls -ltr
		total 664
		-rw-r--r-- 1 root root 669565 2009-07-22 22:33 ca-bundle.crt
		-rw-r--r-- 1 root root   2242 2009-11-18 22:10 Makefile
		-rwxr-xr-x 1 root root    610 2009-11-18 22:10 make-dummy-cert
		[root@karl certs]# make
		This makefile allows you to create:
		  o public/private key pairs
		  o SSL certificate signing requests (CSRs)
		  o self-signed SSL test certificates

		To create a key pair, run "make SOMETHING.key".
		To create a CSR, run "make SOMETHING.csr".
		To create a test certificate, run "make SOMETHING.crt".
		To create a key and a test certificate in one file, run "make SOMETHING.pem".

		To create a key for use with Apache, run "make genkey".
		To create a CSR for use with Apache, run "make certreq".
		To create a test certificate for use with Apache, run "make testcert".

		To create a test certificate with serial number other than zero, add SERIAL=num

		Examples:
		  make server.key
		  make server.csr
		  make server.crt
		  make stunnel.pem
		  make genkey
		  make certreq
		  make testcert
		  make server.crt SERIAL=1
		  make stunnel.pem SERIAL=2
		  make testcert SERIAL=3

      OpenSSH Overview

	    ** OpenSSH replaces common, insecure network communication applications

	    ** Provides user and token-based authentication

	    ** Capable of tunneling insecure protocols through port forwarding (rsync & rdist)

	    ** System default configuration (client and server) resides in /etc/ssh/	<-- configuration file!

	    Below is the list of RPMs and what they provide:
	    ------------------------------------------------
	    openssh                ssh-keygen, scp
	    openssl                cryptographic libraries and routines required by openssh
	    openssh-clients        ssh, slogin, ssh-agent, ssh-add, sftp
	    openssh-askpass        X11 passphrase dialog
	    openssh-askpass-gnome  GNOME passphrase dialog
	    openssh-server         sshd				<-- install this only if you are providing "remote" shell access

      OpenSSH Authentication

	    ** The sshd daemon can utilize several different authentication methods
		password (sent securely)
		RSA and DSA keys
		Kerberos
		s/key and SecureID
		host authentication using system key pairs

      The OpenSSH Server

	    ** Provides greater data security between networked systems
		private/public key cryptography
		compatible with earlier restricted-use commercial versions of SSH
	    ** Implements host-based security through
		libwrap.so

	    SSHD is installed with the following RPMs... 
		openssl
		openssh
		openssh-server

      Service Profile: SSH

	    ** Type: System V-managed service
	    ** Packages: openssh, openssh-clients, openssh-server
	    ** Daemon: /usr/sbin/sshd
	    ** Script: /etc/init.d/sshd
	    ** Port: 22
	    ** Configuration: /etc/ssh/*, $HOME/.ssh/
	    ** Related: openssl, openssh-askpass, openssh-askpass-gnome, tcp_wrappers

      OpenSSH Server Configuration

	    ** SSHD configuration file
		/etc/ssh/sshd_config	<-- the configuration file
	    ** Options to consider
	    Protocol
	    ListenAddress
	    PermitRootLogin
	    Banner

	    Some of the configurations at /etc/ssh/sshd_config
	    --------------------------------------------------

	    Protocol 2			<-- only allow SSH2

	    ListenAddress 192.168.0.250:22		<-- configure to listen on multiple interfaces and multiple ports

	    PermitRootLogin no				<-- don't allow direct remote ROOT ssh
	    PermitRootLogin forced-commands-only	<-- don't allow direct remote ROOT ssh
	    PermitRootLogin without-password		<-- don't allow direct remote ROOT ssh, but allow using public-key

	    /etc/issue.net		<-- the banner!

      The OpenSSH Client

	    ** Secure shell sessions
		ssh hostname
		ssh user@hostname
		ssh hostname remote-command
	    ** Secure remote copy files and directories
		scp file user@host:remote-dir
		scp -r user@host:remote-dir localdir
	    ** Secure ftp provided by sshd
		sftp host
		sftp -C user@host

      Port Forwarding!

	    * ssh and sshd can forward TCP traffic
	    * obtuse syntax can be confusing
		-L clientport:host:hostport
		-R serverport:host:hostport
	    * can be used to bypass access controls
		- requires successful authentication to remote sshd by client
		- AllowTcpForwarding

	    --------------------------------------------------------------------------------------------------------

	    ssh -L 3025:mail.example.com:25 -N station1.example.com

	    Tells sshd on station1.example.com:
	    I, the ssh client, will listen for traffic on my host's port 3025 and send it to you, sshd on station1. 
	    You will decrypt it and forward that traffic to port 25 on mail.example.com as if it came from you

	    --------------------------------------------------------------------------------------------------------

	    ssh -R 3025:mail.example.com:25 -N station1.example.com

	    Tells sshd on station1.example.com:
	    You, the sshd on station1, will listen for traffic on your port 3025 and send it to me.
	    I, the ssh client, will decrypt it and forward that traffic to port 25 on mail.example.com as if it came from me

	    --------------------------------------------------------------------------------------------------------

	    Also check here http://www.walkernews.net/2007/07/21/how-to-setup-ssh-port-forwarding-in-3-minutes/
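The -L/-R syntax is easy to get backwards. On OpenSSH 6.8 and later, ssh -G prints the client configuration that would be used, including the parsed forwardings, without actually connecting; a sketch using the example.com placeholder hosts from the text:

```shell
# parse-only check of the two tunnels above -- no connection is made
ssh -G -L 3025:mail.example.com:25 station1.example.com | grep -i localforward
ssh -G -R 3025:mail.example.com:25 station1.example.com | grep -i remoteforward
```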

      Protecting Your Keys

	    MORE POWERFUL: combined SSH-AGENT and PASSPHRASE

	    ** ssh-add -- collects key passphrases
	    ** ssh-agent -- manages key passphrases
	    ** ssh-copy-id -- copies public keys to other hosts
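A sketch of the passphrase-plus-agent workflow, assuming OpenSSH; the file name demo_key, the passphrase, and the comment are example values:

```shell
# generate a key pair protected by a passphrase
ssh-keygen -t rsa -b 2048 -f demo_key -N 'use-a-strong-passphrase' -C 'demo@example.com'

# publish the fingerprint so peers can verify the public key out of band
ssh-keygen -l -f demo_key.pub

# start an agent for this shell; ssh-add then asks for the passphrase once,
# and later ssh/scp invocations use the cached key without prompting
eval "$(ssh-agent -s)"
# ssh-add demo_key            # interactive: prompts for the passphrase
ssh-add -l || true            # list fingerprints of loaded identities
ssh-agent -k > /dev/null      # tear the demo agent down again
```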

      Applications: RPM

	    ** Two implementations of file integrity
	    ** Installed Files
		MD5 One-way hash
		rpm --verify package_name (or -V)			<-- compare the files currently in the system against their original form
	    ** Distributed Package Files
		GPG Public Key Signature
		rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat*	<-- import the GPG key (public)
		rpm --checksig package_file_name (or -K)		<-- check the signature of RPMs downloaded from the internet

      VNC-SSH TUNNEL

	    PRE-REQ:
	    -----------
	    stationx		<-- vncviewer, must be able to authenticate for SSH
	    stationx+100	<-- vncserver, must have AllowTcpForwarding (yes, default)

	    STEP BY STEP:
	    -----------
	    1) on stationx+100 do
	      vncserver
	      netstat -tupln | grep vnc
	      
	    2) on stationx do 
	      ssh -L 5901:stationx+100:5901 stationx+100	<-- establish an SSH tunnel

	      ssh -Nf -L 5901:stationx+100:5901 stationx+100	<-- same tunnel, but run in the background (-f) without executing a remote command (-N)

	    3) on stationx do
	      vncviewer localhost:5901



###################################################################################################
[ ] UNIT 4 - NETWORK RESOURCE ACCESS CONTROLS
###################################################################################################

      Routing

	    ** Routers transport packets between different networks
	    ** Each machine needs a default gateway to reach machines outside the local network
	    ** Additional routes can be set using the route command

	    ipv4 - 32-bit addressing - about 4 billion unique addresses
	    ipv6 - 128-bit addressing - about 3.4 x 10^38 unique addresses

	    ip
	    route -n 		<-- display routing table
	    traceroute <ip>	<-- diagnose routing problems

      Why IPV6? 

	    ** Larger Addresses
		128-bit Addressing
		Extended Address Hierarchy
	    ** Flexible Header Format
		Base header - 40 octets
		Next Header field supports Optional Headers for current and future extensions
	    ** More Support for Autoconfiguration
		Link-Local Addressing
		Router Advertisement Daemon
		Dynamic Host Configuration Protocol version 6

	    http://www.tldp.org/HOWTO/Linux%2BIPv6-HOWTO/	<-- IPV6 HOWTO

      IPV6 on RHEL

	    ip -6 addr show

	    Utility         Notes
	    --------------- ---------------------------------------
	    ping6           tests connectivity
	    ip -6 route     displays routing table
	    traceroute6     verifies list of routers between systems
	    tracepath6      exposes the PMTU function, which is now the responsibility of the sending system
	    host or dig     with the "-t AAAA" option will obtain the IPv6 address
	    netstat         look for "::" to get a list of services listening on IPV6

	    ipv6.ko	<-- the kernel module that enables IPv6; to disable it, add the following to /etc/modprobe.conf...

            alias net-pf-10 off
            alias ipv6 off
			<-- but if the module is loaded, active interfaces will have the default link-local addresses 
			    automatically assigned. These addresses are locally-scoped i.e. non-routable

	  Important options in the /etc/sysconfig/network
	    NETWORKING_IPV6=yes|no			<-- enables/disables execution of any IPV6 in startup scripts
	    IPV6_DEFAULTGW="2001:db8:100:1::ffff"	<-- manually define default gateway

	  Important options in the /etc/sysconfig/network-scripts/ifcfg-eth0
	    IPV6INIT=yes|no			<-- enables/disables execution of any IPV6 in startup scripts on this interface
	    IPV6_AUTOCONF=yes|no		<-- enables/disables listening to Router Advertisements for dynamic configuration
	    DHCPV6C=yes|no			<-- enables/disables sending a DHCP multicast request to ff02::16 to obtain dynamic configuration
	    IPV6ADDR="2001:db8:100:0::1/64"	<-- assign first static IPV6 global unicast address and prefix to interface
	    IPV6ADDR_SECONDARIES="2001:db8:100:1::1/64 2001:db8:100:2::1/64"	<-- assign additional Global Unicast addresses to the interface

	    /etc/sysconfig/network-scripts/route6-eth0		<-- where static routes can be persistently defined using "ip -6 route add"
                                                                    2001:db8:100:5::/64 via 2001:db8:100:1::ffff

	    /usr/share/doc/initscripts-9.02/sysconfig.txt	<-- other details here!!!
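Pulling the options above together into one sketch; the 2001:db8 values are the documentation-prefix examples used throughout this section:

```shell
# /etc/sysconfig/network -- global IPv6 switches
NETWORKING_IPV6=yes
IPV6_DEFAULTGW="2001:db8:100:1::ffff"

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- per-interface settings
IPV6INIT=yes
IPV6_AUTOCONF=no                      # static configuration: ignore router advertisements
IPV6ADDR="2001:db8:100:0::1/64"
IPV6ADDR_SECONDARIES="2001:db8:100:1::1/64 2001:db8:100:2::1/64"

# /etc/sysconfig/network-scripts/route6-eth0 -- persistent static route
2001:db8:100:5::/64 via 2001:db8:100:1::ffff
```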

      tcp_wrappers and IPv6

	    ** tcp_wrappers is IPv6 aware
		When IPv6 is fully implemented throughout the domain, ensure tcp_wrappers rules include IPv6 addresses
	    ** Example: 
		preserving localhost connectivity, add to /etc/hosts.allow
		ALL: [::1]

		[fe80::]/64	<-- IPv6 addresses are enclosed in brackets and may be coupled with a prefix to represent a network

      Netfilter Overview

	    ** Filtering in the kernel: no daemon
	    ** Asserts policies at layers 2, 3 & 4 of the OSI Reference Model
	    ** Only inspects packet headers
	    ** Consists of netfilter modules in kernel, and the iptables user-space software

      Netfilter Tables and Chains

	    [img[picturename| http://lh6.ggpht.com/_F2x5WXOJ6Q8/TQhtM0dOccI/AAAAAAAAA-U/tdBtUUKt4Vo/NetfilterTablesAndChains.png]]

	    --------------------------------------
				    TABLE
			    ---------------------
	    Filtering Point  filter   nat   mangle	<-- "table names" are case sensitive and are in lower case!
	    ---------------  ---------------------
	    INPUT               X             X		<-- "filtering point" names are case sensitive and are in UPPER case!
	    FORWARD             X             X
	    OUTPUT              X      X      X
	    PREROUTING                 X      X
	    POSTROUTING                X      X

	    filter	<-- the main packet filtering is performed in this table 
	    nat		<-- this is where NAT occurs
	    mangle	<-- this is where a limited number of "special effects" can happen. this table is rarely used
	    custom chains	<-- can be created at runtime

      Netfilter Packet Flow

	    [img[picturename| http://lh6.ggpht.com/_F2x5WXOJ6Q8/TQhtNBPVqFI/AAAAAAAAA-Y/yEo6Y0o-IBQ/NetfilterPacketFlow.png]]


	    If a packet's destination is a local address, it is handled by a local process. Otherwise, if
            packet forwarding is enabled, packets destined for other systems are directed in accordance with the routing table.

	    PREROUTING	this filter point deals with packets first upon arrival (nat)
	    FORWARD	this filter point handles packets being routed through the local system (filter)
	    INPUT	this filter point handles packets destined for the local system, after the routing decision (filter)
	    OUTPUT	this filter point handles packets after they have left their sending process and prior to POSTROUTING (nat and filter)
	    POSTROUTING	this filter point handles packets immediately prior to leaving the system (nat)

      Rule Matching

	    ** Rules in ordered list
	    ** Packets tested against each rule in turn
	    ** On first match, the target is evaluated: usually exits the chain
	    ** Rule may specify multiple criteria for match
	    ** Every criterion in a specification must be met for the rule to match (logical AND)
	    ** Chain policy (default) applies if no match

      Rule Targets

	    Rule Targets determine what action to take when a packet matches the rule's selection criteria

	    -j 		<-- the iptables option that specifies the target; a target can be a built-in target, a custom chain, or an extension target

	    ** Built-in targets: DROP, ACCEPT
	    ** Extension targets: LOG, REJECT, custom chain
		REJECT sends a notice returned to sender
		LOG connects to system log kernel facility
		LOG match does not exit the chain
	    ** Target is optional, but no more than one per rule and defaults to the chain policy if absent

      Simple Example

	    [img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/TQhtNI8mHSI/AAAAAAAAA-c/GqP0HzgCDxA/NetfilterSimpleExample.png]]


	    iptables -t filter -A INPUT -s 192.168.0.1 -j DROP

	    "The example appends a single rule to the INPUT chain of the filter table. This rule
	    causes any packet with a source address (-s) of 192.168.0.1 to match, "jump" to
	    its target, DROP, and be discarded."

	    Our first consideration should be whether our system 
	      - is mostly open (accepting most packets)
	      - or mostly closed (denying most packets)
	    The tendency toward open or closed affects not only the rules but, most importantly, the chain policy,
	    which takes effect when no rule matches or none is present: the policy supplies the target for the packet under inspection.

      Basic Chain Operations

	    ** List rules in a chain or table (-L or -vL)
	    ** Append a rule to the chain (-A)		<-- append rule at the end of existing chain,
                                                          if a table is not specified then the "filter" table is assumed
	    ** Insert a rule to the chain (-I)
		-I CHAIN (inserts as the first rule)	<-- you can insert as the first or at a given point
		-I CHAIN 3 (inserts as rule 3)
	    ** Delete an individual rule (-D)
		-D CHAIN 3 (deletes rule 3 of the chain)
		-D CHAIN RULE (deletes rule explicitly)

	    -F 	<-- used to Flush, or remove all rules from a chain. this does not reset the chain policy

	    -L			<-- list the contents of the chain (rules and policy)
	    -v 			<-- displays packet and byte counters,interfaces,protocols
	    -n 			<-- prevents time consuming reverse lookups of IP addresses
	    --line-numbers	<-- displays line numbers that could then be used to determine the rule number to be used w/ -D or -I

	    iptables -t filter -nvL --line-numbers	<-- example usage that prints good output

      Common Match Criteria

	    "Most rules in the filter table involve allowing or denying packets based on their source or destination."

	    IP address or network
	      -s 192.168.0.0/24		<-- packet's source
	      -d 192.168.0.1		<-- packet's destination

	    Network interface
	      -i lo			<-- interface the packet arrived on
	      -o eth1			<-- interface the packet will leave on

	    Criteria can be inverted with '!'
	      -i eth0 -s '!' 192.168.0.0/24

	    Transport protocol and port
	      -p tcp --dport 80
	      -p udp --sport 53
	      port ranges can be specified with start:end

	    ICMP type
	      -p icmp --icmp-type host-unreachable
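The criteria combine with a logical AND within a single rule. A sketch of composed rules; the addresses and interfaces are examples, and the commands require root:

```shell
# all criteria must match: TCP to port 80, arriving on eth0, from the local subnet
iptables -t filter -A INPUT -i eth0 -s 192.168.0.0/24 -p tcp --dport 80 -j ACCEPT

# inverted criterion: log traffic on eth0 that is NOT from the local subnet
iptables -t filter -A INPUT -i eth0 -s '!' 192.168.0.0/24 -j LOG

# a port range, using start:end syntax
iptables -t filter -A INPUT -p tcp --dport 6000:6010 -j DROP
```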

      Additional Chain Operations

	    ** Assign chain policy (-P CHAIN TARGET)
		ACCEPT (default, a built-in target)
		DROP (a built-in target)
		REJECT (an extension target; not permitted as a policy)
	    ** Flush all rules of a chain (-F)
		Does not flush the policy
	    ** Zero byte and packet counters (-Z [CHAIN])
		Useful for monitoring chain statistics
	    ** Manage custom chains (-N, -X)
		-N Your_Chain-Name (adds chain)
		-X Your_Chain-Name (deletes chain)

      Rules: General Considerations
      Match Arguments
      Connection Tracking

	    ip_conntrack
	    cat /proc/net/ip_tables_matches
	    cat /proc/net/ip_conntrack

      Connection Tracking, continued
      Connection Tracking Example
      Network Address Translation (NAT)
      DNAT Examples
      SNAT Examples
      Rules Persistence
      Sample /etc/sysconfig/iptables
      IPv6 and ip6tables

      Solutions:
	    iptables -t filter -N CLASS-RULES
	    iptables -t filter -A INPUT -j CLASS-RULES
	    iptables -t filter -A CLASS-RULES -i lo -j ACCEPT
	    iptables -t filter -A CLASS-RULES -p icmp -j ACCEPT
	    iptables -t filter -A CLASS-RULES -m state --state ESTABLISHED,RELATED -j ACCEPT
	    iptables -t filter -A CLASS-RULES --protocol tcp --dport 22 -j ACCEPT
	    iptables -t filter -A CLASS-RULES -m state --state NEW --protocol udp --dport 514 -j ACCEPT
	    iptables -t filter -A CLASS-RULES -j LOG
	    iptables -t filter -A CLASS-RULES -j REJECT

	    [root@server1 ~]# iptables -nvL --line-numbers
	    Chain INPUT (policy ACCEPT 1768 packets, 155K bytes)
	    num   pkts bytes target     prot opt in     out     source               destination         
	    1      781 65424 CLASS-RULES  all  --  *      *       0.0.0.0/0            0.0.0.0/0           

	    Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
	    num   pkts bytes target     prot opt in     out     source               destination         

	    Chain OUTPUT (policy ACCEPT 2524 packets, 213K bytes)
	    num   pkts bytes target     prot opt in     out     source               destination         

	    Chain CLASS-RULES (1 references)
	    num   pkts bytes target     prot opt in     out     source               destination         
	    1      376 34383 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0           
	    2        0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0           
	    3      339 24769 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED 
	    4        1    60 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           tcp dpt:22 
	    5        0     0 ACCEPT     udp  --  *      *       0.0.0.0/0            0.0.0.0/0           state NEW udp dpt:514 
	    6        9   963 LOG        all  --  *      *       0.0.0.0/0            0.0.0.0/0           LOG flags 0 level 4 
	    7        9   963 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable 

	    [root@server1 sysconfig]# cat iptables
	    # Generated by iptables-save v1.2.11 on Thu Dec 16 17:45:15 2010
	    *filter
	    :INPUT ACCEPT [1768:154858]
	    :FORWARD ACCEPT [0:0]
	    :OUTPUT ACCEPT [2324:189432]
	    :CLASS-RULES - [0:0]
	    -A INPUT -j CLASS-RULES
	    -A CLASS-RULES -i lo -j ACCEPT
	    -A CLASS-RULES -p icmp -j ACCEPT
	    -A CLASS-RULES -m state --state RELATED,ESTABLISHED -j ACCEPT
	    -A CLASS-RULES -p tcp -m tcp --dport 22 -j ACCEPT
	    -A CLASS-RULES -p udp -m state --state NEW -m udp --dport 514 -j ACCEPT
	    -A CLASS-RULES -j LOG
	    -A CLASS-RULES -j REJECT --reject-with icmp-port-unreachable
	    COMMIT
	    # Completed on Thu Dec 16 17:45:15 2010

      Now let's do something: allow SSH from the 172.24 segment with higher priority than the custom chain.
      Note that SSH is still allowed from other network segments because of the SSH rule in the custom chain...

	    iptables -t filter -I INPUT 1 -s 172.24.0.0/16 --protocol tcp --dport 22 -j ACCEPT

      so you have to remove it... 

	    iptables -t filter -D CLASS-RULES 4

      here's the report

	    [root@server1 ~]# iptables -nvL --line-numbers
	    Chain INPUT (policy ACCEPT 1768 packets, 155K bytes)
	    num   pkts bytes target     prot opt in     out     source               destination         
	    1      453 35526 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:22 
	    2     1433  124K CLASS-RULES  all  --  *      *       0.0.0.0/0            0.0.0.0/0           

	    Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
	    num   pkts bytes target     prot opt in     out     source               destination         

	    Chain OUTPUT (policy ACCEPT 3547 packets, 318K bytes)
	    num   pkts bytes target     prot opt in     out     source               destination         

	    Chain CLASS-RULES (1 references)
	    num   pkts bytes target     prot opt in     out     source               destination         
	    1      800 73396 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0           
	    2        0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0           
	    3      457 34861 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED 
	    4        0     0 ACCEPT     udp  --  *      *       0.0.0.0/0            0.0.0.0/0           state NEW udp dpt:514 
	    5      117 10082 LOG        all  --  *      *       0.0.0.0/0            0.0.0.0/0           LOG flags 0 level 4 
	    6      117 10082 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable 

	    [root@server1 ~]# cat /etc/sysconfig/iptables
	    # Generated by iptables-save v1.2.11 on Thu Dec 16 18:16:14 2010
	    *filter
	    :INPUT ACCEPT [1768:154858]
	    :FORWARD ACCEPT [0:0]
	    :OUTPUT ACCEPT [3498:312822]
	    :CLASS-RULES - [0:0]
	    -A INPUT -s 172.24.0.0/255.255.0.0 -p tcp -m tcp --dport 22 -j ACCEPT 
	    -A INPUT -j CLASS-RULES 
	    -A CLASS-RULES -i lo -j ACCEPT 
	    -A CLASS-RULES -p icmp -j ACCEPT 
	    -A CLASS-RULES -m state --state RELATED,ESTABLISHED -j ACCEPT 
	    -A CLASS-RULES -p udp -m state --state NEW -m udp --dport 514 -j ACCEPT 
	    -A CLASS-RULES -j LOG 
	    -A CLASS-RULES -j REJECT --reject-with icmp-port-unreachable 
	    COMMIT
	    # Completed on Thu Dec 16 18:16:14 2010


###################################################################################################
[ ] UNIT 5 - ORGANIZING NETWORKED SYSTEMS
###################################################################################################

      Host Name Resolution

	    ** Some name services provide mechanisms to translate host names into lower-layer addresses so that computers can communicate
		Example: Name --> MAC address (link layer)
		Example: Name --> IP address (network layer) --> MAC address (link layer)
	    ** Common Host Name Services
		Files (/etc/hosts and /etc/networks)
		DNS
		NIS
	    ** Multiple client-side resolvers:
		"stub"
		dig
		host
		nslookup

      The Stub Resolver

	    ** Generic resolver library available to all applications
		Provided through gethostbyname() and other glibc functions
		Not capable of sophisticated access controls, such as packet signing or encryption
	    ** Can query any name service supported by glibc
	    ** Reads /etc/nsswitch.conf to determine the order in which to query name services,
	       as shown here for the default configuration:
		hosts: files dns
	    ** The NIS domain name and the DNS domain name should usually be different to simplify troubleshooting and avoid name collisions

      DNS-Specific Resolvers

	    ** host
		Never reads /etc/nsswitch.conf
		By default, looks at both the nameserver and search lines in /etc/resolv.conf
		Minimal output by default
	    ** dig
		Never reads /etc/nsswitch.conf
		By default, looks only at the nameserver line in /etc/resolv.conf
		Output is in RFC-standard zone file format, the format used by DNS servers,
		which makes dig particularly useful for exploring DNS resolution
	    ** nslookup

      Trace a DNS Query with dig

	    ** dig +trace redhat.com
		Reads /etc/resolv.conf to determine the nameserver
		Queries for the root name servers
		Chases referrals to find name records (answers)
		See notes for sample output in case the training center's firewall restricts outbound DNS
	    ** This is known as an iterative query
	    ** Initial Observations:
		Names are organized in an inverted tree with root (.) at the top
		The name hierarchy allows DNS to cross organizational boundaries
		Names in records end with a dot when fully qualified

      Other Observations

	    ** Answers in the previous trace are in the form of resource records
	    ** Each resource record has five fields:
		domain - the domain or subdomain being queried
		ttl - how long the record should be cached, expressed in seconds
		class - record classification (usually IN)
		type - record type, such as A or NS
		rdata - resource data to which the domain maps
	    ** Conceptually, one queries against the domain (name), which is mapped to the rdata for an answer
	    ** In the trace example,
		The NS (name server) records are referrals
		The A (address) record is the final answer and is the default query type for dig

	    IN class	<-- the most common class.. two other types are CH (Chaos) and HS (Hesiod)
	    origin	<-- refers to the name of domain or subdomain as it is managed by a particular server
	    canonical	<-- the usual or real name of a host

	    Domain                  Class   Record Type   rdata
	    canonical name          IN      A             IPv4 address 
	    canonical name          IN      AAAA          IPv6 address 
	    alias                   IN      CNAME         canonical name
	    origin                  IN      MX            canonical name of mail exchanger
	    origin                  IN      NS            canonical name of nameserver
	    reversed IP addresses   IN      PTR           canonical name
	    origin                  IN      SOA           authoritative info

      Forward Lookups

	    ** dig redhat.com							<-- look for status: NOERROR and answer: 1
		Attempts recursion first, as indicated by rd (recursion desired) in the flags section of the output:
		if the nameserver allows recursion, the server finds the answer and returns the requested records to the client
		If the nameserver does not allow recursion, the server returns a referral to a top-level domain, which dig chases
	    ** Observations
		dig's default query type is A; the rdata for an A record is an IPv4 address
		Use -t AAAA to request IPv6 rdata
		When successful, dig returns a status of NOERROR, an answer count,
		and an indication of which nameservers are authoritative for the name

      Reverse Lookups

	    ** dig -x 209.132.177.50						<-- look for status: NOERROR and answer: 1
	    ** Observations
		The question section in the output shows that DNS reverses the octets of an address
		and appends in-addr.arpa. to fully qualify the domain part of the record
		The answer section shows that DNS uses PTR (pointer) records for reverse lookups
		Additionally, the rdata for a PTR record is a fully-qualified domain name

      Mail Exchanger Lookups

	    ** An MX record maps a domain to the fully-qualified domain name of a mail server
	    ** dig -t mx redhat.com
	    ** Observations
		The rdata field is extended to include an additional piece of data called the priority
		The priority can be thought of as a distance: networks prefer shorter distances
		To avoid additional lookups, nameservers typically provide A records as additional
		responses to correspond with the FQDNs provided in the MX records
		Together, an MX record and its associated A record resolve a domain's mail server

      SOA Lookups

	    ** An SOA record marks a server as a master authority
	    ** dig -t soa redhat.com
	    ** Initial Observations
		The domain field is called the origin
		The rdata field is extended to support additional data, explained in the next section
		There is typically only one master nameserver for a domain; it stores the master copy of its data
		Other authoritative nameservers for the domain or zone are referred to as slaves;
		they synchronize their data from the master

      SOA rdata

	    ** Master nameserver's FQDN
	    ** Contact email
	    ** Serial number
	    ** Refresh delay before checking serial number
	    ** Retry interval for slave servers
	    ** Expiration for records when the slave cannot contact its master(s)
	    ** Minimum TTL for negative answers ("no such host")

      Being Authoritative

	    ** The SOA record merely indicates the master server for the origin (domain)
	    ** A server is authoritative if it has:
		Delegation from the parent domain: NS record plus A record
		A local copy of the domain data, including the SOA record
	    ** A nameserver that has the proper delegation but lacks domain data is called a lame server

      The Everything Lookup

	    ** dig -t axfr example.com. @192.168.0.254
	    ** Observations
		All records for the zone are transferred
		Records reveal much inside knowledge of the network
		Response is too big for UDP, so transfers use TCP
	    ** Most servers restrict zone transfers to a select few hosts (usually the slave nameservers)
	    ** Use this command from a slave to test permissions on the master

      Exploring DNS with host

	    ** For any of the following queries, add a -v option to see output in zone file format
	    ** Trace: not available
	    ** Delegation: host -rt ns redhat.com
	    ** Force iterative: host -r redhat.com
	    ** Reverse lookup: host 209.132.177.50
	    ** MX lookup: host -t mx redhat.com
	    ** SOA lookup: host -t soa redhat.com
	    ** Zone transfer: host -t axfr redhat.com 192.168.0.254 or
	       host -t ixfr=serial example.com. 192.168.0.254

      Transitioning to the Server

	    ** Red Hat Enterprise Linux uses BIND, the Berkeley Internet Name Domain
	    ** BIND is the most widely used DNS server on the Internet
		A stable and reliable infrastructure on which to base a domain's name and IP address associations
		The reference implementation for the DNS RFCs
		Runs in a chrooted environment

      Service Profile: DNS

      Access Control Profile: BIND

	    ** Netfilter: tcp/udp ports 53 and 953 incoming; tcp/udp ephemeral ports outgoing
	    ** TCP Wrappers: N/A
		ldd `which named` | grep libwrap
		strings `which named` | grep hosts
	    ** Xinetd: N/A (named is a standalone daemon)
	    ** PAM: N/A (no configuration in /etc/pam.d/)
	    ** SELinux: yes - see notes
	    ** App-specific controls: yes, discussed in later slides and in the ARM
		/usr/share/doc/bind-*/arm/Bv9ARM.{html,pdf}

	    [root@server1 ~]# cat /etc/selinux/targeted/contexts/files/file_contexts | grep named
	    # named
	    /var/named(/.*)?		system_u:object_r:named_zone_t
	    /var/named/slaves(/.*)?		system_u:object_r:named_cache_t
	    /var/named/data(/.*)?		system_u:object_r:named_cache_t
	    /etc/named\.conf	--	system_u:object_r:named_conf_t
	    /etc/rndc.*		--	system_u:object_r:named_conf_t
	    /usr/sbin/named      	--	system_u:object_r:named_exec_t
	    /var/run/ndc		-s	system_u:object_r:named_var_run_t
	    /var/run/bind(/.*)?		system_u:object_r:named_var_run_t
	    /var/run/named(/.*)?		system_u:object_r:named_var_run_t
	    /usr/sbin/lwresd	--	system_u:object_r:named_exec_t
	    /var/log/named.* 	--  system_u:object_r:named_log_t
	    /var/named/named\.ca	--	system_u:object_r:named_conf_t
	    /var/named/chroot(/.*)?		system_u:object_r:named_conf_t
	    /var/named/chroot/dev/null   -c	system_u:object_r:null_device_t
	    /var/named/chroot/dev/random -c	system_u:object_r:random_device_t
	    /var/named/chroot/dev/zero -c	system_u:object_r:zero_device_t
	    /var/named/chroot/etc(/.*)? 	system_u:object_r:named_conf_t
	    /var/named/chroot/etc/rndc.key  -- system_u:object_r:dnssec_t
	    /var/named/chroot/var/run/named.* system_u:object_r:named_var_run_t
	    /var/named/chroot/var/tmp(/.*)? system_u:object_r:named_cache_t
	    /var/named/chroot/var/named(/.*)?	system_u:object_r:named_zone_t
	    /var/named/chroot/var/named/slaves(/.*)? system_u:object_r:named_cache_t
	    /var/named/chroot/var/named/data(/.*)? system_u:object_r:named_cache_t
	    /var/named/chroot/var/named/named\.ca	--	system_u:object_r:named_conf_t

      Getting Started with BIND

	    ** Install packages
		bind 			<-- for core binaries
		bind-chroot 		<-- for security
		caching-nameserver 	<-- for an initial configuration
	    ** Configure startup
		service named configtest
		service named start
		chkconfig named on
	    ** Proceed with essential named configuration

      Essential named Configuration

	    ** Configure the stub resolver
	    ** Define access controls in /etc/named.conf
		Declare client match lists
		Server interfaces: listen-on and listen-on-v6
		What queries should be allowed?
		  Iterative: allow-query { match-list; };
		  Recursive: allow-recursion { match-list; };
		  Transfers: allow-transfer { match-list; };
	    ** Add data via zone files
	    ** Test!

      Configure the Stub Resolver

	    ** On the nameserver:
		Edit /etc/resolv.conf to specify nameserver 127.0.0.1
		Edit /etc/sysconfig/network-scripts/ifcfg-* to specify PEERDNS=no
	    ** Advantages:
		Ensures consistent lookups for all applications
		Simplifies access controls and troubleshooting
	    ** Besides /etc/resolv.conf, where can an
		unprivileged user see what nameservers DHCP provides?
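One answer is the dhclient lease file, which the client writes under /var/lib/dhclient/ and which is readable without privileges. A minimal sketch, using a mock lease file so it is self-contained (the path and contents are illustrative, not a capture from a real system):

```shell
# Mock lease file standing in for /var/lib/dhclient/dhclient-eth0.leases,
# the file a real dhclient writes after negotiating a lease.
cat > /tmp/mock-dhclient.leases <<'EOF'
lease {
  interface "eth0";
  fixed-address 192.168.0.1;
  option domain-name-servers 192.168.0.254;
}
EOF

# An unprivileged user can grep the lease file for DHCP-supplied nameservers:
grep 'domain-name-servers' /tmp/mock-dhclient.leases
```

On a live system, substitute the real lease file for the mock one.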

      bind-chroot Package

	    ** Installs a chroot environment under /var/named/chroot
	    ** Moves existing config files into the chroot environment, replacing the original files with symlinks
	    ** Updates /etc/sysconfig/named with a named option: ROOTDIR=/var/named/chroot
	    ** Tips
		Inspect /etc/sysconfig/named after installing bind-chroot
		Run ps -ef | grep named after starting named to verify startup options

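The two tips are connected: the init script sources /etc/sysconfig/named and passes ROOTDIR to the daemon, which is why the startup options are worth checking. A sketch using a mock copy of the file (contents assumed; inspect the real file on your system):

```shell
# Mock of /etc/sysconfig/named as the bind-chroot package leaves it.
cat > /tmp/mock-sysconfig-named <<'EOF'
ROOTDIR=/var/named/chroot
EOF

# The named init script sources this file and starts the daemon with
# "-t $ROOTDIR", so the config actually read lives inside the chroot:
. /tmp/mock-sysconfig-named
echo "named config actually read: ${ROOTDIR}/etc/named.conf"
```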
      caching-nameserver Package

	    ** Provides
		named.caching-nameserver.conf
		named.ca containing root server 'hints'
		Forward and reverse lookup zone files for machine-local names and IP addresses (e.g., localhost.localdomain)
	    ** Tips
		Copy named.caching-nameserver.conf to named.conf
		Change ownership to root:named
		Edit named.conf
	    ** The following slides describe essential access directives

	    http://www.ietf.org/rfc/rfc1912.txt		<-- RFC for common DNS errors

	    -------------
	    GOTCHAs!!!
	    ---------------------------------------------------------------------------------------------------------------------
	    * system-config-bind utilities will overwrite /etc/named.caching-nameserver.conf if it exists, so you should
	    copy or move the file to /etc/named.conf before making any changes

	    * The named init script reads /etc/named.caching-nameserver.conf only if /etc/named.conf is unreadable, which
	    will be the case if /etc/named.conf doesn't exist, has improper file ownership/permissions, or has the wrong
	    SELinux context
	    ---------------------------------------------------------------------------------------------------------------------

      Address Match List

	    ** A semicolon-separated list of IP addresses or subnets used with security directives for host-based access control
	    ** Format
		IP address: 192.168.0.1
		Trailing dot: 192.168.0.
		CIDR: 192.168.0/24
		Use a bang (!) to denote inversion
	    ** A match list is checked in order, stopping on first match
	    ** Example:
		{ 192.168.0.1; 192.168.0.; !192.168.1.0/24; };
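Because evaluation stops at the first match, the position of a negated entry changes the outcome. A hedged named.conf sketch (the addresses are illustrative, not from the classroom setup):

```
// Order matters: the list is checked in order, stopping on first match.
// Here 192.168.0.200 hits the negated entry first and is rejected,
// while the rest of 192.168.0.0/24 is accepted:
allow-query { !192.168.0.200; 192.168.0.0/24; };

// Reversed, the /24 entry matches 192.168.0.200 first,
// so the negation below it never takes effect:
allow-query { 192.168.0.0/24; !192.168.0.200; };
```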

      Access Control List (ACL)

	    ** In its simplest form, an ACL assigns a name to an address match list
	    ** Can generally be used in place of a match list (nesting is allowed!)
	    ** Best practice is to define ACLs at the top of /etc/named.conf
	    ** Example declarations
		acl "trusted" { 192.168.1.21; };
		acl "classroom" { 192.168.0.0/24; trusted; };
		acl "cracker" { 192.168.1.0/24; };
		acl "mymasters" { 192.168.0.254; };
		acl "myaddresses" { 127.0.0.1; 192.168.0.1; };

      Built-In ACLs

	    ** BIND pre-defines four ACLs
		none - No IP address matches
		any - All IP addresses match
		localhost - Any IP address of the name server matches
		localnets - Directly-connected networks match
	    ** What is the difference between the localhost built-in ACL and the myaddresses example on the previous page (assuming the server is multi-homed)?

      Server Interfaces

	    ** Option: listen-on port 53 { match-list; };
	    ** Binds named to specific interfaces
	    ** Example
		listen-on port 53 { myaddresses; };
		listen-on-v6 port 53 { ::1; };
	    ** Restart and verify: netstat -tulpn | grep named
	    ** Questions:
		What if listen-on does not include 127.0.0.1?
		How might changing listen-on-v6 to :: (all IPv6 addresses) affect IPv4?
	    ** Default: if listen-on is missing, named listens on all interfaces

      Allowing Queries

	    ** Option: allow-query { match-list; };
	    ** Server provides both authoritative and cached answers to clients in the match list
	    ** Example:
		allow-query { classroom; cracker; };
	    ** Default: if allow-query is missing, named allows all

      Allowing Recursion

	    ** Option: allow-recursion { match-list; };
	    ** Server chases referrals on behalf of clients in the match-list
	    ** Example:
		allow-recursion { classroom; !cracker; };
	    ** Questions
		What happens if 192.168.1.21 tries a recursive query?
		What happens if 127.0.0.1 tries a recursive query?
	    ** Default: if allow-recursion is missing, named allows all

      Allowing Transfers

	    ** Option: allow-transfer { match-list; };
	    ** Clients in the match-list are allowed to act as slave servers
	    ** Example:
		allow-transfer { !cracker; classroom; };
	    ** Questions
		What happens if 192.168.1.21 tries a slave transfer?
		What happens if 127.0.0.1 tries a slave transfer?
	    ** Default: if allow-transfer is missing, named allows all

      Modifying BIND Behavior

	    ** Option: forwarders { match-list; };
	    ** Modifier: forward first | only;
	    ** Directs named to recursively query the specified servers before or instead of chasing referrals
	    ** Example:
		forwarders { mymasters; };
		forward only;
	    ** How can you determine if forwarders is required?
	    ** If the forward modifier is missing, named assumes first

      Access Controls: Putting it Together

	    ** Sample /etc/named.conf with essential access control options:
	    // ACLs make security directives easier to read
	    acl "myaddresses" { 127.0.0.1; 192.168.0.1; };
	    acl "trusted" { 192.168.1.21; };
	    acl "classroom" { 192.168.0.0/24; trusted; };
	    acl "cracker" { 192.168.1.254; };
	    options {
	    # bind to specific interfaces
	    listen-on port 53 { myaddresses; };
	    listen-on-v6 port 53 { ::1; };
	    # make sure I can always query myself for troubleshooting
	    allow-query { localhost; classroom; cracker; };
	    allow-recursion { localhost; classroom; !cracker; };
	    /* don't let cracker (even trusted) do zone transfers */
	    allow-transfer { localhost; !cracker; classroom; };
	    # use a recursive, upstream nameserver
	    forwarders { 192.168.0.254; };
	    forward only;
	    };

      Slave Zone Declaration

	    zone "example.com" {
		    type slave;
		    masters { mymasters; };
		    file "slaves/example.com.zone";
	    };
	    ** Sample zone declaration directs the server to:
		Act as an authoritative nameserver for example.com, where example.com is the origin as specified in the SOA record's domain field
		Be a slave for this zone
		Perform zone transfers (AXFR and IXFR) against the hosts in the masters option
		Store the transferred data in /var/named/chroot/var/named/slaves/example.com.zone
	    ** Reload named to automatically create the file

      Master Zone Declaration

	    zone "example.com" {
		    type master;
		    file "example.com.zone";
	    };
	    ** Sample zone declaration directs the server to:
		Act as an authoritative nameserver for example.com, where example.com is the origin as specified in the SOA record's domain field
		Be a master for this zone
		Read the master data from /var/named/chroot/var/named/example.com.zone
	    ** Manually create the master file before reloading named

      Zone File Creation

	    ** Content of a zone file:
		A collection of records, beginning with the SOA record
		The @ symbol is a variable representing the zone's origin as specified in the zone declaration from /etc/named.conf
		Comments are assembly-style (;)
	    ** Precautions:
		BIND appends the domain's origin to any name that is not properly dot-terminated
		If the domain field is missing from a record, BIND uses the value from the previous record (Danger! What if another admin changes the record order?)
		Remember to increment the serial number and reload named after modifying a zone file
	    ** What DNS-specific resolver puts its output in zone file format?
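The dot-termination precaution is easiest to see in a fragment. In this sketch (names and addresses invented for illustration), the unterminated fully-qualified name silently gets the origin appended:

```
; assuming the zone's origin is example.com.
www                 IN A  192.168.0.10  ; relative name - becomes www.example.com.
mail.example.com    IN A  192.168.0.11  ; MISSING trailing dot:
                                        ; becomes mail.example.com.example.com.
mail.example.com.   IN A  192.168.0.11  ; correct - dot-terminated
```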

      Tips for Zone Files

	    ** Shortcuts:
		Do not start from scratch - copy an existing zone file installed by the caching-nameserver package
		To save typing, put $TTL 86400 as the first line of a zone file, then omit the TTL from individual records
		BIND allows you to split multi-valued rdata across lines when enclosed within parentheses ()
	    ** Choose a filename for your zone file that reflects the origin in some way
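The last two shortcuts combine naturally at the top of a zone file. A minimal sketch (the serial and timers are placeholder values):

```
$TTL 86400              ; default TTL - individual records may omit theirs
@   IN SOA  ns1 root (
                2011010201  ; serial - increment on every change
                3H          ; refresh
                15M         ; retry
                1W          ; expire
                1D )        ; minimum - parentheses let the rdata span lines
```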

      Testing

	    ** Operation
		Select one of dig, host, or nslookup, and use it expertly to verify the operation of your DNS server
		Run tail -f /var/log/messages in a separate shell when restarting services
	    ** Configuration
		BIND will fail to start on syntax errors, so always run service named configtest after editing config files
		configtest runs two syntax utilities against files specified in your configuration, but the utilities may be run separately against files outside your configuration

      BIND Syntax Utilities

	    ** named-checkconf -t ROOTDIR /path/to/named.conf
		Inspects /etc/named.conf by default (which will be the wrong file if the -t option is missing)
		Example: named-checkconf -t /var/named/chroot
	    ** named-checkzone origin /path/to/zonefile
		Inspects a specific zone configuration
		Example:
		named-checkzone redhat.com \
		/var/named/chroot/var/named/redhat.com.zone

      Advanced BIND Topics

	    ** Remote Name Daemon Control (rndc)
	    ** Delegating Subdomains

      Remote Name Daemon Control (rndc)

	    ** Provides local and remote management of named
	    ** The bind-chroot package configures rndc
		Listens on the IPv4 and IPv6 loopbacks only
		Reads its key from /etc/rndc.key
		If the key does not match, you cannot start or stop the named service
		No additional configuration is needed for a default, local install
	    ** Example - flush the server's cache: rndc flush

      Delegating Subdomains

	    ** Steps
		On the child, create a zone file to hold the subdomain's data
		On the parent, add an NS record
		On the parent, add an A record to complete the delegation
	    ** Glue Records
		If the child's canonical name is in the subdomain it manages, the A record is called a glue record
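The parent-side steps might look like this in the parent's zone file (the subdomain name and address are invented for illustration):

```
; in the example.com zone file on the parent
sales                   IN NS  ns1.sales.example.com.  ; step 2: delegate
ns1.sales.example.com.  IN A   192.168.0.30            ; step 3: glue record -
                                                       ; the NS name lives inside
                                                       ; the delegated subdomain
```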

      DHCP Overview

	    ** DHCP: Dynamic Host Configuration Protocol,
	    implemented via dhcpd
	    ** dhcpd provides services to both DHCP and
	    BOOTP IPv4 clients

      Service Profile: DHCP

	    ** Type: System V-managed service
	    ** Package: dhcp
	    ** Daemon: /usr/sbin/dhcpd
	    ** Script: /etc/init.d/dhcpd
	    ** Ports: 67 (bootps), 68 (bootpc)
	    ** Configuration: /etc/dhcpd.conf, /var/lib/dhcpd/dhcpd.leases
	    ** Related: dhclient, dhcpv6_client, dhcpv6

      Configuring an IPv4 DHCP Server

	    ** Configure the server in /etc/dhcpd.conf
	    ** Sample configuration provided in /usr/share/doc/dhcp-version/dhcpd.conf.sample
	    ** There must be at least one subnet block, and it must correspond with configured interfaces
	    ** Run service dhcpd configtest to check
	    syntax
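A minimal /etc/dhcpd.conf sketch with one subnet block (the addresses, lease times, and options below are illustrative, not the shipped sample file):

```
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.100 192.168.0.120;          # addresses handed out to clients
    option routers 192.168.0.254;               # default gateway
    option domain-name-servers 192.168.0.254;   # DNS servers for clients
    default-lease-time 21600;                   # seconds, if client asks for none
    max-lease-time 43200;                       # upper bound on requested leases
}
```

The subnet address must match a configured interface, or dhcpd refuses to start; service dhcpd configtest catches syntax errors only.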


      Service Profile: DNS	<-- installs in an unconfigured state

	    ** Type: System V-managed service
	    ** Packages: bind, bind-utils, bind-chroot
	    ** Daemons: /usr/sbin/named, /usr/sbin/rndc
	    ** Script: /etc/init.d/named
	    ** Ports: 53 (domain), 953 (rndc)
	    ** Configuration: (Under /var/named/chroot/) /etc/named.conf, /var/named/*, /etc/rndc.key
	    ** Related: caching-nameserver, openssl

	Required RPMS: 
	  bind			<-- for core binaries
	  bind-utils		
	  bind-chroot		<-- for security
	  caching-nameserver	<-- for an initial configuration

	Applicable Access Controls:
	      ----------------------------------------------------------
	      Access Control          Implementation
	      ----------------------------------------------------------
	      Application             listen-on, allow-query, allow-transfer, forwarders
	      PAM                     N/A (no files in /etc/pam.d reference named)
	      xinetd                  N/A (init-managed standalone daemon)
	      libwrap                 N/A
	      SELinux                 ensure correct file context; no change to booleans
	      Netfilter, IPv6         disregard IPV6 access for now
	      Netfilter               inbound UDP and TCP port 53 and 953 from 192.168.0.0/24
                                      outbound to port 53 + ephemeral ports (>=1024)

	[root@server1 ~]# ls -l /etc/named.conf 
	lrwxrwxrwx  1 root root 32 Nov 20 21:14 /etc/named.conf -> /var/named/chroot/etc/named.conf

	Configuration:
	--------------
	yum install bind bind-utils bind-chroot caching-nameserver
	change resolv.conf
	modify named.conf (master/slave)
	create zone files


      -------------------------------------------------------------
      Solutions: 
                sequence1- Implement a minimal DNS server - caching only nameserver
                sequence2- Add data to the name server
                sequence3- Add slave DNS capabilities
                sequence4- Cleaning up
      -------------------------------------------------------------

      sequence1- Implement a minimal DNS server - caching only nameserver
      -----------------------------
	    yum install bind bind-chroot caching-nameserver bind-utils

	    [root@station103 ~]# cat /etc/services | grep domain
	    domain		53/tcp				# name-domain server
	    domain		53/udp

	    [root@station103 ~]# ldd $(which named) | grep libwrap			<-- no LIBWRAP

	    [root@station103 ~]# cat /etc/sysconfig/named 				<-- to get the ROOTDIR
	    ROOTDIR=/var/named/chroot

	    chgrp named /var/named/chroot/etc/named.conf				<-- change ownership
	    chkconfig named on 
	    service named start

	    iptables -t filter -I CLASS-RULES 4 -p tcp --dport 53 -j ACCEPT		<-- add iptables rules
	    iptables -t filter -I CLASS-RULES 4 -p udp --dport 53 -j ACCEPT

	    [root@station103 ~]# iptables -nvL --line-numbers
	    Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
	    num   pkts bytes target     prot opt in     out     source               destination         
	    1     3375  786K CLASS-RULES  all  --  *      *       0.0.0.0/0            0.0.0.0/0           

	    Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
	    num   pkts bytes target     prot opt in     out     source               destination         

	    Chain OUTPUT (policy ACCEPT 2338 packets, 276K bytes)
	    num   pkts bytes target     prot opt in     out     source               destination         

	    Chain CLASS-RULES (1 references)
	    num   pkts bytes target     prot opt in     out     source               destination         
	    1       96  7072 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0           
	    2        0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0           
	    3     3150  753K ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED 
	    4        0     0 ACCEPT     udp  --  *      *       0.0.0.0/0            0.0.0.0/0           udp dpt:53 
	    5        0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           tcp dpt:53 
	    6        0     0 ACCEPT     tcp  --  *      *       172.25.0.0/16        0.0.0.0/0           tcp dpt:25 
	    7        0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:25 
	    8        0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           tcp dpt:995 
	    9        0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           tcp dpt:993 
	    10       0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:3128 
	    11       0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:80 
	    12       0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:445 
	    13       2   120 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           tcp dpt:22 
	    14       0     0 ACCEPT     udp  --  *      *       0.0.0.0/0            0.0.0.0/0           state NEW udp dpt:514 
	    15       0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:21 state NEW 
	    16       0     0 ACCEPT     udp  --  *      *       172.24.0.0/16        0.0.0.0/0           udp dpts:4002:4005 
	    17       0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpts:4002:4005 
	    18       0     0 ACCEPT     udp  --  *      *       172.24.0.0/16        0.0.0.0/0           udp dpt:2049 
	    19       0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:2049 
	    20       0     0 ACCEPT     udp  --  *      *       172.24.0.0/16        0.0.0.0/0           udp dpt:111 
	    21       0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:111 
	    22     127 25637 LOG        all  --  *      *       0.0.0.0/0            0.0.0.0/0           LOG flags 0 level 4 
	    23     127 25637 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable 

	    service iptables save
	    service iptables restart

	    Add the following to /etc/named.conf in the "options" block		<-- named.conf configuration, on the heading part

		    listen-on port 53 { localhost; };				<-- server interfaces, where named will listen on, defaults to ALL interfaces
		    allow-query { localhost; 172.24.0.0/16; };			<-- iterative, provides AUTHORITATIVE AND CACHED answers to clients on list, defaults allow ALL
		    allow-transfer { localhost; 172.24.254.254; };		<-- transfers, clients on list are allowed to act as SLAVE SERVERS
		    forwarders { 172.24.254.254; };				<-- anything this DNS can't resolve gets forwarded to this!
		    forward only;

      sequence2- Add data to the name server
      -----------------------------

	    [root@station103 ~]# cat /etc/resolv.conf 		<-- edit the resolv.conf, if on DHCP interface must have PEERDNS=no on network config
	    search domain103.example.com
	    nameserver 127.0.0.1

	    Add a forward lookup zone for domain103.example.com
	    - declare a zone in named.conf
	    - create a zone file to hold the data

	    zone "domain103.example.com" IN {			<-- create a FORWARD LOOKUP ZONE in /etc/named.conf, on the bottom part
		    type master;
		    file "domain103.example.com.zone";
		    allow-update { none; };
		    forwarders {};
	    };

	    service named configtest
	    [root@station103 named]# cp -a localdomain.zone domain103.example.com.zone		<-- COPY localdomain.zone to a new FORWARD ZONE file
	    [root@station103 named]# 
	    [root@station103 named]# ls -ltr
	    total 80
	    drwxrwx---  2 named named 4096 Jul 27  2004 slaves
	    drwxrwx---  2 named named 4096 Aug 26  2004 data
	    -rw-r--r--  1 named named  416 Aug 26  2004 named.zero
	    -rw-r--r--  1 named named  433 Aug 26  2004 named.local
	    -rw-r--r--  1 named named  432 Aug 26  2004 named.ip6.local
	    -rw-r--r--  1 named named 2518 Aug 26  2004 named.ca
	    -rw-r--r--  1 named named  415 Aug 26  2004 named.broadcast
	    -rw-r--r--  1 named named  195 Aug 26  2004 localhost.zone
	    -rw-r--r--  1 named named  198 Aug 26  2004 localdomain.zone
	    -rw-r--r--  1 named named  198 Aug 26  2004 domain103.example.com.zone


	    [root@station103 named]# cat domain103.example.com.zone					<-- CREATE the FORWARD ZONE file
	    $TTL	86400
	    @		IN SOA	station103 root (
						    43		; serial (d. adams)		<-- increment this serial#
						    3H		; refresh
						    15M		; retry
						    1W		; expiry
						    1D )		; minimum
	    @	        IN NS		station103					<-- NS record
	    @		IN MX 10	station103					<-- MX record, below the NS
	    station3	IN A		172.24.0.3					<-- the A records
	    station103	IN A		172.24.0.103	

	    service named configtest
	    service named restart

	    [root@station103 named]# host station3 localhost			<-- test your FORWARD LOOKUPs
	    Using domain server:
	    Name: localhost
	    Address: 127.0.0.1#53
	    Aliases: 

	    station3.domain103.example.com has address 172.24.0.3

	    [root@station103 named]# host station3					<-- test your FORWARD LOOKUPs
	    station3.domain103.example.com has address 172.24.0.3


	    dig -t mx domain103.example.com
	    dig -t axfr domain103.example.com
	    host -l !$

	    [root@station103 named]# dig -t mx domain103.example.com		<-- check your MAIL EXCHANGER RECORD

	    ; <<>> DiG 9.2.4 <<>> -t mx domain103.example.com
	    ;; global options:  printcmd
	    ;; Got answer:
	    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50821
	    ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

	    ;; QUESTION SECTION:
	    ;domain103.example.com.		IN	MX

	    ;; ANSWER SECTION:
	    domain103.example.com.	86400	IN	MX	10 station103.domain103.example.com.

	    ;; AUTHORITY SECTION:
	    domain103.example.com.	86400	IN	NS	station103.domain103.example.com.

	    ;; ADDITIONAL SECTION:
	    station103.domain103.example.com. 86400	IN A	172.24.0.103

	    ;; Query time: 17 msec
	    ;; SERVER: 127.0.0.1#53(127.0.0.1)
	    ;; WHEN: Sun Jan  2 10:26:31 2011
	    ;; MSG SIZE  rcvd: 96


	    [root@station103 named]# dig -t axfr domain103.example.com		<-- do a comprehensive check

	    ; <<>> DiG 9.2.4 <<>> -t axfr domain103.example.com
	    ;; global options:  printcmd
	    domain103.example.com.	86400	IN	SOA	station103.domain103.example.com. root.domain103.example.com. 43 10800 900 604800 86400
	    domain103.example.com.	86400	IN	NS	station103.domain103.example.com.
	    domain103.example.com.	86400	IN	MX	10 station103.domain103.example.com.
	    station103.domain103.example.com. 86400	IN A	172.24.0.103
	    station3.domain103.example.com.	86400 IN A	172.24.0.3
	    domain103.example.com.	86400	IN	SOA	station103.domain103.example.com. root.domain103.example.com. 43 10800 900 604800 86400
	    ;; Query time: 12 msec
	    ;; SERVER: 127.0.0.1#53(127.0.0.1)
	    ;; WHEN: Sun Jan  2 10:26:31 2011
	    ;; XFR size: 6 records


	    [root@station103 named]# host -l !$					<-- list all hosts
	    host -l domain103.example.com
	    domain103.example.com name server station103.domain103.example.com.
	    station103.domain103.example.com has address 172.24.0.103
	    station3.domain103.example.com has address 172.24.0.3


	    zone "24.172.in-addr.arpa" IN {						<-- create a REVERSE LOOKUP ZONE in /etc/named.conf, on the bottom part
		    type master;
		    file "172.24.zone";
		    allow-update { none; };
		    forwarders {};
	    };


	    cp -a named.local 172.24.zone						<-- COPY named.local to a new REVERSE ZONE file

	    [root@station103 named]# cat 172.24.zone				<-- CREATE the REVERSE ZONE file
	    $TTL	86400
	    @       IN      SOA     station103.domain103.example.com. root.station103.domain103.example.com.  (
						  1997022701 ; Serial						<-- increment this!
						  28800      ; Refresh
						  14400      ; Retry
						  3600000    ; Expire
						  86400 )    ; Minimum
	    @              IN      NS      station103.domain103.example.com.

	    3.0	    IN	    PTR	    station3.domain103.example.com.						<-- add the PTRs here!
	    103.0       IN      PTR     station103.domain103.example.com.


	    service named configtest
	    service named restart

	    host 172.24.0.3										<-- TEST THE REVERSE LOOKUP
	    host 172.24.0.103
	    dig -t axfr 24.172.in-addr.arpa
	    host -l !$

	    [root@station103 named]# host 172.24.0.3
	    3.0.24.172.in-addr.arpa domain name pointer station3.domain103.example.com.
	    [root@station103 named]# host 172.24.0.103
	    103.0.24.172.in-addr.arpa domain name pointer station103.domain103.example.com.
	    [root@station103 named]# 
	    [root@station103 named]# dig -t axfr 24.172.in-addr.arpa

	    ; <<>> DiG 9.2.4 <<>> -t axfr 24.172.in-addr.arpa
	    ;; global options:  printcmd
	    24.172.in-addr.arpa.	86400	IN	SOA	station103.domain103.example.com. root.station103.domain103.example.com. 1997022701 28800 14400 3600000 86400
	    24.172.in-addr.arpa.	86400	IN	NS	station103.domain103.example.com.
	    103.0.24.172.in-addr.arpa. 86400 IN	PTR	station103.domain103.example.com.
	    3.0.24.172.in-addr.arpa. 86400	IN	PTR	station3.domain103.example.com.
	    24.172.in-addr.arpa.	86400	IN	SOA	station103.domain103.example.com. root.station103.domain103.example.com. 1997022701 28800 14400 3600000 86400
	    ;; Query time: 15 msec
	    ;; SERVER: 127.0.0.1#53(127.0.0.1)
	    ;; WHEN: Sun Jan  2 11:25:54 2011
	    ;; XFR size: 5 records

	    [root@station103 named]# host -l !$
	    host -l 24.172.in-addr.arpa
	    24.172.in-addr.arpa name server station103.domain103.example.com.
	    103.0.24.172.in-addr.arpa domain name pointer station103.domain103.example.com.
	    3.0.24.172.in-addr.arpa domain name pointer station3.domain103.example.com.



      sequence3- Add slave DNS capabilities
      -----------------------------

	    dig -t axfr example.com @172.24.254.254
	    host -r station3.example.com localhost
	    host -r station103.example.com localhost
	    dig +norecurse station3.example.com @localhost

	    [root@station103 named]# dig -t axfr example.com @172.24.254.254		<-- confirm if the remote (MASTER) server will ALLOW US TO SLAVE THE ZONE DATA for example.com

	    ; <<>> DiG 9.2.4 <<>> -t axfr example.com @172.24.254.254
	    ;; global options:  printcmd

	    .. output snipped .. 

	    ;; Query time: 134 msec
	    ;; SERVER: 172.24.254.254#53(172.24.254.254)
	    ;; WHEN: Sun Jan  2 11:28:34 2011
	    ;; XFR size: 134 records


	    [root@station103 named]# host -r station3.example.com localhost		<-- non-recursive query to test where the info is currently coming from!
	    Using domain server:
	    Name: localhost
	    Address: 127.0.0.1#53
	    Aliases: 

	    [root@station103 named]# host -r station103.example.com localhost
	    Using domain server:
	    Name: localhost
	    Address: 127.0.0.1#53
	    Aliases: 

	    [root@station103 named]# host -r station103.example.com				<-- no output
	    [root@station103 named]# 
	    [root@station103 named]# 
	    [root@station103 named]# dig +norecurse station3.example.com @localhost		<-- no answer is available from the local name server

	    ; <<>> DiG 9.2.4 <<>> +norecurse station3.example.com @localhost
	    ;; global options:  printcmd
	    ;; Got answer:
	    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58483
	    ;; flags: qr ra; QUERY: 1, ANSWER: 0, AUTHORITY: 13, ADDITIONAL: 0

	    ;; QUESTION SECTION:
	    ;station3.example.com.		IN	A

	    ;; AUTHORITY SECTION:
	    .			3600000	IN	NS	M.ROOT-SERVERS.NET.
	    .			3600000	IN	NS	A.ROOT-SERVERS.NET.
	    .			3600000	IN	NS	B.ROOT-SERVERS.NET.
	    .			3600000	IN	NS	C.ROOT-SERVERS.NET.
	    .			3600000	IN	NS	D.ROOT-SERVERS.NET.
	    .			3600000	IN	NS	E.ROOT-SERVERS.NET.
	    .			3600000	IN	NS	F.ROOT-SERVERS.NET.
	    .			3600000	IN	NS	G.ROOT-SERVERS.NET.
	    .			3600000	IN	NS	H.ROOT-SERVERS.NET.
	    .			3600000	IN	NS	I.ROOT-SERVERS.NET.
	    .			3600000	IN	NS	J.ROOT-SERVERS.NET.
	    .			3600000	IN	NS	K.ROOT-SERVERS.NET.
	    .			3600000	IN	NS	L.ROOT-SERVERS.NET.

	    ;; Query time: 23 msec
	    ;; SERVER: 127.0.0.1#53(localhost)
	    ;; WHEN: Sun Jan  2 11:30:19 2011
	    ;; MSG SIZE  rcvd: 249



	    [root@station103 named]# cat /etc/named.conf 				<-- create the SLAVE ZONE!!
	    zone "example.com" IN {
		    type slave;
		    masters { 172.24.254.254; };
		    file "slaves/example.com.zone";
		    forwarders {};
	    };


	    service named configtest
	    service named restart

	    [root@station103 named]# pwd
	    /var/named/chroot/var/named
	    [root@station103 named]# ls -l slaves/					<-- upon restarting, you should see this file created!
	    total 8
	    -rw-------  1 named named 3497 Jan  2 11:55 example.com.zone

	    [root@station103 named]# ls -lZ slaves/					
	    -rw-------  named    named    root:object_r:named_cache_t      example.com.zone		<-- SELINUX context


	    dig -t axfr example.com @172.24.254.254
	    host -r station3.example.com localhost
	    host -r station103.example.com localhost
	    dig +norecurse station3.example.com @localhost


	    [root@station103 named]# host -r station3.example.com localhost		<-- non-recursive query to test the zone transfer and see where the data is coming from
	    Using domain server:
	    Name: localhost
	    Address: 127.0.0.1#53
	    Aliases: 

	    station3.example.com has address 172.24.0.3

	    [root@station103 named]# host -r station103.example.com localhost	<-- non-recursive query to test the zone transfer and see where the data is coming from
	    Using domain server:
	    Name: localhost
	    Address: 127.0.0.1#53
	    Aliases: 

	    station103.example.com has address 172.24.0.103
	    [root@station103 named]# dig +norecurse station3.example.com @localhost

	    ; <<>> DiG 9.2.4 <<>> +norecurse station3.example.com @localhost
	    ;; global options:  printcmd
	    ;; Got answer:
	    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11207
	    ;; flags: qr aa ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

	    ;; QUESTION SECTION:
	    ;station3.example.com.		IN	A

	    ;; ANSWER SECTION:
	    station3.example.com.	86400	IN	A	172.24.0.3

	    ;; AUTHORITY SECTION:
	    example.com.		86400	IN	NS	server1.example.com.

	    ;; ADDITIONAL SECTION:
	    server1.example.com.	86400	IN	A	172.24.254.254

	    ;; Query time: 7 msec
	    ;; SERVER: 127.0.0.1#53(localhost)
	    ;; WHEN: Sun Jan  2 12:57:25 2011
	    ;; MSG SIZE  rcvd: 92

	    [root@station103 named]# dig station3.example.com @localhost

	    ; <<>> DiG 9.2.4 <<>> station3.example.com @localhost
	    ;; global options:  printcmd
	    ;; Got answer:
	    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47217
	    ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

	    ;; QUESTION SECTION:
	    ;station3.example.com.		IN	A

	    ;; ANSWER SECTION:
	    station3.example.com.	86400	IN	A	172.24.0.3

	    ;; AUTHORITY SECTION:
	    example.com.		86400	IN	NS	server1.example.com.

	    ;; ADDITIONAL SECTION:
	    server1.example.com.	86400	IN	A	172.24.254.254

	    ;; Query time: 8 msec
	    ;; SERVER: 127.0.0.1#53(localhost)
	    ;; WHEN: Sun Jan  2 12:57:42 2011
	    ;; MSG SIZE  rcvd: 92



###################################################################################################
[ ] UNIT 6 - NETWORK FILE SHARING SERVICES
###################################################################################################

      File Transfer Protocol(FTP)
      Service Profile: FTP

	Required RPMS: 
	  vsftpd	<-- FTP

	Applicable Access Controls:
	      ----------------------------------------------------------
	      Access Control          Implementation
	      ----------------------------------------------------------
	      Application             /etc/vsftpd/vsftpd.conf
	      PAM                     /etc/pam.d/vsftpd
	      xinetd                  N/A
	      libwrap                 linked, use service name vsftpd
	      SELinux                 ensure correct file context; change one boolean
	      Netfilter, IPv6         disregard IPV6 access for now
	      Netfilter               tcp port 21; load ip_conntrack_ftp.ko to track the data connections

      Network File Service (NFS)
      Service Profile: NFS
      Port options for the Firewall
      NFS Server
      NFS utilities

	Required RPMS: 
	  nfs-utils	<-- NFS

	Applicable Access Controls:
	      ----------------------------------------------------------
	      Access Control          Implementation
	      ----------------------------------------------------------
	      Application             /etc/exports
	      PAM                     N/A
	      xinetd                  N/A
	      libwrap                 /sbin/portmap is compiled with libwrap.a
	      SELinux                 ensure correct file context; change one boolean
	      Netfilter, IPv6         disregard IPV6 access for now
	      Netfilter               tcp and udp ports 111 (portmap) and 2049 (nfs) are constant; set other port values in configuration

      Client-side NFS
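
	    Client-side, mounting the share is a one-liner; a sketch (the server name and mount point are illustrative):

```
mount -t nfs station103.example.com:/home/nfstest /mnt/nfs

# equivalent /etc/fstab entry (nfs type, default options):
station103.example.com:/home/nfstest  /mnt/nfs  nfs  defaults  0 0
```

	    showmount -e station103.example.com lists what the server exports before you mount.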

      Samba services
      Service Profile: SMB

	Required RPMS: 
	  samba		<-- SAMBA
	  samba-common
	  samba-client

	Also look at the related tools
	--------------------------------
	  system-config-samba
	  testparm		<-- to check the syntax of smb.conf

	  smbclient						<-- "FTP-LIKE" command line access
	  smbclient -L						<-- allows for simple view of shared services
	  smbclient //station103.example.com/legal -U karl	<-- logs in as user karl

	  nmblookup		<-- queries WINS server

	  smbpasswd -a joe	<-- adds user joe and sets the Samba password
	  tdbdump /etc/samba/secrets.tdb	<-- reads content of the binary file

	  mount -t cifs //station103/legal /mnt/samba -o user=karl	<-- use cifs
	  
	  smbmount //station103/legal /mnt/samba -o user=karl		<-- use smbfs (deprecated in RHEL5)
	  smbumount

	  //station103/legal /mnt/samba cifs username=bob,uid=bob 0 0 			<-- entry in /etc/fstab
	  //station103/legal /mnt/samba cifs username=bob,uid=bob,noauto 0 0 		<-- to not require to enter the password before the machine will boot

	  //station103/legal /mnt/samba cifs credentials=/etc/samba/cred.txt 0 0	<-- to guard against prying eyes!

	  cat /etc/samba/cred.txt
	  username=<uname>
	  password=<passwd>
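
	  To guard the credentials against prying eyes, the file must not be world-readable; a minimal sketch (a local ./cred.txt stands in for /etc/samba/cred.txt here):

```shell
# Create the credentials file and lock permissions down to the owner.
# ./cred.txt stands in for /etc/samba/cred.txt in this sketch.
cred=./cred.txt
cat > "$cred" <<'EOF'
username=bob
password=secret
EOF
chmod 600 "$cred"               # owner read/write only
stat -c '%a %n' "$cred"         # prints: 600 ./cred.txt
```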

	Applicable Access Controls:
	      ----------------------------------------------------------
	      Access Control          Implementation
	      ----------------------------------------------------------
	      Application             /etc/samba/smb.conf
	      PAM                     /etc/pam.d/samba ; but disabled by default with "obey pam restrictions = no" in /etc/samba/smb.conf
	      xinetd                  N/A
	      libwrap                 N/A
	      SELinux                 ensure correct file context; change one boolean
	      Netfilter, IPv6         disregard IPV6 access for now
	      Netfilter               tcp port 445 (microsoft-ds); also tcp 139 and udp 137:138 if NetBIOS is used

	Some references:

	    http://cri.ch/linux/docs/sk0001.html	<-- Mount a Windows share on Linux with Samba
	    http://www.cyberciti.biz/tips/how-to-mount-remote-windows-partition-windows-share-under-linux.html	<-- How to mount remote windows partition (windows share) under Linux
	    http://goo.gl/iYlvi			<-- smbmount sample
	    http://goo.gl/QINih			<-- smbmount on large files
	    http://en.wikipedia.org/wiki/Smbmount	<-- saying smbmount is deprecated in RHEL5
	    http://goo.gl/4JyJe			<-- Good discussion on the difference between smbmount mount.cifs and mount -t

      Configuring Samba
      Overview of smb.conf Sections

	    ** smb.conf is styled after the .ini file format and is split into different [ ] sections
		[global] : section for server generic or global settings
		[homes]  : used to grant some or all users access to their home directories
		[printers] : defines printer resources and services
	    ** Use testparm to check the syntax of /etc/samba/smb.conf

      Configuring File and Directory Sharing
      Printing to the Samba Server
      Authentication Methods
      Passwords
      Samba Syntax Utility
      Samba Client Tools: smbclient
      Samba Client Tools: nmblookup
      Samba Clients Tools: mounts
      Samba Mounts in /etc/fstab


      Solutions: A working FTP server accessible to hosts and users
                 An available but invisible upload directory via FTP
      -------------------------------------------------------------

	    man -k ftp | grep selinux
	    man ftpd_selinux				<-- Security-Enhanced Linux policy for ftp daemons

	    setsebool -P allow_ftpd_anon_write on
	    chcon -t public_content_rw_t incoming	<-- Allow ftp servers to read and write /var/tmp/incoming, publicly writable!!!
                                                            requires the allow_ftpd_anon_write boolean to be set

	    IPTABLES_MODULES="ip_conntrack_ftp"

	    iptables -t filter -I CLASS-RULES 6 -s 172.24.0.0/16 --protocol tcp -m tcp --dport 21 -m state --state NEW -j ACCEPT

	    [root@station103 ~]# iptables -nvL --line-numbers
	    Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
	    num   pkts bytes target     prot opt in     out     source               destination         
	    1       32  1872 CLASS-RULES  all  --  *      *       0.0.0.0/0            0.0.0.0/0           

	    Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
	    num   pkts bytes target     prot opt in     out     source               destination         

	    Chain OUTPUT (policy ACCEPT 28 packets, 2960 bytes)
	    num   pkts bytes target     prot opt in     out     source               destination         

	    Chain CLASS-RULES (1 references)
	    num   pkts bytes target     prot opt in     out     source               destination         
	    1        0     0 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0           
	    2        0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0           
	    3       32  1872 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED 
	    4        0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           tcp dpt:22 
	    5        0     0 ACCEPT     udp  --  *      *       0.0.0.0/0            0.0.0.0/0           state NEW udp dpt:514 
	    6        0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:21 state NEW 
	    7        0     0 LOG        all  --  *      *       0.0.0.0/0            0.0.0.0/0           LOG flags 0 level 4 
	    8        0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable 

      Solutions: A working NFS share of the /home/nfstest directory
      -------------------------------------------------------------

	    check nfs and nfslock services

	    rpcinfo -p			<-- list RPC services
	    showmount -e localhost		<-- list NFS shares

	    [root@station103 sysconfig]# cat /etc/sysconfig/nfs 
	    MOUNTD_PORT="4002"
	    STATD_PORT="4003"
	    LOCKD_TCPPORT="4004"
	    LOCKD_UDPPORT="4004"
	    RQUOTAD_PORT="4005"

	    iptables -t filter -I CLASS-RULES 7 -s 172.24.0.0/16 -p tcp --dport 111 -j ACCEPT
	    iptables -t filter -I CLASS-RULES 7 -s 172.24.0.0/16 -p udp --dport 111 -j ACCEPT
	    iptables -t filter -I CLASS-RULES 7 -s 172.24.0.0/16 -p tcp --dport 2049 -j ACCEPT
	    iptables -t filter -I CLASS-RULES 7 -s 172.24.0.0/16 -p udp --dport 2049 -j ACCEPT
	    iptables -t filter -I CLASS-RULES 7 -s 172.24.0.0/16 -p tcp --dport 4002:4005 -j ACCEPT
	    iptables -t filter -I CLASS-RULES 7 -s 172.24.0.0/16 -p udp --dport 4002:4005 -j ACCEPT

	    [root@station103 ~]# cat /etc/hosts.allow 
	    vsftpd: 172.24.
	    portmap: 172.24.
	    [root@station103 ~]# cat /etc/hosts.deny
	    ALL:ALL EXCEPT 172.24. 

	    [root@station103 ~]# cat /etc/exports 
	    /home/nfstest	*.example.com(rw,sync)

	    exportfs -r			<-- re-export after editing /etc/exports, then verify with showmount -e


      Solutions: 
                sequence1- A working Samba server accessible to several users with smbclient (on their home directories)
                sequence2- A Linux directory that only the "legal" group can use, and a Samba share that only "legal" group users can access and modify
      -------------------------------------------------------------

      sequence1
      -----------------------------
	    Create the users with the same secondary group "legal", then set their Samba passwords with smbpasswd -a
	    smbclient //station103.example.com/joe -U joe		<-- by DEFAULT you can share a user's home directory given that you can authenticate

      sequence2
      -----------------------------
	    mkdir -p /home/depts/legal 
	    chgrp legal /home/depts/legal
	    chmod 3770 /home/depts/legal

	    vi /etc/samba/smb.conf
	    [legal]
	      comment = legal's files
	      path = /home/depts/legal
	      public = no
	      write list = @legal
	      create mask = 0660

	    [example]			<-- browseable, available only to example.com
	      comment = example
	      path = /example
	      browseable = yes
	      hosts allow = 172.24.


	    service smb restart

	    [root@station103 samba]# smbclient -L localhost -N
	    Anonymous login successful
	    Domain=[MYGROUP] OS=[Unix] Server=[Samba 3.0.10-1.4E.2]

		    Sharename       Type      Comment
		    ---------       ----      -------
		    legal           Disk      legal's files
		    IPC$            IPC       IPC Service (Samba Server)
		    ADMIN$          IPC       IPC Service (Samba Server)
	    Anonymous login successful
	    Domain=[MYGROUP] OS=[Unix] Server=[Samba 3.0.10-1.4E.2]

		    Server               Comment
		    ---------            -------
		    STATION103           Samba Server

		    Workgroup            Master
		    ---------            -------
		    MYGROUP              STATION103

	    smbclient //station103.example.com/legal -U joe		<-- mount the "legal", then create a file
      -------------------------------------------------------------


###################################################################################################
[ ] UNIT 7 - WEB SERVICES
###################################################################################################


	Required RPMS: 
	  httpd		<-- HTTP
	  httpd-devel
	  httpd-manual

	Applicable Access Controls:
	      ----------------------------------------------------------
	      Access Control          Implementation
	      ----------------------------------------------------------
	      Application             /etc/httpd/conf/httpd.conf and /etc/httpd/conf.d/*
	      PAM                     N/A
	      xinetd                  N/A
	      libwrap                 N/A
	      SELinux                 ensure correct file context; change one boolean
	      Netfilter, IPv6         disregard IPV6 access for now
	      Netfilter               tcp ports 80 and 443

      Apache Overview

	    ** Process control:
		spawn processes before they are needed; adapt the number of processes to demand
	    ** Dynamic module loading:
		run-time extensibility without recompiling
	    ** Virtual hosts:
		Multiple web sites may share the same web server

      Service Profile: HTTPD

	    ** Type: SystemV-managed service
	    ** Packages: httpd, httpd-devel, httpd-manual
	    ** Daemon: /usr/sbin/httpd
	    ** Script: /etc/init.d/httpd
	    ** Ports: 80(http), 443(https)
	    ** Configuration: /etc/httpd/*, /var/www/*
	    ** Related: system-config-httpd, mod_ssl

      Apache Configuration

	    ** Main server configuration stored in /etc/httpd/conf/httpd.conf controls general web server parameters, regular virtual hosts,
		and access; defines filenames and MIME types
	    ** Module configuration files stored in /etc/httpd/conf.d/*
	    ** DocumentRoot default /var/www/html/

      Apache Server Configuration

	    ** Min and Max Spare Servers
	    ** Log file configuration
	    ** Host name lookup
	    ** Modules
	    ** Virtual Hosts
	    ** user and group

      Apache Namespace Configuration

	    ** Specifying a directory for users' pages:
		UserDir public_html
	    ** MIME types configuration:
		AddType application/x-httpd-php .phtml
		AddType text/html .htm
	    ** Declaring index files for directories:
		DirectoryIndex index.html default.htm

      Virtual Hosts

	    NameVirtualHost 192.168.0.100:80
	    <VirtualHost 192.168.0.100:80>
	    ServerName virt1.com
	    DocumentRoot /virt1
	    </VirtualHost>
	    <VirtualHost 192.168.0.100:80>
	    ServerName virt2.com
	    DocumentRoot /virt2
	    </VirtualHost>

      Apache Access Configuration

	    ** Apache provides directory- and file-level host-based access control
	    ** Host specifications may include dot notation numerics, network/netmask, and dot notation hostnames and domains
	    ** The Order statement provides control over "order", but not always in the way one might expect

      Apache Syntax Utilities

	    ** service httpd configtest
	    ** apachectl configtest
	    ** httpd -t
	    ** Checks both httpd.conf and ssl.conf

      Using .htaccess Files

	    ** Change a directory's configuration:
	     add mime-type definitions
	     allow or deny certain hosts
	    ** Setup user and password databases:
	     AuthUserFile directive
	     htpasswd command:
	    htpasswd -cm /etc/httpd/.htpasswd bob
	    htpasswd -m /etc/httpd/.htpasswd alice
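
	    If htpasswd is not at hand, an equivalent entry can be generated with openssl (a sketch; the fixed salt only makes the output reproducible, htpasswd normally picks a random one):

```shell
# Generate an Apache MD5 (apr1) password hash and print an .htpasswd-style line.
hash=$(openssl passwd -apr1 -salt xyzzy redhat)
echo "bob:$hash"                # bob:$apr1$xyzzy$...
```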

      .htaccess Advanced Example

	    AuthName "Bob's Secret Stuff"
	    AuthType basic
	    AuthUserFile /var/www/html/.htpasswd
	    AuthGroupFile /var/www/html/.htgroup
	    <Limit GET>
	    require group staff
	    </Limit>
	    <Limit PUT POST>
	    require user bob
	    </Limit>

      CGI

	    ** CGI programs are restricted to separate directories by the ScriptAlias directive:
		ScriptAlias /cgi-bin/ /path/cgi-bin/
	    ** Apache can greatly speed up CGI programs with loaded modules such as mod_perl

      Notable Apache Modules

	    ** mod_perl
	    ** mod_php
	    ** mod_speling

      Apache Encrypted Web Server

	    ** Apache and SSL: https (port 443)
		mod_ssl
		/etc/httpd/conf.d/ssl.conf
	    ** Encryption Configuration:
		certificate: /etc/pki/tls/certs/your_host.crt
		private key: /etc/pki/tls/private/your_host.key
	    ** Certificate/key generation:
		/etc/pki/tls/certs/Makefile
		self-signed cert: make testcert
		certificate signature request: make certreq
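
	    The same self-signed certificate can be produced directly with openssl instead of the Makefile (a sketch; filenames and the CN are illustrative):

```shell
# One-step key + self-signed certificate, no passphrase on the key.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout your_host.key -out your_host.crt \
    -subj "/CN=www103.example.com"
# Verify what was generated:
openssl x509 -in your_host.crt -noout -subject
```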

      Squid Web Proxy Cache

	    ** Squid supports caching of FTP, HTTP, and other data streams
	    ** Squid will forward SSL requests directly to origin servers or to one other proxy
	    ** Squid includes advanced features including access control lists, cache hierarchies, and HTTP server acceleration

      Service Profile: Squid

	    ** Type: SystemV-managed service
	    ** Package: squid
	    ** Daemon: /usr/sbin/squid
	    ** Script: /etc/init.d/squid
	    ** Port: 3128(squid), (configurable)
	    ** Configuration: /etc/squid/*

      Useful parameters in /etc/squid/squid.conf

	    ** http_port 3128
	    ** cache_mem 8 MB
	    ** cache_dir ufs /var/spool/squid 100 16 256
	    ** acl all src 0.0.0.0/0.0.0.0
	    ** acl localhost src 127.0.0.1/255.255.255.255
	    ** http_access allow localhost
	    ** http_access deny all


	Required RPMS: 
	  squid		<-- SQUID

	Applicable Access Controls:
	      ----------------------------------------------------------
	      Access Control          Implementation
	      ----------------------------------------------------------
	      Application             /etc/squid/squid.conf
	      PAM                     /etc/pam.d/squid
	      xinetd                  N/A
	      libwrap                 N/A
	      SELinux                 ensure correct file context; change one boolean
	      Netfilter, IPv6         disregard IPV6 access for now
	      Netfilter               default tcp port is 3128




      Solutions: To implement a web (HTTP) server with a virtual host and CGI capability
                sequence1- A working web services implementation: with virtual hosting, CGI capability, and a proxy server
                sequence2- A web server with a CGI script
                sequence3- A password protected web server
                sequence4- A working squid  (ICP) proxy server
      -----------------------------------------------------------------------------------------------------------

      sequence1
      -----------------------------
      [root@station103 ~]# cat /etc/services | grep www-http
      http		80/tcp		www www-http	# WorldWideWeb HTTP
      http		80/udp		www www-http	# HyperText Transfer Protocol

      [root@station103 ~]# cat /etc/services | grep 443
      https		443/tcp				# MCom
      https		443/udp				# MCom

      [root@station103 ~]# ldd $(which httpd) | grep libwr		<-- check whether httpd is linked with libwrap

      [root@station103 ~]# strings $(which httpd) | grep hosts	<-- check whether it has references to hosts.allow or hosts.deny; it must output "hosts_access"

      iptables -t filter -I CLASS-RULES 4 -s 172.24.0.0/16 -p tcp --dport 80 -j ACCEPT

      [root@station103 ~]# iptables -nvL --line-numbers
      Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
      num   pkts bytes target     prot opt in     out     source               destination         
      1    13082 1961K CLASS-RULES  all  --  *      *       0.0.0.0/0            0.0.0.0/0           

      Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
      num   pkts bytes target     prot opt in     out     source               destination         

      Chain OUTPUT (policy ACCEPT 13685 packets, 1805K bytes)
      num   pkts bytes target     prot opt in     out     source               destination         

      Chain CLASS-RULES (1 references)
      num   pkts bytes target     prot opt in     out     source               destination         
      1      214 27786 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0           
      2        0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0           
      3     1195 1168K ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED 
      4        0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:80 
      5       12   720 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:445 
      6    11317  697K ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           tcp dpt:22 
      7        0     0 ACCEPT     udp  --  *      *       0.0.0.0/0            0.0.0.0/0           state NEW udp dpt:514 
      8        0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:21 state NEW 
      9        0     0 ACCEPT     udp  --  *      *       172.24.0.0/16        0.0.0.0/0           udp dpts:4002:4005 
      10       0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpts:4002:4005 
      11       0     0 ACCEPT     udp  --  *      *       172.24.0.0/16        0.0.0.0/0           udp dpt:2049 
      12       0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:2049 
      13       0     0 ACCEPT     udp  --  *      *       172.24.0.0/16        0.0.0.0/0           udp dpt:111 
      14       0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:111 
      15     344 67450 LOG        all  --  *      *       0.0.0.0/0            0.0.0.0/0           LOG flags 0 level 4 
      16     344 67450 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable 

      [root@station103 conf.d]# pwd
      /etc/httpd/conf.d

      [root@station103 conf.d]# cat www103.example.com.conf 	<-- YOU CAN FIND THIS CONFIG ON httpd.conf, you just have to add the directory section
      NameVirtualHost 172.24.0.103:80
      <VirtualHost 172.24.0.103:80>
	  ServerAdmin root@station103.example.com
	  DocumentRoot /var/www/virtual/www103.example.com/html
	  ServerName www103.example.com
	  ErrorLog logs/www103.example.com-error_log
	  CustomLog logs/www103.example.com-access_log combined
	  <Directory /var/www/virtual/www103.example.com/html>
	      Options Indexes Includes
	  </Directory>
      </VirtualHost>


      [root@station103 conf.d]# service httpd configtest
      Syntax OK

      [root@station103 conf.d]# service httpd reload
      Reloading httpd:                                           [  OK  ]

      elinks http://www103.example.com		<-- verify from cracker.org 
      

      sequence2
      -----------------------------

      NameVirtualHost 172.24.0.103:80
      <VirtualHost 172.24.0.103:80>
	  ServerAdmin root@station103.example.com
	  DocumentRoot /var/www/virtual/www103.example.com/html
	  ServerName www103.example.com
	  ErrorLog logs/www103.example.com-error_log
	  CustomLog logs/www103.example.com-access_log combined
	  <Directory /var/www/virtual/www103.example.com/html>
	      Options Indexes Includes
	  </Directory>
	  ScriptAlias /cgi-bin/ /var/www/virtual/www103.example.com/cgi-bin/		<-- add this to execute test.sh
      </VirtualHost>
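
      A minimal test.sh for that cgi-bin/ directory (a sketch; a CGI response must print the Content-type header and a blank line before the body):

```shell
# Build a throwaway cgi-bin/ with a minimal CGI script (path shortened here).
mkdir -p cgi-bin
cat > cgi-bin/test.sh <<'EOF'
#!/bin/bash
echo "Content-type: text/html"
echo ""
echo "<html><body>it works</body></html>"
EOF
chmod 755 cgi-bin/test.sh       # must be executable by apache
./cgi-bin/test.sh               # run it by hand before testing via the browser
```

      On the real server, also confirm the SELinux context of the scripts (httpd_sys_script_exec_t) so httpd is allowed to execute them.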


      sequence3
      -----------------------------

      [root@station103 html]# cat .htaccess 			<-- triggers password authentication
      AuthName "restricted stuff"
      AuthType Basic
      AuthUserFile /etc/httpd/conf/.htpasswd-www103
      require valid-user

      cd /etc/httpd/conf/
      ls -ltr
      htpasswd -mc .htpasswd-www103 karl
      less .htpasswd-www103 
	karl:$apr1$vC4Hp/..$Mh0tVzOtbGx/76lWimd0b/
      chgrp apache .htpasswd-www103 
      chmod 640 .htpasswd-www103 
      service httpd reload
      vi ../conf.d/www103.example.com.conf 
      service httpd restart

      NameVirtualHost 172.24.0.103:80
      <VirtualHost 172.24.0.103:80>
	  ServerAdmin root@station103.example.com
	  DocumentRoot /virtual/html
	  ServerName www103.example.com
	  ErrorLog logs/www103.example.com-error_log
	  CustomLog logs/www103.example.com-access_log combined
	  <Directory /virtual/html>
	      Options Indexes Includes
	      AllowOverride AuthConfig				<-- this was added for the password prompt to take effect
	  </Directory>
	  ScriptAlias /cgi-bin/ /virtual/html/cgi-bin/
      </VirtualHost>


      sequence4
      -----------------------------

      Add squid 3128 as HTTP proxy server on Firefox		<-- to use port 8080, edit the parameter "http_port" on squid.conf

      [root@station103 conf.d]# iptables -t filter -I CLASS-RULES 4 -s 172.24.0.0/16 -p tcp --dport 3128 -j ACCEPT

      [root@station103 conf.d]# iptables -nvL --line-numbers
      Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
      num   pkts bytes target     prot opt in     out     source               destination         
      1    29873   20M CLASS-RULES  all  --  *      *       0.0.0.0/0            0.0.0.0/0           

      Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
      num   pkts bytes target     prot opt in     out     source               destination         

      Chain OUTPUT (policy ACCEPT 19430 packets, 2092K bytes)
      num   pkts bytes target     prot opt in     out     source               destination         

      Chain CLASS-RULES (1 references)
      num   pkts bytes target     prot opt in     out     source               destination         
      1     1545  238K ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0           
      2        0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0           
      3    12845   18M ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED 
      4        0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:3128 		<-- ADD THIS FOR SQUID
      5        0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:80 
      6        0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:445 
      7    15407 1089K ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           tcp dpt:22 
      8        0     0 ACCEPT     udp  --  *      *       0.0.0.0/0            0.0.0.0/0           state NEW udp dpt:514 
      9        0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:21 state NEW 
      10       0     0 ACCEPT     udp  --  *      *       172.24.0.0/16        0.0.0.0/0           udp dpts:4002:4005 
      11       0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpts:4002:4005 
      12       0     0 ACCEPT     udp  --  *      *       172.24.0.0/16        0.0.0.0/0           udp dpt:2049 
      13       0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:2049 
      14       0     0 ACCEPT     udp  --  *      *       172.24.0.0/16        0.0.0.0/0           udp dpt:111 
      15       0     0 ACCEPT     tcp  --  *      *       172.24.0.0/16        0.0.0.0/0           tcp dpt:111 
      16      76 16783 LOG        all  --  *      *       0.0.0.0/0            0.0.0.0/0           LOG flags 0 level 4 
      17      76 16783 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable 
      [root@station103 conf.d]# 

      Open /etc/squid/squid.conf,
      then search for "Recommended minimum"

      Add the following acls
	  acl example src 172.24.0.0/16
	  acl otherguys dstdomain .yahoo.com
	  acl otherguys dstdomain .hotmail.com

      And the following further down below
	  http_access deny otherguys		<-- the first matching rule wins, so the DENY must come before the ALLOW
	  http_access allow example
	  http_access allow localhost
	  http_access deny all

      service squid reload



###################################################################################################
[ ] UNIT 8 - ELECTRONIC MAIL SERVICES
###################################################################################################

      Essential Email Operation

	    [img[picturename| http://lh4.ggpht.com/_F2x5WXOJ6Q8/TQhtNIU_rYI/AAAAAAAAA-g/oZWGdtkWQLY/EssentialEmailOperation.png]]

      Simple Mail Transport Protocol

	    ** RFC-standard protocol for talking to MTAs
		Almost always uses TCP port 25
		Extended SMTP (ESMTP) provides enhanced features for MTAs
		An MTA often uses the Local Mail Transfer Protocol (LMTP) to talk to local delivery agents
	    ** Example MSP:
		mail -vs 'Some Subject' student@stationX.example.com
	    ** Use telnet to troubleshoot SMTP connections
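
	    Such a hand-typed session looks like this (a sketch; banners and addresses are illustrative):

```
telnet stationX.example.com 25
220 stationX.example.com ESMTP Postfix
HELO station103.example.com
250 stationX.example.com
MAIL FROM:<root@station103.example.com>
250 Ok
RCPT TO:<student@stationX.example.com>
250 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: test

hello
.
250 Ok: queued
QUIT
221 Bye
```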

      SMTP Firewalls

	    ** Network layer with Netfilter stateful inspection
		Inbound and outbound to TCP port 25
	    ** Application layer for relay protection
		Internal MTA to which users connect for sending and receiving
		DMZ-based outgoing smart host which relays mail from the internal MTA
		DMZ-based inbound mail hub which relays mail to the internal MTA
		Filtering rules within the DMZ MTAs or integrated applications (e.g., SpamAssassin)

      Mail Transport Agents

	    ** Red Hat Enterprise Linux includes three MTAs:
		Sendmail (default MTA), Postfix, and Exim
	    ** Common features:
		Support virtual hosting
		Provide automatic retry for failed delivery and other error conditions
		Interoperable with SpamAssassin
	    ** Default access control:
		Sendmail and Postfix have no setuid components
		Listen on loopback only
		Relaying is disabled

      Service Profile: Sendmail
      Intro to Sendmail Configuration
      Incoming Sendmail Configuration
      Outgoing Sendmail Configuration
      Inbound Sendmail Aliases
      Outbound Address Rewriting
      Sendmail SMTP Restrictions
      Sendmail Operation
      Using alternatives to Switch MTAs
      Service Profile: Postfix
      Intro to Postfix Configuration
      Incoming Postfix Configuration
      Outgoing Postfix Configuration
      Inbound Postfix Aliases
      Outbound Address Rewriting
      Postfix SMTP Restrictions
      Postfix Operation
      Procmail, A Mail Delivery Agent
      Procmail and Access Controls
      Intro to Procmail Configuration
      Sample Procmail Recipe
      Mail Retrieval Protocols
      Service Profile: Dovecot
      Dovecot Configuration
      Verifying POP Operation
      Verifying IMAP Operation


      Solutions: To build common skills with MTA configuration
                sequence1- A working infrastructure for mail retrieval via POPs and IMAPs
                sequence2- User accounts and a Postfix server that starts at boot-time
                sequence3- A mail server that is available on the classroom subnet and has essential host-based access controls in place
                sequence4- An MTA that allows selective relaying
                sequence5- Message archival and address rewriting
                sequence6- A working Procmail recipe
      -----------------------------------------------------------------------------------------------------------

      sequence1
      -----------------------------

      yum install -y dovecot				<-- install dovecot!

      make -C /usr/share/ssl/certs dovecot.pem
      [root@station103 certs]# cp -p dovecot.pem ../private/
      [root@station103 certs]# ls ../private/dovecot.pem 

      [root@station103 certs]# cat /etc/dovecot.conf  | grep protocols
      protocols = imaps pop3s

      cat /etc/services | grep imaps
      cat /etc/services | grep pop3s

      iptables -t filter -I CLASS-RULES 4 -p tcp --dport 993 -j ACCEPT
      iptables -t filter -I CLASS-RULES 4 -p tcp --dport 995 -j ACCEPT

      chkconfig dovecot on
      service dovecot restart

      echo 'this is a test' | mail -s test student
      mutt -f imaps://student@172.24.0.103


      sequence2
      -----------------------------

      for i in myuser1 myuser2 compliance; do  useradd $i; echo redhat | passwd --stdin $i; done

      yum install -y postfix 				<-- install postfix! and unconfigure sendmail..

      service sendmail stop
      chkconfig sendmail off
      alternatives --config mta				<-- choose postfix!
      service postfix restart
      chkconfig postfix on
      chkconfig --list postfix
      cp -rpv /etc/postfix /tmp/postfix.orig		<-- backup!


      sequence3
      -----------------------------

      iptables -nvL --line-numbers | grep -i established
      cat /etc/services | grep 25					<-- this is smtp, add it on IPTABLES
      iptables -nvL --line-numbers 

      iptables -t filter -I CLASS-RULES 4 -s 172.24.0.0/16 -p tcp --dport 25 -j ACCEPT
      iptables -t filter -I CLASS-RULES 4 -s 172.25.0.0/16 -p tcp --dport 25 -j ACCEPT
      service iptables save
      service iptables restart
      iptables -nvL --line-numbers
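One detail worth noting: both commands insert at position 4, so the second insert lands above the first. The relevant `iptables-save` fragment would therefore read roughly like this (other CLASS-RULES entries omitted):

```
-A CLASS-RULES -s 172.25.0.0/16 -p tcp --dport 25 -j ACCEPT
-A CLASS-RULES -s 172.24.0.0/16 -p tcp --dport 25 -j ACCEPT
```

Both rules ACCEPT here so the relative order does not matter, but it would if one of them were a REJECT.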

      /etc/postfix/main.cf						<-- edit main.cf, configure the listening interface
      inet_interfaces = localhost					<-- default
      inet_interfaces = 172.24.0.103					<-- change it to this

      service postfix restart
      netstat -tupln | grep master

      [root@station3 ~]# telnet station103.example.com 25		<-- test the connectivity from station3
      Trying 172.24.0.103...
      Connected to station103.example.com (172.24.0.103).
      Escape character is '^]'.
      220 station103.example.com ESMTP Postfix
      ^]
      telnet> quit
      Connection closed.

      [root@station103 postfix]# postconf smtpd_client_restrictions
      smtpd_client_restrictions = 

      smtpd_client_restrictions = check_client_access hash:/etc/postfix/access		<-- add on main.cf

      cat /etc/postfix/access
      127.0.0.1         OK
      172.24.0.0/16     OK
      172.25.0.0/16     OK
      0.0.0.0/0         REJECT

      postmap /etc/postfix/access					<-- build the access.db hash that Postfix actually reads

      postconf mydestination						<-- postconf!
      postconf myorigin

      echo 'hey root' | mail -s test root
      echo 'hey root' | mail -s test student				<-- it works!
      cat /var/spool/mail/root
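Pulling the sequence3 settings together, the relevant main.cf fragment would end up looking roughly like this (values taken from the notes above; treat it as a sketch, not a verified config):

```
# /etc/postfix/main.cf (sketch)
inet_interfaces = 172.24.0.103
smtpd_client_restrictions = check_client_access hash:/etc/postfix/access
```

Remember to run postmap on /etc/postfix/access and restart postfix after changing either file.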


      sequence4
      -----------------------------

      yum install -y sendmail-cf						<-- sendmail-cf!

      cat /etc/mail/sendmail.mc | grep  DAEMON_OPTIONS			<-- comment out (dnl) the line that restricts it to 127.0.0.1
      make -C /etc/mail						<-- regenerate sendmail.cf (the RHEL init script may also do this on restart)

      alternatives --config mta
      service sendmail restart

      echo 'hello' | mail -s test root@station3.example.com

      cat /etc/postfix/access
      127.0.0.1         OK
      172.24.0.0/16     RELAY
      172.25.0.0/16     OK
      0.0.0.0/0         REJECT

      postmap /etc/postfix/access 


      sequence5
      -----------------------------

      vi /etc/aliases					<-- alias!!
	myuser1.alias:	myuser1
	mylist:	myuser1,myuser2,student

      newaliases					<-- rebuild /etc/aliases.db after editing
      service postfix restart
      echo 'message alias' | mail -s mailalias mylist@station103.example.com
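Before rebuilding the alias database it can be handy to sanity-check the file format. The sketch below runs against a temp copy of the two entries above (the real file is /etc/aliases, and the regex is an illustrative approximation of the "name: target[,target...]" form, not the full aliases(5) grammar):

```shell
# write the two entries to a temp file and check each line's shape
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
myuser1.alias: myuser1
mylist: myuser1,myuser2,student
EOF
# flag any line that is not "name: target[,target...]"
if grep -Evq '^[A-Za-z0-9._-]+:[[:space:]]*[A-Za-z0-9._-]+([[:space:]]*,[[:space:]]*[A-Za-z0-9._-]+)*$' "$tmp"; then
  echo "aliases: syntax problem"
else
  echo "aliases OK"    # -> aliases OK for the entries above
fi
```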


      sequence6
      -----------------------------
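The notes leave sequence6 blank. A minimal recipe of the kind the "Sample Procmail Recipe" slide covers might look like the sketch below (mailbox and folder names are illustrative, not from the lab):

```
# ~/.procmailrc (sketch)
MAILDIR=$HOME/mail
# file any message whose Subject contains "test" into a separate mbox,
# using a local lockfile (the trailing colon on :0:)
:0:
* ^Subject:.*test
testmsgs
```

Messages that match no recipe fall through to normal delivery.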


###################################################################################################
[ ] UNIT 9 - ACCOUNT MANAGEMENT
###################################################################################################

      User Accounts
      Account Information (Name Service)
      Name Service Switch (NSS)
      getent
      Authentication
      Pluggable Authentication Modules (PAM)
      PAM Operation
      /etc/pam.d/ Files: Tests
      /etc/pam.d/ Files: Control Values
      Example: /etc/pam.d/login File
      The system_auth file
      pam_unix.so
      Network Authentication
      auth Modules
      Password Security
      Password Policy
      session Modules
      Utilities and Authentication
      PAM Troubleshooting
}}}

[img[picturename| http://lh6.ggpht.com/_F2x5WXOJ6Q8/TQhtM0dOccI/AAAAAAAAA-U/tdBtUUKt4Vo/NetfilterTablesAndChains.png]]

[img[picturename| http://lh6.ggpht.com/_F2x5WXOJ6Q8/TQhtNBPVqFI/AAAAAAAAA-Y/yEo6Y0o-IBQ/NetfilterPacketFlow.png]]

[img[picturename| http://lh3.ggpht.com/_F2x5WXOJ6Q8/TQhtNI8mHSI/AAAAAAAAA-c/GqP0HzgCDxA/NetfilterSimpleExample.png]]

[img[picturename| http://lh4.ggpht.com/_F2x5WXOJ6Q8/TQhtNIU_rYI/AAAAAAAAA-g/oZWGdtkWQLY/EssentialEmailOperation.png]]
Exam schedule
http://www.itgroup.com.ph/corporate/events/red_hat_enterprise_linux_training_philippines

RedHat Training Catalogue 2013
http://images.engage.redhat.com/Web/RedHat/RedHatTrainingCatalogue2013.pdf

''RHEL 6''
http://epistolatory.blogspot.com/2010/05/rhel-6-part-i-distros-new-features-for.html
http://epistolatory.blogspot.com/2010/11/rhel-6-part-ii-installation-of-rhel-6.html
http://epistolatory.blogspot.com/2010/12/rhel-6-part-iii-first-impressions-from.html
http://epistolatory.blogspot.com/2011/11/rhel-6-part-iv-placing-xfs-into.html

''RHEL 7''
http://epistolatory.blogspot.com/2014/07/first-sysadmin-impressions-on-rhel-7.html
http://www.techotopia.com/index.php/RHEL_6_Desktop_-_Starting_Applications_on_Login
http://www.techotopia.com/index.php/RHEL_5_Desktop_Startup_Programs_and_Session_Configuration
https://www.certdepot.net/rhel7-mount-unmount-cifs-nfs-network-file-systems/
https://linuxconfig.org/quick-nfs-server-configuration-on-redhat-7-linux
http://www.itzgeek.com/how-tos/linux/centos-how-tos/how-to-setup-nfs-server-on-centos-7-rhel-7-fedora-22.html
https://www.howtoforge.com/tutorial/setting-up-an-nfs-server-and-client-on-centos-7/
! 1) Image Management
> - ISO library
>> better to copy manually; it is a lot faster
> - Snapshots
>> - shut down the VM first before taking snapshots
>> - then you can preview, and either commit the current state or undo
> - Templates
>> - shut down before creating templates
> - Pools
! 2) High Availability

Red Hat Enterprise Virtualization High Availability requires
an out-of-band management interface such as IPMI, Dell
DRAC, HP iLO, IBM RSA or BladeCenter for host power
management. In the case of a failure these interfaces are
used to check the hardware status and physically power
down the host to prevent data corruption.

! 3) Live Migration
! 4) System Scheduler

There are three policies: 

a) NONE - no automatic load distribution

b) Even Distribution - balance the workload between the physical systems

You have to define the following: 
- Maximum Service Level <-- the peak utilization that triggers the live migration
- Time threshold <-- once utilization stays above the level for this long, VMs are live-migrated to other hosts

c) Power Saving - consolidate more VMs on fewer hosts

You have to define the following: 
- Maximum Service Level <-- when a host reaches this utilization, VMs are automatically live-migrated to idle hosts to balance the workload
- Minimum Service Level <-- when host utilization drops below this threshold, the power-saving policy kicks in and live migration automatically relocates all VMs to the remaining hosts
- Time threshold <-- once the threshold has been met for this long, the live migration to other hosts takes place

! 5) Power Saver

You must set up the out-of-band management module/controller first. http://en.wikipedia.org/wiki/Out-of-band_management

Types of OOB management device:

DRAC5 - Dell Remote Access Controller for Dell computers
ilo - HP Integrated Lights-Out standard
ipmilan - Intelligent Platform Management Interface
rsa - IBM Remote Supervisor Adapter
bladecenter - IBM BladeCenter Remote Supervisor Adapter

For IBM Bladecenter:
IBM BladeCenter: Management Module User's Guide
ftp://ftp.software.ibm.com/systems/support/system_x_pdf/42c4886.pdf
IBM eServer xSeries and BladeCenter Server Management
http://www.redbooks.ibm.com/abstracts/SG246495.html

! 6) Maintenance Manager
! 7) Monitoring and Reporting
! Configure YUM repository 

<<<
See the [[Yum]] setup

But here are the specifics:

1) Copy the contents of the DVD

mkdir -pv /RHEL/installers/5.4/{os,updates}/x86-64
cp -av /media/cdrom/* /RHEL/installers/5.4/os/x86-64 

2) Additional steps
* Also create a yum repository with all packages in it (Server, VT, Cluster, ClusterStorage): copy all the contents of these folders into one directory, the "Server" folder; that will be approx. 3182 packages
* Also copy the fence-agents packages (from RHN) to the "Server" folder
* Once yum is set up, install httpd (see [[Yum]] for details) so the repository can be reached from another machine, i.e. the "host" that will be added to the RHEVM

3) Import the GPG key 

rpm --import RPM-GPG-KEY-redhat-release

4) Install the createrepo RPM

5) Run createrepo -g 

then do 

yum clean all
yum list fence-agents    <== should list the package

6) Then set up HTTPD for the installers (see [[Yum]] for details)
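On the client side, the matching repo file would look roughly like this (the hostname `yumhost` is illustrative; the baseurl follows the directory layout above):

```
# /etc/yum.repos.d/rhel54-local.repo (sketch)
[rhel54-server]
name=RHEL 5.4 Server (local)
baseurl=http://yumhost/RHEL/installers/5.4/os/x86-64/Server
enabled=1
gpgcheck=1
```

With gpgcheck=1 the GPG key import in step 3 must have been done on the client as well.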
<<<


! Configure the storage 

<<<
For this one... I'll do NFS

1) chkconfig nfs on

2) create the directories, chown them, and edit /etc/exports (as root)

mkdir -p /data/images
mkdir -p /iso/images
mkdir /rhevdata
mkdir /rheviso

chown -R 36:36 /data
chown -R 36:36 /iso
chown -R 36:36 /rhevdata
chown -R 36:36 /rheviso

-- add this to /etc/exports
/data/images *(rw,no_root_squash,async)
/iso/images *(rw,no_root_squash,async)

3) add mount options on /etc/fstab

rhevhost1:/data/images /rhevdata nfs defaults 0 0
rhevhost1:/iso/images /rheviso nfs defaults 0 0

4) restart nfs service
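A quick sanity check on the two fstab lines before mounting; the sketch below runs against an inline copy of the entries, so it works anywhere (the `check` helper is just for illustration):

```shell
# each fstab line must have 6 fields and filesystem type "nfs"
check() {
  awk '$3 != "nfs" || NF != 6 { bad = 1; print "bad line: " $0 } END { exit bad }'
}
check <<'EOF' && echo "fstab entries OK"
rhevhost1:/data/images /rhevdata nfs defaults 0 0
rhevhost1:/iso/images /rheviso nfs defaults 0 0
EOF
```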
<<<

! Configure the RHEV bridge network 

<<<
here is the reference http://kbase.redhat.com/faq/docs/DOC-19071

on my notes: 

1) make changes on the network files
eth0
====
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=rhevm
TYPE=Ethernet
PEERDNS=yes
USERCTL=no
HWADDR=

rhevm
=====
DEVICE=rhevm
ONBOOT=yes
BOOTPROTO=static
TYPE=Bridge
PEERDNS=yes
USERCTL=no
IPADDR= 
NETMASK=

2) edit the /etc/hosts files to reflect the hostnames

the login page is at /RHEVManagerWeb/login.aspx
<<<


! Pre-req for RHEVM installation on WindowsServer2k3 SP2 32bit

<<<
Installed at the following order:

Note: better if you have filezilla on the Windows Server

1) Latest Red Hat Enterprise Virtualization Release Notes.
2) WindowsServer2k3 SP2 32bit
* Windows update, then install IE8
3) IIS
Add/Remove Programs -> Application Server
* Application Server Console
* ASP.NET
* Enable COM+ access
* Enable DTC access
* IIS
4) .NET Framework 3.5 SP1 with family update
5) PowerShell 1.0
<<<

RHEV: Why is a bond not available after a host restart?
http://kbase.redhat.com/faq/docs/DOC-26763

Supported platforms 
http://www.redhat.com/rhel/server/advanced/virt.html
Implementation I did on a very OLTP multi app schema database w/ some reporting
MGMT_P1 and MGMT_MTH -> 'RATIO' was used here
in PX, MGMT_P1 also serves as the dequeuing priority, aside from being used as a resource allocation percentage for CPU & IO
https://www.evernote.com/l/ADDqIZzGPg1CU7GzXB5FNzWnbDCTlO1WAs4

see also here [[resource manager - shares vs percentage, mgmt_mth]] for more references about shares on 11g and 12c


''How to Duplicate a Standalone Database: ASM to ASM'' http://www.colestock.com/blogs/labels/ASM.html
11gr2 DataGuard: Restarting DUPLICATE After a Failure https://blogs.oracle.com/XPSONHA/entry/11gr2_dataguard_restarting_dup
RMAN Reference
http://morganslibrary.org/reference/rman.html
* in this case below I need to re-create the controlfile
{{{
col checkpoint_change# format 9999999999999999
select 'controlfile' "SCN location",'SYSTEM checkpoint' name,checkpoint_change#
from v$database
union
select 'file in controlfile',to_char(count(*)),checkpoint_change#
from v$datafile
group by checkpoint_change#
union
select 'file header',to_char(count(*)),checkpoint_change#
from v$datafile_header
group by checkpoint_change#;

SCN location        NAME                                     CHECKPOINT_CHANGE#
------------------- ---------------------------------------- ------------------
controlfile         SYSTEM checkpoint                             7728034951671
file header         783                                           7729430480637
file in controlfile 783                                           7728034951671

}}}
{{{
restore database preview; 
recover database until scn 7689193749494 preview;

list backup of database summary completed after 'sysdate - 1';
restore database preview summary from tag = TAG20140108T141855;
list archivelog from scn 2475111 until scn 2475374; (+1 on end) 
BACKUP ARCHIVELOG FROM SEQUENCE 7754 UNTIL SEQUENCE 7761;
BACKUP ARCHIVELOG FROM SCN 7689190283437 UNTIL SCN 7689193749495;

RESTORE DATABASE PREVIEW ;
RESTORE DATABASE VALIDATE;
RESTORE ARCHIVELOG FROM sequence xx UNTIL SEQUENCE yy THREAD nn VALIDATE;
RESTORE CONTROLFILE VALIDATE;
RESTORE SPFILE VALIDATE;
}}}

https://goldparrot.wordpress.com/2011/05/16/how-to-find-exact-scn-number-for-oracle-restore/
http://damir-vadas.blogspot.com/2010/02/how-to-find-correct-scn.html
http://damir-vadas.blogspot.com/2009/10/autonomous-rman-online-backup.html
http://dba.stackexchange.com/questions/56326/rman-list-archivelogs-that-are-needed-for-to-recover-specified-backup
https://blog.dbi-services.com/list-all-rman-backups-that-are-needed-to-recover/
https://www.pythian.com/blog/rman-infatuation/
https://oracleracdba1.wordpress.com/2012/10/22/how-to-checkvalidate-that-rman-backups-are-good/
http://reneantunez.blogspot.com/2012/09/rman-how-to-verify-i-have-consistant.html
How to determine minimum end point for recovery of an RMAN backup (Doc ID 1329415.1)
RMAN recover database fails RMAN-6025 - v$archived_log.next_change# is 281474976710655 (Doc ID 238422.1)
How to check for correct RMAN syntax [ID 427224.1]
{{{
CHECKSYNTAX can also check the syntax in a command file.

$ rman CHECKSYNTAX @filename
}}}

375386.1
http://www.oracleracexpert.com/2012/11/rman-debug-and-trace.html
{{{
RMAN Debug Command
$ rman target / debug trace rman.trc log rman.log
Or 
$ rman target / catalog xxx/xxxx@rmancat debug trace = /tmp/rman.trc log=/tmp/rman.log
}}}
Rolling a Standby Forward using an RMAN Incremental Backup To Fix The Nologging Changes [ID 958181.1]
ORA-26040:FLASHBACK DATABASE WITH NOLOGGING OBJECTS/ACTIVITIES RESULTS IN CORRUPTION [ID 554445.1]
http://www.idevelopment.info/data/Oracle/DBA_tips/Data_Guard/DG_53.shtml
http://jarneil.wordpress.com/2008/06/03/applying-an-incremental-backup-to-a-physical-standby/
http://web.njit.edu/info/oracle/DOC/backup.102/b14191/rcmdupdb008.htm

http://arup.blogspot.com/2009/12/resolving-gaps-in-data-guard-apply.html
https://shivanandarao-oracle.com/2012/03/26/roll-forward-physical-standby-database-using-rman-incremental-backup/
https://jhdba.wordpress.com/2013/03/18/rebuild-of-standby-using-incremental-backup-of-primary/
https://docs.oracle.com/cd/E11882_01/backup.112/e10643/rcmsynta007.htm#RCMRF107
<<<
You cannot specify PLUS ARCHIVELOG on the BACKUP ARCHIVELOG command or BACKUP AS COPY INCREMENTAL command (or BACKUP INCREMENTAL command when the default backup type is COPY). You cannot specify PLUS ARCHIVELOG when also specifying INCREMENTAL FROM SCN.

Unless the online redo log is archived after the backup, DUPLICATE is not possible with this backup.
<<<
-- RMAN incrementally updated backup to another machine
Use RMAN to relocate a 10TB RAC database with minimum downtime http://www.nyoug.org/Presentations/2011/September/Zuo_RMAN_to_Relocate.pdf


-- RMAN incrementally updated backup 
https://www.realdbamagic.com/moving-a-3tb-database-datafiles-with-only-2-minute-downtime/
RMAN Incremental Update Between Different Oracle Versions (Doc ID 2106949.1)
Incrementally Updated Backups Rolling Forward Image Copies Using RMAN https://oracle-base.com/articles/misc/incrementally-updated-image-copy-backups
Merged Incremental Backup Strategies (Doc ID 745798.1)
Moving User datafiles between ASM Diskgroups using Incrementally Updated Backups (Doc ID 1472959.1)
Incrementally Updated Backup In 10G and higher (Doc ID 303861.1)
Using Rman Incremental backups To Update Transportable Tablespaces. (Doc ID 831223.1)
RMAN Fast Incremental Backups using BCT = Block Change Tracking file (Doc ID 262853.1)
How Many Incremental Backups Can Be Taken When BCT Is Enabled ? (Doc ID 452455.1)
https://uhesse.com/2010/12/01/database-migration-to-asm-with-short-downtime/
alejandro vargas rman hands on http://static7.userland.com/oracle/gems/alejandroVargas/RmanHandsOn.pdf
RMAN Backup Strategy for 40TB Data Warehouse Database http://4dag.cronos.be/village/dvp_forum.OpenThread?ThreadIdA=39061


There are two ways of doing this:
* make the RETENTION POLICY longer
* make use of the KEEP option... but you can't do this inside the FRA (How to KEEP a backup created in the Flash Recovery Area (FRA)? [ID 401163.1])
** the workaround is to do the backup without KEEP, put it in a folder, and rename the folder


http://gavinsoorma.com/2010/04/rman-keep-forever-keep-until-time-and-force-commands/
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3560 DAYS; http://www.freelists.org/post/oracle-l/Rman-backup-keep-forever,3
Bug 5685815 : ALLOW A KEEP OPTION FOR BACKUPS CREATED IN THE FLASH RECOVERY AREA (FRA)
OERR: ORA-19811 cannot have files in DB_RECOVERY_FILE_DEST with keep attribute [ID 288177.1]
keepOption http://docs.oracle.com/cd/B28359_01/backup.111/b28273/rcmsubcl011.htm
RMAN-6764 - New 11g error during backup of Standby Database with the keep option [ID 1331072.1]




A different way of doing RMAN on DBFS..

http://www.appsdba.com/blog/?p=205
http://www.appsdba.com/blog/?p=302
http://download.oracle.com/docs/cd/E11882_01/appdev.112/e18294/adlob_hierarch.htm#g100

http://www.oracle.com/webfolder/technetwork/tutorials/obe/em/emgc10gr2/quick_start/jobs/creating_jobs.htm     <-- this is complete
http://www.oracle.com/technetwork/articles/havewala-rman-grid-089150.html     <-- this is complete
http://www.oracle.com/technetwork/articles/grid/havewala-gridcontrol-088685.html
http://technology.amis.nl/blog/2892/how-to-stop-running-rman-jobs-in-oem-grid-control


https://forums.oracle.com/forums/thread.jspa?threadID=2465428
http://enterprise-manager.blogspot.com/2008/05/rman-and-enterprise-manager.html
http://www.juvo.be/en/blog/scheduling-rman-backup-within-oem-12c-cloud-control
http://learnwithme11g.wordpress.com/2011/07/04/rman-duplication-from-tape-backups/
http://www.oracle.com/us/products/enterprise-manager/advanced-uses-em11g-wp-170683.pdf
''As a workaround you can use Virtual Tape drives''
{{{
So, I’ve got it working but I still don’t know what the problem is. The work around was to use Oracle’s pseudo tape device. I tried this partially out of desperation and partially from a hunch. I saw a posting that seemed to indicate that having the right locking daemons running could be a problem. Thinking that since a tape device can’t be shared, maybe RMAN wouldn’t do the same checks for lock management. Here’s the allocate command that seems to be working for me.
 
Still bugs me that I haven’t solved the real problem.
 
  # Allocate Channel(s)
  ALLOCATE CHANNEL SBT1 DEVICE TYPE SBT
   FORMAT '%d-%U' parms='SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/home/dbshare/orabackup/PROD1)';
 
  ALLOCATE CHANNEL SBT2 DEVICE TYPE SBT
   FORMAT '%d-%U' parms='SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/home/dbshare/orabackup/PROD1)';
 
  ALLOCATE CHANNEL SBT3 DEVICE TYPE SBT
   FORMAT '%d-%U' parms='SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/home/dbshare/orabackup/PROD1)';
 
  SET COMMAND ID TO 'RMANB_BS_FULL_HOT';
 
  # Execute Database Backup
  BACKUP FULL AS BACKUPSET
   DATABASE
   INCLUDE CURRENT CONTROLFILE
   TAG = 'DB_BS_FULL_HOT';
 
 
 
Can anyone lend a hand on this one?
 
I’m trying to backup a database using Rman and the target file system is NFS mounted. The problem I’m seeing is that Rman will start writing the first set of backup sets and then hang. For example with three backup channels it looks like this…
 
[eve:oracle:wfprd1] /home/dbshare/orabackup/PROD1
> ls -l
total 103472
-rw-rw----  1 oracle oinstall 45056 Apr 19 14:03 PROD1-ofma5vgg_1_1
-rw-rw----  1 oracle oinstall 45056 Apr 19 14:03 PROD1-ogma5vgg_1_1
-rw-rw----  1 oracle oinstall 45056 Apr 19 14:03 PROD1-ohma5vgh_1_1
 
It always hangs at the same byte count.
 
Here are the Linux servers involved.
  NFS Host            : Vortex
  NFS Client         : Eve
 
BTW, the tests I’ve run include:
 
1)      On Vortex, I’ve successfully run an Rman backup of a local test database to this file system (same directory I’ve exported).
2)      On Eve, I’ve successfully copied several large files (as the oracle user), to this NFS volume.
3)      On Eve, I’ve successfully exported a large chunk of the database onto this NFS mount.
 
Reading and writing to this NFS file system doesn’t appear to be a problem for the oracle user account. It appears to be an RMAN thing.
 
Here is how the file system is shared out on the host system:
 
[vortex:oracle:RACTST1] /home/oracle
> sudo su -
[root@vortex ~]# exportfs
/mnt/dbshare    10.0.0.36
/mnt/dbshare    192.168.192.20
 
[root@vortex ~]# cat /etc/exports
/mnt/dbshare    10.0.0.36(rw) 192.168.192.20(rw)
 
…and on the system I’ve mounted the NFS share I’ve tried all three of the options for Vortex below….
 
[eve:oracle:wfprd1] /home/oracle
> cat /etc/fstab
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/osvg/root          /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
/dev/osvg/crs           /crs                    ext3    defaults        1 2
…
#vortex:/mnt/dbshare     /home/dbshare           nfs     rw      0 0
#vortex:/mnt/dbshare     /home/dbshare           nfs     rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600        0 0
vortex:/mnt/dbshare      /home/dbshare           nfs     rw,rsize=32768,wsize=32768,hard,noac,addr=10.0.0.152 0 0
}}}
http://www.oracle.com/technetwork/articles/oem/maa-wp-10gr2-recoverybestpractices-131010.pdf
http://www.cmg.org/wp-content/uploads/2010/09/m_73_3.pdf
http://www.oracle.com/technetwork/database/availability/rman-perf-tuning-bp-452204.pdf
http://www.nyoug.org/Presentations/2010/December/Chien_RMAN.pdf
http://www.emc.com/collateral/software/white-papers/h6540-oracle-backup-recovery-perform-avamar-rman-wp.pdf
http://ww2.dbvisit.com/forums/showthread.php?p=2953
https://docs.oracle.com/cd/B28359_01/rac.111/b28254/backup.htm#i491018
http://blog.csdn.net/sweethouse1128/article/details/6846166
https://docs.oracle.com/cd/E11882_01/rac.112/e41960/backup.htm#RACAD890
https://docs.oracle.com/cd/E28223_01/html/E27586/configappl.html
https://levipereira.files.wordpress.com/2011/01/tuning_rman_buffer.pdf
https://docs.oracle.com/cd/E51475_01/html/E52872/integration__ssc__configure_appliance__tuning_the_oracle_database_instance_for.html











@@http://feeds.feedburner.com/KarlAraoTiddlyWiki@@

@@__''<<tiddler ToggleRightSidebar with: ">SEARCH<">>''__@@ ''<--'' click here to toggle on/off the search tab
Official Doc http://docs.oracle.com/cd/E26370_01/doc.121/e26360/toc.htm
RUEI installation http://docs.oracle.com/cd/E26370_01/doc.121/e26358/rueiinstalling.htm
http://www.orafaq.com/forum/t/144040/2/
http://oracleformsinfo.wordpress.com/2011/12/22/oracle-forms-11g-r-2-ruei-real-user-experience-insight-the-good-the-bad-and-the-ugly/
http://oracleformsinfo.wordpress.com/2011/12/30/oracle-ruei-for-oracle-forms-11g-r2-the-good-the-not-so-bad-and-less-ugly-than-before/
http://www.youtube.com/watch?v=904Gy7bYxQY

''Oracle Real User Experience Insight Best Practices Self-Study Series'' http://apex.oracle.com/pls/apex/f?p=44785:24:0:::24:P24_CONTENT_ID,P24_PREV_PAGE:6626,1
Real-World Performance Group Learning Library, URL here http://bit.ly/1xurTO8

Index: Real-World Performance Education videos
* Introduction to Real-World Performance
* RWP #1: Cursors and Connections
* RWP #2: Bad Performance with Logons
* RWP #3: Connection Pools and Hard Parse
* RWP #4: Bind Variables and Soft Parse
* RWP #5: Shared Cursors and One Parse
* RWP #6: Leaking Cursors
* RWP #7: Set Based Processing
* RWP #8: Set Based Parallel Processing
* RWP #9: Deduplication
* RWP #10: Transformation
* RWP #11: Aggregate
* RWP #12: Getting In Control
* RWP #13: Large Dynamic Connection Pools - Part 1
* RWP #14: Large Dynamic Connection Pools - Part 2
* RWP #15: Index Contention
* RWP #16: Classic Real World Performance
* RWP #17: Database Log Writer
* RWP #18: Large Linux Pages
* RWP #19: Architecture with an AWR Report


Connection Pool Sizing and SmartDB / Connection Pool Sizing Concepts - ToonKoppelaars
https://www.youtube.com/watch?v=eiydITTdDAQ


! official docs 
* search for "real-world" https://docs.oracle.com/search/?q=real-world&category=database&product=en%2Fdatabase%2Foracle%2Foracle-database%2F21
* db dev guide - 5 Designing Applications for Oracle Real-World Performance https://docs.oracle.com/en/database/oracle/oracle-database/21/adfns/rwp.html#GUID-754328E1-2203-4B03-A21B-A91C3E548233

https://www.ibm.com/developerworks/mydeveloperworks/blogs/aixpert/entry/raspberry_pi_thoughts_on_game_changing_technology324?lang=en
http://www.raspberrypi.org/about
http://www.raspberrypi.org/faqs

''where to buy''
http://www.alliedelec.com/lp/120626raso/?cm_mmc=Offline-Referral-_-Electronics-_-RaspberryPi-201203-_-World-Selector-Page
http://www.farnell.com/


http://www.engadget.com/2012/09/04/raspberry-pi-getting-started-guide-how-to/
http://www.aonsquared.co.uk/raspi_voice_control


http://blogs.oracle.com/warehousebuilder/2010/07/owb_11gr2_the_right_time_with_goldengate.html
https://docs.google.com/viewer?url=http://www.oracle.com/us/products/middleware/data-integration/goldengate11g-ds-168062.pdf
https://docs.google.com/viewer?url=http://www.imamu.edu.sa/topics/Slides/Data%2520Warehousing%25202/Experiences%2520with%2520Real-Time%2520Data%2520Warehousing%2520Using%2520Oracle%2520Database%252010G.ppt
http://it.toolbox.com/blogs/oracle-guide/ralph-kimball-realtime-data-warehouse-design-challenges-6359
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/middleware/data-integrator/overview/best-practices-for-realtime-data-wa-132882.pdf
https://docs.google.com/viewer?url=http://i.zdnet.com/whitepapers/Quest_Offload_Reporting_To_Improve_Oracle_Database_Performance.pdf
http://www.rittmanmead.com/2007/10/05/five-oracle-bi-trends-for-the-future/
http://www.rittmanmead.com/2010/04/08/realtime-data-warehouses/
http://www.rittmanmead.com/2010/05/27/realtime-data-warehouse-challenges-part-1/
http://www.rittmanmead.com/2010/06/27/realtime-data-warehouse-challenges-%E2%80%93-part-2/
http://www.rittmanmead.com/2010/05/06/realtime-data-warehouse-loading/
http://dssresources.com/papers/features/langseth/langseth02082004.html
https://docs.google.com/viewer?url=http://www.oracle.com/us/products/middleware/data-integration/odi-ee-11g-ds-168065.pdf
https://docs.google.com/viewer?url=http://www.oracle.com/us/products/middleware/data-integration/odi11g-newfeatures-wp-168152.pdf
https://docs.google.com/viewer?url=http://www.oracle.com/technetwork/middleware/data-integrator/overview/odiee-km-for-oracle-goldengate-133579.pdf
<<showtoc>>


! ''Monitoring SQL statements with Real-Time SQL Monitoring [ID 1380492.1]''
http://structureddata.org/2008/01/06/oracle-11g-real-time-sql-monitoring-using-dbms_sqltunereport_sql_monitor 
{{{
If you want to get a SQL Monitor report for a statement you just ran in your session (similar to dbms_xplan.display_cursor) then use this command:

set pagesize 0 echo off timing off linesize 1000 trimspool on trim on long 2000000 longchunksize 2000000
select DBMS_SQLTUNE.REPORT_SQL_MONITOR(
   session_id=>sys_context('userenv','sid'),
   report_level=>'ALL') as report
from dual;

Or if you want to generate the EM Active SQL Monitor Report (my recommendation) from any SQL_ID you can use:

set pagesize 0 echo off timing off linesize 1000 trimspool on trim on long 2000000 longchunksize 2000000 feedback off
spool sqlmon_4vbqtp97hwqk8.html
select dbms_sqltune.report_sql_monitor(report_level=>'+histogram', type=>'EM', sql_id=>'4vbqtp97hwqk8') monitor_report from dual;
spool off
}}}

! ''hint''
{{{
/*+ MONITOR */
}}}

! ''spool from SQL Developer''
{{{
--select * from v$sql where sql_fulltext like '%&txt%' order by last_load_time;
SET TERMOUT OFF
SET verify off;
 
 
spool C:\Users\Administrator\Documents\sql_stats\fe_stage2a.html;
 
select
  dbms_sql_monitor.report_sql_monitor(sql_id => '&sql_id',
                                      type => decode(upper('&&ptype'),'A', 'ACTIVE', 'H' , 'HTML', 'TEXT'), --'TEXT'  'HTML'  'ACTIVE'
                                      report_level => 'ALL') as report
FROM DUAL;
SPOOL OFF;
}}}


! using SQLD360 - all SQL monitor report types in one shot 
https://github.com/karlarao/report_sql_monitor
run with 
{{{
@sqld360 <SQL_ID> T 
}}}




! references 
<<<
http://www.oracle.com/technetwork/database/focus-areas/manageability/sqlmonitor-084401.html?ssSourceSiteId=otncn  <-- SQL Monitor FAQ

http://oracledoug.com/serendipity/index.php?/archives/1506-Real-Time-SQL-Monitoring-in-SQL-Developer.html <-- SQL Developer
http://oracledoug.com/serendipity/index.php?/archives/1642-Real-Time-SQL-Monitoring-Statement-Not-Appearing.html <-- hidden parameter to increase the lines - statement not appearing
http://oracledoug.com/serendipity/index.php?%2Farchives%2F1646-Real-Time-SQL-Monitoring-Retention.html <-- retention
http://structureddata.org/2011/08/28/reading-active-sql-monitor-reports-offline/  <-- ''offline view'' of sql monitor reports


http://www.oracle-base.com/blog/2011/03/22/real-time-sql-monitoring-update/
http://blog.aristadba.com/?tag=real-time-sql-monitoring
http://www.pythian.com/news/582/tuning-pack-11g-real-time-sql-monitoring/

http://joze-senegacnik.blogspot.com/2009/12/vsqlmonitor-and-vsqlplanmonitor.html <-- V$SQL_MONITOR and V$SQL_PLAN_MONITOR
<<<




{{{

-- viewing waits system wide (top 5 by time waited; the rownum filter
-- must be applied after the ORDER BY, hence the inline view)
col event format a46
col seconds format 999,999,990.00
col calls format 999,999,990
select * from (
  select a.event,
         a.time_waited,
         a.total_waits calls,
         a.time_waited/a.total_waits average_wait,
         sysdate - b.startup_time days_old
  from   v$system_event a, v$instance b
  order by a.time_waited desc
)
where rownum < 6;


-- viewing waits on a session
select
  e.event, e.time_waited
from
  v$session_event  e
where
  e.sid = 12
union all
select
  n.name,
  s.value
from
  v$statname  n,
  v$sesstat  s
where
  s.sid = 12
and n.statistic# = s.statistic# 
and n.name = 'CPU used by this session'
order by
  2 desc
/


-- sesstat
select a.sid, b.name, a.value
from v$sesstat a, v$statname b
where a.statistic# = b.statistic#
and a.value > 0
and a.sid = 12;


-- kill sessions 
-- select /* usercheck */ 'alter system disconnect session '''||s.sid||','||s.serial#||''''||' post_transaction;'
select /* usercheck */ 'alter system disconnect session '''||s.sid||','||s.serial#||''''||' immediate;'
from v$process p, v$session s, v$sqlarea sa
where p.addr=s.paddr
and   s.username is not null
and   s.sql_address=sa.address(+)
and   s.sql_hash_value=sa.hash_value(+)
and   sa.sql_text NOT LIKE '%usercheck%'
-- and   upper(sa.sql_text) LIKE '%CP_IINFO_DAILY_RECON_PKG.USP_DAILYCHANGEFUND%'
-- and   s.sid = 178
and s.sql_id = '&sql_id'
-- and sid in (1404,1023,520,389,645)
-- and   s.username = 'APAC'
 -- and sa.plan_hash_value = 3152625234
order by status desc;

-- quicker kill sessions
select /* usercheck */ 'alter system disconnect session '''||s.sid||','||s.serial#||''''||' immediate;'
from v$session s
where s.sql_id = '&sql_id';


-- purge SQL_ID on shared pool

var name varchar2(50)
BEGIN
	select /* usercheck */ sa.address||','||sa.hash_value into :name
	from v$process p, v$session s, v$sqlarea sa
	where p.addr=s.paddr
	and   s.username is not null
	and   s.sql_address=sa.address(+)
	and   s.sql_hash_value=sa.hash_value(+)
	and   sa.sql_text NOT LIKE '%usercheck%'
	-- and   upper(sa.sql_text) LIKE '%CP_IINFO_DAILY_RECON_PKG.USP_DAILYCHANGEFUND%'
	 and   s.sid = 176
	-- and   s.username = 'APAC'
	order by status desc;

dbms_shared_pool.purge(:name,'C',1);
END;
/



-- show all users
-- on windows to kill do.. orakill <instance_name> <spid>
set lines 32767
col terminal format a4
col machine format a4
col os_login format a4
col oracle_login format a4
col osuser format a4
col module format a5
col program format a8
col schemaname format a5
-- col state format a8
col client_info format a5
col status format a4
col sid format 99999
col serial# format 99999
col unix_pid format a8
col txt format a50
col action format a8
select /* usercheck */ s.INST_ID, s.terminal terminal, s.machine machine, p.username os_login, s.username oracle_login, s.osuser osuser, s.module, s.action, s.program, s.schemaname,
	s.state,
	s.client_info, s.status status, s.sid sid, s.serial# serial#, lpad(p.spid,7) unix_pid, -- s.sql_hash_value, 
	sa.plan_hash_value,	-- remove in 817, 9i
	s.sql_id, 		-- remove in 817, 9i
	substr(sa.sql_text,1,1000) txt
from gv$process p, gv$session s, gv$sqlarea sa
where p.addr=s.paddr
and   s.username is not null
and   s.sql_address=sa.address(+)
and   s.sql_hash_value=sa.hash_value(+)
and   sa.sql_text NOT LIKE '%usercheck%'
-- and   lower(sa.sql_text) LIKE '%grant%'
-- and s.username = 'APAC'
-- and s.schemaname = 'SYSADM'
-- and lower(s.program) like '%uscdcmta21%'
-- and s.sid=12
-- and p.spid  = 14967
-- and s.sql_hash_value = 3963449097
-- and s.sql_id = '5p6a4cpc38qg3'
-- and lower(s.client_info) like '%10036368%'
-- and s.module like 'PSNVS%'
-- and s.program like 'PSNVS%'
order by status desc;


-- find running jobs
set linesize 250
col sid            for 9999     head 'Session|ID'
col spid                        head 'O/S|Process|ID'
col serial#        for 9999999  head 'Session|Serial#'
col log_user       for a10
col job            for 9999999  head 'Job'
col broken         for a1       head 'B'
col failures       for 99       head "fail"
col last_date      for a18      head 'Last|Date'
col this_date      for a18      head 'This|Date'
col next_date      for a18      head 'Next|Date'
col interval       for 9999.000 head 'Run|Interval'
col what           for a60
select j.sid,
s.spid,
s.serial#,
       j.log_user,
       j.job,
       j.broken,
       j.failures,
       j.last_date||':'||j.last_sec last_date,
       j.this_date||':'||j.this_sec this_date,
       j.next_date||':'||j.next_sec next_date,
       j.next_date - j.last_date interval,
       j.what
from (select djr.SID, 
             dj.LOG_USER, dj.JOB, dj.BROKEN, dj.FAILURES, 
             dj.LAST_DATE, dj.LAST_SEC, dj.THIS_DATE, dj.THIS_SEC, 
             dj.NEXT_DATE, dj.NEXT_SEC, dj.INTERVAL, dj.WHAT
        from dba_jobs dj, dba_jobs_running djr
       where dj.job = djr.job ) j,
     (select p.spid, s.sid, s.serial#
          from v$process p, v$session s
         where p.addr  = s.paddr ) s
where j.sid = s.sid;



-- find where a system is stuck
break on report
compute sum of sessions on report
select event, count(*) sessions from v$session_wait
where state='WAITING'
group by event
order by 2 desc;


-- find the session state
select event, state, count(*) from v$session_wait group by event, state order by 3 desc;


-- when a user calls up, show the wait events for that session since it started
select max(total_waits), event, sid from v$session_event   
where sid = 12
group by sid, event
order by 1 desc;

-- You can easily discover which session has high TIME_WAITED on db file sequential read or other wait events
select a.sid,
       a.event,
       a.time_waited,
       a.time_waited / c.sum_time_waited * 100 pct_wait_time,
       round((sysdate - b.logon_time) * 24) hours_connected
from   v$session_event a, v$session b,
      (select sid, sum(time_waited) sum_time_waited
       from   v$session_event
       where  event not in (
                   'Null event',
                   'client message',
                   'KXFX: Execution Message Dequeue - Slave',
                   'PX Deq: Execution Msg',
                   'KXFQ: kxfqdeq - normal deqeue',
                   'PX Deq: Table Q Normal',
                   'Wait for credit - send blocked',
                   'PX Deq Credit: send blkd',
                   'Wait for credit - need buffer to send',
                   'PX Deq Credit: need buffer',
                   'Wait for credit - free buffer',
                   'PX Deq Credit: free buffer',
                   'parallel query dequeue wait',
                   'PX Deque wait',
                   'Parallel Query Idle Wait - Slaves',
                   'PX Idle Wait',
                   'slave wait',
                   'dispatcher timer',
                   'virtual circuit status',
                   'pipe get',
                   'rdbms ipc message',
                   'rdbms ipc reply',
                   'pmon timer',
                   'smon timer',
                   'PL/SQL lock timer',
                   'SQL*Net message from client',
                   'WMON goes to sleep')
       having sum(time_waited) > 0 group by sid) c
where a.sid = b.sid
and   a.sid = c.sid
and   a.time_waited > 0
-- and   a.event = 'db file sequential read'
order by hours_connected desc, pct_wait_time;


-- show all users RAC
select s.inst_id instance_id,
       s.failover_type failover_type,
       s.FAILOVER_METHOD failover_method,
       s.FAILED_OVER failed_over,
       p.username os_login,                  
       s.username oracle_login,
       s.status status,
       s.sid oracle_session_id,
       s.serial# oracle_serial_no,
       lpad(p.spid,7) unix_process_id,
       s.machine, s.terminal, s.osuser,
       substr(sa.sql_text,1,540) txt
from gv$process p, gv$session s, gv$sqlarea sa
where p.addr=s.paddr
and   s.username is not null
and   s.sql_address=sa.address(+)
and   s.sql_hash_value=sa.hash_value(+)
--and s.sid=48
--and p.spid  
order by 3;


-- this is for RAC TAF, fewer columns
col oracle_login format a10
col instance_id format 99
col sidserial format a8
select s.inst_id instance_id,
       s.failover_type failover_type,
       s.FAILOVER_METHOD failover_method,
       s.FAILED_OVER failed_over,               
       s.username oracle_login,
       s.status status,
       concat (s.sid,s.serial#) sidserial,
	substr(sa.sql_text,1,15) txt
from gv$process p, gv$session s, gv$sqlarea sa
where p.addr=s.paddr
and   s.username is not null
and   s.type = 'USER'
and   s.username = 'ORACLE'
and   s.sql_address=sa.address(+)
and   s.sql_hash_value=sa.hash_value(+)
--and s.sid=48
--and p.spid  
order by 6;


-- show open cursors
col txt format a100
select sid, hash_value, substr(sql_text,1,1000) txt from v$open_cursor where sid = 12;


-- show running cursors
select   nvl(USERNAME,'ORACLE PROC'), s.SID, s.sql_hash_value, SQL_TEXT
from     sys.v_$open_cursor oc, sys.v_$session s
where    s.SQL_ADDRESS = oc.ADDRESS
and      s.SQL_HASH_VALUE = oc.HASH_VALUE
and s.sid = 12
order by USERNAME, s.SID;


-- Get recent snapshot
select instance_number, to_char(startup_time, 'DD-MON-YY HH24:MI:SS') startup_time, to_char(begin_interval_time, 'DD-MON-YY HH24:MI:SS') begin_interval_tim, snap_id 
from DBA_HIST_SNAPSHOT 
order by snap_id;


-- Finding top expensive SQL in the workload repository, get snap_ids first
select * from (
select a.sql_id as sql_id, sum(elapsed_time_delta)/1000000 as elapsed_time_in_sec,
	      (select x.sql_text
	      from dba_hist_sqltext x
	      where x.dbid = a.dbid and x.sql_id = a.sql_id) as sql_text
from dba_hist_sqlstat a, dba_hist_sqltext b
where a.sql_id = b.sql_id and
a.dbid   = b.dbid
and a.snap_id between 710 and 728
group by a.dbid, a.sql_id
order by elapsed_time_in_sec desc
) where ROWNUM < 2
/



-- Only valid for 10g Release 2 onwards: finding the top 10 expensive SQL in the cursor cache by elapsed time
select * from (
select sql_id, elapsed_time/1000000 as elapsed_time_in_sec, substr(sql_text,1,80) as sql_text
from   v$sqlstats
order by elapsed_time_in_sec desc
) where rownum < 11
/


-- get hash value statistics. The query sorts its output by the number of LIO calls executed per row returned, a rough measure of statement efficiency. For example, the following output should bring to mind the question: "Why should an application require more than 174 million memory accesses to compute 5 rows?"
col stmtid      heading 'Stmt Id'               format    9999999999
col dr          heading 'PIO blks'              format   999,999,999
col bg          heading 'LIOs'                  format   999,999,999,999
col sr          heading 'Sorts'                 format       999,999
col exe         heading 'Runs'                  format   999,999,999,999
col rp          heading 'Rows'                  format 9,999,999,999
col rpr         heading 'LIOs|per Row'          format   999,999,999,999
col rpe         heading 'LIOs|per Run'          format   999,999,999,999
select  hash_value stmtid
       ,sum(disk_reads) dr
       ,sum(buffer_gets) bg
       ,sum(rows_processed) rp
       ,sum(buffer_gets)/greatest(sum(rows_processed),1) rpr
       ,sum(executions) exe
       ,sum(buffer_gets)/greatest(sum(executions),1) rpe
 from v$sql
where command_type in ( 2,3,6,7 )
and hash_value in (2023740151)
-- and rownum < 20
group by hash_value
order by 5 desc;


-- check block gets of a session
col block_gets format 999,999,999,990
col consistent_gets format 999,999,999,990
select to_char(sysdate, 'hh:mi:ss') "time", physical_reads, block_gets, consistent_gets, block_changes, consistent_changes
from v$sess_io 
where sid=681;


-- show SQL in shared SQL area, get hash value
    SELECT /* example */ substr(sql_text, 1, 80) sql_text,
           sql_id, 
	    hash_value, address, child_number, plan_hash_value, FIRST_LOAD_TIME
      FROM v$sql
     WHERE 
	--sql_id = '6wps6tju5b8tq'
	-- hash_value = 1481129178
	upper(sql_text) LIKE '%INSERT INTO PS_CBLA_RET_TMP SELECT CB_BUS_UN%'
       AND sql_text NOT LIKE '%example%' 
      order by first_load_time; 


-- show SQL hash
col txt format a1000
select 
       	sa.hash_value, sa.sql_id,
		substr(sa.sql_text,1,1000) txt
from v$sqlarea sa
where 
sa.hash_value = 517092776
--ADDRESS = '2EBC7854'
 --sql_id = 'gz5bfrcjq060u'
/


-- show full sql text of the transaction
col sql_text format a1000
set heading off
select sql_text from v$sqltext                
where  HASH_VALUE = 1481129178
-- where sql_id = 'a5xnahpb62cvq'
order by piece;
set heading on

-- get sql_text
set long 50000 pagesize 0 echo off lines 5000 longchunksize 5000 trimspool on
select sql_fulltext from v$sql where sql_id = 'a5xnahpb62cvq';


/*
The trace doesn't contain the SQLID as such but the hash value. 
In this case Hash=61d72ac6. 
Translate this to decimal and query v$sqlarea where hash_value=#
( if the hash value is still in the v$sqlarea )
/u01/app/oracle/diag/rdbms/biprddal/biprd1/incident/incdir_65625/biprd1_dia0_19054_i65625.trc
~~~~~~~~~~
....
LibraryHandle: Address=0x1ff1434c8 Hash=61d72ac6 LockMode=N PinMode=0 LoadLockMode=0 Status=VALD 
ObjectName: Name=UPDATE WC_BUDGET_BALANCE_A_TMP A SET 
*/
select sql_text from dba_hist_sqltext where sql_id in (select sql_id from DBA_HIST_SQLSTAT where plan_hash_value = 1641491142);
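-- the hex-to-decimal translation described above can be done in SQL itself;
-- sketch: TO_NUMBER with the X format mask converts the trace's Hash=61d72ac6 to decimal
select to_number('61d72ac6', 'xxxxxxxx') hash_value from dual;
-- then: select sql_text from v$sqlarea where hash_value = <result>;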


-- get sql id and hash value, convert hash to, sqlid to hash, sql_id to hash, h2s
col sql_text format a1000
select substr(sql_text, 1,30), sql_id, hash_value from v$sqltext                
  where  
  HASH_VALUE = 1312665718
  -- sql_id = '048znjmq3uvs9'
and rownum < 2;


-- show full sql text of the transaction, of the top sqls in awr
col sql_text format a1000
set heading off
select ''|| sql_id || ' '|| hash_value || ' ' || sql_text || '' from v$sqltext                
-- where  HASH_VALUE = 1481129178
where sql_id in (select distinct a.sql_id as sql_id
		from dba_hist_sqlstat a, dba_hist_sqltext b
		where a.sql_id = b.sql_id and
		a.dbid   = b.dbid
		and a.snap_id between 710 and 728)
order by sql_id, piece;
set heading on



-- query SQL in ASH
set lines 3000
select substr(sa.sql_text,1,500) txt, a.sample_id, a.sample_time, a.session_id, a.session_serial#, a.user_id, a.sql_id,
       a.sql_child_number, a.sql_plan_hash_value, 
       a.sql_opcode, a.plsql_object_id, a.service_hash, a.session_type,
       a.session_state, a.qc_session_id, a.blocking_session,
       a.blocking_session_status, a.blocking_session_serial#, a.event, a.event_id,
       a.seq#, a.p1, a.p2, a.p3, a.wait_class,
       a.wait_time, a.time_waited, a.program, a.module, a.action, a.client_id
from gv$active_session_history a, gv$sqltext sa 
where a.sql_id = sa.sql_id
-- and session_id = 126
/


/* -- weird scenario, when I'm looking for TRUNCATE statement, I can see it in V$SQLTEXT
-- and I can't see it on V$SQLAREA and V$SQL
select * from v$sqltext where upper(sql_text) like '%TRUNCATE%TEST3%';

select * from v$sqlarea 
where sql_id = 'dfwz4grz83d6a'
and upper(sql_text) like '%TRUNCATE%';

select * from v$sql 
where sql_id = 'dfwz4grz83d6a'
and upper(sql_text) like '%TRUNCATE%'; 

from oracle-l:

Checking V$FIXED_VIEW_DEFINITION, you can see that V$SQLAREA is based off of 
x$kglcursor_child_sqlid, V$SQL is off x$kglcursor_child, and V$SQLTEXT is off 
x$kglna.  I may be way off on this, but I believe pure DDL is not a cursor, 
which is why it won't be found in X$ cursor tables.  Check with a CTAS vs. a 
plain CREATE TABLE ... (field ...).  CTAS uses a cursor and would be found in 
all the X$ sql tables.  A plain CREATE TABLE won't.
*/




-- query long operations
set lines 200
col opname format a35
col target format a10
col units format a10
select * from (
			select 
			sid, serial#, sql_id,
			opname, target, sofar, totalwork, round(sofar/totalwork, 4)*100 pct, units, elapsed_seconds, time_remaining time_remaining_sec, round(time_remaining/60,2) min
			,sql_hash_value
		-- 	,message
			from v$session_longops 
			WHERE sofar < totalwork
			order by start_time desc);


-- query session waits
set lines 300
col program format a23
col event format a18
col seconds format 99,999,990
col state format a17
select w.sid, s.sql_hash_value, s.program, w.event, w.wait_time/100 t, w.seconds_in_wait seconds_in_wait, w.state, w.p1, w.p2, w.p3
from v$session s, v$session_wait w
where s.sid = w.sid and s.type = 'USER'
and s.sid = 37
-- and s.sql_hash_value = 1789726554
-- and s.sid = w.sid and s.type = 'BACKGROUND'
and w.state = 'WAITING'
order by 6 asc;



-- show actual transaction start time, and exact object
SELECT s.saddr, s.SQL_ADDRESS, s.sql_hash_value, t.START_TIME, t.STATUS, s.lockwait, s.row_wait_obj#, row_wait_file#, s.row_wait_block#, s.row_wait_row#
--, s.blocking_session    
FROM   v$session s, v$transaction t
WHERE  s.saddr = t.ses_addr
and s.sid = 12;

-- search for the object
  select owner, object_name, object_type              
  from dba_objects
  where object_id = 73524;

  SELECT owner,segment_name,segment_type
  FROM   dba_extents
  WHERE  file_id = 32
  AND 238305
  BETWEEN block_id AND block_id + blocks - 1;


-- open transactions 
set lines 199 pages 100
col object_name for a30
COL iid for 999
col usn for 9999
col slot for 9999
col ublk for 99999
col uname for a15
col sid for 9999
col ser# for 9999999
col start_scn for 99999999999999
col osuser for a20

select * from (
select v.inst_id iid, v.XIDUSN usn, v.XIDSLOT slot, v.XIDSQN ,v. START_TIME, v.start_scn,  v.USED_UBLK ublk, o.oracle_username uname,s.sid sid,s.serial# ser#, s.osuser, o.object_id oid ,d.object_name 
from gv$transaction v, gv$locked_object o, dba_objects d, gv$session s 
where  v.XIDUSN = o.XIDUSN and v.xidslot=o.xidslot and v.xidsqn=o.xidsqn and o.object_id = d.object_id and v.addr = s.taddr order by 6,1,11,12,13) where rownum < 26;



-- search for the object in the buffer cache
select b.sid,
       nvl(substr(a.object_name,1,30),
                  'P1='||b.p1||' P2='||b.p2||' P3='||b.p3) object_name,
       a.subobject_name,
       a.object_type
from   dba_objects a, v$session_wait b, x$bh c
where  c.obj   = a.object_id(+)
and    b.p1    = c.file#(+)
and    b.p2    = c.dbablk(+)
-- and    b.event = 'db file sequential read'
union
select b.sid,
       nvl(substr(a.object_name,1,30),
                  'P1='||b.p1||' P2='||b.p2||' P3='||b.p3) object_name,
       a.subobject_name,
       a.object_type
from   dba_objects a, v$session_wait b, x$bh c
where  c.obj   = a.data_object_id(+)
and    b.p1    = c.file#(+)
and    b.p2    = c.dbablk(+)
-- and    b.event = 'db file sequential read'
order by 1;

-- if there are locks, show the locks that are being waited on in the system
select sid, type, id1, id2, lmode, request, ctime, block
  from v$lock
 where request>0;
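
-- a common follow-up (sketch): self-join v$lock to pair each waiter with its blocker
select l1.sid blocker_sid, l2.sid waiter_sid, l2.type, l2.id1, l2.id2
  from v$lock l1, v$lock l2
 where l1.block = 1
   and l2.request > 0
   and l1.id1 = l2.id1
   and l1.id2 = l2.id2;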



-- per session pga
BREAK ON REPORT
COMPUTE SUM OF alme ON REPORT 
COMPUTE SUM OF mame ON REPORT 
COLUMN alme     HEADING "Allocated MB" FORMAT 99999D9
COLUMN usme     HEADING "Used MB"      FORMAT 99999D9
COLUMN frme     HEADING "Freeable MB"  FORMAT 99999D9
COLUMN mame     HEADING "Max MB"       FORMAT 99999D9
COLUMN username                        FORMAT a15
COLUMN program                         FORMAT a22
COLUMN sid                             FORMAT a5
COLUMN spid                            FORMAT a8
set pages 3000
SET LINESIZE 3000
set echo off
set feedback off
alter session set nls_date_format='yy-mm-dd hh24:mi:ss';

SELECT sysdate, s.username, SUBSTR(s.sid,1,5) sid, p.spid, logon_time,
       SUBSTR(s.program,1,22) program , s.process pid_remote,
       ROUND(pga_used_mem/1024/1024) usme,
       ROUND(pga_alloc_mem/1024/1024) alme,
       ROUND(pga_freeable_mem/1024/1024) frme,
       ROUND(pga_max_mem/1024/1024) mame,
       decode(a.IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,'No','Yes') Offload,
       s.sql_id
FROM  v$session s,v$process p, v$sql a
WHERE s.paddr=p.addr
and s.sql_id=a.sql_id
ORDER BY pga_max_mem, logon_time;

-- pga breakdown
SELECT pid, category, allocated, used, max_allocated
  FROM   v$process_memory
 WHERE  pid = (SELECT pid
                 FROM   v$process
                WHERE  addr= (select paddr
                                FROM   v$session
                               WHERE  sid = &sid));




-- UNDO
/* Shows active (in progress) transactions -- feed the db_block_size to multiply with t.used_ublk */
/* select value from v$parameter where name = 'db_block_size'; */
select sid, serial#,s.status,username, terminal, osuser,
       t.start_time, r.name, (t.used_ublk*8192)/1024 USED_kb, t.used_ublk "ROLLB BLKS",
       decode(t.space, 'YES', 'SPACE TX',
          decode(t.recursive, 'YES', 'RECURSIVE TX',
             decode(t.noundo, 'YES', 'NO UNDO TX', t.status)
       )) status
from sys.v_$transaction t, sys.v_$rollname r, sys.v_$session s
where t.xidusn = r.usn
  and t.ses_addr = s.saddr;


-- TEMP, show user currently using space in temp space 
select   se.username
        ,se.sid
        ,se.serial#
        ,su.extents
        ,su.blocks * to_number(rtrim(p.value))/1024/1024 as Space
        ,tablespace
        ,segtype
from     v$sort_usage su
        ,v$parameter  p
        ,v$session    se
where    p.name          = 'db_block_size'
and      su.session_addr = se.saddr
order by se.username, se.sid;



-- To report the info on temp usage used...
select swa.sid, vs.process, vs.osuser, vs.machine,vst.sql_text, vs.sql_id "Session SQL_ID",
swa.sql_id "Active SQL_ID", trunc(swa.tempseg_size/1024/1024)"TEMP TOTAL MB"
from v$sql_workarea_active swa, v$session vs, v$sqltext vst
where swa.sid=vs.sid
and vs.sql_id=vst.sql_id
and piece=0
and swa.tempseg_size is not null
order by "TEMP TOTAL MB" desc;


-- a quick TEMP script for threshold
echo "TEMP_Threshold: $TMP_THRSHLD"
sqlplus -s << EOF | read GET_TMP   # ksh-style; in bash "| read" runs in a subshell
/ as sysdba
set head off
set pagesize 0
select sum(trunc(swa.tempseg_size/1024/1024))"TEMP TOTAL MB"
from v\$sql_workarea_active swa;
EOF



 -- Oracle also provides single-block read statistics for every database file in the V$FILESTAT view. The file-level single-block average wait time can be calculated by dividing SINGLEBLKRDTIM by SINGLEBLKRDS, as shown next. (SINGLEBLKRDTIM is in centiseconds.) You can quickly discover which files have unacceptable average wait times and begin to investigate the mount points or devices and ensure that they are exclusive to the database

select a.file#,
       b.file_name,
       a.singleblkrds,
       a.singleblkrdtim,
       a.singleblkrdtim/a.singleblkrds average_wait
from   v$filestat a, dba_data_files b
where  a.file# = b.file_id
and    a.singleblkrds > 0
order by average_wait;


--------------------
-- BUFFER CACHE
--------------------

/* This dynamic view has an entry for each block in the database buffer cache. The possible statuses are:
free : available block; it may contain data but is not currently in use
xcur : block held exclusively by this instance
scur : block held in cache, shared with other instances
cr   : block for consistent read
read : block being read from disk
mrec : block in media recovery mode
irec : block in instance (crash) recovery mode
To investigate the buffer cache you can use the following script: */
SELECT count(*), db.object_name, tb.name
    FROM v$bh bh, dba_objects db, v$tablespace tb
    WHERE bh.objd = db.object_id
    AND bh.TS# = TB.TS#
    AND db.owner NOT IN ('SYS', 'SYSTEM')
GROUP BY db.object_name, bh.TS#, tb.name
ORDER BY 1 ASC;


-- get block
select block#,file#,status from v$bh where objd = 46186;


-- get touch count
select tch, file#, dbablk,
       case when obj = 4294967295
            then 'rbs/compat segment'
            else (select max( '('||object_type||') ' ||
                              owner || '.' || object_name  ) ||
                         decode( count(*), 1, '', ' maybe!' )
                    from dba_objects
                   where data_object_id = X.OBJ )
        end what
  from (
select tch, file#, dbablk, obj
  from x$bh
 where state <> 0
 order by tch desc
       ) x
 where rownum <= 5
/

--shows touch count for tables/indexes. Use to determine tables/indexes to keep
select decode(s.buffer_pool_id,0,'DEFAULT',1,'KEEP',2,'RECYCLE') buffer_pool,
s.owner, s.segment_name, s.segment_type,count(bh.obj) blocks, round(avg(bh.tch),2) avg_use, max(bh.tch) max_use 
from sys_dba_segs s, X$BH bh where s.segment_objd = bh.obj 
group by decode(s.buffer_pool_id,0,'DEFAULT',1,'KEEP',2,'RECYCLE'), s.segment_name, s.segment_type, s.owner 
order by decode(s.buffer_pool_id,0,'DEFAULT',1,'KEEP',2,'RECYCLE'), count(bh.obj) desc,
round(avg(bh.tch),2) desc, max(bh.tch) desc;
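
-- once candidates are identified, a segment can be assigned to the KEEP pool
-- (sketch; SCOTT.EMP is a hypothetical example, and db_keep_cache_size must be sized first)
alter table scott.emp storage (buffer_pool keep);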


}}}
http://jonathanlewis.wordpress.com/2010/08/24/index-rebuilds-2/

http://blog.tanelpoder.com/2007/06/23/a-gotcha-with-parallel-index-builds-parallel-degree-and-query-plans/
https://community.oracle.com/thread/2231622?tstart=0
https://asktom.oracle.com/pls/asktom/f%3Fp%3D100:11:0::::P11_QUESTION_ID:1412203938893
http://myracle.wordpress.com/2008/01/11/recover-database-without-control-files-and-redo-log-files/
http://blog.ronnyegner-consulting.de/2010/11/03/how-to-restore-an-rman-backup-without-any-existing-control-files/
http://www.freelists.org/post/oracle-l/high-recursive-cpu-usage
Metalink OPDG - recursive CPU
A test environment, no backup, noarchivelog mode.. and won't open because: 

1) current redo log has a corrupted block
2) when it was opened, the undo tablespace also had a corrupted block


See the details here:
http://forums.oracle.com/forums/message.jspa?messageID=4232743#4232743

{{{
Hi Yas,

I tried what user "user583761" suggested. It worked for me.

My scenario was this... (BTW this is a critical test environment)

1) Due to some movements (by the storage engineer) on the SAN storage I got a corrupted current online redo log, so the database is looking for a change on that current redo log to sync the other datafiles and to be able to open the database

2) I have no backup, the database is in "noarchivelog mode"

3) I have to get rid of that "current redo log". So I just followed the steps here http://oracle-abc.wikidot.com/recovery-from-current-redolog-corruption
I did the following steps:
- set the parameter _allow_resetlogs_corruption=TRUE
- "recover database until cancel"
- "cancel"
- alter database open resetlogs

Then my session timed out, so I bounced the database and it opened. Then after a while the instance was terminated again.

4) I checked on the alert log, and found out that Oracle is trying to rollback some transactions in UNDO tablespace and there was a corrupted block on that tablespace. (so.. this is entirely another problem..)

5) I have to get rid of that UNDOTBS1. So I followed the steps mentioned here to set the UNDO_MANAGEMENT=MANUAL
http://dbaforums.org/oracle/index.php?showtopic=1062

6) It opened!

7) Created a new UNDOTBS2, alter the parameters again and removed the hidden parameters, then bounced the database.

8) Executed an incremental level 0 RMAN backup.
And since this is just a test environment (although critical) we will have the test engineers check the database first, but it is certain that we have to rebuild the database.


}}}



http://msdn.microsoft.com/en-us/library/bb211408(v=office.12).aspx
{{{
Sub ClearRanges()
    Worksheets("Sheet1").Range("C5:D9,G9:H16,B14:D18"). _
        ClearContents
End Sub
}}}

http://www.yogeshguptaonline.com/2009/05/macros-in-excel-selecting-multiple.html
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:9534209800346444034
{{{
Oracle Database refreshes non-unique indexes and gathers stats on them too with non-atomic refreshes:

create table t2 ( x , y ) as
  select rownum x, mod(rownum, 5) y from dual connect by level <= 1000;
create table t1 ( x , y ) as
  select rownum x, mod(rownum, 3) y from dual connect by level <= 1000;
  
create materialized view mv_name
refresh on demand 
as
  select t1.* from t1
  union 
  select t2.* from t2;
  
create index iy on mv_name(y);

select status, num_rows from user_indexes
where  index_name = 'IY';

STATUS  NUM_ROWS  
VALID   1,800 
  
insert into t1
  select rownum+1000 x, mod(rownum, 3) y from dual connect by level <= 1000;
insert into t2
  select rownum+1000 x, mod(rownum, 5) y from dual connect by level <= 1000;
commit;

exec dbms_mview.refresh('mv_name', atomic_refresh => false);

select status, num_rows from user_indexes
where  index_name = 'IY';

STATUS  NUM_ROWS  
VALID   3,600   

select count(*) from mv_name;

COUNT(*)  
3,600   



In terms of making your refresh faster, it's worth investigating if you can go for regular fast refreshes. 
You have to go for complete refreshes with union. But you can fast refresh with union all. (Just make sure you add a 
marker column: https://jonathanlewis.wordpress.com/2016/07/12/union-all-mv/ )

So you could:

- Create a fast refresh union all MV
- A complete refresh MV on top of this returning the distinct rows

This may help if the intersection of the two tables is "small". If so, the distinct MV will have much less data to process => it'll be faster. 
}}}
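
A minimal sketch of the fast-refreshable union all MV described in the quote above (assuming the t1/t2 tables from the quoted example; the constant marker column and rowid in each branch are what make fast refresh possible):

{{{
create materialized view log on t1 with rowid;
create materialized view log on t2 with rowid;

create materialized view mv_ua
refresh fast on demand
as
  select 1 marker, t1.rowid rid, t1.x, t1.y from t1
  union all
  select 2 marker, t2.rowid rid, t2.x, t2.y from t2;
}}}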
<<showtoc>>

! python 
!!cheatsheet
[img(100%,100%)[ https://i.imgur.com/oTjD8H5.png]]

!!setup
[img(100%,100%)[ https://i.imgur.com/q2EzdOk.png]]



!bash 
https://www.linuxjournal.com/content/bash-regular-expressions






! regex debugger 
https://www.debuggex.com/


..
http://en.wikipedia.org/wiki/Relational_algebra
Relax-and-Recover is a setup-and-forget Linux bare metal disaster recovery solution. It is easy to set up and requires no maintenance so there is no excuse for not using it.


http://relax-and-recover.org

.
https://blogs.oracle.com/XPSONHA/entry/relocating_grid_infrastructure
https://blogs.oracle.com/XPSONHA/entry/relocating_grid_infrastructure_1
http://www.evernote.com/shard/s48/sh/21e2267d-6530-4012-9615-982dd3850ded/8017106b4dd76c2d1e291e5faf14755f
!! How To Remove Texts Before Or After A Specific Character From Cells In Excel 
https://www.extendoffice.com/documents/excel/1783-excel-remove-text-before-character.html

If cell is blank https://exceljet.net/formula/if-cell-is-blank
http://askdba.org/weblog/2010/09/renaming-diskgroup-containing-voting-disk-ocr/
/***
|Name:|RenameTagsPlugin|
|Description:|Allows you to easily rename or delete tags across multiple tiddlers|
|Version:|3.0 ($Rev: 5501 $)|
|Date:|$Date: 2008-06-10 23:11:55 +1000 (Tue, 10 Jun 2008) $|
|Source:|http://mptw.tiddlyspot.com/#RenameTagsPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License|http://mptw.tiddlyspot.com/#TheBSDLicense|
Rename a tag and you will be prompted to rename it in all its tagged tiddlers.
***/
//{{{
config.renameTags = {

	prompts: {
		rename: "Rename the tag '%0' to '%1' in %2 tiddler%3?",
		remove: "Remove the tag '%0' from %1 tiddler%2?"
	},

	removeTag: function(tag,tiddlers) {
		store.suspendNotifications();
		for (var i=0;i<tiddlers.length;i++) {
			store.setTiddlerTag(tiddlers[i].title,false,tag);
		}
		store.resumeNotifications();
		store.notifyAll();
	},

	renameTag: function(oldTag,newTag,tiddlers) {
		store.suspendNotifications();
		for (var i=0;i<tiddlers.length;i++) {
			store.setTiddlerTag(tiddlers[i].title,false,oldTag); // remove old
			store.setTiddlerTag(tiddlers[i].title,true,newTag);  // add new
		}
		store.resumeNotifications();
		store.notifyAll();
	},

	storeMethods: {

		saveTiddler_orig_renameTags: TiddlyWiki.prototype.saveTiddler,

		saveTiddler: function(title,newTitle,newBody,modifier,modified,tags,fields,clearChangeCount,created) {
			if (title != newTitle) {
				var tagged = this.getTaggedTiddlers(title);
				if (tagged.length > 0) {
					// then we are renaming a tag
					if (confirm(config.renameTags.prompts.rename.format([title,newTitle,tagged.length,tagged.length>1?"s":""])))
						config.renameTags.renameTag(title,newTitle,tagged);

					if (!this.tiddlerExists(title) && newBody == "")
						// dont create unwanted tiddler
						return null;
				}
			}
			return this.saveTiddler_orig_renameTags(title,newTitle,newBody,modifier,modified,tags,fields,clearChangeCount,created);
		},

		removeTiddler_orig_renameTags: TiddlyWiki.prototype.removeTiddler,

		removeTiddler: function(title) {
			var tagged = this.getTaggedTiddlers(title);
			if (tagged.length > 0)
				if (confirm(config.renameTags.prompts.remove.format([title,tagged.length,tagged.length>1?"s":""])))
					config.renameTags.removeTag(title,tagged);
			return this.removeTiddler_orig_renameTags(title);
		}

	},

	init: function() {
		merge(TiddlyWiki.prototype,this.storeMethods);
	}
}

config.renameTags.init();

//}}}
-- TABLE

HOW TO DO TABLE AND INDEX REORGANIZATION
 	Doc ID: 736563.1



How I Reorganize Objects In Our Manufacturing Database Server
  	Doc ID: 430679.1
http://www.makeuseof.com/dir/rescuetime/
http://blog.rescuetime.com/2009/05/29/rescuetime-now-does-non-computer-time-codename-timepie/
http://blog.rescuetime.com/2011/10/26/7-steps-to-boost-your-teams-productivity/
http://blog.rescuetime.com/2011/12/28/that-awesome-data-collector-we-call-carry-around-in-our-pockets/
http://blog.rescuetime.com/2012/10/10/updates-to-offline-time-logging/
http://besthubris.com/entrepreneur/rescuetime-time-tracker-offline-version-manictime/
! AWR scripts

<<<
!! AWR port

@awr_genwl
Capacity, Requirements, Utilization
<<<

- ngcp, meralco, dbm



! Statspack

<<<
!! OSM - Summary of Reports: 

@genwl
General Workload Report

@findpeaks 0.00
Find General Workload Peak-of-Peak Report

@we %sequential%
Wait Event (%sequential%) Activity Report

@sprt2
Response Time Report

@cpufc
CPU Forecasting Stats Report

@iofc
IO Forecast Stats Report

@ip1 %sga_max_size%
Instance Parameter (by date/time) Report

@spsysstat %logon%
Sysstat Statistic (%logon%) Activity Report

@sqlrank 10
TOP SQL by 10 Report
<<<

<<<
!! Tim Gorman
sp_evtrend.sql
sp_systime2.sql
sp_sys_time_trends.sql
sp_buffer_busy.sql
gen_redo.sql
<<<
''MindMap - Database Resource Management (DBRM)'' http://www.evernote.com/shard/s48/sh/15245790-6686-4bcd-9c01-aa243187f086/1c722c2600b5f424ade98b81bc57e3c1


! The Types of Resources Managed by the Resource Manager

Resource plan directives specify how resources are allocated to resource consumer groups or subplans. Each directive can specify several different methods for allocating resources to its consumer group or subplan. The following sections summarize these resource allocation methods:

* CPU
* Degree of Parallelism Limit
* Parallel Target Percentage
* Parallel Queue Timeout
* Active Session Pool with Queuing
* Automatic Consumer Group Switching
* Canceling SQL and Terminating Sessions
* Execution Time Limit
* Undo Pool
* Idle Time Limit

** “max_utilization_limit” is available from Oracle Database 11g Release 2 onwards
** “cpu_managed” is available from Oracle Database 11g Release 2 onwards.  For all releases, you can determine if Resource Manager is managing CPU by seeing whether the current resource plan has CPU directives.
** v$rsrcmgrmetric is available from Oracle Database 11g Release 1 onwards
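
As a sketch (plan, group, and directive names below are made up), the allocation methods above map to parameters of ''DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE'':

```sql
-- Minimal sketch: create a plan with one directive (all names hypothetical).
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan => 'DAYTIME_PLAN', comment => 'demo plan');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'REPORTS_GROUP', comment => 'ad-hoc reports');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                     => 'DAYTIME_PLAN',
    group_or_subplan         => 'REPORTS_GROUP',
    comment                  => 'cap reports',
    mgmt_p1                  => 20,            -- CPU: 20% share at level 1
    parallel_degree_limit_p1 => 4,             -- Degree of Parallelism Limit
    switch_group             => 'CANCEL_SQL',  -- Canceling SQL (11g+)
    switch_time              => 3600);         -- Execution Time Limit trigger (seconds)
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAYTIME_PLAN',
    group_or_subplan => 'OTHER_GROUPS',        -- mandatory catch-all directive
    comment          => 'everything else');
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```

Every plan must include a directive for OTHER_GROUPS, or validation fails.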

! References

* ''Oracle Database Resource Manager and OBIEE'' http://www.rittmanmead.com/2010/01/oracle-database-resource-manager-and-obiee/
* Official Doc - 27 Managing Resources with Oracle Database Resource Manager http://docs.oracle.com/cd/E11882_01/server.112/e10595/dbrm.htm#i1010776
* Official Doc - http://docs.oracle.com/cd/E11882_01/server.112/e16638/os.htm#PFGRF95151
* MindMap - Resource Manager series whitepaper http://www.evernote.com/shard/s48/sh/64021de9-92c6-4ade-afeb-81a12e3e015f/fdc686cf4370cf9678098a0928a01669
** Introduction to Resource Management in Oracle Solaris and Oracle Database http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-054-intro-rm-419298.pdf
** Effective Resource Management Using Oracle Solaris Resource Manager http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-055-solaris-rm-419384.pdf
** Effective Resource Management Using Oracle Database Resource Manager http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-056-oracledb-rm-419380.pdf
** Resource Management Case Study for Mixed Workloads and Server Sharing http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-057-mixed-wl-rm-419381.pdf
* ''A fair bite of the CPU pie? Monitoring & Testing Oracle Resource Manager'' http://rnm1978.wordpress.com/2010/09/10/a-fair-bite-of-the-cpu-pie-monitoring-testing-oracle-resource-manager/
* ''Using Oracle Database Resource Manager'' http://www.oracle.com/technetwork/database/focus-areas/performance/resource-manager-twp-133705.pdf
* ''Control Your Environment with the Resource Manager'' http://seouc.com/PDF_files/2011/Presentations/NormanInstanceCaging_SEOUC_2011.pdf
* ''ResourceManagerEnhancements_11gR1'' http://www.oracle-base.com/articles/11g/ResourceManagerEnhancements_11gR1.php
* ''Oracle Resource Manager Concepts'' http://www.dbform.com/html/2010/1283.html
* ''limiting parallel per session'' http://www.experts-exchange.com/Database/Oracle/Q_10351442.html, http://www.freelists.org/post/oracle-l/limit-parallel-process-per-session-in-10204,2
* http://www.pythian.com/news/2740/oracle-limiting-query-runtime-without-killing-the-session/




! ''Enable the Resource Manager''
<<<
alter system set resource_manager_plan=default_plan;
alter system set cpu_count=12;                                 ''<-- only if you want instance caging; caging requires both an enabled resource plan with CPU directives and a cpu_count setting''
<<<
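
To confirm the plan took effect and, if caging, how much CPU throttling is happening, queries like these help (a sketch; column lists trimmed):

```sql
-- Which plan is active, and is it managing CPU? (cpu_managed is 11gR2+)
SELECT name, cpu_managed FROM v$rsrc_plan;

-- Per-minute CPU consumed vs. throttled, per consumer group (11gR1+)
SELECT consumer_group_name,
       cpu_consumed_time,
       cpu_wait_time          -- time spent throttled by Resource Manager
  FROM v$rsrcmgrmetric;
```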


! ''Disabling the Resource Manager''
<<<
To disable the Resource Manager, complete the following steps:

1) Issue the following SQL statement:

ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = '';

2) Disassociate the Resource Manager from all Oracle Scheduler windows.

To do so, for any Scheduler window that references a resource plan in its resource_plan attribute, use the DBMS_SCHEDULER.SET_ATTRIBUTE procedure to set resource_plan to the empty string (''). Qualify the window name with the SYS schema name if you are not logged in as user SYS. You can view Scheduler windows with the DBA_SCHEDULER_WINDOWS data dictionary view. See "Altering Windows" and Oracle Database PL/SQL Packages and Types Reference for more information.

Note:
By default, all maintenance windows reference the DEFAULT_MAINTENANCE_PLAN resource plan. If you want to completely disable the Resource Manager, you must alter all maintenance windows to remove this plan. However, use caution, because resource consumption by automated maintenance tasks will no longer be regulated, which may adversely affect the performance of your other sessions. See Chapter 24, "Managing Automated Database Maintenance Tasks" for more information on maintenance windows.
<<<
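
The Scheduler-window step above can be scripted; a sketch (run as SYS) that nulls out resource_plan for every window that references one:

```sql
BEGIN
  FOR w IN (SELECT window_name
              FROM dba_scheduler_windows
             WHERE resource_plan IS NOT NULL) LOOP
    DBMS_SCHEDULER.SET_ATTRIBUTE(
      name      => 'SYS.' || w.window_name,
      attribute => 'RESOURCE_PLAN',
      value     => '');
  END LOOP;
END;
/
```

Remember the caveat from the note above: with the maintenance windows stripped of DEFAULT_MAINTENANCE_PLAN, automated maintenance tasks run unregulated.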









Sysadmin Resources for Oracle Linux	
http://blogs.sun.com/OTNGarage/entry/sysadmin_resources_for_oracle_linux
https://community.hortonworks.com/articles/90768/how-to-fix-ambari-hive-view-15-result-fetch-timed.html
http://www.orainternals.com/papers/tuning-101_bottleneck_identification_2.pdf

Doc ID: 415579.1 HowTo Restore RMAN Disk backups of RAC Database to Single Instance On Another Node

{{{

-- SETUP

1) do an incremental level 0 on server1

2) setup the following on server2

a. add an entry on the oratab
orcl:/oracle/app/oracle/product/10.2.0/db_1:N

b. create pfile 
      comment out the following lines
	*.cluster_database_instances=2
	*.cluster_database=true
	*.remote_listener='LISTENERS_ORCL'

      for the undo tablespace, comment out the following
	orcl2.undo_tablespace='UNDOTBS2'
	orcl1.undo_tablespace='UNDOTBS1'

      and replace them with
	undo_tablespace=UNDOTBS1

3) after the incremental level0 on server1, copy all of the backup pieces to server2

4) on server2, do "startup nomount"

5) restore controlfile from autobackup

restore controlfile from '/flash_reco/flash_recovery_area/ORCL/autobackup/ORCL-20090130-c-1177758841-20090130-00';

6) "alter database mount;"

7) catalog backup files

catalog start with '/flash_reco/flash_recovery_area/ORCL';

8) determine the point up to which media recovery should run on the restored database

 List of Archived Logs in backup set 43
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    56      727109     30-JAN-09 727157     30-JAN-09
  2    36      727107     30-JAN-09 727154     30-JAN-09


9) restore and recover


run {
set until sequence 37 thread 2;
restore database;
recover database;
}



-- FOR AUTOMATIC RECOVERY

1) copy archivelogs from primary

2) on server2 catalog the new archivelogs

catalog start with '/flash_reco/flash_recovery_area/ORCL/archivelog';

3) on server2 execute the following query

set lines 100
select 'recover automatic database until time '''||to_char(max(first_time),'YYYY-MM-DD:HH24:MI:SS')||''' using backup controlfile;' from v$archived_log;


----------------------------------------------------------------------------------------------------------------------
-- to automate the recovery deploy below scripts
-- file recover.sh
rman target / << EOF
catalog start with '/flash_reco/flash_recovery_area/ORCL/archivelog';
yes
exit
EOF
sqlplus "/ as sysdba" @getrecover.sql

-- file getrecover.sql 
set lines 100
set heading off
spool recover.sql
select 'recover automatic database until time '''||to_char(max(first_time),'YYYY-MM-DD:HH24:MI:SS')||''' using backup controlfile;' from v$archived_log;
spool off
set echo on
spool recover.log
@@recover.sql
spool off
exit
----------------------------------------------------------------------------------------------------------------------




-- OPEN READ ONLY

1) disable block change tracking;

alter database disable block change tracking;

2) alter database open read only;




-- OPEN

1) rename online redo log files to the new location

2) open resetlogs

3) remove the redolog groups for redo threads of other instances

	SQL> select THREAD#, STATUS, ENABLED
	  2  from v$thread;

	  THREAD# STATUS ENABLED
	---------- ------ --------
		1 OPEN   PUBLIC
		2 CLOSED PRIVATE

	SQL> select group# from v$log where THREAD#=2;

	    GROUP#
	----------
		4
		5
		6

	SQL> alter database disable thread 2;

	Database altered.

	SQL> alter database drop logfile group 4;
	alter database drop logfile group 4
	*
	ERROR at line 1:
	ORA-00350: log 4 of instance racdb2 (thread 2) needs to be archived
	ORA-00312: online log 4 thread 2: '/u01/oracle/oradata/ractest/log/redo04.log'

	SQL> alter database clear unarchived logfile group 4;

	Database altered.

	SQL> alter database drop logfile group 4;

	Database altered.

	SQL> alter database drop logfile group 5;

	Database altered.

	SQL> alter database drop logfile group 6;

	Database altered.

	SQL> select THREAD#, STATUS, ENABLED from v$thread;

	  THREAD# STATUS ENABLED
	---------- ------ --------
		1 OPEN   PUBLIC

4) remove the undo tablespaces of other instances

	SQL> sho parameter undo;

	NAME                                 TYPE        VALUE
	------------------------------------ ----------- ------------------------------
	undo_management                      string      AUTO
	undo_retention                       integer     900
	undo_tablespace                      string      UNDOTBS1
	SQL>
	SQL>
	SQL> select tablespace_name from dba_tablespaces where contents='UNDO';

	TABLESPACE_NAME
	------------------------------
	UNDOTBS1
	UNDOTBS2

	SQL> drop tablespace UNDOTBS2 including contents and datafiles;

	Tablespace dropped.


5) create a new temporary tablespace to complete the activity

	select file#, name from v$tempfile;

	-- to drop tempfiles
	select 'alter database tempfile '|| file# ||' drop including datafiles;' 
	from v$tempfile;

	alter tablespace temp add tempfile '/u01b/oradata/HCPRD3/temp01.dbf' size 500M autoextend on next 100M maxsize 2000M;





-- CAVEATS

1) if you add a new datafile on the primary, recovery will error at first (the file lands in the control file as UNNAMED), but when you re-run the recovery it will create the datafile

    Tue Jan 27 18:11:35 2009
    alter database recover logfile '/flash_reco/flash_recovery_area/ORCL/archivelog/orcl_1_60_649978105.arc'
    Tue Jan 27 18:11:35 2009
    Media Recovery Log /flash_reco/flash_recovery_area/ORCL/archivelog/orcl_1_60_649978105.arc
    File #6 added to control file as 'UNNAMED00006'. Originally created as:
    '+DATA_1/orcl/datafile/karlarao.dbf'
    Errors with log /flash_reco/flash_recovery_area/ORCL/archivelog/orcl_1_60_649978105.arc
    Some recovered datafiles maybe left media fuzzy
    Media recovery may continue but open resetlogs may fail
    Tue Jan 27 18:11:38 2009
    Media Recovery failed with error 1244
    ORA-283 signalled during: alter database recover logfile '/flash_reco/flash_recovery_area/ORCL/archivelog/orcl_1_60_649978105.arc'...
    Tue Jan 27 18:11:38 2009
    alter database recover datafile list clear
    Completed: alter database recover datafile list clear
    Tue Jan 27 18:11:40 2009
    alter database recover datafile list 6
    Completed: alter database recover datafile list 6
    Tue Jan 27 18:11:40 2009
    alter database recover datafile list
    1 , 2 , 3 , 4 , 5
    Completed: alter database recover datafile list
    1 , 2 , 3 , 4 , 5
    Tue Jan 27 18:11:40 2009
    alter database recover if needed
    start until cancel using backup controlfile
    Media Recovery Start
    ORA-279 signalled during: alter database recover if needed
    start until cancel using backup controlfile
    ...
    Tue Jan 27 18:11:41 2009
    alter database recover logfile '/flash_reco/flash_recovery_area/ORCL/archivelog/orcl_1_60_649978105.arc'
    Tue Jan 27 18:11:41 2009
    Media Recovery Log /flash_reco/flash_recovery_area/ORCL/archivelog/orcl_1_60_649978105.arc
    ORA-279 signalled during: alter database recover logfile '/flash_reco/flash_recovery_area/ORCL/archivelog/orcl_1_60_649978105.arc'...
    Tue Jan 27 18:11:41 2009


2) if a datafile was added, opening read only will look for the added file and fail if it does not exist yet

    SQL> alter database open read only;
    alter database open read only
    *
    ERROR at line 1:
    ORA-01565: error in identifying file '+DATA_1/orcl/datafile/karlarao.dbf'
    ORA-17503: ksfdopn:2 Failed to open file +DATA_1/orcl/datafile/karlarao.dbf
    ORA-15173: entry 'karlarao.dbf' does not exist in directory 'datafile'


3) if new redo logs are added on the primary, they will not be created on the pseudo standby

4) any tables created on the primary will be generated on the pseudo standby
}}}
http://oraclue.com/2010/11/02/role-based-database-service/
MAX_ENABLED_ROLES: WHAT IS THE MAX THIS CAN BE SET TO?
  	Doc ID: 	Note:1012034.6

Roles and Privileges Administration and Restrictions
  	Doc ID: 	Note:13615.1

Oracle Clusterware (formerly CRS) Rolling Upgrades
  	Doc ID: 	Note:338706.1

How We Upgraded our Oracle 9i Database to Oracle 10g Database with Near-Zero Downtime
  	Doc ID: 	Note:431430.1

RAC Survival Kit: Database Upgrades and Migration
  	Doc ID: 	Note:206678.1

Applying one-off Oracle Clusterware patches in a mixed version home environment
  	Doc ID: 	Note:363254.1



10g Rolling Upgrades with Logical Standby
  	Doc ID: 	Note:300479.1
http://jonathanlewis.wordpress.com/2007/02/05/go-faster-stripes/#more-187
http://oraexplorer.com/2009/10/online-san-storage-migration-for-oracle-11g-rac-database-with-asm/

but wait.. I recently had this scenario, and for a large-scale migration it's better to use SAN Copy, passing the data through the fiber fabric..

Facebook wall thread
<<<
 I heart EMC SAN copy makes storage array migration so fast .. ;)

Martin Berger who wants to migrate storage arrays? 
I only care of the data ;-)
June 13 at 11:31pm · Like · 

Roy Hayrosa be sure non of those files are corrupted..hehehe
June 14 at 3:30am · Like · 

Karl Arao ‎@Martin: part of the job man :) there was a need to migrate from old CX to a new CX4 and the rac asm disks,OCR,vote disk should be moved.. Plan A was to make use of asm disk rebalance (add/drop) but turned out it will finish for like daysss.. Plan B was short and sweet make use of the SAN copy of the LUNs (header and metadata are the same) and instantly they were recognized without problems.. Rac started up without problems.. 

@roy: Asm metadata was clean, rman backup validate check logical was okay :)
June 15 at 2:28am · Like · 

Roy Hayrosa your the man! blog it...want to see the steps :)
June 15 at 12:52pm · Like · 

Karl Arao coming up soon :)
June 16 at 10:58am · Like · 

Martin Berger did you need a downtime for the SAN-copy? how did you exchange the disks on the servers? I'm very curious!
June 16 at 12:32pm · Like · 

Karl Arao Nope.. there is a way to do incremental SANCopy while the RAC environment is still running.. only time you have to full shutdown is when you do the final sync so the dirty blocks will be synced to the new devices... The whole activity is just like restarting the whole RAC environment and pointing the server to the new LUNs.. Bulk of the work will be on the storage engineer, in our case the OCR and Voting Disk are on OCFS2 so we just need to edit the fstab with the new EMC pseudo device names.. and for the ASM, we are using ASMlib.. and the new devices although having different names still has the header and metadata which the only two stuff that ASM cares about so when you boot the machine up again it's as if nothing happened.

Note that you should not present the OLD and NEW LUNs together, because ASM will tell you that it is seeing two instance of the disk.. :)

This is very ideal for large array migration and the business allows for a minimal downtime window.. at a minimum one restart of the servers.. instead of doing a full array migration using ASM rebalance which will take longer and will make you worry if it's already finished or not and if you take this route the bottleneck would be your CPUs consuming IO time (on full throttle rebalance power 11) .. compared to SANCopy, it's all passing through the Fiber (1TB = 1hour as per the engineer) and much faster.. :)
June 16 at 2:41pm · Like ·  1 person · 
<<<



add drop on dell storage http://www.oracle.com/jp/gridcenter/partner/nssol/wp-storage-mig-grid-nsso-289788-ja.pdf

How to Migrate Oracle Database from Oracle Solaris 8 to Oracle Solaris 11 http://www.oracle.com/technetwork/articles/servers-storage-admin/migrate-s8db-to-s11-1867397.html
on migration to HANA, this is where HANA can beat Exadata..
they can run BWA on top of HANA, or run BWA by itself

2-hour data load
cube-to-cube build
http://www.oracle.com/us/solutions/sap/oracleengsys-sap-080613-final2-1988224.pdf
http://www.precise.com/appsapone/app-sap-one
https://blogs.oracle.com/bhuang/entry/white_paper_for_sap_on
architecture config options https://www.google.com/search?q=sap+dual+stack+vs+single+stack&oq=SAP+dual+stacks+vs+&aqs=chrome.1.69i57j0.4685j0j1&sourceid=chrome&ie=UTF-8 
Single stack vs. Dual Stack - Why does SAP recommend single stack https://archive.sap.com/discussions/thread/919833
https://www.sas.com/en_us/software/studio.html

https://www.google.com/search?ei=G9TBXNe5Cqu2ggen-q-ADA&q=r+studio+job+vs+SAS+studio&oq=r+studio+job+vs+SAS+studio&gs_l=psy-ab.3..35i39.4103.4103..4372...0.0..0.86.86.1......0....1..gws-wiz.2ZZe9iOp8yE
go here GetSystemInfo and run the system info tool; you'll see the interface and transfer mode reported as SATA-300 and SATA-150
those are SATA II at 3 Gbps and SATA I at 1.5 Gbps; dividing the raw bit rates by 8 gives 375MB/s and 187.5MB/s, but after 8b/10b encoding the effective maximums are 300MB/s and 150MB/s (hence the names)

[img[picturename| https://lh5.googleusercontent.com/_F2x5WXOJ6Q8/TZwkhYQb6MI/AAAAAAAABLc/-oT8vP0wHh0/SATA.png]]




Other references:
http://www.tomshardware.com/forum/56577-35-sata-notebooks-sale-today
http://www.gtopala.com/siw-download.html
http://www.neowin.net/forum/topic/603834-what-is-the-difference-between-sata-150-and-sata-300/



https://www.scaledagileframework.com/story/
https://learning.oreilly.com/videos/leading-safe-scaled/9780134864044/9780134864044-SAFE_00_01_03_00
https://github.com/karlarao/hive-scd-examples


https://www.researchgate.net/publication/330798405_Temporal_Dimensional_Modeling
https://github.com/Roenbaeck/tempodim
http://www.anchormodeling.com/?p=1212

https://mssqldude.wordpress.com/2019/04/15/adf-slowly-changing-dimension-type-2-with-mapping-data-flows-complete/

! SCD type 1,2,3

- Type 1 (overwrite in place, no history)
- Type 2 (full history - dates, flags, versions)
[img(70%,70%)[https://i.imgur.com/RPwufrR.png]]

- Type 3 (keep only the current and previous values; older history is discarded)
[img(70%,70%)[https://i.imgur.com/sCQsQsn.png]]

- Mixed Type 2 and 3 (add Flag Y on the two most recent versions)
[img(70%,70%)[https://i.imgur.com/wOnAL58.png]]
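
A sketch of Type 2 in plain SQL (dim_customer / stg_customer are hypothetical tables, with address as the tracked attribute):

```sql
-- 1) Expire the current version of any row whose tracked attribute changed
UPDATE dim_customer d
   SET end_date = SYSDATE, current_flag = 'N'
 WHERE current_flag = 'Y'
   AND EXISTS (SELECT 1 FROM stg_customer s
                WHERE s.customer_id = d.customer_id
                  AND s.address <> d.address);

-- 2) Insert the new version for changed rows and brand-new customers
--    (after step 1, changed customers no longer have a current_flag = 'Y' row)
INSERT INTO dim_customer
       (customer_id, address, start_date, end_date, current_flag)
SELECT s.customer_id, s.address, SYSDATE, NULL, 'Y'
  FROM stg_customer s
 WHERE NOT EXISTS (SELECT 1 FROM dim_customer d
                    WHERE d.customer_id = s.customer_id
                      AND d.current_flag = 'Y');
```

The hive-scd-examples repo linked above shows the same pattern in Hive; Type 1 would just be a single UPDATE/MERGE with no end_date or flag columns.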
https://tanelpoder.com/2008/02/10/sqlnet-message-to-client-vs-sqlnet-more-data-to-client/
https://martincarstenbach.com/2014/06/11/my-sdu-goes-to-11-hh-i-meant-2097152/
https://oraganism.wordpress.com/2011/09/24/setting-sdu-size-mainly-in-11-2/
https://community.ifs.com/framework-experience-infrastructure-cloud-integration-dev-tools-50/oracle-sdu-session-data-unit-for-remote-implementations-44228
[oracle@dbrocaix01 oradata]$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Jul 2 00:12:20 2008

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Release 10.2.0.1.0 - Production
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

SQL> conn scott/tiger
Connected.
SQL>
SQL> begin
  2    DBMS_RLS.ADD_POLICY (
  3                    OBJECT_SCHEMA   => 'SEC_MGR',
  4                    OBJECT_NAME     => 'PEOPLE_RO',
  5                    POLICY_NAME     => 'PEOPLE_RO_IUD',
  6                    FUNCTION_SCHEMA => 'SEC_MGR',
  7                    POLICY_FUNCTION => 'No_Records',
  8                    STATEMENT_TYPES => 'INSERT,UPDATE,DELETE',
  9                    UPDATE_CHECK    => TRUE);
 10  end;
 11  /
begin
*
ERROR at line 1:
ORA-00439: feature not enabled: Fine-grained access control
ORA-06512: at "SYS.DBMS_RLS", line 20
ORA-06512: at line 2

SQL> exit
Disconnected from Oracle Database 10g Release 10.2.0.1.0 - Production


EE options installed:
1) ASO
2) spatial
3) label security
4) OLAP

1) ASO
2) partitioning
3) spatial
4) OLAP



COMP_NAME                           VERSION                        STATUS
----------------------------------- ------------------------------ -----------
Oracle Database Catalog Views       10.2.0.1.0                     VALID
Oracle Database Packages and Types  10.2.0.1.0                     VALID
Oracle Workspace Manager            10.2.0.1.0                     VALID
Oracle Label Security               10.2.0.1.0                     VALID


PARAMETER                                VALUE
---------------------------------------- ------------------------------
Partitioning                             FALSE
Objects                                  TRUE
Real Application Clusters                FALSE
Advanced replication                     FALSE
Bit-mapped indexes                       FALSE
Connection multiplexing                  TRUE
Connection pooling                       TRUE
Database queuing                         TRUE
Incremental backup and recovery          TRUE
Instead-of triggers                      TRUE
Parallel backup and recovery             FALSE

PARAMETER                                VALUE
---------------------------------------- ------------------------------
Parallel execution                       FALSE
Parallel load                            TRUE
Point-in-time tablespace recovery        FALSE
Fine-grained access control              FALSE
Proxy authentication/authorization       TRUE
Change Data Capture                      FALSE
Plan Stability                           TRUE
Online Index Build                       FALSE
Coalesce Index                           FALSE
Managed Standby                          FALSE
Materialized view rewrite                FALSE

PARAMETER                                VALUE
---------------------------------------- ------------------------------
Materialized view warehouse refresh      FALSE
Database resource manager                FALSE
Spatial                                  FALSE
Visual Information Retrieval             FALSE
Export transportable tablespaces         FALSE
Transparent Application Failover         TRUE
Fast-Start Fault Recovery                FALSE
Sample Scan                              TRUE
Duplexed backups                         FALSE
Java                                     TRUE
OLAP Window Functions                    TRUE

PARAMETER                                VALUE
---------------------------------------- ------------------------------
Block Media Recovery                     FALSE
Fine-grained Auditing                    FALSE
Application Role                         FALSE
Enterprise User Security                 FALSE
Oracle Data Guard                        FALSE
Oracle Label Security                    FALSE
OLAP                                     FALSE
Table compression                        FALSE
Join index                               FALSE
Trial Recovery                           FALSE
Data Mining                              FALSE

PARAMETER                                VALUE
---------------------------------------- ------------------------------
Online Redefinition                      FALSE
Streams Capture                          FALSE
File Mapping                             FALSE
Block Change Tracking                    FALSE
Flashback Table                          FALSE
Flashback Database                       FALSE
Data Mining Scoring Engine               FALSE
Transparent Data Encryption              FALSE
Backup Encryption                        FALSE
Unused Block Compression                 FALSE

54 rows selected.




-------------------------------------------------------------------------------------------------------------------




[oracle@dbrocaix01 oradata]$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Jul 2 00:12:41 2008

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options







SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options




select comp_name, version, status from dba_registry

COMP_NAME                           VERSION                        STATUS
----------------------------------- ------------------------------ -----------
Oracle Database Catalog Views       10.2.0.1.0                     VALID
Oracle Database Packages and Types  10.2.0.1.0                     VALID
Oracle Workspace Manager            10.2.0.1.0                     VALID
Oracle Label Security               10.2.0.1.0                     VALID


col parameter format a40
select * from v$option


PARAMETER                                VALUE
---------------------------------------- ------------------------------
Partitioning                             TRUE
Objects                                  TRUE
Real Application Clusters                FALSE
Advanced replication                     TRUE
Bit-mapped indexes                       TRUE
Connection multiplexing                  TRUE
Connection pooling                       TRUE
Database queuing                         TRUE
Incremental backup and recovery          TRUE
Instead-of triggers                      TRUE
Parallel backup and recovery             TRUE

PARAMETER                                VALUE
---------------------------------------- ------------------------------
Parallel execution                       TRUE
Parallel load                            TRUE
Point-in-time tablespace recovery        TRUE
Fine-grained access control              TRUE
Proxy authentication/authorization       TRUE
Change Data Capture                      TRUE
Plan Stability                           TRUE
Online Index Build                       TRUE
Coalesce Index                           TRUE
Managed Standby                          TRUE
Materialized view rewrite                TRUE

PARAMETER                                VALUE
---------------------------------------- ------------------------------
Materialized view warehouse refresh      TRUE
Database resource manager                TRUE
Spatial                                  TRUE
Visual Information Retrieval             TRUE
Export transportable tablespaces         TRUE
Transparent Application Failover         TRUE
Fast-Start Fault Recovery                TRUE
Sample Scan                              TRUE
Duplexed backups                         TRUE
Java                                     TRUE
OLAP Window Functions                    TRUE

PARAMETER                                VALUE
---------------------------------------- ------------------------------
Block Media Recovery                     TRUE
Fine-grained Auditing                    TRUE
Application Role                         TRUE
Enterprise User Security                 TRUE
Oracle Data Guard                        TRUE
Oracle Label Security                    FALSE
OLAP                                     TRUE
Table compression                        TRUE
Join index                               TRUE
Trial Recovery                           TRUE
Data Mining                              TRUE

PARAMETER                                VALUE
---------------------------------------- ------------------------------
Online Redefinition                      TRUE
Streams Capture                          TRUE
File Mapping                             TRUE
Block Change Tracking                    TRUE
Flashback Table                          TRUE
Flashback Database                       TRUE
Data Mining Scoring Engine               FALSE
Transparent Data Encryption              TRUE
Backup Encryption                        TRUE
Unused Block Compression                 TRUE

54 rows selected.
https://www.thatjeffsmith.com/archive/2016/11/7-ways-to-avoid-select-from-queries-in-sql-developer/

ORA-04031: Unable to Allocate 83232 Bytes of Shared Memory
  	Doc ID: 	374329.1

Diagnosing and Resolving Error ORA-04031
  	Doc ID: 	146599.1 	

ORA-4031 Common Analysis/Diagnostic Scripts
  	Doc ID: 	430473.1




LOG_BUFFER Differs from the Value Set in the spfile or pfile
  	Doc ID: 	373018.1


-- ASMM
Excess “KGH: NO ACCESS” Memory Allocation [Video] [ID 801787.1]
http://www.evernote.com/shard/s48/sh/b60af7b1-9b57-4be8-b64b-476f068d0c9f/6a719a99c1367c81b59620d9bb6fb1d0
Shutdown Normal or Shutdown Immediate Hangs. SMON disabling TX Recovery
  	Doc ID: 	Note:1076161.6

SMON - Temporary Segment Cleanup and Free Space Coalescing
  	Doc ID: 	Note:61997.1

ORA-0054: When Dropping or Truncating Table, When Creating or Rebuilding Index
  	Doc ID: 	Note:117316.1
http://docs.oracle.com/cd/E11857_01/em.111/e14091/chap1.htm#SNMPR001
http://docs.oracle.com/cd/E11857_01/em.111/e16790/notification.htm#EMADM9130
firescope http://www.firescope.com/Support/ , http://www.firescope.com/QuickStart/Unify/Article.asp?ContentID=1


Configuring SNMP Trap Notification Method in EM - Steps and Troubleshooting [ID 434886.1]
Where can I find the MIB file for Grid Control SNMP trap Notifications? [ID 389585.1]
The Enterprise Manager MIB file 'omstrap.v1' has Incorrect Formatting [ID 750117.1]
How To Configure Notification Rules in Enterprise Manager Grid Control? [ID 429422.1]
How to Troubleshoot Notifications That Are Hung / Stuck and Not Being Sent from EM 10g [ID 285093.1]
How to Verify the SNMP Trap Contents Being Sent by Grid Control? [ID 469884.1]

http://h30499.www3.hp.com/t5/ITRC-HP-Systems-Insight-Manager/HP-insight-manager-snmp-traps/td-p/5269813

''Monitoring Exadata database machine with Oracle Enterprise Manager 11g'' http://dbastreet.com/blog/?tag=enterprise-manager  If you use enterprise wide monitoring tools like tivoli, openview or netcool, use snmp traps from oracle enterprise manager, to notify these monitoring tools (ie dont try to directly use snmp to monitor the exadata components. You could do this but it will be too time consuming).



''setting up SNMP''
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch22_:_Monitoring_Server_Performance
http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch23_:_Advanced_MRTG_for_Linux
http://www.cyberciti.biz/nixcraft/linux/docs/uniqlinuxfeatures/mrtg/
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/sect-System_Monitoring_Tools-Net-SNMP.html









TESTING SQL PERFORMANCE IMPACT OF AN ORACLE 9i TO ORACLE DATABASE 10g RELEASE 2 UPGRADE WITH SQL PERFORMANCE ANALYZER (Doc ID 562899.1)

SQL PERFORMANCE ANALYZER EXAMPLE [ID 455889.1]
{{{
SQL> 
SQL> --Setting optimizer_capture_sql_plan_baselines=TRUE
SQL> --Automatically Captures the Plan for any
SQL> --Repeatable SQL statement.
SQL> --By Default optimizer_capture_sql_plan_baselines is False.
SQL> --A repeatable SQL statement is one that is executed more than once.
SQL> --Now we will Capture various Optimizer plans in SPM Baseline.
SQL> --Plans are captured into the Baselines as New Plans are found.
SQL> 
SQL> pause

SQL> 
SQL> alter session set optimizer_capture_sql_plan_baselines = TRUE;

Session altered.

SQL> alter session set optimizer_use_sql_plan_baselines=FALSE;

Session altered.

SQL> alter system set optimizer_features_enable='11.1.0.6';

System altered.

SQL> 
SQL> pause

SQL> 
SQL> set autotrace on
SQL> 
SQL> pause

SQL> 
SQL> --Execute Sql.
SQL> 
SQL> pause

SQL> 
SQL> SELECT *
  2  from sh.sales
  3  where quantity_sold > 30
  4  order by prod_id;

no rows selected


Execution Plan
----------------------------------------------------------                                                                                  
Plan hash value: 3803407550                                                                                                                 
                                                                                                                                            
----------------------------------------------------------------------------------------------                                              
| Id  | Operation            | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |                                              
----------------------------------------------------------------------------------------------                                              
|   0 | SELECT STATEMENT     |       |     1 |    29 |   328   (7)| 00:00:04 |       |       |                                              
|   1 |  SORT ORDER BY       |       |     1 |    29 |   328   (7)| 00:00:04 |       |       |                                              
|   2 |   PARTITION RANGE ALL|       |     1 |    29 |   327   (6)| 00:00:04 |     1 |    28 |                                              
|*  3 |    TABLE ACCESS FULL | SALES |     1 |    29 |   327   (6)| 00:00:04 |     1 |    28 |                                              
----------------------------------------------------------------------------------------------                                              
                                                                                                                                            
Predicate Information (identified by operation id):                                                                                         
---------------------------------------------------                                                                                         
                                                                                                                                            
   3 - filter("QUANTITY_SOLD">30)                                                                                                           


Statistics
----------------------------------------------------------                                                                                  
          0  recursive calls                                                                                                                
          0  db block gets                                                                                                                  
       1718  consistent gets                                                                                                                
          0  physical reads                                                                                                                 
          0  redo size                                                                                                                      
        639  bytes sent via SQL*Net to client                                                                                               
        409  bytes received via SQL*Net from client                                                                                         
          1  SQL*Net roundtrips to/from client                                                                                              
          1  sorts (memory)                                                                                                                 
          0  sorts (disk)                                                                                                                   
          0  rows processed                                                                                                                 

SQL> 
SQL> pause

SQL> 
SQL> --This was the first execution of the Sql.
SQL> 
SQL> pause

SQL> 
SQL> SELECT *
  2  from sh.sales
  3  where quantity_sold > 30
  4  order by prod_id;

no rows selected


Execution Plan
----------------------------------------------------------                                                                                  
Plan hash value: 3803407550                                                                                                                 
                                                                                                                                            
----------------------------------------------------------------------------------------------                                              
| Id  | Operation            | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |                                              
----------------------------------------------------------------------------------------------                                              
|   0 | SELECT STATEMENT     |       |     1 |    29 |   328   (7)| 00:00:04 |       |       |                                              
|   1 |  SORT ORDER BY       |       |     1 |    29 |   328   (7)| 00:00:04 |       |       |                                              
|   2 |   PARTITION RANGE ALL|       |     1 |    29 |   327   (6)| 00:00:04 |     1 |    28 |                                              
|*  3 |    TABLE ACCESS FULL | SALES |     1 |    29 |   327   (6)| 00:00:04 |     1 |    28 |                                              
----------------------------------------------------------------------------------------------                                              
                                                                                                                                            
Predicate Information (identified by operation id):                                                                                         
---------------------------------------------------                                                                                         
                                                                                                                                            
   3 - filter("QUANTITY_SOLD">30)                                                                                                           


Statistics
----------------------------------------------------------                                                                                  
          0  recursive calls                                                                                                                
          0  db block gets                                                                                                                  
       1718  consistent gets                                                                                                                
          0  physical reads                                                                                                                 
          0  redo size                                                                                                                      
        639  bytes sent via SQL*Net to client                                                                                               
        409  bytes received via SQL*Net from client                                                                                         
          1  SQL*Net roundtrips to/from client                                                                                              
          1  sorts (memory)                                                                                                                 
          0  sorts (disk)                                                                                                                   
          0  rows processed                                                                                                                 

SQL> 
SQL> pause

SQL> 
SQL> set autotrace off
SQL> 
SQL> pause

SQL> 
SQL> --This was the second execution of the Sql.
SQL> 
SQL> pause

SQL> 
SQL> 
SQL> --Verify what Plans have been inserted into Plan Baseline.
SQL> 
SQL> pause

SQL> 
SQL> select sql_handle, plan_name,
  2  origin, enabled, accepted,sql_text
  3  from dba_sql_plan_baselines
  4  where sql_text like 'SELECT%sh.sales%';

SQL_HANDLE               PLAN_NAME                     ORIGIN         ENA ACC                                                               
------------------------ ----------------------------- -------------- --- ---                                                               
SQL_TEXT                                                                                                                                    
--------------------------------------------------------------------------------                                                            
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d211df68d0 AUTO-CAPTURE   YES YES                                                               
SELECT *                                                                                                                                    
from sh.sales                                                                                                                               
where quantity_sold > 30                                                                                                                    
order by prod_id                                                                                                                            
                                                                                                                                            
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d254bc8843 AUTO-CAPTURE   YES NO                                                                
SELECT *                                                                                                                                    
from sh.sales                                                                                                                               
where quantity_sold > 30                                                                                                                    
order by prod_id                                                                                                                            
                                                                                                                                            

SQL> 
SQL> pause

SQL> 
SQL> --SYS_SQL_PLAN_0f3e54d211df68d0 is the very first plan that
SQL> --was inserted into the SQL Plan Baseline.
SQL> --Note the very first plan is ENABLED=YES and ACCEPTED=YES.
SQL> --Note the ORIGIN is AUTO-CAPTURE, which means the plan was captured
SQL> --automatically because optimizer_capture_sql_plan_baselines = TRUE.
SQL> --Note the Plan hash value: 3803407550
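(Aside, not part of the captured session: the ENABLED/ACCEPTED attributes seen above can also be changed by hand with `DBMS_SPM.ALTER_SQL_PLAN_BASELINE`. The sketch below is illustrative only; the handle and plan name are copied from the listing above.)

```sql
-- Hedged sketch: manually flip a captured plan's ENABLED attribute.
-- ALTER_SQL_PLAN_BASELINE returns the number of plans altered.
SET SERVEROUTPUT ON
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
         sql_handle      => 'SYS_SQL_7de69bb90f3e54d2',
         plan_name       => 'SYS_SQL_PLAN_0f3e54d211df68d0',
         attribute_name  => 'ENABLED',
         attribute_value => 'NO');   -- set back to 'YES' to re-enable
  DBMS_OUTPUT.PUT_LINE(n || ' plan(s) altered.');
END;
/
```

A disabled (ENABLED=NO) plan stays in the baseline but is no longer eligible for use by the optimizer.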
SQL> 
SQL> pause

SQL> 
SQL> --Let us change the Optimizer Environment.
SQL> 
SQL> pause

SQL> 
SQL> alter system set optimizer_features_enable='10.2.0.3';

System altered.

SQL> alter session set optimizer_index_cost_adj=1;

Session altered.

SQL> 
SQL> pause

SQL> 
SQL> set autotrace on
SQL> 
SQL> pause

SQL> 
SQL> SELECT *
  2  from sh.sales
  3  where quantity_sold > 30
  4  order by prod_id;

no rows selected


Execution Plan
----------------------------------------------------------                                                                                  
Plan hash value: 899219946                                                                                                                  
                                                                                                                                            
-----------------------------------------------------------------------------------------------------------------------                     
| Id  | Operation                           | Name            | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |                     
-----------------------------------------------------------------------------------------------------------------------                     
|   0 | SELECT STATEMENT                    |                 |     1 |    29 |   294  (13)| 00:00:04 |       |       |                     
|   1 |  SORT ORDER BY                      |                 |     1 |    29 |   294  (13)| 00:00:04 |       |       |                     
|   2 |   PARTITION RANGE ALL               |                 |     1 |    29 |   293  (13)| 00:00:04 |     1 |    28 |                     
|*  3 |    TABLE ACCESS BY LOCAL INDEX ROWID| SALES           |     1 |    29 |   293  (13)| 00:00:04 |     1 |    28 |                     
|   4 |     BITMAP CONVERSION TO ROWIDS     |                 |       |       |            |          |       |       |                     
|   5 |      BITMAP INDEX FULL SCAN         | SALES_PROMO_BIX |       |       |            |          |     1 |    28 |                     
-----------------------------------------------------------------------------------------------------------------------                     
                                                                                                                                            
Predicate Information (identified by operation id):                                                                                         
---------------------------------------------------                                                                                         
                                                                                                                                            
   3 - filter("QUANTITY_SOLD">30)                                                                                                           


Statistics
----------------------------------------------------------                                                                                  
          1  recursive calls                                                                                                                
          0  db block gets                                                                                                                  
       2030  consistent gets                                                                                                                
          0  physical reads                                                                                                                 
          0  redo size                                                                                                                      
        639  bytes sent via SQL*Net to client                                                                                               
        409  bytes received via SQL*Net from client                                                                                         
          1  SQL*Net roundtrips to/from client                                                                                              
          1  sorts (memory)                                                                                                                 
          0  sorts (disk)                                                                                                                   
          0  rows processed                                                                                                                 

SQL> 
SQL> pause

SQL> 
SQL> set autotrace off
SQL> 
SQL> pause

SQL> 
SQL> --Note the plan hash value
SQL> 
SQL> pause

SQL> 
SQL> --Let us verify if the plan was inserted into Plan Baseline.
SQL> 
SQL> pause

SQL> 
SQL> select sql_handle, plan_name,
  2  origin, enabled, accepted,sql_text
  3  from dba_sql_plan_baselines
  4  where sql_text like 'SELECT%sh.sales%';

SQL_HANDLE               PLAN_NAME                     ORIGIN         ENA ACC                                                               
------------------------ ----------------------------- -------------- --- ---                                                               
SQL_TEXT                                                                                                                                    
--------------------------------------------------------------------------------                                                            
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d211df68d0 AUTO-CAPTURE   YES YES                                                               
SELECT *                                                                                                                                    
from sh.sales                                                                                                                               
where quantity_sold > 30                                                                                                                    
order by prod_id                                                                                                                            
                                                                                                                                            
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d254bc8843 AUTO-CAPTURE   YES NO                                                                
SELECT *                                                                                                                                    
from sh.sales                                                                                                                               
where quantity_sold > 30                                                                                                                    
order by prod_id                                                                                                                            
                                                                                                                                            

SQL> 
SQL> pause

SQL> 
SQL> ---A new plan, SYS_SQL_PLAN_0f3e54d254bc8843, was found
SQL> ---and inserted into the Plan Baseline.
SQL> ---Note that ACCEPTED is NO, as this is the second plan added to the baseline.
SQL> ---Note the Plan hash value: 899219946
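(Aside, not part of the captured session: an unaccepted plan like the one above can be verified and promoted with `DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE`. The sketch below is illustrative; the handle is copied from the listing above.)

```sql
-- Hedged sketch: ask SPM to test-execute the non-accepted plan(s)
-- for this SQL handle and accept any that perform better.
-- EVOLVE_SQL_PLAN_BASELINE (11g) returns a text report as a CLOB.
SET SERVEROUTPUT ON
SET LONG 100000
DECLARE
  rpt CLOB;
BEGIN
  rpt := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
           sql_handle => 'SYS_SQL_7de69bb90f3e54d2',
           verify     => 'YES',   -- test-execute before accepting
           commit     => 'YES');  -- change ACCEPTED to YES if it passes
  DBMS_OUTPUT.PUT_LINE(rpt);
END;
/
```

Until a plan is ACCEPTED (whether by evolution or manually), the optimizer will not choose it from the baseline.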
SQL> 
SQL> pause

SQL> 
SQL> --Let us change the Optimizer Environment.
SQL> 
SQL> pause

SQL> 
SQL> alter system set optimizer_features_enable='9.2.0';

System altered.

SQL> alter session set optimizer_index_cost_adj=50;

Session altered.

SQL> alter session set optimizer_index_caching=100;

Session altered.

SQL> 
SQL> pause

SQL> 
SQL> set autotrace on
SQL> 
SQL> pause

SQL> 
SQL> SELECT *
  2  from sh.sales
  3  where quantity_sold > 30
  4  order by prod_id;

no rows selected


Execution Plan
----------------------------------------------------------                                                                                  
Plan hash value: 3803407550                                                                                                                 
                                                                                                                                            
------------------------------------------------------------------------------                                                              
| Id  | Operation            | Name  | Rows  | Bytes | Cost  | Pstart| Pstop |                                                              
------------------------------------------------------------------------------                                                              
|   0 | SELECT STATEMENT     |       |     1 |    29 |    47 |       |       |                                                              
|   1 |  SORT ORDER BY       |       |     1 |    29 |    47 |       |       |                                                              
|   2 |   PARTITION RANGE ALL|       |     1 |    29 |    45 |     1 |    28 |                                                              
|*  3 |    TABLE ACCESS FULL | SALES |     1 |    29 |    45 |     1 |    28 |                                                              
------------------------------------------------------------------------------                                                              
                                                                                                                                            
Predicate Information (identified by operation id):                                                                                         
---------------------------------------------------                                                                                         
                                                                                                                                            
   3 - filter("QUANTITY_SOLD">30)                                                                                                           
                                                                                                                                            
Note                                                                                                                                        
-----                                                                                                                                       
   - cpu costing is off (consider enabling it)                                                                                              


Statistics
----------------------------------------------------------                                                                                  
          1  recursive calls                                                                                                                
          0  db block gets                                                                                                                  
       1718  consistent gets                                                                                                                
          0  physical reads                                                                                                                 
          0  redo size                                                                                                                      
        639  bytes sent via SQL*Net to client                                                                                               
        409  bytes received via SQL*Net from client                                                                                         
          1  SQL*Net roundtrips to/from client                                                                                              
          1  sorts (memory)                                                                                                                 
          0  sorts (disk)                                                                                                                   
          0  rows processed                                                                                                                 

SQL> 
SQL> pause

SQL> 
SQL> set autotrace off
SQL> 
SQL> pause

SQL> 
SQL> --Note the plan hash value.
SQL> 
SQL> pause

SQL> 
SQL> --Let us verify if the plan was inserted into the Plan Baseline.
SQL> 
SQL> pause

SQL> 
SQL> select sql_handle,plan_name,
  2  origin, enabled, accepted,sql_text
  3  from dba_sql_plan_baselines
  4  where sql_text like 'SELECT%sh.sales%';

SQL_HANDLE               PLAN_NAME                     ORIGIN         ENA ACC                                                               
------------------------ ----------------------------- -------------- --- ---                                                               
SQL_TEXT                                                                                                                                    
--------------------------------------------------------------------------------                                                            
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d211df68d0 AUTO-CAPTURE   YES YES                                                               
SELECT *                                                                                                                                    
from sh.sales                                                                                                                               
where quantity_sold > 30                                                                                                                    
order by prod_id                                                                                                                            
                                                                                                                                            
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d254bc8843 AUTO-CAPTURE   YES NO                                                                
SELECT *                                                                                                                                    
from sh.sales                                                                                                                               
where quantity_sold > 30                                                                                                                    
order by prod_id                                                                                                                            
                                                                                                                                            

SQL> 
SQL> pause

SQL> 
SQL> --Note that no plan was added above because the plan found was not new;
SQL> --it was the same plan as found the first time.
SQL> --Note the Plan hash value: 3803407550
SQL> 
SQL> 
SQL> pause

SQL> 
SQL> --Let us change the Optimizer Environment.
SQL> 
SQL> pause

SQL> 
SQL> alter system set optimizer_features_enable='9.2.0';

System altered.

SQL> alter session set optimizer_mode = first_rows;

Session altered.

SQL> 
SQL> pause

SQL> 
SQL> set autotrace on
SQL> 
SQL> pause

SQL> 
SQL> SELECT *
  2  from sh.sales
  3  where quantity_sold > 30
  4  order by prod_id;

no rows selected


Execution Plan
----------------------------------------------------------                                                                                  
Plan hash value: 899219946                                                                                                                  
                                                                                                                                            
-------------------------------------------------------------------------------------------------------                                     
| Id  | Operation                           | Name            | Rows  | Bytes | Cost  | Pstart| Pstop |                                     
-------------------------------------------------------------------------------------------------------                                     
|   0 | SELECT STATEMENT                    |                 |     1 |    29 |  2211 |       |       |                                     
|   1 |  SORT ORDER BY                      |                 |     1 |    29 |  2211 |       |       |                                     
|   2 |   PARTITION RANGE ALL               |                 |     1 |    29 |  2209 |     1 |    28 |                                     
|*  3 |    TABLE ACCESS BY LOCAL INDEX ROWID| SALES           |     1 |    29 |  2209 |     1 |    28 |                                     
|   4 |     BITMAP CONVERSION TO ROWIDS     |                 |       |       |       |       |       |                                     
|   5 |      BITMAP INDEX FULL SCAN         | SALES_PROMO_BIX |       |       |       |     1 |    28 |                                     
-------------------------------------------------------------------------------------------------------                                     
                                                                                                                                            
Predicate Information (identified by operation id):                                                                                         
---------------------------------------------------                                                                                         
                                                                                                                                            
   3 - filter("QUANTITY_SOLD">30)                                                                                                           
                                                                                                                                            
Note                                                                                                                                        
-----                                                                                                                                       
   - cpu costing is off (consider enabling it)                                                                                              


Statistics
----------------------------------------------------------                                                                                  
          1  recursive calls                                                                                                                
          0  db block gets                                                                                                                  
       2030  consistent gets                                                                                                                
          0  physical reads                                                                                                                 
          0  redo size                                                                                                                      
        639  bytes sent via SQL*Net to client                                                                                               
        409  bytes received via SQL*Net from client                                                                                         
          1  SQL*Net roundtrips to/from client                                                                                              
          1  sorts (memory)                                                                                                                 
          0  sorts (disk)                                                                                                                   
          0  rows processed                                                                                                                 

SQL> 
SQL> pause

SQL> 
SQL> set autotrace off
SQL> 
SQL> pause

SQL> 
SQL> --Note the plan hash value
SQL> 
SQL> pause

SQL> 
SQL> --Let us verify if the plan was inserted into Plan Baseline.
SQL> 
SQL> pause

SQL> 
SQL> select sql_handle, plan_name,
  2  origin, enabled, accepted,sql_text
  3  from dba_sql_plan_baselines
  4  where sql_text like 'SELECT%sh.sales%';

SQL_HANDLE               PLAN_NAME                     ORIGIN         ENA ACC                                                               
------------------------ ----------------------------- -------------- --- ---                                                               
SQL_TEXT                                                                                                                                    
--------------------------------------------------------------------------------                                                            
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d211df68d0 AUTO-CAPTURE   YES YES                                                               
SELECT *                                                                                                                                    
from sh.sales                                                                                                                               
where quantity_sold > 30                                                                                                                    
order by prod_id                                                                                                                            
                                                                                                                                            
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d254bc8843 AUTO-CAPTURE   YES NO                                                                
SELECT *                                                                                                                                    
from sh.sales                                                                                                                               
where quantity_sold > 30                                                                                                                    
order by prod_id                                                                                                                            
                                                                                                                                            

SQL> 
SQL> pause

SQL> 
SQL> --Note: no plan was added above because the plan found was not new.
SQL> --Note the plan hash value: 899219946
SQL> 
SQL> pause

SQL> 
SQL> --So the SPM baseline is now populated with two different plans.
SQL> 
SQL> pause

SQL> 
SQL> --Let us now turn automatic capture off.
SQL> 
SQL> pause

SQL> 
SQL> alter session set optimizer_capture_sql_plan_baselines = FALSE;

Session altered.

SQL> 
SQL> pause

SQL> 
SQL> --optimizer_capture_sql_plan_baselines needs to be set
SQL> --to TRUE only while capturing plans.
SQL> --optimizer_capture_sql_plan_baselines=TRUE is not
SQL> --needed for using an existing SPM baseline.
SQL> 
SQL> pause

SQL> 
SQL> --Now let us see how SPM uses the plan.
SQL> --The parameter optimizer_use_sql_plan_baselines
SQL> --must be TRUE for plans from SPM to be used;
SQL> --it defaults to TRUE.
SQL> --If optimizer_use_sql_plan_baselines is set
SQL> --to FALSE, then plans will not be used from an
SQL> --existing SPM baseline, even if it is populated.
SQL> --Note: a plan must be ENABLED=YES and ACCEPTED=YES
SQL> --to be used by SPM.
SQL> --The very first plan loaded into an SPM baseline
SQL> --for a particular SQL is ENABLED=YES and ACCEPTED=YES.
SQL> --Any plan loaded after that is ENABLED=YES and ACCEPTED=NO.
SQL> --These plans need to be ACCEPTED=YES before they
SQL> --can be used; they can be made ACCEPTED=YES
SQL> --through the plan verification (evolve) step.
SQL> 
SQL> pause

SQL> 
SQL> alter system set optimizer_use_sql_plan_baselines =TRUE;

System altered.

SQL> 
SQL> pause

SQL> 
SQL> --Let us change the optimizer environment.
SQL> 
SQL> pause

SQL> 
SQL> alter system set optimizer_features_enable='10.2.0.3';

System altered.

SQL> alter session set optimizer_index_cost_adj=1;

Session altered.

SQL> 
SQL> pause

SQL> 
SQL> set autotrace on
SQL> 
SQL> pause

SQL> 
SQL> --Execute the SQL in this new environment.
SQL> 
SQL> pause

SQL> 
SQL> SELECT *
  2  from sh.sales
  3  where quantity_sold > 30
  4  order by prod_id;

no rows selected


Execution Plan
----------------------------------------------------------                                                                                  
Plan hash value: 899219946                                                                                                                  
                                                                                                                                            
-----------------------------------------------------------------------------------------------------------------------                     
| Id  | Operation                           | Name            | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |                     
-----------------------------------------------------------------------------------------------------------------------                     
|   0 | SELECT STATEMENT                    |                 |     1 |    29 |   294  (13)| 00:00:04 |       |       |                     
|   1 |  SORT ORDER BY                      |                 |     1 |    29 |   294  (13)| 00:00:04 |       |       |                     
|   2 |   PARTITION RANGE ALL               |                 |     1 |    29 |   293  (13)| 00:00:04 |     1 |    28 |                     
|*  3 |    TABLE ACCESS BY LOCAL INDEX ROWID| SALES           |     1 |    29 |   293  (13)| 00:00:04 |     1 |    28 |                     
|   4 |     BITMAP CONVERSION TO ROWIDS     |                 |       |       |            |          |       |       |                     
|   5 |      BITMAP INDEX FULL SCAN         | SALES_PROMO_BIX |       |       |            |          |     1 |    28 |                     
-----------------------------------------------------------------------------------------------------------------------                     
                                                                                                                                            
Predicate Information (identified by operation id):                                                                                         
---------------------------------------------------                                                                                         
                                                                                                                                            
   3 - filter("QUANTITY_SOLD">30)                                                                                                           
                                                                                                                                            
Note                                                                                                                                        
-----                                                                                                                                       
   - SQL plan baseline "SYS_SQL_PLAN_0f3e54d211df68d0" used for this statement                                                              


Statistics
----------------------------------------------------------                                                                                  
          1  recursive calls                                                                                                                
          0  db block gets                                                                                                                  
       2030  consistent gets                                                                                                                
          0  physical reads                                                                                                                 
          0  redo size                                                                                                                      
        639  bytes sent via SQL*Net to client                                                                                               
        409  bytes received via SQL*Net from client                                                                                         
          1  SQL*Net roundtrips to/from client                                                                                              
          1  sorts (memory)                                                                                                                 
          0  sorts (disk)                                                                                                                   
          0  rows processed                                                                                                                 

SQL> 
SQL> pause

SQL> 
SQL> set autotrace off
SQL> 
SQL> pause

SQL> 
SQL> --Even though we have set
SQL> --alter system set optimizer_features_enable='10.2.0.3';
SQL> --alter session set optimizer_index_cost_adj=1;
SQL> --we are still using the plan with plan hash value 899219946.
SQL> --This is because the SQL plan baseline was used.
SQL> --Note the line
SQL> --SQL plan baseline "SYS_SQL_PLAN_0f3e54d211df68d0" used for this statement
SQL> --which indicates the SQL plan baseline was used.
SQL> --SYS_SQL_PLAN_0f3e54d211df68d0 was used because it was
SQL> --ENABLED=YES and ACCEPTED=YES, as it was the very first plan.
SQL> 
SQL> pause

SQL> 
SQL> --Let us disable plan SYS_SQL_PLAN_0f3e54d254bc8843 (set ACCEPTED=NO).
SQL> --We will use dbms_spm.alter_sql_plan_baseline.
SQL> 
SQL> pause

SQL> 
SQL> var pbsts varchar2(30);
SQL> exec :pbsts := dbms_spm.alter_sql_plan_baseline('SYS_SQL_7de69bb90f3e54d2','SYS_SQL_PLAN_0f3e54d254bc8843','accepted','NO');

PL/SQL procedure successfully completed.

SQL> 
SQL> pause

SQL> 
SQL> --Verify the Plan Baseline.
SQL> 
SQL> pause

SQL> 
SQL> select sql_handle, plan_name,
  2  origin, enabled, accepted, fixed, sql_text
  3  from dba_sql_plan_baselines
  4  where sql_text like 'SELECT%sh.sales%';

SQL_HANDLE               PLAN_NAME                     ORIGIN         ENA ACC FIX                                                           
------------------------ ----------------------------- -------------- --- --- ---                                                           
SQL_TEXT                                                                                                                                    
--------------------------------------------------------------------------------                                                            
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d211df68d0 AUTO-CAPTURE   YES YES NO                                                            
SELECT *                                                                                                                                    
from sh.sales                                                                                                                               
where quantity_sold > 30                                                                                                                    
order by prod_id                                                                                                                            
                                                                                                                                            
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d254bc8843 AUTO-CAPTURE   YES NO  NO                                                            
SELECT *                                                                                                                                    
from sh.sales                                                                                                                               
where quantity_sold > 30                                                                                                                    
order by prod_id                                                                                                                            
                                                                                                                                            

SQL> 
SQL> pause

SQL> 
SQL> --Note that the SQL with handle SYS_SQL_7de69bb90f3e54d2 and
SQL> --plan name SYS_SQL_PLAN_0f3e54d254bc8843 is ACCEPTED=NO,
SQL> --so this plan should not be used now.
SQL> 
SQL> pause

SQL> 
SQL> alter system set optimizer_features_enable='10.2.0.3';

System altered.

SQL> alter session set optimizer_index_cost_adj=1;

Session altered.

SQL> 
SQL> pause

SQL> 
SQL> set autotrace on
SQL> 
SQL> pause

SQL> 
SQL> SELECT *
  2  from sh.sales
  3  where quantity_sold > 30
  4  order by prod_id;

no rows selected


Execution Plan
----------------------------------------------------------                                                                                  
Plan hash value: 899219946                                                                                                                  
                                                                                                                                            
-----------------------------------------------------------------------------------------------------------------------                     
| Id  | Operation                           | Name            | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |                     
-----------------------------------------------------------------------------------------------------------------------                     
|   0 | SELECT STATEMENT                    |                 |     1 |    29 |   294  (13)| 00:00:04 |       |       |                     
|   1 |  SORT ORDER BY                      |                 |     1 |    29 |   294  (13)| 00:00:04 |       |       |                     
|   2 |   PARTITION RANGE ALL               |                 |     1 |    29 |   293  (13)| 00:00:04 |     1 |    28 |                     
|*  3 |    TABLE ACCESS BY LOCAL INDEX ROWID| SALES           |     1 |    29 |   293  (13)| 00:00:04 |     1 |    28 |                     
|   4 |     BITMAP CONVERSION TO ROWIDS     |                 |       |       |            |          |       |       |                     
|   5 |      BITMAP INDEX FULL SCAN         | SALES_PROMO_BIX |       |       |            |          |     1 |    28 |                     
-----------------------------------------------------------------------------------------------------------------------                     
                                                                                                                                            
Predicate Information (identified by operation id):                                                                                         
---------------------------------------------------                                                                                         
                                                                                                                                            
   3 - filter("QUANTITY_SOLD">30)                                                                                                           
                                                                                                                                            
Note                                                                                                                                        
-----                                                                                                                                       
   - SQL plan baseline "SYS_SQL_PLAN_0f3e54d211df68d0" used for this statement                                                              


Statistics
----------------------------------------------------------                                                                                  
          1  recursive calls                                                                                                                
          0  db block gets                                                                                                                  
       2030  consistent gets                                                                                                                
          0  physical reads                                                                                                                 
          0  redo size                                                                                                                      
        639  bytes sent via SQL*Net to client                                                                                               
        409  bytes received via SQL*Net from client                                                                                         
          1  SQL*Net roundtrips to/from client                                                                                              
          1  sorts (memory)                                                                                                                 
          0  sorts (disk)                                                                                                                   
          0  rows processed                                                                                                                 

SQL> 
SQL> pause

SQL> 
SQL> set autotrace off
SQL> 
SQL> pause

SQL> 
SQL> --You can see that plan SYS_SQL_PLAN_0f3e54d254bc8843 was not used;
SQL> --the accepted plan SYS_SQL_PLAN_0f3e54d211df68d0 was used instead.
SQL> 
SQL> pause

SQL> 
SQL> --Now let us set ACCEPTED=YES for sql handle SYS_SQL_7de69bb90f3e54d2
SQL> --and plan name SYS_SQL_PLAN_0f3e54d211df68d0.
SQL> 
SQL> pause

SQL> 
SQL> var pbsts varchar2(30);
SQL> exec :pbsts := dbms_spm.alter_sql_plan_baseline('SYS_SQL_7de69bb90f3e54d2','SYS_SQL_PLAN_0f3e54d211df68d0','accepted','YES');

PL/SQL procedure successfully completed.

SQL> 
SQL> pause

SQL> 
SQL> --Verify the Plan Baseline.
SQL> 
SQL> pause

SQL> 
SQL> select sql_handle, plan_name,
  2  origin, enabled, accepted,sql_text
  3  from dba_sql_plan_baselines
  4  where sql_text like 'SELECT%sh.sales%';

SQL_HANDLE               PLAN_NAME                     ORIGIN         ENA ACC                                                               
------------------------ ----------------------------- -------------- --- ---                                                               
SQL_TEXT                                                                                                                                    
--------------------------------------------------------------------------------                                                            
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d211df68d0 AUTO-CAPTURE   YES YES                                                               
SELECT *                                                                                                                                    
from sh.sales                                                                                                                               
where quantity_sold > 30                                                                                                                    
order by prod_id                                                                                                                            
                                                                                                                                            
SYS_SQL_7de69bb90f3e54d2 SYS_SQL_PLAN_0f3e54d254bc8843 AUTO-CAPTURE   YES NO                                                                
SELECT *                                                                                                                                    
from sh.sales                                                                                                                               
where quantity_sold > 30                                                                                                                    
order by prod_id                                                                                                                            
                                                                                                                                            

SQL> 
SQL> pause

SQL> 
SQL> --Note that the SQL with handle SYS_SQL_7de69bb90f3e54d2 and
SQL> --plan name SYS_SQL_PLAN_0f3e54d211df68d0 is ACCEPTED=YES
SQL> --and ENABLED=YES.
SQL> 
SQL> pause

SQL> 
SQL> --Let us change the optimizer environment.
SQL> 
SQL> pause

SQL> 
SQL> alter system set optimizer_features_enable='11.1.0.6';

System altered.

SQL> alter session set optimizer_index_cost_adj=100;

Session altered.

SQL> alter session set optimizer_index_caching=0;

Session altered.

SQL> 
SQL> pause

SQL> 
SQL> set autotrace on
SQL> 
SQL> pause

SQL> 
SQL> SELECT *
  2  from sh.sales
  3  where quantity_sold > 30
  4  order by prod_id;

no rows selected


Execution Plan
----------------------------------------------------------                                                                                  
Plan hash value: 899219946                                                                                                                  
                                                                                                                                            
-----------------------------------------------------------------------------------------------------------------------                     
| Id  | Operation                           | Name            | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |                     
-----------------------------------------------------------------------------------------------------------------------                     
|   0 | SELECT STATEMENT                    |                 |     1 |    29 |   294  (13)| 00:00:04 |       |       |                     
|   1 |  SORT ORDER BY                      |                 |     1 |    29 |   294  (13)| 00:00:04 |       |       |                     
|   2 |   PARTITION RANGE ALL               |                 |     1 |    29 |   293  (13)| 00:00:04 |     1 |    28 |                     
|*  3 |    TABLE ACCESS BY LOCAL INDEX ROWID| SALES           |     1 |    29 |   293  (13)| 00:00:04 |     1 |    28 |                     
|   4 |     BITMAP CONVERSION TO ROWIDS     |                 |       |       |            |          |       |       |                     
|   5 |      BITMAP INDEX FULL SCAN         | SALES_PROMO_BIX |       |       |            |          |     1 |    28 |                     
-----------------------------------------------------------------------------------------------------------------------                     
                                                                                                                                            
Predicate Information (identified by operation id):                                                                                         
---------------------------------------------------                                                                                         
                                                                                                                                            
   3 - filter("QUANTITY_SOLD">30)                                                                                                           
                                                                                                                                            
Note                                                                                                                                        
-----                                                                                                                                       
   - SQL plan baseline "SYS_SQL_PLAN_0f3e54d211df68d0" used for this statement                                                              


Statistics
----------------------------------------------------------                                                                                  
         90  recursive calls                                                                                                                
          0  db block gets                                                                                                                  
       2075  consistent gets                                                                                                                
          0  physical reads                                                                                                                 
          0  redo size                                                                                                                      
        639  bytes sent via SQL*Net to client                                                                                               
        409  bytes received via SQL*Net from client                                                                                         
          1  SQL*Net roundtrips to/from client                                                                                              
         13  sorts (memory)                                                                                                                 
          0  sorts (disk)                                                                                                                   
          0  rows processed                                                                                                                 

SQL> 
SQL> pause

SQL> 
SQL> set autotrace off
SQL> 
SQL> pause

SQL> 
SQL> --Note the line
SQL> --SQL plan baseline "SYS_SQL_PLAN_0f3e54d211df68d0" used for this statement
SQL> --and the plan hash value: 899219946.
SQL> 
SQL> pause

SQL> 
SQL> spool off

}}}
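The plan verification step mentioned in the demo (making ACCEPTED=NO plans ACCEPTED=YES) is not actually executed above; it is done with ''dbms_spm.evolve_sql_plan_baseline''. A minimal sketch, assuming the 11g API and reusing the sql_handle from this demo's output; the function test-executes the non-accepted plans and returns a CLOB report:

{{{
-- Sketch: verify and (if better) accept non-accepted plans for one SQL handle.
-- verify=>'YES' performs the performance comparison; commit=>'YES' flips
-- ACCEPTED to YES for plans that pass.
var report clob
exec :report := dbms_spm.evolve_sql_plan_baseline( -
  sql_handle => 'SYS_SQL_7de69bb90f3e54d2', -
  verify     => 'YES', -
  commit     => 'YES');
print report
}}}

Re-querying dba_sql_plan_baselines afterwards should show ACCEPTED=YES for any plan the evolve report accepted.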
http://docs.oracle.com/cd/E11882_01/server.112/e16638/optplanmgmt.htm#BABEAFGG

http://www.databasejournal.com/features/oracle/article.php/3896411/article.htm
sql baseline 10g http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/10g/r2/sql_baseline.viewlet/sql_baseline_viewlet_swf.html
sql baseline 11g http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/11g/r2/11gr2_baseline/11gr2_baseline_viewlet_swf.html
baselines and better plans http://www.oracle.com/technetwork/issue-archive/2009/09-mar/o29spm-092092.html

Optimizer Plan Change Management: Improved Stability and Performance in 11g http://www.vldb.org/pvldb/1/1454175.pdf
http://optimizermagic.blogspot.com/2009/01/plan-regressions-got-you-down-sql-plan.html
http://optimizermagic.blogspot.com/2009/01/sql-plan-management-part-2-of-4-spm.html
http://optimizermagic.blogspot.com/2009/01/sql-plan-management-part-3-of-4.html
http://optimizermagic.blogspot.com/2009/02/sql-plan-management-part-4-of-4-user.html

https://blogs.oracle.com/optimizer/entry/sql_plan_management_part_1_of_4_creating_sql_plan_baselines
https://blogs.oracle.com/optimizer/entry/sql_plan_management_part_2_of_4_spm_aware_optimizer
https://blogs.oracle.com/optimizer/entry/sql_plan_management_part_3_of_4_evolving_sql_plan_baselines_1
https://blogs.oracle.com/optimizer/entry/sql_plan_management_part_4_of_4_user_interfaces_and_other_features


''OBE videos:''
Controlling Execution Plan Evolution Using SQL Plan Management http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r2/prod/manage/spm/spm.htm
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r1/prod/manage/spm/spm.htm
viewlet http://www.oracle.com/webfolder/technetwork/tutorials/demos/db/11g/r2/changemgmt/05_spm/05_spm_viewlet_swf.html

Plan Stability Features (Including SPM) Start Point [ID 1359841.1]
How to use hints to customize SQL Profile or SQL PLAN Baseline [ID 1400903.1]
How to Use SQL Plan Management (SPM) - Example Usage [ID 456518.1]

Oracle 11g – How to force a sql_id to use a plan_hash_value using SQL Baselines
http://rnm1978.wordpress.com/2011/06/28/oracle-11g-how-to-force-a-sql_id-to-use-a-plan_hash_value-using-sql-baselines/
http://aprakash.wordpress.com/2012/07/05/loading-sql-plan-into-spm-using-awr/
http://jonathanlewis.wordpress.com/2011/01/12/fake-baselines/
http://blog.tanelpoder.com/oracle/performance/sql/oracle-sql-plan-stability/

http://intermediatesql.com/tag/spm/ <-- GOOD STUFF by Maxym Kharchenko
http://technology.amis.nl/wp-content/uploads/2013/04/Koppelaars_SQL_Plan_Mgmt.pdf <-- GOOD STUFF by Toon Koppelaars
http://fordba.wordpress.com/tag/dbms_spm-evolve_sql_plan_baseline/  <-- good stuff
http://www.oracle-base.com/articles/11g/sql-plan-management-11gr1.php  <-- GOOD STUFF

http://www.slideshare.net/mariselsins/using-sql-plan-management-for-performance-testing#, http://www.pythian.com/wp-content/uploads/2013/03/NoCOUG_Journal_201208_Maris_Elsins.pdf


! 2021 
https://www.oracle.com/database/technologies/datawarehouse-bigdata/query-optimization.html <- good stuff white papers
spm 19c https://www.oracle.com/technetwork/database/bi-datawarehousing/twp-sql-plan-mgmt-19c-5324207.pdf
spm 18c https://www.oracle.com/technetwork/database/bi-datawarehousing/twp-sql-plan-management-0218-4403742.pdf

https://blogs.oracle.com/optimizer/what-is-automatic-sql-plan-management-and-why-should-you-care

https://orastory.wordpress.com/2015/05/01/strategies-for-minimising-sql-execution-plan-instability/   <- ideas 


! 2023 

https://blogs.oracle.com/optimizer/post/whats-the-difference-between-spm-auto-capture-and-auto-spm
https://docs.oracle.com/en/database/oracle/oracle-database/19/tgsql/managing-sql-plan-baselines.html#GUID-7024369A-F98D-48E4-921C-C899485C954F
https://blogs.oracle.com/optimizer/post/what-is-the-automatic-sql-tuning-set  ASTS 
https://blogs.oracle.com/optimizer/post/what-is-automatic-sql-plan-management-and-why-should-you-care
ASTS 2686869.1
DBMS_SPM https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_SPM.html#GUID-D6EC284C-053D-417D-B887-94422BCB4E3A
Optimizer Stats in Autonomous Database https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/manage-optimizer-stats.html#GUID-69906542-4DF6-4759-ABC1-1817D77BDB02
FAQ auto stats collection 1233203.1



http://mwidlake.wordpress.com/2009/12/10/command_type-values/
http://mwidlake.wordpress.com/2010/01/08/more-on-command_type-values/
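As the command_type links above discuss, v$sql.command_type decodes through audit_actions.action. A quick join sketch (assumes SELECT access on v$sql; not every command_type value necessarily has a matching audit_actions row):

{{{
-- Sketch: show the command name for each cursor currently in the shared pool.
select a.name command, count(*) cursors
from v$sql s, audit_actions a
where a.action = s.command_type
group by a.name
order by 2 desc;
}}}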

{{{
select * from audit_actions order by action

    ACTION NAME
---------- ----------------------------
         0 UNKNOWN
         1 CREATE TABLE
         2 INSERT
         3 SELECT
         4 CREATE CLUSTER
         5 ALTER CLUSTER
         6 UPDATE
         7 DELETE
         8 DROP CLUSTER
         9 CREATE INDEX
        10 DROP INDEX
        11 ALTER INDEX
        12 DROP TABLE
        13 CREATE SEQUENCE
        14 ALTER SEQUENCE
        15 ALTER TABLE
        16 DROP SEQUENCE
        17 GRANT OBJECT
        18 REVOKE OBJECT
        19 CREATE SYNONYM
        20 DROP SYNONYM
        21 CREATE VIEW
        22 DROP VIEW
        23 VALIDATE INDEX
        24 CREATE PROCEDURE
        25 ALTER PROCEDURE
        26 LOCK
        27 NO-OP
        28 RENAME
        29 COMMENT
        30 AUDIT OBJECT
        31 NOAUDIT OBJECT
        32 CREATE DATABASE LINK
        33 DROP DATABASE LINK
        34 CREATE DATABASE
        35 ALTER DATABASE
        36 CREATE ROLLBACK SEG
        37 ALTER ROLLBACK SEG
        38 DROP ROLLBACK SEG
        39 CREATE TABLESPACE
        40 ALTER TABLESPACE
        41 DROP TABLESPACE
        42 ALTER SESSION
        43 ALTER USER
        44 COMMIT
        45 ROLLBACK
        46 SAVEPOINT
        47 PL/SQL EXECUTE
        48 SET TRANSACTION
        49 ALTER SYSTEM
        50 EXPLAIN
        51 CREATE USER
        52 CREATE ROLE
        53 DROP USER
        54 DROP ROLE
        55 SET ROLE
        56 CREATE SCHEMA
        57 CREATE CONTROL FILE
        59 CREATE TRIGGER
        60 ALTER TRIGGER
        61 DROP TRIGGER
        62 ANALYZE TABLE
        63 ANALYZE INDEX
        64 ANALYZE CLUSTER
        65 CREATE PROFILE
        66 DROP PROFILE
        67 ALTER PROFILE
        68 DROP PROCEDURE
        70 ALTER RESOURCE COST
        71 CREATE MATERIALIZED VIEW LOG
        72 ALTER MATERIALIZED VIEW LOG
        73 DROP MATERIALIZED VIEW LOG
        74 CREATE MATERIALIZED VIEW
        75 ALTER MATERIALIZED VIEW
        76 DROP MATERIALIZED VIEW
        77 CREATE TYPE
        78 DROP TYPE
        79 ALTER ROLE
        80 ALTER TYPE
        81 CREATE TYPE BODY
        82 ALTER TYPE BODY
        83 DROP TYPE BODY
        84 DROP LIBRARY
        85 TRUNCATE TABLE
        86 TRUNCATE CLUSTER
        91 CREATE FUNCTION
        92 ALTER FUNCTION
        93 DROP FUNCTION
        94 CREATE PACKAGE
        95 ALTER PACKAGE
        96 DROP PACKAGE
        97 CREATE PACKAGE BODY
        98 ALTER PACKAGE BODY
        99 DROP PACKAGE BODY
       100 LOGON
       101 LOGOFF
       102 LOGOFF BY CLEANUP
       103 SESSION REC
       104 SYSTEM AUDIT
       105 SYSTEM NOAUDIT
       106 AUDIT DEFAULT
       107 NOAUDIT DEFAULT
       108 SYSTEM GRANT
       109 SYSTEM REVOKE
       110 CREATE PUBLIC SYNONYM
       111 DROP PUBLIC SYNONYM
       112 CREATE PUBLIC DATABASE LINK
       113 DROP PUBLIC DATABASE LINK
       114 GRANT ROLE
       115 REVOKE ROLE
       116 EXECUTE PROCEDURE
       117 USER COMMENT
       118 ENABLE TRIGGER
       119 DISABLE TRIGGER
       120 ENABLE ALL TRIGGERS
       121 DISABLE ALL TRIGGERS
       122 NETWORK ERROR
       123 EXECUTE TYPE
       128 FLASHBACK
       129 CREATE SESSION
       157 CREATE DIRECTORY
       158 DROP DIRECTORY
       159 CREATE LIBRARY
       160 CREATE JAVA
       161 ALTER JAVA
       162 DROP JAVA
       163 CREATE OPERATOR
       164 CREATE INDEXTYPE
       165 DROP INDEXTYPE
       167 DROP OPERATOR
       168 ASSOCIATE STATISTICS
       169 DISASSOCIATE STATISTICS
       170 CALL METHOD
       171 CREATE SUMMARY
       172 ALTER SUMMARY
       173 DROP SUMMARY
       174 CREATE DIMENSION
       175 ALTER DIMENSION
       176 DROP DIMENSION
       177 CREATE CONTEXT
       178 DROP CONTEXT
       179 ALTER OUTLINE
       180 CREATE OUTLINE
       181 DROP OUTLINE
       182 UPDATE INDEXES
       183 ALTER OPERATOR
       197 PURGE USER_RECYCLEBIN
       198 PURGE DBA_RECYCLEBIN
       199 PURGE TABLESPACE
       200 PURGE TABLE
       201 PURGE INDEX
       202 UNDROP OBJECT
       204 FLASHBACK DATABASE
       205 FLASHBACK TABLE
       206 CREATE RESTORE POINT
       207 DROP RESTORE POINT
       208 PROXY AUTHENTICATION ONLY
       209 DECLARE REWRITE EQUIVALENCE
       210 ALTER REWRITE EQUIVALENCE
       211 DROP REWRITE EQUIVALENCE
}}}
{{{

********************************************* INTRODUCTION *********************************************

--Note1
"A Relational Model of Data for Large Shared Data Banks". In this paper, Dr. Codd proposed
the relational model for database systems.

For more information, see E. F. Codd, The Relational Model for Database Management Version 2
(Reading, Mass.: Addison-Wesley, 1990).


--Note2
There are four types of databases:
1) Hierarchical
2) Network
3) Relational		<-- Oracle 7 is RDBMS
4) Object Relational	<-- Oracle 8 and later


--Note3
1) System Development Life Cycle (5 steps)
	- Strategy & Analysis (where ERD is made)
		- Design
			- Build & Document
				- Transition
					- Production

2) Data Model (4 steps)
	- Model of system in client's mind
		- Entity model of client's model
			- Table model of entity model
				- Tables on disk
				
3) ER Models
	1- Entity (one table)
	2- Attribute (columns in a table) 
		*	--> mandatory 
		o 	--> optional
	3- Relationship (A named association between entities showing optionality and degree) 
		- - -		--> optional element indicating "may be" (optionality)
		-----		--> mandatory element indicating "must be" (optionality)
		crow's foot	--> degree element indicating "one or more" (degree)
		single line	--> degree element indicating "one and only one" (degree)
	

		Each direction of the relationship contains:
		- A label, for example, taught by or assigned to
		- An optionality, either must be or may be
		- A degree, either one and only one or one or more
		
		Note: The term cardinality is a synonym for the term degree.
		
		Each source entity {may be | must be} relationship name {one and only one | one or more} destination
		entity.
		
		Note: The convention is to read clockwise.

	- Unique Identifiers
		A unique identifier (UID) is any combination of attributes or relationships, or both, that serves to
		distinguish occurrences of an entity. Each entity occurrence must be uniquely identifiable.
		- Tag each attribute that is part of the UID with a number symbol: #
		- Tag secondary UIDs with a number sign in parentheses: (#)


********************************************* CHAPTER 1 *********************************************
						WRITING BASIC SELECT STATEMENTS

3 things you could do:
	Projection
	Selection
	Joining

# Note: Throughout this course, the words keyword, clause, and statement are used as follows:
	- A keyword refers to an individual SQL element.
		For example, SELECT and FROM are keywords.
	- A clause is a part of a SQL statement.
		For example, SELECT employee_id, last_name, ... is a clause.
	- A statement is a combination of two or more clauses.
		For example, SELECT * FROM employees is a SQL statement.

# Operator Precedence: MDAS (multiplication and division first, then addition and subtraction)

# If any column value in an arithmetic expression is null, the result is null.
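
	# sketch of the rule above against the HR sample schema: commission_pct is null
	# for most employees, so the whole arithmetic expression becomes null for them
	SELECT last_name, 12*salary + (salary*commission_pct)
	FROM   employees;		<-- the computed column is NULL wherever commission_pct is NULL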

# DESCRIBE


********************************************* CHAPTER 2 *********************************************
						RESTRICTING AND SORTING DATA


The WHERE clause can compare values in columns, literal values, arithmetic expressions, or functions. It consists of three elements:
- Column name
- Comparison condition
- Column name, constant, or list of values


# Character strings are case sensitive, use UPPER or LOWER for case insensitive search
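
	# sketch of a case-insensitive search using the rule above:
	SELECT last_name
	FROM   employees
	WHERE  UPPER(last_name) = UPPER('higgins');	<-- matches 'Higgins' regardless of the case typed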


# The default date display is DD-MON-RR


# An alias cannot be used in the WHERE clause.
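
	# sketch: the WHERE clause is evaluated before the SELECT-list aliases exist,
	# so this raises ORA-00904 (invalid identifier)
	SELECT salary*12 AS annual_sal
	FROM   employees
	WHERE  annual_sal > 100000;	<-- invalid; repeat the expression instead: WHERE salary*12 > 100000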


Other Comparison Operations:
- between...and...
- in (set)
- like
- is null


# The values specified with the BETWEEN operator are inclusive. BETWEEN ... AND ... is actually
translated by the Oracle server to a pair of AND conditions: (a >= lower limit) AND
(a <= higher limit). So using BETWEEN ... AND ... has no performance benefit;
it is used for logical simplicity.

# IN ( ... ) is actually translated by the Oracle server to a set of OR conditions: a =
value1 OR a = value2 OR a = value3. So using IN ( ... ) has no performance
benefit; it is used for logical simplicity.
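
# quick sketches of both operators (HR schema assumed):
	SELECT last_name, salary
	FROM   employees
	WHERE  salary BETWEEN 2500 AND 3500;	<-- inclusive: 2500 and 3500 both qualify

	SELECT last_name, manager_id
	FROM   employees
	WHERE  manager_id IN (100, 101, 201);	<-- same as three OR-ed equality conditions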

# SELECT employee_id, last_name, job_id
FROM employees
WHERE job_id LIKE '%SA\_%' ESCAPE '\';  <--- The ESCAPE option identifies the backslash (\) as the escape character. In the pattern, the escape
						character precedes the underscore (_). This causes the Oracle Server to interpret the underscore
						literally.


# NULL: you cannot test with = because a null cannot be equal or unequal to any value						
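
	# sketch: use IS NULL / IS NOT NULL instead
	SELECT last_name, commission_pct
	FROM   employees
	WHERE  commission_pct IS NULL;		<-- WHERE commission_pct = NULL would return no rows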

						
Logical Conditions:
- and 
- or
- not


# Order Evaluated Operator:
	1 Arithmetic operators
	2 Concatenation operator
	3 Comparison conditions
	4 IS [NOT] NULL, LIKE, [NOT] IN
	5 [NOT] BETWEEN
	6 NOT logical condition
	7 AND logical condition
	8 OR logical condition


SELECT last_name, job_id, salary
FROM   hr.employees
WHERE  job_id = 'SA_REP'
OR     job_id = 'AD_PRES'
AND    salary > 15000;

is the same as

SELECT last_name, job_id, salary
FROM   hr.employees
WHERE  job_id = 'SA_REP'
OR     (job_id = 'AD_PRES'
AND    salary > 15000);


# Override rules of precedence by using parentheses.
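
# sketch: parentheses force the OR to be evaluated before the AND, changing the result above
SELECT last_name, job_id, salary
FROM   hr.employees
WHERE  (job_id = 'SA_REP'
OR      job_id = 'AD_PRES')
AND    salary > 15000;		<-- now the salary filter applies to both job_ids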

# ORDER BY: You can specify an expression, or an alias, or column position as the sort condition. 

	# Let the students know that the ORDER BY clause is executed last in query execution. It is placed last unless the "FOR UPDATE" clause is used.

	# Null values are displayed last for ascending sequences and first for descending sequences.
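
	# the default placement can be overridden with NULLS FIRST / NULLS LAST, e.g.:
	SELECT last_name, commission_pct
	FROM   hr.employees
	ORDER BY commission_pct DESC NULLS LAST;	<-- descending, but nulls pushed to the end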


SELECT	  last_name, department_id, salary 
   FROM     hr.employees
   ORDER BY department_id, salary desc;		<-- order by department_id ASC and then by salary DESC

is different from this

SELECT	  last_name, department_id, salary 
   FROM     hr.employees
   ORDER BY department_id desc, salary desc;	<-- order by department_id DESC and then by salary DESC


********************************************* CHAPTER 3 *********************************************
						SINGLE ROW FUNCTIONS

PART ONE:

There are two distinct types of functions:
- Single-row functions
- Multiple-row functions


Single-row functions:
	- Character functions: 	Accept character input and can return both character and number values
	- Number functions: 	Accept numeric input and return numeric values
	- Date functions: 	Operate on values of the DATE data type (All date functions return a value of DATE data type except the MONTHS_BETWEEN function, which returns a number.)
	- Conversion functions: Convert a value from one data type to another
	- General functions:
		 NVL
		 NVL2
		 NULLIF
		 COALESCE
		 CASE
		 DECODE


# CHARACTER FUNCTIONS: (can be divided into the following:)
	
	1) Case-Manipulation Functions:
		lower			LOWER('SQL Course')	--> sql course
		upper			UPPER('SQL Course')	--> SQL COURSE
		initcap			INITCAP('SQL Course')	--> Sql Course
				
	2) Character-Manipulation Functions:
		concat			<karl arao> --> concat(first_name, last_name) --> karlarao
		substr			<Taylor>    --> substr(last_name, 1,3)	      --> Tay
							substr(last_name, -6,3)	      --> Tay
		length			<ABEL> 	    --> length(last_name)	      --> 4
		instr			<Taylor>    --> instr(last_name, 'a') 	      --> 2		<-- shows position of the first "a"
		lpad  			<24000>     --> lpad(salary,10,'*')	      --> *****24000
		rpad			<24000>     --> rpad(salary,10,'*')	      --> 24000*****	--> select last_name, salary/1000, rpad(' ',salary/1000+1, '*') from employees;
		trim				    --> trim('H' FROM 'HelloWorld')   --> elloWorld
		replace
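
	# a combined sketch of the functions above (HR schema assumed):
	SELECT employee_id, CONCAT(first_name, last_name) NAME,
	       LENGTH(last_name), INSTR(last_name, 'a') "Contains 'a'?"
	FROM   employees
	WHERE  SUBSTR(job_id, 4) = 'REP';	<-- SUBSTR with no length runs from position 4 to the end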
		
		
# NUMBER FUNCTIONS:

		round			round(45.929, 2)	--> 45.93
					round(45.929, -1) 	--> 50
					round(45.929, 0)	--> 46
		trunc			trunc(45.929, 2)	--> 45.92
					trunc(45.929, -1)	--> 40
					trunc(45.929, 0)	--> 45
		mod			mod(salary, 1000)	--> will output the remainder of salary divided by 1000, used to determine if value is ODD/EVEN
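
	# sketch of the ODD/EVEN check mentioned above (DECODE is covered later in these notes):
	SELECT employee_id,
	       DECODE(MOD(employee_id, 2), 0, 'EVEN', 'ODD') PARITY	<-- PARITY is just an illustrative alias
	FROM   employees;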
		

# DATE FUNCTIONS:

	DATE is stored internally as follows:
	-------------------------------------
	CENTURY		YEAR		MONTH		DAY		HOUR		MINUTE		SECOND
	19		94		06		07		5		10		43

		
	ASSUME VALUE IS 07-FEB-99:
	
		months_between		months_between(sysdate, hire_date)	--> 31.6982407
		add_months		add_months(hire_date, 6)		--> 07-Aug-99
		next_day		next_day(hire_date, 'Friday')		--> 12-Feb-99
		last_day		last_day(hire_date)			--> 28-Feb-99

	ASSUME SYSDATE IS 25-JUL-95:	1-15 & 16-30 (day) / 0-6 & 7-12 (month)
	
		round 			round(sysdate, 'MONTH')		--> 01-Aug-95
					round(sysdate, 'YEAR')		--> 01-Jan-96
		trunc			trunc(sysdate, 'MONTH')		--> 01-Jul-95
					trunc(sysdate, 'YEAR')		--> 01-Jan-95
					
PART TWO:

# CONVERSION FUNCTIONS:

		to_char			to_char(hire_date, 'MM/YY')	--> 06/95 (FOR ALTERING RETRIEVAL FORMAT - FLEXIBLE)
					to_char(salary, '$99,999.00')	--> $60,000.68	(decimal place rounded to number of places provided if converted TO_CHAR)

		to_date			to_date('May 24, 1999','fxMonth DD, YYYY')	--> TO_DATE converts the string to a date; then format it with TO_CHAR
					to_date('01-Jan-90', 'DD-MON-RR')
					
		to_number		to_number('123,456.00','999,999.00')	--> TO_NUMBER to make it a number (just converts it to a number), then format it by TO_CHAR
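
	# round-trip sketch of the conversion functions above:
	SELECT TO_CHAR(TO_DATE('24-May-99','DD-Mon-RR'), 'fmMonth DD, YYYY') FROM dual;	<-- May 24, 1999
	SELECT TO_NUMBER('123,456.00','999,999.00') + 1 FROM dual;			<-- 123457, now usable in arithmetic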



	SAMPLE FORMAT ELEMENTS OF VALID DATE FORMATS:
		SCC or CC Century; 			server prefixes B.C. date with -
		Years in dates YYYY or SYYYY Year; 	server prefixes B.C. date with -
		YYY or YY or Y 				Last three, two, or one digits of year
		Y,YYY 					Year with comma in this position
		IYYY, IYY, IY, I 			Four, three, two, or one digit year based on the ISO standard
		SYEAR or YEAR 				Year spelled out; server prefixes B.C. date with -
		BC or AD 				B.C./A.D. indicator
		B.C. or A.D. 				B.C./A.D. indicator with periods
		Q 					Quarter of year
		MM 					Month: two-digit value
		MONTH 					Name of month padded with blanks to length of nine characters
		MON 					Name of month, three-letter abbreviation
		RM 					Roman numeral month
		WW or W 				Week of year or month
		DDD or DD or D 				Day of year, month, or week
		DAY 					Name of day padded with blanks to a length of nine characters
		DY 					Name of day; three-letter abbreviation
		J 					Julian day; the number of days since 31 December 4713 B.C.


	ELEMENTS OF DATE FORMAT MODEL:	
		Time elements format the time portion of the date.			--> HH24:MI:SS AM --> 15:45:32 PM
		Add character strings by enclosing them in double quotation marks.	--> DD "of" MONTH --> 12 of OCTOBER
		Number suffixes spell out numbers.					--> ddspth 	  --> fourteenth


	NUMBER FORMAT ELEMENTS (CONVERTING A NUMBER TO THE CHARACTER DATA TYPE):
		Element 	Description 							Example 	Result
		9 		Numeric position (number of 9s determines display width)	999999 		1234
		0 		Display leading zeros 						099999 		001234
		$ 		Floating dollar sign 						$999999 	$1234
		L 		Floating local currency symbol 					L999999 	FF1234
		. 		Decimal point in position specified 				999999.99 	1234.00
		, 		Comma in position specified 					999,999 	1,234
		MI 		Minus signs to right (negative values) 				999999MI 	1234-
		PR 		Parenthesize negative numbers 					999999PR 	<1234>
		EEEE 		Scientific notation (format must specify four Es) 		99.999EEEE 	1.234E+03
		V 		Multiply by 10 n times (n = number of 9s after V) 		9999V99 	123400
		B 		Display zero values as blank, not 0 				B9999.99 	1234.00



	# use "fm" to avoid trailing zeros
		SELECT last_name,TO_CHAR(hire_date,'fmDdspth "of" Month YYYY fmHH:MI:SS AM')HIREDATE
		FROM employees;
		
	# use "fx": because the fx modifier is used, an exact match is required and the spaces after the word 'May' are not recognized.
		SELECT last_name, hire_date
		FROM   hr.employees
	   	WHERE  hire_date = TO_DATE('May 24, 1999', 'fxMonth DD, YYYY');

	# Emphasize the format D, as the students need it for practice 10. The D format returns a value from 1 to
	7 representing the day of the week. Depending on the NLS date setting options, the value 1 may
	represent Sunday or Monday. In the United States, the value 1 represents Sunday.


	# There are several new data types available in the Oracle9i release pertaining to time. These include:
	TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE,
	INTERVAL YEAR TO MONTH, and INTERVAL DAY TO SECOND. These are discussed later in the course.


	# RR format
	To find employees who were hired prior to 1990, the RR format can be used. Since the current year is
	greater than 1999, the RR format interprets two-digit years 50 through 99 as 1950 through 1999.
	The following command, on the other hand, results in no rows being selected because the YY format
	interprets the year portion of the date in the current century (90 becomes 2090).

	SELECT last_name, TO_CHAR(hire_date, 'DD-Mon-yyyy')
   	FROM   hr.employees
   	WHERE  TO_DATE(hire_date, 'DD-Mon-YY') < '01-Jan-1990';		<-- no values will be retrieved because the year will be interpreted as 2090


# GENERAL FUNCTIONS:

		nvl		last_name, nvl(to_char(commission_pct), 'no commission')	<-- looks for NULL, then label it...but first have to TO_CHAR
				last_name, nvl(commission_pct, 0)				<-- looks for NULL, makes the value "0"
		
		nvl2		nvl2(commission_pct, salary*commission_pct, 0)			<-- execute 2nd if not null, 3rd if null
		
		nullif		nullif(length(first_name), length(last_name))			<-- if both equal then "NULL", if not then return 1st expression
		
		coalesce	coalesce(commission_pct, salary, 10)				<-- return 1 if not null, return 2 if 1 is null and 2 is not null, return 3 if all is null, 4..5..6..n
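
		# a full-statement sketch of NVL and COALESCE from the list above (HR schema assumed):
		SELECT last_name,
		       NVL(commission_pct, 0) COMM,
		       COALESCE(commission_pct, salary, 10) FIRST_NON_NULL
		FROM   employees;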
	
	
	CONDITIONAL EXPRESSIONS:

		# The CASE expression is new in the Oracle9i Server release
		
		case		SELECT last_name, job_id, salary,
					CASE job_id WHEN 'IT_PROG' THEN 1.10*salary
					WHEN 'ST_CLERK' THEN 1.15*salary
					WHEN 'SA_REP' THEN 1.20*salary
					ELSE salary END "REVISED_SALARY" <-- this will be the column name, if there's no ELSE then it will return NULL
				FROM employees;
		
		decode		SELECT last_name, job_id, salary,
					DECODE(job_id, 'IT_PROG', 1.10*salary,
						'ST_CLERK', 1.15*salary,
						'SA_REP', 1.20*salary,
						salary)		 <-- if there's no default value then it will return NULL
					REVISED_SALARY		 <-- this will be the column name
				FROM employees;
				
				SELECT last_name, salary,
					DECODE (TRUNC(salary/2000, 0),
					0, 0.00,
					1, 0.09,
					2, 0.20,
					3, 0.30,
					4, 0.40,
					5, 0.42,
					6, 0.44,
					0.45) TAX_RATE
				FROM employees
				WHERE department_id = 80;
				
				Monthly Salary Range		Rate
				$0.00      - $1,999.99		00%
				$2,000.00  - $3,999.99		09%
				$4,000.00  - $5,999.99		20%
				$6,000.00  - $7,999.99		30%
				$8,000.00  - $9,999.99		40%
				$10,000.00 - $11,999.99		42%
				$12,000.00 - $13,999.99		44%
				$14,000.00 or greater		45%


********************************************* CHAPTER 4 *********************************************
						DISPLAYING DATA FROM MULTIPLE TABLES

----------------					
--ORACLE SYNTAX
----------------					

# CARTESIAN PRODUCT - if the join condition is omitted
	
	select * from
	employees a, departments b	(20 x 8 rows = 160 rows)
	
	
 Types of Joins
	Oracle Proprietary 			SQL: 1999
	Joins (8i and prior): 			Compliant Joins:

	- Equijoin 				- Cross joins
	- Non-equijoin 				- Natural joins
	- Outer join 				- Using clause
	- Self join 				- Full or two sided outer joins
						- Arbitrary join conditions for outer joins
						
 Joins comparing SQL:1999 to Oracle Syntax
	Oracle Proprietary: 			SQL: 1999

	- Equijoin 				- Natural / Inner Join
	- Outer Join				- Left Outer Join
	- Self join 				- Join On
	- Non Equijoin 				- Join Using
	- Cartesian Product			- Cross Join


# EQUIJOIN (a.k.a simple join / inner join)

	SELECT last_name, employees.department_id, department_name
	FROM employees, departments
	WHERE employees.department_id = departments.department_id
	AND last_name = 'Matos';
	
	SELECT e.employee_id, e.last_name, e.department_id, d.department_id, d.location_id	<-- WITH ALIAS
	FROM employees e , departments d 
	WHERE e.department_id = d.department_id;
	
	SELECT e.last_name, d.department_name, l.city						<-- JOINING MORE THAN TWO TABLES (n-1)
	FROM employees e, departments d, locations l
	WHERE e.department_id = d.department_id
	AND d.location_id = l.location_id;

	
	--> to know how many tables to join, "n-1" (if you're joining 4 tables then you need 3 joins)
	
	
# NON-EQUIJOIN

	SELECT e.last_name, e.salary, j.grade_level 
	FROM employees e, job_grades j 
	WHERE e.salary 
	BETWEEN j.lowest_sal AND j.highest_sal;
	

# OUTER JOIN (Place the outer join symbol following the name of the column in the table without the matching rows - where you want it NULL)

	SELECT e.employee_id, e.last_name, e.department_id, d.department_id, d.location_id	<-- GRANT DOES NOT HAVE A DEPARTMENT
	FROM employees e , departments d 
	WHERE e.department_id = d.department_id (+);
	
	SELECT e.last_name, d.department_name, l.city						<-- CONTRACTING DEPARTMENT DOES NOT HAVE ANY EMPLOYEES
	FROM employees e, departments d, locations l
	WHERE e.department_id (+) = d.department_id 
	AND d.location_id (+) = l.location_id;
	
	
	--> You use an outer join to also see rows that do not meet the join condition.
	
	--> The outer join operator can appear on only one side of the expression: the side that has the missing information. It returns those rows from one table that have no direct match in the other table.
	
	--> A condition involving an outer join cannot use the IN operator or be linked to another condition by the OR operator.

	--> The UNION operator works around the issue of being able to use an outer join operator on one side of the expression. The ANSI full outer join also allows you to have an outer join on both sides of the expression.
	
	
# SELF JOIN

	SELECT worker.last_name || ' works for ' || manager.last_name 
	FROM employees worker, employees manager 
	WHERE worker.manager_id = manager.employee_id;
	

-------------------					
--SQL: 1999 SYNTAX
-------------------

# CROSS JOIN

	select * from employees		<-- result is Cartesian Product
	cross join departments;


# NATURAL JOIN

	select * from employees		<-- selects rows from the two tables that have equal values in all "matched columns" (the same name & data type)
	natural join departments;
	
	
# USING	(similar to equijoin, but shorter code than "ON")

	SELECT e.employee_id, e.last_name, d.location_id
	FROM employees e 
	JOIN departments d
	USING (department_id)
	WHERE e.department_id = 90;	<-- CAN'T DO THIS: do not use a "table name, alias, or qualifier" on the referenced columns (ORA-25154: column part of USING clause cannot have qualifier)
	
	select * 			<-- three way join
	from employees a
	join departments b
	using (department_id)
	join locations c
	using (location_id);


# ON (similar to equijoin)

	SELECT employee_id, city, department_name	<-- three way join
	FROM employees e
	JOIN departments d
	ON (d.department_id = e.department_id)
	JOIN locations l
	ON (d.location_id = l.location_id);
	
	
# LEFT OUTER JOIN

	SELECT e.last_name, e.department_id, d.department_name
	FROM employees e
	LEFT OUTER JOIN departments d
	ON (e.department_id = d.department_id);

This query retrieves all rows in the EMPLOYEES table (the left table), even if there is no match in the DEPARTMENTS table.
This query was completed in earlier releases as follows:
 
   SELECT e.last_name, e.department_id, d.department_name
   FROM   hr.employees e, hr.departments d
   WHERE  e.department_id = d.department_id (+);   -- plus sign will have null, return all emp 
	
# RIGHT OUTER JOIN

	SELECT e.last_name, e.department_id, d.department_name
	FROM employees e
	RIGHT OUTER JOIN departments d
	ON (e.department_id = d.department_id);

This query retrieves all rows in the DEPARTMENTS table (the right table), even if there is no match in the EMPLOYEES table.
This query was completed in earlier releases as follows:
 
   SELECT e.last_name, e.department_id, d.department_name
   FROM   hr.employees e, hr.departments d
   WHERE  e.department_id(+) = d.department_id ;   -- plus sign will have null, return all dept

	
	
# FULL OUTER JOIN

	SELECT e.last_name, e.department_id, d.department_name		<-- SQL :1999 Syntax
	FROM employees e
	FULL OUTER JOIN departments d
	ON (e.department_id = d.department_id);
	
	SELECT e.last_name, e.department_id, d.department_name		<-- Oracle Syntax
	FROM employees e, departments d
	WHERE e.department_id (+) = d.department_id
	UNION
	SELECT e.last_name, e.department_id, d.department_name
	FROM employees e, departments d
	WHERE e.department_id = d.department_id (+);


********************************************* CHAPTER 5 *********************************************
						AGGREGATING DATA USING GROUP FUNCTIONS
						

Types of Group Functions:
	- AVG
	- COUNT
	- MAX
	- MIN
	- STDDEV
	- SUM
	- VARIANCE

	
# All group functions ignore null values. To substitute a value for null values, use the NVL, NVL2,
or COALESCE functions.


# AVG, SUM, VARIANCE, and STDDEV functions can be used only with numeric data types.


# The NVL function forces group functions to include
null values

	select avg(nvl(a.commission_pct, 0)) from employees a;
	
	
# You cannot use a column alias in the GROUP BY clause.


# The GROUP BY column does not have to be in the
SELECT list.
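
	# sketch: grouping by a column that is not displayed
	SELECT avg(salary)
	FROM   employees
	GROUP BY department_id;		<-- one average per department, department_id itself not shown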



******************************************************************************************
Types of subqueries (today I couldn't think of the term "inline view")
    * subquery ( subselect used in where clause)
    * correlated subquery (subselect uses fields from outer query)
    * scalar subquery (subselect in select list)
    * inline views (subselect in from clause)
******************************************************************************************
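
-- one sketch of each subquery type listed above (HR schema assumed):

select last_name from employees
where salary > (select avg(salary) from employees);		<-- subquery in WHERE

select last_name from employees e
where salary > (select avg(salary) from employees
                where department_id = e.department_id);		<-- correlated subquery

select last_name,
       (select department_name from departments d
        where d.department_id = e.department_id) dept		<-- scalar subquery in SELECT list
from employees e;

select * from (select department_id, avg(salary) avg_sal
               from employees group by department_id);		<-- inline view in FROM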
}}}
<<showtoc>>


! sql developer , sqlcl 
https://www.oracle.com/database/technologies/appdev/sql-developer.html

! data grip 
https://www.jetbrains.com/datagrip/?fromMenu

! dbeaver community/pro 
https://dbeaver.io/download/
https://dbeaver.com/
https://www.slant.co/versus/198/210/~dbeaver_vs_datagrip

! robomongo 
https://robomongo.org/













.

{{{
SQL Operations (ROW, SET) (Doc ID 100848.1)


Oracle Database - Enterprise Edition - Version 9.2.0.8 and later
All Platforms
PURPOSE
This document describes the SQL operations (ROW, SET) and explains some of the ROW operations with examples.

SCOPE
 This article will be useful for Oracle DBA(s) and Developers.

DETAILS
SQL Operations:


To interpret the Explain Plan and correctly evaluate the SQL Tuning options, it is necessary to understand the differences between the available database operations. The operations can be classified as:

· Row operations
· Set operations

ROW Operations


The ROW operations are executed one row at a time. They are executed at the FETCH stage if there is no set operation involved. The user can see the first result before the last row is fetched. Example: FULL TABLE SCAN

AND-EQUAL
CONCATENATION
INDEX UNIQUE SCAN
INDEX RANGE SCAN
HASH JOIN
NESTED LOOPS
TABLE ACCESS BY ROWID
TABLE ACCESS CLUSTER
TABLE ACCESS FULL
TABLE ACCESS HASH

 


Some of the ROW operations are described in detail:


AND-EQUAL:

This merges sorted lists of values returned by indexes. It returns the list of values that are common to both lists (ROWIDs found in both indexes). It is used for merges of nonunique indexes and range scans of unique indexes.

Example:

NOTE: In the images and/or the document content below, the user information and data used represents fictitious data from the Oracle sample schema(s) or Public Documentation delivered with an Oracle database product. Any similarity to actual persons, living or dead, is purely coincidental and not intended in any manner.
Select empno,state,zipcode
From emp
Where state='GA'
And zipcode=65434

Explain Plan

TABLE ACCESS BY ROWID EMP
AND-EQUAL
INDEX RANGE SCAN EMP$STATE
INDEX RANGE SCAN EMP$ZIPCODE

 

CONCATENATION:

The Concatenation does a UNION ALL of result sets.

Example:

Select empno, state, zipcode
From emp
Where (state='KS' and zipcode=45678)
Or (state='MD' and zipcode=87746);

Explain Plan

CONCATENATION
TABLE ACCESS BY ROWID EMP
AND-EQUAL
INDEX RANGE SCAN EMP$STATE
INDEX RANGE SCAN EMP$ZIPCODE
TABLE ACCESS BY ROWID EMP
AND-EQUAL
INDEX RANGE SCAN EMP$STATE
INDEX RANGE SCAN EMP$ZIPCODE

 

HASH JOIN:

This operation joins tables by building an in-memory hash table on one of the tables and then using a hash function to locate the matching rows in the second table.

Example:

Select emp.empno
From emp, dept
Where emp.deptno=dept.deptno
And emp.state='NY';

Explain Plan

HASH JOIN
TABLE ACCESS FULL EMP
TABLE ACCESS FULL DEPT

 

INDEX RANGE SCAN:

The index range scan selects a range of values from an index. The index can be either unique or non-unique. The range scans are used with one of the conditions:

· A range operator is used (such as < or >)

· The BETWEEN clause is used

· A search string with a wildcard is used (such as B%)

· Only part of a concatenated index is used (such as the leading column of a composite index)

Example:

Select empno,state
From emp
Where deptno > 20;

Explain Plan

TABLE ACCESS BY ROWID EMP
INDEX RANGE SCAN EMP$DEPTNO

 

SET Operations


The SET operations are executed on a result set of rows. They are executed at the EXECUTE stage when the cursor is opened. The user cannot see the first result until all rows are fetched and processed.

Example: FULL TABLE SCAN with GROUP BY clause.
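
Sketch of that example (EMP table as used elsewhere in this note): the SORT GROUP BY
step must consume every row before the first result can be returned.

Select deptno, count(*)
From emp
Group by deptno;

Explain Plan

SORT GROUP BY
TABLE ACCESS FULL EMP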

FOR UPDATE
HASH JOIN
INTERSECTION
MERGE JOIN
MINUS
SORT AGGREGATE
SORT GROUP BY
SORT UNIQUE
SORT JOIN
SORT ORDER BY
UNION
}}}


.
{{{
Issues arise from:
* coding
* data mapping / model
* logic 

Delays come from: 
* poor requirements gathering
* technical debt 
* poor project management
}}}




<<<
Starting 11gR1 Oracle introduced Testcase Builder (TCB) as part of the Oracle Database Fault Diagnosability Infrastructure (ADRCI, DBMS_HM and DBMS_SQLDIAG just to keep it simple). Basically it’s a set of APIs to generate a testcase starting from either a SQL ID or a SQL text.
<<<

! howto using the API 
https://mauro-pagano.com/2015/07/09/how-to-get-a-sql-testcase-with-a-single-step/

! howto using SQLD360 
{{{
sqld360 generates scripts to build a testcase.

https://github.com/karlarao/sqldb360/blob/master/sql/sqld360_5e_tcb.sql

Plus the standalone sql file
 
Does not do the CBO env though.
And has an option to generate TCB but I do not know if it has been tested.
}}}







check out nigelbayliss scripts and testcases at https://github.com/oracle/oracle-db-examples/tree/master/optimizer



-- some stories by gverma
Deleting statistics or/and dropping indexes on Global temporary tables can help too https://blogs.oracle.com/gverma/entry/deleting_statistics_orand_drop
10g optimizer case study: Runtime Execution issues with View merging https://blogs.oracle.com/gverma/entry/10g_optimizer_case_study_runti
A tuning case study: The goofy optimizer (9i.x RDBMS ) https://blogs.oracle.com/gverma/entry/a_tuning_case_study_the_goofy_1
Yet Another Case Study: The over-commit trap https://blogs.oracle.com/gverma/entry/yet_another_case_study_the_ove_1
An Application Tuning Case Study: The Deadly Deadlock https://blogs.oracle.com/gverma/entry/an_application_tuning_case_stu_1
A SQL Tuning Case Study: Could we K.I.S.S. Please?  https://blogs.oracle.com/gverma/entry/a_sql_tuning_case_study_could_1
When Conventional Thinking Fails: A Performance Case Study in Order Management Workflow customization https://blogs.oracle.com/gverma/entry/when_conventional_thinking_fai_1
Workflow performance case study: Dont Repeat History, Learn from it https://blogs.oracle.com/gverma/entry/workflow_performance_case_stud_1


http://iamsys.wordpress.com/2012/03/15/oracle-histogram-causing-bad-sql-plan/






<<showtoc>> 

! books 

SQL performance explained https://use-the-index-luke.com/sql/table-of-contents

https://www.amazon.com/Programming-Oracle-Triggers-Procedures-Prentice
https://www.amazon.com/SQL-Antipatterns-Programming-Pragmatic-Programmers-eboo

! articles 
https://www.datacamp.com/community/tutorials/sql-tutorial-query#gs.ePzPPkU







check out [[Data Model, Design]]

<<showtoc>>

! video tutorials 

!! SQL Developer Data Modeler Just what you need
* this shows brewery data model ala "untapped"
https://www.youtube.com/watch?time_continue=3707&v=NfrUy-TYP_8

!! Database Design Tutorial
https://www.youtube.com/watch?v=I_rxqSJAj6U


!! Data Modeling-Oracle SQL Developer Data Modeler
* this shows "items" and "item category" data model
Data Modeling-Oracle SQL Developer Data Modeler-Part 1 to 3 https://www.youtube.com/watch?v=pQdVhyBlP_s&list=PLRchQ6rKGoij_kf9Sfm45X071t-zlINdR
Data Modeling-Oracle SQL Developer Data Modeler-Part 4 https://www.youtube.com/watch?v=0d2rLrKYPzA


!! Introduction to SQL Developer Data Modeler (shows UK example)
* this shows student grading data model 
https://www.youtube.com/watch?v=wsVh1zLmQb0


!! ER DIAGRAM USING MS VISIO
ER DIAGRAM USING MS VISIO 10 part_1 https://www.youtube.com/watch?v=unSWF7IR2nw&list=PLC2183520018E70C1
ER DIAGRAM USING MS VISIO 10 part_2 https://www.youtube.com/watch?v=qimT1FTJzK8&list=PLC2183520018E70C1&index=2
http://usmannoshahi.blogspot.com/2014/06/auto-increment-trigger-from-sql.html



! step by step hands-on

!! logical model 

!!! create new design and save 

[img(80%,80%)[https://i.imgur.com/GgNKoJo.png]]
[img(80%,80%)[https://i.imgur.com/BGqlnn4.png]]

!!! edit model properties 
[img(80%,80%)[https://i.imgur.com/n7Cp0HH.png]]
[img(80%,80%)[https://i.imgur.com/IJyPFo2.png]]
[img(80%,80%)[https://i.imgur.com/QZWUI2L.png]]
[img(80%,80%)[https://i.imgur.com/3fd8r3C.png]]
[img(80%,80%)[https://i.imgur.com/ALCFeyu.png]]
[img(80%,80%)[https://i.imgur.com/80Mjm5k.png]]

!!! edit logical model properties 
[img(80%,80%)[https://i.imgur.com/7AJ33Rc.png]]
[img(80%,80%)[https://i.imgur.com/toKgR3z.png]]

!!! create entity 
[img(80%,80%)[https://i.imgur.com/SfAV1a2.png]]
[img(80%,80%)[https://i.imgur.com/5Un8x0J.png]]
[img(80%,80%)[https://i.imgur.com/70hBtVr.png]]

!!! edit domain administration, enter data types
<<<
whenever we create a database model we mainly use three data types:
* variable character 
* number 
* date 
<<<
[img(80%,80%)[https://i.imgur.com/VkUErZ5.png]]
[img(80%,80%)[https://i.imgur.com/4PNGGKY.png]]
[img(80%,80%)[https://i.imgur.com/64P6jNI.png]]
[img(80%,80%)[https://i.imgur.com/LPE3C9X.png]]
[img(80%,80%)[https://i.imgur.com/83znKNp.png]]
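As a rough sketch, the three domains above typically map to Oracle column types like this (the table and column names here are made up for illustration):
{{{
-- hypothetical mapping of the three common domains to column types
CREATE TABLE customer (
  customer_name  VARCHAR2(100),   -- variable character
  balance_amt    NUMBER(10,2),    -- number
  created_dt     DATE             -- date
);
}}}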

!!! create display and edit notation 
[img(80%,80%)[https://i.imgur.com/j5xL8if.png]]
[img(80%,80%)[https://i.imgur.com/2FwhOer.png]]

!!! edit relationships, create PK - FK 
* the foreign key will be created as a new column if it doesn't exist on the target table
* if the column already exists on the target table, the new FK column will be created with a sequence number appended to its name
** to fix this, you need to delete the relationship and the duplicate column
[img(80%,80%)[https://i.imgur.com/kECYnbM.png]]
* uncheck source optional
* CASCADE
[img(80%,80%)[https://i.imgur.com/gGjmuc5.png]]
[img(80%,80%)[https://i.imgur.com/eryzCwf.png]]
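Once engineered and generated, the relationship settings above roughly translate to DDL like the following (entity and column names are hypothetical): unchecking "source optional" makes the FK column NOT NULL, and the CASCADE delete rule becomes ON DELETE CASCADE.
{{{
CREATE TABLE item_category (
  category_id  NUMBER PRIMARY KEY,
  name         VARCHAR2(50)
);

CREATE TABLE item (
  item_id      NUMBER PRIMARY KEY,
  category_id  NUMBER NOT NULL,     -- "source optional" unchecked
  CONSTRAINT item_category_fk FOREIGN KEY (category_id)
    REFERENCES item_category (category_id)
    ON DELETE CASCADE               -- delete rule CASCADE
);
}}}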

!!! create Unique key 
[img(80%,80%)[https://i.imgur.com/P3vKZjk.png]]
* to set DBID as Unique key, click on "Unique Identifiers"
[img(80%,80%)[https://i.imgur.com/uW93xhc.png]]
* click on the plus sign and edit the new entry; on "Attributes and Relations" select DBID
[img(80%,80%)[https://i.imgur.com/xcQ2PYX.png]]
* on the General tab, name it "dbid UK" and select "Unique Key"
[img(80%,80%)[https://i.imgur.com/yXCZCt3.png]]
[img(80%,80%)[https://i.imgur.com/yYJrbBd.png]]
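In the generated DDL, the "dbid UK" unique identifier above ends up as a unique constraint, roughly like this (the table name is hypothetical):
{{{
ALTER TABLE databases ADD CONSTRAINT dbid_uk UNIQUE (dbid);
}}}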

!!! auto generate sequences with trigger 
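A common pre-12c pattern for this is a sequence plus a BEFORE INSERT trigger (along the lines of the auto-increment-trigger link above); on 12c+ an identity column does the same job. A sketch, with hypothetical object names:
{{{
CREATE SEQUENCE item_seq START WITH 1 INCREMENT BY 1;

CREATE OR REPLACE TRIGGER item_bir
  BEFORE INSERT ON item
  FOR EACH ROW
  WHEN (new.item_id IS NULL)   -- only fill the PK when it wasn't supplied
BEGIN
  :new.item_id := item_seq.NEXTVAL;
END;
/

-- 12c and later alternative: an identity column instead of the trigger
-- item_id NUMBER GENERATED BY DEFAULT AS IDENTITY
}}}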




!! relational model 

!!! engineer to relational model  
[img(80%,80%)[https://i.imgur.com/mzO396H.png]]

!!! Generate DDL
[img(80%,80%)[https://i.imgur.com/ipm4mGg.png]]
* select 12c database, click OK
[img(80%,80%)[https://i.imgur.com/j1vwUKE.png]]
* click Generate
[img(80%,80%)[https://i.imgur.com/uKrkKV3.png]]
* make sure there are no errors when generating the DDL
[img(80%,80%)[https://i.imgur.com/3UsTVn8.png]]



!! reverse engineer 
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/sqldevdm/r30/datamodel2moddm/datamodel2moddm.htm
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/sqldevdm/r20/updatedb/UpdateDB.html
http://www.slideshare.net/kgraziano/reverse-engineering-an-existing-database-using-oracle-sql-developer-data-modeler

[img(80%,80%)[https://i.imgur.com/wE96nms.png]]
* select "New Relational Model"
[img(80%,80%)[https://i.imgur.com/06vpzKX.png]]
.
[img(80%,80%)[https://i.imgur.com/6WFuZQZ.png]]
.
[img(80%,80%)[https://i.imgur.com/i1tc9t7.png]]
.
[img(80%,80%)[https://i.imgur.com/gXqKhFp.png]]
* this is the output of Reverse Engineer 
[img(80%,80%)[https://i.imgur.com/W9Bd60N.png]]
* this is the original 
[img(80%,80%)[https://i.imgur.com/seoO7GD.png]]








http://www.dpriver.com/pp/sqlformat.htm
http://elentok.com/sql     <-- allows collapsing the multiple subqueries of the SQL
! datagenerator 
Swingbench uses Data Generator, which you can also use as a standalone tool
http://www.dominicgiles.com/datagenerator.html

! quicksql 
You can also use Quick SQL https://docs.oracle.com/database/apex-18.1/AEUTL/using-quick-SQL.htm#AEUTL-GUID-A1308899-AA1D-42EA-8CAE-B128366538FE
Defining new data structures using Quick SQL https://www.youtube.com/watch?v=Ux2eISE9cSQ
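As a quick sketch of the Quick SQL shorthand: indentation defines parent/child tables, and directives such as /nn (not null) and /insert n (generate n sample rows) come from the Quick SQL docs — the model below is made up:
{{{
departments /insert 2
  name /nn
  location
  employees /insert 4
    name /nn
    email /lower
    job
    salary
}}}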
 
! meta360 
BTW, you can use meta360 to get all the DDL of a schema without the data. You can then feed that info to Quick SQL to generate the insert scripts
https://github.com/carlos-sierra/meta360
 
! quickplsql 
https://github.com/mortenbra/quick-plsql
<<<
https://apex.oracle.com/pls/apex/f?p=QUICKPLSQL:HOME
<<<
http://www.evernote.com/shard/s48/sh/4e9718c6-5881-4106-8822-e291a2523b9f/e1d04aa0e9d04b79769bfc57fff373f8
! Possible reasons
* ''CPU starvation'' - In AWR/Statspack, the "Captured SQL.. CPU" figure is pulled from sum(cpu_time_delta) of dba_hist_sqlstat and divided by 'DB CPU' from the time model. 'DB CPU' only accounts for the "real CPU cycles", so under starvation the denominator is understated because most of the CPU time is spent on the run queue and is not accounted for
* ''Module calling SQL'' or ''SQL calling module calling SQL'' - in this scenario the __module__ CPU time is roughly equal to that of the called __SQL__, so the two are double counted and their sum exceeds the accounted real CPU cycles
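A simplified sketch of where the numerator and denominator come from (real AWR queries also filter on dbid and instance_number; the bind names here are illustrative):
{{{
-- numerator: captured per-SQL CPU, from DBA_HIST_SQLSTAT
SELECT SUM(cpu_time_delta)/1e6 AS captured_cpu_s
FROM   dba_hist_sqlstat
WHERE  snap_id > :begin_snap AND snap_id <= :end_snap;

-- denominator: 'DB CPU' from the time model (real CPU cycles only)
SELECT (e.value - b.value)/1e6 AS db_cpu_s
FROM   dba_hist_sys_time_model b, dba_hist_sys_time_model e
WHERE  b.stat_name = 'DB CPU' AND e.stat_name = 'DB CPU'
AND    b.snap_id = :begin_snap AND e.snap_id = :end_snap;

-- "Captured SQL account for" is roughly captured_cpu_s / db_cpu_s
}}}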


! Troubleshooting 
* it could really be just a CPU starvation issue
* it could really be a double counting issue
* or it could be both
** if it shows the CPU Wait, then it's CPU starvation
** if it doesn't show the CPU Wait, determine whether the CPU starvation happens in short bursts (not a sustained workload), ELSE it could just be a double counting issue
''Ultimately you have to triage with fine grained sample intervals (snapper) and with OS data, because the spikes may be hidden from the normalized DBA_HIST_SQLSTAT data''
but
I wouldn't totally depend on this when troubleshooting; this section of AWR/Statspack is just a means of knowing the top consuming SQLs. I've got a script called awr_topsqlx http://goo.gl/YIkQ7 which shows the AAS for a particular SQL_ID in a time series manner. If there is double counting, it may show both the calling PL/SQL and the SQL_ID with high AAS, and that's a good thing because both of them are worth investigating.
Also 
The "PL/SQL lock timer" in the top 5 timed events is just a Statspack thing; in AWR you may see it as "inactive session" if the job got killed, or nothing at all if the job finished


! 1) CPU starvation 
<<<
the workload used here is 256 sessions of IOsaturationtoolkit-v2 https://www.dropbox.com/s/6bwcm5n22b22uoj/IOsaturationtoolkit-v2.tar.bz2, the load average peaks at 71 on an 8-CPU box..
''this one says Captured SQL account for  108.0%''
{{{
^LSQL ordered by CPU Time                    DB/Inst: DW/dw  Snaps: 22768-22769
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> %Total - CPU Time      as a percentage of Total DB CPU
-> %CPU   - CPU Time      as a percentage of Elapsed Time
-> %IO    - User I/O Time as a percentage of Elapsed Time
-> Captured SQL account for  108.0% of Total CPU Time (s):             146                                   <-- this is 146 (real CPU cycles)
-> Captured PL/SQL account for    0.8% of Total CPU Time (s):             146

    CPU                   CPU per           Elapsed
  Time (s)  Executions    Exec (s) %Total   Time (s)   %CPU    %IO    SQL Id
---------- ------------ ---------- ------ ---------- ------ ------ -------------
     154.5           10      15.45  105.7   90,798.7     .2   57.7 1qnnkbgf13csf                                  <-- this is 154.5
Module: SQL*Plus
Select count(*) from owitest

       1.4           49       0.03    0.9        1.4   97.7     .0 fgawnchwmysj7
Module: ASH Viewer
SELECT * FROM V$ACTIVE_SESSION_HISTORY WHERE SAMPLE_ID > :1

       0.4           12       0.04    0.3       96.8     .5   99.5 6gvch1xu9ca3g
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN :
= FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(); :mydate := next_date
; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;

       0.2            3       0.07    0.1        0.3   77.2   21.7 2gwy69qkwkhcz
Module: sqlplus@desktopserver.local (TNS V1-V3)
select sample_time, count(sid) from ( select to_char(ash.sample_time,'MM/DD
/YY HH24:MI:SS') sample_time, ash.session_id sid, ash.session_serial#
serial#, ash.user_id user_id, ash.program, ash.sql_id, ash.s
ql_plan_hash_value, sum(decode(ash.session_state,'ON CPU',1,0)) "CPU",

       0.2           24       0.01    0.1       16.1    1.2   92.0 2b064ybzkwf1y
Module: OEM.SystemPool
BEGIN EMD_NOTIFICATION.QUEUE_READY(:1, :2, :3); END;

       0.2            5       0.04    0.1        0.2   98.0     .0 9xcgnpkwktzy9
Module: sqlplus@desktopserver.local (TNS V1-V3)
select sample_time, count(sid) from ( select to_char(ash.sample_time,'MM/DD
/YY HH24:MI:SS') sample_time, ash.session_id sid, ash.session_serial#
serial#, ash.user_id user_id, ash.program, ash.sql_id, ash.s
ql_plan_hash_value, sum(decode(ash.session_state,'ON CPU',1,0)) "CPU",

       0.1            2       0.07    0.1        7.6    1.9   24.2 b4qw7n9wg64mh
INSERT /*+ APPEND LEADING(@"SEL$F5BB74E1" "H"@"SEL$2" "A"@"SEL$1") USE_NL(@"SE
}}}
<<<


! 2) ''Module calling SQL'' or ''SQL calling module calling SQL''

I've done some detailed test cases, available here (click on each of the tiddlers). The workload is [[CPU spike 1min idle interval]], which roughly matches the load in the first oracle-l post where there's PL/SQL lock timer and frequent fast SQLs
* [[doublecounting-test0- 1st encounter]]
* [[doublecounting-test1-killed]]
* [[doublecounting-test2-finished]]

Each instrumentation source is correlated by the time the load spike occurred, but here are the things to focus on in each of them:
*collectl - check the columns "User" and "Run" and "Avg1"
*ASH - the number before the "CPU".. that's the number of AAS CPU it consumed
*snapper - on my test cases the snap interval is 1 sec (see the snapper commands I used here [[CPU spike 1min idle interval]]). I need this tool to catch the sudden spike of load every 1 min, where my server only has 8 CPUs and the workload is consuming 16 CPUs; if it says "1600% ON CPU" that means it consumed 16 CPUs (1600/100)
*gas - the number of sessions and AVG_ETIME which is the elapsed time per execute
*sql_detail - the CPU_WAIT_EXEC which is the CPU WAIT
*AWR - the Top 5 Timed Events and the "Captured SQL account for", and notice the Executions if it's zero (killed) or has a value (finished)
*Statspack - the Top 5 Timed Events and the "Captured SQL account for", and notice the Executions if it's zero (killed) or has a value (finished)
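For reference, a snapper run with the 1-second interval mentioned above looks roughly like this (snapper v4 argument style; the exact options used in the test cases are in [[CPU spike 1min idle interval]]):
{{{
-- 1-second snaps, 60 samples, all sessions
@snapper ash,stats 1 60 all
}}}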



and below is the summary

<<<
test case used is a modified version of cputoolkit to simulate the high "PL/SQL lock timer" on Statspack https://www.dropbox.com/s/je6eafm1a9pnfpk/cputoolkit.tar.bz2
see the [[CPU spike 1min idle interval]] for the details of the test case script used
{{{
with double counting 
	
	-> Captured SQL accounts for  179.8% of Total DB CPU                                                                 <-- 179.8%
	-> SQL reported below exceeded  1.0% of Total DB CPU
	
	    CPU                  CPU per             Elapsd                     Old
	  Time (s)   Executions  Exec (s)  %Total   Time (s)    Buffer Gets  Hash Value
	---------- ------------ ---------- ------ ---------- --------------- ----------
	    130.06           92       1.41   86.1     222.94      24,669,380  175009430
	Module: sqlplus@desktopserver.local (TNS V1-V3)
	SELECT /*+ cputoolkit ordered                                 us
	e_nl(b) use_nl(c) use_nl(d)                                 full
	(a) full(b) full(c) full(d) */ COUNT(*) FROM SYS.OBJ$ A, SYS.OBJ
	$ B, SYS.OBJ$ C, SYS.OBJ$ D WHERE A.OWNER# = B.OWNER# AND B.OWNE
	
	    119.00           14       8.50   78.7     251.14      24,164,729 1927962500
	Module: sqlplus@desktopserver.local (TNS V1-V3)
	declare         rcount number; begin         -- 600/60=10 minute
	s of workload         for j in 1..1800 loop          -- lotslios
	 by Tanel Poder         select /*+ cputoolkit ordered
	                      use_nl(b) use_nl(c) use_nl(d)
	
	     19.18           46       0.42   12.7      19.81               0 2248514484
	Module: sqlplus@desktopserver.local (TNS V1-V3)
	select to_char(start_time,'DD HH:MI:SS'),        samples,
	 --total,        --waits,        --cpu,        round(fpct * (tot
	al/samples),2) fasl,        decode(fpct,null,null,first) first,
	       round(spct * (total/samples),2) sasl,        decode(spct,
	
	      1.60          277       0.01    1.1       1.87               0 2550496894
	Module: sqlplus@desktopserver.local (TNS V1-V3)
	 select value ||'/'||(select instance_name from v$instance) ||'_
	ora_'||         (select spid||case when traceid is not null then
	 '_'||traceid else null end                 from v$process where
	 addr = (select paddr from v$session

 
without double counting.. the Executions is zero, so I think the job was cancelled or finished
	
	-> Captured SQL accounts for   99.0% of Total DB CPU                                                                  <-- 99%
	-> SQL reported below exceeded  1.0% of Total DB CPU
	
	    CPU                  CPU per             Elapsd                     Old
	  Time (s)   Executions  Exec (s)  %Total   Time (s)    Buffer Gets  Hash Value
	---------- ------------ ---------- ------ ---------- --------------- ----------
	    198.34          144       1.38   86.1     300.59      37,436,409  175009430
	Module: sqlplus@desktopserver.local (TNS V1-V3)
	SELECT /*+ cputoolkit ordered                                 us
	e_nl(b) use_nl(c) use_nl(d)                                 full
	(a) full(b) full(c) full(d) */ COUNT(*) FROM SYS.OBJ$ A, SYS.OBJ
	$ B, SYS.OBJ$ C, SYS.OBJ$ D WHERE A.OWNER# = B.OWNER# AND B.OWNE
	
	    179.49            0              77.9     281.65      35,710,858 1927962500
	Module: sqlplus@desktopserver.local (TNS V1-V3)
	declare         rcount number; begin         -- 600/60=10 minute
	s of workload         for j in 1..1800 loop          -- lotslios
	 by Tanel Poder         select /*+ cputoolkit ordered
	                      use_nl(b) use_nl(c) use_nl(d)
	
	      6.22          164       0.04    2.7       6.42               0 2005132824
	Module: sqlplus@desktopserver.local (TNS V1-V3)
	select to_char(sysdate,'MM/DD/YY HH24:MI:SS') tm, a.inst_id inst
	, sid, substr(program,1,19) prog, a.username, b.sql_id, child_nu
	mber child, plan_hash_value, executions execs, (elapsed_time/dec
	ode(nvl(executions,0),0,1,executions))/1000000 avg_etime, sql_te
}}}
<<<

! original question from oracle-l 
<<<
http://www.freelists.org/post/oracle-l/DB-CPU-is-much-lower-than-CPU-Time-Reported-by-TOP-SQL-consumers
{{{
*    CPU                  CPU per             Elapsd                     Old
  Time (s)   Executions  Exec (s)  %Total   Time (s)    Buffer Gets  Hash
Value
---------- ------------ ---------- ------ ---------- --------------- ----------
   4407.08       20,294       0.22   51.6   10006.66   1,228,369,784 3703299877
Module: JDBC Thin Client

   3943.14      157,316       0.03   46.2    6915.60   1,034,202,723 1127338565
Module: sel_ancomm_vss_06.tsk@c2aixprod (TNS V1-V3)

   2358.20      269,711       0.01   27.6   4095.76    1,508,308,542 1995656981
Module: sel_zuteiler_alert.tsk@c2aixprod (TNS V1-V3)

   1305.21        9,932       0.13   15.3    2483.90         331,327 1310406159
Module: sel_verwaltung.tsk@c2aixprod (TNS V1-V3)*

These 4 statements already adds up to 12013 CPU Seconds and the DB CPU is 8464 seconds.

Also look this text from top sql statmenet section:

-> Total DB CPU (s):           8,539
*-> Captured SQL accounts for  232.5% of Total DB CPU*
-> SQL reported below exceeded  1.0% of Total DB CPU

Capture SQL is 232% DB CPU! How can this be possible?
}}}
<<<







https://fiddles.io/#
http://sqlfiddle.com/
https://www.db-fiddle.com/
https://akdora.wordpress.com/2009/02/18/rules-of-precedence-in-sql-where-clause/
https://www.tutorialspoint.com/plsql/plsql_operators_precedence.htm


<<showtoc>>


! SQL server on linux - announcement 
https://blogs.microsoft.com/blog/2016/03/07/announcing-sql-server-on-linux/#sm.000ie7pk911due9py3s1m58yce5xt
https://techcrunch.com/2016/11/16/microsofts-sql-server-for-linux-is-now-available-for-testing/
https://www.microsoft.com/en-us/sql-server/sql-server-vnext-including-Linux
https://blogs.technet.microsoft.com/dataplatforminsider/2016/11/16/announcing-sql-server-on-linux-public-preview-first-preview-of-next-release-of-sql-server/
https://blogs.technet.microsoft.com/dataplatforminsider/2016/11/16/announcing-the-next-generation-of-databases-and-data-lakes-from-microsoft/ 
<<<
SQL Server 2016 SP1
We are announcing SQL Server 2016 SP1 which is a unique service pack – for the first time we introduce consistent programming model across SQL Server editions. With this model, programs written to exploit powerful SQL features such as in-memory OLTP, in-memory columnstore analytics, and partitioning will work across Enterprise, Standard and Express editions.
<<<
https://cloudblogs.microsoft.com/sqlserver/2016/12/16/sql-server-on-linux-how-introduction/



! vscode , .NET MVC
download https://code.visualstudio.com/docs/introvideos/overview
https://www.microsoft.com/en-us/sql-server/developer-tools
https://channel9.msdn.com/Tags/sql+server?sort=viewed
https://gitter.im/mssqldev/Lobby
http://discuss.emberjs.com/t/are-developers-creating-ember-apps-outside-the-context-of-rails/283/65
http://stackoverflow.com/questions/25916381/how-am-i-supposed-to-persist-to-sql-server-db-using-ember-js-and-asp-net-mvc
http://www.codeproject.com/Articles/511031/A-sample-real-time-web-application-using-Ember-js


! SQL server on linux - preview 
vm template https://azure.microsoft.com/en-us/marketplace/partners/microsoft/sqlservervnextonredhatenterpriselinux72/
sql-cli https://www.microsoft.com/en-us/sql-server/developer-get-started/node-rhel
https://portal.azure.com
!! docs 
https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-get-started-tutorial
https://www.npmjs.com/package/sql-cli
https://gitter.im/mssqldev/Lobby

! HOWTO 
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-get-started
https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-hero-tutorial


! azure pricing calculator 
https://azure.microsoft.com/en-us/pricing/calculator/?tduid=(2835563d6d6ab27e429813b35ec8211b)(81561)(2130923)(0400ie1s1jpf)()



! oracle cloud vs azure 
<<<
So I didn't know that I had to select the "Public Cloud Services - US", I had to watch this youtube video https://www.youtube.com/watch?v=tCVvtn3M4c4 , and this youtube video https://www.youtube.com/watch?v=cBBqrRTaMDw , "Identity Domain" , "Welcome to Oracle Cloud"

they also released the preview of SQL server (vNext CTP1) on Linux and made the following database features "in-memory OLTP, in-memory columnstore analytics, and partitioning" available across Enterprise, Standard and Express editions (that's for SQL Server 2016 SP1). Matrix here https://technet.microsoft.com/.../windows/cc645993(v=sql.90) , the blog here https://blogs.technet.microsoft.com/.../announcing-the.../ . That's a very good move to compete w/ Oracle database in terms of price point and features. Also Azure is very easy to use compared to Oracle Cloud and I can see Microsoft gearing towards developer happiness w/ this site https://www.microsoft.com/.../sql.../developer-get-started and the vscode https://code.visualstudio.com/docs/introvideos/overview and this https://gitter.im/mssqldev/Lobby it feels less enterprisey and more of fun w/ genuine contributions by the community

Microsoft just released SQL Server 2016 SP1 1) and at the same time moved most of the Enterprise features to all editions (Standard, Express) for this version 2). It's gonna be hard to sell Oracle options like partitioning and In-Memory when they're free in SQL Server. 

1) https://blogs.technet.microsoft.com/dataplatforminsider/2016/11/16/sql-server-2016-service-pack-1-generally-available/
2) https://technet.microsoft.com/en-us/windows/cc645993(v=sql.90)
<<<

! cloud UI 
https://builtwith.com/?https%3a%2f%2fportal.azure.com <- built w/ ASP.NET MVC 
https://builtwith.com/cloud.oracle.com <- built w/ J2EE and Foundation 
https://builtwith.com/?https%3a%2f%2fcloud.digitalocean.com <- built w/ rails 










We have this https://github.com/mauropagano/sqld360/blob/master/sql/sqld360_1d_standalone.sql

And the Kerry link that you sent 

We used this before on benchmarking some OBIEE SQLS https://www.dropbox.com/s/t02ysug2t1nufxq/runbenchtoolkit.zip
And there are other tools you can use http://www.rittmanmead.com/2013/03/performance-and-obiee-test-build/

Or you can make use of this to capture the SQLs https://github.com/tmuth/Query-Test-Framework and then run it on top of runbenchtoolkit? 

''Download''
http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
http://www.oracle.com/technetwork/developer-tools/sqlcl/downloads/sqlcl-relnotes-421-3415922.html
http://www.oracle.com/technetwork/developer-tools/sql-developer/sqldev-newfeatures-v42-3211987.html
SQL Developer Data Modeler User's Guide http://docs.oracle.com/database/sql-developer-4.2/DMDUG/toc.htm


''Migrate settings to new machine''
http://zacktutorials.blogspot.com/2011/02/how-to-copy-oracle-sqldeveloper.html
http://oracledeli.wordpress.com/2011/09/28/sql-developer_migrate_settings_files/
{{{
Navigate to the following location,
Step 1: C:\Documents and Settings\\Application Data\SQL Developer
Step 2: C:\Documents and Settings\\Application Data\SQL Developer\systemXX.X.X.X.XX
Step 3: Copy the product-preferences.xml in the location below,
C:\Documents and Settings\\Application Data\SQL Developer\systemXX.X.X.X.XX\o.sqldeveloper.XX.X.X.XX.XX
Step 4: Copy the connections.xml in the location below,
C:\Documents and Settings\\Application Data\SQL Developer\systemXX.X.X.X.XX\o.jdeveloper.db.connection.XX.X.X.X.XX.XX.XX
Copy the 2 files (product-preferences.xml & connections.xml) to your new machine in the same location

C:\Users\karl\AppData\Roaming\SQL Developer\system4.2.0.17.089.1709\o.jdeveloper.db.connection.13.0.0.1.42.170225.201
C:\Users\karl\AppData\Roaming\SQL Developer\system4.2.0.17.089.1709\o.sqldeveloper.12.2.1.17.89.1709
}}}

''SetJavaHome''
http://stackoverflow.com/questions/7876502/how-can-i-run-oracle-sql-developer-on-jdk-1-6-and-everything-else-on-1-7
{{{
version 4.x
%APPDATA%\sqldeveloper\1.0.0.0.0\product.conf
}}}
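For example, product.conf would contain a line like this (the JDK path is whatever is installed on your machine):
{{{
SetJavaHome C:\Program Files\Java\jdk1.8.0_121
}}}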

''instance viewer''  
http://www.thatjeffsmith.com/archive/2014/12/sql-developer-4-1-instance-viewer/   , https://www.youtube.com/watch?v=FrdUCdGJEG8



! SELECT asterisk - automatic column population 
https://www.thatjeffsmith.com/archive/2016/11/7-ways-to-avoid-select-from-queries-in-sql-developer/










-- SQL*LOADER

Doc ID 1012594.6 Useful unix utilities to be used with SQL*Loader
Doc ID 1012726.6 Converting load file from delimited to fixed format
Doc ID 77337.1 How to add blank line to a SQL*Plus spooled output file


* SQL*Loader will show as an INSERT SQL with module "SQL Loader conventional path". So if you are qualifying SQLs, make sure to look at both SQL_TEXT and MODULE 
[img(80%,80%)[ https://i.imgur.com/7kG293E.png ]]
http://steve-lyon.blogspot.com/2013/07/sql-loader-step-by-step-basics-example-1.html
{{{
NAME, BALANCE, START_DT
"Jones, Joe" ,     14 , "Jan-12-2012 09:25:37 AM"
"Loyd, Lizy" , 187.26 , "Aug-03-2004 03:13:00 PM"
"Smith, Sam" ,  298.5 , "Mar-27-1997 11:58:04 AM"
"Doyle, Deb" ,   5.95 , "Nov-30-2010 08:42:21 PM"


create table scott.sql_loader_demo_simple
  ( customer_full_name   varchar2(50)
  , account_balance_amt  number
  , account_start_date   date
  ) ;


------------------------------------------------------------
-- SQL-Loader Basic Control File
------------------------------------------------------------
options  ( skip=1 )
load data
  infile               'data.csv'           
  truncate into table   scott.sql_loader_demo_simple
fields terminated by ","       
optionally enclosed by '"' 
  ( customer_full_name
  , account_balance_amt
  , account_start_date   DATE "Mon-DD-YYYY HH:MI:SS am"
  ) 



sqlldr 'scott/tiger@my_database' control='control.txt' log='results.log'

}}}







There was a whitepaper about oracle sqlnet and debugging it; I think it wasn't written by someone in our group. Does anyone remember that whitepaper?
I'm trying to look up some oracle network related functions, and to see if I can get some basics like network layers and the accompanying functions.


I think the paper you are looking for is still available at
Examining Oracle Net, Net8, SQL*Net Trace Files (Doc ID 156485.1)
 
Other MOS related notes/references
SQL*NET PACKET STRUCTURE: NS PACKET HEADER (Doc ID 1007807.6)
http://www.nyoug.org/Presentations/2008/Sep/Harris_Listening%20In.pdf
http://ondoc.logand.com/d/359/html
 
 
https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-network-waits
Troubleshooting Waits for 'SQL*Net message to client' and 'SQL*Net more data to client' Events from a Performance Perspective (Doc ID 1404526.1)
High Waits for Event 'SQL*Net message from client' Attributed to SQL in TKProf (Doc ID 400164.1)

<<<
	Reduce Client Bottlenecks
A client bottleneck in the context of a slow database is another way to say that most of the time for sessions is being spent outside of the database. This could be due to a truly slow client or a slow network (and related components).
 
 	
Observations and Causes

Examine the table below for common observations and causes:

Note: This list shows some common observations and causes but is not a complete list. If you do not find a possible cause in this list, you can always open a service request with Oracle to investigate other possible causes. Please see the section below called, "Open a Service Request with Oracle Support Services".

 
 
High Wait Time due to Client Events Before Any Type of Call

The Oracle shadow process is spending a significant amount of time waiting for messages from clients. The waits occur between FETCH and PARSE calls or before EXECUTE calls. There are few FETCH calls for the same cursor.

What to look for

TKProf:
Overall wait event summary for non-recursive and recursive statements shows significant amount of time for SQL*Net message from client waits compared to the total elapsed time in the database
Each FETCH call typically returns 5 or more rows (indicating that array fetches are occurring)

 
 
 
Cause Identified: Slow client is unable to respond to the database quickly

The client is running slowly and is taking time to make requests of the database.

Cause Justification
TKProf:
SQL*Net message from client waits are a large part of the overall time (see the overall summary section)
There are more than 5 rows per execution on average (divide total rows by total execution calls for both recursive and non-recursive calls). When array operations are used, you'll see 5 to 10 rows per execution.

You may also observe that performance is good when the same queries that the client sends are executed via a different client (on another node).
 
 
 
Solution Identified: Investigate the client

It's possible that the client or middle-tier is saturated (not enough CPU or memory) and is simply unable to send requests to the database fast enough. 

You will need to check the client for sufficient resources or application bugs that may be delaying database calls.

Effort Details: Medium effort; it is easy to check clients or mid-tiers for OS resource saturation. Bugs in application code are more difficult to find.

Risk Details: Low risk.
 
Solution Implementation

It may help to use a tool like OSWatcher to capture OS performance metrics on the client. 

To identify a specific client associated with a database session, see the V$SESSION view under the columns, CLIENT_INFO, PROCESS, MACHINE, PROGRAM.

Documentation
          Reference: V$SESSION

Notes
          The OS Watcher (OSW) User Guide

          The OS Watcher For Windows (OSWFW) User Guide

Implementation Verification


Implement the solution and determine if the performance improves. If performance does not improve, examine the following:
Review other possible reasons
Verify that the data collection was done properly
Verify the problem statement
If you would like to log a service request, a test case would be helpful at this stage.
 
 
 
 
Cause Identified: Slow network limiting the response time between client and database

The network is saturated and this is limiting the ability of the client and database to communicate with each other.

Cause Justification
TKProf:
SQL*Net message from client waits are a large part of the overall time (see the overall summary section)
Array operations are used. This is seen when there are more than 5 rows per execution on average (divide total rows by total execution calls for both recursive and non-recursive calls)
The average time for a ping is about equal to twice the average time for a SQL*Net message from client wait and this time is more than a few milliseconds. This indicates that most of the client time is spent in the network.

You may also observe that performance is good when the same queries that the client sends are executed via a different client on a different subnet (especially one very close to the database server).
 
 
 
Solution Identified: Investigate the network

Check the responsiveness of the network from different subnets and interface cards. The netstat, ping and traceroute utilities can be used to check network performance.

Effort Details: Medium effort; network problems are relatively easy to check but sometimes difficult to solve.

Risk Details: Low risk.
 
Solution Implementation

Consult your system documentation for utilities such as ping, netstat, and traceroute

Implementation Verification


Implement the solution and determine if the performance improves. If performance does not improve, examine the following:
Review other possible reasons
Verify that the data collection was done properly
Verify the problem statement
If you would like to log a service request, a test case would be helpful at this stage.
 
 
 
 
High Wait Time due to Client Events Between FETCH Calls

The Oracle shadow process is spending a significant amount of time waiting for messages from clients between FETCH calls for the same cursor.

What to look for

10046 / TKProf:
Overall wait event summary for non-recursive and recursive statements shows significant amount of time for SQL*Net message from client waits compared to the total elapsed time in the database
The client waits occur between many fetch calls for the same cursor (as seen in the cursor #).
On average, there are less than 5 (and usually 1) row returned per execution

 
 
 
Cause Identified: Lack of Array Operations Causing Excess Calls to the Database

The client is not using array operations to process multiple rows in the database. This means that many more calls are performed against the database. Each call incurs a wait while the database waits for the next call. The time accumulates over many calls and will impact performance.

Cause Justification
TKProf:
SQL*Net message from client waits are a large part of the overall time (see the overall summary section)
There is nearly 1 row per execution on average (divide total rows by total execution calls for both recursive and non-recursive calls). When array operations are used, you'll see 5 to 10 rows per execution.
In some cases, most of the time is for a few SQL statements; you may need to examine the whole TKProf to find where the client waits were highest and examine those for the use of array operations
 
 
 
Solution Identified: Use array operations to avoid calls

Array operations will operate on several rows at a time (either fetch, update, or insert). A single fetch or execute call will do the work of many more. Usually, the benefits of array operations diminish after an arraysize of 10 to 20, but this depends on what the application is doing and should be determined through benchmarking.

Since fewer calls are needed, there are savings in waiting for client messages, network traffic, and database work such as logical reads and block pins.

Effort Details: Medium effort; depending on the client, it may be easy or difficult to change the application and use array operations.

Risk Details: Very low risk; it is risky only when enormous array sizes are used in OLTP operations and many rows are expected, due to waiting for the entire array to be filled before the first row is returned.
 
Solution Implementation

The implementation of array operations will vary by the type of programming language being used. See the documents below for some common ways to implement array operations.

Documentation
          PL/SQL User's Guide and Reference : Reducing Loop Overhead for DML Statements and Queries with Bulk SQL

          Programmer's Guide to the Oracle Precompilers : Using Host Arrays

          JDBC Developer's Guide and Reference: Update Batching

          JDBC Developer's Guide and Reference: Oracle Row Prefetching

Notes
          Bulk Binding - What it is, Advantages, and How to use it

          How To Fetch Data into a Table of Records using Bulk Collect and FORALL

Implementation Verification


Implement the solution and determine if the performance improves. If performance does not improve, examine the following:
* Review other possible reasons
* Verify that the data collection was done properly
* Verify the problem statement
If you would like to log a service request, a test case would be helpful at this stage.
 

<<<
https://blog.tanelpoder.com/2008/02/10/sqlnet-message-to-client-vs-sqlnet-more-data-to-client/

check email "Re: [SOLVED] Re: SQL*Net more data from client" 
<<<
The “more data” behavior/pattern is, I think, the same for both “to client” and “from client”: in both scenarios the data spans SDU packets, and it's just a matter of which side is waiting.
For troubleshooting, those 3 key time accounting instrumentation metrics (on snapper) would, I believe, be the same for both cases; you just reverse the “to” and “from”.
But the bottom line is: make sure the client and server packet sizes are big enough and the same on both sides (client and server).

http://docwiki.embarcadero.com/DBOptimizer/en/Oracle:_Network_Waits#SQL.2ANet_more_data_to_client
<<<

<<<
https://blog.tanelpoder.com/2008/02/10/sqlnet-message-to-client-vs-sqlnet-more-data-to-client/
Now we see SQL*Net more data to client waits as well as the 5000 rows returned for every fetch call just don’t fit into a single SDU buffer.

I’ll reiterate that both SQL*Net message to client and SQL*Net more data to client waits only record the time it took to write the return data from Oracle’s userland SDU buffer to OS kernel-land TCP socket buffer. Thus the wait times of only microseconds. Thanks to that, all of the time a TCP packet spent “flying” towards the client is actually accounted in SQL*Net message from client wait statistic. The problem here is though, that we don’t know how much of this time was spent on the wire and how much of it was application think time.

Therefore, unless you’re going to buy a tool which is able to interpret TCP ACK echo timestamps, you need to measure network latency using application side instrumentation.

And this blog shows before and after workload screenshots after setting the DEFAULT_SDU_SIZE=32767, RECV_BUF_SIZE=65536, SEND_BUF_SIZE=65536
https://oracleattitude.wordpress.com/2014/08/22/oracle-performance-sqlnet-more-data-from-client/
 
<<<
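Putting that last blog post together, the settings it changed look like this in sqlnet.ora. A sketch only: set them on both client and server, and benchmark before/after rather than treating these values as universally right.

{{{
# sqlnet.ora on BOTH client and server (values from the blog post above)
DEFAULT_SDU_SIZE = 32767
RECV_BUF_SIZE    = 65536
SEND_BUF_SIZE    = 65536
}}}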



! the issue
{{{
I can see a huge amount of Network waits on an environment (all related to ‘SQL*Net more data from client’) but I can find no SQL_ID in the sessions waiting for such event.
 
Note: DBA team reports no complaints whatsoever related to this from either the app or the user level
 
The following is happening at H&M in only one of their 3 clustered environments (EU2).
This is a summary of what's been seen in this environment during the last few months.
 
Black Friday through Cyber Monday (4 days)
 
BUCKET                         PERCENT    COLOR    TOOLTIP
------------------------------ ---------- ------ ------------------------------------------------------------
ON CPU (42.2%)                       42.2 34CF27 1907967 10s-samples (42.2% of DB Time)
Network (31.6%)                      31.6 989779 1429259 10s-samples (31.6% of DB Time)
Cluster (14.4%)                      14.4 CEC3B5  651200 10s-samples (14.4% of DB Time)
Other (3.3%)                          3.3 F571A0  149882 10s-samples (3.3% of DB Time)
Commit (2.3%)                         2.3 EA6A05  102367 10s-samples (2.3% of DB Time)
Application (1.9%)                    1.9 C42A05   86235 10s-samples (1.9% of DB Time)
User I/O (1.8%)                       1.8 0252D7   80683 10s-samples (1.8% of DB Time)
System I/O (1.7%)                     1.7 1E96DD   78265 10s-samples (1.7% of DB Time)
Concurrency (.7%)                      .7 871C12   33500 10s-samples (.7% of DB Time)
Administrative (.1%)                   .1 75763E    3987 10s-samples (.1% of DB Time)
Scheduler (0%)                          0 9FFA9D     211 10s-samples (0% of DB Time)
Configuration (0%)                      0 594611     181 10s-samples (0% of DB Time)

From a Database perspective currently:
 
SQL> set null '(null)'
SQL> select event, sql_id, count('x') from v$active_session_history where event = 'SQL*Net more data from client' group by event, sql_id having count('x') > 10 order by count('x') ;
 
EVENT                                      SQL_ID        COUNT('X')
------------------------------------------ ------------- ----------
SQL*Net more data from client              bd5jc9nsyjq29         12
SQL*Net more data from client              adg2f9v3hsxtt         13
SQL*Net more data from client              434jx12t4g8dn         17
SQL*Net more data from client              (null)             24865
 
SQL> select con_id, sql_id, dbid, program, module, ROW_NUMBER () OVER (ORDER BY COUNT(*) DESC) rn, COUNT(*) samples FROM dba_hist_active_sess_history h WHERE sql_id||program||module IS NOT NULL AND wait_class = 'Network' AND event = 'SQL*Net more data from client' GROUP BY con_id, sql_id, dbid, program, module having COUNT(*) > 100000 ;
 
    CON_ID SQL_ID              DBID PROGRAM                                  MODULE                                           RN    SAMPLES
---------- ------------- ---------- ---------------------------------------- ---------------------------------------- ---------- ----------
         0 (null)        1629892510 JDBC Thin Client                         JDBC Thin Client                                  1    8405707
         0 (null)        1629892510 JDBC Thin Client                         /hmwebservices                                    2     309973
         0 (null)        1629892510 JDBC Thin Client                         /ru_ru                                            3     168724
         0 (null)        1629892510 JDBC Thin Client                         /pl_pl                                            4     168675
 
SQL> @ashtop username,sql_id "event='SQL*Net more data from client'" sysdate-7 sysdate                                                                                
    Total                                                                                              Distinct
  Seconds     AAS %This   USERNAME             SQL_ID        FIRST_SEEN          LAST_SEEN           Execs Seen
--------- ------- ------- -------------------- ------------- ------------------- ------------------- ----------
    92809      .2   97% | HYPRODBRIS           (null)        2018-12-18 14:24:29 2018-12-19 13:04:03          1
       64      .0    0% | HYPRODBRIS           434jx12t4g8dn 2018-12-18 14:26:54 2018-12-19 12:37:49         64
       33      .0    0% | HYPRODBRIS           adg2f9v3hsxtt 2018-12-18 14:39:20 2018-12-19 12:47:16         33
       32      .0    0% | HYPRODBRIS           4rum1h74czt7m 2018-12-18 18:40:14 2018-12-19 12:40:02         32
       30      .0    0% | HYPRODBRIS           8zv14zdf9f2b3 2018-12-18 16:36:11 2018-12-19 12:46:52         30
       29      .0    0% | HYPRODBRIS           737u6qhnqgkc6 2018-12-18 15:20:29 2018-12-19 12:49:59         29
       26      .0    0% | HYPRODBRIS           bd5jc9nsyjq29 2018-12-18 16:28:06 2018-12-19 12:31:25         26
       25      .0    0% | HYPRODBRIS           6pzcrqd3mzk0g 2018-12-18 15:02:49 2018-12-19 12:17:53         25
       25      .0    0% | HYPRODBRIS           7gb0ugms3ppjz 2018-12-18 16:27:54 2018-12-19 12:41:48         25
       23      .0    0% | HYPRODBRIS           70fcfxjcypsn0 2018-12-18 16:39:34 2018-12-19 12:11:18         23
       22      .0    0% | HYPRODBRIS           8vzxyrj296ph5 2018-12-18 17:05:28 2018-12-19 13:00:28         22
       20      .0    0% | HYPRODBRIS           6jdwkdb5b7yp9 2018-12-18 16:30:03 2018-12-19 13:00:16         20
       19      .0    0% | HYPRODBRIS           02drmxbqbf8yz 2018-12-18 15:51:51 2018-12-19 12:56:29         19
       19      .0    0% | HYPRODBRIS           2d10dcz1yf66r 2018-12-18 17:31:01 2018-12-19 12:14:53         19
       19      .0    0% | HYPRODBRIS           3rhxvxhsz1rf1 2018-12-18 15:23:14 2018-12-19 12:42:19         19
 
From an application perspective, Ct tells me the same release is running in all EU1, EU2, EU3 sites (?) and I can find no differences in network configuration (at server level) in either of them.
Any idea on how to track down which processes/programs are causing this behaviour would be highly appreciated.
}}}

! the fix 
{{{
If the SQL statement to be parsed is big enough to be sent in several pieces, the shadow process waits for the full SQL statement before it can actually start parsing it.
During this time it waits on “SQL*Net message from client” without any sql_id.
 
Running SQL statements up to 1.5 MB in size:
 
a 48KB statement rendered 2 "SQL*Net message from client" waits
a 1.5MB statement rendered 43 "SQL*Net message from client" waits
 
Short stack: kslwtectx<-opikndf2<-ttcclr<-ttcc2u<-ttcpip<-opitsk<-opiino<-opiodr<-opidrv<-sou2o<-opimai_real<-ssthrdmain<-main<-__libc_start_mains
}}}
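One way to attribute the no-sql_id samples to client programs is to group the ASH rows by the client attributes instead of sql_id. A sketch (adjust the event name and add a sample_time predicate for your window):

{{{
-- group the "no SQL_ID" network wait samples by originating program
select program, module, machine, count(*) samples
  from gv$active_session_history
 where event = 'SQL*Net more data from client'
   and sql_id is null
 group by program, module, machine
 order by samples desc;
}}}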









-- easy CSV with headings https://www.safaribooksonline.com/library/view/oracle-sqlplus-the/0596007469/re105.html
https://stackoverflow.com/questions/5576901/sqlplus-spooling-how-to-get-rid-of-first-empty-line
{{{
$ s1

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Oct 20 10:03:07 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Last Successful login time: Wed Oct 20 2021 10:01:28 -04:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

10:03:07 SYSTEM@ORCL> @testcsv2
10:03:11 SYSTEM@ORCL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle@localhost.localdomain:/home/oracle/karao/scripts-master/performance:orclcdb
$ cat testcsv2.csv 

"USERNAME","USER_ID","PASSWORD","ACCOUNT_STATUS","LOCK_DATE","EXPIRY_DATE","DEFAULT_TABLESPACE","TEMPORARY_TABLESPACE","LOCAL_TEMP_TABLESPACE","CREATED","PROFILE","INITIAL_RSRC_CONSUMER_GROUP","EXTERNAL_NAME","PASSWORD_VERSIONS","EDITIONS_ENABLED","AUTHENTICATION_TYPE","PROXY_ONLY_CONNECT","COMMON","LAST_LOGIN","ORACLE_MAINTAINED","INHERITED","DEFAULT_COLLATION","IMPLICIT","ALL_SHARD","PASSWORD_CHANGE_DATE"
"SYS",0,,"OPEN",,,"SYSTEM","TEMP","TEMP","17-APR-19","DEFAULT","SYS_GROUP",,"11G 12C ","N","PASSWORD","N","YES",,"Y","YES","USING_NLS_COMP","NO","NO",
"AUDSYS",8,,"LOCKED","31-MAY-19",,"USERS","TEMP","TEMP","17-APR-19","DEFAULT","DEFAULT_CONSUMER_GROUP",,,"N","NONE","N","YES",,"Y","YES","USING_NLS_COMP","NO","NO",
"SYSTEM",9,,"OPEN",,,"SYSTEM","TEMP","TEMP","17-APR-19","DEFAULT","SYS_GROUP",,"11G 12C ","N","PASSWORD","N","YES","20-OCT-21 10.03.07.000000000 AM -04:00","Y","YES","USING_NLS_COMP","NO","NO",
"OUTLN",13,,"LOCKED","31-MAY-19",,"SYSTEM","TEMP","TEMP","17-APR-19","DEFAULT","DEFAULT_CONSUMER_GROUP",,,"N","NONE","N","YES",,"Y","YES","USING_NLS_COMP","NO","NO",
"GSMADMIN_INTERNAL",22,,"LOCKED","31-MAY-19",,"SYSAUX","TEMP","TEMP","17-APR-19","DEFAULT","DEFAULT_CONSUMER_GROUP",,,"N","NONE","N","YES",,"Y","YES","USING_NLS_COMP","NO","NO",
"GSMUSER",23,,"LOCKED","31-MAY-19",,"USERS","TEMP","TEMP","17-APR-19","DEFAULT","DEFAULT_CONSUMER_GROUP",,,"N","NONE","N","YES",,"Y","YES","USING_NLS_COMP","NO","NO",
"DIP",24,,"LOCKED","17-APR-19",,"USERS","TEMP","TEMP","17-APR-19","DEFAULT","DEFAULT_CONSUMER_GROUP",,,"N","NONE","N","YES",,"Y","YES","USING_NLS_COMP","NO","NO",
"REMOTE_SCHEDULER_AGENT",35,,"LOCKED","31-MAY-19",,"USERS","TEMP","TEMP","17-APR-19","DEFAULT","DEFAULT_CONSUMER_GROUP",,,"N","NONE","N","YES",,"Y","YES","USING_NLS_COMP","NO","NO",
"DBSFWUSER",36,,"LOCKED","31-MAY-19",,"SYSAUX","TEMP","TEMP","17-APR-19","DEFAULT","DEFAULT_CONSUMER_GROUP",,,"N","NONE","N","YES",,"Y","YES","USING_NLS_COMP","NO","NO",
oracle@localhost.localdomain:/home/oracle/karao/scripts-master/performance:orclcdb
$ 
oracle@localhost.localdomain:/home/oracle/karao/scripts-master/performance:orclcdb
$ cat testcsv2.sql 

-- format csv
set markup csv on 
set feedback off

-- this will not show the output on screen
set termout off
set echo off verify off
 
-- for performance, set the arraysize to larger value
set arraysize 5000 


spool testcsv2.csv
select * from dba_users where rownum < 10;
spool off

}}}

-- easy HTML 
{{{
SET MARKUP HTML ON 
}}}


-- define variable 
{{{
COL edb360_bypass NEW_V edb360_bypass;
select 3600 edb360_bypass from dual;

or this 

define edb360_secs2go = 3600
}}}

-- PRELIM
http://laurentschneider.com/wordpress/2011/07/sqlplus-prelim.html

-- ESCAPE CHARACTER
http://www.orafaq.com/faq/how_does_one_escape_special_characters_when_writing_sql_queries
http://www.orafaq.com/wiki/SQL*Plus_FAQ
{{{
Define an escape character:
SET ESCAPE '\'
SELECT '\&abc' FROM dual;
}}}


-- Don't scan for substitution variables:
{{{
SET SCAN OFF
SELECT '&ABC' x FROM dual;
}}}


-- NULLIF
https://forums.oracle.com/forums/thread.jspa?threadID=2303647
{{{
The simplest way is NULLIF
NULLIF (x, y)
returns NULL if x and y are the same; otherwise, it returns x. So
n / NULLIF (d, 0)
returns NULL if d is 0; otherwise, it returns n/d.
}}}
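A quick check of the idea from DUAL:

{{{
select 10 / nullif(0, 0) as safe_div from dual;   -- divisor 0: returns NULL instead of ORA-01476
select 10 / nullif(2, 0) as safe_div from dual;   -- divisor 2: returns 5
}}}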


-- ACCEPT/HIDE
http://docs.oracle.com/cd/B19306_01/server.102/b14357/ch12005.htm
http://www.database-expert.com/white_papers/oracle_sql_script_that_accepts_passwords.htm
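For instance, ACCEPT with the HIDE option prompts without echoing the input (a minimal sketch; the variable name and user are arbitrary):

{{{
-- prompt for a password without echoing it to the screen
ACCEPT pw CHAR PROMPT 'Password: ' HIDE
CONNECT scott/&pw
-- clean up the substitution variable afterwards
UNDEFINE pw
}}}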


SQL*Plus command line history completion - RLWRAP
  	Doc ID: 	460591.1
{{{
Purpose

SQL*Plus, the primary interface to the Oracle Database server, 
provides a powerful yet easy-to-use environment for querying, defining, and controlling data.
However, some command-line utilities, for example bash, provide features such as:

- command history (up/down arrow keys)
- auto completion (TAB key)
- searchable command line history (Ctrl+r)

The scope of this bulletin is to provide these features to SQL*Plus.

Scope and Application

For all SQL*Plus users, but in particular for Linux platforms, as this note was written with that operating system in mind. In any case the idea can work on other OSs as well.
SQL*Plus command line history completion

SQL*Plus users working on Linux platforms have the opportunity to use a readline wrapper "rlwrap". rlwrap is a 'readline wrapper' that uses the GNU readline library to allow the editing of keyboard input for any other command. Input history is remembered across invocations, separately for each command; history completion and search work as in bash and completion word lists can be specified on the command line. Since SQL*Plus is not built with readline library, rlwrap is just doing the job.

- 'rlwrap' is really a tiny program. It's about 24K in size, and you can download it from the official developer's (Hans Lub) website http://utopia.knoware.nl/~hlub/uck/rlwrap/

What do you need to compile and run it
A newer (4.2+) GNU readline (you can get it at ftp://ftp.gnu.org/gnu/readline/)
and an ANSI C compiler. 
rlwrap compiles and runs on many Unix systems and on cygwin.

Installation should be as simple as:

./configure
make
make install
 

Compile rlwrap statically

If you don't have the root account you can compile rlwrap statically
and install it under $HOME/bin executing this command:

CFLAGS=-I$HOME/readline-6.0 CPPFLAGS=-I$HOME/readline-6.0 LDFLAGS=-static ./configure --prefix=$HOME/bin
make
make install

where $HOME/readline-6.0 is the 'readline' source location

 



A different option, if you are using Linux, is to download the newest source rpm package from e.g.: http://download.fedora.redhat.com/pub/epel/5/SRPMS/rlwrap-0.37-1.el5.src.rpm and build the binary rpm package by

# rpm -ivh rlwrap-0.37-1.el5.src.rpm
# cd /usr/src/redhat/SPECS/
# rpmbuild -bb rlwrap.spec
# cd ../RPMS/<arch>/
and then you can install it as any other rpm package by e.g.

# rpm -ivh rlwrap-0.37-1.x86_64.rpm
 

- After installing the package, you should configure the user's environment so that it makes use of the installed utility: add the following line in '/etc/bashrc' (globally) or in '${HOME}/.bashrc' (locally for the user). Change '<path>' to the right path of your rlwrap:

alias sqlplus='<path>/rlwrap ${ORACLE_HOME}/bin/sqlplus'
The modified .bashrc won't take effect until you launch a new terminal session or until you source .bashrc. So shut down any terminals you already have open and start a new one.

If you now launch SQL*Plus in exactly the way you've used so far, you should be able to type one SQL command and submit it, and then immediately be able to press the up-arrow key and retrieve it. The more SQL commands you issue over time, the more commands rlwrap will remember. As well as just scrolling through your previous SQL commands, you can press 'Ctrl+r' to give you a searchable command line history.


You can also create your own '${HOME}/.sqlplus_completions' file (locally) or '/usr/share/rlwrap/sqlplus' file (globally) with all SQL reserved words (or whatever else you want) as your auto-completion list (see the rlwrap man page for details).

An example of '${HOME}/.sqlplus_completions' with some reserved words:

 

COPY PAUSE SHUTDOWN 
DEFINE PRINT SPOOL 
DEL PROMPT SQLPLUS 
ACCEPT DESCRIBE QUIT START 
APPEND DISCONNECT RECOVER STARTUP 
ARCHIVE LOG EDIT REMARK STORE 
ATTRIBUTE EXECUTE REPFOOTER TIMING 
BREAK EXIT REPHEADER TTITLE 
BTITLE GET RESERVED UNDEFINE 
CHANGE HELP RESERVED VARIABLE 
CLEAR HOST RUN WHENEVER 
copy pause shutdown 
define print spool 
del prompt sqlplus 
accept describe quit start 
append disconnect recover startup 
archive log edit remark store 
attribute execute repfooter timing 
break exit repheader ttitle 
btitle get reserved undefine 
change help reserved variable 
clear host run whenever 

ALL ALTER AND ANY ARRAY ARROW AS ASC AT 
BEGIN BETWEEN BY 
CASE CHECK CLUSTERS CLUSTER COLAUTH COLUMNS COMPRESS CONNECT CRASH CREATE CURRENT 
DECIMAL DECLARE DEFAULT DELETE DESC DISTINCT DROP 
ELSE END EXCEPTION EXCLUSIVE EXISTS 
FETCH FORM FOR FROM 
GOTO GRANT GROUP 
HAVING 
IDENTIFIED IF IN INDEXES INDEX INSERT INTERSECT INTO IS 
LIKE LOCK 
MINUS MODE 
NOCOMPRESS NOT NOWAIT NULL 
OF ON OPTION OR ORDEROVERLAPS 
PRIOR PROCEDURE PUBLIC 
RANGE RECORD RESOURCE REVOKE 
SELECT SHARE SIZE SQL START SUBTYPE 
TABAUTH TABLE THEN TO TYPE 
UNION UNIQUE UPDATE USE 
VALUES VIEW VIEWS 
WHEN WHERE WITH 
all alter and any array arrow as asc at 
begin between by 
case check clusters cluster colauth columns compress connect crash create current 
decimal declare default delete desc distinct drop 
else end exception exclusive exists 
fetch form for from 
goto grant group 
having 
identified if in indexes index insert intersect into is 
like lock 
minus mode 
nocompress not nowait null 
of on option or orderoverlaps 
prior procedure public 
range record resource revoke 
select share size sql start subtype 
tabauth table then to type 
union unique update use 
values view views
 

Note: 
You can use 'rlwrap' with all Oracle command line utilities such as Recovery Manager (RMAN) , Oracle Data Pump (expdp), ASM command (asmcmd), etc. 

i.e.: 


alias rman='/usr/bin/rlwrap ${ORACLE_HOME}/bin/rman' 
alias expdp='/usr/bin/rlwrap ${ORACLE_HOME}/bin/expdp'
alias asmcmd='/usr/bin/rlwrap ${ORACLE_HOME}/bin/asmcmd'
References

http://utopia.knoware.nl/~hlub/uck/rlwrap/
ftp://ftp.gnu.org/gnu/readline
http://download.fedora.redhat.com/pub/epel
}}}




-- HEX, DECIMAL, ASCII

Script To Convert Hexadecimal Input Into a Decimal Value
  	Doc ID: 	1019580.6

How to Convert Numbers to Words
  	Doc ID: 	135986.1

Need To Convert A Varchar2 String Into Its Hexadecimal Equivalent
  	Doc ID: 	269578.1
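For simple cases the TO_CHAR/TO_NUMBER number format masks do the hex conversion directly, without needing the scripts above (a quick sketch):

{{{
-- decimal to hex (to_char pads a leading space for the sign)
select to_char(255, 'XX') as hexval from dual;     -- FF

-- hex to decimal
select to_number('FF', 'XX') as decval from dual;  -- 255
}}}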









http://www.orafaq.com/wiki/SQL_FAQ#How_does_one_add_a_day.2Fhour.2Fminute.2Fsecond_to_a_date_value.3F

{{{
Here are a couple of examples:
Description	Date Expression
Now	SYSDATE
Tomorrow / next day	SYSDATE + 1
Seven days from now	SYSDATE + 7
One hour from now	SYSDATE + 1/24
Three hours from now	SYSDATE + 3/24
A half hour from now	SYSDATE + 1/48
10 minutes from now	SYSDATE + 10/1440
30 seconds from now	SYSDATE + 30/86400
Tomorrow at 12 midnight	TRUNC(SYSDATE + 1)
Tomorrow at 8 AM	TRUNC(SYSDATE + 1) + 8/24
Next Monday at 12:00 noon	NEXT_DAY(TRUNC(SYSDATE), 'MONDAY') + 12/24
First day of the month at 12 midnight	TRUNC(LAST_DAY(SYSDATE ) + 1)
The next Monday, Wednesday or Friday at 9 a.m	TRUNC(LEAST(NEXT_DAY(sysdate, 'MONDAY'), NEXT_DAY(sysdate, 'WEDNESDAY'), NEXT_DAY(sysdate, 'FRIDAY'))) + 9/24
}}}
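A few of these can be verified straight from DUAL:

{{{
alter session set nls_date_format = 'YYYY-MM-DD HH24:MI:SS';

select sysdate                    as now,
       sysdate + 10/1440          as in_10_minutes,
       trunc(sysdate + 1) + 8/24  as tomorrow_8am
  from dual;
}}}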


weekday https://docs.oracle.com/cd/E51711_01/DR/WeekDay.html




http://laurentschneider.com/wordpress/2005/12/the-sqlplus-settings-i-like.html
http://awads.net/wp/2005/08/04/oracle-sqlplus/


CSV output
{{{
COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;
COLUMN name NEW_VALUE _hostname NOPRINT
select lower(host_name) name from v$instance;

set feedback off pages 0 term off head on und off trimspool on echo off lines 4000 colsep ','
spool awr_cpuwl-tableau-&_instname-&_hostname..csv
<SQL here>
spool off
host sed -n -i '2,$ p' awr_cpuwl-tableau-&_instname-&_hostname..csv
}}}
http://blog.oraclecontractors.com/?p=551
http://pastebin.com/dYCc8NXY
http://www.geekinterview.com/question_details/60974
http://larig.wordpress.com/2011/05/29/formatting-oracle-output-in-sqlplus/
http://stackoverflow.com/questions/643137/how-do-i-spool-to-a-csv-formatted-file-using-sqlplus
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2189860818012
http://database.blogs.webucator.com/2011/02/27/importing-data-using-oracle-sql-developer/  <- import data using SQL Developer
https://forums.oracle.com/forums/thread.jspa?threadID=855621   <- remove dash line below header


-- VARIABLES
http://www.dbforums.com/oracle/1089379-sqlplus-passing-parameters.html
http://www.unix.com/unix-dummies-questions-answers/25395-how-pass-values-oracle-sql-plus-unix-shell-script.html
Spice up your SQL Scripts with Variables http://www.orafaq.com/node/515




Oracle Support Resources List
http://blogs.oracle.com/Support/2007/08/


How to Identify Resource Intensive SQL for Tuning (Doc ID 232443.1)
Example "Top SQL" queries from V$SQLAREA (Doc ID 235146.1)
	

TROUBLESHOOTING Query Tuning
  	Doc ID: 	752662.1

FAQ: Query Tuning Frequently Asked Questions
  	Doc ID: 	Note:398838.1

PERFORMANCE TUNING USING 10g ADVISORS AND MANAGEABILITY FEATURES
  	Doc ID: 	276103.1

How to Submit a Testcase to Oracle Support for Reproducing an Execution Plan
  	Doc ID: 	Note:390610.1




-- RAT

Real Application Testing Now Available for Earlier Releases
  	Doc ID: 	Note:560977.1
  	
TESTING SQL PERFORMANCE IMPACT OF AN ORACLE 9i TO ORACLE DATABASE 10g RELEASE 2 UPGRADE WITH SQL PERFORMANCE ANALYZER
  	Doc ID: 	Note:562899.1



-- EXPLAIN PLAN

Methods for Obtaining a Formatted Explain Plan
  	Doc ID: 	Note:235530.1

SQLTXPLAIN.SQL - Enhanced Explain Plan and related diagnostic info for one SQL statement
  	Doc ID: 	Note:215187.1

Database Community: SQLTXPLAIN 2: Comparing Two Explain Plans using the SQLTXPLAIN COMPARE method (Doc ID 953964.1)

Database Performance Archived Webcasts (Doc ID 1050869.1)

Support Community SQLT (SQLTXPLAIN) Enhanced Explain Plan and Related Diagnostic Information for One SQL (Doc ID 764311.1)

SQLT (SQLTXPLAIN) - Tool that helps to diagnose SQL statements performing poorly (Doc ID 215187.1)

SQL Code Diagnostics: How to Create an SQLTXPLAIN ("XPLAIN" Method) in 4 to 5 Easy Steps! (Doc ID 804267.1)

bde_x.sql - Simple Explain Plan for given SQL Statement (8i-10g) (Doc ID 174603.1)

coe_xplain_80.sql - Enhanced Explain Plan for given SQL Statement (8.0) (Doc ID 156959.1)

coe_xplain_73.sql - Enhanced Explain Plan for given SQL Statement (7.3) (Doc ID 156960.1)

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046
  	Doc ID: 	Note:224270.1

Implementing and Using the PL/SQL Profiler
  	Doc ID: 	Note:243755.1

Interpreting Raw SQL_TRACE and DBMS_SUPPORT.START_TRACE output
  	Doc ID: 	Note:39817.1

TROUBLESHOOTING: Advanced Query Tuning
  	Doc ID: 	163563.1

Determining the execution plan for a distributed query
  	Doc ID: 	33838.1

How to Display All Loaded Execution Plan for a specific sql_id
  	Doc ID: 	465398.1





-- SLOW

Why is a Particular Query Slower on One Machine than Another?
  	Doc ID: 	604256.1

Potentially Expensive Query Operations
  	Doc ID: 	162142.1

TROUBLESHOOTING: Advanced Query Tuning
  	Doc ID: 	163563.1

TROUBLESHOOTING: Possible Causes of Poor SQL Performance
  	Doc ID: 	33089.1

How to Tune a Query that Cannot be Modified
  	Doc ID: 	122812.1

Diagnostics for Query Tuning Problems
  	Doc ID: 	68735.1






-- SLOW SIMULATE

How to simulate a slow query. Useful for testing of timeout issues
  	Doc ID: 	357615.1




-- HISTOGRAM

Case Study: Judicious Use of Histograms for Oracle Applications Tuning
  	Doc ID: 	358323.1





-- PREDICATE 

Query Performance is influenced by its predicate order
  	Doc ID: 	276877.1




-- SQL PROFILE

http://robineast.wordpress.com/2007/08/04/what-they-dont-tell-you-about-oracle-sql-profiles/
http://kerryosborne.oracle-guy.com/2009/04/oracle-sql-profiles/

Automatic SQL Tuning - SQL Profiles
  	Doc ID: 	271196.1

How To Move SQL Profiles From One Database To Another Database
  	Doc ID: 	457531.1

How To Capture The Entire App.Sqls Execution Plan In Sql Profile
  	Doc ID: 	556133.1

Slow Query - The Explain Plan Changed - Just By Changing One Character it is Fast Again
  	Doc ID: 	463134.1



-- 10053

How to Obtain Tracing of Optimizer Computations (EVENT 10053)
  	Doc ID: 	225598.1

CASE STUDY: Analyzing 10053 Trace Files (Doc ID 338137.1)



-- SQL 

VIEW: "V$SQL" Reference Note
  	Doc ID: 	43762.1

    Useful Join Columns:
      ( ADDRESS,HASH_VALUE ) - Join to <View:V$SQLTEXT> . ( ADDRESS,HASH_VALUE )
      

    Support Notes:
      This view shows one row for each version of each SQL statement.
      See <View:V$SQLAREA> for an aggregated view which groups all versions
      of the same SQL statement together.

      When monitoring performance it can be beneficial to use this view 
      rather than V$SQLAREA if looking at only a subset of statements in the
      shared pool.


-- TYPE

How to Determine Type or Table Dependents of an Object Type
  	Doc ID: 	69661.1



http://www.slaviks-blog.com/2010/03/30/oracle-sql_id-and-hash-value/
http://blogs.oracle.com/toddbao/2010/11/how_to_get_sql_id_from_sql_statement.html

Understanding SQL Plan Baselines in Oracle Database 11g
http://www.databasejournal.com/features/oracle/article.php/3896411/article.htm

HOW TO TUNE ONE SQL FOR VARIOUS SIZE OF DATABASES
http://toadworld.com/BLOGS/tabid/67/EntryId/615/How-to-Tune-One-SQL-for-Various-Size-of-Databases.aspx

Baseline Advisor
http://orastory.wordpress.com/2011/03/22/sql-tuning-set-to-baseline-to-advisor/


{{{
Things to note:
In 10g and 11gR1 the default for SELECT_WORKLOAD_REPOSITORY is to return only BASIC information, which excludes the plan! So DBMS_SPM.LOAD_PLANS_FROM_SQLSET doesn’t load any plans.
It doesn’t throw a warning either, which it could sensibly, since the STS has no plan, and it can see that</grumble>
This changes to TYPICAL in 11gR2 (thanks Surachart!)
Parameter “optimizer_use_sql_plan_baselines” must be set to TRUE for a baseline to be used
Flush the cursor cache after loading the baseline to make sure it gets picked up on next execution of the sql_id
}}}
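The load step itself is short; a sketch (the STS name and owner here are made up, and the SELECT_WORKLOAD_REPOSITORY caveat above applies when building the STS on 10g/11gR1):

{{{
-- load plans from a SQL Tuning Set into the SQL plan baseline repository
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
               sqlset_name  => 'MY_STS',    -- made-up STS name
               sqlset_owner => 'SYSTEM');   -- made-up owner
  DBMS_OUTPUT.PUT_LINE('plans loaded: ' || l_plans);
END;
/

-- baselines are only used when this is TRUE (the default)
ALTER SYSTEM SET optimizer_use_sql_plan_baselines = TRUE;
}}}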
http://kerryosborne.oracle-guy.com/2009/04/04/oracle-sql-profiles/
http://kerryosborne.oracle-guy.com/2009/07/31/why-isnt-oracle-using-my-outline-profile-baseline/
<<<
Greg Rahn says:	
April 5, 2009 at 4:57 pm

The main difference between an Outline and a SQL Profile is an Outline contains a full set of query execution plan directives where a SQL Profile (created by the Tuning Adviser) only contains adjustments (OPT_ESTIMATE / COLUMN_STATS / TABLE_STATS) for cardinality allowing the optimizer the option to choose the operation based on the additional information. This means an Outline always has exactly the same execution plan, but a SQL Profile may not.

To use an analogy, an Outline is a complete set of turn by turn directions, where a SQL Profile contains only the (adjusted) estimated driving times for portions of the trip.
<<<
{{{
Does SQLT provide formatted 10053 output?
SQLTXPLAIN collects 10053 trace but it does not re-format it. The trace file will be called sqlt_Snnnnn_10053_explain.trc, where nnnnn is the unique identifier for the SQLTXPLAIN command.

10053 is an internal trace and it is not documented. It was created by development, for development, to analyze issues with the cost based optimizer. It is included in SQLTXPLAIN output for completeness and so that we have the trace in the event that there is a need to engage Oracle support or development. Because SQLTXPLAIN uses EXPLAIN PLAN, the SQL ID will be different from the SQL ID of the original SQL. For example
SELECT /*+ monitor */ e.first_name "First", e.last_name "Last", d.department_name "Department"
FROM hr.employees e, hr.departments d
WHERE e.department_id = d.department_id

          which has a SQL ID of 8yw3t99dpvf8k will be changed to

EXPLAIN PLAN SET statement_id = '36861' INTO SQLTXPLAIN.sqlt$_sql_plan_table FOR
SELECT /*+ monitor */ e.first_name "First", e.last_name "Last", d.department_name "Department"
FROM hr.employees e, hr.departments d
WHERE e.department_id = d.department_id

     which has a SQL ID of 3712ayqgzw6hb.

    Because there is a lot of tracing happening during the execution of SQLTXPLAIN the trace will often not begin where the trace of the SQL of interest begins. To find the beginning of the statement of interest search the 10053 trace file for the string "Current SQL Statement".

    The end of the trace for the statement of interest can be found by searching for "atom_hint" or "END SQL Statement Dump".

    10053 can also be obtained by running SQLHC.sql.
}}}


{{{
11.4.5.0 November 21, 2012
ENH: Event 10053 is now enabled with SQL_Optimizer instead of SQL_Compiler. Trace 10053 becomes more readable.

11.4.4.2 February 2, 2012
ENH: EVENT 10053 trace includes now tracing SQL Plan Management SPM. Look for "SPM: " token in trace.

11.4.4.1 January 2, 2012
ENH: SQLT XTRACT on 11.2 uses now DBMS_SQLDIAG.DUMP_TRACE to generate 10053 on child cursor with largest average elapsed time from connecting instance.

ENH: TRCA provides now a script sqlt/run/sqltrcasplit.sql to split a trace file into a 10046 trace and the rest.
(In other words, it provides access to this capability of splitting a 10046/10053 trace to end users.)



}}}
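The DBMS_SQLDIAG.DUMP_TRACE route mentioned in the changelog above can also be called directly for a cursor already in the shared pool. A sketch, reusing the example sql_id from earlier (the file id is arbitrary):

{{{
-- dump a 10053 trace for an already-parsed cursor, without re-executing it (11.2+)
BEGIN
  DBMS_SQLDIAG.DUMP_TRACE(
    p_sql_id       => '8yw3t99dpvf8k',  -- example sql_id from above
    p_child_number => 0,
    p_component    => 'Compiler',       -- 'Compiler' gives the full 10053-style trace
    p_file_id      => 'MY_10053');      -- becomes part of the trace file name
END;
/
}}}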
! XPLORE
* ''version upgrade'' 
- because you just don't want to set optimizer_features_enable back to the lower value it was before.. let's say you upgraded from 10gR2 to 11gR2: OFE has a wide scope and will enable/disable a lot of optimizer features. Because you want to retain the 11gR2 OFE parameter, be systematic, and pinpoint the exact issue causing the performance regression, what you can do is disable/turn off that particular feature using a fix control - (can be done with no data)
- for every new release of Oracle, optimizer improvements are implemented through patches or fix controls (this started around DB version 10.2.0.2, I think), so you can toggle an optimizer feature on or off through a fix control
* ''wrong result'' 
- if it returns 1 row instead of 3.. which automatically means it's a bug! the golden rule is, any transformation done by the optimizer must not alter the result set.. it can alter the performance but not the result set.. so you want to pinpoint the specific optimizer fix control causing this wrong result set, because even a new optimizer feature may cause this.. then, after identifying the fix control culprit and letting Oracle Support know, they will provide a bug fix or patch for it - (must be done with data)
* ''just finding a good plan'' 
- a brute-force, one-by-one analysis of every fix control. This behaves like the hint injection done by DB Optimizer (heuristics), but on a wider and more detailed scope: going through the 1000+ fix controls gives you a lot of permutations to choose from. If you find the specific fix control that yields good performance, you can then play around with the execution plan using hints. You can leave this running in a dev or QA environment - (must be done with data)
* ''hard parsing (long parse times)'' 
- it is entirely possible for a new optimizer feature to make your parse times really slow. XPLORE goes through all the fix control cases and gives you valuable input on where parsing is going wrong - (can be done with no data)
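The brute-force fix-control pass described above can be pictured as a generator that writes one small test script per fix control. This is only a toy sketch: the bug numbers are placeholders, and XPLORE itself drives the list (1000+ rows) from v$session_fix_control:

```shell
#!/bin/sh
# Toy sketch of XPLORE's brute-force idea: one small script per fix control,
# each toggling a single fix OFF before re-explaining the statement.  The bug
# numbers are placeholders; XPLORE itself drives the list (1000+ rows) from
# v$session_fix_control.
for fix in 4728348 5483301 6897034; do
  cat > "xplore_fix_${fix}.sql" <<EOF
ALTER SESSION SET "_fix_control" = '${fix}:OFF';
EXPLAIN PLAN FOR SELECT /* xplore test */ * FROM dual;
ALTER SESSION SET "_fix_control" = '${fix}:ON';
EOF
done
```

Running each generated script against the test case and diffing the plans is the permutation walk XPLORE automates.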

! XHUME	
* ''troubleshooting stats''
- must be done in a dev environment because it hacks and modifies the data dictionary. What it does is update the table's last-created date to be older than the oldest statistics collection history of that table, then iterate through the entire statistics history. At some point you'll find the best execution plan, which you can pluck out using the DBMS_STATS API, or you can compare two specific statistics periods to see what changed between the collections

! XGRAM
* ''hack the histogram'' - set of scripts to modify/set and hack the histograms


{{{
EXEC sqltxplain.sqlt$a.purge_repository(81429, 81429);
}}}
{{{
select distinct statid from sqltxplain.SQLT$_STATTAB;

/home/oracle/dba/sqlt/utl/sqltimp.sql s<SQLT ID> sysadm
}}}
''show sqltxplain configuration parameters''
{{{
select name, value from  sqltxplain.SQLI$_PARAMETER;
}}}

''-- execute this''
{{{
-- this will disable the TCB and shorten the STA run
EXEC sqltxplain.sqlt$a.set_param('test_case_builder', 'N');                  <-- the default is Y

EXEC sqltxadmin.sqlt$a.set_sess_param('test_case_builder', 'N');    <-- 2022

EXEC sqltxplain.sqlt$a.set_param('sta_time_limit_secs', '30');             <-- the default is 1800sec
}}}

''-- to enable SQL Tuning Advisor do this''
{{{
EXEC sqltxplain.sqlt$a.set_param('sql_tuning_advisor', 'Y');                 <-- the default is Y
EXEC sqltxplain.sqlt$a.set_param('sta_time_limit_secs', '1800');
}}}



I usually do the compare on my laptop because on a client site I usually can't install all the other tools that I need, like DB Optimizer, SQL Developer, etc.
This tiddler shows how to use the SQLT compare feature to compare the good and bad runs of a particular SQL.

I usually do the following steps to drill down on a query, but for this tiddler I will only discuss the highlighted steps:
''- install SQLTXPLAIN
- pull bad and good SQLT runs
- do compare''
- generate local test case
- execute query
- db optimizer


! Install SQLTXPLAIN
{{{
Execute sqlt/install/sqcreate.sql connected as SYS.
# cd sqlt/install
# sqlplus / as sysdba
SQL> START sqcreate.sql
}}}


! Pull bad and good SQLT runs
{{{
# cd sqlt/run
# sqlplus apps
SQL> START sqltxtract.sql [SQL_ID]|[HASH_VALUE]
SQL> START sqltxtract.sql 0w6uydn50g8cx
SQL> START sqltxtract.sql 2524255098
}}}

The two zip files should be on your laptop; in my case I unzipped each into its own directory
{{{
oracle@karl.fedora:/trace/sqlt:orcl
$ ls -ltr
total 72
drwxr-xr-x 5 oracle dba  4096 Oct 10 18:09 utl
drwxr-xr-x 2 oracle dba  4096 Oct 20 16:04 doc
-rw-r--r-- 1 oracle dba 41940 Oct 30 12:13 sqlt_instructions.html
drwxr-xr-x 3 oracle dba  4096 Oct 30 12:56 input
drwxr-xr-x 3 oracle dba  4096 Nov 27 17:13 sqlt_s54471-bad       <-- bad
drwxr-xr-x 2 oracle dba  4096 Nov 27 17:23 install
drwxr-xr-x 3 oracle dba  4096 Nov 27 17:26 sqlt_s54491-good     <-- good 
drwxr-xr-x 2 oracle dba  4096 Nov 27 17:32 run
}}}

''contents of sqlt_s54471-bad''
{{{
oracle@karl.fedora:/trace/sqlt/sqlt_s54471-bad:orcl
$ ls -tlr
total 228468
-rw-rw-r-- 1 oracle dba     21481 Nov 26 13:33 sqlt_s54471_readme.html     <-- this HTML file contains the exact commands to do the COMPARE
-rw-rw-r-- 1 oracle dba     44785 Nov 26 13:33 sqlt_s54471_p3382835738_sqlprof.sql
-rw-rw-r-- 1 oracle dba  27420205 Nov 26 13:33 sqlt_s54471_main.html
-rw-rw-r-- 1 oracle dba    701057 Nov 26 13:33 sqlt_s54471_lite.html
-rw-rw-r-- 1 oracle dba     74054 Nov 26 13:33 sqlt_s54471_sql_monitor.txt
-rw-rw-r-- 1 oracle dba    495488 Nov 26 13:33 sqlt_s54471_sql_monitor.html
-rw-rw-r-- 1 oracle dba    517117 Nov 26 13:33 sqlt_s54471_sql_monitor_active.html
-rw-rw-r-- 1 oracle dba    582550 Nov 26 13:33 sqlt_s54471_sql_detail_active.html
-rw-rw-r-- 1 oracle dba    619823 Nov 26 13:33 sqlt_s54471_tcb.zip
-rw-rw-r-- 1 oracle dba 104563006 Nov 26 13:33 sqlt_s54471_10053_explain.trc
-rw-rw-r-- 1 oracle dba      7782 Nov 26 13:37 sqlt_s54471_tc_sql.sql
-rw-rw-r-- 1 oracle dba      8415 Nov 26 13:37 sqlt_s54471_tc_script.sql
-rw-rw-r-- 1 oracle dba    316568 Nov 26 13:37 sqlt_s54471_opatch.zip
-rw-rw-r-- 1 oracle dba      2588 Nov 26 13:37 sqlt_s54471_driver.zip
-rw-rw-r-- 1 oracle dba  41947040 Nov 26 13:38 sqlt_s54471_trc.zip
-rw-rw-r-- 1 oracle dba     28333 Nov 26 13:38 sqlt_s54471_log.zip
-rw-r--r-- 1 oracle dba  56285212 Nov 27 17:11 sqlt_s54471.zip
drwxr-xr-x 2 oracle dba      4096 Nov 27 17:13 sqlt_s54471_tc
}}}

''contents of sqlt_s54491-good''
{{{
oracle@karl.fedora:/trace/sqlt/sqlt_s54491-good:orcl
$ ls -ltr
total 219324
-rw-rw-r-- 1 oracle dba     21526 Nov 27 02:35 sqlt_s54491_readme.html     <-- this HTML file contains the exact commands to do the COMPARE
-rw-rw-r-- 1 oracle dba  34589692 Nov 27 02:35 sqlt_s54491_main.html
-rw-rw-r-- 1 oracle dba    660035 Nov 27 02:35 sqlt_s54491_lite.html
-rw-rw-r-- 1 oracle dba      1488 Nov 27 02:35 sqlt_s54491_sta_script_mem.sql
-rw-rw-r-- 1 oracle dba    333117 Nov 27 02:35 sqlt_s54491_sta_report_mem.txt
-rw-rw-r-- 1 oracle dba     57213 Nov 27 02:35 sqlt_s54491_sql_monitor.txt
-rw-rw-r-- 1 oracle dba    424680 Nov 27 02:35 sqlt_s54491_sql_monitor.html
-rw-rw-r-- 1 oracle dba    385239 Nov 27 02:35 sqlt_s54491_sql_monitor_active.html
-rw-rw-r-- 1 oracle dba    461504 Nov 27 02:35 sqlt_s54491_sql_detail_active.html
-rw-rw-r-- 1 oracle dba     40487 Nov 27 02:35 sqlt_s54491_p73080644_sqlprof.sql
-rw-rw-r-- 1 oracle dba    616556 Nov 27 02:36 sqlt_s54491_tcb.zip
-rw-rw-r-- 1 oracle dba 104557695 Nov 27 02:36 sqlt_s54491_10053_explain.trc
-rw-rw-r-- 1 oracle dba      7793 Nov 27 09:18 sqlt_s54491_tc_sql.sql
-rw-rw-r-- 1 oracle dba      8659 Nov 27 09:18 sqlt_s54491_tc_script.sql
-rw-rw-r-- 1 oracle dba    316568 Nov 27 09:18 sqlt_s54491_opatch.zip
-rw-rw-r-- 1 oracle dba      2587 Nov 27 09:18 sqlt_s54491_driver.zip
-rw-rw-r-- 1 oracle dba  33308174 Nov 27 09:18 sqlt_s54491_trc.zip
-rw-rw-r-- 1 oracle dba     28711 Nov 27 09:18 sqlt_s54491_log.zip
-rw-r--r-- 1 oracle dba  48453034 Nov 27 17:11 sqlt_s54491.zip
drwxr-xr-x 3 oracle dba      4096 Nov 27 17:44 sqlt_s54491_tc
}}}


! Do compare

* All of the steps below are executed on your laptop
* All the specific commands are in the sqlt_s<ID>_readme.html file, so you can just copy and paste them

''start with the bad run''
{{{
Unzip sqlt_s54471_tc.zip from this SOURCE in order to get sqlt_s54471_expdp.dmp.
Copy sqlt_s54471_exp.dmp to the server (BINARY).
Execute import on server:
imp sqltxplain FILE=sqlt_s54471_exp.dmp TABLES=sqlt% IGNORE=Y
OR 
just do 
$ ./sqlt_<ID>_import.sh
}}}

''next is the good run''
{{{
Unzip sqlt_s54491_tc.zip from this SOURCE in order to get sqlt_s54491_expdp.dmp.
Copy sqlt_s54491_exp.dmp to the server (BINARY).
Execute import on server:
imp sqltxplain FILE=sqlt_s54491_exp.dmp TABLES=sqlt% IGNORE=Y
OR 
just do 
$ ./sqlt_<ID>_import.sh
}}}


''query the statement_ids and plan_hash_values''
{{{

SELECT 
       p.statement_id, 
       p.plan_hash_value, 
       DECODE(p.plan_hash_value, s.best_plan_hash_value, '[B]')||
       DECODE(p.plan_hash_value, s.worst_plan_hash_value, '[W]')||
       DECODE(p.plan_hash_value, s.xecute_plan_hash_value, '[X]') attribute, 
       x.sql_id,
       round(x.ELAPSED_TIME/1000000,2) ELAPSED,
       round((x.ELAPSED_TIME/1000000)/x.EXECUTIONS,2) ELAPSED_EXEC,
       SUBSTR(s.method, 1, 3) method,
       SUBSTR(s.instance_name_short, 1, 8) instance,
       SUBSTR(s.sql_text, 1, 60) sql_text
  FROM (
         SELECT DISTINCT plan_hash_value, sqlt_plan_hash_value, statement_id
         FROM sqltxplain.sqlt$_plan_extension
       ) p,
       sqltxplain.sqlt$_sql_statement s,
       sqltxplain.SQLT$_GV$SQLSTATS x
 WHERE p.statement_id = s.statement_id
 AND   p.statement_id = x.statement_id
 ORDER BY
       p.statement_id;

SELECT LPAD(s.statement_id, 5, '0') staid,
       SUBSTR(s.method, 1, 3) method,
       SUBSTR(s.instance_name_short, 1, 8) instance,
       SUBSTR(s.sql_text, 1, 60) sql_text
  FROM sqltxplain.sqlt$_sql_statement s
 WHERE USER IN ('SYS', 'SYSTEM', 'SQLTXPLAIN', s.username)
 ORDER BY
       s.statement_id;


STATEMENT_ID PLAN_HASH_VALUE ATTRIBUTE SQL_ID           ELAPSED ELAPSED_EXEC MET INSTANCE SQL_TEXT
------------ --------------- --------- ------------- ---------- ------------ --- -------- ------------------------------------------------------------
       16274      2337881134 [B][W]    1dx0vsstj8p8m      30.41          3.8 XTR mixtrn   SELECT SETID, CBRE_PROPERTY_ID, AUDIT_STAMP, TO_CHAR(AUDIT_S
       69520      2337881134 [B]       1dx0vsstj8p8m     1519.8        52.41 XTR mixprd   SELECT SETID, CBRE_PROPERTY_ID, AUDIT_STAMP, TO_CHAR(AUDIT_S
       69520        53752269 [W]       1dx0vsstj8p8m     1519.8        52.41 XTR mixprd   SELECT SETID, CBRE_PROPERTY_ID, AUDIT_STAMP, TO_CHAR(AUDIT_S



}}}


''execute compare''
* Note: when doing the compare, you may also want to check the plan performance in the main SQLT reports:
sqlt_s<ID>_main.html -> Plans Summary 
sqlt_s<ID>_main.html -> Plan Performance Statistics
sqlt_s<ID>_main.html -> Plan Performance History
{{{
Execute the COMPARE method connecting into SQL*Plus as SQLTXPLAIN. You will be asked to enter which 2 statements you want to compare.
START sqlt/run/sqltcompare.sql
OR 
@sqltcompare <bad ID> <good ID> <bad plan_hash_value> <good plan_hash_value>

SQL> @sqltcompare.sql [statement id1] [statement id2] [plan hash value1] [plan hash value2];
SQL> @sqltcompare 16274 69520 2337881134 2337881134
}}}


{{{


xplore

1) create the nowrap.sql and specify the comment
2) specify the nowrap.sql and TC password
3) look out for "no data found" indicative of an error on the norwap.sql
4) specify correct data formatting, else it will error when nowrap.sql is executed

15:01:05 SYS@dw> start install
Test Case User: TC22518
Password: TC22518


Installation completed.
You are now connected as TC22518.

1. Set CBO env if needed
2. Execute @create_xplore_script.sql

15:01:43 TC22518@dw> @create_xplore_script.sql

Parameter 1:
XPLORE Method: XECUTE (default) or XPLAIN
"XECUTE" requires /* ^^unique_id */ token in SQL
"XPLAIN" uses "EXPLAIN PLAN FOR" command
Enter "XPLORE Method" [XECUTE]:

Parameter 2:
Include CBO Parameters: Y (default) or N
Enter "CBO Parameters" [Y]:

Parameter 3:
Include Exadata Parameters: Y (default) or N
Enter "EXADATA Parameters" [Y]:

Parameter 4:
Include Fix Control: Y (default) or N
Enter "Fix Control" [Y]:

Parameter 5:
Generate SQL Monitor Reports: N (default) or Y
Only applicable when XPLORE Method is XECUTE
Enter "SQL Monitor" [N]: Y


Review and execute @xplore_script_1.sql

SQL>@xplore_script_1.sql nowrap.sql TC22518

}}}
* ''Documentation'' http://db.tt/668XTuvg
* ''Examples'' http://db.tt/VbIAaiBF
* ''Author of SQLTXPLAIN'' - Carlos Sierra http://carlos-sierra.net/
10gR2 version
<<<
''NOTE: when installing SQLTXPLAIN do this!!! makes it less intrusive on production servers''
// True, it is not perfect yet. Very close, though. I do not like how the install scripts ask for a "schema user" or "application user". 
To work around that, I create a new role (SQLTXPLAIN_ROLE) and provide that as my "application_user". 
Then, whenever I want to run SQLTXPLAIN, I just grant/revoke that role to the real application userid. //
http://orajourn.blogspot.com/search/label/SQLTXPLAIN
<<<
11gR2 version 
<<<
o Export SQLT repository
o Import SQLT repository
o Using the COMPARE method
o Restore CBO schema object statistics
o Restore CBO system statistics
o Create local test case using SQLT files
o Create stand-alone TC based on a SQLT TC
o Load SQL Plan from SQL Set
o Restore SQL Set
o Gather CBO statistics without Histograms
o Gather CBO statistics with Histograms
o List generated files
http://www.allguru.net/database/oracle-sql-profile-tuning-command/
<<<

! Install SQLTXPLAIN
{{{
Execute sqlt/install/sqcreate.sql connected as SYS.
# cd sqlt/install
# sqlplus / as sysdba
SQL> START sqcreate.sql
}}}

! Query the statement_ids and plan_hash_values
{{{

SELECT 
       p.statement_id, 
       p.plan_hash_value, 
       DECODE(p.plan_hash_value, s.best_plan_hash_value, '[B]')||
       DECODE(p.plan_hash_value, s.worst_plan_hash_value, '[W]')||
       DECODE(p.plan_hash_value, s.xecute_plan_hash_value, '[X]') attribute, 
       x.sql_id,
       round(x.ELAPSED_TIME/1000000,2) ELAPSED,
       round((x.ELAPSED_TIME/1000000)/NULLIF(x.EXECUTIONS,0),2) ELAPSED_EXEC,
       SUBSTR(s.method, 1, 3) method,
       SUBSTR(s.instance_name_short, 1, 8) instance,
       SUBSTR(s.sql_text, 1, 60) sql_text
  FROM (
         SELECT DISTINCT plan_hash_value, sqlt_plan_hash_value, statement_id
         FROM sqltxplain.sqlt$_plan_extension
       ) p,
       sqltxplain.sqlt$_sql_statement s,
       sqltxplain.SQLT$_GV$SQLSTATS x
 WHERE p.statement_id = s.statement_id
 AND   p.statement_id = x.statement_id
 ORDER BY
       p.statement_id;

SELECT LPAD(s.statement_id, 5, '0') staid,
       SUBSTR(s.method, 1, 3) method,
       SUBSTR(s.instance_name_short, 1, 8) instance,
       SUBSTR(s.sql_text, 1, 60) sql_text
  FROM sqltxplain.sqlt$_sql_statement s
 WHERE USER IN ('SYS', 'SYSTEM', 'SQLTXPLAIN', s.username)
 ORDER BY
       s.statement_id;

}}}
''SQLTXPLAIN scenarios'' http://www.evernote.com/shard/s48/sh/57e47988-c8c0-4cfd-bd9d-b7952e468509/cce0799de209e8483deeaa0c3c6cecec
{{{
To systematically identify why is it behaving differently on DEV2 you can
make use of SQLTXPLAIN (sqltxtract) on both environments
and do a sqltcompare http://karlarao.tiddlyspot.com/#SQLT-compare
Also you can make use of the SQLT test case builder to replicate the plan
that you have on the DEV1 environment
http://karlarao.tiddlyspot.com/#%5B%5Btestcase%20-%20SQLT-tc%20(test%20case%20builder)%5D%5D
,
the *set_cbo_env.sql  will execute a couple of "alter system" commands and
you can just comment that part when executing the test case on the
application schema.

So do this:

COMPARE
-----------------
1) Execute sqltxtract <sql_id> on DEV1
2) Execute sqltxtract <sql_id> on DEV2
3) copy the sqlt_s<ID>.zip generated from DEV1 to DEV2, then extract it
4) look for sqlt_s<ID>_tc.zip and unzip, then
execute ./sqlt_<ID>_import.sh.. that will import the data points from DEV1
to the DEV2 SQLT repository
5) Follow the "query the statement_ids and plan_hash_values" from
http://karlarao.tiddlyspot.com/#SQLT-compare
6) Follow the "execute compare" from
http://karlarao.tiddlyspot.com/#SQLT-compare
7) Open the sqltcompare HTML file and look for the red highlighted text
those are the differences between the two environments


TEST CASE - reproduce the same execution plan
-------------------------------------------------------------------------
1) On DEV2, go to the sqlt_s<ID>_tc.zip that you unzipped from the sqlt of
DEV1
2) The new version of SQLTXPLAIN has xpress.sh which executes
the xpress.sql, the xpress.sql executes the following:
- restore schema object stats from DEV1
- restore system statistics from DEV1
- the sqlt_<ID>_set_cbo_env.sql prompts you to connect as the application
schema
- the tc.sql executes the test case script
3) Now if you want to have the same plan as the DEV1, just execute the
xpress.sh   BUT.. read on the scripts, and be aware
that sqlt_<ID>_set_cbo_env.sql and q.sql executes "alter system" commands
because it tries to make the environments the same. So if you don't want
those "alter system" commands executed just comment them out, you can do
this with the restore schema and object stats as well.



So whenever I do SQL troubleshooting I always run SQLTXPLAIN.. and it
helped me a lot on a bunch of scenarios like:
- pure OLTP system that upgraded from an old to new hardware the CPU speed
was faster on the new environment that made it to change a lot of plans -
then pushing the system stats back to the old hardware value made it go
back to the old plans. How did I discover it? I made use of sqltcompare and
sqlt test case builder
- troubleshooting stats differences and stats problems
- missing indexes
- finding out a locking issue caused by a trigger from one of the tables
- troubleshooting a storage problem from an old and new environment
- parameter changes on the old and new environment
- plan changes caused by parameter change
- etc.

-Karl
}}}
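The advice above about commenting out the "alter system" commands in sqlt_<ID>_set_cbo_env.sql can be scripted. This is a small sketch; the file name and contents below are made up for the demo and the real script will differ:

```shell
#!/bin/sh
# Sketch of the "comment out the alter system commands" advice above: prefix
# every ALTER SYSTEM line with "REM " so the test case stops changing
# instance-wide settings.  The file name and contents are made up for the
# demo; the real script is sqlt_<ID>_set_cbo_env.sql.
cat > set_cbo_env.sql <<'EOF'
ALTER SYSTEM SET optimizer_features_enable = '10.2.0.4';
ALTER SESSION SET optimizer_index_cost_adj = 100;
EOF

# "&" in the replacement stands for the matched text, so the original
# command is preserved behind the REM.
sed 's/^ *ALTER SYSTEM/REM &/' set_cbo_env.sql > set_cbo_env.fixed.sql
```

ALTER SESSION lines are left alone since they only affect the test-case session.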

''Whenever I do SQL troubleshooting I always run SQLTXPLAIN''.. and it helped me a lot on a bunch of scenarios like:
<<<
* a pure OLTP system upgraded from old to new hardware: the faster CPU speed on the new environment changed a lot of plans, and pushing the system stats back to the old hardware's values brought back the old plans. How did I discover it? I made use of sqltcompare and the SQLT test case builder
* troubleshooting stats differences and stats problems
* missing indexes
* finding out a locking issue caused by a trigger from one of the tables
* troubleshooting a storage problem from an old and new environment
* parameter changes on the old and new environment
* plan changes caused by parameter change
* etc.
<<<
! SQLTQ
* with the coe_xfr profile script, you can just copy and paste the original SQL and the optimizer will tokenize it; it's just that force matching will not take effect (or be the same) if you have different binds. So you can inject hints on the dev box, run coe_xfr, then edit the output file and remove the hints you entered; even with different formatting, force matching will still take effect
* a bind coded in Java as :1 instead of a valid :b1 causes the SQL_ID to be different. What you can do is edit the SQL_TEXT to use a valid bind, which is what sqltq.sql does: whenever it finds an invalid bind it injects the letter b, because you can't have a bare :1
* there are 3 things that can be wrong in a SQL: environment, stats, and binds. The force matching signature can be different because of the binds
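The bind fix sqltq.sql performs, as described above, can be approximated with sed: a bare numeric bind like :1 is not a valid identifier, so inject a "b" to make it :b1. The statement text is just a sample:

```shell
#!/bin/sh
# Approximation of the bind fix sqltq.sql performs, as described above: a
# bare numeric bind like :1 is not a valid identifier, so inject a "b" to
# turn it into :b1.  The statement text is just a sample.
fix_binds() {
  sed 's/:\([0-9][0-9]*\)/:b\1/g'
}

echo "SELECT ename FROM emp WHERE empno = :1 AND deptno = :2" | fix_binds
# -> SELECT ename FROM emp WHERE empno = :b1 AND deptno = :b2
```

Note this renames the binds, so the resulting SQL_ID will differ from the application's; the point is only to get a parseable statement for the tool.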
<<<
Hi
 You should read MOS note 167086.1. Also, SQL Performance analyzer (SPA)
is the tool that you need to perform plan regression analysis. But, SPA
would require minimal setup in production to capture tuning sets though.
  If you are interested only in stability (and not worried about using
11g new features), you could upgrade the copy of the prod database to 11g, set
optimizer compatibility to 10.2, collect baselines, enable the use of
baselines, and set compatibility to 11g. Of course, if the application is not
using bind variables, this approach might not be optimal.

Cheers

Riyaj Shamsudeen
<<<

Tips for avoiding upgrade related query problems [ID 167086.1]
E1: TDA: Set Up Application to Generate Tuned SQL Statements [ID 629261.1]
How to filter out SQL statement run by specific schemas from a SQL Tuning set. [ID 1268219.1]
How To Move a SQL Tuning Set From One Database to Another [ID 751068.1]
HOW TO TRANSPORT A SQL TUNING SET [ID 456019.1]
HOW TO LOAD QUERIES INTO A SQL TUNING SET [ID 1271343.1]
* Master Note: SQL Query Performance Overview [ID 199083.1]








http://www.freelists.org/post/oracle-l/SQLs-run-in-any-period

{{{
Hi,

We have several environments with 10g (10.2.0.4) in prod and non-prod,and we
get advantage of AWR reports to get the top sqls that are run within any
particular period. We've run into a situation when it seems that the
developers ran some scripts which they are not supposed to. Now if we need
to know all sqls that are run within any time duration to prove our point,
say last 12 hours, I'm sure there must be a way. Can anyone help me in this
regard?
We dont have auditing in place. Is there any script that anyone likes help
me with?


Thanks.
}}}


{{{
Hi Saad,

You could try my scripts  awr_topsql and awr_topsqlx which I've uploaded on
this link http://karlarao.wordpress.com/scripts-resources/

The default for these scripts is get the top 5 SQLs across SNAP_IDs and
"order by" the top 20 according to the total elapsed time..
if you "order by" SNAP_ID you'll get the same output as the AWR reports
you've generated manually using awrrpt.sql across SNAP_IDs you could check
it by comparing the output..

So this makes the task of searching for top SQL easier.. plus, I've added
some metrics to have better view/info of that top SQL..

here are the info/sections you'll get from the script (& some short
description):

1)  - snap_id, time, instance, snap duration
# The time period and snap_id could be used to show the SQLs for a given
workload period..let's say you usual work hours is 9-6pm, you could just
show the particular SQLs on that period.. there's a data range section on
the bottom of the script you could make use of it if you want to filter.

2) - sql_id, plan_hash_value, module
# You could make use of this info if you want to know where the SQL was
executed (SQL*Plus, OWB, Toad, etc.).. plus you could compare the
plan_hash_value but I suggest you make use of Kerry Osborne's
awr_plan_change.sql script if you'd like to search for unstable plans.

3)  - total elapsed time, elapsed time per exec
- cpu time
- io time
- app wait time
- concurrency wait time
- cluster wait time
# These are the time info.. at least without tracing the SQL you'd know what
time component is consuming the elapsed time of that particular SQL.. so
let's say your total elapsed time is 1000sec, and cpu time of 30sec, and io
time of 300sec... you would know that it is consuming significant IO but you
have to look for the other 670sec which could be attributed by "other" wait
events (like PX Deq Credit: send blkd,etc,etc)

4) - LIOs
- PIOs
- direct writes
- rows
- executions
- parse count
- PX
# Some other statistics about the SQL.. if your incurring a lot of PIOs, how
many times this SQL was executed on that period, the # of PX spawed.. just
be careful about these numbers if you have "executions" of let's say 8.. you
have to divide these values to 8 as well as on the time section..
only the "elapsed time per exec" is the per execution value..
this is for formatting reasons I can't fit them all on my screen.. :p

5)  - AAS (Average Active Sessions)
- Time Rank
- SQL type, SQL text
# This is one of my favorites... this will measure how's the SQL is
performing against my database server.. I'm using the AAS & CPU count as my
yardstick for a possible performance problem (I suggest reading Kyle's stuff
about this):
    if AAS < 1
      -- Database is not blocked
    AAS ~= 0
      -- Database basically idle
      -- Problems are in the APP not DB
    AAS < # of CPUs
      -- CPU available
      -- Database is probably not blocked
      -- Are any single sessions 100% active?
    AAS > # of CPUs
      -- Could have performance problems
    AAS >> # of CPUS
      -- There is a bottleneck
so having the AAS as another metric on the TOP SQL is good stuff.. I've also
added the "time rank" column to know what is the SQLs ranking on the top
SQL.. normally the default settings of the script will show time rank 1 and
2.. this could be useful also if you are finding a particular SQL that is on
rank #15 and you are seeing that there's an adhoc query that is time rank #1
and #2 affecting the database performance..




And.... this script could also show SQLs that span across SNAP_IDs... I
would order the output by SNAP_ID and filter on that particular SQL then you
would see that if the SQL is still running and span across let's say 2
SNAP_IDs then the exec count would be 0 (zero) and elapsed time per exec is
0 (zero).. only the time when the query is finished you'll see these values
populated.. I've noticed this behavior and it's the same thing that is shown
on the AWR reports.. you could go here for that scenario
http://karlarao.tiddlyspot.com/#%5B%5BTopSQL%20on%20AWR%5D%5D

}}}
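The AAS rules of thumb quoted above can be wrapped in a tiny helper for eyeballing awr_topsql output. The thresholds come from the list; "AAS >> # of CPUs" is not a precise cutoff, so 2x the CPU count below is my own assumption:

```shell
#!/bin/sh
# The AAS rules of thumb above as a small helper.  awk does the comparison
# because AAS is usually fractional.  "AAS >> # of CPUs" has no precise
# definition; 2x the CPU count is my own cutoff here.
aas_check() {
  awk -v aas="$1" -v cpus="$2" 'BEGIN {
    if (aas < 1)            print "database not blocked";
    else if (aas < cpus)    print "CPU available, probably not blocked";
    else if (aas <= 2*cpus) print "possible performance problem";
    else                    print "bottleneck";
  }'
}

aas_check 0.2 8     # database not blocked
aas_check 12 8      # possible performance problem
aas_check 40 8      # bottleneck
```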
! Starting/Stopping CRS processes
* Use crsctl as root to perform these actions
** To start Oracle high availability services on the local node
*** crsctl start crs
** To stop Oracle high availability services on the local node
*** crsctl stop crs
** To start Oracle high availability services in exclusive mode
*** crsctl start crs -excl

-- other stop commands
crsctl stop cluster
crsctl stop resource
crsctl stop crs
crsctl stop has
crsctl stop ip
crsctl stop testdns


! Administering databases
* Use the srvctl command as oracle to perform these actions
** To check the status of a database
*** srvctl status database -d <database>
** To start a database
*** srvctl start database -d <database>
** To stop a database
*** srvctl stop database -d <database>
** To start a database instance
*** srvctl start instance -i <instance> -d <database>
** To stop a database instance
*** srvctl stop instance -i <instance> -d <database>
** To start a database service
*** srvctl start service -s <service> -d <database>
** To stop a database service
*** srvctl stop service -s <service> -d <database>


! Administering database services
https://martincarstenbach.wordpress.com/2014/02/18/runtime-load-balancing-advisory-in-rac-12c/
https://easyoradba.com/2012/01/29/transparent-application-failover-taf-service-in-oracle-rac-11gr2/

* Add a service
** srvctl add service -d dw -s dw.local -r dw1,dw2
* Use the srvctl command as oracle to perform these actions
** To start a service
*** srvctl start service -d <database> -s <service>
** To stop a service
*** srvctl stop service -d <database> -s <service>
** To relocate a service from one instance to another
*** srvctl relocate service -d <database> -s <service> -i <old_instance> -t <new_instance>
** -f disconnects all sessions during stop or relocate service operations
** To delete a service
*** srvctl remove service -d <database> -s <service>
* Inventory services
** srvctl config service -d PALLOC
** srvctl config scan
* Service creation example for a WebLogic connection pool
** srvctl add service -d PALLOC -s PALLOC_SVC -preferred PALLOC1,PALLOC2 -clbgoal short -rlbgoal SERVICE_TIME -notification true


! Checking the status of cluster managed resources
* The following command is in /usr/local/bin and gives the status of cluster resources
** crsstat


! Listener
* To migrate the listener to a new ORACLE_HOME
** srvctl modify listener -l LISTENER_HCMPRD -o /u01/app/oracle/product/11.2.0.3/dbhome_1




https://blogs.oracle.com/gverma/entry/crsctl_start_crs_does_not_work
http://www.datadisk.co.uk/html_docs/rac/rac_cs.htm
http://www.oracle-home.ro/Oracle_Database/RAC/Startup-start-up-Oracle-Clusterware.html
http://www.oracle-home.ro/Oracle_Database/RAC/11gR2-Clusterware-Startup-Sequence.html
Troubleshoot Grid Infrastructure Startup Issues [ID 1050908.1]
http://www.dbaexpert.com/ASM.Pocket.pdf


https://jorgebarbablog.wordpress.com/2016/03/21/how-to-load-the-ssb-schema-into-an-oracle-database/

http://guyharrison.squarespace.com/blog/2010/10/21/accelerating-oracle-database-performance-with-ssd.html

''MacBook''
http://macperformanceguide.com/Reviews-SSDMacPro.html
http://www.anandtech.com/show/2504/7
http://www.anandtech.com/show/2445/20
http://www.storagereview.com/how_improve_low_ssd_performance_intel_series_5_chipset_environments

Apple's 2010 MacBook Air (11 & 13 inch) Thoroughly Reviewed
http://www.anandtech.com/Show/Index/3991?cPage=13&all=False&sort=0&page=4&slug=apples-2010-macbook-air-11-13inch-reviewed <-- GOOD STUFF 

Support and Q&A for Solid-State Drives
http://blogs.msdn.com/b/e7/archive/2009/05/05/support-and-q-a-for-solid-state-drives-and.aspx <-- GOOD STUFF

http://www.anandtech.com/show/2738 <-- GOOD STUFF REVIEW + TRIM + NICE EXPLANATIONS
http://www.usenix.org/event/usenix08/tech/full_papers/agrawal/agrawal_html/index.html <-- GOOD STUFF PAPER

http://en.wikipedia.org/wiki/NAND_flash#NAND_flash
http://www.anandtech.com/Show/Index/2829?cPage=5&all=False&sort=0&page=11&slug=  <-- The SSD Relapse: Understanding and Choosing the Best SSD
http://forum.notebookreview.com/alienware-m17x/509472-alienware-m17x-crystalmarkdisk-2-2-a-3.html
http://www.pcworld.com/article/192579 <-- how to install SSD in your laptop
http://www.zdnet.com/reviews/product/laptops/apple-macbook-air-fall-2010-core-2-duo-186ghz-128gb-ssd-133-inch/34198701?tag=mantle_skin;content
http://www.tomshardware.com/reviews/compactflash-sdhc-class-10,2574-8.html
http://www.lexar.com/products/lexar-professional-133x-sdxc-card?category=4155
http://www.legitreviews.com/

Intel 320 series
http://www.amazon.com/Intel-SATA-2-5-Inch-Solid-State-Drive/dp/B004T0DNP6/ref=sr_1_7?ie=UTF8&s=electronics&qid=1302071634&sr=1-7
http://www.anandtech.com/show/4244/intel-ssd-320-review/3

OCZ Vertex 3 
http://www.anandtech.com/show/4186/ocz-vertex-3-preview-the-first-client-focused-sf2200


http://www.tomshardware.com/reviews/battlefield-rift-ssd,3062-14.html   ''SSD IO profile - review for games''
http://www.storagereview.com/ssd_vs_hdd - ''SSD vs HDD''


http://guyharrison.squarespace.com/blog/2011/12/6/using-ssd-for-redo-on-exadata-pt-2.html    ''<-- 4KB chunks is faster than 512 bytes''

http://flashdba.com/4k-sector-size/  ''<-- 4K sector size''
http://en.wikipedia.org/wiki/Advanced_Format
https://ata.wiki.kernel.org/index.php/ATA_4_KiB_sector_issues


http://hoopercharles.wordpress.com/2011/12/18/idle-thoughts-ssd-redo-logs-and-sector-size/  ''<-- hoopers, noons, guy discussing the issue.. on my test cases, I noticed huge improvements on sequential read/write with higher chunks when short stroking an LVM from 1MB to 4MB chunks''


''Anatomy of a Solid-state Drive'' http://queue.acm.org/detail.cfm?id=2385276

http://highscalability.com/blog/2013/6/13/busting-4-modern-hardware-myths-are-memory-hdds-and-ssds-rea.html


''Using Solid State Disk to optimize Oracle databases series'' http://guyharrison.squarespace.com/ssdguide
''other tagged as SSD'' http://guyharrison.squarespace.com/blog/tag/ssd

''whitepaper'' http://www.quest.com/Quest_Site_Assets/WhitePapers/Best_Practices_for_Optimizing_Oracle_RDBMS_with_Solid_State_Disk-final.pdf
''cool presentation from LSI guys'' http://www.oswoug.org/Slides/LSI/SolidStateStorageinOracleEnvironmentsv4.pptx
<<<
http://jonathanlewis.wordpress.com/2012/10/05/ssd-2/#comments

I’m not too surprised about Guy’s conclusions about redo on SSD.
This is not because I am expert on SSD (I can spell it) but because I attended a presentation a couple of years ago by a couple of engineers from LSI.
Their job was to do performance testing of the new LSI SSD product for Oracle Flashcache.
They were surprised to consistently find that redo performed better on spinning rust than on SSD.
After discussing it with some other engineers they had a better understanding of the limitations of SSD when used with redo.
The Powerpoint for that presentation is here:
http://www.oswoug.org/Slides/LSI/SolidStateStorageinOracleEnvironmentsv4.pptx

The most interesting redo bits aren't available in the presentation; you had to be there.
There is mention however that due to sequential writes redo performs better on HDD.
<<<

''some of the important points''
{{{
OK so now that I have this super fast device – what does that mean?  The obvious, well isn’t…

It’s all equally accessible – no short stroking 
While it doesn’t rotate, mixed reads and writes do slow it down
Scanning the Device for bad sectors is a thing of the past
It may not be necessary to stripe for performance
In cache cases you might not even need to mirror SSDs
Using Smart Flash Cache AND moving data objects to SSD decreased performance
Online Redo Logs are best handled by HDD because of the sequential writes
}}}


''Other references''

''Solid State Drive vs. Hard Disk Drive Price and Performance Study''
http://www.dell.com/downloads/global/products/pvaul/en/ssd_vs_hdd_price_and_performance_study.pdf
http://en.wikipedia.org/wiki/Solid-state_drive#cite_note-72
http://www.intel.com/support/ssdc/hpssd/sb/CS-029623.htm#5

''redo on SSD''
http://kevinclosson.wordpress.com/2007/07/21/manly-men-only-use-solid-state-disk-for-redo-logging-lgwr-io-is-simple-but-not-lgwr-processing/
http://communities.intel.com/community/datastack/blog/2011/11/07/improve-database-performance-redo-and-transaction-logs-on-solid-state-disks-ssds
http://www.pythian.com/blog/de-confusing-ssd-for-oracle-databases/
http://www.linkedin.com/groups/Anybody-using-SSD-Redo-logs-2922607.S.52078141
http://serverfault.com/questions/159687/putting-oracle-redo-logs-on-dram-ssd-for-a-heavy-write-database
http://odenysenko.wordpress.com/2012/10/18/troubleshooting-log-file-sync-waits/
http://orainternals.wordpress.com/2008/07/07/tuning-log-file-sync-wait-events/
http://goo.gl/8hNbl
http://www.freelists.org/post/oracle-l/Exadata-How-do-you-use-FlashDisk
http://www.lsi.com/downloads/Public/Solid%20State%20Storage/WarpDrive%20SLP-300/WarpDrive_Oracle_Best_Practices.pdf
http://www.emc.com/collateral/hardware/white-papers/h5967-leveraging-clariion-cx4-oracle-deploy-wp.pdf

















http://www.electronicproducts.com/Passive_Components/Capacitors/Supercapacitors_for_SSD_backup_power.aspx
{{{
# create the dba group and the oracle user, then set up the Oracle base directory
groupadd -g 500 dba
useradd -u 500 -g dba -G dba oracle
mkdir -p /u01/app/oracle
chown -R oracle:dba /u01
chmod -R 775 /u01/
}}}


{{{
# register the JDK with the alternatives system (priority 1);
# point it at whichever install path actually exists:
#   /opt/jdk1.6.0_27/bin  or  /usr/java/jdk1.6.0_27/bin
alternatives --install /usr/bin/java java /opt/jdk1.6.0_27/bin/java 1
alternatives --install /usr/bin/java java /usr/java/jdk1.6.0_27/bin/java 1
}}}







{{{
yum install -y rng-utils-2

# package prerequisites (minimum versions)
make-3.81
binutils-2.17.50.0.6
gcc-4.1.1
libaio-0.3.106
glibc-common-2.3.4-2.9
compat-libstdc++-296-2.96-132.7.2
libstdc++-4.1.1
libstdc++-devel-4.1.1
setarch-1.6-1
sysstat-5.0.5-1
compat-db-4.1.25-9
}}}














-deconfig dbcontrol db [-repos drop] [-cluster] [-silent] [parameters]: de-configure Database Control
-deconfig centralAgent (db | asm) [-cluster] [ -silent] [parameters]: de-configure central agent management
-deconfig all db [-repos drop] [-cluster] [-silent] [parameters]: de-configure both Database Control and central agent management


/u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/logs/EMAgentPush2011-09-26_07-49-39-AM.log







FO: BaseServiceHandler.process Action code: 10
INFO: BaseServiceHandler.process Action code: 10
INFO: BaseServiceHandler.process Action code: 10
INFO: ======SSH setup is already exists for user: oracle for nodes: db1
INFO: Perform doSSHConnectivitySetup which is Mandatory : PASSED
INFO: RETURNING FROM PERFORM VALIDATION:true
INFO: GenericInstaller, validation done.... do connectivity next...
INFO: UIXmlWrapper.UIXmlWrapper filename: /u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/logs/tempUI.xml
INFO: prodName: 1 noPrereqClone: false
INFO: For Product : oracle.sysman.prov.agentpush.step1 there will be real deploymnet
INFO: UIXmlWrapper.UIXmlWrapper filename: /u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/logs/tempUI.xml
INFO: VirtualHost set to true, so adding a sample entry to the new file
INFO: setEnvMappingFile:Init RI with REMOTE_PATH_PROPERTIES_LOC= null
INFO: Exception :  [Message:] No path files specified for platform on nodes "db1". [Exception:] oracle.sysman.prov.remoteinterfaces.exception.FatalException: No path files specified for platform on nodes "db1".
        at oracle.sysman.prov.remoteinterfaces.nativesystem.NativeSystem.startup(NativeSystem.java:513)
        at oracle.sysman.prov.remoteinterfaces.clusterops.ClusterBaseOps.startup(ClusterBaseOps.java:425)
        at oracle.sysman.prov.remoteinterfaces.clusterops.ClusterBaseOps.startup(ClusterBaseOps.java:338)
        at oracle.sysman.prov.agentpush.services.RemoteInterfaceWrapper.setEnvMappingFile(RemoteInterfaceWrapper.java:550)
        at oracle.sysman.prov.agentpush.services.GenericInstaller.helper(GenericInstaller.java:302)
        at oracle.sysman.prov.agentpush.services.GenericInstaller.run(GenericInstaller.java:658)
        at java.lang.Thread.run(Thread.java:662)







FO: BaseServiceHandler.process Action code: 10
INFO: BaseServiceHandler.process Action code: 10
INFO: BaseServiceHandler.process Action code: 10
INFO: RetVAL: <?xml version = '1.0' encoding = 'UTF-8'?><prov:Descriptions version="1.0.0" xmlns:prov="http://www.oracle.com/sysman/prov/deployment" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.oracle.com/sysman/prov/deployment deploy_state.xsd">  <prov:Provisioning>
    <prov:MetaData>
        <prov:Session sessionId="-55333588:132a45d77d9:-7fe7:1317050630064" timestamp="2011-09-26_10-23-46-AM">
            <prov:SessionLocation location="/u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM"/>
        </prov:Session>
    <Prereqs xmlns="http://www.oracle.com/sysman/prov/deployment"><ActionState node="localnode"><Action name="agentpush_localPrereq_StartUp" state="success"/><Action name="runlocalprereqs" state="success"/><Action name="agentpush_localPrereq_ShutDown" state="success"/></ActionState><ActionState node="db1"><Action name="agentpush_remotePrereq_StartUp" state="success"/><Action name="runremoteprereqs" state="not_executed"/><Action name="agentpush_remotePrereq_ShutDown" state="success"/></ActionState><LogLocation node="localnode" location="Prereqs"/><LogLocation node="db1" location="Prereqs/db1"/><PrereqResultsLocation node="localnode" location="Prereqs/results"/><PrereqResultsLocation node="db1" location="Prereqs/db1/results"/></Prereqs><Fixup xmlns="http://www.oracle.com/sysman/prov/deployment"><FixupLocation node="localnode" location="Prereqs/Fixup"/><FixupLocation node="db1" location="Prereqs/db1/Fixup"/></Fixup></prov:MetaData>
        <prov:Interview>
        <prov:DeployMode name="newagent"/>
            <prov:Attribute name="installType" type="String" value="Fresh Install"/>
            <prov:Attribute name="shiphomeLoc" type="String" value="1"/>
            <prov:Attribute name="shiphomeLocVal" type="String" value="null"/>
            <prov:Attribute name="remoteHostNamesStr" type="String" value="db1"/>
            <prov:Attribute name="installBaseDir" type="String" value="/u01/app/oracle"/>
            <prov:Attribute name="version" type="String" value="11.1.0.1.0"/>
            <prov:Attribute name="clusterInstall" type="String" value="null"/>
            <prov:Attribute name="clusterNodeNames" type="String" value="null"/>
            <prov:Attribute name="clusterName" type="String" value=""/>
            <prov:Attribute name="username" type="String" value="oracle"/>
            <prov:Attribute name="portValue" type="String" value="3872"/>
            <prov:Attribute name="preInstallScript" type="String" value=""/>
            <prov:Attribute name="runAsRootPreInstallScript" type="String" value="null"/>
            <prov:Attribute name="postInstallScript" type="String" value=""/>
            <prov:Attribute name="runAsRootPostInstallScript" type="String" value="null"/>
            <prov:Attribute name="runRootSH" type="String" value="null"/>
            <prov:Attribute name="virtualHost" type="String" value="on"/>
            <prov:Attribute name="SLBHost" type="String" value=""/>
            <prov:Attribute name="SLBPort" type="String" value=""/>
            <prov:Attribute name="params" type="String" value=""/>
            <prov:Attribute name="omsPassword" type="String" value="*****"/>
            <prov:Attribute name="isOMS10205OrNewer" type="String" value="true"/>
            <prov:Attribute name="NO_OF_PREREQ_XMLS_TO_PARSE" type="String" value="2"/>
            <prov:Attribute name="appPrereqEntryPointDir" type="String" value="emagent_install"/>
            <prov:Attribute name="platform" type="String" value="linux_x64"/>
        </prov:Interview>
  </prov:Provisioning>
</prov:Descriptions>
INFO: BaseServiceHandler.process Action code: 10
INFO: PrereqWaitServiceHandler._handleBasicPrereqCompletion:agentInstallProps
INFO: prodName: 1 noPrereqClone: false cloneSrcHomenull
INFO: isPrereqProd: false is clone: false isNoPrereqClonefalse
INFO: Its not fake deployment for prop : oracle.sysman.prov.agentpush.step1
INFO: PrereqWaitServiceHandler:_handleBasicPrereqCompletion: retryStarted: null
INFO: PrereqWaitServiceHandler:_handleBasicPrereqCompletion: recoveryStarted: null
INFO: noOfPrereqXmlsToParse:2
INFO: noOfPrereqXmlsToParseInt:2
INFO: finding entrypoint for dir: /u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM/prereqs/
INFO: /u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM/prereqs//entrypoints exists
INFO: Local Prereq returning all entry points under :/u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM/prereqs/
INFO: Prereq parsing: entrypoint is : connectivity
INFO: Entry Point:/u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM/prereqs/entrypoints/connectivity has results.xml file:/u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM/prereqs/entrypoints/connectivity/local/results/agent/agent_prereq_results.xml
INFO: prereq resultXMLs found for node local are: [/u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM/prereqs/entrypoints/connectivity/local/results/agent/agent_prereq_results.xml]
INFO: Result file location is : /u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM/prereqs/entrypoints/connectivity/local/results/agent/agent_prereq_results.xml
INFO: no of prereqs executed as found in result file   /u01/app/oracle/product/middleware/oms11g/sysman/prov/agentpush/2011-09-26_10-23-46-AM/prereqs/entrypoints/connectivity/local/results/agent/agent_prereq_results.xml : 43
INFO: the result found till now:




-------------------------------------------------------





[oracle@emgc11g agent_11010]$
[oracle@emgc11g agent_11010]$ scp Linux_x86_64_Grid_Control_agent_download_11_1_0_1_0.zip oracle@db1:/u01/app/oracle/
Linux_x86_64_Grid_Control_agent_download_11_1_0_1_0.zip                                                                                       3%   15MB  12.4KB/s - stalled -^CKilled by signal 2.
[oracle@emgc11g agent_11010]$
[oracle@emgc11g agent_11010]$
[oracle@emgc11g agent_11010]$
[oracle@emgc11g agent_11010]$ scp -l 8192 Linux_x86_64_Grid_Control_agent_download_11_1_0_1_0.zip oracle@db1:/u01/app/oracle/
Linux_x86_64_Grid_Control_agent_download_11_1_0_1_0.zip                                                                                       1% 4808KB  12.6KB/s 10:21:21 ET^CKilled by signal 2.


[oracle@emgc11g agent_11010]$ scp Linux_x86_64_Grid_Control_agent_download_11_1_0_1_0.zip oracle@db1:/u01/app/oracle/
Linux_x86_64_Grid_Control_agent_download_11_1_0_1_0.zip                                                                                       0% 2208KB   1.5MB/s   05:05 ETA^CKilled by signal 2.



[oracle@emgc11g agent_11010]$
[oracle@emgc11g agent_11010]$ ssh -vvv db1







[oracle@emgc11g agent_11010]$ ssh -vvv db1
OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
debug1: Reading configuration data /home/oracle/.ssh/config
debug1: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to db1 [192.168.203.221] port 22.
debug1: Connection established.
debug1: identity file /home/oracle/.ssh/identity type -1
debug3: Not a RSA1 key file /home/oracle/.ssh/id_rsa.
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug3: key_read: missing keytype
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug2: key_type_from_name: unknown key type '-----END'
debug3: key_read: missing keytype
debug1: identity file /home/oracle/.ssh/id_rsa type 1
debug1: identity file /home/oracle/.ssh/id_dsa type -1
debug1: loaded 3 keys
debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
debug1: match: OpenSSH_4.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_4.3
debug2: fd 3 setting O_NONBLOCK
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,zlib@openssh.com
debug2: kex_parse_kexinit: none,zlib@openssh.com
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_init: found hmac-md5
debug1: kex: server->client aes128-cbc hmac-md5 none
debug2: mac_init: found hmac-md5
debug1: kex: client->server aes128-cbc hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug2: dh_gen_key: priv key bits set: 119/256
debug2: bits set: 507/1024
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug3: check_host_in_hostfile: filename /home/oracle/.ssh/known_hosts
debug3: check_host_in_hostfile: match line 1
debug3: check_host_in_hostfile: filename /home/oracle/.ssh/known_hosts
debug3: check_host_in_hostfile: match line 1
debug1: Host 'db1' is known and matches the RSA host key.
debug1: Found key in /home/oracle/.ssh/known_hosts:1
debug2: bits set: 495/1024
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /home/oracle/.ssh/identity ((nil))
debug2: key: /home/oracle/.ssh/id_rsa (0x7f037a015360)
debug2: key: /home/oracle/.ssh/id_dsa ((nil))
debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug3: start over, passed a different list publickey,gssapi-with-mic,password
debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password
debug3: authmethod_lookup gssapi-with-mic
debug3: remaining preferred: publickey,keyboard-interactive,password
debug3: authmethod_is_enabled gssapi-with-mic
debug1: Next authentication method: gssapi-with-mic
debug3: Trying to reverse map address 192.168.203.221.
debug1: Unspecified GSS failure.  Minor code may provide more information
Unknown code krb5 195

debug1: Unspecified GSS failure.  Minor code may provide more information
Unknown code krb5 195

debug1: Unspecified GSS failure.  Minor code may provide more information
Unknown code krb5 195

debug2: we did not send a packet, disable method
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /home/oracle/.ssh/identity
debug3: no such identity: /home/oracle/.ssh/identity
debug1: Offering public key: /home/oracle/.ssh/id_rsa
debug3: send_pubkey_test
debug2: we sent a publickey packet, wait for reply
debug1: Server accepts key: pkalg ssh-rsa blen 151
debug2: input_userauth_pk_ok: SHA1 fp 8c:3b:de:62:b0:8f:61:41:da:38:55:12:e7:7a:4d:0b:03:29:da:3f
debug3: sign_and_send_pubkey
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug3: ssh_session2_open: channel_new: 0
debug2: channel 0: send open
debug1: Entering interactive session.
debug2: callback start
debug2: client_session2_setup: id 0
debug2: channel 0: request pty-req confirm 0
debug3: tty_make_modes: ospeed 38400
debug3: tty_make_modes: ispeed 38400
debug3: tty_make_modes: 1 3
debug3: tty_make_modes: 2 28
debug3: tty_make_modes: 3 127
debug3: tty_make_modes: 4 21
debug3: tty_make_modes: 5 4
debug3: tty_make_modes: 6 0
debug3: tty_make_modes: 7 0
debug3: tty_make_modes: 8 17
debug3: tty_make_modes: 9 19
debug3: tty_make_modes: 10 26
debug3: tty_make_modes: 12 18
debug3: tty_make_modes: 13 23
debug3: tty_make_modes: 14 22
debug3: tty_make_modes: 18 15
debug3: tty_make_modes: 30 0
debug3: tty_make_modes: 31 0
debug3: tty_make_modes: 32 0
debug3: tty_make_modes: 33 0
debug3: tty_make_modes: 34 0
debug3: tty_make_modes: 35 0
debug3: tty_make_modes: 36 1
debug3: tty_make_modes: 37 0
debug3: tty_make_modes: 38 1
debug3: tty_make_modes: 39 0
debug3: tty_make_modes: 40 0
debug3: tty_make_modes: 41 0
debug3: tty_make_modes: 50 1
debug3: tty_make_modes: 51 1
debug3: tty_make_modes: 52 0
debug3: tty_make_modes: 53 1
debug3: tty_make_modes: 54 1
debug3: tty_make_modes: 55 1
debug3: tty_make_modes: 56 0
debug3: tty_make_modes: 57 0
debug3: tty_make_modes: 58 0
debug3: tty_make_modes: 59 1
debug3: tty_make_modes: 60 1
debug3: tty_make_modes: 61 1
debug3: tty_make_modes: 62 0
debug3: tty_make_modes: 70 1
debug3: tty_make_modes: 71 0
debug3: tty_make_modes: 72 1
debug3: tty_make_modes: 73 0
debug3: tty_make_modes: 74 0
debug3: tty_make_modes: 75 0
debug3: tty_make_modes: 90 1
debug3: tty_make_modes: 91 1
debug3: tty_make_modes: 92 0
debug3: tty_make_modes: 93 0
debug1: Sending environment.
debug3: Ignored env HOSTNAME
debug3: Ignored env SHELL
debug3: Ignored env TERM
debug3: Ignored env HISTSIZE
debug3: Ignored env KDE_NO_IPV6
debug3: Ignored env QTDIR
debug3: Ignored env QTINC
debug3: Ignored env USER
debug3: Ignored env LD_LIBRARY_PATH
debug3: Ignored env LS_COLORS
debug3: Ignored env ORACLE_SID
debug3: Ignored env ORACLE_BASE
debug3: Ignored env KDEDIR
debug3: Ignored env MAIL
debug3: Ignored env PATH
debug3: Ignored env INPUTRC
debug3: Ignored env PWD
debug1: Sending env LANG = en_US.UTF-8
debug2: channel 0: request env confirm 0
debug3: Ignored env KDE_IS_PRELINKED
debug3: Ignored env SSH_ASKPASS
debug3: Ignored env SHLVL
debug3: Ignored env HOME
debug3: Ignored env LOGNAME
debug3: Ignored env QTLIB
debug3: Ignored env CVS_RSH
debug3: Ignored env LESSOPEN
debug3: Ignored env ORACLE_HOME
debug3: Ignored env G_BROKEN_FILENAMES
debug3: Ignored env _
debug3: Ignored env OLDPWD
debug2: channel 0: request shell confirm 0
debug2: fd 3 setting TCP_NODELAY
debug2: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel 0: rcvd adjust 2097152
Last login: Mon Sep 26 11:07:21 2011 from 192.168.203.15
https://odd.blog/2008/12/10/how-to-fix-ssh-timeout-problems/
http://ask.systutorials.com/1694/how-to-enable-ssh-service-on-fedora-linux
https://docs.oseems.com/general/application/ssh/disable-timeout
http://www.cyberciti.biz/tips/open-ssh-server-connection-drops-out-after-few-or-n-minutes-of-inactivity.html

{{{
# edit the file
/etc/ssh/sshd_config

TCPKeepAlive no 
ClientAliveInterval 30
ClientAliveCountMax 100

# restart the service
systemctl stop sshd.service
systemctl start sshd.service

}}}
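The same keepalive effect can be had from the client side without touching the server. A minimal `~/.ssh/config` fragment (the interval/count values are just examples, mirroring the server settings above):

```
# ~/.ssh/config -- client-side keepalives
Host *
    ServerAliveInterval 30
    ServerAliveCountMax 100
```

`ServerAliveInterval` sends a keepalive probe through the encrypted channel every N seconds of inactivity, so it also works through NATs and firewalls that drop idle connections.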

or do it in PuTTY:
https://patrickmn.com/aside/how-to-keep-alive-ssh-sessions/
<<<
On Windows (PuTTY)
In your session properties, go to Connection and under Sending of null packets to keep session active, set Seconds between keepalives (0 to turn off) to e.g. 300 (5 minutes).
<<<

also install the SQL Developer keepalive extension
http://scristalli.github.io/SQL-Developer-4-keepalive/
https://www.swift.com/our-solutions/interfaces-and-integration/alliance-messaging-hub

https://www.swift.com/
!current session
{{{
select res.*
    from (
      select *
      from (
        select
          sys_context ('userenv','ACTION') ACTION,
          sys_context ('userenv','AUDITED_CURSORID') AUDITED_CURSORID,
          sys_context ('userenv','AUTHENTICATED_IDENTITY') AUTHENTICATED_IDENTITY,
          sys_context ('userenv','AUTHENTICATION_DATA') AUTHENTICATION_DATA,
          sys_context ('userenv','AUTHENTICATION_METHOD') AUTHENTICATION_METHOD,
          sys_context ('userenv','BG_JOB_ID') BG_JOB_ID,
          sys_context ('userenv','CLIENT_IDENTIFIER') CLIENT_IDENTIFIER,
          sys_context ('userenv','CLIENT_INFO') CLIENT_INFO,
          sys_context ('userenv','CURRENT_BIND') CURRENT_BIND,
          sys_context ('userenv','CURRENT_EDITION_ID') CURRENT_EDITION_ID,
          sys_context ('userenv','CURRENT_EDITION_NAME') CURRENT_EDITION_NAME,
          sys_context ('userenv','CURRENT_SCHEMA') CURRENT_SCHEMA,
          sys_context ('userenv','CURRENT_SCHEMAID') CURRENT_SCHEMAID,
          sys_context ('userenv','CURRENT_SQL') CURRENT_SQL,
          sys_context ('userenv','CURRENT_SQLn') CURRENT_SQLn,
          sys_context ('userenv','CURRENT_SQL_LENGTH') CURRENT_SQL_LENGTH,
          sys_context ('userenv','CURRENT_USER') CURRENT_USER,
          sys_context ('userenv','CURRENT_USERID') CURRENT_USERID,
          sys_context ('userenv','DATABASE_ROLE') DATABASE_ROLE,
          sys_context ('userenv','DB_DOMAIN') DB_DOMAIN,
          sys_context ('userenv','DB_NAME') DB_NAME,
          sys_context ('userenv','DB_UNIQUE_NAME') DB_UNIQUE_NAME,
          sys_context ('userenv','DBLINK_INFO') DBLINK_INFO,
          sys_context ('userenv','ENTRYID') ENTRYID,
          sys_context ('userenv','ENTERPRISE_IDENTITY') ENTERPRISE_IDENTITY,
          sys_context ('userenv','FG_JOB_ID') FG_JOB_ID,
          sys_context ('userenv','GLOBAL_CONTEXT_MEMORY') GLOBAL_CONTEXT_MEMORY,
          sys_context ('userenv','GLOBAL_UID') GLOBAL_UID,
          sys_context ('userenv','HOST') HOST,
          sys_context ('userenv','IDENTIFICATION_TYPE') IDENTIFICATION_TYPE,
          sys_context ('userenv','INSTANCE') INSTANCE,
          sys_context ('userenv','INSTANCE_NAME') INSTANCE_NAME,
          sys_context ('userenv','IP_ADDRESS') IP_ADDRESS,
          sys_context ('userenv','ISDBA') ISDBA,
          sys_context ('userenv','LANG') LANG,
          sys_context ('userenv','LANGUAGE') LANGUAGE,
          sys_context ('userenv','MODULE') MODULE,
          sys_context ('userenv','NETWORK_PROTOCOL') NETWORK_PROTOCOL,
          sys_context ('userenv','NLS_CALENDAR') NLS_CALENDAR,
          sys_context ('userenv','NLS_CURRENCY') NLS_CURRENCY,
          sys_context ('userenv','NLS_DATE_FORMAT') NLS_DATE_FORMAT,
          sys_context ('userenv','NLS_DATE_LANGUAGE') NLS_DATE_LANGUAGE,
          sys_context ('userenv','NLS_SORT') NLS_SORT,
          sys_context ('userenv','NLS_TERRITORY') NLS_TERRITORY,
          sys_context ('userenv','OS_USER') OS_USER,
          sys_context ('userenv','POLICY_INVOKER') POLICY_INVOKER,
          sys_context ('userenv','PROXY_ENTERPRISE_IDENTITY') PROXY_ENTERPRISE_IDENTITY,
          sys_context ('userenv','PROXY_USER') PROXY_USER,
          sys_context ('userenv','PROXY_USERID') PROXY_USERID,
          sys_context ('userenv','SERVER_HOST') SERVER_HOST,
          sys_context ('userenv','SERVICE_NAME') SERVICE_NAME,
          sys_context ('userenv','SESSION_EDITION_ID') SESSION_EDITION_ID,
          sys_context ('userenv','SESSION_EDITION_NAME') SESSION_EDITION_NAME,
          sys_context ('userenv','SESSION_USER') SESSION_USER,
          sys_context ('userenv','SESSION_USERID') SESSION_USERID,
          sys_context ('userenv','SESSIONID') SESSIONID,
          sys_context ('userenv','SID') SID,
          sys_context ('userenv','STATEMENTID') STATEMENTID,
          sys_context ('userenv','TERMINAL') TERMINAL
        from dual
        -- where sys_context ('userenv','SESSION_USER') not in ('SYS', 'XDB')    -- <<<<< filter by user
      )
      unpivot include nulls (
        val for name in (action, audited_cursorid, authenticated_identity, authentication_data, authentication_method, bg_job_id, client_identifier, client_info, current_bind, current_edition_id, current_edition_name, current_schema, current_schemaid, current_sql, current_sqln, current_sql_length, current_user, current_userid, database_role, db_domain, db_name, db_unique_name, dblink_info, entryid, enterprise_identity, fg_job_id, global_context_memory, global_uid, host, identification_type, instance, instance_name, ip_address, isdba, lang, language, module, network_protocol, nls_calendar, nls_currency, nls_date_format, nls_date_language, nls_sort, nls_territory, os_user, policy_invoker, proxy_enterprise_identity, proxy_user, proxy_userid, server_host, service_name, session_edition_id, session_edition_name, session_user, session_userid, sessionid, sid, statementid, terminal)
      )
    ) res;
}}}

!other session
{{{
-- check with
select name, value
from   V$SES_OPTIMIZER_ENV
where  sid = 54
and    name = 'parallel_force_local';
}}}
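The sid in the query above (54 here) has to come from somewhere; a hypothetical lookup against v$session (the username filter is a placeholder):

```sql
-- find the target session's sid/serial# first (SCOTT is a placeholder)
select sid, serial#, username, osuser, machine, program
from   v$session
where  username = 'SCOTT';
```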


https://lh5.googleusercontent.com/-SKtDoT5Ipqs/TnwEjxvRpwI/AAAAAAAABWU/zmKYWQVdxE0/s288/networkmap.png

system-config-samba to share a Linux filesystem mount to Windows
-create a Samba user with the same name as the Windows userid
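Under the hood system-config-samba just edits /etc/samba/smb.conf; a minimal hand-written share would look something like this (share name, path, and user are placeholders):

```
# /etc/samba/smb.conf -- minimal share definition (placeholders)
[u01share]
    path = /u01
    valid users = oracle
    writable = yes
    browseable = yes
```

Then add the matching Samba account with `smbpasswd -a oracle` so the Samba password lines up with the Windows login.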


--------------

http://www.drron.com.au/2010/01/16/a-note-about-wdtv-live-and-samba-shares/
http://www.reallylinux.com/docs/sambaserver.shtml
http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-samba-configuring.html
http://www.cyberciti.biz/tips/how-to-mount-remote-windows-partition-windows-share-under-linux.html

http://mybookworld.wikidot.com/start <-- WD hack

-- from http://www.perfvision.com/statspack/sp_10g.txt

{{{
STATSPACK report for

Database    DB Id    Instance     Inst Num Startup Time    Release     RAC
~~~~~~~~ ----------- ------------ -------- --------------- ----------- ---
          1193559071 cdb10               1 27-Jul-07 11:03 10.2.0.1.0  NO

Host  Name:   tsukuba          Num CPUs:    2        Phys Memory (MB):    6,092
~~~~

Snapshot       Snap Id     Snap Time      Sessions Curs/Sess Comment
~~~~~~~~    ---------- ------------------ -------- --------- -------------------
Begin Snap:        114 30-Jul-07 15:00:06       36      16.9
  End Snap:        116 30-Jul-07 17:00:05       41      24.8
   Elapsed:              119.98 (mins)

Cache Sizes                       Begin        End
~~~~~~~~~~~                  ---------- ----------
               Buffer Cache:       308M             Std Block Size:         8K
           Shared Pool Size:       128M                 Log Buffer:     6,066K

Load Profile                            Per Second       Per Transaction
~~~~~~~~~~~~                       ---------------       ---------------
                  Redo size:            235,846.01            410,605.90
              Logical reads:              6,095.13             10,611.57
              Block changes:              1,406.37              2,448.49
             Physical reads:                  7.23                 12.59
            Physical writes:                 25.45                 44.31
                 User calls:                152.84                266.09
                     Parses:                  3.78                  6.58
                Hard parses:                  0.13                  0.22
                      Sorts:                  9.06                 15.77
                     Logons:                  0.04                  0.06
                   Executes:                151.75                264.20
               Transactions:                  0.57

  % Blocks changed per Read:   23.07    Recursive Call %:    21.61
 Rollback per transaction %:   20.85       Rows per Sort:    52.20

Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:   99.98
            Buffer  Hit   %:   99.88    In-memory Sort %:  100.00
            Library Hit   %:   99.80        Soft Parse %:   96.67
         Execute to Parse %:   97.51         Latch Hit %:   99.99
Parse CPU to Parse Elapsd %:   18.63     % Non-Parse CPU:   98.06

 Shared Pool Statistics        Begin   End
                               ------  ------
             Memory Usage %:   91.48   89.32
    % SQL with executions>1:   91.88   83.52
  % Memory for SQL w/exec>1:   97.02   64.37

Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
PL/SQL lock timer                                2,103       6,170   2934   41.0
log file parallel write                          5,751       2,035    354   13.5
db file parallel write                          16,343       1,708    104   11.4
log file sync                                    2,936       1,285    438    8.5
log buffer space                                 1,307         950    727    6.3
          -------------------------------------------------------------
Host CPU  (CPUs: 2)
~~~~~~~~              Load Average
                      Begin     End      User  System    Idle     WIO     WCPU
                    ------- -------   ------- ------- ------- ------- --------
                       0.13    0.34     47.08    3.51   49.41    0.00   22.83

Instance CPU
~~~~~~~~~~~~
              % of total CPU for Instance:    5.57
              % of busy  CPU for Instance:   11.02
  %DB time waiting for CPU - Resource Mgr:

Memory Statistics                       Begin          End
~~~~~~~~~~~~~~~~~                ------------ ------------
                  Host Mem (MB):      6,092.4      6,092.4
                   SGA use (MB):        468.0        468.0
                   PGA use (MB):         96.9        166.8
    % Host Mem used for SGA+PGA:          9.3         10.4
          -------------------------------------------------------------

Time Model System Stats  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> Ordered by % of DB time desc, Statistic name

Statistic                                       Time (s) % of DB time
----------------------------------- -------------------- ------------
sql execute elapsed time                         3,447.6         75.4
DB CPU                                             718.2         15.7
parse time elapsed                                  88.9          1.9
hard parse elapsed time                             78.9          1.7
sequence load elapsed time                          63.0          1.4
PL/SQL execution elapsed time                       29.8           .7
hard parse (sharing criteria) elaps                  1.4           .0
PL/SQL compilation elapsed time                      1.1           .0
connection management call elapsed                   0.9           .0
repeated bind elapsed time                           0.0           .0
hard parse (bind mismatch) elapsed                   0.0           .0
DB time                                          4,574.2
background elapsed time                          3,976.2
background cpu time                                 84.7
          -------------------------------------------------------------
Wait Events  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> s - second, cs - centisecond,  ms - millisecond, us - microsecond
-> %Timeouts:  value of 0 indicates value was < .5%.  Value of null is truly 0
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by Total Wait Time desc, Waits desc (idle events last)

                                                                    Avg
                                                %Time Total Wait   wait    Waits
Event                                    Waits  -outs   Time (s)   (ms)     /txn
--------------------------------- ------------ ------ ---------- ------ --------
PL/SQL lock timer                        2,103    100      6,170   2934      0.5
log file parallel write                  5,751      0      2,035    354      1.4
db file parallel write                  16,343      0      1,708    104      4.0
log file sync                            2,936     33      1,285    438      0.7
log buffer space                         1,307     49        950    727      0.3
SQL*Net message from dblink              8,990      0        681     76      2.2
db file sequential read                 34,436      0        605     18      8.3
enq: RO - fast object reuse                147     30        160   1087      0.0
log file switch (checkpoint incom          200     62        153    763      0.0
local write wait                           437     26        136    310      0.1
control file parallel write              3,596      0        109     30      0.9
log file switch completion                 199     28        104    522      0.0
buffer busy waits                          163     32         69    425      0.0
db file scattered read                   2,644      0         30     11      0.6
SQL*Net more data to dblink              3,507      0         30      9      0.8
os thread startup                           97      8         20    211      0.0
direct path write                          422      0         17     40      0.1
direct path write temp                     106      0         11    104      0.0
enq: CF - contention                        24      4          8    337      0.0
control file sequential read            56,057      0          7      0     13.6
SQL*Net break/reset to client            2,670      0          6      2      0.6
direct path read temp                       50      0          5    100      0.0
db file parallel read                        4      0          2    376      0.0
read by other session                      187      0          1      8      0.0
log file switch (private strand f            4      0          1    276      0.0
single-task message                          5      0          1    156      0.0
log file single write                       74      0          1     10      0.0
latch: In memory undo latch                  1      0          1    643      0.0
library cache pin                           12      0          1     50      0.0
LGWR wait for redo copy                    185     16          1      3      0.0
rdbms ipc reply                            462      0          1      1      0.1
direct path read                           259      0          0      2      0.1
latch: object queue header operat            3      0          0    135      0.0
reliable message                            81      0          0      4      0.0
library cache load lock                     32      0          0      7      0.0
SQL*Net more data to client              3,565      0          0      0      0.9
kksfbc child completion                      2    100          0     51      0.0
latch: cache buffers chains                  6      0          0     14      0.0
latch: shared pool                           9      0          0      5      0.0
row cache lock                              61      0          0      1      0.0
log file sequential read                    74      0          0      0      0.0
SQL*Net message to dblink                8,991      0          0      0      2.2
latch: library cache                        14      0          0      1      0.0
undo segment extension                     477    100          0      0      0.1
latch free                                   1      0          0      3      0.0
SQL*Net message from client          1,094,839      0    113,088    103    264.8
Streams AQ: qmn slave idle wait            257      0      7,038  27386      0.1
Streams AQ: qmn coordinator idle           524     51      7,038  13431      0.1
wait for unread message on broadc        7,156    100      7,028    982      1.7
virtual circuit status                     240    100      7,018  29243      0.1
Streams AQ: waiting for messages         1,453     98      7,003   4820      0.4
Streams AQ: waiting for time mana           89     44      6,805  76455      0.0
jobq slave wait                          2,285     98      6,661   2915      0.6
class slave wait                             4    100         20   4889      0.0
SQL*Net message to client            1,094,841      0          1      0    264.8
SQL*Net more data from client               64      0          0      0      0.0
          -------------------------------------------------------------
Background Wait Events  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> %Timeouts:  value of 0 indicates value was < .5%.  Value of null is truly 0
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by Total Wait Time desc, Waits desc (idle events last)

                                                                    Avg
                                                %Time Total Wait   wait    Waits
Event                                    Waits  -outs   Time (s)   (ms)     /txn
--------------------------------- ------------ ------ ---------- ------ --------
log file parallel write                  5,755      0      2,035    354      1.4
db file parallel write                  16,343      0      1,708    104      4.0
control file parallel write              3,597      0        109     30      0.9
os thread startup                           96      8         20    212      0.0
direct path write                          259      0         17     66      0.1
events in waitclass Other                  444      7          9     20      0.1
log file switch (checkpoint incom            8     88          8    955      0.0
log buffer space                            11     27          6    562      0.0
control file sequential read             5,882      0          1      0      1.4
log file single write                       74      0          1     10      0.0
db file sequential read                     71      0          1     10      0.0
direct path read                           259      0          0      2      0.1
log file switch completion                   2      0          0    162      0.0
log file sequential read                    74      0          0      0      0.0
latch: library cache                         1      0          0      1      0.0
buffer busy waits                           16      0          0      0      0.0
rdbms ipc message                       28,583     76     57,513   2012      6.9
Streams AQ: qmn slave idle wait            257      0      7,038  27386      0.1
Streams AQ: qmn coordinator idle           524     51      7,038  13431      0.1
pmon timer                               2,461     99      7,025   2854      0.6
smon timer                                 344      3      6,942  20181      0.1
Streams AQ: waiting for time mana           89     44      6,805  76455      0.0
class slave wait                             1    100          5   4891      0.0
          -------------------------------------------------------------
Wait Event Histogram  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> Total Waits - units: K is 1000, M is 1000000, G is 1000000000
-> % of Waits - column heading: <=1s is truly <1024ms, >1s is truly >=1024ms
-> % of Waits - value: .0 indicates value was <.05%, null is truly 0
-> Ordered by Event (idle events last)

                           Total ----------------- % of Waits ------------------
Event                      Waits  <1ms  <2ms  <4ms  <8ms <16ms <32ms  <=1s   >1s
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----
LGWR wait for redo copy     185   78.4         2.2   1.6   7.6  10.3
PL/SQL lock timer          2103                                            100.0
SQL*Net break/reset to cli 2670   97.0    .7    .1    .4    .3    .2   1.2
SQL*Net message from dblin 8990   23.9  49.8   1.4   7.9  10.5   2.6   2.2   1.5
SQL*Net message to dblink  8991  100.0
SQL*Net more data from dbl    6  100.0
SQL*Net more data to clien 3570  100.0
SQL*Net more data to dblin 3507   97.1                      .3    .2   2.1    .2
buffer busy waits           163   41.1    .6               2.5   1.8  54.0
control file parallel writ 3597                           27.8  53.4  18.8    .0
control file sequential re   56K  99.7    .0    .0    .1    .1    .0    .1    .0
cursor: mutex S              10  100.0
cursor: mutex X               5  100.0
db file parallel read         4                                       75.0  25.0
db file parallel write       16K    .2   1.0   3.8  15.5  15.0  13.7  49.6   1.3
db file scattered read     2649   59.9  10.9   5.4   5.8   7.9   4.8   5.1    .2
db file sequential read      34K  45.5   1.8   5.0  16.1  15.2   7.9   8.3    .2
direct path read            259   97.3          .4    .8    .4    .8    .4
direct path read temp        50   56.0         2.0   2.0   4.0  10.0  24.0   2.0
direct path write           422   87.9                .5   4.5   3.6   2.8    .7
direct path write temp      106   57.5                                40.6   1.9
enq: CF - contention         24   41.7               4.2   4.2   4.2  33.3  12.5
enq: RO - fast object reus  147    7.5                           9.5  48.3  34.7
kksfbc child completion       2                                      100.0
latch free                    1              100.0
latch: In memory undo latc    1                                      100.0
latch: cache buffers chain    6   83.3                                16.7
latch: cache buffers lru c    1  100.0
latch: library cache         14   57.1  42.9
latch: object queue header    3   33.3                                66.7
latch: shared pool            9         22.2  33.3  33.3  11.1
library cache load lock      32   21.9  15.6  25.0   9.4  18.8   3.1   6.3
library cache pin            12   16.7               8.3  16.7  16.7  41.7
local write wait            437                      2.3  27.7  24.5  45.5
log buffer space           1307     .3          .1    .2    .5    .4  98.5
log file parallel write    5752     .2   1.7   5.8  28.1  19.5   7.5  24.2  13.0
log file sequential read     74   97.3         1.4               1.4
log file single write        74          5.4  44.6  12.2  25.7   5.4   6.8
log file switch (checkpoin  200                            1.5    .5  98.0
log file switch (private s    4                                      100.0
log file switch completion  199    3.5               1.0   1.5   1.5  92.5
log file sync              2938     .9   1.2   4.2   8.7  11.1   5.8  68.1
os thread startup            96                                      100.0
rdbms ipc reply             462   98.7    .2    .4          .2    .2    .2
read by other session       187   40.6   4.3  11.2  19.8  14.4   4.3   5.3
reliable message             81   79.0  17.3                     1.2   2.5
row cache lock               61   95.1         1.6   1.6   1.6
single-task message           5                                      100.0
undo segment extension      477  100.0
SQL*Net message from clien 1094K  96.8   1.0    .3    .2    .1    .1   1.1    .5
SQL*Net message to client  1094K 100.0    .0    .0    .0    .0    .0
SQL*Net more data from cli   64  100.0
Streams AQ: qmn coordinato  524   48.9    .2                                51.0
Streams AQ: qmn slave idle  257                                            100.0
Streams AQ: waiting for me 1453                                   .1   1.2  98.8
Streams AQ: waiting for ti   89   24.7                                11.2  64.0
class slave wait              4                                            100.0
dispatcher timer            120                                            100.0
jobq slave wait            2285                             .0          .5  99.5
pmon timer                 2461    1.5          .0    .0    .1    .1    .9  97.3
rdbms ipc message            28K   5.2    .9   1.0   1.9   1.4   1.4  36.1  52.2
smon timer                  344   45.6    .6    .9   4.7    .3    .6  24.4  23.0
virtual circuit status      240                                            100.0
wait for unread message on 7157     .0          .0    .0          .0  99.9    .1
          -------------------------------------------------------------

SQL ordered by CPU  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> Total DB CPU (s):             718
-> Captured SQL accounts for   47.3% of Total DB CPU
-> SQL reported below exceeded  1.0% of Total DB CPU

    CPU                  CPU per             Elapsd                     Old
  Time (s)   Executions  Exec (s)  %Total   Time (s)    Buffer Gets  Hash Value
---------- ------------ ---------- ------ ---------- --------------- ----------
     14.01        1,187       0.01    2.0      14.32               0 2331695545
Module: Lab128
--lab128
 select replace(stat_name,'TICKS','TIME') stat_name,val
ue from v$osstat
 where substr(stat_name,1,3) !='AVG'

     12.95          588       0.02    1.8      14.65               0 2004329213
Module: Lab128
--lab128
 select latch#,gets,misses,sleeps,immediate_gets,immedi
ate_misses, 
 waits_holding_latch,spin_gets 
 from v$latch where
 gets+immediate_gets>0

     12.61            2       6.31    1.8     181.43          11,010 4116021597
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
broken BOOLEAN := FALSE; BEGIN ash.collect(3,1200); :mydate := n
ext_date; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;

     12.59          666       0.02    1.8      13.77               0 3872095438
Module: Realtime Connection
select begin_time, wait_class#,        (time_waited)/(intsize_cs
ec/100) from v$waitclassmetric union all select begin_time, -1,
value from v$sysmetric where metric_name = 'CPU Usage Per Sec' a
nd group_id = 2 order by begin_time, wait_class#

     10.39        6,480       0.00    1.4      10.55               0   19176310
Module: Lab128
--lab128
 select sid,ownerid,user#,sql_id,sql_child_number,seq#,
event#
 ,serial#,row_wait_obj#,row_wait_file#,row_wait_block#,ro
w_wait_row#,blocking_session
 ,service_name,p1,p2,p3,wait_time,s
econds_in_wait,decode(state,'WAITING',0,1) state
 ,machine,progr
am
 from  v$session
 where status='ACTIVE' and username is not n

      9.56        1,101       0.01    1.3      13.03         139,744 3286148528
select c.name, u.name from con$ c, cdef$ cd, user$ u  where c.co
n# = cd.con# and cd.enabled = :1 and c.owner# = u.user#

      9.24            8       1.15    1.3      13.19          14,321  781079612
Call CALC_NEW_DOWN_PROF(:1, :2, :3)

      8.69        2,105       0.00    1.2      11.95               0 3802278413
SELECT A.*, :B1 SAMPLE_TIME FROM V$ASHNOW A

      8.45          336       0.03    1.2       9.66               0 3922007841
Module: Realtime Connection
SELECT event#, sql_id, sql_plan_hash_value, sql_opcode, session_
id, session_serial#, module, action, client_id, DECODE(wait_time
, 0, 'W', 'C'), 1, time_waited, service_hash, user_id, program,
sample_time, p1, p2, p3, current_file#, current_obj#, current_bl
ock#, qc_session_id, qc_instance_id FROM v$active_session_histor

      8.41       15,240       0.00    1.2      28.50         338,766 4175898638
UPDATE TOPOLOGY_LINK SET DATETO=sysdate, STATEID=0 WHERE TOPOLOG
YID=:1 AND PARENTID=:2 AND STATEID=1

      7.84            2       3.92    1.1      48.30         381,260 2027985784
UPDATE TMP_CALC_HFC_SLOW_CM_TMP SET STATUS_ERROR = 1 WHERE DOCSI
FSIGQUNERROREDS < PREV_DOCSIFSIGQUNERROREDS OR DOCSIFSIGQCORRECT
EDS < PREV_DOCSIFSIGQCORRECTEDS OR DOCSIFSIGQUNCORRECTABLES < PR
EV_DOCSIFSIGQUNCORRECTABLES OR SYSUPTIME <= PREV_SYSUPTIME OR (
DOCSIFSIGQUNERROREDS - PREV_DOCSIFSIGQUNERROREDS ) + ( DOCSIFSIG

      7.51          479       0.02    1.0       8.57               0 3714876926
Module: Lab128
--lab128
 select sql_id,plan_hash_value,parse_calls,disk_reads,d
irect_writes,
 buffer_gets,rows_processed,serializable_aborts,fe
tches,executions,
 end_of_fetch_count,loads,invalidations,px_ser
vers_executions,
 cpu_time,elapsed_time,application_wait_time,co
ncurrency_wait_time,
 cluster_wait_time,user_io_wait_time,plsql_

      7.37            1       7.37    1.0      10.12          12,690   75475900
     SELECT         trunc(SYSDATE, 'HH24') HOUR_STAMP,         C
M_ID,         MAX(SUBSTR(CM_DESC, 1, 12)) CM_DESC,         MAX(U
P_ID)          UP_ID,         MAX(DOWN_ID)        DOWN_ID,
   MAX(MAC_ID)         MAC_ID,         MAX(CMTS_ID)        CMTS_
ID,         SUM(BYTES_UP)           SUM_BYTES_UP,         SUM(BY

      7.35          390       0.02    1.0       8.08               0 3755369401
Module: Realtime Connection
select metric_id, value from v$sysmetric where intsize_csec > 59
00 and group_id = 2 and       metric_id in (2092,
      2093,                     2125,                     2126,
                    2100,                     2124,
        2127,                     2128)

      7.21        1,177       0.01    1.0       7.42               0 2760020466
Module: Lab128
--lab128
 select indx,ksleswts,kslestmo,round(kslestim / 10000)
from x$kslei 
 where inst_id=userenv('INSTANCE') and kslestim>0

          -------------------------------------------------------------
SQL ordered by Elapsed  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> Total DB Time (s):           4,574
-> Captured SQL accounts for   46.2% of Total DB Time
-> SQL reported below exceeded  1.0% of Total DB Time

  Elapsed                Elap per            CPU                        Old
  Time (s)   Executions  Exec (s)  %Total   Time (s)  Physical Reads Hash Value
---------- ------------ ---------- ------ ---------- --------------- ----------
    244.55            8      30.57    5.3       3.89             573 1916282772
Call CALC_DELETE_MEDIUM_RAWDATA(:1, :2)

    181.43            2      90.72    4.0      12.61               1 4116021597
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
broken BOOLEAN := FALSE; BEGIN ash.collect(3,1200); :mydate := n
ext_date; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;

    137.15            2      68.58    3.0       6.00           4,446 3327781611
Call CALC_DELETE_SLOW_RAWDATA(:1, :2)

    128.72        2,105       0.06    2.8       1.20               1 1692944121
UPDATE ASH.DBIDS@REPO SET ASHSEQ = :B2 WHERE DBID = :B1

     93.17          112       0.83    2.0       3.45             713 2689373535
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
broken BOOLEAN := FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_J
OB_PROCS(); :mydate := next_date; IF broken THEN :b := 1; ELSE :
b := 0; END IF; END;

     82.31            2      41.16    1.8       6.68             345  614087306
INSERT INTO TMP_CALC_HFC_SLOW_CM_LAST SELECT * FROM TMP_CALC_HFC
_SLOW_CM_LAST_TMP

     70.80            2      35.40    1.5       4.74             242  237730869
DELETE FROM TMP_CALC_QOS_SLOW_CM_LAST

     64.02            2      32.01    1.4       4.67             323 3329113987
DELETE FROM TMP_CALC_HFC_SLOW_CM_LAST

     61.19            1      61.19    1.3       5.63           1,555 2149686744
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE
_CMS, BITSPERSYMBOL, TXPOWER_UP FROM CM_POWER_2 power, TOPOLOGY_
LINK link, UPSTREAM_CHANNEL channel WHERE power.SECONDID = :1 AN
D link.TOPOLOGYID = power.TOPOLOGYID AND link.PARENTLEN = 1 AND
link.STATEID = 1 AND link.LINKTYPEID = 1 AND link.PARENTID = cha

     60.08            2      30.04    1.3       6.79               5  982709942
INSERT INTO TMP_CALC_QOS_SLOW_CM_LAST SELECT * FROM TMP_CALC_QOS
_SLOW_CM_LAST_TMP

     48.30            2      24.15    1.1       7.84               0 2027985784
UPDATE TMP_CALC_HFC_SLOW_CM_TMP SET STATUS_ERROR = 1 WHERE DOCSI
FSIGQUNERROREDS < PREV_DOCSIFSIGQUNERROREDS OR DOCSIFSIGQCORRECT
EDS < PREV_DOCSIFSIGQCORRECTEDS OR DOCSIFSIGQUNCORRECTABLES < PR
EV_DOCSIFSIGQUNCORRECTABLES OR SYSUPTIME <= PREV_SYSUPTIME OR (
DOCSIFSIGQUNERROREDS - PREV_DOCSIFSIGQUNERROREDS ) + ( DOCSIFSIG

          -------------------------------------------------------------
SQL ordered by Gets  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> End Buffer Gets Threshold:     10000 Total Buffer Gets:      43,878,832
-> Captured SQL accounts for   17.4% of Total Buffer Gets
-> SQL reported below exceeded  1.0% of Total Buffer Gets

                                                     CPU      Elapsd     Old
  Buffer Gets    Executions  Gets per Exec  %Total Time (s)  Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
      1,214,045            1    1,214,045.0    2.8     5.45      5.91 1551069132
select errors.TOPOLOGYID, errors.SAMPLE_LENGTH, UNIQUE_CMS, ACTI
VE_CMS, CHANNELWIDTH, BITSPERSYMBOL, SNR_DOWN, RXPOWER_DOWN FROM
 CM_ERRORS errors, CM_POWER_2 power, TOPOLOGY_LINK link, DOWNSTR
EAM_CHANNEL channel where errors.SECONDID = power.SECONDID AND e
rrors.SECONDID = :1 AND errors.TOPOLOGYID = power.TOPOLOGYID AND

      1,065,067            1    1,065,067.0    2.4     6.32     39.14 2109849972
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE
_CMS, CHANNELWIDTH, RXPOWER_UP, RXPOWER UPSTREAM_AVG_RX FROM CM_
POWER_1 power, TOPOLOGY_LINK link, UPSTREAM_CHANNEL channel, UPS
TREAM_POWER_1 upstream_rx WHERE power.SECONDID = :1 and power.SE
CONDID = upstream_rx.secondid AND link.TOPOLOGYID = power.TOPOLO

        762,573            1      762,573.0    1.7     5.63     61.19 2149686744
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE
_CMS, BITSPERSYMBOL, TXPOWER_UP FROM CM_POWER_2 power, TOPOLOGY_
LINK link, UPSTREAM_CHANNEL channel WHERE power.SECONDID = :1 AN
D link.TOPOLOGYID = power.TOPOLOGYID AND link.PARENTLEN = 1 AND
link.STATEID = 1 AND link.LINKTYPEID = 1 AND link.PARENTID = cha

        522,314            2      261,157.0    1.2     6.79     60.08  982709942
INSERT INTO TMP_CALC_QOS_SLOW_CM_LAST SELECT * FROM TMP_CALC_QOS
_SLOW_CM_LAST_TMP

        503,642            2      251,821.0    1.1     6.68     82.31  614087306
INSERT INTO TMP_CALC_HFC_SLOW_CM_LAST SELECT * FROM TMP_CALC_HFC
_SLOW_CM_LAST_TMP

          -------------------------------------------------------------
SQL ordered by Reads  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> End Disk Reads Threshold:      1000  Total Disk Reads:          52,072
-> Captured SQL accounts for   75.3% of Total Disk Reads
-> SQL reported below exceeded  1.0% of Total Disk Reads

                                                     CPU      Elapsd     Old
 Physical Reads  Executions  Reads per Exec %Total Time (s)  Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
          4,709            2        2,354.5    9.0     1.59      7.07  694687570
Call CALC_DELETE_OLD_DATA(:1)

          4,446            2        2,223.0    8.5     6.00    137.15 3327781611
Call CALC_DELETE_SLOW_RAWDATA(:1, :2)

          4,141            8          517.6    8.0     5.22     17.10  591467433
Call CALC_TOPOLOGY_MEDIUM(:1, :2, :3, :4)

          3,198            8          399.8    6.1     0.76      4.02  323802731
DELETE FROM TMP_TOP_MED_DN WHERE DOWNID IN( SELECT TOPOLOGYID FR
OM ( SELECT PARENTID,TOPOLOGYID,ROW_NUMBER() OVER(PARTITION BY P
ARENTID ORDER BY DATEFROM DESC) RN FROM TOPOLOGY_LINK WHERE STAT
EID=1 AND TOPOLOGYID_NODETYPEID=128 AND PARENTID_NODETYPEID=127
) WHERE RN>1 )

          1,707            1        1,707.0    3.3     2.09     11.86 2155459437
INSERT /*+ APPEND */ INTO CM_RAWDATA SELECT * FROM CM_RAWDATA_SH
ADOW

          1,555            1        1,555.0    3.0     5.63     61.19 2149686744
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE
_CMS, BITSPERSYMBOL, TXPOWER_UP FROM CM_POWER_2 power, TOPOLOGY_
LINK link, UPSTREAM_CHANNEL channel WHERE power.SECONDID = :1 AN
D link.TOPOLOGYID = power.TOPOLOGYID AND link.PARENTLEN = 1 AND
link.STATEID = 1 AND link.LINKTYPEID = 1 AND link.PARENTID = cha

          1,523            8          190.4    2.9     0.17      1.15 2531676910
Module: Lab128
--lab128
 select se.fa_se, uit.ui, uipt.uip, uist.uis, fr_s.fr_s
e, t.dt from (select /*+ all_rows */ count(*) fa_se from (select
 ts#,max(length) m from sys.fet$ group by ts#) f, sys.seg$ s whe
re s.ts#=f.ts# and extsize>m) se, (select count(*) ui from sys.i
nd$ where bitand(flags,1)=1) uit, (select count(*) uip from sys.

          1,292            2          646.0    2.5     0.16      0.54 1794345920
DELETE FROM CM_BYTES WHERE SECONDID <= :B1

          1,151            2          575.5    2.2     1.97     32.12 2763442576
INSERT INTO CM_BYTES SELECT SECONDID, CMID, SAMPLE_LENGTH, BYTES
_DOWN, BYTES_UP FROM TMP_CALC_QOS_SLOW_CM WHERE BYTES_DOWN>=0 AN
D BYTES_UP>=0

          1,114            2          557.0    2.1     1.77      7.82  465647697
INSERT INTO CM_QOS_PROF SELECT :B2 , C.TOPOLOGYID, :B2 - :B1 , C
.NODE_PROFILE_ID, C.QOS_PROF_IDX FROM CM_QOS_PROF C WHERE C.TOPO
LOGYID IN ( SELECT CMID FROM TMP_TOP_SLOW_CM MINUS SELECT TOPOLO
GYID FROM CM_QOS_PROF WHERE SECONDID = :B2 ) AND C.SECONDID = :B
1

          1,089            1        1,089.0    2.1     0.49     31.46 3396396246
select TOPOLOGYID, CER from CM_VA where SECONDID = :1 and CER IS
 NOT NULL

          1,064            2          532.0    2.0     1.82     36.87   18649301
INSERT INTO CM_ERRORS SELECT SECONDID, CMID, SAMPLE_LENGTH, UNER
ROREDS, CORRECTEDS, UNCORRECTABLES, SNR FROM TMP_CALC_HFC_SLOW_C
M

            944            1          944.0    1.8     4.62      6.30 2293375291
Module: Admin Connection
select output from table(dbms_workload_repository.ash_report_htm
l(:1, :2, :3, :4, 0))

            900            1          900.0    1.7     6.32     39.14 2109849972
select power.TOPOLOGYID, power.SAMPLE_LENGTH, UNIQUE_CMS, ACTIVE
_CMS, CHANNELWIDTH, RXPOWER_UP, RXPOWER UPSTREAM_AVG_RX FROM CM_
POWER_1 power, TOPOLOGY_LINK link, UPSTREAM_CHANNEL channel, UPS
TREAM_POWER_1 upstream_rx WHERE power.SECONDID = :1 and power.SE
CONDID = upstream_rx.secondid AND link.TOPOLOGYID = power.TOPOLO

            874            8          109.3    1.7     1.72      4.35 1599796656
INSERT INTO TMP_TOP_MED_DN SELECT M.CMTSID, M.VENDOR_DESC, M.MOD
EL_DESC, MAC_L.TOPOLOGYID, DOWN_L.TOPOLOGYID, M.UP_SNR_CNR_A3, M
.UP_SNR_CNR_A2, M.UP_SNR_CNR_A1, M.UP_SNR_CNR_A0, M.MAC_SLOTS_OP
EN, M.MAC_SLOTS_USED, M.CMTS_REBOOT, 0 FROM TMP_TOP_MED_CMTS M,
TOPOLOGY_LINK DOWN_L, TOPOLOGY_NODE DOWN_N, TOPOLOGY_LINK MAC_L

            757            1          757.0    1.5     0.46      2.59 2347914587
      SELECT                trunc(SYSDATE, 'HH24') HOUR_STAMP,
              M.TOPOLOGYID UP_ID,                T.UP_DESC    UP
_DESC,                T.MAC_ID     MAC_ID,                T.CMTS
_ID    CMTS_ID,                M.MAX_PERCENT_UTIL,
  M.MAX_PACKETS_PER_SEC,                M.AVG_PACKET_SIZE,

            713          112            6.4    1.4     3.45     93.17 2689373535
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
broken BOOLEAN := FALSE; BEGIN EMD_MAINTENANCE.EXECUTE_EM_DBMS_J
OB_PROCS(); :mydate := next_date; IF broken THEN :b := 1; ELSE :
b := 0; END IF; END;

            690            2          345.0    1.3     0.34      2.91 4032294671
DELETE FROM CM_POLL_STATUS WHERE LAST_SAMPLETIME <= SYSDATE-30

            686            2          343.0    1.3     0.07      0.29 3762878399
DELETE FROM MISSED_RAWDATA WHERE SAMPLETIME <= TRUNC(SYSDATE,'hh
') - 2/24

            628            1          628.0    1.2     0.91      1.49  856816204
Module: Admin Connection
SELECT ash.current_obj#, ash.dim1_percentage, ash.event, ash.dim12_percentage,
dbms_ash_internal.get_obj_name(my_obj.owner, my_obj.object_name,
my_obj.subobject_name, my_obj.object_type), my_obj.tablespace_name
FROM ( SELECT d12aa

            573            8           71.6    1.1     3.89    244.55 1916282772
Call CALC_DELETE_MEDIUM_RAWDATA(:1, :2)

          -------------------------------------------------------------
SQL ordered by Executions  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> End Executions Threshold:       100  Total Executions:       1,092,470
-> Captured SQL accounts for   11.3% of Total Executions
-> SQL reported below exceeded  1.0% of Total Executions

                                                CPU per    Elap per     Old
 Executions   Rows Processed   Rows per Exec    Exec (s)   Exec (s)  Hash Value
------------ --------------- ---------------- ----------- ---------- ----------
      15,240          15,240              1.0       0.00        0.00 1999961487
INSERT INTO TOPOLOGY_LINK (TOPOLOGYID, TOPOLOGYID_NODETYPEID, PARENTID,
PARENTID_NODETYPEID, LINKTYPEID, PARENTLEN, DATEFROM, DATETO, STATEID)
VALUES (:1,:2,:3,:4,1,:5,sysdate,
TO_DATE('9999-12-31 23:59:59', 'YYYY-MM-DD HH24:MI:SS'),1)

      15,240          15,240              1.0       0.00        0.00 4175898638
UPDATE TOPOLOGY_LINK SET DATETO=sysdate, STATEID=0
WHERE TOPOLOGYID=:1 AND PARENTID=:2 AND STATEID=1

          -------------------------------------------------------------
SQL ordered by Parse Calls  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> End Parse Calls Threshold:      1000 Total Parse Calls:          27,215
-> Captured SQL accounts for   75.2% of Total Parse Calls
-> SQL reported below exceeded  1.0% of Total Parse Calls

                           % Total    Old
 Parse Calls  Executions   Parses  Hash Value
------------ ------------ -------- ----------
       1,310        1,310     4.81 1254950678
select file# from file$ where ts#=:1

       1,235        1,235     4.54 3404108640
ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED

       1,235        1,235     4.54 3742653144
select sysdate from dual

       1,101        1,101     4.05 3286148528
select c.name, u.name from con$ c, cdef$ cd, user$ u
where c.con# = cd.con# and cd.enabled = :1 and c.owner# = u.user#

         944          944     3.47 2850132846
update seg$ set type#=:4,blocks=:5,extents=:6,minexts=:7,maxexts=:8,
extsize=:9,extpct=:10,user#=:11,iniexts=:12,
lists=decode(:13, 65535, NULL, :13),groups=decode(:14, 65535, NULL, :14),
cachehint=:15, hwmincr=:16, spare1=DECODE(:17,0,NULL,:17),scanhint=:18
where ts#=:1 and file#=:2 and block#=:3

         640          640     2.35    2803285
update sys.mon_mods$ set inserts = inserts + :ins, updates = updates + :upd,
deletes = deletes + :del,
flags = (decode(bitand(flags, :flag), :flag, flags, flags + :flag)),
drop_segments = drop_segments + :dropseg, timestamp = :time where obj# = :objn

         640          640     2.35 2396279102
lock table sys.mon_mods$ in exclusive mode nowait

         548          548     2.01 4143084494
select privilege#,level from sysauth$
connect by grantee#=prior privilege# and privilege#>0
start with grantee#=:1 and privilege#>0

         418          418     1.54  794436051
Module: OEM.SystemPool
SELECT INSTANTIABLE, supertype_owner, supertype_name, LOCAL_ATTRIBUTES
FROM all_types WHERE type_name = :1 AND owner = :2

         415            2     1.52  260339297
insert into sys.col_usage$ values (:objn, :coln,
decode(bitand(:flag,1),0,0,1), decode(bitand(:flag,2),0,0,1),
decode(bitand(:flag,4),0,0,1), decode(bitand(:flag,8),0,0,1),
decode(bitand(:flag,16),0,0,1), decode(bitand(:flag,32),0,0,1), :time)

         415          415     1.52 2554034351
lock table sys.col_usage$ in exclusive mode nowait

         415        1,327     1.52 3665763022
update sys.col_usage$ set
equality_preds    = equality_preds    + decode(bitand(:flag,1),0,0,1),
equijoin_preds    = equijoin_preds    + decode(bitand(:flag,2),0,0,1),
nonequijoin_preds = nonequijoin_preds + decode(bitand(:flag,4),0,0,1),
range_preds       = range_preds       + decode(bitand(:flag,8),0,0,1),

         396          396     1.46 1348827743
select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,
NVL(lists,65535),NVL(groups,65535),cachehint,hwmincr,
NVL(spare1,0),NVL(scanhint,0) from seg$ where ts#=:1 and file#=:2 and block#=:3

          -------------------------------------------------------------
Instance Activity Stats  DB/Inst: CDB10/cdb10  Snaps: 114-116

Statistic                                      Total     per Second    per Trans
--------------------------------- ------------------ -------------- ------------
CPU used by this session                      73,235           10.2         17.7
CPU used when call started                    69,804            9.7         16.9
CR blocks created                              2,170            0.3          0.5
Cached Commit SCN referenced                   5,439            0.8          1.3
Commit SCN cached                                 37            0.0          0.0
DB time                                    3,354,438          466.0        811.2
DBWR checkpoint buffers written               92,389           12.8         22.3
DBWR checkpoints                                 129            0.0          0.0
DBWR object drop buffers written               5,818            0.8          1.4
DBWR revisited being-written buff                565            0.1          0.1
DBWR thread checkpoint buffers wr             58,196            8.1         14.1
DBWR transaction table writes                    294            0.0          0.1
DBWR undo block writes                       100,810           14.0         24.4
IMU CR rollbacks                                 236            0.0          0.1
IMU Flushes                                    3,876            0.5          0.9
IMU Redo allocation size                   8,979,320        1,247.3      2,171.5
IMU commits                                    2,255            0.3          0.6
IMU contention                                    47            0.0          0.0
IMU ktichg flush                                  24            0.0          0.0
IMU pool not allocated                           415            0.1          0.1
IMU recursive-transaction flush                   15            0.0          0.0
IMU undo allocation size                  16,377,144        2,274.9      3,960.6
IMU- failed to get a private stra                415            0.1          0.1
PX local messages recv'd                           0            0.0          0.0
PX local messages sent                             0            0.0          0.0
SMON posted for undo segment shri                 20            0.0          0.0
SQL*Net roundtrips to/from client          1,094,719          152.1        264.7
SQL*Net roundtrips to/from dblink              8,996            1.3          2.2
active txn count during cleanout             639,508           88.8        154.7
application wait time                         16,554            2.3          4.0
auto extends on undo tablespace                    0            0.0          0.0
background checkpoints completed                  37            0.0          0.0
background checkpoints started                    37            0.0          0.0
background timeouts                           22,223            3.1          5.4
branch node splits                                29            0.0          0.0
buffer is not pinned count                29,662,682        4,120.4      7,173.6
buffer is pinned count                    29,809,596        4,140.8      7,209.1
bytes received via SQL*Net from c        108,581,448       15,082.9     26,259.1
bytes received via SQL*Net from d            962,647          133.7        232.8
bytes sent via SQL*Net to client          93,282,655       12,957.7     22,559.3
bytes sent via SQL*Net to dblink           8,966,262        1,245.5      2,168.4
calls to get snapshot scn: kcmgss            216,338           30.1         52.3
calls to kcmgas                              147,056           20.4         35.6
calls to kcmgcs                              640,324           89.0        154.9
change write time                            103,316           14.4         25.0
cleanout - number of ktugct calls            678,574           94.3        164.1
cleanouts and rollbacks - consist                  4            0.0          0.0
cleanouts only - consistent read              19,868            2.8          4.8
cluster key scan block gets                   27,338            3.8          6.6
cluster key scans                             21,540            3.0          5.2
commit batch performed                            10            0.0          0.0
commit batch requested                            10            0.0          0.0
commit batch/immediate performed                 153            0.0          0.0
commit batch/immediate requested                 153            0.0          0.0
commit cleanout failures: block l             22,940            3.2          5.6
commit cleanout failures: buffer                  67            0.0          0.0
commit cleanout failures: callbac                219            0.0          0.1
commit cleanout failures: cannot                   0            0.0          0.0
commit cleanouts                             107,561           14.9         26.0
commit cleanouts successfully com             84,335           11.7         20.4
commit immediate performed                       143            0.0          0.0
commit immediate requested                       143            0.0          0.0
commit txn count during cleanout              54,758            7.6         13.2
concurrency wait time                          9,120            1.3          2.2
consistent changes                           832,273          115.6        201.3
consistent gets                           34,015,984        4,725.1      8,226.4
consistent gets - examination             28,527,345        3,962.7      6,899.0
consistent gets direct                             0            0.0          0.0
consistent gets from cache                34,015,984        4,725.1      8,226.4
cursor authentications                           260            0.0          0.1
data blocks consistent reads - un              2,159            0.3          0.5
db block changes                          10,124,487        1,406.4      2,448.5
db block gets                              9,862,848        1,370.0      2,385.2
db block gets direct                           6,668            0.9          1.6
db block gets from cache                   9,856,180        1,369.1      2,383.6
deferred (CURRENT) block cleanout             16,349            2.3          4.0
dirty buffers inspected                       71,152            9.9         17.2
enqueue conversions                           17,965            2.5          4.3
enqueue releases                             147,314           20.5         35.6
enqueue requests                             147,387           20.5         35.6
enqueue timeouts                                  67            0.0          0.0
enqueue waits                                    113            0.0          0.0
execute count                              1,092,470          151.8        264.2
frame signature mismatch                           0            0.0          0.0
free buffer inspected                        331,981           46.1         80.3
free buffer requested                        204,101           28.4         49.4
global undo segment hints helped                   0            0.0          0.0
global undo segment hints were st                  0            0.0          0.0
heap block compress                           52,202            7.3         12.6
hot buffers moved to head of LRU             100,209           13.9         24.2
immediate (CR) block cleanout app             19,872            2.8          4.8
immediate (CURRENT) block cleanou             80,162           11.1         19.4
index fast full scans (full)                     132            0.0          0.0
index fetch by key                        16,874,551        2,344.0      4,080.9
index scans kdiixs1                          837,547          116.3        202.6
leaf node 90-10 splits                         9,341            1.3          2.3
leaf node splits                              17,221            2.4          4.2
lob reads                                        203            0.0          0.1
lob writes                                     1,593            0.2          0.4
lob writes unaligned                           1,593            0.2          0.4
logons cumulative                                257            0.0          0.1
messages received                             24,046            3.3          5.8
messages sent                                 24,047            3.3          5.8
no buffer to keep pinned count                     0            0.0          0.0
no work - consistent read gets             4,854,564          674.3      1,174.0
opened cursors cumulative                     29,312            4.1          7.1
parse count (failures)                             0            0.0          0.0
parse count (hard)                               906            0.1          0.2
parse count (total)                           27,215            3.8          6.6
parse time cpu                                 1,421            0.2          0.3
parse time elapsed                             7,626            1.1          1.8
physical read IO requests                     37,545            5.2          9.1
physical read bytes                      426,573,824       59,254.6    103,161.8
physical read total IO requests               93,794           13.0         22.7
physical read total bytes              1,346,237,440      187,003.4    325,571.3
physical read total multi block r              2,699            0.4          0.7
physical reads                                52,072            7.2         12.6
physical reads cache                          51,063            7.1         12.4
physical reads cache prefetch                 13,906            1.9          3.4
physical reads direct                          1,009            0.1          0.2
physical reads direct (lob)                        0            0.0          0.0
physical reads direct temporary t                750            0.1          0.2
physical reads prefetch warmup                     0            0.0          0.0
physical write IO requests                   101,042           14.0         24.4
physical write bytes                   1,501,020,160      208,504.0    363,003.7
physical write total IO requests             119,138           16.6         28.8
physical write total bytes             3,434,338,304      477,057.7    830,553.4
physical write total multi block              14,190            2.0          3.4
physical writes                              183,230           25.5         44.3
physical writes direct                         7,677            1.1          1.9
physical writes direct (lob)                       7            0.0          0.0
physical writes direct temporary               4,242            0.6          1.0
physical writes from cache                   175,553           24.4         42.5
physical writes non checkpoint               159,817           22.2         38.7
pinned buffers inspected                          18            0.0          0.0
prefetch warmup blocks aged out b                  0            0.0          0.0
prefetched blocks aged out before                  0            0.0          0.0
process last non-idle time                     7,198            1.0          1.7
recovery blocks read                               0            0.0          0.0
recursive calls                              303,380           42.1         73.4
recursive cpu usage                           36,748            5.1          8.9
redo blocks read for recovery                      0            0.0          0.0
redo blocks written                        3,430,184          476.5        829.6
redo buffer allocation retries                 3,510            0.5          0.9
redo entries                               5,034,912          699.4      1,217.6
redo log space requests                        1,095            0.2          0.3
redo log space wait time                      26,372            3.7          6.4
redo ordering marks                           96,204           13.4         23.3
redo size                              1,697,855,400      235,846.0    410,605.9
redo synch time                              131,843           18.3         31.9
redo synch writes                             10,465            1.5          2.5
redo wastage                               1,345,656          186.9        325.4
redo write time                              208,626           29.0         50.5
redo writer latching time                         55            0.0          0.0
redo writes                                    5,754            0.8          1.4
rollback changes - undo records a             64,619            9.0         15.6
rollbacks only - consistent read               2,148            0.3          0.5
rows fetched via callback                 12,793,552        1,777.1      3,094.0
session connect time                               0            0.0          0.0
session cursor cache hits                     17,301            2.4          4.2
session logical reads                     43,878,832        6,095.1     10,611.6
session pga memory                       225,399,144       31,309.8     54,510.1
session pga memory max                   375,411,048       52,147.7     90,788.7
session uga memory                   524,039,191,952   72,793,331.3 ############
session uga memory max                   329,698,024       45,797.8     79,733.5
shared hash latch upgrades - no w            853,856          118.6        206.5
shared hash latch upgrades - wait                 22            0.0          0.0
sorts (disk)                                       0            0.0          0.0
sorts (memory)                                65,215            9.1         15.8
sorts (rows)                               3,404,105          472.9        823.2
sql area purged                                  116            0.0          0.0
summed dirty queue length                  1,359,596          188.9        328.8
switch current to new buffer                   7,823            1.1          1.9
table fetch by rowid                      19,110,650        2,654.6      4,621.7
table fetch continued row                        164            0.0          0.0
table scan blocks gotten                     428,059           59.5        103.5
table scan rows gotten                    40,899,306        5,681.3      9,891.0
table scans (long tables)                         34            0.0          0.0
table scans (short tables)                    11,931            1.7          2.9
total number of times SMON posted                333            0.1          0.1
transaction rollbacks                            153            0.0          0.0
transaction tables consistent rea                  8            0.0          0.0
transaction tables consistent rea                100            0.0          0.0
undo change vector size                  664,433,044       92,295.2    160,685.1
user I/O wait time                            80,928           11.2         19.6
user calls                                 1,100,285          152.8        266.1
user commits                                   3,273            0.5          0.8
user rollbacks                                   862            0.1          0.2
workarea executions - onepass                      4            0.0          0.0
workarea executions - optimal                 73,237           10.2         17.7
write clones created in backgroun                330            0.1          0.1
write clones created in foregroun              2,493            0.4          0.6
          -------------------------------------------------------------
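The "per Second" and "per Trans" columns above are plain ratios over the snapshot interval; a minimal sketch of the arithmetic, assuming the roughly 7,200 s (two-hour) interval implied by the Total/per-Second ratios, and taking transactions as user commits plus user rollbacks from the table:

```python
# Sketch: deriving "per Second" and "per Trans" from a statistic's Total.
# elapsed_seconds is an assumption (the 114-116 snapshot interval is not
# printed in this section); the transaction count comes from the table above.
elapsed_seconds = 7200
transactions = 3_273 + 862                     # user commits + user rollbacks

user_calls = 1_100_285                         # "user calls" Total
print(round(user_calls / elapsed_seconds, 1))  # 152.8 per second
print(round(user_calls / transactions, 1))     # 266.1 per transaction
```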

Instance Activity Stats  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> Statistics with absolute values (should not be diffed)

Statistic                             Begin Value       End Value
--------------------------------- --------------- ---------------
logons current                                 36              41
opened cursors current                        607           1,017
session cursor cache count                 71,680          74,159
workarea memory allocated                       0          28,311
          -------------------------------------------------------------

Instance Activity Stats  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> Statistics identified by '(derived)' come from sources other than SYSSTAT

Statistic                                      Total  per Hour
--------------------------------- ------------------ ---------
log switches (derived)                            37     18.50
          -------------------------------------------------------------
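The derived per-hour rate is just the diffed total scaled to hours; assuming the same roughly two-hour snapshot window as the rest of the report:

```python
# Sketch: 37 log switches over an assumed 7,200 s snapshot interval.
log_switches = 37
interval_hours = 7200 / 3600               # assumed, not printed here
print(log_switches / interval_hours)       # 18.5 per hour, as reported
```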

OS Statistics  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> ordered by statistic type (CPU use, Virtual Memory, Hardware Config), Name

Statistic                                  Total
------------------------- ----------------------
BUSY_TIME                                728,809
IDLE_TIME                                711,843
SYS_TIME                                  50,602
USER_TIME                                678,207
LOAD                                           0
OS_CPU_WAIT_TIME                         328,900
VM_IN_BYTES                          212,729,856
VM_OUT_BYTES                         794,091,520
PHYSICAL_MEMORY_BYTES              6,388,301,824
NUM_CPUS                                       2
          -------------------------------------------------------------
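The OS times are centiseconds summed across all CPUs, so they can be cross-checked against each other; a small sketch (the two-hour, two-CPU interval is inferred from NUM_CPUS and the totals):

```python
# Sketch: sanity checks on the OS Statistics above (times in centiseconds).
busy, idle = 728_809, 711_843
user, sys_ = 678_207, 50_602

assert user + sys_ == busy                 # USER_TIME + SYS_TIME == BUSY_TIME
cpu_util = busy / (busy + idle)            # host CPU utilization over the interval
print(f"{cpu_util:.1%}")                   # about 50.6%
```

With 2 CPUs over ~7,200 s, busy + idle comes to ~1,440,000 cs, which matches the totals and supports the assumed interval length.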
Tablespace IO Stats  DB/Inst: CDB10/cdb10  Snaps: 114-116
->ordered by IOs (Reads + Writes) desc

Tablespace
------------------------------
                 Av      Av     Av                    Av        Buffer Av Buf
         Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
TS_STARGUS
        31,284       4   19.8     1.3       36,946        5        146    9.2
UNDOTBS1
            45       0   69.6     1.0       56,258        8        162  437.6
TEMP
         2,868       0    8.8     2.4        5,220        1          0    0.0
SYSAUX
           818       0    5.5     1.2        1,840        0         36    3.1
SYSTEM
         1,810       0    6.4     2.2          342        0          6    5.0
PERFSTAT
           241       0    5.9     1.0          359        0          0    0.0
EXAMPLE
            37       0    3.0     1.0           37        0          0    0.0
USERS
            37       0    1.1     1.0           37        0          0    0.0
          -------------------------------------------------------------
File IO Stats  DB/Inst: CDB10/cdb10  Snaps: 114-116
->Mx Rd Bkt: Max bucket time for single block read
->ordered by Tablespace, File

Tablespace               Filename
------------------------ ----------------------------------------------------
                        Av   Mx                                             Av
                 Av     Rd   Rd    Av                    Av        Buffer BufWt
         Reads Reads/s (ms)  Bkt Blks/Rd       Writes Writes/s      Waits  (ms)
-------------- ------- ----- --- ------- ------------ -------- ---------- ------
EXAMPLE                  /export/home/oracle10/oradata/cdb10/example01.dbf
            37       0   3.0         1.0           37        0          0

PERFSTAT                 /export/home/oracle10/oradata/cdb10/perfstat01.dbf
           241       0   5.9 ###     1.0          359        0          0

SYSAUX                   /export/home/oracle10/oradata/cdb10/sysaux01.dbf
           818       0   5.5 ###     1.2        1,840        0         36    3.1

SYSTEM                   /export/home/oracle10/oradata/cdb10/system01.dbf
         1,810       0   6.4 ###     2.2          342        0          6    5.0

TEMP                     /export/home/oracle10/oradata/cdb10/temp01.dbf
         2,868       0   8.8 ###     2.4        5,220        1          0

TS_STARGUS               /export/home/oracle10/oradata/cdb10/ts_stargus_01.db
        31,284       4  19.8 ###     1.3       36,946        5        146    9.2

UNDOTBS1                 /export/home/oracle10/oradata/cdb10/undotbs01.dbf
            45       0  69.6 ###     1.0       56,258        8        162  437.6

USERS                    /export/home/oracle10/oradata/cdb10/users01.dbf
            37       0   1.1         1.0           37        0          0

          -------------------------------------------------------------
File Read Histogram Stats  DB/Inst: CDB10/cdb10  Snaps: 114-116
->Number of single block reads in each time range
->ordered by Tablespace, File

Tablespace               Filename
------------------------ ----------------------------------------------------
    0 - 2 ms     2 - 4 ms    4 - 8 ms     8 - 16 ms   16 - 32 ms       32+ ms
------------ ------------ ------------ ------------ ------------ ------------
PERFSTAT                 /export/home/oracle10/oradata/cdb10/perfstat01.dbf
          96           22           50           21           16            4

SYSAUX                   /export/home/oracle10/oradata/cdb10/sysaux01.dbf
         310           77          224           94           29           14

SYSTEM                   /export/home/oracle10/oradata/cdb10/system01.dbf
         565          188          446          142           36           34

TS_STARGUS               /export/home/oracle10/oradata/cdb10/ts_stargus_01.db
      13,196        1,392        4,710        4,818        2,539        2,827

UNDOTBS1                 /export/home/oracle10/oradata/cdb10/undotbs01.dbf
           1            0            0            1            1            5

TEMP                     /export/home/oracle10/oradata/cdb10/temp01.dbf
       1,986           27           78           87           39           48

          -------------------------------------------------------------
Buffer Pool Statistics  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> Standard block size Pools  D: default,  K: keep,  R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
-> Buffers: the number of buffers.  Units of K, M, G are divided by 1000

                                                            Free Writ     Buffer
            Pool         Buffer     Physical    Physical  Buffer Comp       Busy
P   Buffers Hit%           Gets        Reads      Writes   Waits Wait      Waits
--- ------- ---- -------------- ------------ ----------- ------- ---- ----------
D       38K  100     43,867,770       50,739     175,553       0    0        350
          -------------------------------------------------------------

Instance Recovery Stats  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> B: Begin snapshot,  E: End snapshot

  Targt Estd                                  Log File  Log Ckpt    Log Ckpt
  MTTR  MTTR   Recovery   Actual    Target      Size     Timeout    Interval
   (s)   (s)   Estd IOs  Redo Blks Redo Blks Redo Blks  Redo Blks  Redo Blks
- ----- ----- ---------- --------- --------- ---------- --------- ------------
B     0     9        290       556     58761     184320     58761
E     0    10        956      3056    184320     184320    456931
          -------------------------------------------------------------

Buffer Pool Advisory  DB/Inst: CDB10/cdb10  End Snap: 116
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Pool, Block Size, Buffers For Estimate

                                   Est
                                  Phys      Estimated                   Est
    Size for  Size      Buffers   Read     Phys Reads     Est Phys % dbtime
P    Est (M) Factr  (thousands)  Factr    (thousands)    Read Time  for Rds
--- -------- ----- ------------ ------ -------------- ------------ --------
D         28    .1            3    2.1          3,489       56,173     38.4
D         56    .2            7    1.8          2,947       45,783     31.3
D         84    .3           10    1.5          2,485       36,931     25.2
D        112    .4           14    1.4          2,280       33,006     22.5
D        140    .5           17    1.3          2,175       30,988     21.2
D        168    .5           21    1.3          2,102       29,589     20.2
D        196    .6           24    1.2          1,977       27,204     18.6
D        224    .7           28    1.2          1,882       25,373     17.3
D        252    .8           31    1.1          1,812       24,044     16.4
D        280    .9           35    1.0          1,671       21,336     14.6
D        308   1.0           38    1.0          1,625       20,462     14.0
D        336   1.1           42    1.0          1,589       19,770     13.5
D        364   1.2           45    1.0          1,551       19,038     13.0
D        392   1.3           49    0.9          1,527       18,580     12.7
D        420   1.4           52    0.9          1,513       18,303     12.5
D        448   1.5           55    0.9          1,497       18,003     12.3
D        476   1.5           59    0.9          1,480       17,671     12.1
D        504   1.6           62    0.9          1,460       17,301     11.8
D        532   1.7           66    0.9          1,446       17,018     11.6
D        560   1.8           69    0.9          1,425       16,620     11.3
          -------------------------------------------------------------
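Reading the advisory: the "Est Phys Read Factr" column is each candidate size's estimated reads relative to the current 308M cache (Size Factr 1.0); a quick check against the rows above:

```python
# Sketch: reproducing the Est Phys Read Factr column (reads in thousands).
reads_at_current = 1_625                   # estimate at 308M, Size Factr 1.0
print(round(3_489 / reads_at_current, 1))  # 2.1 at a 28M cache
print(round(1_425 / reads_at_current, 1))  # 0.9 at a 560M cache
```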

Buffer wait Statistics  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> ordered by wait time desc, waits desc

Class                        Waits Total Wait Time (s) Avg Time (ms)
---------------------- ----------- ------------------- -------------
undo header                    162                  71           438
data block                     183                   1             8
segment header                   5                   0             2
          -------------------------------------------------------------
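The Avg Time column is simply total wait time over waits; note the 438 ms undo header average lines up with the 437.6 ms average buffer waits reported for UNDOTBS1 in the Tablespace IO section:

```python
# Sketch: Avg Time (ms) = Total Wait Time (s) / Waits * 1000.
waits, total_wait_s = 162, 71              # undo header class above
print(round(total_wait_s / waits * 1000))  # about 438 ms per wait
```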
PGA Aggr Target Stats  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> B: Begin snap   E: End snap (rows identified with B or E contain data
   which is absolute i.e. not diffed over the interval)
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used    - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem    - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem   - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem    - percentage of workarea memory under manual control

PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
--------------- ---------------- -------------------------
           99.5            6,690                        32

                                             %PGA  %Auto   %Man
  PGA Aggr  Auto PGA   PGA Mem    W/A PGA    W/A    W/A    W/A   Global Mem
  Target(M) Target(M)  Alloc(M)   Used(M)    Mem    Mem    Mem    Bound(K)
- --------- --------- ---------- ---------- ------ ------ ------ ----------
B       200       136       96.9        0.0     .0     .0     .0     40,960
E       200       118      166.8       28.8   17.3  100.0     .0     40,960
          -------------------------------------------------------------
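The PGA Cache Hit % shown above follows from the other two columns: work-area megabytes processed entirely in memory, divided by total work-area traffic (processed plus extra read/written). A one-line check:

```python
# PGA cache hit % = W/A MB processed / (W/A MB processed + extra W/A MB read/written)
processed, extra = 6_690, 32
hit_pct = round(processed / (processed + extra) * 100, 1)
print(hit_pct)  # 99.5, matching the reported value
```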

PGA Aggr Target Histogram  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> Optimal Executions are purely in-memory operations

    Low    High
Optimal Optimal    Total Execs Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- ------------- ------------ ------------
     2K      4K         66,759        66,759            0            0
    64K    128K             42            42            0            0
   128K    256K              4             4            0            0
   256K    512K             68            68            0            0
   512K   1024K          4,553         4,553            0            0
     1M      2M          1,790         1,790            0            0
     4M      8M             20            16            4            0
     8M     16M             22            22            0            0
    16M     32M              4             4            0            0
          -------------------------------------------------------------
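In the histogram, Total Execs is the sum of the optimal, one-pass, and multi-pass columns; the 4M-8M bucket is the only one here with any one-pass (spill-to-disk) work:

```python
# Total Execs = Optimal + 1-Pass + M-Pass for each histogram bucket;
# checked against the 4M-8M row, the only one with 1-pass executions.
optimal, one_pass, m_pass = 16, 4, 0
assert optimal + one_pass + m_pass == 20  # reported Total Execs
```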

PGA Memory Advisory  DB/Inst: CDB10/cdb10  End Snap: 116
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
   where Estd PGA Overalloc Count is 0

                                       Estd Extra    Estd PGA   Estd PGA
PGA Target    Size           W/A MB   W/A MB Read/      Cache  Overalloc
  Est (MB)   Factr        Processed Written to Disk     Hit %      Count
---------- ------- ---------------- ---------------- -------- ----------
        25     0.1         68,983.4         56,207.3     55.0      1,210
        50     0.3         68,983.4         52,212.1     57.0      1,035
       100     0.5         68,983.4         21,393.4     76.0          0
       150     0.8         68,983.4          7,199.5     91.0          0
       200     1.0         68,983.4          7,157.5     91.0          0
       240     1.2         68,983.4          6,802.2     91.0          0
       280     1.4         68,983.4          6,802.2     91.0          0
       320     1.6         68,983.4          6,802.2     91.0          0
       360     1.8         68,983.4          6,802.2     91.0          0
       400     2.0         68,983.4          6,802.2     91.0          0
       600     3.0         68,983.4          6,623.8     91.0          0
       800     4.0         68,983.4          6,623.8     91.0          0
     1,200     6.0         68,983.4          6,623.8     91.0          0
     1,600     8.0         68,983.4          6,623.8     91.0          0
          -------------------------------------------------------------
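The advisory's Estd PGA Cache Hit % appears to be the same ratio applied to the estimated traffic at each candidate target. The report rounds to whole percents from internals it does not print, so only an approximate recomputation is possible:

```python
# Estd cache hit % ~= processed / (processed + estd extra read/written) * 100.
# The report rounds from finer-grained internals, so allow a small tolerance.
processed = 68_983.4
for extra, reported in [(56_207.3, 55.0), (7_157.5, 91.0)]:
    est = processed / (processed + extra) * 100
    assert abs(est - reported) < 1.0  # within whole-percent rounding
```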
Process Memory Summary Stats  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> B: Begin snap   E: End snap
-> All rows below contain absolute values (i.e. not diffed over the interval)
-> Max Alloc is Maximum PGA Allocation size at snapshot time
   Hist Max Alloc is the Historical Max Allocation for still-connected processes
-> Num Procs or Allocs:  For Begin/End snapshot lines, it is the number of
   processes. For Category lines, it is the number of allocations
-> ordered by Begin/End snapshot, Alloc (MB) desc

                                                                  Hist   Num
                                          Avg    Std Dev   Max    Max   Procs
             Alloc     Used    Freeabl   Alloc    Alloc   Alloc  Alloc    or
  Category   (MB)      (MB)      (MB)     (MB)    (MB)    (MB)    (MB)  Allocs
- -------- --------- --------- -------- -------- ------- ------- ------ ------
B --------      97.0      50.1     21.2      2.6     4.9      31     48     38
  Other         72.3                         1.9     4.9      31     31     38
  Freeable      21.2        .0                .8      .4       2            26
  SQL            2.5       1.2                .1      .1       0     46     29
  PL/SQL         1.0        .6                .0      .0       0      0     36
E --------     166.9      97.7     38.3      3.9     8.3      48     48     43
  Other         90.3                         2.1     4.6      31     31     43
  Freeable      38.3        .0               1.2     2.3      14            32
  SQL           36.8      34.8               1.1     5.5      32     46     32
  PL/SQL         1.4        .7                .0      .0       0      0     41
          -------------------------------------------------------------
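In the summary above, the per-category allocations (Other, Freeable, SQL, PL/SQL) add up to each snapshot's total Alloc, within the one-decimal display rounding:

```python
# Category Alloc (MB) rows sum to the B/E snapshot totals,
# modulo the one-decimal display rounding.
begin = {"Other": 72.3, "Freeable": 21.2, "SQL": 2.5, "PL/SQL": 1.0}   # B total 97.0
end   = {"Other": 90.3, "Freeable": 38.3, "SQL": 36.8, "PL/SQL": 1.4}  # E total 166.9
assert abs(sum(begin.values()) - 97.0) < 0.2
assert abs(sum(end.values()) - 166.9) < 0.2
```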

Top Process Memory (by component)  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> ordered by Begin/End snapshot, Alloc (MB) desc

                        Alloc   Used   Freeabl     Max      Hist Max
     PId Category       (MB)    (MB)     (MB)   Alloc (MB) Alloc (MB)
- ------ ------------- ------- ------- -------- ---------- ----------
B      6 LGWR --------    30.7    14.4       .1       30.7       30.7
         Other            30.6                        30.6       30.6
         Freeable           .1      .0                  .1
         PL/SQL             .0      .0                  .0         .0
      11 MMON --------     7.6     5.9      1.4        7.6        7.7
         Other             6.1                         6.1        6.1
         Freeable          1.4      .0                 1.4
         SQL                .1      .0                  .1        1.4
         PL/SQL             .0      .0                  .0         .1
      32  ------------     3.7     2.3      1.0        3.7        5.0
         Other             2.3                         2.3        2.3
         Freeable          1.0      .0                 1.0
         SQL                .2      .1                  .2        2.0
         PL/SQL             .2      .0                  .2         .2
      16 J001 --------     3.5     1.0      1.8        3.5        3.5
         Freeable          1.8      .0                 1.8
         Other             1.6                         1.6        1.6
         SQL                .1      .1                  .1        2.2
         PL/SQL             .1      .0                  .1         .1
      36  ------------     3.3     2.5       .8        3.3        4.9
         Other             2.4                         2.4        2.4
         Freeable           .8      .0                  .8
         SQL                .1      .1                  .1        2.8
         PL/SQL             .0      .0                  .0         .0
      25  ------------     2.8     1.7       .8        2.8       48.2
         Other             1.7                         1.7        1.7
         Freeable           .8      .0                  .8
         SQL                .2      .1                  .2       45.7
         PL/SQL             .0      .0                  .0         .0
      34  ------------     2.7     1.5      1.0        2.7        3.7
         Other             1.5                         1.5        1.5
         Freeable          1.0      .0                 1.0
         SQL                .1      .0                  .1        1.4
         PL/SQL             .1      .0                  .1         .1
      38  ------------     2.6     1.7       .9        2.6        3.9
         Other             1.5                         1.5        1.5
         Freeable           .9      .0                  .9
         SQL                .1      .0                  .1        1.9
         PL/SQL             .1      .0                  .1         .1
      23  ------------     2.5     1.3       .9        2.5        5.8
         Other             1.4                         1.4        1.4
         Freeable           .9      .0                  .9
         SQL                .1      .0                  .1        4.6
         PL/SQL             .0      .0                  .0         .0
      24  ------------     2.5     1.2       .9        2.5        5.7
         Other             1.4                         1.4        1.4
         Freeable           .9      .0                  .9
         SQL                .1      .0                  .1        4.6
         PL/SQL             .0      .0                  .0         .0
       8 SMON --------     2.3      .6      1.4        2.3        2.4
         Freeable          1.4      .0                 1.4
         Other              .8                          .8         .8
         SQL                .1      .0                  .1         .8

B      8 PL/SQL             .0      .0                  .0         .0
      37  ------------     2.2     1.3       .9        2.2       11.1
         Other             1.3                         1.3        5.2
         Freeable           .9      .0                  .9
         PL/SQL             .0      .0                  .0         .0
         SQL                .0      .0                  .0        5.0
      40  ------------     2.0     1.1       .9        2.0        2.6
         Freeable           .9      .0                  .9
         Other              .7                          .7         .7
         SQL                .4      .2                  .4        1.5
         PL/SQL             .0      .0                  .0         .0
      27  ------------     2.0      .8       .9        2.0        3.8
         Other             1.0                         1.0        1.0
         Freeable           .9      .0                  .9
         SQL                .0      .0                  .0        2.3
         PL/SQL             .0      .0                  .0         .0
      10 CJQ0 --------     1.9      .7       .8        1.9        2.3
         Other             1.1                         1.1        1.1
         Freeable           .8      .0                  .8
         SQL                .1      .0                  .1         .9
         PL/SQL             .0      .0                  .0         .0
      42  ------------     1.9      .5      1.1        1.9        2.3
         Freeable          1.1      .0                 1.1
         Other              .8                          .8         .8
         SQL                .1      .0                  .1        1.2
         PL/SQL             .0      .0                  .0         .0
      33  ------------     1.9     1.5       .3        1.9        2.3
         Other             1.5                         1.5        1.5
         Freeable           .3      .0                  .3
         SQL                .1      .1                  .1         .6
         PL/SQL             .0      .0                  .0         .0
      35 TNS V1-V3 ---     1.9      .7       .3        1.9        1.9
         Other             1.5                         1.5        1.5
         Freeable           .3      .0                  .3
         SQL                .1      .0                  .1         .4
         PL/SQL             .0      .0                  .0         .0
      31  ------------     1.7      .5      1.0        1.7        4.8
         Freeable          1.0      .0                 1.0
         Other              .7                          .7         .7
         SQL                .1      .0                  .1        3.8
         PL/SQL             .0      .0                  .0         .0
      28  ------------     1.7      .5      1.0        1.7        4.0
         Freeable          1.0      .0                 1.0
         Other              .6                          .6         .6
         SQL                .0      .0                  .0        3.3
         PL/SQL             .0      .0                  .0         .0
E     31  ------------    48.1    33.5     13.6       48.1       48.1
         SQL              32.1    32.0                32.1       45.7
         Freeable         13.6      .0                13.6
         Other             2.3                         2.3        2.3
         PL/SQL             .0      .0                  .0         .0
       6 LGWR --------    30.7    14.4       .1       30.7       30.7
         Other            30.6                        30.6       30.6

E      6 Freeable           .1      .0                  .1
         PL/SQL             .0      .0                  .0         .0
      11 MMON --------     7.6     5.9      1.3        7.6        7.7
         Other             6.2                         6.2        6.2
         Freeable          1.3      .0                 1.3
         PL/SQL             .0      .0                  .0         .1
         SQL                .0      .0                  .0        1.4
      28  ------------     6.0     1.5      1.0        6.0       30.3
         Other             4.8                         4.8        4.8
         Freeable          1.0      .0                 1.0
         SQL                .2      .1                  .2       27.6
         PL/SQL             .0      .0                  .0         .0
      42  ------------     5.5     4.3      1.0        5.5       16.4
         Other             3.9                         3.9        7.4
         Freeable          1.0      .0                 1.0
         SQL                .4      .2                  .4        7.8
         PL/SQL             .2      .0                  .2         .2
      36  ------------     4.3     3.4       .8        4.3        6.0
         Other             3.3                         3.3        3.3
         Freeable           .8      .0                  .8
         SQL                .1      .1                  .1        2.8
         PL/SQL             .0      .0                  .0         .0
      16 J001 --------     3.8      .9      1.9        3.8        3.8
         Freeable          1.9      .0                 1.9
         Other             1.7                         1.7        1.7
         SQL                .1      .1                  .1        2.3
         PL/SQL             .1      .0                  .1         .1
      32  ------------     3.7     2.3      1.0        3.7        5.0
         Other             2.3                         2.3        2.3
         Freeable          1.0      .0                 1.0
         SQL                .2      .1                  .2        2.0
         PL/SQL             .2      .0                  .2         .2
      43 m000 --------     3.2      .8      1.2        3.2        3.2
         Other             1.9                         1.9        1.9
         Freeable          1.2      .0                 1.2
         SQL                .1      .0                  .1        1.3
         PL/SQL             .0      .0                  .0         .0
      30  ------------     2.8     1.5       .4        2.8        3.4
         Other             2.2                         2.2        2.2
         Freeable           .4      .0                  .4
         SQL                .1      .0                  .1        1.7
         PL/SQL             .1      .0                  .1         .1
      24  ------------     2.6     1.5      1.0        2.6       48.0
         Other             1.4                         1.4        1.4
         Freeable          1.0      .0                 1.0
         SQL                .2      .1                  .2       45.7
         PL/SQL             .0      .0                  .0         .0
      38  ------------     2.6     1.7       .9        2.6        3.9
         Other             1.5                         1.5        1.5
         Freeable           .9      .0                  .9
         SQL                .1      .0                  .1        1.9
         PL/SQL             .1      .0                  .1         .1
      41 J003 --------     2.6     1.8       .0        2.6        2.6

E     41 Other             1.3                         1.3        1.3
         SQL               1.2     1.2                 1.2        1.2
         PL/SQL             .0      .0                  .0         .0
      25  ------------     2.5     1.6       .8        2.5       48.2
         Other             1.5                         1.5        1.6
         Freeable           .8      .0                  .8
         SQL                .2      .1                  .2       45.7
         PL/SQL             .0      .0                  .0         .0
      26  ------------     2.5     1.5      1.0        2.5        6.2
         Other             1.4                         1.4        1.4
         Freeable          1.0      .0                 1.0
         SQL                .1      .0                  .1        4.6
         PL/SQL             .0      .0                  .0         .0
      21  ------------     2.5     1.3      1.0        2.5        6.3
         Other             1.3                         1.3        1.3
         Freeable          1.0      .0                 1.0
         SQL                .1      .0                  .1        4.6
         PL/SQL             .0      .0                  .0         .0
      27  ------------     2.5     1.4      1.0        2.5        6.3
         Other             1.3                         1.3        1.3
         Freeable          1.0      .0                 1.0
         SQL                .1      .0                  .1        4.6
         PL/SQL             .0      .0                  .0         .0
      20  ------------     2.4     1.5       .3        2.4        2.4
         Other             1.9                         1.9        1.9
         Freeable           .3      .0                  .3
         PL/SQL             .1      .0                  .1         .1
         SQL                .1      .0                  .1         .8
      23  ------------     2.3     1.4       .9        2.3        5.8
         Other             1.2                         1.2        1.2
         Freeable           .9      .0                  .9
         SQL                .2      .1                  .2        4.6
         PL/SQL             .0      .0                  .0         .0
       8 SMON --------     2.3      .6      1.4        2.3        2.4
         Freeable          1.4      .0                 1.4
         Other              .8                          .8         .8
         SQL                .1      .0                  .1         .8
         PL/SQL             .0      .0                  .0         .0
          -------------------------------------------------------------
Enqueue activity  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> only enqueues with waits are shown
-> Enqueue stats gathered prior to 10g should not be compared with 10g data
-> ordered by Wait Time desc, Waits desc

Enqueue Type (Request Reason)
------------------------------------------------------------------------------
    Requests    Succ Gets Failed Gets       Waits Wt Time (s)  Av Wt Time(ms)
------------ ------------ ----------- ----------- ------------ --------------
RO-Multiple Object Reuse (fast object reuse)
         828          828           0          92          164       1,778.10
CF-Controlfile Transaction
       4,380        4,378           2          21            8         394.57
          -------------------------------------------------------------
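Av Wt Time(ms) is Wt Time over Waits. Recomputing from the displayed whole-second wait times gets close but not exact, since the report averages from finer-grained timings:

```python
# Av Wt Time (ms) ~= Wt Time (s) / Waits * 1000. The seconds column is
# rounded, so the recomputed averages differ from the report by a few ms.
for waits, wt_s, reported_ms in [(92, 164, 1_778.10),   # RO enqueue
                                 (21, 8, 394.57)]:      # CF enqueue
    assert abs(wt_s / waits * 1000 - reported_ms) < 20
```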

Undo Segment Summary  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count,  OOS - Out Of Space count
-> Undo segment block stats:
   uS - unexpired Stolen,   uR - unexpired Released,   uU - unexpired reUsed
   eS - expired   Stolen,   eR - expired   Released,   eU - expired   reUsed

Undo   Num Undo       Number of  Max Qry     Max Tx Min/Max   STO/  uS/uR/uU/
 TS# Blocks (K)    Transactions  Len (s)      Concy TR (mins) OOS   eS/eR/eU
---- ---------- --------------- -------- ---------- --------- ----- -----------
   1      100.6          35,353      151          6 15/16.6   0/0   0/0/0/0/0/0
          -------------------------------------------------------------


Undo Segment Stats  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> Most recent 35 Undostat rows, ordered by End Time desc

                Num Undo    Number of Max Qry  Max Tx Tun Ret STO/  uS/uR/uU/
End Time          Blocks Transactions Len (s)   Concy  (mins) OOS   eS/eR/eU
------------ ----------- ------------ ------- ------- ------- ----- -----------
30-Jul 16:53      22,174        6,157      68       4      15 0/0   0/0/0/0/0/0
30-Jul 16:43      25,060        8,393      91       6      15 0/0   0/0/0/0/0/0
30-Jul 16:33       8,605        1,924       0       4      15 0/0   0/0/0/0/0/0
30-Jul 16:23       4,861        1,331       0       3      15 0/0   0/0/0/0/0/0
30-Jul 16:13         669          558       0       3      15 0/0   0/0/0/0/0/0
30-Jul 16:03         163          577      39       3      15 0/0   0/0/0/0/0/0
30-Jul 15:53         641          670       0       3      15 0/0   0/0/0/0/0/0
30-Jul 15:43      18,180        8,713     151       6      17 0/0   0/0/0/0/0/0
30-Jul 15:33      13,650        4,299       0       3      15 0/0   0/0/0/0/0/0
30-Jul 15:23       5,704        1,470       0       4      15 0/0   0/0/0/0/0/0
30-Jul 15:13         752          824       0       3      15 0/0   0/0/0/0/0/0
30-Jul 15:03         156          437       0       3      15 0/0   0/0/0/0/0/0
          -------------------------------------------------------------
Latch Activity  DB/Inst: CDB10/cdb10  Snaps: 114-116
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
  willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0

                                           Pct    Avg   Wait                 Pct
                              Get          Get   Slps   Time       NoWait NoWait
Latch                       Requests      Miss  /Miss    (s)     Requests   Miss
------------------------ -------------- ------ ------ ------ ------------ ------
AWR Alerted Metric Eleme         38,444    0.0             0            0
Consistent RBA                    5,792    0.0             0            0
FOB s.o list latch                  299    0.0             0            0
In memory undo latch             48,296    0.0    1.0      1        8,337    0.0
JS mem alloc latch                   12    0.0             0            0
JS queue access latch                12    0.0             0            0
JS queue state obj latch         51,762    0.0             0            0
JS slv state obj latch              265    0.0             0            0
KGX_diag                              1    0.0             0            0
KMG MMAN ready and start          2,393    0.0             0            0
KTF sga latch                        16    0.0             0        2,170    0.0
KWQMN job cache list lat            209    0.0             0            0
KWQP Prop Status                      2    0.0             0            0
MQL Tracking Latch                    0                    0          141    0.0
Memory Management Latch               0                    0        2,393    0.0
OS process                          918    0.0             0            0
OS process allocation             3,010    0.0             0            0
OS process: request allo            265    0.0             0            0
PL/SQL warning settings           1,857    0.0             0            0
SQL memory manager latch              4    0.0             0        2,362    0.0
SQL memory manager worka        187,821    0.0             0            0
Shared B-Tree                       275    0.0             0            0
active checkpoint queue          19,980    0.0    0.0      0            0
active service list              15,541    0.0             0        2,462    0.0
archive control                   2,591    0.0             0            0
begin backup scn array              184    0.0             0            0
cache buffer handles             44,767    0.0             0            0
cache buffers chains         81,274,536    0.0    0.0      0      332,470    0.0
cache buffers lru chain         776,957    0.0    0.0      0       48,710    0.2
cache table scan latch                0                    0        2,649    0.0
channel handle pool latc            932    0.0             0            0
channel operations paren         49,152    0.1    0.0      0            0
checkpoint queue latch          310,068    0.0    0.0      0      165,813    0.0
client/application info           3,920    0.0             0            0
commit callback allocati            188    0.0             0            0
compile environment latc         14,532    0.0             0            0
dictionary lookup                   110    0.0             0            0
dml lock allocation              43,741    0.0             0            0
dummy allocation                    509    0.0             0            0
enqueue hash chains             313,028    0.0    0.0      0        8,626    0.0
enqueues                        194,211    0.0             0            0
event group latch                   135    0.0             0            0
file cache latch                  1,870    0.0             0            0
global KZLD latch for me             47    0.0             0            0
global tx hash mapping           19,501    0.0             0            0
hash table column usage             625    0.0             0      206,624    0.0
hash table modification             272    0.0             0            0
job workq parent latch                0                    0          244    0.0
job_queue_processes para            238    0.0             0            0
kks stats                         2,827    0.0             0            0
ksuosstats global area            2,864    0.0    1.0      0            0
ktm global data                     384    0.0             0            0
kwqbsn:qsga                         275    0.0             0            0
lgwr LWN SCN                      5,840    0.1    0.0      0            0
library cache                 2,540,877    0.0    0.0      0        3,538    0.0
library cache load lock           2,219    0.0    0.0      0          432    0.0
library cache lock              125,561    0.0             0            0
library cache lock alloc          3,399    0.0             0            0
library cache pin             2,345,637    0.0    0.0      0            0
library cache pin alloca          1,624    0.0             0            0
list of block allocation          4,519    0.0             0            0
loader state object free            794    0.0             0            0
message pool operations           1,126    0.0             0            0
messages                        100,936    0.0    0.0      0            0
mostly latch-free SCN             5,853    0.2    0.0      0            0
multiblock read objects           6,668    0.0             0            0
ncodef allocation latch             140    0.0             0            0
object queue header heap            789    0.0             0        5,293    0.0
object queue header oper        946,784    0.0    0.0      0            0
object stats modificatio            706    0.4    0.0      0            0
parallel query alloc buf            948    0.0             0            0
parameter list                       95    0.0             0            0
parameter table allocati            258    0.0             0            0
post/wait queue                   7,277    0.0    0.0      0        4,742    0.0
process allocation                  265    0.0             0          135    0.0
process group creation              265    0.0             0            0
qmn task queue latch              1,028    0.0             0            0
redo allocation                  41,085    0.1    0.0      0    5,036,618    0.0
redo copy                             0                    0    5,036,686    0.0
redo writing                     42,999    0.0    0.0      0            0
resmgr group change latc            854    0.0             0            0
resmgr:actses active lis          1,768    0.0             0            0
resmgr:actses change gro            321    0.0             0            0
resmgr:free threads list            500    0.0             0            0
resmgr:schema config              1,192    0.0             0            0
row cache objects               726,241    0.1    0.0      0        2,901    0.0
rules engine aggregate s             32    0.0             0            0
rules engine rule set st            264    0.0             0            0
sequence cache                    8,908    0.0             0            0
session allocation              239,923    0.0    0.0      0            0
session idle bit              2,214,854    0.0    0.0      0            0
session state list latch            571    0.0             0            0
session switching                   140    0.0             0            0
session timer                     2,462    0.0             0            0
shared pool                     109,418    0.1    0.1      0            0
simulator hash latch          2,579,022    0.0    0.0      0            0
simulator lru latch           2,536,133    0.0    0.0      0       19,731    0.0
slave class                           4    0.0             0            0
slave class create                   17    0.0             0            0
sort extent pool                  5,868    0.0    0.0      0            0
state object free list                4    0.0             0            0
statistics aggregation              280    0.0             0            0
temp lob duration state               3    0.0             0            0
temporary table state ob              5    0.0             0            0
threshold alerts latch              607    0.0             0            0
transaction allocation        1,971,670    0.0             0            0
transaction branch alloc          4,366    0.0             0            0
undo global data                885,891    0.0    0.0      0            0
user lock                           428    0.0             0            0
          -------------------------------------------------------------
Latch Sleep breakdown  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> ordered by misses desc

                                       Get                                 Spin
Latch Name                        Requests       Misses      Sleeps        Gets
-------------------------- --------------- ------------ ----------- -----------
cache buffers chains            81,274,536        7,697           6       7,691
cache buffers lru chain            776,957          334           1         333
library cache                    2,540,877          305          14         291
shared pool                        109,418           75           9          66
object queue header operat         946,784           67           3          64
In memory undo latch                48,296            1           1           0
ksuosstats global area               2,864            1           1           0
          -------------------------------------------------------------
Latch Miss Sources  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> only latches with sleeps are shown
-> ordered by name, sleeps desc

                                                     NoWait              Waiter
Latch Name               Where                       Misses     Sleeps   Sleeps
------------------------ -------------------------- ------- ---------- --------
In memory undo latch     ktiFlush: child                  0          1        1
cache buffers chains     kcbgtcr: kslbegin excl           0          4        1
cache buffers chains     kcbgtcr: fast path               0          2        5
cache buffers chains     kcbgcur: kslbegin                0          1        0
cache buffers chains     kcbbxsv                          0          1        0
cache buffers chains     kcbrls: kslbegin                 0          1        3
cache buffers chains     kcbnew: new latch again          0          1        0
cache buffers chains     kcbgtcr: kslbegin shared         0          1        1
cache buffers lru chain  kcbbxsv: move to being wri       0          1        0
ksuosstats global area   ksugetosstat                     0          1        1
library cache lock       kgllkdl: child: no lock ha       0          5        0
object queue header oper kcbo_switch_q_bg                 0          1        0
object queue header oper kcbw_unlink_q_bg                 0          1        0
object queue header oper kcbo_write_q                     0          1        0
shared pool              kghalo                           0          9        5
shared pool              kghfrunp: clatch: nowait         0          9        0
          -------------------------------------------------------------
Mutex Sleep  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> ordered by Wait Time desc

                                                                         Wait
Mutex Type         Location                                 Sleeps     Time (s)
------------------ -------------------------------- -------------- ------------
Cursor Parent      kkspsc0 [KKSPRTLOC26]                         7          0.0
Cursor Parent      kkspsc0 [KKSPRTLOC27]                         3          0.0
Cursor Parent      kksfbc [KKSPRTLOC2]                           5          0.0
          -------------------------------------------------------------
Dictionary Cache Stats  DB/Inst: CDB10/cdb10  Snaps: 114-116
->"Pct Misses"  should be very low (< 2% in most cases)
->"Final Usage" is the number of cache entries being used in End Snapshot

                                   Get    Pct    Scan   Pct      Mod      Final
Cache                         Requests   Miss    Reqs  Miss     Reqs      Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_awr_control                     137    0.0       0              4          1
dc_database_links                  106    0.0       0              0          1
dc_files                           147    0.0       0              0          7
dc_global_oids                  10,527    0.2       0              0         27
dc_histogram_data                8,275    3.8       0              0        536
dc_histogram_defs               20,526    6.5       0              0      1,449
dc_object_grants                    62   37.1       0              0         26
dc_object_ids                   21,680    2.0       0              0        376
dc_objects                       4,476    9.0       0            111        371
dc_profiles                        212    0.0       0              0          2
dc_rollback_segments             1,364    0.0       0              0         22
dc_segments                      5,535    5.1       0            941        222
dc_sequences                       149    2.0       0            149          4
dc_tablespace_quotas             2,294    0.0       0              0          2
dc_tablespaces                 110,983    0.0       0              0          8
dc_usernames                       739    0.3       0              0          8
dc_users                        59,259    0.0       0              0         44
outstanding_alerts                 250    6.4       0             32         16
          -------------------------------------------------------------


Library Cache Activity  DB/Inst: CDB10/cdb10  Snaps: 114-116
->"Pct Misses"  should be very low

                         Get  Pct        Pin        Pct               Invali-
Namespace           Requests  Miss     Requests     Miss     Reloads  dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY                   1,266    0.6         13,678    0.1          4        0
CLUSTER                   45    4.4             72    4.2          0        0
INDEX                    123    4.1            243    7.8         14        0
SQL AREA                 614   70.0      1,106,350    0.1        587      357
TABLE/PROCEDURE        2,220    6.2         32,828    3.3        521        0
TRIGGER                   75    0.0          1,120    0.3          3        0
          -------------------------------------------------------------
Rule Sets  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> * indicates Rule Set activity (re)started between Begin/End snaps
-> Top 25 ordered by Evaluations desc

                                                               No-SQL  SQL
Rule                                *     Eval/sec Reloads/sec Eval % Eval %
----------------------------------- - ------------ ----------- ------ ------
SYS.ALERT_QUE_R                                  0           0      0      0
          -------------------------------------------------------------
Shared Pool Advisory  DB/Inst: CDB10/cdb10  End Snap: 116
-> SP: Shared Pool     Est LC: Estimated Library Cache   Factr: Factor
-> Note there is often a 1:Many correlation between a single logical object
   in the Library Cache, and the physical number of memory objects associated
   with it.  Therefore comparing the number of Lib Cache objects (e.g. in
   v$librarycache), with the number of Lib Cache Memory Objects is invalid

                                        Est LC Est LC  Est LC Est LC
    Shared    SP   Est LC                 Time   Time    Load   Load      Est LC
      Pool  Size     Size       Est LC   Saved  Saved    Time   Time         Mem
  Size (M) Factr      (M)      Mem Obj     (s)  Factr     (s)  Factr    Obj Hits
---------- ----- -------- ------------ ------- ------ ------- ------ -----------
        96    .8       17        2,126 #######    1.0   3,022    1.5  17,814,526
       112    .9       32        3,178 #######    1.0   2,129    1.0  17,880,989
       128   1.0       45        4,017 #######    1.0   2,030    1.0  17,889,372
       144   1.1       60        5,638 #######    1.0   2,012    1.0  17,891,230
       160   1.3       75        6,925 #######    1.0   2,006    1.0  17,892,241
       176   1.4       90        8,389 #######    1.0   1,999    1.0  17,893,491
       192   1.5      105        9,841 #######    1.0   1,994    1.0  17,894,332
       208   1.6      120       11,208 #######    1.0   1,988    1.0  17,895,911
       224   1.8      135       12,039 #######    1.0   1,965    1.0  17,897,261
       240   1.9      150       12,962 #######    1.0   1,955    1.0  17,898,124
       256   2.0      165       13,918 #######    1.0   1,950    1.0  17,898,777
          -------------------------------------------------------------
SGA Memory Summary  DB/Inst: CDB10/cdb10  Snaps: 114-116

                                                        End Size (Bytes)
SGA regions                      Begin Size (Bytes)       (if different)
------------------------------ -------------------- --------------------
Database Buffers                        322,961,408
Fixed Size                                1,979,648
Redo Buffers                              6,406,144
Variable Size                           159,386,368
                               -------------------- --------------------
sum                                     490,733,568
          -------------------------------------------------------------


SGA breakdown difference  DB/Inst: CDB10/cdb10  Snaps: 114-116
-> Top 35 rows by size, ordered by Pool, Name (note rows with null values for
   Pool column, or Names showing free memory are always shown)
-> Null value for Begin MB or End MB indicates the size of that Pool/Name was
   insignificant, or zero in that snapshot

Pool   Name                                 Begin MB         End MB  % Diff
------ ------------------------------ -------------- -------------- --------
java p free memory                              24.0           24.0     0.00
shared ASH buffers                               4.0            4.0     0.00
shared CCursor                                   6.4            6.1    -4.48
shared FileOpenBlock                             1.4            1.4     0.00
shared Heap0: KGL                                3.4            3.4     1.17
shared KCB Table Scan Buffer                     3.8            3.8     0.00
shared KGLS heap                                 3.5            1.7   -51.17
shared KQR M PO                                  2.1                 -100.00
shared KSFD SGA I/O b                            3.8            3.8     0.00
shared PCursor                                   4.3            4.3    -0.04
shared PL/SQL MPCODE                             3.5            4.6    30.43
shared db_block_hash_buckets                     2.2            2.2     0.00
shared event statistics per sess                 1.5            1.5     0.00
shared free memory                              10.9           13.7    25.39
shared kglsim hash table bkts                    4.0            4.0     0.00
shared kglsim heap                               1.3            1.3     0.00
shared kglsim object batch                       1.8            1.8     0.00
shared kks stbkt                                 1.5            1.5     0.00
shared library cache                             9.2            9.2    -0.04
shared private strands                           2.3            2.3     0.00
shared row cache                                 7.1            7.1     0.00
shared sql area                                 24.0           23.6    -1.77
       buffer_cache                            308.0          308.0     0.00
       fixed_sga                                 1.9            1.9     0.00
       log_buffer                                6.1            6.1     0.00
          -------------------------------------------------------------
SQL Memory Statistics  DB/Inst: CDB10/cdb10  Snaps: 114-116

                                   Begin            End         % Diff
                          -------------- -------------- --------------
   Avg Cursor Size (KB):           44.50          57.82          23.05
 Cursor to Parent ratio:            1.10           1.18           6.74
          Total Cursors:           1,354          1,294          -4.64
          Total Parents:           1,232          1,098         -12.20
          -------------------------------------------------------------
init.ora Parameters  DB/Inst: CDB10/cdb10  Snaps: 114-116

                                                                  End value
Parameter Name                Begin value                       (if different)
----------------------------- --------------------------------- --------------
audit_file_dest               /export/home/oracle10/admin/cdb10
background_dump_dest          /export/home/oracle10/admin/cdb10
compatible                    10.2.0.1.0
control_files                 /export/home/oracle10/oradata/cdb
core_dump_dest                /export/home/oracle10/admin/cdb10
db_block_size                 8192
db_cache_size                 322961408
db_domain
db_file_multiblock_read_count 8
db_name                       cdb10
db_recovery_file_dest         /export/home/oracle10/flash_recov
db_recovery_file_dest_size    2147483648
dispatchers                   (PROTOCOL=TCP) (SERVICE=cdb10XDB)
job_queue_processes           10
open_cursors                  300
pga_aggregate_target          209715200
processes                     150
remote_login_passwordfile     EXCLUSIVE
sga_max_size                  490733568
sga_target                    0
shared_pool_size              134217728
undo_management               AUTO
undo_tablespace               UNDOTBS1
user_dump_dest                /export/home/oracle10/admin/cdb10
          -------------------------------------------------------------

End of Report ( sp_114_116.lst )

}}}
-- from http://www.perfvision.com/statspack/sp_9i.txt
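The 9i report below tabulates Instance Efficiency percentages that are derived from the Load Profile counters. As a minimal sketch (not Oracle's exact formulas, which run over the raw V$SYSSTAT deltas rather than the rounded per-second rates shown in the report), the Soft Parse and Execute-to-Parse ratios can be reproduced from the per-second figures in the report's Load Profile; because those rates are rounded, the results differ slightly from the report's own 99.20 / 99.99:

```python
# Sketch of two Instance Efficiency ratios from a Statspack Load Profile.
# Inputs are the rounded per-second rates printed in the 9i report below;
# Statspack itself computes these from raw counter deltas, so expect small
# rounding differences.

def soft_parse_pct(parses, hard_parses):
    """Percent of parse calls satisfied without a hard parse."""
    return 100.0 * (parses - hard_parses) / parses

def execute_to_parse_pct(parses, executes):
    """Percent of executions that did not require a parse call."""
    return 100.0 * (1.0 - parses / executes)

parses_per_sec = 4.32         # "Parses" in the Load Profile
hard_parses_per_sec = 0.03    # "Hard parses"
executes_per_sec = 36221.39   # "Executes"

print(round(soft_parse_pct(parses_per_sec, hard_parses_per_sec), 1))    # ~99.3 (report: 99.20)
print(round(execute_to_parse_pct(parses_per_sec, executes_per_sec), 2)) # ~99.99 (report: 99.99)
```

A very high Execute-to-Parse ratio with a tiny hard-parse rate, as here, indicates cursors are being reused rather than re-parsed, which is consistent with the latch-free waits in this workload coming from buffer access rather than the shared pool.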

{{{
STATSPACK report for

DB Name         DB Id    Instance     Inst Num Release     Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------------
CDB           1745492617 cdb                 1 9.2.0.5.0   NO      limerock

            Snap Id     Snap Time      Sessions Curs/Sess Comment
            ------- ------------------ -------- --------- -------------------
Begin Snap:      35 27-Jul-07 08:50:31       16      26.7
  End Snap:      37 27-Jul-07 08:52:27       16      26.7
   Elapsed:                1.93 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
               Buffer Cache:       304M      Std Block Size:         8K
           Shared Pool Size:        32M          Log Buffer:       512K

Load Profile
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:              8,680.03             41,953.50
              Logical reads:             36,522.81            176,526.92
              Block changes:                 13.83                 66.83
             Physical reads:                  0.01                  0.04
            Physical writes:                  1.98                  9.58
                 User calls:                 21.15                102.21
                     Parses:                  4.32                 20.88
                Hard parses:                  0.03                  0.17
                      Sorts:                  5.69                 27.50
                     Logons:                  0.21                  1.00
                   Executes:             36,221.39            175,070.04
               Transactions:                  0.21

  % Blocks changed per Read:    0.04    Recursive Call %:    99.94
 Rollback per transaction %:    8.33       Rows per Sort:    84.41

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:  100.00
            Buffer  Hit   %:  100.00    In-memory Sort %:  100.00
            Library Hit   %:   99.35        Soft Parse %:   99.20
         Execute to Parse %:   99.99         Latch Hit %:   99.21
Parse CPU to Parse Elapsd %:   30.00     % Non-Parse CPU:   99.98

 Shared Pool Statistics        Begin   End
                               ------  ------
             Memory Usage %:   95.26   95.35
    % SQL with executions>1:   63.65   63.38
  % Memory for SQL w/exec>1:   67.54   67.41

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                                     % Total
Event                                               Waits    Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time                                                          195    48.93
PL/SQL lock timer                                      37         111    27.83
latch free                                          1,649          84    21.09
control file parallel write                            36           4      .89
log file parallel write                                69           3      .68
          -------------------------------------------------------------
Wait Events for DB: CDB  Instance: cdb  Snaps: 35 -37
-> s  - second
-> cs - centisecond -     100th of a second
-> ms - millisecond -    1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)

                                                                   Avg
                                                     Total Wait   wait    Waits
Event                               Waits   Timeouts   Time (s)   (ms)     /txn
---------------------------- ------------ ---------- ---------- ------ --------
PL/SQL lock timer                      37         37        111   3005      1.5
latch free                          1,649      1,649         84     51     68.7
control file parallel write            36          0          4     99      1.5
log file parallel write                69         69          3     40      2.9
SQL*Net message from dblink           594          0          1      2     24.8
log file sync                          23          0          1     31      1.0
db file parallel write                  2          0          0     35      0.1
control file sequential read          394          0          0      0     16.4
db file sequential read                 1          0          0     10      0.0
LGWR wait for redo copy                 2          0          0      5      0.1
SQL*Net more data to client           125          0          0      0      5.2
SQL*Net message to dblink             594          0          0      0     24.8
SQL*Net break/reset to clien            6          0          0      0      0.3
SQL*Net message from client         2,330          0        690    296     97.1
SQL*Net message to client           2,330          0          0      0     97.1
SQL*Net more data from clien           95          0          0      0      4.0
          -------------------------------------------------------------
Background Wait Events for DB: CDB  Instance: cdb  Snaps: 35 -37
-> ordered by wait time desc, waits desc (idle events last)

                                                                   Avg
                                                     Total Wait   wait    Waits
Event                               Waits   Timeouts   Time (s)   (ms)     /txn
---------------------------- ------------ ---------- ---------- ------ --------
control file parallel write            36          0          4     99      1.5
log file parallel write                69         69          3     40      2.9
db file parallel write                  2          0          0     35      0.1
LGWR wait for redo copy                 2          0          0      5      0.1
rdbms ipc message                     250        181        336   1343     10.4
smon timer                              1          1        300 ######      0.0
pmon timer                             87         35        113   1302      3.6
          -------------------------------------------------------------
SQL ordered by Gets for DB: CDB  Instance: cdb  Snaps: 35 -37
-> End Buffer Gets Threshold:   10000
-> Note that resources reported for PL/SQL includes the resources used by
   all SQL statements called within the PL/SQL code.  As individual SQL
   statements are also reported, it is possible and valid for the summed
   total % to exceed 100

                                                     CPU      Elapsd
  Buffer Gets    Executions  Gets per Exec  %Total Time (s)  Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
        300,002            1      300,002.0    7.1    13.52    104.77   29540053
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=13;  end loop; en
d;

        300,002            1      300,002.0    7.1    13.65    105.88   94571329
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=8;  end loop; end
;

        300,002      300,000            1.0    7.1     7.27     54.06  322898470
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=7

        300,002            1      300,002.0    7.1    13.64    106.98  404650074
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=1;  end loop; end
;

        300,002            1      300,002.0    7.1    14.15    106.41  779315540
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=3;  end loop; end
;

        300,002      300,000            1.0    7.1     7.15     48.09  895863666
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=6

        300,002      300,000            1.0    7.1     7.12     44.48  901988619
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=13

        300,002      300,000            1.0    7.1     7.40     49.95 1016842815
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=14

        300,002            1      300,002.0    7.1    13.61    104.30 1195290885
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=5;  end loop; end
;

        300,002      300,000            1.0    7.1     7.26     49.07 1213725976
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=3

        300,002      300,000            1.0    7.1     7.05     46.49 1483699112
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=9

        300,002      300,000            1.0    7.1     7.51     53.09 1541773081
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=4

        300,002      300,000            1.0    7.1     6.95     47.00 1762017642
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=5

        300,002            1      300,002.0    7.1    13.61    103.74 1961700584
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=14;  end loop; en
d;

        300,002      300,000            1.0    7.1     7.53     44.97 1992710801
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=8

        300,002            1      300,002.0    7.1    13.71    105.37 2424958502
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=12;  end loop; en
d;

        300,002            1      300,002.0    7.1    13.76    103.51 2715777206
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=2;  end loop; end
;

        300,002      300,000            1.0    7.1     7.38     44.31 2950501839
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=2

        300,002      300,000            1.0    7.1     7.00     54.49 3015592115
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=10

        300,002            1      300,002.0    7.1    13.68    103.22 3089756567
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=11;  end loop; en
d;

        300,002            1      300,002.0    7.1    13.66    103.39 3299045037
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=9;  end loop; end
;

        300,002      300,000            1.0    7.1     7.05     52.18 3478188992
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=1

        300,002            1      300,002.0    7.1    13.66    107.26 3507678102
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=10;  end loop; en
d;

        300,002            1      300,002.0    7.1    13.63    105.86 3555711258
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=4;  end loop; end
;

        300,002            1      300,002.0    7.1    13.62    103.58 3788634673
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=6;  end loop; end
;

          -------------------------------------------------------------
SQL ordered by Reads for DB: CDB  Instance: cdb  Snaps: 35 -37
-> End Disk Reads Threshold:    1000

                                                     CPU      Elapsd
 Physical Reads  Executions  Reads per Exec %Total Time (s)  Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
              1            2            0.5  100.0     0.87      0.95 3674571752
Module: sqlplus@limerock (TNS V1-V3)
     begin         :snap := statspack.snap;      end;

              0            1            0.0    0.0    13.52    104.77   29540053
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=13;  end loop; en
d;

              0           14            0.0    0.0     0.01      0.01   62978080
Module: sqlplus@limerock (TNS V1-V3)
SELECT NULL FROM DUAL FOR UPDATE NOWAIT

              0            1            0.0    0.0    13.65    105.88   94571329
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=8;  end loop; end
;

              0            1            0.0    0.0     0.00      0.00  130926350
select count(*) from sys.job$ where next_date < :1 and (field1 =
 :2 or (field1 = 0 and 'Y' = :3))

              0      300,000            0.0    0.0     7.27     54.06  322898470
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=7

              0           20            0.0    0.0     0.06      0.01  380234442
Module: Lab128
--lab128
 select namespace,gets,gethits,pins,pinhits,reloads,inv
alidations 
 from v$librarycache where gets>0

              0            1            0.0    0.0    13.64    106.98  404650074
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=1;  end loop; end
;

              0            4            0.0    0.0     0.08      0.07  418041023
Module: Lab128
--lab128
 select name,v value from ( 
  select /*+ no_merge */ 
  name,decode(value,'TRUE',1,'FALSE',0,to_number(value)) v 
  fr
om v$system_parameter where type in (1,3,6) 
   and rownum >0 
)

              0            1            0.0    0.0     0.00      0.00  615142939
INSERT INTO SMON_SCN_TIME (THREAD, TIME_MP, TIME_DP, SCN_WRP, SC
N_BAS)  VALUES (:1, :2, :3, :4, :5)

              0           13            0.0    0.0     0.01      0.02  680302622
Module: Lab128
--lab128
 select tablespace ts_name,session_addr,sqladdr,sqlhash
, 
 blocks,segfile#,segrfno#,segtype 
 from v$sort_usage

              0            1            0.0    0.0    14.15    106.41  779315540
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=3;  end loop; end
;

              0      300,000            0.0    0.0     7.15     48.09  895863666
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=6

              0      300,000            0.0    0.0     7.12     44.48  901988619
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=13

              0           50            0.0    0.0     0.00      0.00  998317450
Module: sqlplus@limerock (TNS V1-V3)
SELECT VALUE FROM STATS$SYSSTAT WHERE SNAP_ID = :B4 AND DBID = :
B3 AND INSTANCE_NUMBER = :B2 AND NAME = :B1

              0      300,000            0.0    0.0     7.40     49.95 1016842815
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=14

              0           37            0.0    0.0     0.02      0.74 1053795750
COMMIT

              0           37            0.0    0.0     0.00      0.01 1093001666
SELECT TO_NUMBER(TO_CHAR(SYSDATE,'D')) FROM DUAL

              0            2            0.0    0.0     0.24      0.25 1116368370
Module: sqlplus@limerock (TNS V1-V3)
INSERT INTO STATS$SQLTEXT ( HASH_VALUE , TEXT_SUBSET , PIECE , S
QL_TEXT , ADDRESS , COMMAND_TYPE , LAST_SNAP_ID ) SELECT ST1.HAS
H_VALUE , SS.TEXT_SUBSET , ST1.PIECE , ST1.SQL_TEXT , ST1.ADDRES
S , ST1.COMMAND_TYPE , SS.SNAP_ID FROM V$SQLTEXT ST1 , STATS$SQL
_SUMMARY SS WHERE SS.SNAP_ID = :B3 AND SS.DBID = :B2 AND SS.INST

              0           20            0.0    0.0     0.05      0.05 1144592741
Module: Lab128
--lab128
 select statistic#, value from v$sysstat where value!=0


              0           18            0.0    0.0     0.00      0.10 1160064496
Module: Lab128
--lab128
 select addr,pid,spid,pga_alloc_mem from v$process

              0            1            0.0    0.0    13.61    104.30 1195290885
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=5;  end loop; end
;

              0      300,000            0.0    0.0     7.26     49.07 1213725976
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=3

              0           37            0.0    0.0     0.02      0.09 1231279053
UPDATE ASH.DBIDS@REPO SET ASHSEQ = :B2 WHERE DBID = :B1

              0            4            0.0    0.0     0.00      0.00 1254950678
select file# from file$ where ts#=:1

              0            3            0.0    0.0     0.08      0.29 1272081939
Module: Lab128
--lab128
 select address,hash_value,piece,sql_text 
 from V$SQLT
EXT_WITH_NEWLINES where address=:1 and hash_value=:2

              0            1            0.0    0.0     0.00      0.00 1287368460
Module: Lab128
--lab128
 select file_id,tablespace_name ts_name, sum(bytes) byt
es 
 from dba_free_space 
 group by file_id, tablespace_name

              0           21            0.0    0.0     0.00      0.01 1316169839
select job, nvl2(last_date, 1, 0) from sys.job$ where (((:1 <= n
ext_date) and (next_date < :2))    or  ((last_date is null) and
(next_date < :3))) and (field1 = :4 or (field1 = 0 and 'Y' = :5)
) and (this_date is null) order by next_date, job

              0           18            0.0    0.0     0.08      0.03 1356713530
select privilege#,level from sysauth$ connect by grantee#=prior
privilege# and privilege#>0 start with (grantee#=:1 or grantee#=
1) and privilege#>0

          -------------------------------------------------------------
SQL ordered by Executions for DB: CDB  Instance: cdb  Snaps: 35 -37
-> End Executions Threshold:     100

                                                CPU per    Elap per
 Executions   Rows Processed   Rows per Exec    Exec (s)   Exec (s)  Hash Value
------------ --------------- ---------------- ----------- ---------- ----------
     300,000         300,000              1.0       0.00        0.00  322898470
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=7

     300,000         300,000              1.0       0.00        0.00  895863666
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=6

     300,000         300,000              1.0       0.00        0.00  901988619
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=13

     300,000         300,000              1.0       0.00        0.00 1016842815
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=14

     300,000         300,000              1.0       0.00        0.00 1213725976
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=3

     300,000         300,000              1.0       0.00        0.00 1483699112
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=9

     300,000         300,000              1.0       0.00        0.00 1541773081
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=4

     300,000         300,000              1.0       0.00        0.00 1762017642
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=5

     300,000         300,000              1.0       0.00        0.00 1992710801
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=8

     300,000         300,000              1.0       0.00        0.00 2950501839
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=2

     300,000         300,000              1.0       0.00        0.00 3015592115
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=10

     300,000         300,000              1.0       0.00        0.00 3478188992
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=1

     300,000         300,000              1.0       0.00        0.00 3868060563
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=11

     300,000         300,000              1.0       0.00        0.00 3995262551
Module: SQL*Plus
SELECT ROWID FROM EMP_HASH WHERE EMPNO=12

         483             483              1.0       0.00        0.00 3986604633
INSERT INTO ASH.V$ASH@REPO VALUES (:B1,:B2,:B3,:B4,:B5,:B6,:B7,:
B8,:B9,:B10,:B11,:B12,:B13,:B14,:B15,:B16,:B17,:B18,:B19,:B20,:B
21,:B22,:B23,:B24)

         102           1,436             14.1       0.00        0.00 3737298577
Module: Lab128
--lab128
 select --+first_rows
 w.sid,s.ownerid,s.user#,s.sql_ad
dress,s.sql_hash_value,
 w.seq#,w.event event#,
 w.p1,w.p2,w.p3,

 w.wait_time,w.seconds_in_wait,decode(w.state,'WAITING',0,1) st
ate,s.serial#,
 row_wait_obj#,row_wait_file#,row_wait_block#,row
_wait_row#,machine,program
 from  v$session s, v$session_wait w

          50              50              1.0       0.00        0.00  998317450
Module: sqlplus@limerock (TNS V1-V3)
SELECT VALUE FROM STATS$SYSSTAT WHERE SNAP_ID = :B4 AND DBID = :
B3 AND INSTANCE_NUMBER = :B2 AND NAME = :B1

          37               0              0.0       0.00        0.02 1053795750
COMMIT

          37              37              1.0       0.00        0.00 1093001666
SELECT TO_NUMBER(TO_CHAR(SYSDATE,'D')) FROM DUAL

          37              37              1.0       0.00        0.00 1231279053
UPDATE ASH.DBIDS@REPO SET ASHSEQ = :B2 WHERE DBID = :B1

          37              37              1.0       0.00        0.00 2344200387
SELECT ASHSEQ.NEXTVAL FROM DUAL

          37             483             13.1       0.00        0.01 3802278413
SELECT A.*, :B1 SAMPLE_TIME FROM V$ASHNOW A

          30           1,440             48.0       0.01        0.03 1642626990
Module: bltwish.exe
select indx /* v9 */,                                       ksle
swts,                                       trunc(kslestim/10000
)                                 from sys.oem$kslei
                      where ksleswts > 0

          30             423             14.1       0.02        0.08 1866839420
Module: bltwish.exe
select                  1,                  to_char(sysdate,'SSS
SS')+trunc(sysdate-to_date('JAN-01-1970 00:00:00','MON-DD-YYYY H
H24:MI:SS'))*86400 ,                  sysdate,
s.indx          ,                  decode(w.ksusstim,
                  0,decode(n.kslednam,

          30              30              1.0       0.00        0.00 3378495259
Module: bltwish.exe
select  to_char(sysdate,'SSSSS') +
          (to_char(sysdate,'J')- 2454309 )*86400
                  from dual

          30           4,230            141.0       0.00        0.00 4160458976
Module: bltwish.exe
select                                    KSUSGSTN ,  KSUSGSTV
                             from
      sys.oem$ksusgsta                              where
                              KSUSGSTV> 0


          22              22              1.0       0.00        0.00 1693927332
select count(*) from sys.job$ where (next_date > sysdate) and (n
ext_date < (sysdate+5/86400))

          21               0              0.0       0.00        0.00 1316169839
select job, nvl2(last_date, 1, 0) from sys.job$ where (((:1 <= n
ext_date) and (next_date < :2))    or  ((last_date is null) and
(next_date < :3))) and (field1 = :4 or (field1 = 0 and 'Y' = :5)
) and (this_date is null) order by next_date, job

          20             120              6.0       0.00        0.00  380234442
Module: Lab128
--lab128
 select namespace,gets,gethits,pins,pinhits,reloads,inv
alidations 
 from v$librarycache where gets>0

          20           2,822            141.1       0.00        0.00 1144592741
Module: Lab128
--lab128
 select statistic#, value from v$sysstat where value!=0


          20              20              1.0       0.00        0.00 1977490509
Module: Lab128
--lab128
 select cnum_set, buf_got, sum_write, sum_scan, free_bu
ffer_wait, 
 write_complete_wait, buffer_busy_wait, free_buffer_
inspected, 
 dirty_buffers_inspected, db_block_change, db_block_
gets, consistent_gets, 
 physical_reads, physical_writes, set_ms
ize 
 from v$buffer_pool_statistics

          19              19              1.0       0.00        0.00 1520741509
Module: Lab128
--lab128
 select * from (select nvl(sum(decode(status,'WAIT(COMM
ON)',1,0)),0) mts_idle, 
  nvl(sum(decode(status,'WAIT(COMMON)',
0,1)),0) mts_busy, 
  nvl(sum(idle),0) mts_idle_time, nvl(sum(bu

          -------------------------------------------------------------
SQL ordered by Parse Calls for DB: CDB  Instance: cdb  Snaps: 35 -37
-> End Parse Calls Threshold:      1000

                           % Total
 Parse Calls  Executions   Parses  Hash Value
------------ ------------ -------- ----------
          30           30     5.99 1642626990
Module: bltwish.exe
select indx /* v9 */,                                       ksle
swts,                                       trunc(kslestim/10000
)                                 from sys.oem$kslei
                      where ksleswts > 0

          30           30     5.99 1866839420
Module: bltwish.exe
select                  1,                  to_char(sysdate,'SSS
SS')+trunc(sysdate-to_date('JAN-01-1970 00:00:00','MON-DD-YYYY H
H24:MI:SS'))*86400 ,                  sysdate,
s.indx          ,                  decode(w.ksusstim,
                  0,decode(n.kslednam,

          30           30     5.99 3378495259
Module: bltwish.exe
select  to_char(sysdate,'SSSSS') +
          (to_char(sysdate,'J')- 2454309 )*86400
                  from dual

          30           30     5.99 4160458976
Module: bltwish.exe
select                                    KSUSGSTN ,  KSUSGSTV
                             from
      sys.oem$ksusgsta                              where
                              KSUSGSTV> 0


          18           18     3.59 1356713530
select privilege#,level from sysauth$ connect by grantee#=prior
privilege# and privilege#>0 start with (grantee#=:1 or grantee#=
1) and privilege#>0

          17           17     3.39 3469977555
Module: sqlplus@limerock (TNS V1-V3)
ALTER SESSION SET TIME_ZONE='-07:00'

          17           17     3.39 3997906522
select user# from sys.user$ where name = 'OUTLN'

          14           14     2.79   62978080
Module: sqlplus@limerock (TNS V1-V3)
SELECT NULL FROM DUAL FOR UPDATE NOWAIT

          14           14     2.79 1432236634
Module: sqlplus@limerock (TNS V1-V3)
BEGIN DBMS_APPLICATION_INFO.SET_MODULE(:1,NULL); END;

          14           14     2.79 2009857449
Module: sqlplus@limerock (TNS V1-V3)
SELECT CHAR_VALUE FROM SYSTEM.PRODUCT_PRIVS WHERE   (UPPER('SQL*
Plus') LIKE UPPER(PRODUCT)) AND   ((UPPER(USER) LIKE USERID) OR
(USERID = 'PUBLIC')) AND   (UPPER(ATTRIBUTE) = 'ROLES')

          14           14     2.79 2865022085
Module: sqlplus@limerock (TNS V1-V3)
BEGIN DBMS_OUTPUT.DISABLE; END;

          14           14     2.79 3096433403
Module: sqlplus@limerock (TNS V1-V3)
SELECT ATTRIBUTE,SCOPE,NUMERIC_VALUE,CHAR_VALUE,DATE_VALUE FROM
SYSTEM.PRODUCT_PRIVS WHERE (UPPER('SQL*Plus') LIKE UPPER(PRODUCT
)) AND (UPPER(USER) LIKE USERID)

          14           14     2.79 4119976668
Module: sqlplus@limerock (TNS V1-V3)
SELECT USER FROM DUAL

          14           14     2.79 4282642546
Module: SQL*Plus
SELECT DECODE('A','A','1','2') FROM DUAL

           4            4     0.80 1254950678
select file# from file$ where ts#=:1

           4            4     0.80 1480482175
Module: lab128_1584.exe
begin DBMS_APPLICATION_INFO.SET_MODULE('Lab128',NULL); end;

           4            4     0.80 2011103812
Module: Lab128
--lab128
 select object_id,data_object_id,owner,object_type, 
 o
bject_name||decode(subobject_name,null,null,' ('||subobject_name
||')') obj_name, created 
 from dba_objects where data_object_id
 is not null and created>=:1

           4            4     0.80 3033724852
Module: lab128_1584.exe
--lab128
 select sid,serial#,systimestamp from v$session where s
id in (select sid from v$mystat where rownum=1)

           4            4     0.80 3194447098
Module: lab128_1584.exe
alter session set optimizer_mode=choose

           4            4     0.80 3986506689
Module: lab128_1584.exe
ALTER SESSION SET NLS_LANGUAGE= 'AMERICAN' NLS_TERRITORY= 'AMERI
CA' NLS_CURRENCY= '$' NLS_ISO_CURRENCY= 'AMERICA' NLS_NUMERIC_CH
ARACTERS= '.,' NLS_CALENDAR= 'GREGORIAN' NLS_DATE_FORMAT= 'DD-MO
N-RR' NLS_DATE_LANGUAGE= 'AMERICAN' NLS_SORT= 'BINARY' TIME_ZONE
= '-07:00' NLS_COMP= 'BINARY' NLS_DUAL_CURRENCY= '$' NLS_TIME_FO

           3            3     0.60 3716207873
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,
order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:
1

           2           50     0.40  998317450
Module: sqlplus@limerock (TNS V1-V3)
SELECT VALUE FROM STATS$SYSSTAT WHERE SNAP_ID = :B4 AND DBID = :
B3 AND INSTANCE_NUMBER = :B2 AND NAME = :B1

           2            2     0.40 1116368370
Module: sqlplus@limerock (TNS V1-V3)
INSERT INTO STATS$SQLTEXT ( HASH_VALUE , TEXT_SUBSET , PIECE , S
QL_TEXT , ADDRESS , COMMAND_TYPE , LAST_SNAP_ID ) SELECT ST1.HAS
H_VALUE , SS.TEXT_SUBSET , ST1.PIECE , ST1.SQL_TEXT , ST1.ADDRES
S , ST1.COMMAND_TYPE , SS.SNAP_ID FROM V$SQLTEXT ST1 , STATS$SQL
_SUMMARY SS WHERE SS.SNAP_ID = :B3 AND SS.DBID = :B2 AND SS.INST

           2            2     0.40 3404108640
ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED

           2            2     0.40 3674571752
Module: sqlplus@limerock (TNS V1-V3)
     begin         :snap := statspack.snap;      end;

           2            2     0.40 3742653144
select sysdate from dual

           1            1     0.20   29540053
Module: SQL*Plus
declare  r rowid; begin  for i in 1..300000 loop             --u
pdate emp set sal=sal where empno=2;             --commit;
select rowid  into r from emp_hash where empno=13;  end loop; en
d;

          -------------------------------------------------------------
Instance Activity Stats for DB: CDB  Instance: cdb  Snaps: 35 -37

Statistic                                      Total     per Second    per Trans
--------------------------------- ------------------ -------------- ------------
CPU used by this session                      19,543          168.5        814.3
CPU used when call started                    19,542          168.5        814.3
CR blocks created                                  0            0.0          0.0
Cached Commit SCN referenced                       0            0.0          0.0
Commit SCN cached                                  0            0.0          0.0
DBWR buffers scanned                               0            0.0          0.0
DBWR checkpoint buffers written                  230            2.0          9.6
DBWR checkpoints                                   0            0.0          0.0
DBWR free buffers found                            0            0.0          0.0
DBWR lru scans                                     0            0.0          0.0
DBWR make free requests                            0            0.0          0.0
DBWR summed scan depth                             0            0.0          0.0
DBWR transaction table writes                      0            0.0          0.0
DBWR undo block writes                            65            0.6          2.7
SQL*Net roundtrips to/from client              2,264           19.5         94.3
SQL*Net roundtrips to/from dblink                594            5.1         24.8
active txn count during cleanout                  44            0.4          1.8
background checkpoints completed                   0            0.0          0.0
background checkpoints started                     0            0.0          0.0
background timeouts                              134            1.2          5.6
branch node splits                                 0            0.0          0.0
buffer is not pinned count                 4,216,672       36,350.6    175,694.7
buffer is pinned count                        68,992          594.8      2,874.7
bytes received via SQL*Net from c            381,702        3,290.5     15,904.3
bytes received via SQL*Net from d             93,520          806.2      3,896.7
bytes sent via SQL*Net to client             942,808        8,127.7     39,283.7
bytes sent via SQL*Net to dblink             318,849        2,748.7     13,285.4
calls to get snapshot scn: kcmgss          4,201,733       36,221.8    175,072.2
calls to kcmgas                                  101            0.9          4.2
calls to kcmgcs                                   26            0.2          1.1
change write time                                  1            0.0          0.0
cleanout - number of ktugct calls                 64            0.6          2.7
cleanouts and rollbacks - consist                  0            0.0          0.0
cleanouts only - consistent read                  13            0.1          0.5
cluster key scan block gets                4,200,464       36,210.9    175,019.3
cluster key scans                          4,200,309       36,209.6    175,012.9
commit cleanout failures: block l                  0            0.0          0.0
commit cleanout failures: buffer                   0            0.0          0.0
commit cleanout failures: callbac                  0            0.0          0.0
commit cleanouts                                 219            1.9          9.1
commit cleanouts successfully com                219            1.9          9.1
commit txn count during cleanout                  44            0.4          1.8
consistent changes                                 0            0.0          0.0
consistent gets                            4,234,500       36,504.3    176,437.5
consistent gets - examination                  3,195           27.5        133.1
current blocks converted for CR                    0            0.0          0.0
cursor authentications                             0            0.0          0.0
data blocks consistent reads - un                  0            0.0          0.0
db block changes                               1,604           13.8         66.8
db block gets                                  2,146           18.5         89.4
deferred (CURRENT) block cleanout                106            0.9          4.4
dirty buffers inspected                            0            0.0          0.0
enqueue conversions                            2,093           18.0         87.2
enqueue releases                               1,525           13.2         63.5
enqueue requests                               1,526           13.2         63.6
enqueue timeouts                                   1            0.0          0.0
enqueue waits                                      0            0.0          0.0
execute count                              4,201,681       36,221.4    175,070.0
free buffer inspected                              0            0.0          0.0
free buffer requested                            139            1.2          5.8
hot buffers moved to head of LRU                   0            0.0          0.0
immediate (CR) block cleanout app                 13            0.1          0.5
immediate (CURRENT) block cleanou                 47            0.4          2.0
index fast full scans (full)                       0            0.0          0.0
index fetch by key                             2,097           18.1         87.4
index scans kdiixs1                           16,762          144.5        698.4
leaf node 90-10 splits                             5            0.0          0.2
leaf node splits                                  24            0.2          1.0
logons cumulative                                 24            0.2          1.0
messages received                                 71            0.6          3.0
messages sent                                     71            0.6          3.0
no buffer to keep pinned count                     0            0.0          0.0
no work - consistent read gets             4,227,439       36,443.4    176,143.3
number of auto extends on undo ta                  0            0.0          0.0
opened cursors cumulative                        381            3.3         15.9
parse count (failures)                             0            0.0          0.0
parse count (hard)                                 4            0.0          0.2
parse count (total)                              501            4.3         20.9
parse time cpu                                     3            0.0          0.1
parse time elapsed                                10            0.1          0.4
physical reads                                     1            0.0          0.0
physical reads direct                              0            0.0          0.0
physical writes                                  230            2.0          9.6
physical writes direct                             0            0.0          0.0
physical writes non checkpoint                   104            0.9          4.3
pinned buffers inspected                           0            0.0          0.0
prefetched blocks                                  0            0.0          0.0
prefetched blocks aged out before                  0            0.0          0.0
process last non-idle time            21,339,926,032  183,964,879.6 ############
recovery array read time                           0            0.0          0.0
recovery array reads                               0            0.0          0.0
recovery blocks read                               0            0.0          0.0
recursive calls                            4,202,407       36,227.7    175,100.3
recursive cpu usage                           15,346          132.3        639.4
redo blocks written                            2,103           18.1         87.6
redo buffer allocation retries                     0            0.0          0.0
redo entries                                     869            7.5         36.2
redo log space requests                            0            0.0          0.0
redo log space wait time                           0            0.0          0.0
redo ordering marks                                0            0.0          0.0
redo size                                  1,006,884        8,680.0     41,953.5
redo synch time                                   72            0.6          3.0
redo synch writes                                 23            0.2          1.0
redo wastage                                  18,768          161.8        782.0
redo write time                                  428            3.7         17.8
redo writer latching time                          1            0.0          0.0
redo writes                                       69            0.6          2.9
rollback changes - undo records a                  0            0.0          0.0
rollbacks only - consistent read                   0            0.0          0.0
rows fetched via callback                      1,393           12.0         58.0
session connect time                  21,339,926,032  183,964,879.6 ############
session logical reads                      4,236,646       36,522.8    176,526.9
session pga memory                                 0            0.0          0.0
session pga memory max                        65,536          565.0      2,730.7
session uga memory                           314,496        2,711.2     13,104.0
session uga memory max                     8,751,552       75,444.4    364,648.0
shared hash latch upgrades - no w             16,699          144.0        695.8
sorts (disk)                                       0            0.0          0.0
sorts (memory)                                   660            5.7         27.5
sorts (rows)                                  55,711          480.3      2,321.3
summed dirty queue length                          0            0.0          0.0
switch current to new buffer                      14            0.1          0.6
table fetch by rowid                          33,238          286.5      1,384.9
table fetch continued row                          0            0.0          0.0
table scan blocks gotten                         238            2.1          9.9
table scan rows gotten                           751            6.5         31.3
table scans (long tables)                          0            0.0          0.0
table scans (short tables)                       244            2.1         10.2
transaction rollbacks                              0            0.0          0.0
transaction tables consistent rea                  0            0.0          0.0
transaction tables consistent rea                  0            0.0          0.0
user calls                                     2,453           21.2        102.2
user commits                                      22            0.2          0.9
user rollbacks                                     2            0.0          0.1
workarea executions - multipass                    0            0.0          0.0
workarea executions - onepass                      0            0.0          0.0
workarea executions - optimal                  1,088            9.4         45.3
write clones created in backgroun                  0            0.0          0.0
write clones created in foregroun                  0            0.0          0.0
          -------------------------------------------------------------
Tablespace IO Stats for DB: CDB  Instance: cdb  Snaps: 35 -37
->ordered by IOs (Reads + Writes) desc

Tablespace
------------------------------
                 Av      Av     Av                    Av        Buffer Av Buf
         Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
TS_STARGUS
             0       0    0.0                  165        1          0    0.0
UNDOTBS1
             0       0    0.0                   65        1          0    0.0
PERFSTAT
             1       0   10.0     1.0            0        0          0    0.0
          -------------------------------------------------------------
File IO Stats for DB: CDB  Instance: cdb  Snaps: 35 -37
->ordered by Tablespace, File

Tablespace               Filename
------------------------ ----------------------------------------------------
                 Av      Av     Av                    Av        Buffer Av Buf
         Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
PERFSTAT                 /export/home/oracle/oradata/cdb/perfstat01.dbf
             1       0   10.0     1.0            0        0          0

TS_STARGUS               /export/home/oracle/oradata/cdb/ts_stargus_01.dbf
             0       0                         165        1          0

UNDOTBS1                 /export/home/oracle/oradata/cdb/undotbs01.dbf
             0       0                          65        1          0

          -------------------------------------------------------------
Buffer Pool Statistics for DB: CDB  Instance: cdb  Snaps: 35 -37
-> Standard block size Pools  D: default,  K: keep,  R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k

                                                           Free    Write  Buffer
     Number of Cache      Buffer    Physical   Physical  Buffer Complete    Busy
P      Buffers Hit %        Gets       Reads     Writes   Waits    Waits   Waits
--- ---------- ----- ----------- ----------- ---------- ------- --------  ------
D       37,715 100.0   4,113,113           1        230       0        0       0
          -------------------------------------------------------------

Instance Recovery Stats for DB: CDB  Instance: cdb  Snaps: 35 -37
-> B: Begin snapshot,  E: End snapshot

  Targt Estd                                    Log File   Log Ckpt   Log Ckpt
  MTTR  MTTR   Recovery    Actual     Target      Size     Timeout    Interval
   (s)   (s)   Estd IOs  Redo Blks  Redo Blks  Redo Blks  Redo Blks  Redo Blks
- ----- ----- ---------- ---------- ---------- ---------- ---------- ----------
B     0     0                184637     184320     184320     487328
E     0     0                184453     184320     184320     489431
          -------------------------------------------------------------

Buffer Pool Advisory for DB: CDB  Instance: cdb  End Snap: 37
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate

        Size for  Size      Buffers for  Est Physical          Estimated
P   Estimate (M) Factr         Estimate   Read Factor     Physical Reads
--- ------------ ----- ---------------- ------------- ------------------
D             32    .1            3,970          3.55          6,776,798
D             64    .2            7,940          2.64          5,042,221
D             96    .3           11,910          2.04          3,902,538
D            128    .4           15,880          1.55          2,968,018
D            160    .5           19,850          1.36          2,599,759
D            192    .6           23,820          1.26          2,404,270
D            224    .7           27,790          1.17          2,233,313
D            256    .8           31,760          1.07          2,050,577
D            288    .9           35,730          1.02          1,942,525
D            304   1.0           37,715          1.00          1,911,084
D            320   1.1           39,700          0.96          1,843,014
D            352   1.2           43,670          0.93          1,779,633
D            384   1.3           47,640          0.91          1,735,053
D            416   1.4           51,610          0.87          1,671,944
D            448   1.5           55,580          0.85          1,620,410
D            480   1.6           59,550          0.80          1,519,992
D            512   1.7           63,520          0.79          1,503,275
D            544   1.8           67,490          0.78          1,487,894
D            576   1.9           71,460          0.77          1,477,315
D            608   2.0           75,430          0.76          1,448,569
D            640   2.1           79,400          0.73          1,400,297
          -------------------------------------------------------------
PGA Aggr Target Stats for DB: CDB  Instance: cdb  Snaps: 35 -37
-> B: Begin snap   E: End snap (rows identified with B or E contain data
   which is absolute i.e. not diffed over the interval)
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used    - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem    - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem   - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem    - percentage of workarea memory under manual control

PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
--------------- ---------------- -------------------------
          100.0               31                         0

Warning:  pga_aggregate_target was set too low for current workload, as this
          value was exceeded during this interval.  Use the PGA Advisory view
          to help identify a different value for pga_aggregate_target.

                                             %PGA  %Auto   %Man
  PGA Aggr  Auto PGA   PGA Mem    W/A PGA    W/A    W/A    W/A   Global Mem
  Target(M) Target(M)  Alloc(M)   Used(M)    Mem    Mem    Mem    Bound(K)
- --------- --------- ---------- ---------- ------ ------ ------ ----------
B        24         4       43.2        0.0     .0     .0     .0      1,228
E        24         4       43.3        0.0     .0     .0     .0      1,228
          -------------------------------------------------------------

PGA Aggr Target Histogram for DB: CDB  Instance: cdb  Snaps: 35 -37
-> Optimal Executions are purely in-memory operations

    Low    High
Optimal Optimal    Total Execs Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- ------------- ------------ ------------
     8K     16K            927           927            0            0
    16K     32K            114           114            0            0
    32K     64K             22            22            0            0
    64K    128K              2             2            0            0
   512K   1024K             23            23            0            0
          -------------------------------------------------------------

PGA Memory Advisory for DB: CDB  Instance: cdb  End Snap: 37
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
   where Estd PGA Overalloc Count is 0

                                       Estd Extra    Estd PGA   Estd PGA
PGA Target    Size           W/A MB   W/A MB Read/      Cache  Overalloc
  Est (MB)   Factr        Processed Written to Disk     Hit %      Count
---------- ------- ---------------- ---------------- -------- ----------
        12     0.5         38,168.6          8,976.9     81.0        420
        18     0.8         38,168.6          8,976.9     81.0        267
        24     1.0         38,168.6          4,822.2     89.0        174
        29     1.2         38,168.6          4,433.7     90.0          2
        34     1.4         38,168.6          4,406.1     90.0          0
        38     1.6         38,168.6          4,402.7     90.0          0
        43     1.8         38,168.6          3,956.6     91.0          0
        48     2.0         38,168.6          3,794.2     91.0          0
        72     3.0         38,168.6          2,933.2     93.0          0
        96     4.0         38,168.6          2,820.9     93.0          0
       144     6.0         38,168.6          1,378.8     97.0          0
       192     8.0         38,168.6          1,061.1     97.0          0
          -------------------------------------------------------------
Rollback Segment Stats for DB: CDB  Instance: cdb  Snaps: 35 -37
->A high value for "Pct Waits" suggests more rollback segments may be required
->RBS stats may not be accurate between begin and end snaps when using Auto Undo
  management, as RBS may be dynamically created and dropped as needed

        Trans Table       Pct   Undo Bytes
RBS No      Gets        Waits     Written        Wraps  Shrinks  Extends
------ -------------- ------- --------------- -------- -------- --------
     0           17.0    0.00               0        0        0        0
     1           79.0    0.00           2,278        0        0        0
     2           95.0    0.00             380        0        0        0
     3          109.0    0.00         167,692        0        0        0
     4           97.0    0.00             656        0        0        0
     5           67.0    0.00             434        0        0        0
     6           64.0    0.00             788        0        0        0
     7          117.0    0.00             672        0        0        0
     8          102.0    0.00             434        0        0        0
     9          134.0    0.00         193,104        0        0        0
    10          119.0    0.00             544        0        0        0
          -------------------------------------------------------------
Rollback Segment Storage for DB: CDB  Instance: cdb  Snaps: 35 -37
->Optimal Size should be larger than Avg Active

RBS No    Segment Size      Avg Active    Optimal Size    Maximum Size
------ --------------- --------------- --------------- ---------------
     0         385,024               0                         385,024
     1     109,174,784     187,464,208                     209,838,080
     2     257,024,000     376,982,905                     257,024,000
     3      25,288,704     318,918,134                     243,458,048
     4      22,142,976      62,433,998                     109,240,320
     5      11,657,216     105,183,436                     157,474,816
     6      17,948,672     319,545,521                     260,169,728
     7      15,851,520     320,974,179                     205,250,560
     8     159,506,432     317,225,223                     249,683,968
     9      21,094,400      12,377,948                     484,433,920
    10      17,948,672     288,462,988                     243,392,512
          -------------------------------------------------------------
Latch Activity for DB: CDB  Instance: cdb  Snaps: 35 -37
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
  willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0

                                           Pct    Avg   Wait                 Pct
                              Get          Get   Slps   Time       NoWait NoWait
Latch                       Requests      Miss  /Miss    (s)     Requests   Miss
------------------------ -------------- ------ ------ ------ ------------ ------
Consistent RBA                       69    0.0             0            0
FIB s.o chain latch                   6    0.0             0            0
FOB s.o list latch                   26    0.0             0            0
SQL memory manager latch              2    0.0             0           36    0.0
SQL memory manager worka          2,706    0.0             0            0
active checkpoint queue              40    0.0             0            0
archive control                      41    0.0             0            0
cache buffer handles                  8    0.0             0            0
cache buffers chains          8,473,919    0.8    0.0      0          131    0.0
cache buffers lru chain             375    0.0             0          177    0.0
channel handle pool latc             42    0.0             0            0
channel operations paren            136    0.0             0            0
checkpoint queue latch            2,530    0.0    0.0      0          140    0.0
child cursor hash table              84    0.0             0            0
dml lock allocation                 223    0.0             0            0
dummy allocation                     48    0.0             0            0
enqueue hash chains               5,148    0.0             0            0
enqueues                          2,794    0.0             0            0
event group latch                    21    0.0             0            0
global tx hash mapping            1,299    0.0             0            0
job_queue_processes para              2    0.0             0            0
ktm global data                       1    0.0             0            0
lgwr LWN SCN                         70    0.0             0            0
library cache                    26,286    0.0             0            0
library cache pin                 2,534    0.0             0            0
library cache pin alloca          2,356    0.0             0            0
list of block allocation             50    0.0             0            0
messages                            641    0.0             0            0
mostly latch-free SCN                70    0.0             0            0
ncodef allocation latch              27    0.0             0            0
post/wait queue                      37    0.0             0           23    0.0
process allocation                   21    0.0             0           21    0.0
process group creation               42    0.0             0            0
redo allocation                   1,026    0.1    0.0      0            0
redo copy                             0                    0          888    0.2
redo writing                        321    0.0             0            0
row cache enqueue latch           3,044    0.0             0            0
row cache objects                 3,228    0.0             0            0
sequence cache                      177    0.0             0            0
session allocation                  441    0.0             0            0
session idle bit                  5,014    0.0    0.0      0            0
session switching                    27    0.0             0            0
session timer                        62    0.0             0            0
shared pool                       2,437    0.0             0            0
simulator hash latch                919    0.0             0            0
simulator lru latch                   7    0.0             0            7    0.0
sort extent pool                     41    0.0             0            0
transaction allocation           27,170    0.0             0            0
transaction branch alloc            101    0.0             0            0
undo global data                  1,744    0.0             0            0
Latch Activity for DB: CDB  Instance: cdb  Snaps: 35 -37
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
  willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0

                                           Pct    Avg   Wait                 Pct
                              Get          Get   Slps   Time       NoWait NoWait
Latch                       Requests      Miss  /Miss    (s)     Requests   Miss
------------------------ -------------- ------ ------ ------ ------------ ------
user lock                            72    0.0             0            0
          -------------------------------------------------------------
Latch Sleep breakdown for DB: CDB  Instance: cdb  Snaps: 35 -37
-> ordered by misses desc

                                      Get                            Spin &
Latch Name                       Requests      Misses      Sleeps Sleeps 1->4
-------------------------- -------------- ----------- ----------- ------------
cache buffers chains            8,473,919      68,091       1,649 0/0/0/0/0
          -------------------------------------------------------------
Latch Miss Sources for DB: CDB  Instance: cdb  Snaps: 35 -37
-> only latches with sleeps are shown
-> ordered by name, sleeps desc

                                                     NoWait              Waiter
Latch Name               Where                       Misses     Sleeps   Sleeps
------------------------ -------------------------- ------- ---------- --------
cache buffers chains     kcbgtcr: kslbegin excl           0      1,171    1,409
cache buffers chains     kcbrls: kslbegin                 0        478      240
          -------------------------------------------------------------
Dictionary Cache Stats for DB: CDB  Instance: cdb  Snaps: 35 -37
->"Pct Misses"  should be very low (< 2% in most cases)
->"Cache Usage" is the number of cache entries being used
->"Pct SGA"     is the ratio of usage to allocated size for that cache

                                   Get    Pct    Scan   Pct      Mod      Final
Cache                         Requests   Miss    Reqs  Miss     Reqs      Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_database_links                1,040    0.0       0              0          1
dc_objects                          19    0.0       0              0        695
dc_profiles                         18    0.0       0              0          1
dc_rollback_segments                11    0.0       0              0         12
dc_sequences                         3    0.0       0              3          5
dc_tablespaces                      35    0.0       0              0          5
dc_user_grants                     140    0.0       0              0         16
dc_usernames                        37    0.0       0              0          6
dc_users                           391    0.0       0              0         19
          -------------------------------------------------------------


Library Cache Activity for DB: CDB  Instance: cdb  Snaps: 35 -37
->"Pct Misses"  should be very low

                         Get  Pct        Pin        Pct               Invali-
Namespace           Requests  Miss     Requests     Miss     Reloads  dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY                      47    0.0             47    0.0          0        0
SQL AREA                 403    1.0          1,032    0.8          0        0
TABLE/PROCEDURE           57    0.0            145    0.0          0        0
          -------------------------------------------------------------
Shared Pool Advisory for DB: CDB  Instance: cdb  End Snap: 37
-> Note there is often a 1:Many correlation between a single logical object
   in the Library Cache, and the physical number of memory objects associated
   with it.  Therefore comparing the number of Lib Cache objects (e.g. in
   v$librarycache), with the number of Lib Cache Memory Objects is invalid

                                                          Estd
Shared Pool    SP       Estd         Estd     Estd Lib LC Time
   Size for  Size  Lib Cache    Lib Cache   Cache Time   Saved  Estd Lib Cache
  Estim (M) Factr   Size (M)      Mem Obj    Saved (s)   Factr    Mem Obj Hits
----------- ----- ---------- ------------ ------------ ------- ---------------
         16    .5         20        2,700          258     1.0         142,856
         32   1.0         35        5,145          258     1.0         142,880
         48   1.5         48        8,497          259     1.0         143,134
         64   2.0         48        8,497          259     1.0         143,134
          -------------------------------------------------------------
SGA Memory Summary for DB: CDB  Instance: cdb  Snaps: 35 -37

SGA regions                       Size in Bytes
------------------------------ ----------------
Database Buffers                    318,767,104
Fixed Size                              731,712
Redo Buffers                            811,008
Variable Size                        67,108,864
                               ----------------
sum                                 387,418,688
          -------------------------------------------------------------


SGA breakdown difference for DB: CDB  Instance: cdb  Snaps: 35 -37

Pool   Name                                Begin value        End value  % Diff
------ ------------------------------ ---------------- ---------------- -------
shared 1M buffer                             2,098,176        2,098,176    0.00
shared Checkpoint queue                        513,280          513,280    0.00
shared FileIdentificatonBlock                  349,824          349,824    0.00
shared FileOpenBlock                           818,960          818,960    0.00
shared KGK heap                                  7,000            7,000    0.00
shared KGLS heap                             2,343,848        2,343,848    0.00
shared KQR L PO                              1,068,048        1,068,048    0.00
shared KQR M PO                              1,053,744        1,053,744    0.00
shared KQR S SO                                  4,120            4,120    0.00
shared KQR X PO                                  2,576            2,576    0.00
shared KSXR large reply queue                  167,624          167,624    0.00
shared KSXR pending messages que               853,952          853,952    0.00
shared KSXR receive buffers                  1,034,000        1,034,000    0.00
shared PL/SQL DIANA                            803,064          803,064    0.00
shared PL/SQL MPCODE                           607,400          624,976    2.89
shared PLS non-lib hp                            2,088            2,088    0.00
shared SYSTEM PARAMETERS                       169,016          169,016    0.00
shared character set object                    279,728          279,728    0.00
shared dictionary cache                      3,229,952        3,229,952    0.00
shared enqueue                                 218,952          218,952    0.00
shared errors                                   13,088           13,088    0.00
shared event statistics per sess             1,294,440        1,294,440    0.00
shared fixed allocation callback                   472              472    0.00
shared free memory                           3,182,136        3,122,272   -1.88
shared joxs heap init                            4,240            4,240    0.00
shared krvxrr                                  253,056          253,056    0.00
shared ksm_file2sga region                     370,496          370,496    0.00
shared library cache                        11,717,552       11,742,120    0.21
shared message pool freequeue                  771,984          771,984    0.00
shared miscellaneous                        12,213,312       12,213,312    0.00
shared parameters                               48,368           50,664    4.75
shared sessions                                310,960          310,960    0.00
shared sim memory hea                          328,304          328,304    0.00
shared sql area                             20,934,016       20,949,440    0.07
shared table definiti                           12,648           12,648    0.00
shared temporary tabl                           25,840           25,840    0.00
shared trigger defini                            2,128            2,128    0.00
shared trigger inform                              472              472    0.00
       buffer_cache                        318,767,104      318,767,104    0.00
       fixed_sga                               731,712          731,712    0.00
       log_buffer                              787,456          787,456    0.00
          -------------------------------------------------------------
init.ora Parameters for DB: CDB  Instance: cdb  Snaps: 35 -37

                                                                  End value
Parameter Name                Begin value                       (if different)
----------------------------- --------------------------------- --------------
background_dump_dest          /export/home/oracle/admin/cdb/bdu
compatible                    9.2.0.0.0
control_files                 /export/home/oracle/oradata/cdb/c
core_dump_dest                /export/home/oracle/admin/cdb/cdu
cursor_space_for_time         TRUE
db_block_size                 8192
db_cache_size                 318767104
db_domain
db_file_multiblock_read_count 16
db_name                       cdb
fast_start_mttr_target        0
instance_name                 cdb
java_pool_size                0
job_queue_processes           3
nls_date_format               YYYY-MM-DD HH24:MI:SS
open_cursors                  500
optimizer_dynamic_sampling    2
optimizer_index_cost_adj      40
optimizer_mode                ALL_ROWS
pga_aggregate_target          25165824
processes                     100
remote_login_passwordfile     NONE
shared_pool_size              33554432
timed_statistics              TRUE
undo_management               AUTO
undo_retention                900
undo_tablespace               UNDOTBS1
user_dump_dest                /export/home/oracle/admin/cdb/udu
          -------------------------------------------------------------

End of Report

}}}
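
The pga_aggregate_target warning in the report above can be cross-checked live against the advisory view the PGA Memory Advisory section is built from. A minimal sketch (V$PGA_TARGET_ADVICE is a standard dynamic performance view; run as a DBA user):

{{{
-- find the smallest pga_aggregate_target with no estimated over-allocation
select pga_target_for_estimate/1024/1024 as target_mb,
       pga_target_factor,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
from   v$pga_target_advice
order by pga_target_for_estimate;
}}}

In this report the Estd PGA Overalloc Count first drops to 0 at 34 MB (factor 1.4), so raising pga_aggregate_target to around that value should clear the over-allocation warning.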
http://collectl.sourceforge.net/NetworkStats.html
Harvard goes PaaS with SELinux Sandbox http://opensource.com/education/12/8/harvard-goes-paas-selinux-sandbox?sc_cid=70160000000TmB8AAK
Introducing the SELinux Sandbox http://danwalsh.livejournal.com/28545.html
Cool things with SELinux... Introducing sandbox -X http://danwalsh.livejournal.com/31146.html
{{{
-- sandy info
http://www.w7forums.com/sandy-bridge-review-intel-core-i7-2600k-i5-2500k-and-core-i3-2100-tested-t9378.html
http://www.geek.com/articles/chips/new-intel-atom-and-core-i7-processors-on-the-way-2011053/

--defective/bug sandy bridge
http://www.notebookcheck.net/Intel-s-defective-Sandy-Bridge-Chipsets-Status-Report.45596.0.html
http://www.anandtech.com/show/4142/intel-discovers-bug-in-6series-chipset-begins-recall

--cougar sata bug
http://www.anandtech.com/show/4143/the-source-of-intels-cougar-point-sata-bug

--z68 smart response technology, putting SSD ala cache
http://hothardware.com/Reviews/Intel-Z68-Express-Chipset-With-Smart-Response-Technology/
http://www.anandtech.com/show/4329/intel-z68-chipset-smart-response-technology-ssd-caching-review/4

--sandy bridge max 32GB not yet here.. needs non-ECC
http://www.amazon.com/review/R1VPPYQ2C823XM/ref=cm_cr_dp_cmt?ie=UTF8&ASIN=B00288BHIG&nodeID=172282&tag=&linkCode=#wasThisHelpful
}}}
/***
|Name:|SaveCloseTiddlerPlugin|
|Description:|Provides two extra toolbar commands, saveCloseTiddler and cancelCloseTiddler|
|Version:|3.0 ($Rev: 5502 $)|
|Date:|$Date: 2008-06-10 23:31:39 +1000 (Tue, 10 Jun 2008) $|
|Source:|http://mptw.tiddlyspot.com/#SaveCloseTiddlerPlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
To use these you must add them to the tool bar in your EditTemplate
***/
//{{{
merge(config.commands,{

	saveCloseTiddler: {
		text: 'done/close',
		tooltip: 'Save changes to this tiddler and close it',
		handler: function(ev,src,title) {
			var closeTitle = title;
			var newTitle = story.saveTiddler(title,ev.shiftKey);
			if (newTitle)
				closeTitle = newTitle;
			return config.commands.closeTiddler.handler(ev,src,closeTitle);
		}
	},

	cancelCloseTiddler: {
		text: 'cancel/close',
		tooltip: 'Undo changes to this tiddler and close it',
		handler: function(ev,src,title) {
			// the same as closeTiddler now actually
			return config.commands.closeTiddler.handler(ev,src,title);
		}
	}

});

//}}}
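
To wire these commands in, add their names to the toolbar macro in EditTemplate. A typical MPTW-style line (your template's surrounding markup may differ):

{{{
<div class='toolbar' macro='toolbar +saveCloseTiddler -cancelCloseTiddler deleteTiddler'></div>
}}}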
https://www.safaribooksonline.com/library/view/learning-functional-data/9781785888731/
http://thestrugglingblogger.com/2011/01/scavenger-hunt-early-in-2011/
http://www.howcast.com/videos/27585-How-To-Create-a-Modern-Scavenger-Hunt
http://www.ehow.com/how_7935_devise-scavenger-hunt.html
http://www.diva-girl-parties-and-stuff.com/scavenger-hunts.html
http://patrickpowers.net/2011/01/social-media-scavenger-hunt-pulls-it-all-together/
http://lifehacker.com/5787809/april-fools-day-qr-code-scavenger-hunt
http://www.manilabookfair.com/BOA/special_events/mechanics%20html/Scavengers-Hunt-Mechanics.html
http://www.scavengerhuntanywhere.com/
http://www.scavengerhuntclues.org/
http://knol.google.com/k/scavenger-hunts-how-to-write-fun-and-challenging-clues#     <-- GOOD STUFF
http://www.geekmom.com/2011/09/state-fair-geek-scavenger-hunt/
http://www.consolationchamps.com/content/geocaching.html
http://blog.makezine.com/archive/2011/09/qr-code-scavenger-hunt-at-ri-mini-maker-faire-this-saturday.html
http://blog.thomnichols.org/2011/08/quirk-a-cross-platform-qr-scavenger-hunt-game
http://2d-code.co.uk/next-level-qr-code-scavenger-hunt/
http://www.teampedia.net/wiki/index.php?title=QR_Code_Scavenger_Hunt
http://mashable.com/2010/12/14/qr-code-scavenger-hunt/
http://www.youtube.com/watch?v=m08rU5ipX9o
http://qrwild.com/
http://www.qrcodepress.com/qr-code-scavenger-hunts-the-next-big-thing/85927/




-- geeks
http://culturewav.es/public_thought/66707
http://googleio.appspot.com/qrhunt
http://googleio.appspot.com/qr/
http://technologyconference.org/scavenger_hunt.cfm
http://virtualgeek.typepad.com/virtual_geek/2011/08/vmworld-2011-top-10-number-5.html
http://vtexan.com/2011/08/the-great-scavenger-vhunt-at-vmworld/
http://blog.cowger.us/2011/08/19/vmworld-vhunt/
http://www.facebook.com/defcon?v=box_3
http://www.facebook.com/defconscavhunt?sk=photos
http://www.barcodelib.com/java_barcode/barcode_symbologies/qrcode.html   <-- LIBRARY


http://www.youtube.com/watch?v=OB-tmGmZ_Fk <-- google forms
http://news.cnet.com/8301-17939_109-10166251-2.html <-- google docs validation
https://docs.google.com/support/bin/topic.py?topic=1360868 <-- google docs documentation
http://www.youtube.com/watch?v=SCf5qRajTtI&feature=results_video&playnext=1&list=PLC929B061B0F74983 <-- simple example

Database Scripts Library Index (HighAvailability.RMAN - Recovery Manager)
  	Doc ID: 	131704.1

Script: To generate a Database Link create script
  	Doc ID: 	Note:1020175.6


-- DEPENDENCIES

Script To List Recursive Dependency Between Objects
  	Doc ID: 	139594.1

HowTo: Show recursive dependencies and reverse: which objects are dependent of ...
  	Doc ID: 	756350.1

<<showtoc>>

! Performance tests 
* Oracle  FMW  SOA  11g  R1:  Using Secure Files https://www.oracle.com/technetwork/database/availability/oraclefmw-soa-11gr1-securefiles-1842740.pdf
* another good one from my notes https://www.oracle.com/technical-resources/articles/database/sql-11g-securefiles.html
* LOB vs securefiles (in german) http://www.database-consult.de/docs/LOBversusSF1.pdf

! for migration 
They have to use export/import or DBMS_REDEFINITION.
They need space for the new compressed table before they can drop the old one.
If they have space in RECO they can use that for the new table, then ALTER TABLE MOVE it later on after dropping the old table.
All scenarios can be tested.
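
The DBMS_REDEFINITION route can be sketched roughly as follows (owner, table and interim-table names are made up for illustration; the interim table must be pre-created with the desired SecureFiles/compression storage):

{{{
declare
  n pls_integer;
begin
  dbms_redefinition.can_redef_table('SCOTT','T_DOCS');
  -- T_DOCS_INT pre-created with e.g.: LOB (doc) STORE AS SECUREFILE (COMPRESS MEDIUM)
  dbms_redefinition.start_redef_table('SCOTT','T_DOCS','T_DOCS_INT');
  dbms_redefinition.copy_table_dependents('SCOTT','T_DOCS','T_DOCS_INT',
      num_errors => n);
  dbms_redefinition.finish_redef_table('SCOTT','T_DOCS','T_DOCS_INT');
end;
/
-- afterwards: drop table scott.t_docs_int purge;   -- reclaims the old segment
}}}

This is the online alternative to export/import; the space point still applies because both copies exist until the interim table is dropped.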

! General Troubleshooting 
{{{
Here are some of the questions that need to be answered when troubleshooting securefiles/LOB (some of them we already know):
> Is it TX/4 or TX/6
> Are the (gc) buffer busy waits on the LOB segment or the LOB index ?
> Are they on the space management blocks or on the "real content" blocks of the object ?
> Is the LOB defined with the default chunksize or 32K chunksize - is the sizing appropriate ?
> Is each LOB in its own tablespace ? Are the tablespace single-file tablespaces or multi-file tablespaces. 
> Do the tablespaces use system extent management, or fixed size ?
> Are there any features involved that would slow down the insert/update/delete process (compression, deduplication)
> Are the LOBs largely subject to inserts with reads, or are there lots of updates and deletes ?
> Are the LOBS nocache, cache, or cache read ?
}}}
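
Several of the questions above (securefile vs basicfile, chunk size, caching, compression/dedup, tablespace extent management) can be answered straight from the dictionary. A sketch, assuming 11g+ where DBA_LOBS has the SECUREFILE column; &OWNER is a placeholder:

{{{
select l.owner, l.table_name, l.column_name,
       l.securefile, l.chunk, l.cache, l.compression, l.deduplication,
       l.tablespace_name, t.extent_management, t.allocation_type
from   dba_lobs l
join   dba_tablespaces t on t.tablespace_name = l.tablespace_name
where  l.owner = '&OWNER';
}}}

The TX/4 vs TX/6 and (gc) buffer busy questions still need ASH/wait-event data rather than the dictionary.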

All About Security: User, Privilege, Role, SYSDBA, O/S Authentication, Audit, Encryption, OLS, Data Vault
  	Doc ID: 	Note:207959.1


-- PRIVILEGES

Script to Create View to Show All User Privs
  	Doc ID: 	Note:1020286.6

Script to Show System and Object Privs for a User
  	Doc ID: 	Note:1019508.6


  	
-- DEFAULT PASSWORDS

160861.1
  	



-- FGAC

Note 67977.1 Oracle8i FGAC - Working Examples



-- APPLICATION CONTEXT 

How to Determine Active Context (DBMS_SESSION.LIST_CONTEXT)
  	Doc ID: 	Note:69573.1




-- OVERVIEW OF ORACLE SECURITY SERVER
  	Doc ID: 	Note:1031071.6



-- ERROR MESSAGES

Fine Grained Access Control Feature Is Not Available In the Oracle Server Standard Edition
  	Doc ID: 	Note:219911.1




-- PASSWORD

ORACLE_SID, TNS Alias,Password File and others Case Sensitiveness
  	Doc ID: 	225097.1

Script to prevent a user from changing his password
  	Doc ID: 	Note:135878.1 	

Oracle Created Database Users: Password, Usage and Files References
  	Doc ID: 	160861.1

Oracle Password Management Policy
  	Doc ID: 	114930.1


-- 11g PASSWORD

11g R1 New Feature : Case Sensitive Passwords and Strong User Authentication
  	Doc ID: 	429465.1

ORA-01017 when changing expired password using OCIPASSWORDCHANGE against 11.1
  	Doc ID: 	788538.1


-- PASSWORD FILE 

How to Avoid Common Flaws and Errors Using Passwordfile
  	Doc ID: 	185703.1


-- PROFILE

11G DEFAULT Profile Changes
  	Doc ID: 	454635.1



-- HARDENING

Security Check List: Steps to Make Your Database Secure from Attacks
  	Doc ID: 	131752.1




-- AUDIT

Moving AUD$ to Another Tablespace and Adding Triggers to AUD$ 
  Doc ID:  Note:72460.1 




-- AUDIT VAULT 


-- SYS

Audit Sys Logins
 	Doc ID:	Note:462564.1
 	
-- AUD$ TABLE

Moving AUD$ to Another Tablespace and Adding Triggers to AUD$
 	Doc ID:	Note:72460.1
 	
Note 1019377.6 - Script to move SYS.AUD$ table out of SYSTEM tablespace
Note 166301.1 - How to Reorganize SYS.AUD$ Table
Note 731908.1 - New Feature DBMS_AUDIT_MGMT To Manage And Purge Audit Information
Note 73408.1 - How to Truncate, Delete, or Purge Rows from the Audit Trail Table SYS.AUD$

Problem: Linux64: Installing 32bit 10g Grid Control Fails Due to Incompatibility with the 64bit OS
  	Doc ID: 	421749.1

Enterprise Manager Support Matrix for zLinux
  	Doc ID: 	725980.1

Linux Crashes when Enterprise Manager Agent Starts on RHEL 4 Update 6 and 7
  	Doc ID: 	729543.1

Can OSAUD Collect SQL Text or Bind Variables? - NO
  	Doc ID: 	729280.1

How To Set the AUDIT_SYSLOG_LEVEL Parameter?
  	Doc ID: 	553225.1

Audit Sys Logins
  	Doc ID: 	462564.1

HOW TO CAPTURE ALL THE DDL STATEMENTS
  	Doc ID: 	739604.1

New Feature DBMS_AUDIT_MGMT To Manage And Purge Audit Information
  	Doc ID: 	731908.1




-- DATABASE VAULT

http://www.oracle.com/technology/deploy/security/database-security/database-vault/index.html

Industry expert Rich Mogull explains the importance of Separation of Duties for Database Administration, a Ziff Davis Enterprise Security Webcast Sponsored By Oracle
http://www.oracle.com/pls/ebn/live_viewer.main?p_direct=yes&p_shows_id=6469943

"Keep Them Separated", a Ziff Davis whitepaper describing best practices for internal controls and separation of duties to ensure compliant database management 
http://www.oracle.com/dm/09q1field/keep_them_separated_zd_whitepaper_6-18-08.pdf

Forrester's Noel Yuhanna on Security and Compliance with Oracle Database Vault 
http://www.oracle.com/pls/ebn/live_viewer.main?p_direct=yes&p_shows_id=5337015

IDC Report: Preventing Enterprise Data Leaks at the Source 
http://www.oracle.com/corporate/analyst/reports/infrastructure/sec/209752.pdf

Oracle Database Vault Transparent Privileged User Access Control iSeminar
http://www.oracle.com/pls/ebn/live_viewer.main?p_direct=yes&p_shows_id=5617423

Oracle Database Vault Demo
http://www.oracle.com/pls/ebn/swf_viewer.load?p_shows_id=5641797&p_referred=0&p_width=800&p_height=600

Protecting Applications with Oracle Database Vault Whitepaper
http://www.oracle.com/technology/deploy/security/database-security/pdf/database-vault-11g-whitepaper.pdf

Oracle Database Vault for E-Business Suite Application Data Sheet 
http://www.oracle.com/technology/deploy/security/database-security/pdf/ds_database_vault_ebusiness.pdf

Enterprise Data Security Assessment
http://www.oracle.com/broadband/survey/security/index.html

Installing Database Vault in a Data Guard Environment
  	Doc ID: 	754065.1

Cannot Install Database Vault in a Single Instance Database in a RAC home.
  	Doc ID: 	604773.1

How To Restrict The Access To An Object For The Object's Owner
  	Doc ID: 	550265.1



-- APPS - DATABASE VAULT

Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 10.2.0.4
  	Doc ID: 	428503.1 	Type: 	WHITE PAPER





-- ENCRYPTION NETWORK

Encrypting EBS 11i Network Traffic using Advanced Security Option / Advanced Networking Option
 	Doc ID:	Note:391248.1




-- LABEL SECURITY

How to Install / Deinstall Oracle Label Security Oracle9i/10g
 	Doc ID:	Note:171155.1

	If you install OLS on a 10.2.0.3 home, you must reinstall the patchset and run catols.sql after the installation.
	If you add the OLS option with the OUI after you have applied a patchset, you
	must re-apply the same patchset; the OUI that comes with the patchset will then
	update the binary component of the OLS option to the same patchset level as the RDBMS.
	This typically takes little time compared to a complete patchset installation.

After Installing OLS, Create Policy Issues ORA-12447 and ORA-600 [KGHALO2]
 	Doc ID:	Note:303511.1

Oracle Label Security Frequently Asked Questions
 	Doc ID:	Note:213684.1

Note 234599.1 Enabling Oracle Label Security in Oracle E-Business Suite

Oracle Label Security Packages affect Data Guard usage of Switchover and connections to Primary Database
 	Doc ID:	Note:265192.1

Installing Oracle Label Security Automatically Moves AUD$ Table out from SYS into SYSTEM schema
 	Doc ID:	Note:278184.1

catnools.sql is not available in $ORACLE_HOME/rdbms/admin
 	Doc ID:	Note:239825.1

Ora-439 Oracle Label Security Option Not Enabled though Already Installed
 	Doc ID:	Note:250411.1

Unable to Install OLS on 10.1.0.3
 	Doc ID:	Note:303751.1

Bug 3024516 - Oracle Label Security marked as INVALID in DBA_REGISTRY after upgrade
 	Doc ID:	Note:3024516.8

For an easy way to install, follow this OBE: 
  http://www.oracle.com/technology/obe/obe10gdb/install/lsinstall/lsinstall.htm
 	



-- SSL AUTHENTICATION

Step by Step Guide To Configure SSL Authentication
  	Doc ID: 	736510.1



-- TDE

10g R2 New Feature TDE : Transparent Data Encryption
  	Doc ID: 	317311.1

Fails To Open / Create The Wallet: ORA-28353
  	Doc ID: 	395252.1

Using Transparent Data Encryption with Oracle E-Business Suite Release 11i
  	Doc ID: 	403294.1

TDE - Trying To Open Wallet In Default Location Fails With Ora-28353
  	Doc ID: 	391086.1

How to Open the Encryption Wallet Automatically When the Database Starts.
  	Doc ID: 	460293.1

Bug 5551624 - ORA-28353 creating a wallet
  	Doc ID: 	5551624.8

10gR2: How to Export/Import with Data Encrypted with Transparent Data Encryption (TDE) -- TDE is only compatible with DataPump export and DataPump import.
  	Doc ID: 	317317.1

Using Transparent Data Encryption In An Oracle Dataguard Config in 10gR2
  	Doc ID: 	389958.1

Managing TDE wallets in a RAC environment
  	Doc ID: 	567287.1

Transferring Encrypted Data from one Database to Another
  	Doc ID: 	270919.1

Selective Data Encryption in Oracle RDBMS, Overview and References
  	Doc ID: 	232000.1

How To Generate A New Master Encryption Key for the TDE
  	Doc ID: 	445147.1

Doc ID 728292.1 Known Issues When Using TDE and Indexes on the Encrypted Columns
  	
Doc ID 454980.1 Best Practices for having indexes on encrypted columns using TDE in 10gR2




-- SECURE APPLICATION ROLES

ORA-28201 Not Enough Privileges to Enable Application Role
  	Doc ID: 	150418.1

OERR: ORA-28201 Not enough privileges to enable application role '%s'
  	Doc ID: 	173528.1

An Example of Using Application Context's Initialized Globally
  	Doc ID: 	242156.1

Changing Role within Stored Procedures using dbms_session.set_role
  	Doc ID: 	69483.1





-- DEFINER - INVOKER RIGHTS

Invokers Rights Procedure Executed by Definers Rights Procedures
  	Doc ID: 	162489.1

How to Know if a Stored Procedure is Defined as AUTHID CURRENT_USER ?
  	Doc ID: 	130425.1




-- PROXY USERS

Using JDBC to Connect Through a Proxy User
  	Doc ID: 	227538.1

http://www.oracle.com/technology/products/ias/toplink/doc/1013/main/_html/dblgcfg008.htm

http://www.it-eye.nl/weblog/2005/09/12/oracle-proxy-users-by-example/

http://www.it-eye.nl/weblog/2005/09/09/oracle-proxy-users/

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:21575905259251

http://www.oracle.com/technology/tech/java/sqlj_jdbc/htdocs/jdbc_faq_0.htm#05_14




-- PUBLIC

Be Cautious When Revoking Privileges Granted to PUBLIC 
  Doc ID:  247093.1 

Some Views That Belong To SYS Are Not Created 
  Doc ID:  434905.1 

PUBLIC : Is it a User, a Role, a User Group, a Privilege ? 
  Doc ID:  234551.1 



-- CPU 

steve links 

http://www.integrigy.com/oracle-security-blog/archive/2008/01/31/oracle-exploits
http://www.oracle.com/technology/deploy/security/critical-patch-updates/cpuapr2009.html


Critical Patch Update April 2009 Patch Availability Document for Oracle Products
  	Doc ID: 	786800.1

http://www.oracle.com/technology/deploy/security/critical-patch-updates/cpuapr2009.html

Critical Patch Update April 2009 Database Known Issues
  	Doc ID: 	786803.1

https://metalink.oracle.com/metalink/plsql/f?p=200:10:1924032030661268483::NO:::

Security Alerts and Critical Patch Updates- Frequently Asked Questions
  	Doc ID: 	360470.1

10.2.0.3 Patch Set - Availability and Known Issues
  	Doc ID: 	401435.1

Release Schedule of Current Database Patch Sets
  	Doc ID: 	742060.1

10.2.0.4 Patch Set - List of Bug Fixes by Problem Type
  	Doc ID: 	401436.1

How To Find The Description/Details Of The Bugs Fixed By A Patch Using Opatch?
  	Doc ID: 	750350.1

Critical Patch Update April 2009 Database Patch Security Vulnerability Molecule Mapping
  	Doc ID: 	786811.1

How to confirm that a Critical Patch Update (CPU) has been installed
  	Doc ID: 	821263.1

Introduction to "Bug Description" Articles
  	Doc ID: 	245840.1

Interim Patch (One-Off Patch) FAQ
  	Doc ID: 	726362.1


http://www.oracle.com/technology/deploy/security/cpu/cpufaq.htm

http://www.slaviks-blog.com/2009/01/20/oracle-cpu-dissected/





http://www.freelists.org/post/oracle-l/AWR-logical-reads-question,3

{{{
step 1 ####
11490   RPT_STAT_DEFS(STAT_LOGC_READ).NAME := 'session logical reads';
11491   RPT_STAT_DEFS(STAT_LOGC_READ).SOURCE := SRC_SYSDIF;

step 2 ####
 (select dataobj#, obj#, dbid,
14958                        sum(logical_reads_delta) logical_reads
14959                 from dba_hist_seg_stat

step 3####
         decode(:gets, 0, to_number(null),
14955                         100 * logical_reads / :gets) ratio

and also, there's a part that filters for COMMAND_TYPE 47 ('PL/SQL BLOCK', i.e. begin/declare blocks) and marks those as zero
}}}


{{{
session logical reads                     6,050,561       10,033.0       2,383.1


Segments by Logical Reads                  DB/Inst: IVRS/ivrs  Snaps: 338-339
-> Total Logical Reads:       6,050,561
-> Captured Segments account for  101.7% of Total

           Tablespace                      Subobject  Obj.       Logical
Owner         Name    Object Name            Name     Type         Reads  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
TPCH       TPCHTAB    LINEITEM                        TABLE    4,960,400   81.98
TPCH       TPCHTAB    ORDERS                          TABLE      502,768    8.31
TPCH       TPCHTAB    PARTSUPP                        TABLE      161,968    2.68
TPCH       TPCHTAB    PART                            TABLE       95,984    1.59
TPCC       USERS      STOCK_I1                        INDEX       91,984    1.52
          -------------------------------------------------------------

          
session logical reads                     6,050,561       10,033.0       2,383.1


4,960,400/6,050,561
= 0.819824806327876
}}}
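The %Total arithmetic in the AWR output above can be checked with a quick sketch (numbers are taken from the report; the zero guard mirrors the decode(:gets, 0, to_number(null), 100 * logical_reads / :gets) logic quoted earlier):

```python
# Reproduce the "Segments by Logical Reads" %Total math from the AWR
# output above. The guard mirrors the decode() that returns NULL
# when total gets are zero.
def pct_total(segment_reads, total_gets):
    if total_gets == 0:
        return None  # decode() yields NULL when :gets is 0
    return 100.0 * segment_reads / total_gets

total = 6_050_561      # session logical reads for the snap interval
lineitem = 4_960_400   # LINEITEM segment logical reads

print(round(pct_total(lineitem, total), 2))  # 81.98, matching the report
```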
/***
|Name:|SelectThemePlugin|
|Description:|Lets you easily switch theme and palette|
|Version:|1.0.1 ($Rev: 3646 $)|
|Date:|$Date: 2008-02-27 02:34:38 +1000 (Wed, 27 Feb 2008) $|
|Source:|http://mptw.tiddlyspot.com/#SelectThemePlugin|
|Author:|Simon Baird <simon.baird@gmail.com>|
|License:|http://mptw.tiddlyspot.com/#TheBSDLicense|
!Notes
* Borrows largely from ThemeSwitcherPlugin by Martin Budden http://www.martinswiki.com/#ThemeSwitcherPlugin
* Theme is cookie based. But set a default by setting config.options.txtTheme in MptwConfigPlugin (for example)
* Palette is not cookie based. It actually overwrites your ColorPalette tiddler when you select a palette, so beware. 
!Usage
* {{{<<selectTheme>>}}} makes a dropdown selector
* {{{<<selectPalette>>}}} makes a dropdown selector
* {{{<<applyTheme>>}}} applies the current tiddler as a theme
* {{{<<applyPalette>>}}} applies the current tiddler as a palette
* {{{<<applyTheme TiddlerName>>}}} applies TiddlerName as a theme
* {{{<<applyPalette TiddlerName>>}}} applies TiddlerName as a palette
***/
//{{{

config.macros.selectTheme = {
	label: {
      		selectTheme:"select theme",
      		selectPalette:"select palette"
	},
	prompt: {
		selectTheme:"Select the current theme",
		selectPalette:"Select the current palette"
	},
	tags: {
		selectTheme:'systemTheme',
		selectPalette:'systemPalette'
	}
};

config.macros.selectTheme.handler = function(place,macroName)
{
	var btn = createTiddlyButton(place,this.label[macroName],this.prompt[macroName],this.onClick);
	// want to handle palettes and themes with same code. use mode attribute to distinguish
	btn.setAttribute('mode',macroName);
};

config.macros.selectTheme.onClick = function(ev)
{
	var e = ev ? ev : window.event;
	var popup = Popup.create(this);
	var mode = this.getAttribute('mode');
	var tiddlers = store.getTaggedTiddlers(config.macros.selectTheme.tags[mode]);
	// for default
	if (mode == "selectPalette") {
		var btn = createTiddlyButton(createTiddlyElement(popup,'li'),"(default)","default color palette",config.macros.selectTheme.onClickTheme);
		btn.setAttribute('theme',"(default)");
		btn.setAttribute('mode',mode);
	}
	for(var i=0; i<tiddlers.length; i++) {
		var t = tiddlers[i].title;
		var name = store.getTiddlerSlice(t,'Name');
		var desc = store.getTiddlerSlice(t,'Description');
		var btn = createTiddlyButton(createTiddlyElement(popup,'li'), name?name:t, desc?desc:config.macros.selectTheme.label[mode], config.macros.selectTheme.onClickTheme);
		btn.setAttribute('theme',t);
		btn.setAttribute('mode',mode);
	}
	Popup.show();
	return stopEvent(e);
};

config.macros.selectTheme.onClickTheme = function(ev)
{
	var mode = this.getAttribute('mode');
	var theme = this.getAttribute('theme');
	if (mode == 'selectTheme')
		story.switchTheme(theme);
	else // selectPalette
		config.macros.selectTheme.updatePalette(theme);
	return false;
};

config.macros.selectTheme.updatePalette = function(title)
{
	if (title != "") {
		store.deleteTiddler("ColorPalette");
		if (title != "(default)")
			store.saveTiddler("ColorPalette","ColorPalette",store.getTiddlerText(title),
					config.options.txtUserName,undefined,"");
		refreshAll();
		if(config.options.chkAutoSave)
			saveChanges(true);
	}
};

config.macros.applyTheme = {
	label: "apply",
	prompt: "apply this theme or palette" // i'm lazy
};

config.macros.applyTheme.handler = function(place,macroName,params,wikifier,paramString,tiddler) {
	var useTiddler = params[0] ? params[0] : tiddler.title;
	var btn = createTiddlyButton(place,this.label,this.prompt,config.macros.selectTheme.onClickTheme);
	btn.setAttribute('theme',useTiddler);
	btn.setAttribute('mode',macroName=="applyTheme"?"selectTheme":"selectPalette"); // a bit untidy here
}

config.macros.selectPalette = config.macros.selectTheme;
config.macros.applyPalette = config.macros.applyTheme;

config.macros.refreshAll = { handler: function(place,macroName,params,wikifier,paramString,tiddler) {
	createTiddlyButton(place,"refresh","refresh layout and styles",function() { refreshAll(); });
}};

//}}}
<<showtoc>>

! querying 
http://docs.sequelizejs.com/en/v3/docs/querying/

! Sequelize CRUD 101
http://lorenstewart.me/2016/10/03/sequelize-crud-101/?utm_source=nodeweekly&utm_medium=email

! sequelize support for oracle 
https://github.com/gintsgints/sequelize  <- the most updated fork, use this
https://github.com/featurist/sworm <- this looks pretty good, has a wrapper to node-oracledb
https://github.com/sequelize/sequelize/issues/3013
https://github.com/sequelize/sequelize/search?q=oracle&type=Issues&utf8=%E2%9C%93
http://stackoverflow.com/questions/33803398/no-data-recovered-with-sequelize-oracle
https://github.com/SGrondin/oracle-orm
http://stackoverflow.com/questions/14403153/node-js-oracle-orm
http://stackoverflow.com/search?page=2&tab=relevance&q=oracle%20sequelize
http://stackoverflow.com/questions/14403153/node-js-oracle-orm/21118082#21118082
<<<
In Node.js, the best-supported ORMs are for open source databases.

Oracle and companies that use Oracle tend to use ADF (Java), APEX (PL/SQL), plus other Oracle-specific tools (Report Writer, Oracle Forms)
<<<
Yes, don't fetch the sequence value separately at all. Just put the seq_name.nextval directly in the insert.

insert into x (c1, c2, c3) values (seq_name.nextval,:1, :2);

Also, make sure the sequences are cached.


---------


> 
> Oracle 11g PL-SQL supports the following syntax for getting the next value
> from a Sequence into a variable
> 
> DECLARE
>  l_n_seqval NUMBER;
> BEGIN
>  l_n_seqval := SEQ_NAME.NEXTVAL;
> END;
> 
> whereas in 10g and earlier a Select from DUAL was needed
> 
> DECLARE
>  l_n_seqval NUMBER;
> BEGIN
>  SELECT SEQ_NAME.NEXTVAL
>  INTO l_n_seqval
>  FROM DUAL;
> END;
> 
> We use the 10g technique in a group of triggers which generate audit
> records; the sequence gets tens or hundreds of thousands of hits daily.
> Any minuscule incremental improvement could be significant.
> 
> In your experience is there any efficiency to be gained from the 11g syntax
> (i.e., avoiding the SELECT FROM DUAL)?
alter session set "_serial_direct_read" = always;

http://dioncho.wordpress.com/2010/06/09/interesting-combination-of-rac-and-serial-direct-path-read/
http://dioncho.wordpress.com/2009/07/21/disabling-direct-path-read-for-the-serial-full-table-scan-11g/
http://sai-oracle.blogspot.com/2007/12/how-to-bypass-buffer-cache-for-full.html
http://oracledoug.com/serendipity/index.php?/archives/1321-11g-and-direct-path-reads.html
Control Services and Scheduled Jobs at Startup?
http://morganslibrary.org/hci/hci012.html
Peoplesoft - Using Set Processing https://docs.oracle.com/cd/E57990_01/pt853pbh2/eng/pt/tape/task_UsingSetProcessing-07720a.html#topofpage
Moving from Procedural to Set-Based Thinking http://www.orchestrapit.co.uk/?p=171
Faster Batch Processing http://www.oracle.com/technetwork/testcontent/o26performance-096310.html
https://savvinov.com/2017/07/10/set-based-processing/
http://blog.orapub.com/20120513/oracle-database-row-versus-set-processing-surprise.html
http://structureddata.org/2010/07/20/the-core-performance-fundamentals-of-oracle-data-warehousing-set-processing-vs-row-processing/
Real-World Performance - 8 - Set Based Parallel Processing https://www.youtube.com/watch?v=sriSU6eWGzU

https://www.codeproject.com/Articles/34142/Understanding-Set-based-and-Procedural-approaches










http://unixed.com/blog/2014/09/setup-x11-access-to-the-solaris-gui-gnome-desktop/
http://www.clustrix.com/blog/bid/257352/Sharding-In-Theory-and-Practice

https://instagram-engineering.com/sharding-ids-at-instagram-1cf5a71e5a5c
https://en.wikipedia.org/wiki/Universally_unique_identifier


HERA sharding on RAC
https://medium.com/paypal-engineering/scaling-database-access-for-100s-of-billions-of-queries-per-day-paypal-introducing-hera-e192adacda54





Troubleshooting: Tuning the Shared Pool and Tuning Library Cache Latch Contention (Doc ID 62143.1)

http://blog.tanelpoder.com/2010/11/04/a-little-new-feature-for-shared-pool-geeks/

understanding shared pool memory structures https://www.oracle.com/technetwork/database/manageability/ps-s003-274003-106-1-fin-v2-128827.pdf


http://www.overclockers.com/short-stroke-raid/
http://www.simplisoftware.com/Public/index.php?request=HdTach

http://www.overclock.net/raid-controllers-software/690318-raid-0-hdds-best-stripe-size.html

http://www.tomshardware.com/forum/244351-32-partion-hard-drive-performance
<<<
Just so we're all clear, the fastest portion of any hard disk is the outer track, not the inner track. Hard disks will also allocate partitions from the outside first, and move inwards with additional partitions.

The trick of taking a large hard drive and partitioning it such that a small partition is the only thing on it and uses only the outer, faster tracks is called short-stroking.

The partition's STR is quite fast because only the outer tracks are used, and the access times for that partition go down as well, because the head is only moving over a shorter range of tracks instead of the whole platter.

This technique has been used in enterprise environments on SCSI/SAS drives to increase database performance, where in many cases speed of access is more important than storage capacity.
<<<
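The geometry argument in the quote can be sketched numerically. Assuming (hypothetically) constant recording density and rotation speed, sequential throughput scales with track radius, so a partition confined to the outer tracks both transfers faster and shortens the worst-case seek range. All radii below are made-up illustration values:

```python
# Sketch of the short-stroking argument above. Numbers are hypothetical:
# with constant bit density and RPM, sequential transfer rate scales
# linearly with track radius.
outer_radius_mm = 47.0   # hypothetical outermost track radius
inner_radius_mm = 20.0   # hypothetical innermost track radius

# Relative sequential transfer rate, outer track vs inner track.
str_ratio = outer_radius_mm / inner_radius_mm

# Short-stroke: confine the partition to the outer 1/3 of the radial
# range, so the head sweeps a third of the distance on a worst-case seek.
full_sweep = outer_radius_mm - inner_radius_mm
short_sweep = full_sweep / 3

print(str_ratio, short_sweep < full_sweep)
```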


http://techreport.com/forums/viewtopic.php?f=5&t=3843
http://www.storagereview.com/articles/200109/20010918ST380021A_STR.html

Capacity's Effect on Server Performance http://www.storagereview.com/capacity_s_effect_on_server_performance
''Do the following:''
{{{
http://www.makeuseof.com/tag/7-hidden-windows-caches-clear/ <- do this first on windows
http://superuser.com/questions/1050417/how-to-clean-windows-installer-folder-in-windows-10      
   https://blogs.technet.microsoft.com/joscon/2012/01/18/can-you-safely-delete-files-in-the-windirinstaller-directory/
   https://www.raymond.cc/blog/safely-delete-unused-msi-and-mst-files-from-windows-installer-folder/
   http://www.homedev.com.au/free/patchcleaner
   http://superuser.com/questions/707767/how-can-i-free-up-drive-space-from-the-windows-installer-folder-without-killing
http://www.techentice.com/delete-pagefile-sys-in-windows-7/

http://www.howtogeek.com/184091/5-ways-to-free-up-disk-space-on-a-mac/ <- then do this on mac 

http://www.netreliant.com/news/9/17/Compacting-VirtualBox-Disk-Images-Windows-Guests.html   <- GOOD STUFF
}}}

{{{
* download sdelete v.1.61 at kaige21.tistory.com/288
* defrag C drive - right-click the drive and choose Properties, select the Tools tab, and click Defragment now
* execute sdelete - sdelete.exe -z C:
* shutdown VM
* compact vdi - VBoxManage modifyhd --compact "[drive]:\[path_to_image_file]\[name_of_image_file].vdi"
}}}


http://www.joshhardman.net/shrink-virtualbox-vdi-files/
http://maketecheasier.com/shrink-your-virtualbox-vm/2009/04/06
http://kakku.wordpress.com/2008/06/23/virtualbox-shrink-your-vdi-images-space-occupied-disk-size/
http://www.linuxreaders.com/2009/04/21/how-to-shrink-your-virtualbox-vm/
http://jimiz.net/blog/2010/02/compress-vdi-file-virtualbox/

https://www.maketecheasier.com/shrink-your-virtualbox-vm
http://dantwining.co.uk/2011/07/18/how-to-shrink-a-dynamically-expanding-guest-virtualbox-image/
http://superuser.com/questions/529149/how-to-compact-virtualboxs-vdi-file-size


gc buffer busy acquire  https://juliandontcheff.wordpress.com/2013/04/21/dba-tips-for-tuning-siebel-on-rac-and-exadata/
Oracle RAC Database aware Applications - A Developer’s Checklist  http://www.oracle.com/technetwork/database/availability/racdbawareapplications-1933522.pdf

Bug 14618938 : EXADATA: GCS DRM FREEZE IN ENTER SERVER MODE
Oracle Database - the best choice for Siebel Applications    http://www.oracle.com/us/products/database/oracle-database-siebel-bwp-068927.pdf
Guidelines for Using Real Application Clusters for an Oracle Database  https://docs.oracle.com/cd/E14004_01/books/SiebInstWIN/SiebInstCOM_RDBMS13.html
Siebel on Exadata   http://www.oracle.com/technetwork/database/features/availability/maa-wp-siebel-exadata-177506.pdf

http://www.wikihow.com/Calculate-Growth-Rate
http://stackoverflow.com/questions/19824601/how-calculate-growth-rate-in-long-format-data-frame
How to simulate a slow query. Useful for testing of timeout issues [ID 357615.1]

{{{
Purpose

A simple way for controlling the speed at which a query executes.  Useful when investigating timeout issues.
Software Requirements/Prerequisites

SQL*Plus
Configuring the Sample Code

User should have execute privilege on DBMS_LOCK package.

Running the Sample Code

1- login with SQL*PLUS

2- execute the function slow_query

Caution

This sample code is provided for educational purposes only and not supported by Oracle Support Services. It has been tested internally, however, and works as documented. We do not guarantee that it will work for you, so be sure to test it in your environment before relying on it.
Proofread this sample code before using it! Due to the differences in the way text editors, e-mail packages and operating systems handle text formatting (spaces, tabs and carriage returns), this sample code may not be in an executable state when you first receive it. Check over the sample code to ensure that errors of this type are corrected.

Sample Code

CREATE OR REPLACE FUNCTION slow_query( p_wait number)  
  RETURN varchar2 IS  
       v_date1 date;  
       v_date2 date;  
BEGIN  
    SELECT sysdate INTO v_date1 FROM dual;  
    FOR i in 1..p_wait LOOP  
       dbms_lock.sleep(1);  
    END LOOP;  
    SELECT sysdate INTO v_date2 FROM dual;  
    RETURN to_char(trunc((v_date2 - v_date1) *60*60*24)) || ' seconds delay';  
END;
/

This implementation is preferable to one that makes a single call to dbms_lock.sleep over a long period of time. The reason is that some types of break requests may not get processed until after the call completes, depending on the type of client making the call (i.e. JDBC/thin connections) and the OS's ability to signal the system call invoked by dbms_lock.sleep.


Another possible implementation that uses a busy loop rather than locking calls is:
create or replace function slow_query(p_seconds_wait number) 
       return varchar2 is
v_date_end date;
v_date_now date;
v_date_start date;
begin
select sysdate, sysdate, sysdate + p_seconds_wait/(24*60*60)
                     into v_date_start, v_date_now, v_date_end from dual;
while ( v_date_now < v_date_end ) loop
  select sysdate into v_date_now from dual;
end loop;
return to_char(trunc((v_date_now - v_date_start) *60*60*24))
             || ' seconds delay';
end;
/ 

Sample Code Output

SQL> select slow_query(10) from dual; 

SLOW_QUERY(10) 
-------------------------------------------------------------------------------- 
10 seconds delay 


<------ This query took 10 seconds to execute 


1* select dname, slow_query(2) slow_query from dept 
SQL> / 

DNAME SLOW_QUERY 
------------------------------------------ -------------------- 
ACCOUNTING 2 seconds delay 
RESEARCH 2 seconds delay 
SALES 2 seconds delay 
OPERATIONS 2 seconds delay 


<---------- this query took 8 seconds to execute 

tip: If you set the arraysize to be 1 you can actually see the rows coming in one by one. If the arraysize is 15 then the rows appear together after 8 seconds delay. 
}}}
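The arraysize tip at the end of the note can be modeled: each row costs a fixed delay server-side, and the client sees a batch only once arraysize rows are ready, so arraysize 1 shows the four DEPT rows trickling in while arraysize 15 delivers them all together after 8 seconds. This is a toy model of the fetch behavior, not a SQL*Plus simulation:

```python
# Toy model of the arraysize behavior described above: each row takes
# `delay` seconds to produce, and the client receives a batch only once
# `arraysize` rows are ready (or the result set is exhausted).
def batch_arrival_times(rows, delay, arraysize):
    arrivals = []
    for i in range(1, rows + 1):
        if i % arraysize == 0 or i == rows:
            arrivals.append(i * delay)  # a batch reaches the client
    return arrivals

# 4 DEPT rows at 2 seconds each, as in slow_query(2):
print(batch_arrival_times(4, 2, 1))   # [2, 4, 6, 8] -> rows one by one
print(batch_arrival_times(4, 2, 15))  # [8] -> all rows after 8 seconds
```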



{{{

1) Create a PL/SQL function that sleeps:
CREATE OR REPLACE FUNCTION slow( p_seconds IN number ) RETURN number IS
BEGIN
  dbms_lock.sleep( p_seconds );
  RETURN 1;
END;
/

2) Call the above PL/SQL function in the SQL query below to make the query run for more than 6 minutes:

SELECT slow( 0.1 ) FROM dual CONNECT BY level <= 20000
}}}
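The runtime claim above is simple arithmetic: CONNECT BY generates 20,000 rows and slow(0.1) sleeps 0.1 s per row, so the query runs roughly 2,000 seconds, well past the 6-minute mark:

```python
# Arithmetic behind the "more than 6 min" claim above: CONNECT BY
# generates 20,000 rows and slow(0.1) sleeps 0.1 s for each of them.
rows = 20_000
sleep_per_row_s = 0.1
total_s = rows * sleep_per_row_s
print(total_s, total_s / 60)  # 2000.0 seconds, about 33 minutes
```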
Fishworks simulator quick guide http://www.evernote.com/shard/s48/sh/be6faabd-df78-465a-bc4b-ce4db3c99358/6b4d5f084d8b5d1351b081009914829e

Karl Arao'<<tiddler ToggleRightSidebar with: "s">> TiddlyWiki
https://www.evernote.com/l/ADAUIHBMoIxODJ3X3rxHczGEkh0n6LVnT3M
http://download.oracle.com/docs/cd/E11857_01/em.111/e16790/sizing.htm#EMADM9354
http://download.oracle.com/docs/cd/B16240_01/doc/em.102/e10954/sizing.htm#CEGCDFFE


how frequently does it talk to the OMS? 
what is the size of the data it pushes to the OMS?

1) Download Skype RPM
2) Plug in the cam
3) Read this
http://forum.skype.com/index.php?showtopic=522511
https://help.ubuntu.com/community/Webcam#Skype
4) Install the 32bit libv4l:
yum install libv4l.i686
Then create a wrapper script /usr/local/bin/skype containing:
LD_PRELOAD=/usr/lib/libv4l/v4l2convert.so
/usr/bin/skype
and make it executable:
chmod a+x /usr/local/bin/skype 

or run as 

bash -c 'LD_PRELOAD=/usr/lib/libv4l/v4l2convert.so skype'

5) restart skype
Exadata Smart Scan troubleshooting wrong results
http://www.evernote.com/shard/s48/sh/13cfe3fd-f9c1-423c-b1d2-c8a27708178b/fc6a3da907ab0e68939d8761530d1bd4

Exadata: How to diagnose smart scan and wrong results [ID 1260804.1]
http://www.youtube.com/watch?v=L_Ye89cDmKU
Why Diskless Booting is Good http://wiki.smartos.org/display/DOC/Using+SmartOS
Why you need ZFS http://wiki.smartos.org/display/DOC/ZFS
Tuning the IO Throttle http://wiki.smartos.org/display/DOC/Tuning+the+IO+Throttle
http://wiki.smartos.org/display/DOC/How+to+create+a+Virtual+Machine+in+SmartOS
http://wiki.smartos.org/display/DOC/How+to+create+a+KVM+VM+%28+Hypervisor+virtualized+machine+%29+in+SmartOS

cuddletech ppt http://www.cuddletech.com/RealWorld-OpenSolaris.pdf
http://serverfault.com/questions/363842/what-design-features-make-joyents-zfs-and-amazons-ebs-s3-reliable





http://www.evernote.com/shard/s48/sh/80abddd2-ecf4-4c7c-9c91-bc8f28e2562e/fe9ba812e6f02fe6f8d5fd00dd37c707
How to diagnose smart scan and wrong results [ID 1260804.1]
Best Practices for OLTP on the Sun Oracle Database Machine [ID 1269706.1]


''-- troubleshooting''
http://kerryosborne.oracle-guy.com/2010/06/exadata-offload-the-secret-sauce/
http://tech.e2sn.com/oracle/exadata/performance-troubleshooting/exadata-smart-scan-performance
http://fritshoogland.wordpress.com/2010/08/23/an-investigation-into-exadata/

http://danirey.wordpress.com/2011/03/07/oracle-exadata-performance-revealed-smartscan-part-iii/
http://www.slideshare.net/padday/the-real-life-social-network-v2



http://www.akadia.com/services/solaris_tips.html

Transparent Failover with Solaris MPxIO and Oracle ASM
http://blogs.sun.com/BestPerf/entry/transparent_failover_with_solaris_mpxio	

http://developers.sun.com/solaris/articles/solaris_perftools.html
http://www.solarisinternals.com/wiki/index.php/Solaris_Internals_and_Performance_FAQ
http://blogs.sun.com/WCP/entry/cooltst_cool_threads_selection_tool
http://cooltools.sunsource.net/cooltst/index.html
http://glennfawcett.wordpress.com/2010/09/21/oracle-open-world-presentation-uploaded-optimizing-oracle-databases-on-sparc-enterprise-m-series-servers/

{{{
system sockets cores threads
M9000-64 64 256 512
M9000-32 32 128 256
M8000 16 64 128
M5000 8 32 64
M4000 4 16 32
M3000 1 4 8
}}}
http://www.hotchips.org/wp-content/uploads/hc_archives/hc24/HC24-9-Big-Iron/HC24.29.926-SPARC-T5-CMT-Turullois-Oracle-final6.pdf
also check this out ''SPARC M5-32 and M6-32 Servers: Processor numbering and decoding CPU location. (Doc ID 1540202.1)''

{{{
$ sh showcpucount
Total number of physical processors: 4
Number of virtual processors: 384
Total number of cores: 48
Number of cores per physical processor: 12
Number of hardware threads (strands or vCPUs) per core: 8
Processor speed: 3600 MHz (3.60 GHz)
-e
** Socket-Core-vCPU mapping **
showcpucount: line 25: syntax error at line 34: `(' unexpected

$ prtdiag | head -1
System Configuration:  Oracle Corporation  sun4v SPARC T5-8


oracle@enksc1client01:/export/home/oracle:dbm011
$ prtdiag
System Configuration:  Oracle Corporation  sun4v SPARC T5-8
Memory size: 785152 Megabytes

================================ Virtual CPUs ================================


CPU ID Frequency Implementation         Status
------ --------- ---------------------- -------
0      3600 MHz  SPARC-T5               on-line
1      3600 MHz  SPARC-T5               on-line
2      3600 MHz  SPARC-T5               on-line
3      3600 MHz  SPARC-T5               on-line
4      3600 MHz  SPARC-T5               on-line
5      3600 MHz  SPARC-T5               on-line
6      3600 MHz  SPARC-T5               on-line
7      3600 MHz  SPARC-T5               on-line
8      3600 MHz  SPARC-T5               on-line
9      3600 MHz  SPARC-T5               on-line
10     3600 MHz  SPARC-T5               on-line
11     3600 MHz  SPARC-T5               on-line
12     3600 MHz  SPARC-T5               on-line
13     3600 MHz  SPARC-T5               on-line
14     3600 MHz  SPARC-T5               on-line
15     3600 MHz  SPARC-T5               on-line
16     3600 MHz  SPARC-T5               on-line
17     3600 MHz  SPARC-T5               on-line
18     3600 MHz  SPARC-T5               on-line
19     3600 MHz  SPARC-T5               on-line
20     3600 MHz  SPARC-T5               on-line
21     3600 MHz  SPARC-T5               on-line
22     3600 MHz  SPARC-T5               on-line
23     3600 MHz  SPARC-T5               on-line
24     3600 MHz  SPARC-T5               on-line
25     3600 MHz  SPARC-T5               on-line
26     3600 MHz  SPARC-T5               on-line
27     3600 MHz  SPARC-T5               on-line
28     3600 MHz  SPARC-T5               on-line
29     3600 MHz  SPARC-T5               on-line
30     3600 MHz  SPARC-T5               on-line
31     3600 MHz  SPARC-T5               on-line
32     3600 MHz  SPARC-T5               on-line
33     3600 MHz  SPARC-T5               on-line
34     3600 MHz  SPARC-T5               on-line
35     3600 MHz  SPARC-T5               on-line
36     3600 MHz  SPARC-T5               on-line
37     3600 MHz  SPARC-T5               on-line
38     3600 MHz  SPARC-T5               on-line
39     3600 MHz  SPARC-T5               on-line
40     3600 MHz  SPARC-T5               on-line
41     3600 MHz  SPARC-T5               on-line
42     3600 MHz  SPARC-T5               on-line
43     3600 MHz  SPARC-T5               on-line
44     3600 MHz  SPARC-T5               on-line
45     3600 MHz  SPARC-T5               on-line
46     3600 MHz  SPARC-T5               on-line
47     3600 MHz  SPARC-T5               on-line
48     3600 MHz  SPARC-T5               on-line
49     3600 MHz  SPARC-T5               on-line
50     3600 MHz  SPARC-T5               on-line
51     3600 MHz  SPARC-T5               on-line
52     3600 MHz  SPARC-T5               on-line
53     3600 MHz  SPARC-T5               on-line
54     3600 MHz  SPARC-T5               on-line
55     3600 MHz  SPARC-T5               on-line
56     3600 MHz  SPARC-T5               on-line
57     3600 MHz  SPARC-T5               on-line
58     3600 MHz  SPARC-T5               on-line
59     3600 MHz  SPARC-T5               on-line
60     3600 MHz  SPARC-T5               on-line
61     3600 MHz  SPARC-T5               on-line
62     3600 MHz  SPARC-T5               on-line
63     3600 MHz  SPARC-T5               on-line
64     3600 MHz  SPARC-T5               on-line
65     3600 MHz  SPARC-T5               on-line
66     3600 MHz  SPARC-T5               on-line
67     3600 MHz  SPARC-T5               on-line
68     3600 MHz  SPARC-T5               on-line
69     3600 MHz  SPARC-T5               on-line
70     3600 MHz  SPARC-T5               on-line
71     3600 MHz  SPARC-T5               on-line
72     3600 MHz  SPARC-T5               on-line
73     3600 MHz  SPARC-T5               on-line
74     3600 MHz  SPARC-T5               on-line
75     3600 MHz  SPARC-T5               on-line
76     3600 MHz  SPARC-T5               on-line
77     3600 MHz  SPARC-T5               on-line
78     3600 MHz  SPARC-T5               on-line
79     3600 MHz  SPARC-T5               on-line
80     3600 MHz  SPARC-T5               on-line
81     3600 MHz  SPARC-T5               on-line
82     3600 MHz  SPARC-T5               on-line
83     3600 MHz  SPARC-T5               on-line
84     3600 MHz  SPARC-T5               on-line
85     3600 MHz  SPARC-T5               on-line
86     3600 MHz  SPARC-T5               on-line
87     3600 MHz  SPARC-T5               on-line
88     3600 MHz  SPARC-T5               on-line
89     3600 MHz  SPARC-T5               on-line
90     3600 MHz  SPARC-T5               on-line
91     3600 MHz  SPARC-T5               on-line
92     3600 MHz  SPARC-T5               on-line
93     3600 MHz  SPARC-T5               on-line
94     3600 MHz  SPARC-T5               on-line
95     3600 MHz  SPARC-T5               on-line
96     3600 MHz  SPARC-T5               on-line
97     3600 MHz  SPARC-T5               on-line
98     3600 MHz  SPARC-T5               on-line
99     3600 MHz  SPARC-T5               on-line
100    3600 MHz  SPARC-T5               on-line
101    3600 MHz  SPARC-T5               on-line
102    3600 MHz  SPARC-T5               on-line
103    3600 MHz  SPARC-T5               on-line
104    3600 MHz  SPARC-T5               on-line
105    3600 MHz  SPARC-T5               on-line
106    3600 MHz  SPARC-T5               on-line
107    3600 MHz  SPARC-T5               on-line
108    3600 MHz  SPARC-T5               on-line
109    3600 MHz  SPARC-T5               on-line
110    3600 MHz  SPARC-T5               on-line
111    3600 MHz  SPARC-T5               on-line
112    3600 MHz  SPARC-T5               on-line
113    3600 MHz  SPARC-T5               on-line
114    3600 MHz  SPARC-T5               on-line
115    3600 MHz  SPARC-T5               on-line
116    3600 MHz  SPARC-T5               on-line
117    3600 MHz  SPARC-T5               on-line
118    3600 MHz  SPARC-T5               on-line
119    3600 MHz  SPARC-T5               on-line
120    3600 MHz  SPARC-T5               on-line
121    3600 MHz  SPARC-T5               on-line
122    3600 MHz  SPARC-T5               on-line
123    3600 MHz  SPARC-T5               on-line
124    3600 MHz  SPARC-T5               on-line
125    3600 MHz  SPARC-T5               on-line
126    3600 MHz  SPARC-T5               on-line
127    3600 MHz  SPARC-T5               on-line
128    3600 MHz  SPARC-T5               on-line
129    3600 MHz  SPARC-T5               on-line
130    3600 MHz  SPARC-T5               on-line
131    3600 MHz  SPARC-T5               on-line
132    3600 MHz  SPARC-T5               on-line
133    3600 MHz  SPARC-T5               on-line
134    3600 MHz  SPARC-T5               on-line
135    3600 MHz  SPARC-T5               on-line
136    3600 MHz  SPARC-T5               on-line
137    3600 MHz  SPARC-T5               on-line
138    3600 MHz  SPARC-T5               on-line
139    3600 MHz  SPARC-T5               on-line
140    3600 MHz  SPARC-T5               on-line
141    3600 MHz  SPARC-T5               on-line
142    3600 MHz  SPARC-T5               on-line
143    3600 MHz  SPARC-T5               on-line
144    3600 MHz  SPARC-T5               on-line
145    3600 MHz  SPARC-T5               on-line
146    3600 MHz  SPARC-T5               on-line
147    3600 MHz  SPARC-T5               on-line
148    3600 MHz  SPARC-T5               on-line
149    3600 MHz  SPARC-T5               on-line
150    3600 MHz  SPARC-T5               on-line
151    3600 MHz  SPARC-T5               on-line
152    3600 MHz  SPARC-T5               on-line
153    3600 MHz  SPARC-T5               on-line
154    3600 MHz  SPARC-T5               on-line
155    3600 MHz  SPARC-T5               on-line
156    3600 MHz  SPARC-T5               on-line
157    3600 MHz  SPARC-T5               on-line
158    3600 MHz  SPARC-T5               on-line
159    3600 MHz  SPARC-T5               on-line
160    3600 MHz  SPARC-T5               on-line
161    3600 MHz  SPARC-T5               on-line
162    3600 MHz  SPARC-T5               on-line
163    3600 MHz  SPARC-T5               on-line
164    3600 MHz  SPARC-T5               on-line
165    3600 MHz  SPARC-T5               on-line
166    3600 MHz  SPARC-T5               on-line
167    3600 MHz  SPARC-T5               on-line
168    3600 MHz  SPARC-T5               on-line
169    3600 MHz  SPARC-T5               on-line
170    3600 MHz  SPARC-T5               on-line
171    3600 MHz  SPARC-T5               on-line
172    3600 MHz  SPARC-T5               on-line
173    3600 MHz  SPARC-T5               on-line
174    3600 MHz  SPARC-T5               on-line
175    3600 MHz  SPARC-T5               on-line
176    3600 MHz  SPARC-T5               on-line
177    3600 MHz  SPARC-T5               on-line
178    3600 MHz  SPARC-T5               on-line
179    3600 MHz  SPARC-T5               on-line
180    3600 MHz  SPARC-T5               on-line
181    3600 MHz  SPARC-T5               on-line
182    3600 MHz  SPARC-T5               on-line
183    3600 MHz  SPARC-T5               on-line
184    3600 MHz  SPARC-T5               on-line
185    3600 MHz  SPARC-T5               on-line
186    3600 MHz  SPARC-T5               on-line
187    3600 MHz  SPARC-T5               on-line
188    3600 MHz  SPARC-T5               on-line
189    3600 MHz  SPARC-T5               on-line
190    3600 MHz  SPARC-T5               on-line
191    3600 MHz  SPARC-T5               on-line
192    3600 MHz  SPARC-T5               on-line
193    3600 MHz  SPARC-T5               on-line
194    3600 MHz  SPARC-T5               on-line
195    3600 MHz  SPARC-T5               on-line
196    3600 MHz  SPARC-T5               on-line
197    3600 MHz  SPARC-T5               on-line
198    3600 MHz  SPARC-T5               on-line
199    3600 MHz  SPARC-T5               on-line
200    3600 MHz  SPARC-T5               on-line
201    3600 MHz  SPARC-T5               on-line
202    3600 MHz  SPARC-T5               on-line
203    3600 MHz  SPARC-T5               on-line
204    3600 MHz  SPARC-T5               on-line
205    3600 MHz  SPARC-T5               on-line
206    3600 MHz  SPARC-T5               on-line
207    3600 MHz  SPARC-T5               on-line
208    3600 MHz  SPARC-T5               on-line
209    3600 MHz  SPARC-T5               on-line
210    3600 MHz  SPARC-T5               on-line
211    3600 MHz  SPARC-T5               on-line
212    3600 MHz  SPARC-T5               on-line
213    3600 MHz  SPARC-T5               on-line
214    3600 MHz  SPARC-T5               on-line
215    3600 MHz  SPARC-T5               on-line
216    3600 MHz  SPARC-T5               on-line
217    3600 MHz  SPARC-T5               on-line
218    3600 MHz  SPARC-T5               on-line
219    3600 MHz  SPARC-T5               on-line
220    3600 MHz  SPARC-T5               on-line
221    3600 MHz  SPARC-T5               on-line
222    3600 MHz  SPARC-T5               on-line
223    3600 MHz  SPARC-T5               on-line
224    3600 MHz  SPARC-T5               on-line
225    3600 MHz  SPARC-T5               on-line
226    3600 MHz  SPARC-T5               on-line
227    3600 MHz  SPARC-T5               on-line
228    3600 MHz  SPARC-T5               on-line
229    3600 MHz  SPARC-T5               on-line
230    3600 MHz  SPARC-T5               on-line
231    3600 MHz  SPARC-T5               on-line
232    3600 MHz  SPARC-T5               on-line
233    3600 MHz  SPARC-T5               on-line
234    3600 MHz  SPARC-T5               on-line
235    3600 MHz  SPARC-T5               on-line
236    3600 MHz  SPARC-T5               on-line
237    3600 MHz  SPARC-T5               on-line
238    3600 MHz  SPARC-T5               on-line
239    3600 MHz  SPARC-T5               on-line
240    3600 MHz  SPARC-T5               on-line
241    3600 MHz  SPARC-T5               on-line
242    3600 MHz  SPARC-T5               on-line
243    3600 MHz  SPARC-T5               on-line
244    3600 MHz  SPARC-T5               on-line
245    3600 MHz  SPARC-T5               on-line
246    3600 MHz  SPARC-T5               on-line
247    3600 MHz  SPARC-T5               on-line
248    3600 MHz  SPARC-T5               on-line
249    3600 MHz  SPARC-T5               on-line
250    3600 MHz  SPARC-T5               on-line
251    3600 MHz  SPARC-T5               on-line
252    3600 MHz  SPARC-T5               on-line
253    3600 MHz  SPARC-T5               on-line
254    3600 MHz  SPARC-T5               on-line
255    3600 MHz  SPARC-T5               on-line
256    3600 MHz  SPARC-T5               on-line
257    3600 MHz  SPARC-T5               on-line
258    3600 MHz  SPARC-T5               on-line
259    3600 MHz  SPARC-T5               on-line
260    3600 MHz  SPARC-T5               on-line
261    3600 MHz  SPARC-T5               on-line
262    3600 MHz  SPARC-T5               on-line
263    3600 MHz  SPARC-T5               on-line
264    3600 MHz  SPARC-T5               on-line
265    3600 MHz  SPARC-T5               on-line
266    3600 MHz  SPARC-T5               on-line
267    3600 MHz  SPARC-T5               on-line
268    3600 MHz  SPARC-T5               on-line
269    3600 MHz  SPARC-T5               on-line
270    3600 MHz  SPARC-T5               on-line
271    3600 MHz  SPARC-T5               on-line
272    3600 MHz  SPARC-T5               on-line
273    3600 MHz  SPARC-T5               on-line
274    3600 MHz  SPARC-T5               on-line
275    3600 MHz  SPARC-T5               on-line
276    3600 MHz  SPARC-T5               on-line
277    3600 MHz  SPARC-T5               on-line
278    3600 MHz  SPARC-T5               on-line
279    3600 MHz  SPARC-T5               on-line
280    3600 MHz  SPARC-T5               on-line
281    3600 MHz  SPARC-T5               on-line
282    3600 MHz  SPARC-T5               on-line
283    3600 MHz  SPARC-T5               on-line
284    3600 MHz  SPARC-T5               on-line
285    3600 MHz  SPARC-T5               on-line
286    3600 MHz  SPARC-T5               on-line
287    3600 MHz  SPARC-T5               on-line
288    3600 MHz  SPARC-T5               on-line
289    3600 MHz  SPARC-T5               on-line
290    3600 MHz  SPARC-T5               on-line
291    3600 MHz  SPARC-T5               on-line
292    3600 MHz  SPARC-T5               on-line
293    3600 MHz  SPARC-T5               on-line
294    3600 MHz  SPARC-T5               on-line
295    3600 MHz  SPARC-T5               on-line
296    3600 MHz  SPARC-T5               on-line
297    3600 MHz  SPARC-T5               on-line
298    3600 MHz  SPARC-T5               on-line
299    3600 MHz  SPARC-T5               on-line
300    3600 MHz  SPARC-T5               on-line
301    3600 MHz  SPARC-T5               on-line
302    3600 MHz  SPARC-T5               on-line
303    3600 MHz  SPARC-T5               on-line
304    3600 MHz  SPARC-T5               on-line
305    3600 MHz  SPARC-T5               on-line
306    3600 MHz  SPARC-T5               on-line
307    3600 MHz  SPARC-T5               on-line
308    3600 MHz  SPARC-T5               on-line
309    3600 MHz  SPARC-T5               on-line
310    3600 MHz  SPARC-T5               on-line
311    3600 MHz  SPARC-T5               on-line
312    3600 MHz  SPARC-T5               on-line
313    3600 MHz  SPARC-T5               on-line
314    3600 MHz  SPARC-T5               on-line
315    3600 MHz  SPARC-T5               on-line
316    3600 MHz  SPARC-T5               on-line
317    3600 MHz  SPARC-T5               on-line
318    3600 MHz  SPARC-T5               on-line
319    3600 MHz  SPARC-T5               on-line
320    3600 MHz  SPARC-T5               on-line
321    3600 MHz  SPARC-T5               on-line
322    3600 MHz  SPARC-T5               on-line
323    3600 MHz  SPARC-T5               on-line
324    3600 MHz  SPARC-T5               on-line
325    3600 MHz  SPARC-T5               on-line
326    3600 MHz  SPARC-T5               on-line
327    3600 MHz  SPARC-T5               on-line
328    3600 MHz  SPARC-T5               on-line
329    3600 MHz  SPARC-T5               on-line
330    3600 MHz  SPARC-T5               on-line
331    3600 MHz  SPARC-T5               on-line
332    3600 MHz  SPARC-T5               on-line
333    3600 MHz  SPARC-T5               on-line
334    3600 MHz  SPARC-T5               on-line
335    3600 MHz  SPARC-T5               on-line
336    3600 MHz  SPARC-T5               on-line
337    3600 MHz  SPARC-T5               on-line
338    3600 MHz  SPARC-T5               on-line
339    3600 MHz  SPARC-T5               on-line
340    3600 MHz  SPARC-T5               on-line
341    3600 MHz  SPARC-T5               on-line
342    3600 MHz  SPARC-T5               on-line
343    3600 MHz  SPARC-T5               on-line
344    3600 MHz  SPARC-T5               on-line
345    3600 MHz  SPARC-T5               on-line
346    3600 MHz  SPARC-T5               on-line
347    3600 MHz  SPARC-T5               on-line
348    3600 MHz  SPARC-T5               on-line
349    3600 MHz  SPARC-T5               on-line
350    3600 MHz  SPARC-T5               on-line
351    3600 MHz  SPARC-T5               on-line
352    3600 MHz  SPARC-T5               on-line
353    3600 MHz  SPARC-T5               on-line
354    3600 MHz  SPARC-T5               on-line
355    3600 MHz  SPARC-T5               on-line
356    3600 MHz  SPARC-T5               on-line
357    3600 MHz  SPARC-T5               on-line
358    3600 MHz  SPARC-T5               on-line
359    3600 MHz  SPARC-T5               on-line
360    3600 MHz  SPARC-T5               on-line
361    3600 MHz  SPARC-T5               on-line
362    3600 MHz  SPARC-T5               on-line
363    3600 MHz  SPARC-T5               on-line
364    3600 MHz  SPARC-T5               on-line
365    3600 MHz  SPARC-T5               on-line
366    3600 MHz  SPARC-T5               on-line
367    3600 MHz  SPARC-T5               on-line
368    3600 MHz  SPARC-T5               on-line
369    3600 MHz  SPARC-T5               on-line
370    3600 MHz  SPARC-T5               on-line
371    3600 MHz  SPARC-T5               on-line
372    3600 MHz  SPARC-T5               on-line
373    3600 MHz  SPARC-T5               on-line
374    3600 MHz  SPARC-T5               on-line
375    3600 MHz  SPARC-T5               on-line
376    3600 MHz  SPARC-T5               on-line
377    3600 MHz  SPARC-T5               on-line
378    3600 MHz  SPARC-T5               on-line
379    3600 MHz  SPARC-T5               on-line
380    3600 MHz  SPARC-T5               on-line
381    3600 MHz  SPARC-T5               on-line
382    3600 MHz  SPARC-T5               on-line
383    3600 MHz  SPARC-T5               on-line

======================= Physical Memory Configuration ========================
Segment Table:
--------------------------------------------------------------
Base           Segment  Interleave   Bank     Contains
Address        Size     Factor       Size     Modules
--------------------------------------------------------------
0x0            256 GB   4            64 GB    /SYS/PM0/CM0/CMP/BOB0/CH0/D0
                                              /SYS/PM0/CM0/CMP/BOB0/CH1/D0
                                              /SYS/PM0/CM0/CMP/BOB1/CH0/D0
                                              /SYS/PM0/CM0/CMP/BOB1/CH1/D0
                                     64 GB    /SYS/PM0/CM0/CMP/BOB2/CH0/D0
                                              /SYS/PM0/CM0/CMP/BOB2/CH1/D0
                                              /SYS/PM0/CM0/CMP/BOB3/CH0/D0
                                              /SYS/PM0/CM0/CMP/BOB3/CH1/D0
                                     64 GB    /SYS/PM0/CM0/CMP/BOB4/CH0/D0
                                              /SYS/PM0/CM0/CMP/BOB4/CH1/D0
                                              /SYS/PM0/CM0/CMP/BOB5/CH0/D0
                                              /SYS/PM0/CM0/CMP/BOB5/CH1/D0
                                     64 GB    /SYS/PM0/CM0/CMP/BOB6/CH0/D0
                                              /SYS/PM0/CM0/CMP/BOB6/CH1/D0
                                              /SYS/PM0/CM0/CMP/BOB7/CH0/D0
                                              /SYS/PM0/CM0/CMP/BOB7/CH1/D0

0x80000000000  256 GB   4            64 GB    /SYS/PM0/CM1/CMP/BOB0/CH0/D0
                                              /SYS/PM0/CM1/CMP/BOB0/CH1/D0
                                              /SYS/PM0/CM1/CMP/BOB1/CH0/D0
                                              /SYS/PM0/CM1/CMP/BOB1/CH1/D0
                                     64 GB    /SYS/PM0/CM1/CMP/BOB2/CH0/D0
                                              /SYS/PM0/CM1/CMP/BOB2/CH1/D0
                                              /SYS/PM0/CM1/CMP/BOB3/CH0/D0
                                              /SYS/PM0/CM1/CMP/BOB3/CH1/D0
                                     64 GB    /SYS/PM0/CM1/CMP/BOB4/CH0/D0
                                              /SYS/PM0/CM1/CMP/BOB4/CH1/D0
                                              /SYS/PM0/CM1/CMP/BOB5/CH0/D0
                                              /SYS/PM0/CM1/CMP/BOB5/CH1/D0
                                     64 GB    /SYS/PM0/CM1/CMP/BOB6/CH0/D0
                                              /SYS/PM0/CM1/CMP/BOB6/CH1/D0
                                              /SYS/PM0/CM1/CMP/BOB7/CH0/D0
                                              /SYS/PM0/CM1/CMP/BOB7/CH1/D0

0x300000000000 256 GB   4            64 GB    /SYS/PM3/CM0/CMP/BOB0/CH0/D0
                                              /SYS/PM3/CM0/CMP/BOB0/CH1/D0
                                              /SYS/PM3/CM0/CMP/BOB1/CH0/D0
                                              /SYS/PM3/CM0/CMP/BOB1/CH1/D0
                                     64 GB    /SYS/PM3/CM0/CMP/BOB2/CH0/D0
                                              /SYS/PM3/CM0/CMP/BOB2/CH1/D0
                                              /SYS/PM3/CM0/CMP/BOB3/CH0/D0
                                              /SYS/PM3/CM0/CMP/BOB3/CH1/D0
                                     64 GB    /SYS/PM3/CM0/CMP/BOB4/CH0/D0
                                              /SYS/PM3/CM0/CMP/BOB4/CH1/D0
                                              /SYS/PM3/CM0/CMP/BOB5/CH0/D0
                                              /SYS/PM3/CM0/CMP/BOB5/CH1/D0
                                     64 GB    /SYS/PM3/CM0/CMP/BOB6/CH0/D0
                                              /SYS/PM3/CM0/CMP/BOB6/CH1/D0
                                              /SYS/PM3/CM0/CMP/BOB7/CH0/D0
                                              /SYS/PM3/CM0/CMP/BOB7/CH1/D0

0x380000000000 256 GB   4            64 GB    /SYS/PM3/CM1/CMP/BOB0/CH0/D0
                                              /SYS/PM3/CM1/CMP/BOB0/CH1/D0
                                              /SYS/PM3/CM1/CMP/BOB1/CH0/D0
                                              /SYS/PM3/CM1/CMP/BOB1/CH1/D0
                                     64 GB    /SYS/PM3/CM1/CMP/BOB2/CH0/D0
                                              /SYS/PM3/CM1/CMP/BOB2/CH1/D0
                                              /SYS/PM3/CM1/CMP/BOB3/CH0/D0
                                              /SYS/PM3/CM1/CMP/BOB3/CH1/D0
                                     64 GB    /SYS/PM3/CM1/CMP/BOB4/CH0/D0
                                              /SYS/PM3/CM1/CMP/BOB4/CH1/D0
                                              /SYS/PM3/CM1/CMP/BOB5/CH0/D0
                                              /SYS/PM3/CM1/CMP/BOB5/CH1/D0
                                     64 GB    /SYS/PM3/CM1/CMP/BOB6/CH0/D0
                                              /SYS/PM3/CM1/CMP/BOB6/CH1/D0
                                              /SYS/PM3/CM1/CMP/BOB7/CH0/D0
                                              /SYS/PM3/CM1/CMP/BOB7/CH1/D0


======================================== IO Devices =======================================
Slot +            Bus   Name +                            Model      Max Speed  Cur Speed
Status            Type  Path                                         /Width     /Width
-------------------------------------------------------------------------------------------
/SYS/MB/USB_CTLR  PCIE  usb-pciexclass,0c0330                        --         --
                        /pci@300/pci@1/pci@0/pci@4/pci@0/pci@6/usb@0
/SYS/RIO/XGBE0    PCIE  network-pciex8086,1528                       --         --
                        /pci@300/pci@1/pci@0/pci@4/pci@0/pci@8/network@0
/SYS/RIO/NET1     PCIE  network-pciex8086,1528                       --         --
                        /pci@300/pci@1/pci@0/pci@4/pci@0/pci@8/network@0,1
/SYS/MB/SASHBA0   PCIE  scsi-pciex1000,87                 LSI,2308_2 --         --
                        /pci@300/pci@1/pci@0/pci@4/pci@0/pci@c/scsi@0
/SYS/RCSA/PCIE1   PCIE  network-pciex8086,10fb            X1109a-z/1109a-z --         --
                        /pci@300/pci@1/pci@0/pci@6/network@0
/SYS/RCSA/PCIE1   PCIE  network-pciex8086,10fb            X1109a-z/1109a-z --         --
                        /pci@300/pci@1/pci@0/pci@6/network@0,1
/SYS/RCSA/PCIE3   PCIE  pciex15b3,1003                               --         --
                        /pci@340/pci@1/pci@0/pci@6/pciex15b3,1003@0
/SYS/RCSA/PCIE9   PCIE  network-pciex8086,10fb            X1109a-z/1109a-z --         --
                        /pci@380/pci@1/pci@0/pci@a/network@0
/SYS/RCSA/PCIE9   PCIE  network-pciex8086,10fb            X1109a-z/1109a-z --         --
                        /pci@380/pci@1/pci@0/pci@a/network@0,1
/SYS/RCSA/PCIE11  PCIE  pciex15b3,1003                               --         --
                        /pci@3c0/pci@1/pci@0/pci@e/pciex15b3,1003@0

============================ Environmental Status ============================
Fan sensors:
All fan sensors are OK.

Temperature sensors:
All temperature sensors are OK.

Current sensors:
All current sensors are OK.

Voltage sensors:
All voltage sensors are OK.

============================ FRU Status ============================
All FRUs are enabled.
oracle@enksc1client01:/export/home/oracle:dbm011
$ ls
esp                           local.login                   oradiag_oracle                set_cluster_interconnect.wk1
local.cshrc                   local.profile                 set_cluster_interconnect.lst

}}}
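The long CPU listing above is easy to summarize programmatically. A minimal Python sketch, assuming the field layout of the sample output (`id  speed MHz  brand  status`); the `summarize_cpus` helper is hypothetical:

```python
# Summarize a prtdiag-style CPU listing; the field layout
# ("id  speed MHz  brand  status") is assumed from the sample output above.
from collections import Counter

def summarize_cpus(lines):
    """Count CPUs grouped by (speed, brand, status)."""
    summary = Counter()
    for line in lines:
        parts = line.split()
        # Expect: [id, speed, "MHz", brand, status]
        if len(parts) >= 5 and parts[2] == "MHz":
            speed, brand, status = parts[1], parts[3], parts[4]
            summary[(f"{speed} MHz", brand, status)] += 1
    return summary

sample = [
    "8      3600 MHz  SPARC-T5               on-line",
    "9      3600 MHz  SPARC-T5               on-line",
]
print(summarize_cpus(sample))  # Counter({('3600 MHz', 'SPARC-T5', 'on-line'): 2})
```

On the full output above this would report 376 on-line SPARC-T5 strands at 3600 MHz in a single bucket instead of 376 lines.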
! Solaris Performance Metrics Disk Utilisation by Process 
http://www.brendangregg.com/Solaris/paper_diskubyp1.pdf
{{{

zoneadm list -civ | grep er2zgrc319v
zoneadm list -civ | grep er2zgrc320v
zoneadm list -civ | grep er2zgrc321v
zoneadm list -civ | grep er2zgrc322v




# 1 - To check the current environment properties:

    svccfg -s system/identity:node listprop config

    root@er2zgrc321v:~# svccfg -s system/identity:node listprop config
    config                       application
    config/enable_mapping       boolean     true
    config/ignore_dhcp_hostname boolean     false
    config/loopback             astring
    config/nodename             astring     er2zgrc321v


# 2 - Set the new hostname:

    from: er2zgrc321v-i
    to: er2zgrc421v

    svccfg -s system/identity:node setprop config/nodename="er2zgrc421v"
    svccfg -s system/identity:node setprop config/loopback="er2zgrc421v"

    root@er2zgrc321v:~# svccfg -s system/identity:node listprop config
    config                       application
    config/enable_mapping       boolean     true
    config/ignore_dhcp_hostname boolean     false
    config/nodename             astring     er2zgrc421v
    config/loopback             astring     er2zgrc421v
    root@er2zgrc321v:~#



# 3 - Refresh the properties:

svccfg -s system/identity:node refresh

# 4 - Restart the service:

svcadm restart system/identity:node

# 5 - Verify that the changes took place:

svccfg -s system/identity:node listprop config





zoneadm -z er2zgrc321v-i reboot



root@er2s1app01:~# zoneadm list -civ | grep er2zgrc319v
root@er2s1app01:~# zoneadm list -civ | grep er2zgrc320v
root@er2s1app01:~# zoneadm list -civ | grep er2zgrc321v
  35 er2zgrc321v-i    running     /zones/er2zgrc321v           solaris    excl
root@er2s1app01:~# zoneadm list -civ | grep er2zgrc322v


root@er2s1app01:~# zlogin er2zgrc321v-i
[Connected to zone 'er2zgrc321v-i' pts/5]
Last login: Thu Feb 23 22:17:15 2017 from er2s1vm02.erp.h
Oracle Corporation      SunOS 5.11      11.3    August 2016
You have new mail.
root@er2zgrc421v:~#



}}}
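The `listprop config` output shown above is columnar (`property  type  value`), so it can be scripted against when checking many zones. A minimal Python sketch; the `get_prop` helper is hypothetical, and the sample text is taken from the transcript above:

```python
# Extract a property value from `svccfg ... listprop config` output,
# which is columnar: "property  type  value".
def get_prop(listprop_output, prop):
    """Return the value column for `prop`, or None if not present."""
    for line in listprop_output.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[0] == prop:
            return parts[2]
    return None

out = """config                       application
config/enable_mapping       boolean     true
config/nodename             astring     er2zgrc421v
config/loopback             astring     er2zgrc421v"""

print(get_prop(out, "config/nodename"))  # er2zgrc421v
```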
{{{

Edit /etc/resolv.conf and /etc/nsswitch.conf, then run this script to import them into SMF:

/SAP_media/enkitec/scripts/nscfg.sh


# Configure SMF (directly)

#svccfg -s network/dns/client listprop config
#svccfg -s network/dns/client setprop config/nameserver = net_address: "(99.999.10.53 99.999.200.53)"
#svccfg -s network/dns/client setprop config/domain = astring: erp.example.com
#svccfg -s network/dns/client setprop config/search = astring: '("erp.example.com")'
#svccfg -s name-service/switch setprop config/ipnodes = astring: '("files dns")'
#svccfg -s name-service/switch setprop config/host = astring: '("files dns")'
#svccfg -s network/dns/client listprop config
#svccfg -s name-service/switch listprop config
#svcadm enable dns/client
#svcadm refresh name-service/switch

# Or modify your /etc/resolv.conf and /etc/nsswitch.conf, then import them with nscfg.

nscfg import -f svc:/system/name-service/switch:default
nscfg import -f name-service/switch:default
/usr/sbin/nscfg import -f dns/client
nscfg import -f dns/client:default

svcadm enable dns/client
svcadm refresh name-service/switch
svcadm refresh dns/client


}}}


! nsswitch 
{{{
In SAP environments the hosts lookup order needs to be:

root@hostname:~# cat /etc/nsswitch.conf | grep hosts
hosts:  cluster files dns

}}}
! DNS
{{{

root@er1p2vm03:~# cat /etc/resolv.conf

#
# _AUTOGENERATED_FROM_SMF_V1_
#
# WARNING: THIS FILE GENERATED FROM SMF DATA.
#   DO NOT EDIT THIS FILE.  EDITS WILL BE LOST.
# See resolv.conf(4) for details.

domain  erp.example.com
search  erp.example.com
options timeout:1
nameserver      99.999.10.53
nameserver      99.999.200.53


svccfg -s network/dns/client delprop config/domain
svcadm refresh dns/client
root@er1p2vm03:~# cat /etc/resolv.conf

#
# _AUTOGENERATED_FROM_SMF_V1_
#
# WARNING: THIS FILE GENERATED FROM SMF DATA.
#   DO NOT EDIT THIS FILE.  EDITS WILL BE LOST.
# See resolv.conf(4) for details.

search  erp.example.com
options timeout:1
nameserver      99.999.10.53
nameserver      99.999.200.53

}}}
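Since /etc/resolv.conf is generated from SMF data (as the warning header above says), it is useful to parse the rendered file when verifying that `svcadm refresh dns/client` produced the expected result. A minimal Python sketch; the `parse_resolv` helper is hypothetical, and the sample mirrors the file shown above:

```python
# Parse an SMF-rendered resolv.conf like the one shown above.
def parse_resolv(text):
    """Return a dict of keywords; 'nameserver' accumulates into a list."""
    conf = {"nameserver": []}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and the SMF warning header
        key, _, value = line.partition(" ")
        if key == "nameserver":
            conf["nameserver"].append(value.strip())
        else:
            conf[key] = value.strip()
    return conf

sample = """\
# WARNING: THIS FILE GENERATED FROM SMF DATA.
search  erp.example.com
options timeout:1
nameserver      99.999.10.53
nameserver      99.999.200.53
"""
print(parse_resolv(sample)["nameserver"])  # ['99.999.10.53', '99.999.200.53']
```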

!! references 
<<<
11.2 Grid Install Fails with SEVERE: [FATAL] [INS-13013], and PRVF-5640 or a Warning in "Task resolv.conf Integrity" (Doc ID 1271996.1)
https://blogs.oracle.com/gurubalan/entry/dns_client_configuration_guide_for
svccfg man page http://docs.oracle.com/cd/E19253-01/816-5166/6mbb1kqjj/index.html
DNS client configuration steps in Oracle Solaris 11 https://blogs.oracle.com/gurubalan/entry/dns_client_configuration_guide_for
https://newbiedba.wordpress.com/2012/12/05/solaris-11-how-to-configure-resolv-conf-and-nsswitch-conf/
https://www.itfromallangles.com/2012/05/solaris-11-dns-client-configuration-using-svccfg/
How to set dns-server and search domain in solaris 5.11 http://www.rocworks.at/wordpress/?p=284
https://blogs.oracle.com/SolarisSMF/entry/changes_to_svccfg_import_and

<<<


! NTP
{{{

root@er1p1vm03:~# cat /etc/inet/ntp.conf
server 99.999.10.53
server 99.999.200.53
slewalways yes
disable pll


echo "slewalways yes" >> /etc/inet/ntp.conf
echo "disable pll" >> /etc/inet/ntp.conf

svccfg -s svc:/network/ntp:default setprop config/slew_always = true
svcadm refresh ntp
svcadm restart ntp
svcprop -p config/slew_always svc:/network/ntp:default
}}}

!! references
<<<
Oracle RAC Install: Runcluvfy.sh Fails With PRVF-5436 When Using NTP.CONF On Solaris 11 (Doc ID 1511006.1)
11.2.0.1/11.2.0.2 to 11.2.0.3 Grid Infrastructure and Database Upgrade on Exadata Database Machine (Doc ID 1373255.1) 
CVU may complain about missing 'slewalways yes' and 'disable pll'. If this is the case, the message can be ignored: Solaris 11 Express has an SMF property for configuring NTP slew settings; see bug 13612271.
CML1069-DellStorageCenterOracleRAC-SolarisBPs.pdf
Managing Network Time Protocol (Tasks) https://docs.oracle.com/cd/E23824_01/html/821-1454/time-20.html
https://rageek.wordpress.com/2012/04/10/oracle-rac-and-ntpd-conf-configuration-on-solaris-11/
How to Deploy Oracle RAC on Oracle Solaris 11 Zone Clusters http://www.oracle.com/technetwork/articles/servers-storage-admin/deployrac-onsolaris11-1721976.html

<<<



Monitoring Swap Resources https://docs.oracle.com/cd/E23824_01/html/821-1459/fsswap-52195.html
Playing with Swap Monitoring and Increasing Swap Space Using ZFS Volumes http://www.oracle.com/technetwork/articles/servers-storage-admin/monitor-swap-solaris-zfs-2216650.html
Video Tutorial: Installing Solaris 11 in VirtualBox
http://blogs.oracle.com/jimlaurent/2010/11/video_tutorial_installing_solaris_11_in_virtualbox.html

http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html
''what's new'' http://www.oracle.com/technetwork/server-storage/solaris11/documentation/solaris11-whatsnew-201111-392603.pdf
''Taking Your First Steps with Oracle Solaris 11'' http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-112-s11-first-steps-524819.html
http://www.oracle.com/technetwork/server-storage/solaris11/documentation/index.html
http://en.wikipedia.org/wiki/Solaris_(operating_system)
http://www.sun.drydog.com/faq/s86faq.html#s4.18
http://www.oracle-base.com/articles/11g/OracleDB11gR2InstallationOnSolaris10.php
http://137.254.16.27/jimlaurent/entry/video_tutorial_installing_solaris_11


{{{
The isainfo command can be used to determine whether a Solaris system is running in 32-bit or 64-bit mode.

Run the command

isainfo -v

If the system is running in 32-bit mode, you will see the following output:

32-bit sparc applications

On a 64-bit Solaris system, you will see:

64-bit sparcv9 applications
32-bit sparc applications



bash-3.00# isainfo
amd64 i386
bash-3.00# isainfo -kv
64-bit amd64 kernel modules
bash-3.00# isainfo -nv
64-bit amd64 applications
        cx16 mon sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
bash-3.00# isainfo -b
64





/usr/bin/isainfo -kv

If your OS is 64-bit, you will see output like:

64-bit sparcv9 kernel modules

If your OS is 32-bit, you will get this output:

32-bit sparc kernel modules

}}}
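The same check can be scripted, e.g. as part of a pre-install validation. A minimal Python sketch; the `supports_64bit` helper is hypothetical and simply looks for a `64-bit ...` line in `isainfo -v` style output, as shown in the samples above:

```python
# Detect 64-bit support from `isainfo -v` style output (see samples above).
def supports_64bit(isainfo_v_output):
    """True if any line reports a 64-bit application environment."""
    return any(line.strip().startswith("64-bit")
               for line in isainfo_v_output.splitlines())

print(supports_64bit("64-bit sparcv9 applications\n32-bit sparc applications"))  # True
print(supports_64bit("32-bit sparc applications"))  # False
```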
http://blogs.oracle.com/pomah/entry/configuration_example_of_oracle_asm1
http://gdesaboyina.wordpress.com/2009/04/01/oracle-asm-on-solaris-containerslocalzonesnon-global-zones/
http://blogs.oracle.com/pomah/entry/configuring_oracle_asm_in_solaris
http://blogs.oracle.com/pomah/entry/configuration_example_of_oracle_asm
http://askdba.org/weblog/2008/07/oracle-11g-installation-on-solaris-10/
http://blog.csdn.net/wenchenzhao113/article/details/4383886
http://www.oracle.com/technetwork/articles/systems-hardware-architecture/deploying-rac-in-containers-168438.pdf    <-- GOOD STUFF
http://wikis.sun.com/display/BluePrints/Deploying+Oracle+Real+Application+Clusters+(RAC)+on+Solaris+Zone+Clusters
http://goo.gl/GZtcV  <-- racsig vmware asm on solaris 

also look at the [[Veritas Oracle Doc]]

https://www.safaribooksonline.com/library/view/oracle-solaris-11/9781618660831/

<<showtoc>>


! Installing Oracle Solaris 11
<<<
user - jack:jack
root - root:solaris
<<<
!! Two ways of installing 
* using Live CD (only x86) or Text installer 
!! install log location 
{{{
oracle@enksc1db0201:/export/home/oracle:dbm012
$ less /var/sadm/system/logs/install_log
}}}
!! system messages 
{{{
less /var/sadm/system/logs/messages 
}}}

!! OBP (openboot prom)
openboot prom  <- solaris boot loader used on SPARC 
grub <- boot loader used on x86
* to access the openboot prom
{{{
eeprom
monitor
banner
}}}


! Updating and Managing Packages (IPS)
* IPS replaced SVR4 found in earlier releases
* allows you to list, search, install, update, remove packages

!! IPS admin
** manage all software packages
** manage software publishers 
** manage repositories
** update an image to a new OS release
*** can also be used to create and manage images; an image is basically an installation baseline that you can change with IPS, snapshot, back up, and register as a bootable environment. Changes don't have to stick: you can revert to a previous image, so you can test new OS packages without damaging your system.
** create and manage boot environments
*** these images are the boot environments; you can keep a baseline image, or several of them, and boot from any of them

!! IPS terms
** manifest - describes an IPS package 
** repository - internet or network location where packages are stored; the location is specified by a URI (uniform resource identifier)
** image - a location where IPS packages can be installed 
** catalog - lists all packages in a given repository 
** package archive - a file that contains packages and their publisher info 
** mirror - a repo that contains only package content 
** boot environment (BE) - bootable instance of an image (OS)

!! CLI - IPS 

{{{
# publisher stuff
pkg publisher    <- list publisher
pkg set-publisher -g http://pkg.openindiana.org/sfe sfe     <- add publisher

# search / install / update / remove
pkg search <package>
pkg install <package>
pkg update
pkg uninstall <package>

# troubleshooting 
pkg info
pkg contents
pkg history
}}}

!! beadm (manage boot environments)
{{{
$ beadm list
BE               Flags Mountpoint Space   Policy Created
--               ----- ---------- -----   ------ -------
SCMU_2016.07     NR    /          13.65G  static 2016-08-30 15:29
solaris          -     -          115.86M static 2016-08-29 16:55
solaris-backup-1 -     -          103.84M static 2016-08-29 23:02
solaris-bkup     -     -          103.74M static 2016-08-29 22:42
solaris-idr      -     -          299.37M static 2016-08-30 03:57

beadm create test
beadm destroy test 
}}}



! Administering Services (SMF - service management facility) - svcs and svcadm
* SMF part of FMA (fault management architecture)
* comes in both GUI (SMF services) and CLI versions 

!! SMF standard naming convention - FMRI (fault management resource identifier)
* scheme - type of service
* location - the system the service is running on 
* function category - the service function
** applications
** network
** device
** milestone (run level)
** system
* description - service name
* instance - which instance (if many instances are running)

!! example name convention 
{{{
scheme://location/function/description:instance
Example: svc://localhost/network/nfs/server:default
}}}
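As a sketch, the parts of an FMRI can be pulled out with plain shell parameter expansion (the FMRI below is the example above):

```shell
#!/bin/sh
# Split an FMRI into scheme / location / service / instance.
fmri='svc://localhost/network/nfs/server:default'

scheme=${fmri%%://*}        # svc
rest=${fmri#*://}           # localhost/network/nfs/server:default
location=${rest%%/*}        # localhost
path=${rest#*/}             # network/nfs/server:default
service=${path%:*}          # network/nfs/server
instance=${path##*:}        # default

echo "$scheme $location $service $instance"
```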

!! service states 
* online
* offline
* disabled
* maintenance
* degraded
* legacy_run

!! CLI - svcs, svcadm, svccfg 
* svcs - list the services and properties
* svcadm - administration of services
* svccfg - create/define your own service (defined in a service manifest file; best to start from a copy of an existing manifest)

!!! svcs 
{{{
svcs -a 

# show all services that ExaWatcher depends upon in order to run
$ svcs -d ExaWatcher
STATE          STIME    FMRI
online         Jan_23   svc:/milestone/network:default
online         Jan_23   svc:/system/filesystem/local:default
online         Jan_23   svc:/milestone/multi-user:default
oracle@enksc1db0201:/export/home/oracle:dbm012

# show all services that depend on ExaWatcher itself for them to run
$ svcs -D ExaWatcher
STATE          STIME    FMRI
oracle@enksc1db0201:/export/home/oracle:dbm012


$ svcs -d smtp
STATE          STIME    FMRI
online         Jan_23   svc:/system/identity:domain
online         Jan_23   svc:/network/service:default
online         Jan_23   svc:/milestone/name-services:default
online         Jan_23   svc:/system/filesystem/local:default
online         Jan_23   svc:/system/filesystem/autofs:default
online         Jan_23   svc:/system/system-log:default
oracle@enksc1db0201:/var/svc/log:dbm012
$
oracle@enksc1db0201:/var/svc/log:dbm012
$ svcs -D smtp
STATE          STIME    FMRI
oracle@enksc1db0201:/var/svc/log:dbm012
$
oracle@enksc1db0201:/var/svc/log:dbm012

# verbose on service
$ svcs -xv smtp
svc:/network/smtp:sendmail (sendmail SMTP mail transfer agent)
 State: online since Mon Jan 23 17:12:29 2017
   See: man -M /usr/share/man -s 1M sendmail
   See: /var/svc/log/network-smtp:sendmail.log
Impact: None.


}}}

!!! svcadm 
{{{
svcadm -h

# boot and shutdown system
svcadm milestone svc:/milestone/single-user:default
svcadm milestone svc:/milestone/all
svcadm milestone svc:/milestone/none
svcadm milestone help
}}}

!!! svccfg
{{{
# create a new service
svccfg -h 
svccfg validate NewService.xml
svccfg import NewService.xml
svcadm enable NewService
}}}
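For reference, a minimal service manifest looks roughly like this. The service name and exec path are made-up placeholders; real manifests live under /lib/svc/manifest and are the best starting point for your own.

```xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!-- minimal SMF manifest sketch; NewService and /opt/newservice are
     placeholders, not a real service -->
<service_bundle type='manifest' name='NewService'>
  <service name='site/NewService' type='service' version='1'>
    <create_default_instance enabled='false' />
    <exec_method type='method' name='start'
                 exec='/opt/newservice/bin/start.sh' timeout_seconds='60' />
    <exec_method type='method' name='stop'
                 exec=':kill' timeout_seconds='60' />
    <stability value='Unstable' />
  </service>
</service_bundle>
```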

!! services log location 
* each service has its own log
{{{
system-zones-monitoring:default.log
system-zones:default.log
oracle@enksc1db0201:/var/svc/log:dbm012
$ less system-pkgserv:default.log
oracle@enksc1db0201:/var/svc/log:dbm012
$ pwd
/var/svc/log

}}}



! Administering Data Storage (ZFS)

!! ZFS does the following
* storage
* data integrity 
* encryption 
* backup and restore of files
* creation and management of containers (zones)

!! ZFS features
* enables addressing of multiple disk storage devices as a large contiguous block 
* 128-bit addressing (effectively no file or filesystem size restrictions)
* 256-bit checksums on all disk operations 
* supports RAID-Z parity, striping, and mirroring schemes
* automated detection and repair of corrupt data
* encryption to protect sensitive data
* data compression to save space 
* user storage quotas
* sharing data with other ZFS pools 
* snapshot and recovery

!! ZFS terms
* Filesystem
* Pool - one or more disk devices or partitions
* Clone - an exact copy of a ZFS filesystem
* Snapshot - a copy of the state of the filesystem
* Checksum - check integrity
* Quota - limit on the storage amount for a user

!! ZFS storage pools
* created and configured using the /usr/sbin/zpool command
* the rpool is the default ZFS pool 

!!! zpool commands 
[img(50%,50%)[ http://i.imgur.com/2zoGGwY.png ]]
!!! CLI - create a pool 
{{{
mkdir /zfstest
cd /zfstest
mkfile -n 100m testdisk1
mkfile -n 100m testdisk2
mkfile -n 100m testdisk3
mkfile -n 100m testdisk4

zpool create testpool /zfstest/testdisk1 /zfstest/testdisk2 /zfstest/testdisk3

root@enksc1db0201:/zfstest# zpool list
NAME      SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool     416G   172G  244G  41%  1.00x  ONLINE  -
testpool  285M   164K  285M   0%  1.00x  ONLINE  -

root@enksc1db0201:/zfstest# zpool status
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
  scan: none requested
config:

        NAME                         STATE     READ WRITE CKSUM
        rpool                        ONLINE       0     0     0
          mirror-0                   ONLINE       0     0     0
            c0t5000CCA01D8F2528d0s0  ONLINE       0     0     0
            c0t5000CCA01D8FC350d0s0  ONLINE       0     0     0

errors: No known data errors

  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME                  STATE     READ WRITE CKSUM
        testpool              ONLINE       0     0     0
          /zfstest/testdisk1  ONLINE       0     0     0
          /zfstest/testdisk2  ONLINE       0     0     0
          /zfstest/testdisk3  ONLINE       0     0     0

errors: No known data errors



root@enksc1db0201:/zfstest# df -h
Filesystem             Size   Used  Available Capacity  Mounted on
testpool               253M    31K       253M     1%    /testpool




zpool destroy testpool
}}}


!! ZFS file systems
https://docs.oracle.com/cd/E23824_01/html/821-1448/gaynd.html

!!! ZFS commands 
[img(50%,50%)[ http://i.imgur.com/J4y91jv.png ]]

!!! CLI - create a filesystem 
{{{
zfs create testpool/data
}}}

!!! CLI - list filesystems
Managing Your ZFS Root Pool https://docs.oracle.com/cd/E23824_01/html/821-1448/gjtuk.html
Querying ZFS File System Information https://docs.oracle.com/cd/E23824_01/html/821-1448/gazsu.html
{{{
zfs list

root@enksc1db0201:/zfstest# df -h
Filesystem             Size   Used  Available Capacity  Mounted on
testpool               253M    32K       253M     1%    /testpool
testpool/data          253M    31K       253M     1%    /testpool/data
}}}

!!! mount/unmount
{{{
zfs unmount testpool/data
zfs mount testpool/data

# to mount all
zfs mount -a
}}}


!! ZFS snapshots and clones
!!! snapshot 
* a snapshot is a read-only copy of the state of a ZFS filesystem
* takes up almost no disk space
* keeps track of only changes to the filesystem 
!!! clone 
* a clone is a writeable copy of a snapshot 
* used to turn a snapshot into a complete filesystem 
* must reside in the same pool as the snapshot it is created from

!!! administering ZFS snapshot and clone 
* Time Slider is a GUI tool to manage snapshots
* you can also use zfs commands 

!!! create a snapshot 
{{{
zfs snapshot <filesystem>@<snapshotname>
zfs snapshot testpool@friday

# to rollback to a given snapshot
# you would usually roll back to the most recent snapshot;
# to roll back to an older one you must first destroy the more
# recent snapshots (zfs rollback -r does this for you)
zfs rollback testpool@friday
}}}

!!!! list/get all snapshots
{{{
# simpler: zfs list -t snapshot
root@enksc1db0201:~# zfs get all | grep -i "type                             snapshot"
rpool/ROOT/SCMU_2016.07@install                  type                             snapshot                                         -
rpool/ROOT/SCMU_2016.07@snapshot                 type                             snapshot                                         -
rpool/ROOT/SCMU_2016.07@2016-08-30-04:02:48      type                             snapshot                                         -
rpool/ROOT/SCMU_2016.07@2016-08-30-08:57:45      type                             snapshot                                         -
rpool/ROOT/SCMU_2016.07@2016-08-30-20:29:07      type                             snapshot                                         -
rpool/ROOT/SCMU_2016.07/var@install              type                             snapshot                                         -
rpool/ROOT/SCMU_2016.07/var@snapshot             type                             snapshot                                         -
rpool/ROOT/SCMU_2016.07/var@2016-08-30-04:02:48  type                             snapshot                                         -
rpool/ROOT/SCMU_2016.07/var@2016-08-30-08:57:45  type                             snapshot                                         -
rpool/ROOT/SCMU_2016.07/var@2016-08-30-20:29:07  type                             snapshot                                         -
testpool@friday                                  type                             snapshot                                         -

}}}

!!! create a clone
{{{
# zfs clone <snapshot> <target-filesystem>   (target must be in the same pool)
zfs clone testpool@friday testpool/friday_clone
}}}



!! troubleshooting ZFS

!!! get history of changes
{{{
zpool history
}}}

!!! get info on pool and filesystem 
{{{
zfs get all
zfs list
zpool status
}}}


! Administering Oracle Solaris Zones

!! Zone configuration

!! Zone resource utilization 

!! Administering zones 

!! Zone and resource issues 



! Administering a Physical Network


! Administering User Accounts


! System and File Access


! System Processes and Tasks










''cpu count''
{{{
http://www.solarisinternals.com/wiki/index.php/CPU/Processor  <-- good stuff reference
http://blogs.oracle.com/sistare/entry/cpu_to_core_mapping  <-- good script
mpstat |tail +2 |wc -l
# psrinfo -v
/usr/sbin/psrinfo 
/usr/platform/sun4u/sbin/prtdiag 
uname -p

prtdiag
prtconf
swap -l
top

prtconf | grep "Memory"

check Total physical memory:

# prtdiag -v | grep Memory

# prtconf | grep Memory

---

check Free physical Memory:

# top (if available)

# sar -r 5 10
Free Memory = freemem * 8 (pagesize = 8k)

# vmstat 5 10
Free Memory = free

---

For swap:

# swap -s
# swap -l
}}}
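For the vmstat case, a sketch of pulling the free column out and averaging it. The here-doc sample below stands in for a real `vmstat 5 10` run; on Solaris the free column is reported in KB and the numbers here are invented for illustration.

```shell
#!/bin/sh
# Average the "free" column (field 5) of vmstat output,
# skipping the two header lines.
sample=' kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr s0 s1 s2 s3   in   sy   cs us sy id
 0 0 0 1048576 524288 0  10  0  0  0  0  0  0  0  0  0  400  300  200  5  2 93
 0 0 0 1048576 500000 0  12  0  0  0  0  0  0  0  0  0  410  310  210  6  2 92'

avg_free=$(printf '%s\n' "$sample" | awk 'NR > 2 { sum += $5; n++ } END { print int(sum / n) }')
echo "average free: ${avg_free} KB"
```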


''check memory''
http://oraclepoint.com/oralife/2011/02/09/different-ways-to-check-memory-usage-on-solaris-server/
{{{
Unix Commands
1. echo ::memstat | mdb -k
2. prstat -t
3. ps -efo pmem,uid,pid,ppid,pcpu,comm | sort -r
4. /usr/proc/bin/pmap -x <process-id>

Scripts & Tools
1. NMUPM utility (Oracle Support)  How to Check the Host Memory Usage on Solaris via NMUPM Utility [ID 741004.1]
}}}

{{{
nmupm_mem.sh :

#!/bin/ksh 
PAGESZ="/usr/bin/pagesize" 
BC="/bin/bc" 

SCALE=2 
WAIT=300 
MAXCOUNT=3 


NMUPM="$ORACLE_HOME/bin/nmupm osLoad" 


echo "Calculates average memory (interval $WAIT (s)) usage on Solaris using nmupm" 

PAGESIZE=`$PAGESZ` 
result1=`$NMUPM | awk -F"|" '{print $14 }'` 
REALMEM=`$NMUPM | awk -F"|" '{print $13 }'` 
#echo $result1 

X=0 
while [ $X -le $MAXCOUNT ] 
do 

sleep $WAIT 

result2=`$NMUPM | awk -F"|" '{print $14 }'` 
#echo $result2 
DIFF="($result2 - $result1) * $PAGESIZE / 1024 / $WAIT" 
RESULT=$($BC << EOF
scale=$SCALE 
(${DIFF}) 
EOF
)

MEMREL="$RESULT / $REALMEM * 100" 
MEMPCT=$($BC << EOF
scale=$SCALE 
(${MEMREL}) 
EOF
)

#echo $result1 
echo "Memory $REALMEM [kB] Freemem $RESULT [kB] %Free $MEMPCT" 
result1=$result2 

X=$((X+1)) 
done
}}}



<<<
how to login on pdom  <- ilom (used to connect, restart, get info)
how to login on ldom  <- global/non-global
how to login on zones <- zlogin 
what is a solaris cluster <- clustered filesystem (tied with zones availability so use clzc)
how rac is configured <- zone level or ldom level 
<<<



''Nice paper on hardware virtualization that also applies to zones'' http://neerajbhatia.wordpress.com/2011/10/07/capacity-planning-and-performance-management-on-ibm-powervm-virtualized-environment/

''Consolidating Applications with Oracle Solaris Containers'' http://www.oracle.com/us/products/servers-storage/solaris/consolid-solaris-containers-wp-075578.pdf
http://www.usenix.org/events/vm04/wips/tucker.pdf

http://61.153.44.88/opensolaris/solaris-containers-resource-management-and-solaris-zones-developer-guide/html/p21.html
http://61.153.44.88/opensolaris/solaris-containers-resource-management-and-solaris-zones-developer-guide/html/p2.html#concepts-2

''search for "solaris poolstat output" and you'll find lot's of resources regarding containers''
Solaris Containers — What They Are and How to Use Them http://www.google.com.ph/url?sa=t&source=web&cd=52&ved=0CB4QFjABODI&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.150.5215%26rep%3Drep1%26type%3Dpdf&rct=j&q=solaris%20poolstat%20output&ei=8fecTqyzIJDJsQKPyKiTCg&usg=AFQjCNGserXvEgoNyYzJaXHtqfqw78dSCA&cad=rja

''System Administration Guide: Virtualization Using the Solaris Operating System'' http://dc401.4shared.com/doc/FUgUu5Vu/preview.html
''BEST PRACTICES FOR RUNNING ORACLE DATABASES IN  SOLARIS™ CONTAINERS'' http://www.filibeto.org/sun/lib/blueprints/820-7195.pdf
april 2010 ''Best Practices for Running Oracle Databases in Oracle Solaris Containers'' http://developers.sun.com/solaris/docs/oracle_containers.pdf
''The Sun BluePrints™ Guide to Solaris™ Containers'' http://61.153.44.88/server-storage/820-0001.pdf
''poolstat, the counterpart of lparstat on aix'' http://download.oracle.com/docs/cd/E19963-01/html/821-1460/rmpool-107.html, http://docs.huihoo.com/opensolaris/solaris-containers-resource-management-and-solaris-zones/html/p36.html, http://dlc.sun.com/osol/docs/content/SYSADRM/rmpool-107.html, http://download.oracle.com/docs/cd/E19455-01/817-1592/rmpool.task-105/index.html <-- this is the output 
{{{
machine% poolstat
                              pset
       id pool           size used load
        0 pool_default      4  3.6  6.2
        1 pool_sales        4  3.3  8.4
}}}
''System Administration Guide: Solaris Containers, Resource Management, and Zones'' http://www.filibeto.org/~aduritz/truetrue/solaris10/sys-admin-rm.pdf

''How the SPARC T4 Processor Optimizes Throughput Capacity: A Case Study'' http://www.oracle.com/technetwork/server-storage/sun-sparc-enterprise/documentation/t-series-latency-1579242.pdf

''CMT performance''
Important Considerations for Operating Oracle RAC on T-Series Servers [ID 1181315.1]
Migration from fast single threaded CPU machine to CMT UltraSPARC T1 and T2 results in increased CPU reporting and diminished performance [ID 781763.1]







On Solaris 8, Persistent Write Contention for File System Files May Result in Degraded I/O Performance [ID 1019557.1]
Database Responding Very Slowly - Aiowait Timed Out Messages [ID 236322.1]
How to Use the Solaris Truss Command to Trace and Understand System Call Flow and Operation [ID 1010771.1]
Database Hangs With Aiowait Time Out Warning if Async IO Is True [ID 163530.1]
Warning: Aiowait Timed Out 1 Times Database Not Responding Cannot kill processes [ID 743425.1]
Warning "aiowait timed out x times" in alert.log [ID 222989.1]   <-- GOOD STUFF
Database Instance Hang at Database Checkpoint With Block Change Tracking Enabled. [ID 1326886.1]
ORA-12751 cpu time or run time policy violation [ID 761298.1]
Solaris[TM] Operating System: All TNF Probes in the Kernel and Prex [ID 1017600.1]
How to Analyze High CPU Utilization In Solaris [ID 1008930.1]
How to Determine What is Consuming CPU System Time Using the lockstat Command [ID 1001812.1]
GUDS - A Script for Gathering Solaris Performance Data [ID 1285485.1]



Sun Fire[TM] Midframe/Midrange Servers: CPU/Memory Board Dynamic Reconfiguration (DR) Considerations [ID 1003332.1] <-- cfgadm

Migration from fast single threaded CPU machine to CMT UltraSPARC T1 and T2 results in increased CPU reporting
  	Doc ID: 	781763.1




<<showtoc>>

! corrupted MAC
ssh or scp connection terminates with the error "Corrupted MAC on input" (Doc ID 1389880.1)

! NFS mount hang 
System gets hung while reboot due to in progress NFS READ or WRITE operations, even though NFS server is available https://access.redhat.com/solutions/778173
RHEL mount hangs: nfs: server [...] not responding, still trying https://access.redhat.com/solutions/28211
Strange NFS problem (not responding still trying) https://community.hpe.com/t5/System-Administration/Strange-NFS-problem-not-responding-still-trying/td-p/3269194




How to Configure a Physical Interface After System Installation http://docs.oracle.com/cd/E19253-01/816-4554/fpdcn/index.html
How to Get Started Configuring Your Network in Oracle Solaris 11 http://www.oracle.com/technetwork/articles/servers-storage-dev/s11-network-config-1632927.html


http://blogs.oracle.com/observatory/entry/replacing_the_system_hdd_on
How to enable SAR (System Activity Reporter) on Solaris 10
http://muctable.org/?p=102

http://www.virtualsystemsadmin.com/?q=node/194
{{{
simply put these lines in the root crontab; running the scripts at the cron intervals collects the data into daily files.
# Collect measurements at 10-minute intervals
0,10,20,30,40,50   * * * *   /usr/lib/sa/sa1
# Create daily reports and purge old files
0 * * * *   /usr/lib/sa/sa2 -A
}}}

http://docs.oracle.com/cd/E23824_01/html/821-1451/spconcepts-60676.html
The data files are placed in the /var/adm/sa directory
http://blogs.oracle.com/jimlaurent/entry/solaris_faq_myths_and_facts


http://nixcraft.com/solaris-opensolaris/738-how-find-swap-solaris-unix.html
http://www.ehow.com/how_6080053_determine-paging-space-solaris.html
{{{
df -kh swap
swap -s
}}}

Linux Kernel: The SLAB Allocator [ID 434351.1]
TECH: Unix Virtual Memory, Paging & Swapping explained [ID 17094.1]
ADDM Reports Significant Virtual Memory Paging [ID 1322964.1]
ADDM Reports "Significant Virtual Memory Paging Was Detected On The Host Operating System" [ID 395957.1]


/usr/platform/sun4u/sbin/prtdiag -v

If you just want information on the CPU’s you can also try:

psrinfo -v

Finally, to just get your total memory size do:

prtconf | grep Memory


For Hardware Info:

/usr/platforum/$(uname -m)/sbin/prtdiag -v

For Disks:

Either /usr/sbin/format or /usr/bin/iostat -En   <-- disk info


Solaris Tips and Tricks http://sysunconfig.net/unixtips/solaris.html
VxFS Commands quick reference - http://eval.veritas.com/downloads/van/fs_quickref.pdf

http://hub.opensolaris.org/bin/view/Community+Group+zones/faq#HQ:Whatisazone3F	

http://www.usenix.org/events/vm04/wips/tucker.pdf  <-- cool stuff usenix whitepaper


-- Identify global, non-global
http://alittlestupid.com/2009/03/30/how-to-identify-a-solaris-non-global-zone/
http://www.mysysad.com/2009/01/indentify-zone-processes-via-global.html
http://unix.ittoolbox.com/groups/technical-functional/solaris-l/how-to-find-which-is-the-global-zone-for-a-particular-nonglobal-zone-3307215
http://www.unix.com/solaris/128825-how-identify-global-non-global-solaris-server.html
11g
---- 
SQL Result Cache
PL/SQL Function Cache

Compression

SecureFiles
http://www.oracle.com/us/corporate/features/sparc-supercluster-t4-4-489157.html
Data Sheet - http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-supercluster-ds-496616.pdf
FAQ - http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-supercluster-faq-496617.pdf


m5000 server
http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/m-series/m5000/overview/index.html
http://www.m5000-server.com/
http://en.wikipedia.org/wiki/SPARC_Enterprise


http://www.infoq.com/news/2014/02/sparkr-announcement
https://amplab.cs.berkeley.edu/2014/01/26/large-scale-data-analysis-made-easier-with-sparkr/
http://wikis.sun.com/display/SAPonSun/SAP+on+Sun
http://wikis.sun.com/display/SAPonSun/Demystifying+Oracle+IO
http://wikis.sun.com/display/SAPonSun/Speedup+SAP+%28Performance+Tuning%29
{{{
'Demystifying Oracle I/O - clarification on synchronous, asynchronous, blocking, nonblocking, direct & direct path I/O'
'SAP on Oracle Performance in high latency Metrocluster Setup'
'Getting insights with DTrace - Part 1: Analyzing Oracle Logwriter w/ buffered vs. direct I/O'
crosslink: 'Getting insights with DTrace - Part 2: OS version checks (uname)'; since this article is not related to performance, it is posted in Danger Zone*
'Getting insights with DTrace - Part 3: Analyzing SAP Appserver I/O'
'Implementing Oracle on ZFS and ZFS Storage Appliances'
'Oracle DB and Flash Devices'
 Demystifying Oracle IO 
 Getting insights with DTrace - Part 1 
 Getting insights with DTrace - Part 3 
 Implementing Oracle on ZFS and ZFS Storage Appliances 
 Oracle DB and Flash Devices 
 SAP on Oracle Performance in high latency Metrocluster Setup 
}}}

http://wikis.sun.com/display/SAPonSun/Oracle+DB+and+Flash+Devices
http://wikis.sun.com/display/SAPonSun/SAP+on+Oracle+Performance+in+high+latency+Metrocluster+Setup
http://wikis.sun.com/display/SAPonSun/Widening+the+Storage+Bottleneck+for+an+Oracle+Database
http://wikis.sun.com/display/SAPonSun/Implementing+Oracle+on+ZFS+and+ZFS+Storage+Appliances
http://wikis.sun.com/display/OC2dot5/Installation  <-- Ops Center
http://wikis.sun.com/display/SAPonSun/Getting+insights+with+DTrace+-+Part+1
http://wikis.sun.com/display/SAPonSun/Getting+insights+with+DTrace+-+Part+2
http://wikis.sun.com/display/SAPonSun/Getting+insights+with+DTrace+-+Part+3
http://wikis.sun.com/display/SAPonSun/Running+SAP+on+OpenSolaris









http://blog.tanelpoder.com/2008/06/15/advanced-oracle-troubleshooting-guide-part-6-understanding-oracle-execution-plans-with-os_explain/
http://blog.tanelpoder.com/2009/04/24/tracing-oracle-sql-plan-execution-with-dtrace/
http://blog.tanelpoder.com/2008/10/31/advanced-oracle-troubleshooting-guide-part-9-process-stack-profiling-from-sqlplus-using-ostackprof/
http://blog.tanelpoder.com/2008/09/02/oracle-hidden-costs-revealed-part2-using-dtrace-to-find-why-writes-in-system-tablespace-are-slower-than-in-others/
http://blog.tanelpoder.com/2008/06/15/advanced-oracle-troubleshooting-guide-part-6-understanding-oracle-execution-plans-with-os_explain/

Lab128 has automated the pstack sampling, os_explain, & reporting. Good tool to know where the query was spending time http://goo.gl/fyH5x
http://www.business-intelligence-quotient.com/?p=1083
http://blogs.oracle.com/optimizer/2010/11/star_transformation.html
! 2014
How to Start a Startup (Stanford CS183B) http://startupclass.samaltman.com/
http://www.wired.com/2014/09/now-can-take-free-y-combinator-startup-course-online/
http://venturebeat.com/2014/09/25/how-the-tech-elite-teach-stanford-students-to-build-billion-dollar-companies-in-11-quotes/
http://venturebeat.com/2014/10/09/how-peter-thiel-teaches-stanford-students-to-create-billion-dollar-monopolies-in-3-quotes/
http://www.quora.com/How-to-Start-a-Startup-Stanford-CS183B
''videos'' https://www.youtube.com/channel/UCxIJaCMEptJjxmmQgGFsnCg  

! 2013
http://blog.ycombinator.com/tag/Startup%20School%202013
''videos'' http://blog.ycombinator.com/videos-from-startup-school-2013-are-now-online

! YC startup school 
http://www.startupschool.org/


Troubleshoot Grid Infrastructure Startup Issues [ID 1050908.1]
Top 5 Grid Infrastructure Startup Issues [ID 1368382.1]

{{{
To determine the status of GI, please run the following commands:

1. $GRID_HOME/bin/crsctl check crs
2. $GRID_HOME/bin/crsctl stat res -t -init
3. $GRID_HOME/bin/crsctl stat res -t
4. ps -ef | egrep 'init|d.bin'
}}}
Startup Videos
Something ventured
Startup Kids
http://www.hulu.com/20-under-20-transforming-tomorrow
http://thenextweb.com/entrepreneur/2012/12/02/how-to-hire-the-right-developer-for-your-tech-startup/
How to Collect Diagnostics for Database Hanging Issues (Doc ID 452358.1)



The normal distribution and the empirical rule
http://www.wisc-online.com/objects/ViewObject.aspx?ID=TMH2102

The area under the standard normal distribution
http://www.wisc-online.com/Objects/ViewObject.aspx?ID=TMH3302


Creating a Scatter Plot in Excel
http://www.ncsu.edu/chemistry/resource/excel/excel.html
http://www.youtube.com/watch?v=nnM-7Q6gmUA
http://www.youtube.com/watch?v=MTsRlauTtd4



''Statistics in Oracle''
http://www.java2s.com/Tutorial/Oracle/0400__Linear-Regression-Functions/Catalog0400__Linear-Regression-Functions.htm
http://www.adp-gmbh.ch/ora/sql/agg/index.html <-- aggregate functions in oracle
http://www.vlamis.com/Papers/oow2001-1.pdf
http://www.dbasupport.com/oracle/ora9i/functions1_1.shtml
http://download.oracle.com/docs/cd/B14117_01/server.101/b10736/analysis.htm       <-- official doc
http://download.oracle.com/docs/cd/B12037_01/server.101/b10759/functions117.htm <-- sql reference
http://oracledmt.blogspot.com/2007/02/new-oracle-statistical-functions-page.html
http://weblogs.sdn.sap.com/files/Statistical_Analysis-Oracle.ppt&pli=1
http://www.morganslibrary.com/reference/analytic_functions.html
http://ykud.com/blog/cognos/calculating-trend-lines-in-cognos-report-studio-and-oracle-sql
http://wwwmaths.anu.edu.au/~mendelso/papers/BMN31-03-09.pdf
http://www.nyoug.org/Presentations/SIG/DataWarehousing/dw_sig_nov_2002.PDF
http://www.rittmanmead.com/2004/08/27/analytic-functions-in-owb/
http://www.olsug.org/wiki/images/f/f9/Oracle_Statistical_Functions_preso_1.ppt
http://www.uga.edu/oir/reports/OracleAnaylticFunction-SAIR-2006.ppt
http://www.olsug.org/Presentations/May_2005/Workshops/Statistical_Analysis_of_Gene_Expression_Data_with_Oracle_and_R_Workshop.pdf
http://www.stat.yale.edu/~hz68/Adaptive-FLR.pdf
http://www.nocoug.org/download/2008-08/2008_08_NCOUG_11g4DW_hb.pdf



http://blogs.oracle.com/datamining/2010/08/the_meaning_of_probability.html


''Oracle Documentation'' - ''Linear Regression'' http://docs.oracle.com/cd/E11882_01/server.112/e25554/analysis.htm#BCFIIAGJ
''REGR_SLOPE'' Analytic Sales Forecast - http://dspsd.blogspot.com/2012/02/analytic-sales-forecast.html, http://www.rittmanmead.com/2012/03/statistical-analysis-in-the-database/


''Articles''
http://www.kdnuggets.com/2015/02/10-things-statistics-big-data-analysis.html






http://jonathanlewis.wordpress.com/statspack-examples/
-- from http://www.perfvision.com/statspack/statspack10.txt

{{{
Database
Cache Sizes
Load Profile
Instance Efficiency Percentages
Top 5 Timed Events
Host CPU  (CPUs: 2)
Instance CPU
Memory Statistics
Time Model System Stats
Wait Events
Background Wait Events
Wait Event Histogram
SQL ordered by CPU
SQL ordered by Elapsed
SQL ordered by Reads
SQL ordered by Executions
SQL ordered by Parse Calls
Instance Activity Stats
Instance Activity Stats
-> Statistics with absolute values (should not be diffed)
Instance Activity Stats
-> Statistics identified by '(derived)' come from sources other than SYSSTAT
OS Statistics
Tablespace IO Stats
File IO Stats
File Read Histogram Stats
Buffer Pool Statistics
Instance Recovery Stats
Buffer Pool Advisory
Buffer wait Statistics
PGA Aggr Target Stats
PGA Aggr Target Histogram
PGA Memory Advisory
Process Memory Summary Stats
Top Process Memory (by component)
Enqueue activity
Undo Segment Summary
Undo Segment Stats
Latch Activity
Latch Sleep breakdown
Latch Miss Sources
Mutex Sleep
Dictionary Cache Stats
Library Cache Activity
Rule Sets
Shared Pool Advisory
SGA Memory Summary
SGA breakdown difference
SQL Memory Statistics
init.ora Parameters

}}}
-- from http://www.perfvision.com/statspack/statspack9.txt	

{{{
Cache Sizes
Load Profile
Instance Efficiency Percentages (Target 100%)
Top 5 Timed Events
Wait Events for 
Background Wait Events for 
SQL ordered by Gets for 
SQL ordered by Reads for 
SQL ordered by Executions for 
SQL ordered by Parse Calls for 
Instance Activity Stats for 
Tablespace IO Stats for 
File IO Stats for 
Buffer Pool Statistics for 
Instance Recovery Stats for 
Buffer Pool Advisory for 
PGA Aggr Target Stats for 
PGA Aggr Target Histogram for 
PGA Memory Advisory for 
Rollback Segment Stats for 
Rollback Segment Storage for 
Latch Activity for 
Latch Activity for 
Latch Sleep breakdown for 
Latch Miss Sources for 
Dictionary Cache Stats for 
Library Cache Activity for 
Shared Pool Advisory for 
SGA Memory Summary for 
SGA breakdown difference for 
init.ora Parameters for 

}}}
Calculate IOPS in a storage array
http://www.zdnetasia.com/calculate-iops-in-a-storage-array-62061792.htm
http://www.techrepublic.com/blog/the-enterprise-cloud/calculate-iops-in-a-storage-array/

Calculate IOPS per disk 
https://communities.netapp.com/community/netapp-blogs/databases/blog/2011/08/11/formula-to-calculate-iops-per-disk
{{{
Formula: 
Estimated IOPS = 1 / ((average seek time / 1000) + (average latency / 1000))

Let's make a simple test:
 
SAS - 600GB 15K - Seagate - http://www.seagate.com/www/en-us/products/enterprise-hard-drives/cheetah-15k#tTabContentSpecifications
Estimated IOPS = 1 / ((((average read seek time + average write seek time) / 2) / 1000) + (average latency / 1000))
Estimated IOPS = 1 / ((3.65 / 1000) + (2.0 / 1000)) = 1 / 0.00565 = 176.99 ~ 175 IOPS
 
SATA - 1TB 7.2K - Seagate - http://www.seagate.com/www/en-us/products/enterprise-hard-drives/constellation-es/constellation-es-1/#tTabContentSpecifications
Estimated IOPS = 1 / ((((average read seek time + average write seek time) / 2) / 1000) + (average latency / 1000))
Estimated IOPS = 1 / ((9.00 / 1000) + (4.16 / 1000)) = 1 / (0.009 + 0.00416) = 75.99 - ~ 75 IOPS
}}}
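The per-disk formula above is easy to sanity-check in a few lines of Python (a rough sketch only — real drives vary with queue depth, caching and workload; the inputs are the averaged seek time and rotational latency quoted above):

```python
def estimated_iops(avg_seek_ms, avg_latency_ms):
    """Rough per-disk IOPS estimate: 1 / (seek time + rotational latency),
    with both inputs converted from milliseconds to seconds."""
    return 1 / ((avg_seek_ms / 1000) + (avg_latency_ms / 1000))

# SAS 15K: 3.65 ms averaged seek, 2.0 ms latency -> ~177 (the note rounds to ~175)
print(round(estimated_iops(3.65, 2.0)))   # 177
# SATA 7.2K: 9.00 ms averaged seek, 4.16 ms latency
print(round(estimated_iops(9.00, 4.16)))  # 76
```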

More on Performance Metrics: The Relationship Between IOPS and Latency
http://www.networkcomputing.com/servers-storage/more-on-performance-metrics-the-relation/240005213

IOPS calculator http://wmarow.com/storage/strcalc.html
RAID calculator http://wmarow.com/storage/raidslider.html
array estimator http://wmarow.com/storage/goals.html

STORAGE NOTES – IOPS, RAID, PERFORMANCE AND RELIABILITY http://www.virtuallyimpossible.co.uk/storage-notes-iops-raid-performance-and-reliability/



Storage array capacity: Performance vs. cost
http://www.zdnetasia.com/storage-array-capacity-performance-vs-cost-62062039.htm?scid=nl_z_tgsr
http://blogs.oracle.com/rdm/entry/capacity_sizing_for_15k_disks

See also these other sources:
    * http://blog.aarondelp.com/2009/10/its-now-all-about-iops.html
    * http://www.yellow-bricks.com/2009/12/23/iops/
    * http://www.tomshardware.com/forum/251893-32-raid-raid

High Performance Storage Systems for SQL Server 
http://www.simple-talk.com/sql/performance/high-performance-storage-systems-for-sql-server/    <-- MB/s per disk
<<<
All Aboard the IO Bus!

To illustrate IO bus saturation, let’s consider a simple example. A 1GB fiber channel is capable of handling about 90 MB/Sec of throughput. Assuming each disk it services is capable of 150 IOPS (of 8K each), that’s a total of 1.2 MB/Sec, which means that the channel is capable of handling up to 75 disks. Any more than that and we have channel saturation, meaning we need more channels, higher channel throughput capabilities, or both.

The other crucial consideration here is the type of IO we’re performing. In the above calculation of 150*8K IOPS, we assumed a random/OLTP type workload. In reporting/OLAP environments, we’ll have a lot more sequential IO consisting of, for example, large table scans during data warehouse loads. In such cases, the IO throughput requirements are a lot higher. Depending on the disk, ''the maximum MB/Sec will vary, but let’s assume 40 MB/Sec. It only takes three of those disks to produce 120 MB/Sec, leading to saturation of our 1GB fiber channel.''

In general, OLTP systems feature lots of disks to overcome latency issues, and OLAP systems feature lots of channels to handle peak throughput demands. It’s important that we consider both IOPS, to calculate the number of disks we need, and the IO type, to ensure the IO bus is capable of handling the throughput. But what about SANs?
<<<
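The bus-saturation arithmetic in the quote can be checked with a quick sketch (using the quote's round numbers: ~90 MB/s usable on the channel, 150 IOPS of 8K per OLTP disk, ~40 MB/s sequential per OLAP disk; integer KB/s avoids float rounding):

```python
# work in KB/s with integers to keep the division exact
channel_kb_s = 90 * 1000           # ~90 MB/s usable on a 1Gb fiber channel
oltp_per_disk_kb_s = 150 * 8       # 150 IOPS x 8 KB = 1,200 KB/s per disk
seq_per_disk_kb_s = 40 * 1000      # ~40 MB/s sequential per disk

# random/OLTP: the channel can service up to 75 disks before saturating
print(channel_kb_s // oltp_per_disk_kb_s)   # 75

# sequential/OLAP: only 2 disks fit; a 3rd (120 MB/s total) saturates the channel
print(channel_kb_s // seq_per_disk_kb_s)    # 2
```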

Sane SAN
http://jamesmorle.wordpress.com/2010/08/23/sanesan2010-introduction/
http://jamesmorle.wordpress.com/2010/08/23/sanesan2010-serial-to-serial-when-one-bottleneck-isnt-enough/
http://jamesmorle.wordpress.com/2010/09/06/sane-san2010-storage-arrays-ready-aim-fire/
http://jamesmorle.wordpress.com/2011/09/16/right-practice/

Queue Depth
http://storage.ittoolbox.com/groups/technical-functional/emc-l/clearing-outstanding-disk-io-1574369
http://en.wikipedia.org/wiki/IOPS
http://forums11.itrc.hp.com/service/forums/questionanswer.do?admit=109447626+1279241670018+28353475&threadId=1310049
http://www.ardentperf.com/2008/03/13/oracle-iops-and-hba-queue-depth/
http://www.ardentperf.com/2008/01/31/oracle-io-and-operating-system-caching/

Evaluating Storage Benchmarks http://www.enterprisestorageforum.com/hardware/features/article.php/3668416/Evaluating-Storage-Benchmarks.htm
Measuring Storage Performance http://www.enterprisestorageforum.com/hardware/features/article.php/3671466/Measuring-Storage-Performance

Oracle and storage IOs, explanations and experience at CERN http://cdsweb.cern.ch/record/1177416/files/CHEP2009-28-24.pdf

Bob Sneed's nice paper ''Oracle I/O: Supply and Demand'' http://vnull.pcnet.com.pl/dl/solaris/Oracle_IO_1.1.pdf , http://bobsneed.wordpress.com/2009/11/05/oracle-io-supply-and-demand/

IOPS - Frames per second 
http://www.thesanman.org/2012/03/understanding-iops.html

Converged Fabrics: Part 1 - Converged Fabrics http://youtu.be/qiU8QcAArwE
Converged Fabrics: Part 2 - Calculating IOPs http://youtu.be/VAlbVOyQ7w0 









storage index aging
http://oracle-sage.com/2014/11/03/exadata-storage-index-aging-part-1/
http://oracle-sage.com/2014/11/04/exadata-storage-index-aging-part-2a/
http://oracle-sage.com/2014/11/04/exadata-storage-index-aging-part-2b/
http://oracle-sage.com/2014/11/05/exadata-storage-index-aging-part-3-analysis-on-test-2b/




{{{
HARDWARE RAID - leverage it if it's available
SOFTWARE RAID - only if hardware RAID is not available



RAID 0
RAID 1		<-- min of 2.. +1 hot spare
RAID 1+0	<-- min of 4.. +1 hot spare
RAID 5 <-- min of 3.. +1 hot spare

Note: use it with LVM to automatically scale, just add to existing Volume Group then LVEXTEND



RAC 9i
Theoretical Maximum number of nodes = 64
Practical Maximum number of nodes = 8



RAC 10g
Theoretical Maximum number of nodes = 128
Practical Maximum number of nodes = 8



RedHat Cluster Suite
Practical Maximum number of nodes = 32



ASM's minimum ASM disk = 4
ASM's maximum ASM disk = 8
Recommended RAID configuration = 1+0



OCFS2
- can't use it on LVM
- EMC has its own LVM software.. leverage it
- if in a clustered environment, then use OCFS2 for the FLASH_RECOVERY_AREA



LVM is not cluster-aware
- if you're planning for a cluster filesystem on LVM then use GFS (global filesystem)
- if you have EMC LUNs, then don't mix them with the internal disks because they have
	different performance characteristics and LUNs sometimes disappear;
	the ideal is to create a separate Volume Group per set of disk characteristics,
	but you can't get extents from a different Volume Group
- if not in a clustered environment, use EXT3 for the FLASH_RECOVERY_AREA



ibm storage LUN limit is 375GB.. times 4 = 2.9TB
}}}
''How to Tell if the IO of the Database is Slow [ID 1275596.1]''

{{{
====================================================================================
RAID  Type of RAID        Control       Database        Redo Log        Archive Log
                            File          File            File            File
====================================================================================
0     Striping             Avoid*          OK*           Avoid*           Avoid*     
------------------------------------------------------------------------------------
1     Shadowing             OK             OK          Recommended       Recommended
------------------------------------------------------------------------------------
0+1   Striping +            OK         Recommended       Avoid            Avoid     
      Shadowing                           (1)                                                         
------------------------------------------------------------------------------------
3     Striping with         OK           Avoid           Avoid            Avoid     
      Static Parity                       (2)                                                                    
------------------------------------------------------------------------------------
5     Striping with         OK           Avoid           Avoid            Avoid     
      Rotating Parity                     (2)
------------------------------------------------------------------------------------

*   RAID 0 does not provide any protection against failures. It requires a strong backup
    strategy.
(1) RAID 0+1 is recommended for database files because this avoids hot spots and gives 
    the best possible performance during a disk failure.  The disadvantage of RAID 0+1 
    is that it is a costly configuration.
(2) Avoid when heavy write operations involve the datafile




-- RAID CONFIGURATION

I/O Tuning with Different RAID Configurations
 	Doc ID:	Note:30286.1
 	
Avoiding I/O Disk Contention
 	Doc ID:	Note:148342.1


 	
-- SOLID STATE
 	
Solid State Disks & DSS Operations
  	Doc ID: 	Note:76413.1


-- iSCSI
Using Openfiler iSCSI with an Oracle RAC database on Linux (Doc ID 371434.1)

  	
-- RAW DEVICES

Announcement of De-Support of using RAW devices in Release 12G
 	Doc ID:	NOTE:578455.1

Making the decision to use raw devices
  	Doc ID: 	Note:29676.1



-- ASYNC IO, DIRECT IO

Pros and Cons of Using Direct I/O for Databases [ID 1005087.1]
          Understanding Cyclic Caching and Page Cache on Solaris 8 and Above [ID 1003383.1]
          Oracle database restart takes longer on high end systems running Solaris[TM] 8 [ID 1003483.1]

ASM INHERENTLY PERFORMS ASYNCHRONOUS I/O REGARDLESS OF FILESYSTEMIO_OPTIONS PARAMETER
  	Doc ID: 	Note:751463.1

File System's Buffer Cache versus Direct I/O   <-- PARAMETER
  	Doc ID: 	Note:462072.1

Init.ora Parameter "FILESYSTEMIO_OPTIONS" is Incorrectly Set to "NONE" as a DEFAULT in 9.2.0 on AIX
  	Doc ID: 	Note:230238.1

RMAN Backup Controlfile Fails With RMAN-03009 ORA-01580 ORA-27044
  	Doc ID: 	Note:737877.1

How To Check if Asynchronous I/O is Working On Linux
  	Doc ID: 	Note:237299.1

DirectIO on Redhat and SuSe Linux
  	Doc ID: 	Note:297521.1

Direct I/O or Concurrent I/O on AIX 5L
  	Doc ID: 	272520.1

Async io and AdvFS - Does Oracle Support it?
  	Doc ID: 	50548.1


SOLARIS: Asynchronous I/O (AIO) on Solaris (SPARC) servers
  	Doc ID: 	48769.1

AIX How does Oracle use AIO servers and what determines how many are used? [ID 443368.1]
AIX Recommendations For using CIO/DIO for Filesystems containing Oracle Files on AIX [ID 960055.1]




-- CIO Concurrent IO
How to use Concurrent I/O on HP-UX and improve throughput on an Oracle single-instance database [ID 1231869.1]
see warnings on using CIO on ORACLE_HOMEs at [[Veritas Oracle Doc]] which should also be the same for JFS2 on AIX
Db_block_size Requirements For Direct IO / Concurrent IO [ID 418714.1]
Slow I/O On HP Unix [ID 457063.1] <-- shows mounting of cio
Question On Retail Predictive Application Server (RPAS) And Concurrent IO (Asynchronous Mode) [ID 1303046.1]
Direct I/O or Concurrent I/O on AIX 5L [ID 272520.1] <-- nice matrix on AIX JFS
Direct I/O (DIO) and Concurrent I/O (CIO) on AIX 5L [ID 257338.1]



-- QUICK IO

How to Verify Quick I/O is Working
  	Doc ID: 	135447.1


-- filesystemio_options
filesystemio_options and filesystem mounts Supported/recommended on AIX for Oracle 9i [ID 602791.1]



-- BENCHMARK

Comparing Performance Between RAW IO vs OCFS vs EXT 2/3
  	Doc ID: 	236679.1

Oracm Large Disk IO Timeout Message Explained
  	Doc ID: 	359898.1





-- WARNING

WARNING:1 Oracle process running out of OS kernel I/O resources
      Doc ID:     748607.1

10.2 Grid Agent Can Break RAID Mirroring and Cause Hard Disk To Go Offline
      Doc ID:     454647.1

Process spins and traces with "Asynch I/O kernel limits" Warnings [ID 1313555.1]
WARNING:Could not increase the asynch I/O limit to 514 for SQL direct I/O. It is set to 128
WARNING:io_submit failed due to kernel limitations MAXAIO for process=0 pending aio=0



-- BLOCKS

Extent and Block Space Calculation and Usage in Oracle Databases
  	Doc ID: 	10640.1

Note 162994.1 SCRIPT TO REPORT EXTENTS AND CONTIGUOUS FREE SPACE

Script: Computing Table Size
  	Doc ID: 	70183.1

Note: 1019709.6  SCRIPT TO REPORT TABLESPACE FREE AND FRAGMENTATION
Note: 1019585.6  SCRIPT TO CALCULATE BLOCKS NEEDED BY A TABLE
Note: 1019524.6  SCRIPT TO REPORT SPACE USED IN A TABLESPACE
Note: 1019505.6  SCRIPT TO SHOW TABLE EXTENTS & STORAGE PARAMETERS

RDBPROD: Space Management and Thresholds in Rdb
  	Doc ID: 	62688.1

RDBPROD: How to identify fragmented rows in a table
  	Doc ID: 	283203.1

RDBPROD: Tutorial on Area Page Sizing
  	Doc ID: 	62689.1


}}}
http://blog.go-faster.co.uk/2012/03/editing-hints-in-stored-outlines.html
http://strataconf.com/stratany2011/public/content/video
http://strataconf.com/stratany2012/public/schedule/proceedings
Which Doc Contains Standard Edition's Replication Features?
http://forums.oracle.com/forums/thread.jspa?messageID=4557612

Streams in Oracle SE 
http://forums.oracle.com/forums/thread.jspa?messageID=4035228

Streams between Standard and Enterprise edition is not working [ID 567872.1]
<<<
In 11g, the method to capture changes in SE is to use synchronous capture. For more information check: http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_capture.htm#CACIDGBI

Also an example on how to configure is in the 2Day+ Data Replication and Integration manual.
The example demonstrates setting up sync capture as 2way replication (bidirectional) http://download.oracle.com/docs/cd/B28359_01/server.111/b28324/tdpii_repcont.htm#BABDEBBA
<<<
''Enhanced subquery optimizations in Oracle - vldb09-423.pdf''
http://www.evernote.com/shard/s48/sh/6da5b118-5471-4dee-bb8a-bdf0c7da9893/c7327d55701ccaeff53861ef496cb79d
http://blogs.oracle.com/optimizer/2010/09/optimizer_transformations_subquery_unesting_part_2.html
{{{
=subtotal(9,yourrange)
}}}


http://www.ozgrid.com/forum/showthread.php?t=59996&page=1
http://www.pcreview.co.uk/forums/do-you-sum-only-visible-cells-you-have-filtered-list-t1015388.html
http://www.pcreview.co.uk/forums/totals-reflecting-filtered-cells-only-not-all-data-worksheet-t1038534.html
http://support.microsoft.com/kb/187667
http://office.microsoft.com/en-us/excel-help/countif-HP005209029.aspx
http://www.eggheadcafe.com/software/aspnet/29984846/how-can-i-count-the-number-of-characters-on-a-cell.aspx
http://www.ehow.com/how_5925626_count-number-characters-ms-excel.html
http://www.excelforum.com/excel-new-users/372335-formula-to-count-number-of-times-the-letter-x-appears-in-a-column.html
http://www.google.com.ph/search?sourceid=chrome&ie=UTF-8&q=ms+excel+if+or
http://www.officearticles.com/excel/if_statements_in_formulas_in_microsoft_excel.htm
http://www.bluemoosetech.com/microsoft-excel-functions.php?jid=19&title=Microsoft%20Excel%20Functions:%20IF,%20AND,%20OR
http://www.experiglot.com/2006/12/11/how-to-use-nested-if-statements-in-excel-with-and-or-not/



! Sun Sparc T3 CPUs - thread:core ratio
see discussions here https://www.evernote.com/shard/s48/sh/ddf62a51-c7d9-489b-b1f4-c14b008a1d63/68a86a16889ac103


http://www.natecarlson.com/2010/05/07/review-supermicros-sc847a-4u-chassis-with-36-drive-bays/
https://www.youtube.com/watch?v=xtOg44r6dsE


! RHEL 3 and below (double the size)
 
! RHEL 4:
  1) if RAM <= 2GB then swap = 2X RAM
  2) if RAM > 2GB then
   
   e.g. 4GB
    (2GB x 2) + 2GB
    = 6GB (3x swap partition)
   
   e.g. 8GB
    (2GB x 2) + 6GB
    = 10GB (5x swap partition)

! RHEL 5:
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Installation_Guide/s2-diskpartrecommend-x86.html
A swap partition (at least 256 MB) — swap partitions are used to support virtual memory. In other words, data is written to a swap partition when there is not enough RAM to store the data your system is processing.
If you are unsure about what size swap partition to create, make it twice the amount of RAM on your machine. It must be of type swap.
Creation of the proper amount of swap space varies depending on a number of factors, including the following (in descending order of importance):

* The applications running on the machine.
* The amount of physical RAM installed on the machine.
* The version of the OS.

Swap should equal 2x physical RAM for up to 2 GB of physical RAM, and then an additional 1x physical RAM for any amount above 2 GB, but never less than 32 MB.
So, if:
M = Amount of RAM in GB, and S = Amount of swap in GB, then

If M < 2
	S = M *2
Else
	S = M + 2

Using this formula, a system with 2 GB of physical RAM would have 4 GB of swap, while one with 3 GB of physical RAM would have 5 GB of swap. Creating a large swap space partition can be especially helpful if you plan to upgrade your RAM at a later time.
For systems with really large amounts of RAM (more than 32 GB) you can likely get away with a smaller swap partition (around 1x, or less, of physical RAM). 
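The Red Hat formula above translates directly into a couple of lines (a sketch of the RHEL 5-era guidance only — check the current install guide for your release, and note the large-RAM exception above):

```python
def swap_gb(ram_gb):
    """RHEL 5-era guidance: 2x RAM up to 2 GB of RAM, then RAM + 2 GB above that."""
    if ram_gb < 2:
        return ram_gb * 2
    return ram_gb + 2

print(swap_gb(2))  # 4 GB, as in the example above
print(swap_gb(3))  # 5 GB
```

This also reproduces the RHEL 4 examples earlier on the page: swap_gb(4) = 6 GB and swap_gb(8) = 10 GB, since (2GB x 2) + (RAM - 2GB) reduces to RAM + 2.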
 

! more recent RHEL 5 and RHEL6:

consult/read the documentation, because physical memory sizes nowadays are getting bigger and bigger..
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/index.html
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Installation_Guide/s1-diskpartitioning-x86.html
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/s2-diskpartrecommend-x86.html






! NOTE: 
1) LVM distributes its extents across disks... so don't put your SWAP on an LVM
2) SWAP must be near the center of the cylinder
http://www.centos.org/docs/5/html/5.1/Deployment_Guide/s2-swap-creating-lvm2.html
http://www.centos.org/docs/4/html/rhel-sag-en-4/s1-swap-adding.html
http://serverfault.com/questions/306419/is-the-fdisk-partition-type-important-when-using-lvm	<-- GOOD STUFF
http://www.linuxquestions.org/questions/red-hat-31/can-we-use-partition-type-83-for-creating-a-lvm-volume-762819/ <-- GOOD 
http://sourceforge.net/tracker/?func=detail&aid=2528606&group_id=115473&atid=671650 <-- GOOD STUFF
http://www.techotopia.com/index.php/Adding_and_Managing_Fedora_Swap_Space#Adding_Swap_Space_to_the_Volume_Group



http://forums.fedoraforum.org/showthread.php?t=146289
http://www.walkernews.net/2007/07/02/how-to-create-linux-lvm-in-3-minutes/
http://forums.fedoraforum.org/showthread.php?p=707874
http://www.howtoforge.com/linux_lvm
http://www.mail-archive.com/debian-bugs-dist@lists.debian.org/msg301789.html
http://forum.soft32.com/linux2/Bug-410227-mkswap-good-checking-partition-type-83-ftopict71038.html
http://forums.opensuse.org/english/get-technical-help-here/install-boot-login/390784-install-fails-format-swap-3008-a.html
http://nixforums.org/about150579-mkswap-mistake.html
http://www.redhat.com/magazine/009jul05/features/lvm2/
https://help.ubuntu.com/10.04/serverguide/C/advanced-installation.html




! Installation
<<<
1) Download the swingbench here http://www.dominicgiles.com/swingbench.html

2) Set the environment variables at /home/oracle/dba/benchmark/swingbench/swingbench.env
* JAVAHOME 
* SWINGHOME 
* ORACLE_HOME
export JAVAHOME=/u01/app/oracle/product/11.2.0.3/dbhome_1/jdk
export SWINGHOME=/home/oracle/dba/benchmark/swingbench
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1

3) Run the oewizard

for a standard testcase across platforms
- the ordercount and customercount should be 1000000
- start with #ofusers 1000, then increase to 1200
{{{
/home/oracle/dba/benchmark/swingbench/bin/oewizard

-- 1M customers
11:27:20 SYS@dw> select sum(bytes)/1024/1024 from dba_segments where owner = 'SOE';

SUM(BYTES)/1024/1024
--------------------
          724.851563
}}}
* if you are using ASM just specify +DATA
* If you don't have SYSDBA access to the machine then you can create your own user "karlarao"
then grant CONNECT,DBA,SYSDBA to that user
then edit the oewizard.xml file
{{{
<?xml version = '1.0' encoding = 'UTF-8'?>
<WizardConfig Mode="InterActive" Name="Oracle Entry Install Wizard" xmlns="http://www.dominicgiles.com/swingbench/wizard">
   <WizardSteps RunnableStep="5">
      <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.oe.Step0"/>
      <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.oe.Step1"/>
      <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.oe.Step2"/>
      <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.oe.Step3"/>
      <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.oe.Step4"/>
      <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.oe.Step5"/>
      <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.oe.Step6"/>
      <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.oe.Step7"/>
   </WizardSteps>
   <DefaultParameters>
      <Parameter Key="dbapassword" Value="karlarao"/>
      <Parameter Key="password" Value="soe"/>
      <Parameter Key="itsize" Value="374M"/>
      <Parameter Key="datatablespacesexists" Value="false"/>
      <Parameter Key="partitoningrequired" Value="true"/>
      <Parameter Key="indextablespacesexists" Value="false"/>
      <Parameter Key="Mode" Value="InterActive"/>
      <Parameter Key="dbausername" Value="karlarao"/>
      <Parameter Key="indextablespace" Value="soeindex"/>
      <Parameter Key="operation" Value="create"/>
      <Parameter Key="ordercount" Value="1000000"/>
      <Parameter Key="indexdatafile" Value="+DATA"/>
      <Parameter Key="datafile" Value="+DATA"/>
      <Parameter Key="connectionstring" Value="desktopserver.local:1521:dw"/>
      <Parameter Key="tablespace" Value="soe"/>
      <Parameter Key="username" Value="soe"/>
      <Parameter Key="customercount" Value="1000000"/>
      <Parameter Key="tsize" Value="195M"/>
      <Parameter Key="connectiontype" Value="thin"/>
   </DefaultParameters>
</WizardConfig>
}}}
then after running the oewizard, run the following commands as "/ as sysdba":
{{{
grant execute on dbms_lock to soe;
exec dbms_stats.gather_schema_stats('soe');
exec dbms_utility.compile_schema('soe',true);
}}}

increase the processes parameter to 2048

4) Edit the swingconfig.xml with your connect string, system password
''connect string''
{{{
format is <<hostname>>:<<port>>:<<service>>
$ cat swingconfig.xml | grep -i connect
   <Connection>
      <ConnectString>desktopserver.local:1521:dw</ConnectString>
   </Connection>
}}}
''system password''
{{{
$ cat swingconfig.xml | grep -i system
      <SystemUserName>system</SystemUserName>
      <SystemPassword>oracle</SystemPassword>
}}}
OR the user you created on the oewizard step
{{{
$ cat swingconfig.xml | grep -i system
      <SystemUserName>karlarao</SystemUserName>
      <SystemPassword>karlarao</SystemPassword>
}}}
<<<


! Connection String differences
<<<
''Thin JDBC'' .. apparently this does not failover sessions
{{{
<ConnectString>desktopserver.local:1521:dw</ConnectString>
<DriverType>Oracle10g Type IV jdbc driver (thin)</DriverType>
}}}
''OCI - tnsnames.ora'' with FAILOVER_MODE option
{{{
<ConnectString>exadata</ConnectString>
<DriverType>Oracle10g Type II jdbc driver (oci)</DriverType>
}}}
<<<


! Run the benchmark - Single Instance
''Review the command options here'' http://www.dominicgiles.com/commandline.html

1) In ''GUI'' mode
<<<
cpumonitor will give you the CPU and IO graphs
{{{
cd $HOME/swingbench/bin
./cpumonitor
./swingbench -cpuloc localhost
}}}
to stop, cancel the swingbench then stop the coordinator
{{{
./coordinator -stop
}}}
<<<
2) To have a consistent load using ''charbench''
<<<
{{{
while : ; do ./charbench -a -rt 00:01 ; echo "---" ; done
}}}
or 
just edit the file swingconfig.xml and change the default 15 users to 1000
{{{
<NumberOfUsers>1000</NumberOfUsers>
}}}
then just execute ./charbench
<<<
3) Using ''minibench''
<<<
{{{
cd $HOME/swingbench/bin
./cpumonitor
./minibench -cpuloc localhost
}}}
to stop, cancel the minibench then stop the coordinator
{{{
./coordinator -stop
}}}
<<<


! Run the benchmark - RAC
http://www.dominicgiles.com/clusteroverviewwalkthough23.html
{{{
install swingbench on db1 and db2
run oewizard on db1

./coordinator -g
./minibench -g group1 -cs exadata1 -co localhost &
./minibench -g group2 -cs exadata2 -co localhost &

edit the HostName and MonitoredNodes tags of clusteroverview.xml

./clusteroverview
./coordinator -stop
}}}
http://www.dominicgiles.com/blog/files/859a2dd3f34b49a43e5a39380d39b680-7.html
http://dominicgiles.com/swingbench/clusteroverview21f.pdf
relocate session rac https://forums.oracle.com/forums/thread.jspa?messageID=3742732



! OLTP and DSS workload mix
{{{
Benchmark          Description 	Read/Write Ratio
Order Entry        TPC-C like		60/40     <-- based on OE schema, stresses interconnects and memory
Calling Circle     Telco based	70/30     <-- stresses the CPU and memory without the need for a powerful I/O subsystem
Stress Test        Simple I,U,D,S	50/50     <-- simply fires random inserts, updates, deletes and selects against a well-known table
Sales History      DSS			100/0     <-- based on SH schema, designed to test the performance of complicated queries run against large tables
}}}
<<<
[img[ https://www.evernote.com/shard/s48/sh/7611f954-dc37-45d1-b2c6-22d01f224692/c1578b3143b3d345f2306c26b925c080/res/c597305c-2dff-4aa7-b67b-931a6105907c/swingbench.png ]]
<<<


''Issues'' 
entropy on random,urandom 
http://www.freelists.org/post/oracle-l/swingbench-connection-issue
http://www.usn-it.de/index.php/2009/02/20/oracle-11g-jdbc-driver-hangs-blocked-by-devrandom-entropy-pool-empty/
http://www.freelists.org/post/oracle-l/Difference-between-devurandom-and-devurandom-Was-swingbench-connection-issue,2


''Possible test  cases''
Real Application Clusters, Online table rebuilds, Standby databases, Online backup and recovery etc.


''My old config files''
{{{

MyBookLive:~/backup# find /DataVolume/shares/Public/Backup -iname "swingconfig.xml"
/DataVolume/shares/Public/Backup/Disks/disk1-WD1TB/backup/Documents/backup/temp/dbrocaix/swingconfig.xml
/DataVolume/shares/Public/Backup/Disks/WD1TB/backup/temp/dbrocaix/swingconfig.xml

MyBookLive:/DataVolume/shares/Public/Backup/Disks/disk1-WD1TB/backup/Documents/backup/temp/dbrocaix# ls -ltr
total 704
-rwxrwxrwx 1 root root 1131 Feb 28  2011 swingbench.env
-rwxrwxrwx 1 root root   58 Feb 28  2011 char.txt
-rwxrwxrwx 1 root root 4884 Feb 28  2011 swingconfig.xml.oe
-rwxrwxrwx 1 root root 3504 Feb 28  2011 swingconfig.xml.cc
-rwxrwxrwx 1 root root  673 Feb 28  2011 swingbench.css
-rwxrwxrwx 1 root root  251 Feb 28  2011 swingbench
-rwxrwxrwx 1 root root 4937 Feb 28  2011 swingconfig.xml
-rwxrwxrwx 1 root root  179 Feb 28  2011 s2h.sql
-rwxrwxrwx 1 root root  190 Feb 28  2011 stopdb.sh
-rwxrwxrwx 1 root root  170 Feb 28  2011 startdb.sh
-rwxrwxrwx 1 root root 1011 Feb 28  2011 g.sql
}}}


! Swingbench on a CPU centric micro bench scenario VS cputoolkit
The thing here is that the fewer parameters and the more consistency you have in your test cases,
the more repeatable they will be and the easier it is to measure the response time effects
of any environment and configuration changes.
* It's doing IO, so you'll not only see CPU on your AAS.. but also log file sync when you ramp up the number of users so if you have slow disks then you are burning some of your response time on IO
* As you ramp up more users let's say from 1 to 50.. and as you do more CPU WAIT IO and increase your load average you'll start getting ORA-03111
{{{
	12:00:01 AM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15
	12:10:01 AM         5       813      0.00      0.00      0.00
	12:20:01 AM         5       811      0.00      0.00      0.00
	02:20:01 PM         2      1872      4.01     20.40     11.73
	02:30:02 PM         4      1896    327.89    126.20     50.80 <-- load avg
	02:40:01 PM         6       871      0.31     18.10     27.58
	02:50:01 PM         7       867      0.12      2.57     14.52
	
	You'll start getting ORA-03111: break received on communication channel
	http://royontechnology.blogspot.com/2009/06/mysterious-ora-03111-error.html
}}}
* But swingbench is still a totally awesome tool; I can use it for a bunch of test cases, but using it on a CPU centric micro benchmark (like observing threads vs cores) just introduces a lot of non-CPU noise
* cputoolkit as a micro benchmark tool lets you measure the effect on scalability (LIOs per unit of work) as you consume the max # of CPUs.. see the LIOS_ELAP column, which takes into account the time spent on CPU_WAIT; if you derive the unit of work from LIOS_EXEC alone it will be incorrect, because that only counts how many times you have done work and does not include the time spent on the run-queue (CPU_WAIT), which is the most important thing to quantify the effect on the users
{{{
SQL>
TM               ,      EXEC,      LIOS,   CPUSECS,  ELAPSECS,CPU_WAIT_SECS,  CPU_EXEC, ELAP_EXEC,CPU_WAIT_EXEC, LIOS_EXEC, LIOS_ELAP
-----------------,----------,----------,----------,----------,-------------,----------,----------,-------------,----------,----------
12/05/12 01:02:47,      1002, 254336620,   1476.18,   2128.63,       652.45,      1.47,      2.12,          .65, 253828.96, 119483.73
}}}
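The LIOS_EXEC vs LIOS_ELAP distinction is just two divisions over the raw columns; a sketch using the sample row above (values copied from the cputoolkit output shown):

```python
# raw columns from the sample cputoolkit row above
execs, lios, elap_secs = 1002, 254336620, 2128.63

lios_exec = lios / execs       # LIOs per execution -- ignores time spent on the run-queue
lios_elap = lios / elap_secs   # LIOs per elapsed second -- elapsed includes CPU_WAIT,
                               # so this degrades as the run-queue (CPU_WAIT) grows

print(round(lios_exec, 2))   # 253828.96, matching the LIOS_EXEC column
print(round(lios_elap, 2))   # ~119483.7, matching the LIOS_ELAP column
```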
check here [[cpu centric benchmark comparisons]] for more info






https://help.ubuntu.com/community/SwitchingToUbuntu/FromLinux/RedHatEnterpriseLinuxAndFedora

{{{
Contents

Administrative Tasks
Package Management
   Graphical Tools
   Command Line Tools
   Table of Equivalent Commands
Services
   Graphical Tools
   Command Line Tools
Network
   Graphical Tools
   Command Line Tools
}}}

http://dtrace.org/blogs/brendan/2011/10/15/using-systemtap/  <-- brendan's experience
http://web.elastic.org/~fche/blog2/archive/2011/10/17/using_systemtap_better  <-- systemtap author response
http://sprocket.io/blog/2007/11/systemtap-its-like-dtrace-for-linux-yo/   <-- a quick howto guide
nits to please a kernel hacker http://lwn.net/Articles/301285/

systemtap example commands <-- http://sourceware.org/systemtap/examples/index.html
installation <-- http://goo.gl/OS7u7
http://sourceware.org/systemtap/wiki <-- wiki
http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=%2Fliaai%2Fstapgui%2Finstallation.htm <-- systemtap IDE GUI
http://sourceforge.net/projects/stapgui/
http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=%2Fliaai%2Fstapgui%2Fdemo.htm <-- GUI demo
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/SystemTap_Beginners_Guide/useful-systemtap-scripts.html <-- scripts
http://sourceware.org/systemtap/SystemTap_Beginners_Guide/using-usage.html <-- Running SystemTap Scripts
http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=%2Fliaai%2FsystemTap%2Fliaaisystapinstallrhel.htm <-- Installing SystemTap on Red Hat Enterprise Linux 5.2



Life of an Oracle I/O: tracing logical and physical I/O with systemtap  https://db-blog.web.cern.ch/blog/luca-canali/2014-12-life-oracle-io-tracing-logical-and-physical-io-systemtap
https://www.cnblogs.com/zengkefu/p/6580389.html

http://ksun-oracle.blogspot.com/2018/01/oracle-logical-read-current-gets-access.html
<<<
There’s really no available SPECint number for the T4, but it’s pretty much the same range of speed as the T5 with fewer cores per processor (8 vs the T5’s 16)

You can size it at 30 (speed/core) as well. 
 
$ less spec.txt | sort -rnk1 | grep -i sparc | grep -i oracle
30.5625, 16, 1, 16, 8, 441, 489, Oracle Corporation, SPARC T5-1B, Oct-13
29.2969, 128, 8, 16, 8, 3490, 3750, Oracle Corporation, SPARC T5-8, Apr-13
29.1875, 16, 1, 16, 8, 436, 467, Oracle Corporation, SPARC T5-1B, Apr-13



T5 http://www.tpc.org/tpch/results/tpch_result_detail.asp?id=113060701
T4 http://www.tpc.org/tpch/results/tpch_result_detail.asp?id=111092601
T5 https://www.spec.org/jEnterprise2010/results/res2014q1/jEnterprise2010-20140107-00047.html
T4 https://www.spec.org/jEnterprise2010/results/res2011q3/jEnterprise2010-20110907-00027.html

<<<
disagree. it's not going to be the same performance. SPECint_rate2006/core says it all. see the slide here 
<<<
[img(50%,50%)[ http://goo.gl/csMbU ]] 
<<<
and SPECint_rate2006/core comparison here (higher the better)
the Oracle slide used the "baseline" number.. whereas I usually use the "result" (in csv), which is equivalent to the "peak" column on the SPECint_rate2006 main page
so the 2830 is a baseline number divided by the # of cores, which is 64
<<<
[img[ http://goo.gl/0XnNo ]]
<<<
and that rules out storage.



! on E7 comparison (x3-8)
well yeah they're about the same performance range
{{{
$ cat spec.txt | grep -i intel | grep 8870 | sort -rnk1
27, 40, 4, 10, 2, 1010, 1080, Unisys Corporation, Unisys ES7000 Model 7600R G3 (Intel Xeon E7-8870)
26.75, 40, 4, 10, 2, 1010, 1070, NEC Corporation, Express5800/A1080a-S (Intel Xeon E7-8870)
26.75, 40, 4, 10, 2, 1010, 1070, NEC Corporation, Express5800/A1080a-D (Intel Xeon E7-8870)
26.5, 40, 4, 10, 2, 1000, 1060, Oracle Corporation, Sun Server X2-8 (Intel Xeon E7-8870 2.40 GHz)
25.875, 80, 8, 10, 2, 1960, 2070, Supermicro, SuperServer 5086B-TRF (X8OBN-F Intel E7-8870)
24.875, 80, 8, 10, 2, 1890, 1990, Oracle Corporation, Sun Server X2-8 (Intel Xeon E7-8870 2.40 GHz)
}}}

! on E5 comparison (x3-2)
the x3-2 is still way faster than the t5-8 ;)  44 vs 29 SPECint_rate2006/core
{{{
$ cat spec.txt | grep -i intel | grep -i "E5-26" | grep -i sun | sort -rnk1
44.0625, 16, 2, 8, 2, 632, 705, Oracle Corporation, Sun Blade X6270 M3 (Intel Xeon E5-2690 2.9GHz)
44.0625, 16, 2, 8, 2, 632, 705, Oracle Corporation, Sun Blade X3-2B (Intel Xeon E5-2690 2.9GHz)
44.0625, 16, 2, 8, 2, 630, 705, Oracle Corporation, Sun Server X3-2L (Intel Xeon E5-2690 2.9GHz)
44.0625, 16, 2, 8, 2, 630, 705, Oracle Corporation, Sun Fire X4270 M3 (Intel Xeon E5-2690 2.9GHz)
43.875, 16, 2, 8, 2, 628, 702, Oracle Corporation, Sun Server X3-2 (Intel Xeon E5-2690 2.9GHz)
43.875, 16, 2, 8, 2, 628, 702, Oracle Corporation, Sun Fire X4170 M3 (Intel Xeon E5-2690 2.9GHz)
}}}



http://oracle-dba-yi.blogspot.com/2010/01/taf-vs-fan-vs-fcf-vs-ons.html <- good stuff 

{{{
What are the differences and relationships among TAF/FAN/FCF/ONS?
1 Definition
1) TAF
TAF (Transparent Application Failover) is a feature of Oracle Net Services for OCI8 clients that moves a session to a backup connection if the session fails. With Oracle 10g Release 2, you can define the TAF policy on the service using the dbms_service package. It works only with OCI clients. It moves only the session and, if the parameter is set, it fails over the SELECT statement; for insert, update, or delete transactions, the application must be TAF-aware and roll back the transaction. Yes, you should enable FCF on your OCI client when you use TAF; it makes the failover faster.

Note: TAF will not work with JDBC thin.
2) FAN
FAN is a feature of Oracle RAC which stands for Fast Application Notification. It allows the database to notify the client of any change (node up/down, instance up/down, database up/down). For integrated clients, in-flight transactions are interrupted and an error message is returned; inactive connections are terminated.
FCF is the client-side feature for Oracle clients that integrate with FAN to provide fast failover for connections. Oracle JDBC Implicit Connection Cache, Oracle Data Provider for .NET (ODP.NET), and Oracle Call Interface are all integrated clients that provide the Fast Connection Failover feature.
3) FCF
FCF is a feature of Oracle clients that are integrated to receive FAN events: they abort in-flight transactions and clean up connections when a down event is received, and create new connections when an up event is received. Tomcat or JBoss can take advantage of FCF if an Oracle connection pool is used underneath. This can be either UCP (Universal Connection Pool for Java) or ICC (JDBC Implicit Connection Cache). UCP is recommended, as ICC will be deprecated in a future release.
4) ONS

http://forums.oracle.com/forums/thread.jspa?messageID=3566976

ONS is part of the clusterware and is used to propagate messages both between nodes and to application tiers.

ONS is the foundation for FAN, upon which FCF is built.

RAC uses FAN to publish configuration changes and LBA events. Applications can react to those published events in two ways:
- by using the ONS API (you need to program it)
- by using FCF (automatic when using the JDBC implicit connection cache on the application server)

you can also respond to FAN events by using server-side callouts, but those run on the server side (as the name suggests)


Rodrigo Mufalani
"ONS sends/receives messages about failures automatically. It is a daemon process that runs on each node, notifying status of database components and nodeapps.
If the listener process fails on node1, its failure is noticed by EVMD; the local ONS then communicates the failure to the remote ONS on the other nodes, and those ONS daemons notify all applications about the failure that occurred on node1."


2 Relationship
ONS --> FAN --> FCF
ONS -> sends/receives messages on local and remote nodes.
FAN -> uses ONS to notify other processes about changes in the configuration or service level.
FCF -> uses FAN information, working with connection pools (Java and others).
http://forums.oracle.com/forums/thread.jspa?messageID=3566976
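The ONS layer described above can be poked directly from the OS; a sketch (the onsctl location varies by version and install, so treat the path as an assumption):

```shell
# Is the local ONS daemon up? (onsctl typically lives under the Grid/DB home)
$GRID_HOME/opmn/bin/onsctl ping

# Dump what the local ONS knows about local and remote endpoints
$GRID_HOME/opmn/bin/onsctl debug

# ONS is registered as part of nodeapps in the clusterware
srvctl status nodeapps
```

These are read-only status commands, so they are safe to run on a live cluster.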

3 To use TAF/FAN/FCF/ONS, do you need to configure/install them on the server side or the client side?

4 Does ONS send messages automatically, or are there settings to be configured?
Does ONS only broadcast messages?
http://forums.oracle.com/forums/thread.jspa?messageID=3566976



5 Are TAF and FAN mutually exclusive? or if TAF and FCF are mutually exclusive?
No. You can use both TAF and FAN at the same time, or both TAF and FCF; it depends on what you want to achieve.

6 TAF Basic Configuration with FAN: Example
Oracle Database 10g Release 2 supports server-side TAF with FAN. 
To use server-side TAF:
1) create and start your service using SRVCTL
$ srvctl add service -d RACDB -s AP -r I1,I2
$ srvctl start service -d RACDB -s AP
2) configure TAF in the RDBMS by using the DBMS_SERVICE package.
execute dbms_service.modify_service ( -
service_name => 'AP', -
aq_ha_notifications => true, -
failover_method => dbms_service.failover_method_basic, -
failover_type => dbms_service.failover_type_session, -
failover_retries => 180, failover_delay => 5, -
clb_goal => dbms_service.clb_goal_long);
3) When done, make sure that you define a TNS entry for it in your tnsnames.ora file. 
AP =
(DESCRIPTION =(FAILOVER=ON)(LOAD_BALANCE=ON)
(ADDRESS=(PROTOCOL=TCP)(HOST=N1VIP)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=N2VIP)(PORT=1521))
(CONNECT_DATA = (SERVICE_NAME = AP)))
Note that this TNS entry does not need to specify TAF parameters, unlike the client-side configuration shown in the next example.
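Whether connected sessions actually inherited the TAF policy can be verified from the data dictionary; a sketch (run as a DBA; the SOE username is just an example application user):

```shell
sqlplus -s / as sysdba <<'EOF'
select inst_id, count(*) usercount, username,
       failover_type, failover_method, failed_over
from   gv$session
where  username = 'SOE'
group  by inst_id, username, failover_type, failover_method, failed_over;
EOF
```

Sessions that picked up the policy show FAILOVER_TYPE=SESSION and FAILOVER_METHOD=BASIC; FAILED_OVER flips to YES after a failover, as in the demo output further down.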

7 TAF Basic Configuration without FAN: Example
1) Before using TAF, it is recommended that you create and start a service to be used for connections.
By doing so, you benefit from the integration of TAF and services. When you want to use BASIC TAF with a service, specify the -P BASIC option when creating the service.
After the service is created, simply start it on your database.
$ srvctl add service -d RACDB -s AP -r I1,I2  -P BASIC
$ srvctl start service -d RACDB -s AP
2) Then, your application needs to connect to the service by using a connection descriptor similar to the one shown below. The FAILOVER_MODE parameter must be included in the CONNECT_DATA section of your connection descriptor.
AP =
(DESCRIPTION =(FAILOVER=ON)(LOAD_BALANCE=ON)
(ADDRESS=(PROTOCOL=TCP)(HOST=N1VIP)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=N2VIP)(PORT=1521))
(CONNECT_DATA =
(SERVICE_NAME = AP)
(FAILOVER_MODE =
(TYPE=SESSION)
(METHOD=BASIC)
(RETRIES=180)
(DELAY=5))))

Note: If using TAF, do not set the GLOBAL_DBNAME parameter in your listener.ora file.
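A couple of client-side sanity checks for the entry above (the AP alias and the soe credentials are the example values from this section):

```shell
# Does the alias resolve and is a listener answering?
tnsping AP

# Which instance did connect-time load balancing land us on?
sqlplus -s soe/password@AP <<'EOF'
select sys_context('userenv','instance_name') from dual;
EOF
```

Running the sqlplus check a few times should show connections spread across both instances when LOAD_BALANCE=ON.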

8 Metalink notes
--Understanding Transparent Application Failover (TAF) and Fast Connection Failover (FCF) [ID 334471.1]
--How To Verify And Test Fast Connection Failover (FCF) Setup From a JDBC Thin Client Against a 10.2.x RAC Cluster [ID 433827.1] 
--Fast Connection Failover (FCF) Test Client Using 11g JDBC Driver and 11g RAC Cluster [ID 566573.1]
--Questions about how ONS and FCF work with JDBC [ID 752595.1]
--Configuring ONS For Fast Connection Failover
--How To Implement (Fast Connection Failover) FCF Using JDBC driver ? [ID 414199.1]

--How to Implement Load Balancing With RAC Configured System Using JDBC [ID 247135.1]
}}}
transparent application failover https://vzw.webex.com/vzw/j.php?MTID=ma1137d5dea6daf255be06f6fb672fbd8
Implementing Transparent Application Failover https://docs.oracle.com/cd/E19509-01/820-3492/boaem/index.html


https://www.dropbox.com/s/ez5wlxg9uylxtw0/RacTaf-Demo.sql

* if you kill the instance, all the sessions fail over
* when I was killing sessions with post_transaction, sometimes they were simply killed and did not fail over

see the differences below..

{{{

##################################
Relocate sessions: TPM decreased 
##################################

srvctl add service -d dw -s dw_service -r dw1,dw2

SYS@dw1> /

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1                       7 SOE                            SESSION       BASIC      NO
dw2                      23 SOE                            SESSION       BASIC      NO

--after killed dw1

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw2                      23 SOE                            SESSION       BASIC      NO
dw2                       7 SOE                            SESSION       BASIC      YES

-- db and service state after kill of dw1

ora.dw.db                                database       ONLINE       OFFLINE                      Instance Shutdown
ora.dw.db                                database       ONLINE       ONLINE       db2             Open
ora.dw.dw_service.svc                    service        ONLINE       OFFLINE
ora.dw.dw_service.svc                    service        ONLINE       ONLINE       db2

-- execute relocate service

$ srvctl relocate service -d dw -s dw_service -i dw2 -t dw1

-- after relocate 

ora.dw.db                                database       ONLINE       ONLINE       db1             Open
ora.dw.db                                database       ONLINE       ONLINE       db2             Open
ora.dw.dw_service.svc                    service        ONLINE       OFFLINE
ora.dw.dw_service.svc                    service        ONLINE       ONLINE       db1

-- kill 15 sessions

alter system disconnect session '59,3' post_transaction;
alter system disconnect session '48,21' post_transaction;
alter system disconnect session '85,5807' post_transaction;
alter system disconnect session '105,1229' post_transaction;
alter system disconnect session '96,521' post_transaction;
alter system disconnect session '82,5313' post_transaction;
alter system disconnect session '50,1467' post_transaction;
alter system disconnect session '86,1837' post_transaction;
alter system disconnect session '38,1161' post_transaction;
alter system disconnect session '47,903' post_transaction;
alter system disconnect session '70,3049' post_transaction;
alter system disconnect session '67,4253' post_transaction;
alter system disconnect session '40,7959' post_transaction;
alter system disconnect session '75,3105' post_transaction;
alter system disconnect session '66,8437' post_transaction;

-- after kill on dw2

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1                       4 SOE                            SESSION       BASIC      YES
dw2                      10 SOE                            SESSION       BASIC      NO
dw2                       5 SOE                            SESSION       BASIC      YES

-- after kill of pmon on dw2

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1                       4 SOE                            NONE          NONE       NO
dw1                      11 SOE                            SESSION       BASIC      YES

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1                      15 SOE                            SESSION       BASIC      YES






#########################
kill sessions - TPM did not change
#########################

-- before kill on dw2

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1                       8 SOE                            SESSION       BASIC      NO
dw2                      22 SOE                            SESSION       BASIC      NO

-- 1st kill on dw2

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1                       8 SOE                            SESSION       BASIC      NO
dw1                       2 SOE                            SESSION       BASIC      YES
dw2                       9 SOE                            SESSION       BASIC      NO
dw2                      11 SOE                            SESSION       BASIC      YES

-- 2nd kill on dw2

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1                       8 SOE                            SESSION       BASIC      NO
dw1                       7 SOE                            SESSION       BASIC      YES
dw2                       4 SOE                            SESSION       BASIC      NO
dw2                      11 SOE                            SESSION       BASIC      YES

-- 3rd kill on dw2, it actually takes care of the rebalance of sessions

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1                       8 SOE                            SESSION       BASIC      NO
dw1                       7 SOE                            SESSION       BASIC      YES
dw2                       1 SOE                            SESSION       BASIC      NO
dw2                      14 SOE                            SESSION       BASIC      YES



#####################
kill sessions part2 
#####################

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1                       1 SOE                            SESSION       BASIC      NO
dw1                       6 SOE                            SESSION       BASIC      YES
dw2                      23 SOE                            SESSION       BASIC      NO


21:37:30 SYS@dw2> alter system disconnect session '1,31' post_transaction;
alter system disconnect session '77,1' post_transaction;
alter system disconnect session '65,3' post_transaction;
alter system disconnect session '47,239' post_transaction;
alter system disconnect session '53,1' post_transaction;
alter system disconnect session '45,5' post_transaction;
alter system disconnect session '61,1' post_transaction;
alter system disconnect session '46,39' post_transaction;
alter system disconnect session '72,5' post_transaction;
alter system disconnect session '51,1085' post_transaction;



INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1                       1 SOE                            SESSION       BASIC      NO
dw1                      11 SOE                            SESSION       BASIC      YES
dw2                      13 SOE                            SESSION       BASIC      NO
dw2                       5 SOE                            SESSION       BASIC      YES


-- after pmon kill 

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1                       3 SOE                            NONE          NONE       NO
dw1                       1 SOE                            SESSION       BASIC      NO
dw1                      11 SOE                            SESSION       BASIC      YES

-- after restart of instance by agent

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1                       1 SOE                            SESSION       BASIC      NO
dw1                      14 SOE                            SESSION       BASIC      YES
dw2                      15 SOE                            SESSION       BASIC      YES

-- before kill dw1

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1                       1 SOE                            SESSION       BASIC      NO
dw1                      14 SOE                            SESSION       BASIC      YES
dw2                      15 SOE                            SESSION       BASIC      YES

-- after kill dw1

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw2                       6 SOE                            NONE          NONE       NO
dw2                      24 SOE                            SESSION       BASIC      YES

-- after kill dw1

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw2                      30 SOE                            SESSION       BASIC      YES


-- after kill of some session from dw2

SYS@dw1> /

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1                       5 SOE                            SESSION       BASIC      YES
dw2                      25 SOE                            SESSION       BASIC      YES

SYS@dw1> /

INSTANCE_NAME     USERCOUNT USER_NAME                      FAILOVER_TYPE FAILOVER_M FAI
---------------- ---------- ------------------------------ ------------- ---------- ---
dw1                      12 SOE                            SESSION       BASIC      YES
dw2                      18 SOE                            SESSION       BASIC      YES

}}}
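The long lists of disconnect commands in the demo above don't have to be typed by hand; they can be generated from the session views, e.g. (a sketch, assuming DBA access and the demo's SOE user):

```shell
sqlplus -s / as sysdba <<'EOF'
set pagesize 0 feedback off
select 'alter system disconnect session '''||sid||','||serial#||''' post_transaction;'
from   v$session
where  username = 'SOE';
EOF
```

Spool the output to a file, review it, and run it to kick the sessions in one shot.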
http://www.xaprb.com/blog/2012/02/23/black-box-performance-analysis-with-tcp-traffic/
http://www.percona.com/files/white-papers/mysql-performance-analysis-percona-toolkit-tcp.pdf

''High ping times on interconnect for new Cisco UCS servers''..... https://mail.google.com/mail/u/0/?shva=1#inbox/13d23918df3d1965
{{{
graham - We have seen cases where 10GigE NICs have lower packet rate limits  than 1GigE NICs. So if you are pushing many packets to and fro you might be hitting that. 
Having said that 'v' is another possibility as others have said. Any chance of removing the 'v'?

kevin - Even though your 10GbE stuff is exhibiting pathology I'd recommend against expecting improved latency on the road from 1GbE to 10GbE. They are fatter pipes, not faster.  Yes, I have seen 10GbE cards that exhibit better latency at sub 1GbE payload but those efficiencies were not attributable to 10GbE per se.

james - And basically forget it completely moving forward, because 40gigE is 4xlane 10gig, and 100gigE is 10 lane. We're done with latency improvements.

martin b - Umm, I think I read somewhere that if you really really need low latency you'd go FDR IB. There are PCIe gen 3 cards out there that are good enough to handle the data.
Apparently in HPC FDR is even beating 40 GBit Ethernet, and I guess it's for reasons mentioned here. Not that I heard of any real-life system with 40 GBit Ethernet though. And admittedly the use cases in the RDBMS world requiring FDR IB are a bit limited.

Kevin - Apples oranges.
FDR IB is not mainstream so let's jaw out QDR.
QDR IB with a good protocol like Oracle's RDS (OFED owned) is the creme de la creme.
RoCE doesn't suck if one starts out with head extracted from butt.
What most people end up doing is comparing IPoE to RDMAoIB and that's just not a very fun conversation.

Kyle - For network testing I've been using 3 tools
netio - simple good push button throughput test
netperf - more knobs and whistles, a bit overwhelming
ttcp - java based net test tool
}}}
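On the tools Kyle mentions: for the interconnect-latency question it's the request/response mode, not bulk throughput, that matters. A netperf sketch (the target IP is a placeholder, and netserver must already be running on it):

```shell
# Bulk throughput over 30 seconds
netperf -H 10.0.0.2 -t TCP_STREAM -l 30

# Latency proxy: 1-byte request/response transactions per second over 30 seconds
netperf -H 10.0.0.2 -t TCP_RR -l 30
```

A fat 10GbE pipe will shine on TCP_STREAM while showing little or no TCP_RR improvement over 1GbE, which is exactly the fatter-not-faster point above.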
http://www.agiledata.org/essays/tdd.html
https://www.google.com/search?sxsrf=ACYBGNRr6391gd_-pTZ7h1k2W2uwXmAUCQ%3A1567868802431&ei=gsdzXbfsGcnM_AbdrbuoAw&q=oracle+TDD&oq=oracle+TDD&gs_l=psy-ab.3..0j0i22i30.17663.19125..19373...0.2..0.91.733.10......0....1..gws-wiz.......0i71j35i39j0i131j0i67j0i20i263j0i131i20i263.IfRwUT1wEBk&ved=0ahUKEwi3tZe4_r7kAhVJJt8KHd3WDjUQ4dUDCAs&uact=5


https://softwareengineering.stackexchange.com/questions/162268/tdd-with-sql-and-data-manipulation-functions
https://plunit.com/
https://mikesmithers.wordpress.com/2016/07/31/test-driven-development-and-plsql-the-odyssey-begins/
https://stackoverflow.com/questions/7440008/is-there-any-way-to-apply-tdd-techniques-for-dev-in-pl-sql
https://blog.disy.net/tdd-for-plsql-with-junit/
http://engineering.pivotal.io/post/oracle-sql-tdd/
<<showtoc>>

Master Note For Transparent Data Encryption ( TDE ) (Doc ID 1228046.1)

! backup wallet 
Backup Auto-login keystore https://support.oracle.com/epmos/faces/CommunityDisplay?resultUrl=https%3A%2F%2Fcommunity.oracle.com%2Fthread%2F3916985&_afrLoop=390268269273865&resultTitle=Backup+Auto-login+keystore&commId=3916985&displayIndex=1&_afrWindowMode=0&_adf.ctrl-state=j7vbtfksh_175 

https://oracle-base.com/articles/12c/multitenant-transparent-data-encryption-tde-12cr1
TDE Wallet Problem in 12c: Cannot do a Set Key operation when an auto-login wallet is present (Doc ID 1944507.1)
https://wiki.loopback.org/display/KB/How+to+crack+Oracle+Wallets


! change password
http://www.asktheway.org/official-documents/oracle/E50529_01/DBIMI/to_dbimi9742_d235.htm#DBIMI9742
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/asoag/configuring-transparent-data-encryption.html#GUID-4FBC5088-A045-4306-88C0-FEBC07CA18AC
How To Change The Wallet Password For A Secure External Password Store? (Doc ID 557382.1)
Oracle Wallet Manager and orapki https://docs.oracle.com/cd/E28280_01/core.1111/e10105/walletmgr.htm#ASADM10177
http://brainsurface.blogspot.com/2016/03/creating-and-changing-walletspasswords.html
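For 12c keystores specifically, the password change discussed in the notes above is a single statement; a sketch (passwords are placeholders, and WITH BACKUP keeps a copy of the old wallet file before the change):

```shell
sqlplus / as sysdba <<'EOF'
administer key management alter keystore password
  identified by "old_password" set "new_password" with backup;
EOF
```

For pre-12c software keystores, orapki/Oracle Wallet Manager (per the links above) is the route instead.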


! change encryption 
How to change TDE Tablespace Encryption From AES128 To AES256 in 12.2 (Doc ID 2456250.1)


! losing the wallet key 
https://technology.amis.nl/2013/12/08/wheres-my-wallet-loosing-the-encryption-master-key-in-11g-db-compatibility-in-12c/







- TDP (thermal design power)
the average maximum power a processor can dissipate while running commercially available software
TDP is primarily used as a guideline for manufacturers of thermal solutions (heatsinks, fans, etc.), telling them how much heat their solution should dissipate
TDP is usually 20% - 30% lower than the CPU's actual maximum power draw